| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I read the following in a review of Knuth's "The Art of Computer Programming":
"The very 'practicality' means that the would-be CS major has to learn Kernighan's mistakes in designing C, notably the infamous fact that a for loop evaluates the for condition repeatedly, which duplicates while and fails to match the behavior of most other languages which implement a for loop."
(<http://www.amazon.com/review/R9OVJAJQCP78N/ref=cm_cr_pr_viewpnt#R9OVJAJQCP78N>)
What is this guy talking about? How could you implement a for loop that wasn't just syntactic sugar for a while loop?
|
Consider this:
```
for i:=0 to 100 do { ... }
```
In this case, we could replace the final value, 100, by a function call:
```
for i:=0 to final_value() do { ... }
```
... and the `final_value` function would be called only once.
In C, however:
```
for (int i=0; i<final_value(); ++i) // ...
```
... the `final_value` function would be called on each iteration through the loop, which makes it good practice to be more verbose:
```
int end = final_value();
for (int i=0; i<end; ++i) // ...
```
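The difference is easy to demonstrate outside C as well; here is a small Python sketch (the `final_value` stand-in is invented for illustration) that counts how often the loop bound is evaluated:

```python
calls = 0

def final_value():
    """Hypothetical stand-in for an expensive bound computation."""
    global calls
    calls += 1
    return 5

# range() evaluates its argument once, like the Pascal-style "for i := 0 to N"
for i in range(final_value()):
    pass
assert calls == 1

# a while loop re-evaluates its condition on every pass, like the C for loop
calls = 0
i = 0
while i < final_value():
    i += 1
assert calls == 6  # checked once per iteration, plus the final failing check
```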
|
If all you want is a simple counting loop, then
```
for (i=0; i<100; i++) dostuff();
```
will be fine, and the compiler can optimize it.
If you use a function in the continue part of the for statement, like
```
for (i=0; i<strlen(s); i++) dostuff();
```
then the function will be evaluated every time, and this is usually not a good idea, as the function overhead will slow your process. Sometimes it can slow your process to the point of unusability.
If the function's return value will not change during the iteration, extract it from the loop:
```
slen = strlen(s);
for (i=0; i<slen; i++) dostuff();
```
But there are times when the function will be returning different values each call, and then you do not want it extracted from the loop:
```
for (isread(fd, &buffer, ISFIRST);
     isstat(fd) >= 0;
     isread(fd, &buffer, ISNEXT))
{
    dostuff(buffer);
}
```
and you want it evaluated each time. (That is a slightly contrived example based on work that I do, but it shows the potential).
C gives you the raw ability to roll your loop any way you can. You have to know how your loop is supposed to work, and you optimize it as best you can, depending on your needs.
That last example could have been expressed as a while loop:
```
isread(fd, &buffer, ISFIRST);
while (isstat(fd) >= 0)
{
dostuff(buffer);
isread(fd, &buffer, ISNEXT);
}
```
but it's not as neat, and if I use a continue in the loop, then I have to call the iterating isread again. Putting the whole thing in a for loop makes it neater, and ensures that the iterating isread is called each loop.
I write lower-level functions so they can be used in for loops like this. It brings all elements of the while loop together so you can understand it more easily.
|
C for loop implemented differently than other languages?
|
[
"",
"c++",
"c",
"knuth",
""
] |
I'd like to create a basic "Hello World" style application for the IPhone using Java - can anyone tell me how?
|
You can't code in Java for iPhone. The iPhone only supports C/C++/Objective-C - Cocoa.
However, under the current license you can use translation tools that generate such code.
There are several solutions that do exactly that:
[Codename One](http://www.codenameone.com/) - focuses on building applications using Java with visual tools and simulators. Open source with a SaaS backend that removes the need for a Mac.
[XMLVM](http://www.xmlvm.org/) - a translator to convert Java bytecode to C/Objective-C. Open source, but requires writing iOS specific code at the moment.
There are also several proprietary solutions but I have no experience with them. E.g. Software AG has a tool called web objects.
|
You might want to check [alcheMo for iPhone](http://www.innaworks.com/alcheMo-for-iPhone.html); I've never used it myself yet. It converts your CLDC 1.1 MIDP 2.0 code into native iPhone code, and it even takes care of the memory management.
|
How do I create a "Hello World" application in java for an iphone?
|
[
"",
"java",
"iphone",
""
] |
We are looking for an open source J2EE application server for low-budget deployments. We are considering JBoss and GlassFish. Which is the best open source application server? Is any comparative study available?
|
JBoss has been used in production environments for over 6 years now.
GlassFish's codebase is over 10 years old, but most of the code is recent. GlassFish v3 (still in beta) has a microkernel architecture (as does JBoss 5).
GlassFish has command-line tools and a nice console. Since GlassFish is from Sun, it will usually work well with the latest version of the JDK/J2EE -- currently JDK 6/J2EE 5.
GlassFish should work well on Windows, Solaris and Linux. JBoss (from Red Hat) should work just fine on Linux, and on Windows for sure.
Community: a much, much bigger community around JBoss; GlassFish is still forming its community. Books and other third-party products are available for JBoss but not for GlassFish.
If something is mission critical and you need to get it up and running right now, you will NOT go wrong going with JBoss. However, if you have some time, in the long run GlassFish will NOT hurt you. Especially if you're targeting Solaris.
|
After using JBoss from the very beginning and then switching to GlassFish in the last 6 months, I can honestly say that GlassFish is fast, reliable and full-featured. I would never consider going back. I'm not talking "PetStore" here; my experience with these servers is in the high-transaction online stock-trading arena.
|
Which is the best Open source application server?
|
[
"",
"java",
""
] |
First, two examples:
```
// This works
int foo = 43;
long lFoo = foo;
// This doesn't
object foo = (int)43;
long? nullFoo = foo as long?; // returns null
long lFoo = (long)foo; // throws InvalidCastException
if (foo.GetType() == typeof(int))
Console.WriteLine("But foo is an int..."); // This gets written out
```
Now, my guess as to why the second doesn't work is because of boxing. The purpose behind this code is to implement `IComparable`. I need some way to coerce an object into either a long or a ulong as appropriate, or if it's neither, then to throw an error. I don't want to have to implement checks for each basic numeric type (byte, int, long, ubyte, ...); I'd rather just catch them in the largest numeric type and deal with it that way. Thoughts from all the smart people here? How can I unbox the object, preferably avoiding reflection? But I suppose if that's the only way... Or should I just not implement the non-generic version of `IComparable`?
**Edit:**
This seems to work, but seems like a horrible hack around the problem. Is it just me?
```
long lFoo = long.Parse(foo.ToString());
```
|
```
object foo = (int) 43;
long lFoo = ((IConvertible) foo).ToInt64(null);
```
|
When you're casting to a value type you're really forcing an unbox IL operation, which requires that the type you're casting to matches *exactly* the boxed value; there are no conversions, implicit, or explicit that can happen at the same time.
This usually means that you either need to do a switch using the typecode (or an if/else if using types), or, in your case, go with a check for null followed by Convert.ToInt64(), which should deal with it correctly.
|
C# boxing question
|
[
"",
"c#",
"icomparable",
"unboxing",
""
] |
The other day I set up an Ubuntu installation in a VM and went to gather the tools and libraries I figured I would need for programming mostly in C++.
I had a problem, though: where to put things such as 3rd party source libraries, etc. From what I can gather, a lot of source distributions assume that a lot of their dependencies are already installed in a certain location, and assume that a lot of tools are also installed in particular locations.
To give an example of what I currently do on Windows, is I have a directory where I keep all source code. `C:\code`. In this directory, I have a directory for all 3rd party libraries, `c:\code\thirdparty\libs`. This way I can easily set up relative paths for all of the dependencies of any projects I write or come across and wish to compile. The reason I am interested in setting up a linux programming environment is that it seems that both the tool and library dependency problems have been solved efficiently making it easy for example to build OpenSSH from source.
So what I was looking for was a decent convention I can use when I am trying to organize my projects and libraries on linux that is easy to maintain and easy to use.
|
Short answer: don't do a "heaps of code in local dir" thing.
Long answer: don't do a "heaps of code in local dir" thing, because it will be a nightmare to keep up-to-date, and if you decide to distribute your code, it will be a nightmare to package it for any decent distribution.
Whenever possible, stick to the libraries shipped with the distribution (Ubuntu has 20,000+ packages; it ought to have most of what you'll need prepackaged). When there is no package, you can install by hand to /usr/local (but see above about upgrades, and DON'T do that).
Better, use "stow" or "installwatch" (or both) to install to per-library dirs (/usr/local/stow/libA-ver123) and then symlink files from there to /usr/local or /usr/ (stow does the symlinking part). Or just package the lib for your distribution.
|
For libraries/includes...
```
/usr/local/lib
/usr/local/include
```
|
Linux programming environment configuration
|
[
"",
"c++",
"linux",
"development-environment",
""
] |
There's a common way to store multiple values in one variable, by using a bitmask. For example, if a user has read, write and execute privileges on an item, that can be converted to a single number by saying `read = 4 (2^2), write = 2 (2^1), execute = 1 (2^0)` and then add them together to get 7.
I use this technique in several web applications, where I'd usually store the variable into a field and give it a type of MEDIUMINT or whatever, depending on the number of different values.
What I'm interested in, is whether or not there is a practical limit to the number of values you can store like this? For example, if the number was over 64, you couldn't use (64 bit) integers any more. If this was the case, what would you use? How would it affect your program logic (ie: could you still use bitwise comparisons)?
*I know that once you start getting really large sets of values, a different method would be the optimal solution, but I'm interested in the boundaries of **this** method.*
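The scheme described above can be sketched in a few lines of Python (same flag values as the example):

```python
READ, WRITE, EXECUTE = 4, 2, 1   # 2**2, 2**1, 2**0

perms = READ | WRITE | EXECUTE   # "add them together": 7
assert perms == 7

can_write = bool(perms & WRITE)  # bitwise AND tests a single flag
assert can_write

perms &= ~WRITE                  # revoke write without touching other flags
assert perms == READ | EXECUTE   # 5
```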
|
Off the top of my head, I'd write a `set_bit` and `get_bit` function that could take an array of bytes and a bit offset in the array, and use some bit-twiddling to set/get the appropriate bit in the array. Something like this (in C, but hopefully you get the idea):
```
// sets the n-th bit in |bytes|. num_bytes is the number of bytes in the array
// result is 0 on success, non-zero on failure (offset out-of-bounds)
int set_bit(char* bytes, unsigned long num_bytes, unsigned long offset)
{
    // make sure offset is valid (offset is unsigned, so no < 0 check is needed)
    if (offset >= (num_bytes << 3)) { return -1; }
    // set the right bit
    bytes[offset >> 3] |= (1 << (offset & 0x7));
    return 0; // success
}

// gets the n-th bit in |bytes|. num_bytes is the number of bytes in the array
// returns (-1) on error, 0 if bit is "off", positive number if "on"
int get_bit(char* bytes, unsigned long num_bytes, unsigned long offset)
{
    // make sure offset is valid (offset is unsigned, so no < 0 check is needed)
    if (offset >= (num_bytes << 3)) { return -1; }
    // get the right bit
    return (bytes[offset >> 3] & (1 << (offset & 0x7)));
}
```
|
I've used bit masks in filesystem code where the bit mask is many times bigger than a machine word. Think of it like an "array of booleans" (journalling masks in flash memory, if you want to know).
Many compilers know how to do this *for you*. Add a bit of OO code to have types that operate sensibly, and then your code starts looking like its intent, not some bit-banging.
My 2 cents.
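Past the machine-word boundary the same bitwise logic still works; you just need a wider container. As a sketch of the idea (not tied to any particular storage type), Python integers are arbitrary precision, so a single value can act as a mask of any size:

```python
def set_bit(mask, offset):
    """Return a copy of the mask with the bit at `offset` turned on."""
    return mask | (1 << offset)

def get_bit(mask, offset):
    """Return 1 if the bit at `offset` is on, else 0."""
    return (mask >> offset) & 1

mask = set_bit(0, 200)            # well past any 64-bit limit
assert get_bit(mask, 200) == 1
assert get_bit(mask, 199) == 0
assert mask.bit_length() == 201   # the integer simply grows to fit
```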
|
Is there a practical limit to the size of bit masks?
|
[
"",
"sql",
"bit-manipulation",
"bitmask",
""
] |
Is there a catchall function somewhere that works well for sanitizing user input for SQL injection and XSS attacks, while still allowing certain types of HTML tags?
|
It's a common misconception that user input can be filtered. PHP even had a (now defunct) "feature", called [magic-quotes](https://en.wikipedia.org/wiki/Magic_quotes), that builds on this idea. It's nonsense. Forget about filtering (or cleaning, or whatever people call it).
What you should do, to avoid problems, is quite simple: whenever you embed a piece of data within a foreign code, you must format it according to the rules of that code. But you must understand that such rules could be too complicated to try to follow them all manually. For example, in SQL, rules for strings, numbers and identifiers are all different. For your convenience, in most cases there is a dedicated tool for such embedding. For example, when some data has to be used in the SQL query, instead of adding a variable directly to the SQL string, it has to be done through a parameter in the query, using a prepared statement. And it will take care of all the proper formatting.
Another example is HTML: If you embed strings within HTML markup, you must escape it with [`htmlspecialchars`](http://php.net/manual/function.htmlspecialchars.php). This means that every single `echo` or `print` statement should use `htmlspecialchars`.
A third example could be shell commands: If you are going to embed strings (such as arguments) to external commands, and call them with [`exec`](http://php.net/manual/function.exec.php), then you must use [`escapeshellcmd`](http://php.net/manual/function.escapeshellcmd.php) and [`escapeshellarg`](http://php.net/manual/function.escapeshellarg.php).
Also, a very compelling example is JSON. The rules are so numerous and complicated that you would never be able to follow them all manually. That's why you should never ever create a JSON string manually, but always use a dedicated function, [`json_encode()`](https://www.php.net/json_encode) that will correctly format every bit of data.
And so on and so forth ...
The *only* case where you need to actively filter data, is if you're accepting preformatted input. For example, if you let your users post HTML markup, that you plan to display on the site. However, you should be wise to avoid this at all cost, since no matter how well you filter it, it will always be a potential security hole.
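The rule of formatting per target language, rather than filtering input, is not PHP-specific. A minimal sketch of the same idea in Python (sqlite3 chosen only for illustration):

```python
import html
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

evil = "Robert'); DROP TABLE users;--"

# a placeholder keeps the data out of the SQL text entirely
conn.execute("INSERT INTO users (name) VALUES (?)", (evil,))
stored = conn.execute("SELECT name FROM users").fetchone()[0]
assert stored == evil  # stored verbatim, never executed as SQL

# escape at output time, per context
assert html.escape("<b>hi</b>") == "&lt;b&gt;hi&lt;/b&gt;"
assert json.loads(json.dumps({"name": evil}))["name"] == evil
```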
|
Do not try to prevent SQL injection by sanitizing input data.
Instead, **do not allow data to be used in creating your SQL code**. Use Prepared Statements (i.e. using parameters in a template query) that uses bound variables. It is the only way to be guaranteed against SQL injection.
Please see my website <http://bobby-tables.com/> for more about preventing SQL injection.
|
How can I sanitize user input with PHP?
|
[
"",
"php",
"security",
"xss",
"sql-injection",
"user-input",
""
] |
I've seen [this question](https://stackoverflow.com/questions/49156/importing-javascript-in-jsp-tags) regarding the importing of js-files related to the tag content itself. I have a similar problem: I have a JSP tag that generates some HTML and has a generic JS implementation that handles the behavior of this HTML. Furthermore, I need to write some initialization statements so that I can use it afterwards through JavaScript. To make it possible to use this "handler" within my JavaScript, it should be somehow accessible.
The question is... Is it OK to write inline <script> tags along with my HTML for instantiation and initialization purposes (personally I don't think it's very elegant)? And about being accessible to the JS world, should I leave a global var referencing my handler object (not very elegant as well, I think), or are there better ways to do it?
|
You should strive to keep JavaScript in its own files. This is usually done with [Progressive Enhancement](http://accessites.org/site/2007/02/graceful-degradation-progressive-enhancement/). But sometimes you don't have a choice, for instance when the same JSP renders pages in different languages. Here's a real-life example:
The JSP:
```
<script src="/javascript/article_admin.js"></script>
<script type="text/javascript">
NP_ArticleAdmin.initialize({
text: {
please_confirm_deletion_of: '<i18n:output text="please.confirm.deletion.of"/>',
this_cannot_be_undone: '<i18n:output text="this.cannot.be.undone"/>'
}
});
</script>
```
The javascript (article\_admin.js):
```
/*global NP_ArticleAdmin, jQuery, confirm */
NP_ArticleAdmin = function ($) {
var text;
function delete_article(event) {
var article = $(this).parents("li.article"),
id = article.attr("id"),
name = article.find("h3.name").html();
if (confirm(text.please_confirm_deletion_of + name + text.this_cannot_be_undone)) {
$.post("/admin/delete_article", {id: id});
article.fadeOut();
}
event.preventDefault();
return false;
}
function initialize(data) {
text = data.text;
$("#articles a.delete").click(delete_article);
}
return {initialize: initialize};
}(jQuery);
```
In this example, the only javascript in the JSP-file is the part that needs to be there. The core functionality is separated in its own js-file.
|
I'm not entirely sure what you're asking here, but I don't think there's anything wrong with including `<script>` tags in the JSP to instantiate javascript code. I often follow this model, writing the library code in external javascript files, and then calling the constructors of my objects from the `<script>` tags.
This makes debugging easy, since the logic is all in the external files (and firebug seems to have trouble with debugging inline javascript code). The libraries get cached, but the data instantiating them doesn't (which is the desired behavior).
The alternative is to have the instantiation code dynamically generated in an external javascript file or AJAX call. I've done this too, with positive results.
I think the deciding factor is how much dynamic data you have. If you need to represent large data structures, I would serve them out via an AJAX call that returns JSON. If it's a simple call to a constructor, put it in the JSP.
As for the global variable, I will often have a global for the top-level object that kicks everything off. Inside that, are all the other references to the helper objects.
|
Using JavaScript within a JSP tag
|
[
"",
"javascript",
"jsp",
"jsp-tags",
""
] |
I have a console application that requires some code that needs administrator privileges. I have read that I need to add a manifest file, myprogram.exe.manifest, that looks like this:
```
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
<trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
<security>
<requestedPrivileges>
<requestedExecutionLevel level="requireAdministrator" />
</requestedPrivileges>
</security>
</trustInfo>
</assembly>
```
But it still doesn't raise the UAC prompt (in the console, or when debugging in VS). How can I solve this issue?
## Update
I am able to make it work if I run the solution as Administrator, or when I run the /bin/\*.exe as Administrator. I am still wondering if it's possible to have something that will pop up when the application starts, instead of explicitly right-click > Run as Administrator?
|
You need to embed the UAC manifest as an embedded Win32 resource. See [Adding a UAC Manifest to Managed Code](http://blogs.msdn.com/shawnfa/archive/2006/04/06/568563.aspx).
In short, you use a Windows SDK command line tool to embed it into your executable.
You can automate this as a post-build step by placing the following line as a post build task in your VS project's properties:
```
mt.exe -manifest "$(ProjectDir)$(TargetName).exe.manifest" -updateresource:"$(TargetDir)$(TargetName).exe;#1"
```
|
For anyone using Visual Studio, it's super easy. I was about to go set up the Windows SDK and do mt.exe post-build steps and all that before realizing it's built into VS. I figured I'd record it for posterity.
1. Project | Add New Item -> Visual C# Items -> Application Manifest File
2. Open app.manifest, change requestedExecutionLevel.@level to "requireAdministrator"
3. Build
Ta-da
|
UAC need for console application
|
[
"",
"c#",
".net",
".net-2.0",
"uac",
"manifest",
""
] |
I am making a Python GUI project that needs to duplicate the look of a Windows GUI environment (i.e. Explorer). I have my own custom icons to draw, but they should be selectable by the same methods as usual: click, ctrl-click, drag box, etc. Are any of the GUI toolkits going to help with this, or will I have to implement it all myself? If there aren't any tools to help with this, advice would be greatly appreciated.
*edit* I am not trying to recreate explorer, that would be madness. I simply want to be able to take icons and lay them out in a scrollable window. Any number of them may be selected at once. It would be great if there was something that could select/deselect them in the same (appearing at least) way that Windows does. Then all I would need is a list of all the selected icons.
|
Python has extensions for accessing the Win32 API, but good luck trying to re-write explorer in *that* by yourself. Your best bet is to use a toolkit like [Qt](http://doc.trolltech.com/4.4/gallery-windowsxp.html), but you'll still have to write the vast majority of the application from scratch.
Is there any way you can re-use explorer itself in your project?
---
Updated for edited question:
[GTK+](http://www.gtk.org/) has an [icon grid widget](http://library.gnome.org/devel/gtk/unstable/GtkIconView.html) that you could use. See a reference for [PyGTK+](http://www.pygtk.org/): [`gtk.IconView`](http://www.pygtk.org/docs/pygtk/class-gtkiconview.html)
|
In wxPython there's a plethora of ready-made list and tree controls (CustomTreeCtrl, TreeListCtrl, and others), a mixture of which you can use to create a simple explorer in minutes. The wxPython demo even has a few relevant examples (see the demo of MVCTree).
|
Something like Explorer's icon grid view in a Python GUI
|
[
"",
"python",
"user-interface",
""
] |
I'm relatively new to Python and am having problems programming with Scapy, the Python network manipulation tool. However, I can't tell if it's as much a Scapy problem as it is a being-a-Python-newbie problem. On the [scapy site](http://www.secdev.org/projects/scapy/build_your_own_tools.html), they give a sample program which I'm not able to run on my own machine:
```
#! /usr/bin/env python
import sys
from scapy import sr1,IP,ICMP
p=sr1(IP(dst=sys.argv[1])/ICMP())
if p:
p.show()
```
To which I get:
```
Traceback (most recent call last):
File "test.py", line 4, in <module>
from scapy import sr1,IP,ICMP
ImportError: cannot import name sr1
```
So my question then is: when installing Python libraries, do I need to change my path or anything similar? Also, is there something I can run in the interpreter to tell me the contents of the scapy package? I can run `from scapy import *` just fine, but since I have no idea what's inside it, it's hard to use it.
|
With the caveat from Federico Ramponi "You should use scapy as an interpreter by its own, not as a library", I want to answer the non-scapy-specific parts of the question.
**Q:** when installing Python libraries, do I need to change my path or anything similar?
**A:** I think you are talking about changing `PYTHONPATH` system-wide. This is usually not required or a good idea.
Third party Python libraries should either be installed in system directories, such as `/usr/lib/python2.5/site-packages`, or installed locally, in which case you might want to set `PYTHONPATH` in your Makefile or a in driver shell script.
**Q:** Also, is there something I can run in the interpreter to tell me the contents of the scapy package?
**A:** You can do something like this:
```
>>> import scapy
>>> dir(scapy)
```
Or even better:
```
>>> import scapy
>>> help(scapy)
```
Bonus question asked in a comment.
**Q:** Is 'import scapy' the same as 'from scapy import \*'?
**A:** `import scapy` binds the scapy name in the local namespace to the scapy module object. OTOH, `from scapy import *` does not bind the module name, but all public names defined in the scapy module are bound in the local namespace.
See paragraphs 6 and 7 of the Python Reference Manual, [6.12 The import statement](http://www.python.org/doc/2.5.2/ref/import.html).
|
I had the same problem; in scapy v2.x use
```
from scapy.all import *
```
instead of the v1.x
```
from scapy import *
```
as written [here](http://www.secdev.org/projects/scapy/doc/installation.html)
Enjoy it =)
|
Python - Library Problems
|
[
"",
"python",
"networking",
"scapy",
""
] |
I would like to implement an interactive evolutionary algorithm for generating music (probably just simple melodies to start with).
I'd like to use [JFugue](http://www.jfugue.org) for this. Its website claims that it is well-suited to evolutionary music, but I can't find any evolutionary examples.
I already have [a framework](https://watchmaker.dev.java.net) to provide the evolutonary machinery. What I am looking for is some simple, working code that demonstrates viable approaches for the musical part (e.g. suitable encodings and evolutionary operators for the evolved tunes).
I have some ideas how it might be achieved, but I'm not particularly knowledgeable about music theory, so to start with I'd like to just reimplement something that is known to work.
So does anybody have, or know of, any freely available code (any language is fine) that demonstrates one or more approaches to evolutionary music?
**EDIT:** I'm specifically looking for evolutionary code rather than other techniques that could be used for music synthesis.
|
So far I've found only [this](http://graphics.stanford.edu/~bjohanso/gp-music/gp_music-old.html), which is a genetic programming example in C.
**Update (January 2010):** And [this online system](http://darwintunes.org/), which doesn't have source code but is an example of what can be achieved. I also found [Grammidity](http://grammidity.sourceforge.net/), which allows for sequences of MIDI events to be evolved from grammars.
**Update (July 2011):** I've just found [some relevant Python code on the MIT Open Courseware site](http://ocw.mit.edu/courses/music-and-theater-arts/21m-380-music-and-technology-algorithmic-and-generative-music-spring-2010/assignments-and-projects/genetic-algorithms/).
|
You probably want to look into *Markov Chains* - They're probably of more use to you than an evolutionary algorithm to start with, as judging the quality of the output in order to breed the best is going to be a nightmare (subjective and time-consuming), and they're ideal for combining with evolutionary programming.
There's a good introduction to them on [Coding Horror](http://www.codinghorror.com/blog/archives/001132.html). The article explains the concept and demonstrates the remarkable results, the comments mention music a lot more (I was lost for hours in the comments and the sites they linked to). There's some more specific details and further reading on [Wikipedia](http://en.wikipedia.org/wiki/Markov_chain).
Markov chains have an element of probability to them, so this is where you can combine them with evolutionary programming - by evolving a method of choosing alternatives other than the roll of a dice.
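As a concrete (and deliberately tiny) sketch of that starting point, here is a first-order Markov chain over note names; the transition table is invented for illustration:

```python
import random

# invented transition table: each note maps to its possible successors
CHAIN = {
    "C": ["E", "G", "C"],
    "E": ["G", "C"],
    "G": ["C", "E", "G"],
}

def melody(start, length, seed=42):
    """Walk the chain to produce a list of note names."""
    rng = random.Random(seed)  # fixed seed so the sketch is reproducible
    notes = [start]
    for _ in range(length - 1):
        notes.append(rng.choice(CHAIN[notes[-1]]))
    return notes

tune = melody("C", 8)
assert len(tune) == 8
assert all(note in CHAIN for note in tune)
```

Evolving the chain's transition probabilities (rather than the notes themselves) is one natural encoding for the evolutionary part.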
|
Looking for evolutionary music example code
|
[
"",
"java",
"genetic-algorithm",
"evolutionary-algorithm",
"jfugue",
""
] |
I'm currently adding some new extended classes to this code:
```
foreach (BaseType b in CollectionOfExtendedTypes) {
if (b is ExtendedType1) {
((ExtendedType1) b).foo = this;
}
else if (b is ExtendedType2) {
((ExtendedType2) b).foo = this;
}
else {
b.foo = this;
}
}
```
and was curious if there is a way to use the `is` keyword functionality in a switch statement?
|
This really looks like a situation for a good polymorphic implementation. If you override the appropriate methods in the derived classes, you may not need the checks in the loop at all.
|
The latest version of C# (7) now includes this functionality
[Type pattern](https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/switch#type-pattern)
The type pattern enables concise type evaluation and conversion. When used with the switch statement to perform pattern matching, it tests whether an expression can be converted to a specified type and, if it can be, casts it to a variable of that type. Its syntax is:
```
case type varname
```
|
using the 'is' keyword in a switch in c#
|
[
"",
"c#",
"switch-statement",
""
] |
I have a grid that is bound to a collection. For some reason that I do not know, when I do some action in the grid, the grid doesn't update.
Situation: when I click a button in the grid, it increases a value that is in the same line. When I click, I can debug and see the value increment, but the value doesn't change in the grid. **BUT** when I click the button, then minimize and restore the window, the values are updated... What do I have to do to have the value updated like it was before?
**UPDATE**
This is NOT SOLVED but I accepted the best answer around here.
It's not solved because it works as usual when the data comes from the database, but not from the cache. Objects are serialized, and through that process the events are lost. This is why I build them back, and it works as far as I know because I can interact with them, BUT it seems that it doesn't work for the update of the grid, for an unknown reason.
|
In order for the binding to be bidirectional, from control to datasource and from datasource to control, the datasource must implement property-change notification in one of the two possible ways:
* Implement the [INotifyPropertyChanged](https://learn.microsoft.com/en-us/dotnet/api/system.componentmodel.inotifypropertychanged) interface, and raise the event when the properties change :
```
public string Name
{
get
{
return this._Name;
}
set
{
if (value != this._Name)
{
this._Name= value;
NotifyPropertyChanged("Name");
}
}
}
```
* Implement a changed event for every property that must notify the controls when it changes. The event name must be in the form *PropertyName*Changed:
```
public event EventHandler NameChanged;
public string Name
{
get
{
return this._Name;
}
set
{
if (value != this._Name)
{
this._Name= value;
if (NameChanged != null) NameChanged(this, EventArgs.Empty);
}
}
}
```
\*As a note, your property values are the correct ones after the window maximize/restore because the control rereads the values from the datasource.
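The notification pattern itself is not tied to C#; a minimal analogue in Python (all names invented for this sketch) shows the same raise-only-on-change shape:

```python
class Person:
    def __init__(self):
        self._name = None
        self._listeners = []  # callbacks taking (sender, property_name)

    def add_listener(self, callback):
        self._listeners.append(callback)

    @property
    def name(self):
        return self._name

    @name.setter
    def name(self, value):
        if value != self._name:           # only notify on an actual change
            self._name = value
            for listener in self._listeners:
                listener(self, "name")    # the "PropertyChanged" event

events = []
person = Person()
person.add_listener(lambda sender, prop: events.append(prop))
person.name = "Ada"
person.name = "Ada"   # same value again: no event fired
assert events == ["name"]
```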
|
It sounds like you need to call DataBind in your update code.
|
C# grid binding not update
|
[
"",
"c#",
"data-binding",
"datagridview",
""
] |
A strict equality operator will tell you if two object **types** are equal. However, is there a way to tell if two objects are equal, **much like the hash code** value in Java?
Stack Overflow question *[Is there any kind of hashCode function in JavaScript?](https://stackoverflow.com/questions/194846)* is similar to this question, but requires a more academic answer. The scenario above demonstrates why it would be necessary to have one, and I'm wondering if there is any **equivalent solution**.
|
**The short answer**
The simple answer is: No, there is no generic means to determine that an object is equal to another in the sense you mean. The exception is when you are strictly thinking of an object being typeless.
**The long answer**
The concept is that of an Equals method that compares two different instances of an object to indicate whether they are equal at a value level. However, it is up to the specific type to define how an `Equals` method should be implemented. An iterative comparison of attributes that have primitive values may not be enough: an object may contain attributes which are not relevant to equality. For example,
```
function MyClass(a, b)
{
var c;
this.getCLazy = function() {
if (c === undefined) c = a * b // imagine * is really expensive
return c;
}
}
```
In the above case, `c` is not really important in determining whether two instances of MyClass are equal; only `a` and `b` are important. In some cases `c` might vary between instances and yet not be significant during comparison.
Note this issue applies when members may themselves also be instances of a type and these each would all be required to have a means of determining equality.
Further complicating things is that in JavaScript the distinction between data and method is blurred.
An object may reference a method that is to be called as an event handler, and this would likely not be considered part of its 'value state'. Whereas another object may well be assigned a function that performs an important calculation and thereby makes this instance different from others simply because it references a different function.
What about an object that has one of its existing prototype methods overridden by another function? Could it still be considered equal to another instance that is otherwise identical? That question can only be answered in each specific case for each type.
As stated earlier, the exception would be a strictly typeless object. In which case the only sensible choice is an iterative and recursive comparison of each member. Even then one has to ask what is the 'value' of a function?
|
Why reinvent the wheel? Give [Lodash](http://lodash.com/docs#isEqual) a try. It has a number of must-have functions such as [isEqual()](http://lodash.com/docs#isEqual).
```
_.isEqual(object, other);
```
It will brute force check each key value - just like the other examples on this page - using [ECMAScript 5](http://en.wikipedia.org/wiki/ECMAScript#Versions) and native optimizations if they're available in the browser.
Note: Previously this answer recommended [Underscore.js](http://underscorejs.org/), but [lodash](http://lodash.com) has done a better job of getting bugs fixed and addressing issues with consistency.
|
How can I determine equality for two JavaScript objects?
|
[
"",
"javascript",
"object",
"equals",
"hashcode",
""
] |
Is it possible to reference system environment variables (as opposed to Java system properties) in a log4j xml configuration file?
I'd like to be able to do something like:
```
<level value="${env.LOG_LEVEL}" />
```
and have it get that from the system environment variables, so I can avoid having to pass in so many things with -D parameters.
|
This syntax is documented only in log4j 2.X so make sure you are using the correct version. It does not work on the 1.X versions.
```
<Appenders>
<File name="file" fileName="${env:LOG_PATH}">
<PatternLayout>
<Pattern>%d %p %c{1.} [%t] %m %ex%n</Pattern>
</PatternLayout>
</File>
</Appenders>
```
|
I tried to do that recently and couldn't get it to work. What I ended up doing is sending a variable at startup. So say you have an environment variable called $LOG\_LEVEL:
```
<level value="${log_level}" />
```
and at startup...
```
java -Dlog_level=$LOG_LEVEL your_app
```
|
Using system environment variables in log4j xml configuration
|
[
"",
"java",
"xml",
"configuration",
"log4j",
"environment-variables",
""
] |
I need to convert a Word document into HTML file(s) in Java. The function will take a Word document as input, and the output will be HTML file(s) based on the number of pages the Word document has, i.e. if the Word document has 3 pages then 3 HTML files will be generated, with the required page breaks.
I searched for open source/non-commercial APIs which can convert doc to html but for no result. Anybody who have done this type of job before please help.
Thanks
|
We use tm-extractors (<http://mvnrepository.com/artifact/org.textmining/tm-extractors>), and fall back to the commercial Aspose (<http://www.aspose.com/>). Both have native Java APIs.
|
I recommend the [JODConverter](http://www.artofsolving.com/opensource/jodconverter), It leverages OpenOffice.org, which provides arguably the best import/export filters for OpenDocument and Microsoft Office formats available today.
JODConverter has a lot of documents, scripts, and tutorials to help you out.
|
Convert Word doc to HTML programmatically in Java
|
[
"",
"java",
"html",
"ms-word",
""
] |
In Java, suppose I have a String variable S, and I want to search for it inside of another String T, like so:
```
if (T.matches(S)) ...
```
(note: the above line was T.contains() until a few posts pointed out that that method does not use regexes. My bad.)
But now suppose S may have unsavory characters in it. For instance, let S = "[hi". The left square bracket is going to cause the regex to fail. Is there a function I can call to escape S so that this doesn't happen? In this particular case, I would like it to be transformed to "\[hi".
|
[`String.contains`](https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/lang/String.html#contains(java.lang.CharSequence)) does not use regex, so there isn't a problem in this case.
Where a regex is required, rather rejecting strings with regex special characters, use [`java.util.regex.Pattern.quote`](https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/regex/Pattern.html#quote(java.lang.String)) to escape them.
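As a minimal sketch of both points (variable names `S` and `T` taken from the question), `contains` needs no escaping at all, and `Pattern.quote` makes the metacharacter harmless when a regex really is required:

```java
import java.util.regex.Pattern;

String S = "[hi";                     // contains a regex metacharacter
String T = "say [hi to everyone";

// contains() does a plain substring search - no escaping needed
boolean plain = T.contains(S);

// Pattern.quote wraps S in \Q...\E so the '[' is matched literally
boolean viaRegex = Pattern.compile(Pattern.quote(S)).matcher(T).find();
```

Both checks come back `true`; without `Pattern.quote`, compiling `[hi` as a regex would throw a `PatternSyntaxException` for the unclosed character class.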
|
As [Tom Hawtin](https://stackoverflow.com/questions/168639/escaping-a-string-from-getting-regex-parsed-in-java#168652) said, you need to quote the pattern. You can do this in two ways (edit: actually three ways, as pointed out by @[diastrophism](https://stackoverflow.com/questions/168639/escaping-a-string-from-getting-regex-parsed-in-java#169133)):
1. Surround the string with "\Q" and "\E", like:
```
if (T.matches("\\Q" + S + "\\E"))
```
2. Use [Pattern](http://java.sun.com/javase/6/docs/api/java/util/regex/Pattern.html) instead. The code would be something like this:
```
Pattern sPattern = Pattern.compile(S, Pattern.LITERAL);
if (sPattern.matcher(T).matches()) { /* do something */ }
```
This way, you can cache the compiled Pattern and reuse it. If you are using the same regex more than once, you almost certainly want to do it this way.
Note that if you are using regular expressions to test whether a string is inside a larger string, you should put .\* at the start and end of the expression. But this will not work if you are quoting the pattern, since it will then be looking for actual dots. So, are you absolutely certain you want to be using regular expressions?
|
Escaping a String from getting regex parsed in Java
|
[
"",
"java",
"regex",
"string",
"escaping",
""
] |
What is the best way to save enums into a database?
I know Java provides `name()` and `valueOf()` methods to convert enum values into a String and back. But are there any other (flexible) options to store these values?
Is there a smart way to make enums into unique numbers (`ordinal()` is not safe to use)?
#### Update
Thanks for all awesome and fast answers! It was as I suspected.
However, a note to [toolkit](https://stackoverflow.com/users/3295): That is one way. The problem is that I would have to add the same methods to each enum type that I create. That's a lot of duplicated code and, at the moment, Java does not support any solutions for this (a Java enum cannot extend other classes).
|
We *never* store enumerations as numerical ordinal values anymore; it makes debugging and support way too difficult. We store the actual enumeration value converted to string:
```
public enum Suit { Spade, Heart, Diamond, Club }
Suit theSuit = Suit.Heart;
szQuery = String.format("INSERT INTO Customers (Name, Suit) " +
          "VALUES ('Ian Boyd', '%s')", theSuit.name());
```
and then read back with:
```
Suit theSuit = Suit.valueOf(reader["Suit"]);
```
In the past, the problem was staring at Enterprise Manager and trying to decipher:
```
Name Suit
------------ ----
Kylie Guénin 2
Ian Boyd 1
```
versus
```
Name Suit
------------ -------
Kylie Guénin Diamond
Ian Boyd Heart
```
the latter is much easier. The former required getting at the source code and finding the numerical values that were assigned to the enumeration members.
Yes it takes more space, but the enumeration member names are short, and hard drives are cheap, and it is much more worth it to help when you're having a problem.
Additionally, if you use numerical values, you are tied to them. You cannot nicely insert or rearrange the members without having to force the old numerical values. For example, changing the Suit enumeration to:
```
public enum Suit { Unknown, Heart, Club, Diamond, Spade }
```
would have to become :
```
public enum Suit {
Unknown = 4,
Heart = 1,
Club = 3,
Diamond = 2,
Spade = 0 }
```
in order to maintain the legacy numerical values stored in the database.
## How to sort them in the database
The question comes up: let's say I wanted to order the values. Some people may want to sort them by the `enum`'s ordinal value. Of course, ordering the cards by the numerical value of the enumeration is meaningless:
```
SELECT Suit FROM Cards
ORDER BY SuitID; --where SuitID is integer value(4,1,3,2,0)
Suit
------
Spade
Heart
Diamond
Club
Unknown
```
That's not the order we want - we want them in enumeration order:
```
SELECT Suit FROM Cards
ORDER BY CASE SuitID
WHEN 4 THEN 0 --Unknown first
WHEN 1 THEN 1 --Heart
WHEN 3 THEN 2 --Club
WHEN 2 THEN 3 --Diamond
WHEN 0 THEN 4 --Spade
ELSE 999 END
```
The same work that is required if you save integer values is required if you save strings:
```
SELECT Suit FROM Cards
ORDER BY Suit; --where Suit is an enum name
Suit
-------
Club
Diamond
Heart
Spade
Unknown
```
But that's not the order we want - we want them in enumeration order:
```
SELECT Suit FROM Cards
ORDER BY CASE Suit
WHEN 'Unknown' THEN 0
WHEN 'Heart' THEN 1
WHEN 'Club' THEN 2
WHEN 'Diamond' THEN 3
WHEN 'Spade' THEN 4
ELSE 999 END
```
My opinion is that this kind of ranking belongs in the user interface. If you are sorting items based on their enumeration value: you're doing something wrong.
But if you wanted to really do that, i would create a `Suits` [dimension table](https://en.wikipedia.org/wiki/Dimension_(data_warehouse)):
| Suit | SuitID | Rank | Color |
| --- | --- | --- | --- |
| Unknown | 4 | 0 | NULL |
| Heart | 1 | 1 | Red |
| Club | 3 | 2 | Black |
| Diamond | 2 | 3 | Red |
| Spade | 0 | 4 | Black |
This way, when you want to change your cards to use [***Kissing Kings* New Deck Order**](https://www.quora.com/When-you-buy-a-deck-of-cards-does-it-come-mixed-or-in-order) you can change it for display purposes without throwing away all your data:
| Suit | SuitID | Rank | Color | CardOrder |
| --- | --- | --- | --- | --- |
| Unknown | 4 | 0 | NULL | NULL |
| Spade | 0 | 1 | Black | 1 |
| Diamond | 2 | 2 | Red | 1 |
| Club | 3 | 3 | Black | -1 |
| Heart | 1 | 4 | Red | -1 |
Now we are separating an internal programming detail (enumeration name, enumeration value) from a display setting meant for users:
```
SELECT Cards.Suit
FROM Cards
INNER JOIN Suits ON Cards.Suit = Suits.Suit
ORDER BY Suits.Rank,
Cards.Rank*Suits.CardOrder
```
|
Unless you have specific performance reasons to avoid it, I would recommend using a separate table for the enumeration. Use foreign key integrity unless the extra lookup really kills you.
### Suits table:
```
suit_id suit_name
1 Clubs
2 Hearts
3 Spades
4 Diamonds
```
### Players table
```
player_name suit_id
Ian Boyd 4
Shelby Lake 2
```
1. If you ever refactor your enumeration to be classes with behavior (such as priority), your database already models it correctly
2. Your DBA is happy because your schema is normalized (storing a single integer per player, instead of an entire string, which may or may not have typos).
3. Your database values (`suit_id`) are independent from your enumeration value, which helps you work on the data from other languages as well.
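If you do go with integer ids, a sketch of the usual Java pattern (constant names and id values here are illustrative) is to give the enum an explicit id field instead of relying on `ordinal()`, so the stored values survive any later reordering of the constants:

```java
enum Suit {
    CLUBS(1), HEARTS(2), SPADES(3), DIAMONDS(4);

    private final int dbId;

    Suit(int dbId) { this.dbId = dbId; }

    public int getDbId() { return dbId; }

    // Reverse lookup when reading the id column back from the database
    public static Suit fromDbId(int id) {
        for (Suit s : values()) {
            if (s.dbId == id) return s;
        }
        throw new IllegalArgumentException("Unknown suit id: " + id);
    }
}
```

The ids then match the `suit_id` foreign-key column, and adding or reordering constants never silently corrupts stored rows the way `ordinal()` would.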
|
Ways to save enums in database
|
[
"",
"java",
"database",
"enums",
""
] |
I need to serialize a huge amount of data (around 2gigs) of small objects into a single file in order to be processed later by another Java process. Performance is kind of important. Can anyone suggest a good method to achieve this?
|
Have you taken a look at google's [protocol buffers](http://code.google.com/p/protobuf/)? Sounds like a use case for it.
|
I don't know why Java Serialization got voted down; it's a perfectly viable mechanism.
It's not clear from the original post, but is all 2G of data in the heap at the same time? Or are you dumping something else?
Out of the box, Serialization isn't the "perfect" solution, but if you implement Externalizable on your objects, Serialization can work just fine. Serialization's big expense is figuring out what to write and how to write it. By implementing Externalizable, you take those decisions out of its hands, thus gaining quite a boost in performance and a space savings.
While I/O is a primary cost of writing large amounts of data, the incidental costs of converting the data can also be very expensive. For example, you don't want to convert all of your numbers to text and then back again, better to store them in a more native format if possible. ObjectStream has methods to read/write the native types in Java.
If all of your data is designed to be loaded in to a single structure, you could simply do ObjectOutputStream.writeObject(yourBigDatastructure), after you've implemented Externalizable.
However, you could also iterate over your structure and call writeObject on the individual objects.
Either way, you're going to need some "objectToFile" routine, perhaps several. And that's effectively what Externalizable provides, as well as a framework to walk your structure.
The other issue, of course, is versioning, etc. But since you implement all of the serialization routines yourself, you have full control over that as well.
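As a rough sketch of that approach (the `Point` class here is hypothetical), implementing Externalizable means only the raw field data is written, skipping most of the per-object metadata that default serialization emits:

```java
import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;

class Point implements Externalizable {
    int x, y;

    public Point() { }                  // public no-arg constructor is required

    Point(int x, int y) { this.x = x; this.y = y; }

    @Override
    public void writeExternal(ObjectOutput out) throws IOException {
        out.writeInt(x);                // write only the raw primitive fields
        out.writeInt(y);
    }

    @Override
    public void readExternal(ObjectInput in) throws IOException {
        x = in.readInt();
        y = in.readInt();
    }
}
```

Instances are still written with `ObjectOutputStream.writeObject`, but the per-object payload is now just the two ints you chose to emit.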
|
Java: Serializing a huge amount of data to a single file
|
[
"",
"java",
"serialization",
""
] |
I am building Java web applications, and I hate the traditional "code-compile-deploy-test" cycle. I want to type in one tiny change, then see the result INSTANTLY, without having to compile and deploy.
Fortunately, [Jetty](http://www.mortbay.org/jetty/) is great for this. It is a pure-java web server. It comes with a really nice [maven plugin](http://docs.codehaus.org/display/JETTY/Maven+Jetty+Plugin) which lets you launch Jetty reading directly from your build tree -- no need to package a war file or deploy. It even has a scanInterval setting: put this to a non-zero value and it will watch your java files and various config files for changes and automatically re-deploy a few seconds after you make a change.
There's just one thing keeping me from nirvana. I have javascript and css files in my src/main/webapp directory which just get served up by Jetty. I would like to be able to edit *these* and have the changes show up when I refresh the page in the browser. Unfortunately, Jetty holds these files open so I can't (on Windows) modify them while it is running.
Does anyone know how to make Jetty let go of these files so I can edit them, then serve up the edited files for subsequent requests?
|
Jetty uses memory-mapped files to buffer static content, which causes the file-locking in Windows. Try setting `useFileMappedBuffer` for `DefaultServlet` to `false`.
[Troubleshooting Locked files on Windows (from the Jetty wiki)](http://www.eclipse.org/jetty/documentation/current/troubleshooting-locked-files-on-windows.html) has instructions.
|
While one of the answers above is exactly right for configuring jetty by xml, if you want to configure this option in code (for an embedded server) the answer is different and not found on that page.
You'll find a number of suggestions online including
context.getInitParams().put("useFileMappedBuffer", "false");
Or overriding the WebAppContext, or using a fully qualified name for the init parameter. None of these suggestions worked for me (using Jetty 7.2.2). Part of the problem was that the useFileMappedBuffer option needs to be set on the servlet that the WebAppContext is using to serve the static files, rather than on the context.
In the end I did something like this on a straightforward ServletContextHandler
```
// Startup stuff
final Server server = new Server(port);
ServletContextHandler handler = new ServletContextHandler();
handler.setResourceBase(path);
SessionManager sm = new HashSessionManager();
SessionHandler sh = new SessionHandler(sm);
handler.setSessionHandler(sh);
DefaultServlet defaultServlet = new DefaultServlet();
ServletHolder holder = new ServletHolder(defaultServlet);
holder.setInitParameter("useFileMappedBuffer", "false");
handler.addServlet(holder, "/");
server.setHandler(handler);
server.start();
server.join();
```
|
How to make Jetty dynamically load "static" pages
|
[
"",
"java",
"spring",
"jetty",
""
] |
I have a regexp I'm using with sed, but now I need to make it work in PHP also. I can't use system calls as they are disabled.
```
$ cat uglynumber.txt:
Ticket number : 303905694, FOO:BAR:BAR: Some text
Case ID:123,456,789:Foobar - Some other text
303867970;[FOOBAR] Some text goes here
Case Ref: 303658850 - Some random text here - host.tld #78854w
```
```
$ cat uglynumbers.txt | sed "s/[, ]//g;s/.*\([0-9]\{9\}\).*/\1/g"
303905694
123456789
303867970
303658850
```
So, how to do the same with PHP?
I found one example like this, but I can't inject that regexp into that.
```
if (preg_match("/.../", $line, $matches)) {
echo "Match was found";
echo $matches[0];
}
```
|
Your specific sed example is really two regular expressions: one replacing the commas and spaces, and one grabbing the continuous 9-digit numbers.
The first half of your SED string is best fit with the `preg_replace()` function.
```
//`sed s/regex/replace_value/flags`
preg_replace('/regex/flags', 'replace_value', $input);
```
The second half of your SED string would be a `preg_match_all()`:
```
//`sed ...;s/regex/\1/flags`
$matches_array = array();
preg_match_all('/regex/flags', $input, $matches_array);
```
So your specific code will look something like:
```
<?php
$input = file_get_contents('uglynumbers.txt');
$input = preg_replace('/[, ]/m','', $input);
$matches = array();
//No need for the .* or groupings, just match all occurrences of [0-9]{9}
if( preg_match_all('/[0-9]{9}/m', $input, $matches) )
{
//...
var_dump($matches);
}
```
---
Note that `g` is a sed flag meaning replace every match on a line, not just the first. `preg_replace()` and `preg_match_all()` already handle all matches by default, while `m` is the PCRE multiline modifier, which makes `^` and `$` match at line boundaries, as per the manual on [PCRE modifiers](http://php.net/manual/en/reference.pcre.pattern.modifiers.php).
|
Try using [`preg_replace()`](https://www.php.net/preg_replace) instead of `preg_match()`. `grep` is to `sed` what `preg_match` is to `preg_replace`.
|
Converting a sed string to PHP
|
[
"",
"php",
"regex",
""
] |
I am trying to prevent data races in a multithreaded server. My problem is the following: there is a `List<RServer>`; the type `RServer` is a class with several fields. Now, the server has several threads all running at the same time, and they can modify both the `List` (adding more items) and the individual `RServer` instances (changing the fields).
So my strategy is to make a `readonly object RServerLock = new object( )` in each of the `RServer` instances and additionally a `readonly object RServerListLock = new object( )` and enclose all the code that modifies either (the `List` or a `RServer` instance) in a `lock`. Is this safe? What happens if a thread tries to lock a `RServerLock` *while* another one is locking it?
|
If you have a contended lock, the second thread has to wait until the first releases the lock.
Your plan sounds *nearly* okay - but you need to lock when *reading* data as well, to make sure you get the most recent values, and consistent ones. Otherwise you could be half way through writing some values in one thread, and see some of the new values - but possibly not all - and the old values, all at the same time in a different thread.
If you can avoid doing this as much as possible, your life will be easier :) Immutable types make threading a lot simpler.
Don't forget that if you ever have code which will need two locks at the same time (e.g. adding one RServer and modifying another, atomically) you *must* make sure that you always acquire locks in the same order - if one thread tries to acquire lock B while it's holding lock A, and a different thread tries to acquire lock A while it's holding lock B, you'll end up with deadlock.
See my [threading tutorial](http://pobox.com/~skeet/csharp/threads) or [Joe Albahari's](http://www.albahari.com/threading/default.aspx) for more details. Also, if you're interested in concurrency, Joe Duffy has an [excellent book](https://rads.stackoverflow.com/amzn/click/com/032143482X) which is coming out very soon.
|
Looks like you have a prime candidate for a reader/writer lock. The best class to use (if your runtime supports it; ReaderWriterLockSlim was introduced in .NET 3.5) is ReaderWriterLockSlim, as the original ReaderWriterLock has performance issues.
One of the MSDN magazine authors also came across a problem with the RWLS class, I won't go into the specifics here, but you can look at it [here](http://msdn.microsoft.com/en-us/magazine/cc163532.aspx "MSDN Article").
I know the following code will spawn the fury of the IDisposable purists, but sometimes it really makes nice syntactic sugar. In any case, you may find the following useful:
```
/// <summary>
/// Opens the specified reader writer lock in read mode,
/// specifying whether or not it may be upgraded.
/// </summary>
/// <param name="slim"></param>
/// <param name="upgradeable"></param>
/// <returns></returns>
public static IDisposable Read(this ReaderWriterLockSlim slim, bool upgradeable)
{
return new ReaderWriterLockSlimController(slim, true, upgradeable);
} // IDisposable Read
/// <summary>
/// Opens the specified reader writer lock in read mode,
/// and does not allow upgrading.
/// </summary>
/// <param name="slim"></param>
/// <returns></returns>
public static IDisposable Read(this ReaderWriterLockSlim slim)
{
return new ReaderWriterLockSlimController(slim, true, false);
} // IDisposable Read
/// <summary>
/// Opens the specified reader writer lock in write mode.
/// </summary>
/// <param name="slim"></param>
/// <returns></returns>
public static IDisposable Write(this ReaderWriterLockSlim slim)
{
return new ReaderWriterLockSlimController(slim, false, false);
} // IDisposable Write
private class ReaderWriterLockSlimController : IDisposable
{
#region Fields
private bool _closed = false;
private bool _read = false;
private ReaderWriterLockSlim _slim;
private bool _upgrade = false;
#endregion Fields
#region Constructors
public ReaderWriterLockSlimController(ReaderWriterLockSlim slim, bool read, bool upgrade)
{
_slim = slim;
_read = read;
_upgrade = upgrade;
if (_read)
{
if (upgrade)
{
_slim.EnterUpgradeableReadLock();
}
else
{
_slim.EnterReadLock();
}
}
else
{
_slim.EnterWriteLock();
}
} // ReaderWriterLockSlimController
~ReaderWriterLockSlimController()
{
Dispose();
} // ~ReaderWriterLockSlimController
#endregion Constructors
#region Methods
public void Dispose()
{
if (_closed)
return;
_closed = true;
if (_read)
{
if (_upgrade)
{
_slim.ExitUpgradeableReadLock();
}
else
{
_slim.ExitReadLock();
}
}
else
{
_slim.ExitWriteLock();
}
GC.SuppressFinalize(this);
} // void Dispose
#endregion Methods
} // Class ReaderWriterLockSlimController
```
Put that in an extension method class (public static class [Name]) and use it as follows:
```
using(myReaderWriterLockSlim.Read())
{
// Do read operations.
}
```
Or
```
using(myReaderWriterLockSlim.Read(true))
{
// Read a flag.
if(flag)
{
using(myReaderWriterLockSlim.Write()) // Because we said Read(true).
{
// Do read/write operations.
}
}
}
```
Or
```
using(myReaderWriterLockSlim.Write()) // This means you can also safely read.
{
// Do read/write operations.
}
```
|
c# locking
|
[
"",
"c#",
"multithreading",
"locking",
""
] |
Given an `InputStream` called `in` which contains audio data in a compressed format (such as MP3 or OGG), I wish to create a `byte` array containing a WAV conversion of the input data. Unfortunately, if you try to do this, JavaSound hands you the following error:
```
java.io.IOException: stream length not specified
```
I managed to get it to work by writing the wav to a temporary file, then reading it back in, as shown below:
```
AudioInputStream source = AudioSystem.getAudioInputStream(new BufferedInputStream(in, 1024));
AudioInputStream pcm = AudioSystem.getAudioInputStream(AudioFormat.Encoding.PCM_SIGNED, source);
AudioInputStream ulaw = AudioSystem.getAudioInputStream(AudioFormat.Encoding.ULAW, pcm);
File tempFile = File.createTempFile("wav", "tmp");
AudioSystem.write(ulaw, AudioFileFormat.Type.WAVE, tempFile);
// The fileToByteArray() method reads the file
// into a byte array; omitted for brevity
byte[] bytes = fileToByteArray(tempFile);
tempFile.delete();
return bytes;
```
This is obviously less desirable. Is there a better way?
|
The problem is that most AudioFileWriters need to know the file size in advance when writing to an OutputStream. Because you can't provide this, it always fails. Unfortunately, the default Java sound API implementation doesn't have any alternatives.
But you can try using the AudioOutputStream architecture from the Tritonus plugins (Tritonus is an open source implementation of the Java sound API): <http://tritonus.org/plugins.html>
|
I notice this question was asked a very long time ago. In case anyone new (using Java 7 and above) finds this thread, note there is a better way to do it via the Files.readAllBytes API. See:
[How to convert .wav file into byte array?](https://stackoverflow.com/questions/13802441/how-to-convert-wav-file-into-byte-array)
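A minimal sketch of that approach (Java 7+), using a temp file with dummy contents as a stand-in for the generated WAV:

```java
import java.nio.file.Files;
import java.nio.file.Path;

// Stand-in for the WAV produced by AudioSystem.write; the payload here
// is just the four "RIFF" magic bytes for demonstration
Path wav = Files.createTempFile("clip", ".wav");
Files.write(wav, new byte[] {'R', 'I', 'F', 'F'});

byte[] bytes = Files.readAllBytes(wav);   // whole file into one byte[]
Files.delete(wav);
```

This replaces the hand-rolled `fileToByteArray()` helper from the question with a single library call, though it still goes through a temp file rather than avoiding one.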
|
Convert audio stream to WAV byte array in Java without temp file
|
[
"",
"java",
"audio",
"wav",
"javasound",
""
] |
I have heard mixed opinions over the amount of memory that a byte takes up in a java program.
I am aware you can store no more than +127 in a Java byte, and the [documentation](http://java.sun.com/docs/books/tutorial/java/nutsandbolts/datatypes.html) says that a byte is only 8 bits, but [here](http://www.jguru.com/faq/view.jsp?EID=13647) I am told that it actually takes up the same amount of memory as an int, and is therefore just a type that helps in code comprehension, not efficiency.
Can anyone clear this up, and would this be an implementation specific issue?
|
Okay, there's been a lot of discussion and not a lot of code :)
Here's a quick benchmark. It's got the normal caveats when it comes to this kind of thing - testing memory has oddities due to JITting etc, but with suitably large numbers it's useful anyway. It has two types, each with 80 members - LotsOfBytes has 80 bytes, LotsOfInts has 80 ints. We build lots of them, make sure they're not GC'd, and check memory usage:
```
class LotsOfBytes
{
byte a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, aa, ab, ac, ad, ae, af;
byte b0, b1, b2, b3, b4, b5, b6, b7, b8, b9, ba, bb, bc, bd, be, bf;
byte c0, c1, c2, c3, c4, c5, c6, c7, c8, c9, ca, cb, cc, cd, ce, cf;
byte d0, d1, d2, d3, d4, d5, d6, d7, d8, d9, da, db, dc, dd, de, df;
byte e0, e1, e2, e3, e4, e5, e6, e7, e8, e9, ea, eb, ec, ed, ee, ef;
}
class LotsOfInts
{
int a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, aa, ab, ac, ad, ae, af;
int b0, b1, b2, b3, b4, b5, b6, b7, b8, b9, ba, bb, bc, bd, be, bf;
int c0, c1, c2, c3, c4, c5, c6, c7, c8, c9, ca, cb, cc, cd, ce, cf;
int d0, d1, d2, d3, d4, d5, d6, d7, d8, d9, da, db, dc, dd, de, df;
int e0, e1, e2, e3, e4, e5, e6, e7, e8, e9, ea, eb, ec, ed, ee, ef;
}
public class Test
{
private static final int SIZE = 1000000;
public static void main(String[] args) throws Exception
{
LotsOfBytes[] first = new LotsOfBytes[SIZE];
LotsOfInts[] second = new LotsOfInts[SIZE];
System.gc();
long startMem = getMemory();
for (int i=0; i < SIZE; i++)
{
first[i] = new LotsOfBytes();
}
System.gc();
long endMem = getMemory();
System.out.println ("Size for LotsOfBytes: " + (endMem-startMem));
System.out.println ("Average size: " + ((endMem-startMem) / ((double)SIZE)));
System.gc();
startMem = getMemory();
for (int i=0; i < SIZE; i++)
{
second[i] = new LotsOfInts();
}
System.gc();
endMem = getMemory();
System.out.println ("Size for LotsOfInts: " + (endMem-startMem));
System.out.println ("Average size: " + ((endMem-startMem) / ((double)SIZE)));
// Make sure nothing gets collected
long total = 0;
for (int i=0; i < SIZE; i++)
{
total += first[i].a0 + second[i].a0;
}
System.out.println(total);
}
private static long getMemory()
{
Runtime runtime = Runtime.getRuntime();
return runtime.totalMemory() - runtime.freeMemory();
}
}
```
Output on my box:
```
Size for LotsOfBytes: 88811688
Average size: 88.811688
Size for LotsOfInts: 327076360
Average size: 327.07636
0
```
So obviously there's some overhead - 8 bytes by the looks of it, although somehow only 7 for LotsOfInts (? like I said, there are oddities here) - but the point is that the byte fields appear to be packed in for LotsOfBytes such that it takes (after overhead removal) only a quarter as much memory as LotsOfInts.
|
Yes, a byte variable in Java is in fact 4 bytes in memory. However this doesn't hold true for arrays. The storage of a byte array of 20 bytes is in fact only 20 bytes in memory.
That is because the Java bytecode language only knows two integer number types: ints and longs. So it must handle all numbers internally as one of those types, which take 4 and 8 bytes in memory.
However, Java has array types for every integer number format. So the storage of a short array is in fact two bytes per entry, and one byte per entry for byte arrays.
The reason why I keep saying "the storage of" is that an array is also an object in Java and every object requires multiple bytes of storage on its own, regardless of the storage that instance variables or the array storage in case of arrays require.
|
Size of a byte in memory - Java
|
[
"",
"java",
"performance",
"memory",
""
] |
I have a class that I need to binary serialize. The class contains one field as below:
```
private T[,] m_data;
```
These multi-dimensional arrays can be fairly large (hundreds of thousands of elements) and of any primitive type. When I tried standard .NET serialization on an object, the file written to disk was large, and I think .NET is storing a lot of repeated data about element types, possibly not as efficiently as could be done.
I have looked around for custom serializers but have not seen any that deal with multi-dimensional generic arrays. I have also experimented with built-in .net compression on a byte array of the memory stream following serializing with some success, but not as quick / compressed as I had hoped.
My question is, should I try and write a custom serializer to optimally serialize this array for the appropriate type (this seems a little daunting), or should I use standard .net serialization and add compression?
Any advice on the best approach would be most appreciated, or links to resources showing how to tackle serialization of a multi-dimensional generic array - as mentioned [existing examples](http://www.codeproject.com/KB/dotnet/FastSerializer.aspx) I have found do not support such structures.
|
Here's what I came up with. The code below makes an int[1000][10000] and writes it out using the BinaryFormatter to 2 files - one zipped and one not.
The zipped file is 1.19 MB (1,255,339 bytes)
Unzipped is 38.2 MB (40,150,034 bytes)
```
int width = 1000;
int height = 10000;
List<int[]> list = new List<int[]>();
for (int i = 0; i < height; i++)
{
list.Add(Enumerable.Range(0, width).ToArray());
}
int[][] bazillionInts = list.ToArray();
using (FileStream fsZ = new FileStream("c:\\temp_zipped.txt", FileMode.Create))
using (FileStream fs = new FileStream("c:\\temp_notZipped.txt", FileMode.Create))
using (GZipStream gz = new GZipStream(fsZ, CompressionMode.Compress))
{
BinaryFormatter f = new BinaryFormatter();
f.Serialize(gz, bazillionInts);
f.Serialize(fs, bazillionInts);
}
```
I can't think of a better/easy way to do this. The zipped version is pretty damn tight.
I'd go with the BinaryFormatter + GZipStream. Making something custom would not be fun at all.
---
[edit by MG]
I hope you won't be offended by an edit, but the uniform repeated Range(0,width) is skewing things vastly; change to:
```
int width = 1000;
int height = 10000;
Random rand = new Random(123456);
int[,] bazillionInts = new int[width, height];
for(int i = 0 ; i < width;i++)
for (int j = 0; j < height; j++)
{
bazillionInts[i, j] = rand.Next(50000);
}
```
And try it; you'll see `temp_notZipped.txt` at 40MB, `temp_zipped.txt` at 62MB. Not so appealing...
|
The best code length/output size ratio would be to encode your array using BitConverter, converting all elements into their compact binary format. It's manual, I know, but will save 80-90% space compared to .NET binary serialization.
|
Optimising binary serialization for multi-dimensional generic arrays
|
[
"",
"c#",
".net",
"serialization",
"binary",
""
] |
As a followup to my [question about the java.awt.Component.getName() property](https://stackoverflow.com/questions/227143/java-what-is-javaawtcomponentgetname-and-setname-used-for), I'm wondering if there is a way of squirreling an arbitrary object somewhere in a Component, similar to the [tag property in .NET](http://msdn.microsoft.com/en-us/library/system.windows.forms.control.tag.aspx)? This would be really handy for any code that does work to a component but doesn't necessarily know what that component is.
For example, say I'm trying to implement an application-wide help system that knows to look at any component currently pointed to by the mouse, reach into that component, pull out its help text, and display it in its own pane on the screen (no, I don't want to use a tooltip). My current answer is to use the name (getName()/setName()) to store the help text, and this will work, but the name has to be a string. If I wanted to get fancier and store anything other than a string, I'd be stuck.
|
I generally create a hash map and put (component, cookie) into it whenever I add a component to the screen. When you need your cookie object back (in an event, perhaps), the event always carries a reference to the component, and then you are just a get(component) away from your cookie.
In some extreme conditions, I've subclassed the control and just added a field. It's a quick and dirty fix, since subclassing the component is just a few lines of code and can go in the same file where you are generating your screen. This is only useful if you need to store your data connected to a single type of control.
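A minimal sketch of the map approach (the class and method names here are illustrative, not from any particular toolkit):

```java
import java.awt.Component;
import java.util.HashMap;
import java.util.Map;
import javax.swing.JButton;

public class HelpRegistry {
    // Maps each registered component to an arbitrary "cookie" object.
    private final Map<Component, Object> cookies = new HashMap<>();

    public void register(Component c, Object cookie) {
        cookies.put(c, cookie);
    }

    // In an event handler you already have the component, so lookup is direct.
    public Object lookup(Component c) {
        return cookies.get(c);
    }

    public static void main(String[] args) {
        HelpRegistry registry = new HelpRegistry();
        JButton save = new JButton("Save");
        registry.register(save, "Saves the current document");
        System.out.println(registry.lookup(save)); // prints "Saves the current document"
    }
}
```

Note that the map holds strong references; remove entries when components are discarded, or use a WeakHashMap so entries disappear along with their components.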
|
`JComponent` has `putClientProperty` and `getClientProperty`.
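For example (the property key is arbitrary; "helpText" is just an illustration):

```java
import javax.swing.JComponent;
import javax.swing.JLabel;

public class ClientPropertyDemo {
    public static void main(String[] args) {
        JComponent field = new JLabel("Name:");
        // Any Object can be stored under any key -- no subclassing or external map needed.
        field.putClientProperty("helpText", "Enter your full name");
        System.out.println(field.getClientProperty("helpText")); // prints "Enter your full name"
    }
}
```

The limitation is that these methods live on JComponent (Swing), not on java.awt.Component, so they won't help with plain AWT components.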
|
Java: Any way of squirreling an object about a Component somewhere in the Component?
|
[
"",
"java",
"awt",
""
] |
We are using a WCF service layer to return images from a repository. Some of the images are color, multi-page, nearly all are TIFF format. We experience slowness - one of many issues.
1.) What experiences have you had with returning images via WCF
2.) Do you have any suggestions tips for returning large images?
3.) All messages are serialized via SOAP, correct?
4.) Does WCF do a poor job of compressing large TIFF files?
Thanks all!
|
Okay, just to second the responses by ZombieSheep and Seba Gomez: you should definitely look at streaming your data. By doing so you could seamlessly integrate a GZipStream into the process. On the client side you can reverse the compression process and convert the stream back to your desired image.
By using streaming there is a select number of classes that can be used as parameters/return types and you do need to modify your bindings throughout.
Here is the [MSDN site](http://msdn.microsoft.com/en-us/library/ms789010.aspx) on enabling streaming. [This](http://msdn.microsoft.com/en-us/library/ms731913.aspx) is the MSDN page that describes the restrictions on streaming contracts.
I assume you are also controlling the client side code, this might be really hard if you aren't. I have only used streaming when I had control of both the server and client.
Good luck.
|
If you are using another .Net assembly as your client, you can use two methodologies for returning large chunks of data, streaming or MTOM.
Streaming will allow you to pass a TIFF image as if it were a normal file stream on the local filesystem. See [here](http://msdn.microsoft.com/en-us/library/aa751889.aspx) for more details on the choices and their pros and cons.
Unfortunately, you're still going to have to transfer a large block of data, and I can't see any way around that, considering the points already raised.
|
WCF - returning large images - your experience and tips on doing so
|
[
"",
"c#",
".net",
"wcf",
""
] |
I have a .NET web-service client that has been autogenerated from a wsdl-file using the wsdl.exe tool.
When I first instantiate the generated class, it begins to request a bunch of documents from w3.org and others. The first one being <http://www.w3.org/2001/XMLSchema.dtd>
Besides not wanting to cause unnecessary traffic to w3.org, I need to be able to run the application without a connection to the Internet (the web-service is a "Intra-web-service").
Anyone know the solution?
If it helps, here is the stacktrace I get when I do not have Internet:
```
"An error has occurred while opening external DTD 'http://www.w3.org/2001/XMLSchema.dtd': The remote name could not be resolved: 'www.w3.org'"
at System.Net.HttpWebRequest.GetResponse()
at System.Xml.XmlDownloadManager.GetNonFileStream(Uri uri, ICredentials credentials)
at System.Xml.XmlDownloadManager.GetStream(Uri uri, ICredentials credentials)
at System.Xml.XmlUrlResolver.GetEntity(Uri absoluteUri, String role, Type ofObjectToReturn)
at System.Xml.XmlTextReaderImpl.OpenStream(Uri uri)
at System.Xml.XmlTextReaderImpl.DtdParserProxy_PushExternalSubset(String systemId, String publicId)
at System.Xml.XmlTextReaderImpl.Throw(Exception e)
at System.Xml.XmlTextReaderImpl.DtdParserProxy_PushExternalSubset(String systemId, String publicId)
at System.Xml.XmlTextReaderImpl.DtdParserProxy.System.Xml.IDtdParserAdapter.PushExternalSubset(String systemId, String publicId)
at System.Xml.DtdParser.ParseExternalSubset()
at System.Xml.DtdParser.ParseInDocumentDtd(Boolean saveInternalSubset)
at System.Xml.DtdParser.Parse(Boolean saveInternalSubset)
at System.Xml.XmlTextReaderImpl.DtdParserProxy.Parse(Boolean saveInternalSubset)
at System.Xml.XmlTextReaderImpl.ParseDoctypeDecl()
at System.Xml.XmlTextReaderImpl.ParseDocumentContent()
at System.Xml.XmlTextReaderImpl.Read()
at System.Xml.Schema.Parser.StartParsing(XmlReader reader, String targetNamespace)
at System.Xml.Schema.Parser.Parse(XmlReader reader, String targetNamespace)
at System.Xml.Schema.XmlSchemaSet.ParseSchema(String targetNamespace, XmlReader reader)
at System.Xml.Schema.XmlSchemaSet.Add(String targetNamespace, XmlReader schemaDocument)
at [...]WebServiceClientType..cctor() in [...]
```
|
I needed the XmlResolver, so [tamberg's solution](https://stackoverflow.com/questions/217841/net-autogenerated-web-service-client-how-do-i-avoid-requesting-schemas-from-w3o#218105) did not quite work. I solved it by implementing my own XmlResolver that read the necessary schemas from embedded resources instead of downloading them.
The problem did not have anything to do with the autogenerated code, by the way.
The web-service-client had another implementation file that contained something like this:
```
public partial class [...]WebServiceClientType
{
    private static readonly XmlSchemaSet _schema;

    static KeyImportFileType()
    {
        _schema = new XmlSchemaSet();
        _schema.Add(null, XmlResourceResolver.GetXmlReader("http://www.w3.org/TR/2002/REC-xmldsig-core-20020212/xmldsig-core-schema.xsd"));
        _schema.Add(null, XmlResourceResolver.GetXmlReader("http://www.w3.org/TR/2002/REC-xmlenc-core-20021210/xenc-schema.xsd"));
        _schema.Compile();
    }
}
```
and it was this class-constructor that failed.
|
If you have access to the XmlReader (or XmlTextReader), you can do the following:
```
XmlReader r = ...
r.XmlResolver = null; // prevent xsd or dtd parsing
```
Regards,
tamberg
|
.NET autogenerated web-service client: How do I avoid requesting schemas from w3.org?
|
[
"",
"c#",
".net",
"web-services",
"wsdl",
"offline",
""
] |
I am trying to link to a file that has the '#' character in via a window.open() call. The file does exist and can be linked to just fine using a normal anchor tag.
I have tried escaping the '#' character with '%23', but when the window.open(myurl) call gets processed, the '%23' becomes '%2523'. This tells me that my URL string is being escaped by the window.open call, changing the '%' to '%25'.
Are there ways to work around this extra escaping?
Sample code:
```
<script language="javascript">
function escapePound(url)
{
// original attempt
newUrl = url.replace("#", "%23");
// first answer attempt - doesn't work
// newUrl = url.replace("#", "\\#");
return newUrl;
}
</script>
<a href="#top" onclick="url = '\\\\MyUNCPath\\PropertyRushRefi-Add#1-ABCDEF.RTF'; window.open(escapePound(url)); return true;">Some Doc</a>
```
The resulting error says the URL "file://MyUNCPath/PropertyRushRefi-Add%25231-ABCDEF.RTF" cannot be found.
|
You seek the dark magicks of [encodeURI](http://www.w3schools.com/jsref/jsref_encodeURI.asp):
```
window.open("http://your-url.com/" + encodeURIComponent("foo#123.jpg"));
```
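One caveat worth adding: `encodeURI` will not help here, because `#` is legal in a full URI and is left untouched; you need `encodeURIComponent`, and only on the path segment, not the whole URL. A quick illustration (the file name is the one from the question):

```javascript
const name = "PropertyRushRefi-Add#1-ABCDEF.RTF";

// encodeURI treats "#" as a fragment delimiter and leaves it alone:
console.log(encodeURI(name)); // "PropertyRushRefi-Add#1-ABCDEF.RTF"

// encodeURIComponent escapes it, so the browser no longer sees a fragment:
console.log(encodeURIComponent(name)); // "PropertyRushRefi-Add%231-ABCDEF.RTF"
```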
|
Did you try using the standard text escape char "\"?
```
\#
```
|
javascript window.open() and # symbol
|
[
"",
"javascript",
"url-rewriting",
""
] |
In a few months I will start a PHP project, and I am hesitant to do what I usually do: develop on my Windows Vista 64-bit machine. Since I know little about Linux, I think this could be a good opportunity to learn by working on a Linux distribution.
**Do you think it's a good idea or not?** I would run [VirtualBox](http://www.virtualbox.org/) with Ubuntu (on my Vista 64-bit machine). I was thinking of installing [XAMPP](http://www.apachefriends.org/en/xampp-linux.html) to be able to develop in PHP.
If you think it's a good idea, feel free to suggest me some tutorial about what I should know with Virtualizing an OS, or Linux/dev.
**Update**
I have built many websites in PHP on Windows; the question is more about whether developing in a virtual machine is a good way to start learning Linux. I have 4 GB of RAM; will it run smoothly if I install Eclipse in the virtual machine? etc.
|
You should really develop on the same platform where you are going to deploy. I'm not saying it is bad to do differently, but it can save you some pain in the long run. OTOH, you might learn faster about platform differences that way. So, the main question is: do you want to have a production system running ASAP without much headache? Or do you want to spend some time and make some effort to learn how to develop cross-platform code?
And yes, there are differences. For example, case-sensitive versus case-insensitive filenames. Then, some PHP functions use native C functions that have different implementations. For example, printf() does not produce the same amount of whitespace for some of the types. Resolution of time measurement (milliseconds vs microseconds) can be different, etc. Then, you have different ways filesystem permissions are handled. These are just some recent problems I've found that I can remember off the top of my head.
|
PHP **should** be the same on any platform - so *where* you develop shouldn't matter.
However, in my experience and observation, more sites running PHP are running on Linux than on Windows.
Getting Apache and PHP setup on something like Ubuntu or Fedora is a cinch, and testing everything is pretty simple, too.
Also, when you go live with your site, what platform will it be running on? I prefer to do development on the platform it will be running on whenever possible.
|
PHP website, should I develop into a Linux distribution instead of Windows?
|
[
"",
"php",
"linux",
"xampp",
"virtualbox",
""
] |
As a new Eclipse user, I am constantly annoyed by how long it takes compiler error messages to display. This is mostly only a problem for long errors that don't fit in the status bar or the "Problems" tab. But I get enough long errors in Java—especially with generics—that this is a nagging issue. (Note: The correct answer to this question is not "get better at using generics." ;-)
The ways I have found to display an error are:
1. Press `Ctrl+.` or execute the command "Next Annotation". The next error is highlighted and its associated message appears in the status bar (if it is short enough). The error is also highlighted in the "Problems" tab, if it is open, but the tab is not automatically brought to the top.
2. Hover the mouse over the error. After a noticeable lag, the error message appears as a "tool tip", along with any associated "Quick Fixes."
3. Hover the mouse over the error icon on the left side of the editing pane. After a noticeable lag, all of the error messages for that line appear as a "tool tip." Clicking on the icon brings up "Quick Fixes."
What I would like is for `Ctrl+.` to automatically and instantly bring up the complete error message (I don't care where). Is this a configurable option?
[UPDATE] @asterite's "`Ctrl+. F2`" is almost it. How do I make "Next Annotation, then Show Tooltip Description" a macro bound to a single keystroke?
|
Well, you can press F2 to display the popup that normally shows Javadoc. If there's an error, it will display the error message with the available quick fixes.
So you can do Ctrl+., F2 repeatedly to achieve what you want.
|
> "After a noticeable lag, the error message
> appears as a "tool tip", along with
> any associated "Quick Fixes."
Don't know if this will help you, but you can reduce the lag in the pop-up by changing the value of the auto activation delay in the menu located at:
```
Window > Preferences > Java > Editor > Content Assist
```
|
Fast compiler error messages in Eclipse
|
[
"",
"java",
"eclipse",
"ide",
""
] |
I have an array I've created in JavaScript. The end result comes out to element1,element2,,,element5,element6,,,element9.... etc
Once passed to ColdFusion, it removes the null elements, I end up with element1,element2,element5,element6,element9
I need to maintain these spaces, any ideas? My problem may begin before this, to explain in more detail...
I have a form with 13 elements that are acting as a search/filter type function. I want to "post" with AJAX; in essence, I'm using a button to call a jQuery function and want to pass the fields to a ColdFusion page, then have the results passed back. The JavaScript array may not even be my best option.
Any ideas?
|
Are you deserializing the JS array into a list? CF ignores empty list fields in its built-in functions. This can be worked around by processing the text directly. Someone has already done this for you, fortunately. There are several functions at cflib.org, like:
* [ListFix](http://cflib.org/udf/ListFix)
* [ListLenIncNulls](http://cflib.org/udf/ListLenIncNulls)
etc, etc, etc.
|
In exchanging data between javascript and coldfusion have a look at using JSON.
<http://www.json.org>
<http://www.epiphantastic.com/cfjson/>
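JSON sidesteps the original empty-element problem, because array positions are serialized explicitly rather than inferred from a comma-delimited list. A quick illustration (the field values are made up):

```javascript
const fields = ["element1", "element2", "", "", "element5"];

// A plain comma-delimited list: ColdFusion's list functions collapse the empty slots.
console.log(fields.join(",")); // "element1,element2,,,element5"

// A JSON string keeps every position, empty or not:
console.log(JSON.stringify(fields)); // ["element1","element2","","","element5"]
```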
|
JavaScript array to ColdFusion
|
[
"",
"javascript",
"jquery",
"forms",
"coldfusion",
""
] |
I've been writing a little application that will let people upload & download files to me. I've added a web service to this application to provide the upload/download functionality that way, but I'm not too sure how well my implementation is going to cope with large files.
At the moment the definitions of the upload & download methods look like this (written using Apache CXF):
```
boolean uploadFile(@WebParam(name = "username") String username,
@WebParam(name = "password") String password,
@WebParam(name = "filename") String filename,
@WebParam(name = "fileContents") byte[] fileContents)
throws UploadException, LoginException;
byte[] downloadFile(@WebParam(name = "username") String username,
@WebParam(name = "password") String password,
@WebParam(name = "filename") String filename) throws DownloadException,
LoginException;
```
So the file gets uploaded and downloaded as a byte array. But if I have a file of some stupid size (e.g. 1GB) surely this will try and put all that information into memory and crash my service.
So my question is - is it possible to return some kind of stream instead? I would imagine this isn't going to be terribly OS independent though. Although I know the theory behind web services, the practical side is something that I still need to pick up a bit of information on.
Cheers for any input,
Lee
|
[Stephen Denne](https://stackoverflow.com/q/132618/16800) has a Metro implementation that satisfies your requirement. My answer is provided below, after a short explanation as to why that is the case.
Most Web Service implementations that are built using HTTP as the message protocol are REST compliant, in that they only allow simple send-receive patterns and nothing more. This greatly improves interoperability, as all the various platforms can understand this simple architecture (for instance a Java web service talking to a .NET web service).
If you want to maintain this you could provide chunking.
```
boolean uploadFile(String username, String password, String fileName, int currentChunk, int totalChunks, byte[] chunk);
```
This would require some footwork in cases where you don't get the chunks in the right order (or you can simply require that the chunks arrive in order), but it would probably be pretty easy to implement.
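The "footwork" for out-of-order chunks can be as simple as a sorted map keyed by chunk index. A sketch (the class name and the in-memory store are assumptions, not part of the generated service):

```java
import java.io.ByteArrayOutputStream;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

// Sketch only: an in-memory store keyed by file name. A real service would
// also key on the user and persist chunks somewhere safer than the heap.
public class ChunkAssembler {
    private final Map<String, TreeMap<Integer, byte[]>> pending = new HashMap<>();

    // Returns the complete file once all chunks have arrived, otherwise null.
    public byte[] addChunk(String fileName, int currentChunk, int totalChunks, byte[] chunk) {
        TreeMap<Integer, byte[]> chunks =
                pending.computeIfAbsent(fileName, k -> new TreeMap<>());
        chunks.put(currentChunk, chunk);
        if (chunks.size() < totalChunks) {
            return null; // still waiting for more chunks
        }
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (byte[] part : chunks.values()) { // TreeMap iterates in index order
            out.write(part, 0, part.length);
        }
        pending.remove(fileName);
        return out.toByteArray();
    }

    public static void main(String[] args) {
        ChunkAssembler asm = new ChunkAssembler();
        asm.addChunk("report.tif", 1, 2, new byte[]{3, 4}); // arrives out of order
        byte[] full = asm.addChunk("report.tif", 0, 2, new byte[]{1, 2});
        System.out.println(Arrays.toString(full)); // [1, 2, 3, 4]
    }
}
```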
|
Yes, it is possible with Metro. See the [Large Attachments](https://metro.java.net/guide/ch06.html#large-attachments) example, which looks like it does what you want.
> JAX-WS RI provides support for sending and receiving large attachments in a streaming fashion.
>
> * Use MTOM and DataHandler in the programming model.
> * Cast the DataHandler to StreamingDataHandler and use its methods.
> * Make sure you call StreamingDataHandler.close() and also close the StreamingDataHandler.readOnce() stream.
> * Enable HTTP chunking on the client-side.
|
Can a web service return a stream?
|
[
"",
"java",
"web-services",
"cxf",
""
] |
We have a number of projects that use the same and/or similar package names. Many of these projects build jar files that are used by other projects. We have found a number of foo.util, foo.db and foo.exceptions packages where the same class names are being used, leading to namespace conflicts.
Does anyone know of a tool that will search a set of java code bases and automatically find name space conflicts and ambiguous imports?
|
It's simpler to fix your names in each individual project.
Really.
You don't need to **know** all the conflicts. Your package names should be unique in the first place. If they aren't unique, you need to rethink how you're assigning your package names. If they're "flat" (`foo.this` and `foo.that`) you need to make them taller and much more specific.
That's why the examples are always `org.apache.project.component.lower.level.names`.
You should have `com.projectX.foo.this` and `com.projectZ.foo.that` to **prevent** the possibility of duplication.
"But all that recompiling," you say. You'll have to do that anyway. Don't waste a lot of time trying to discover the exact, complete extent. Go with what you know, start fixing things now, and work your way through your code base fixing one thing at a time.
|
If you can load the projects into Eclipse, the Problems view will give you the conflicts and ambiguous imports. There is also an Organize Imports wizard that will help with any unnecessary imports.
|
Tool for finding package namespace conflicts in java code
|
[
"",
"java",
"refactoring",
""
] |
I'm trying to select a random 10% sampling from a small table. I thought I'd just use the RAND() function and select those rows where the random number is less than 0.10:
```
SELECT * FROM SomeTable
WHERE SomeColumn='SomeCondition' AND
RAND() < 0.10
```
But I soon discovered that RAND() always returns the same number! Reminds me of this [xkcd cartoon](http://xkcd.com/221/).

OK, no problem, the RAND function takes a seed value. I will be running this query periodically, and I want it to give different results if I run it on a different day, so I seed it with a combination of the date and a unique row ID:
```
SELECT * FROM SomeTable
WHERE SomeColumn='SomeCondition' AND
RAND(CAST(GETDATE() AS INTEGER) + RowID) < 0.10
```
I still don't get any results! When I show the random numbers returned by RAND, I discover that they're all within a narrow range. It appears that getting a random number from RAND requires you to use a random seed. If I had a random seed in the first place, I wouldn't need a random number!
I've seen the previous discussions related to this problem:
[SQL Server Random Sort](https://stackoverflow.com/questions/52964/sql-server-random-sort)
[How to request a random row in SQL?](https://stackoverflow.com/questions/19412/how-to-request-a-random-row-in-sql)
They don't help me. TABLESAMPLE works at the page level, which is great for a big table but not for a small one, and it looks like it applies prior to the WHERE clause. TOP with NEWID doesn't work because I don't know ahead of time how many rows I want.
Anybody have a solution, or at least a hint?
**Edit:** Thanks to AlexCuse for a [solution](https://stackoverflow.com/questions/163604/what-am-i-doing-wrong-when-using-rand-in-ms-sql-server-2005#163843) which works for my particular case. Now to the larger question, how to make RAND behave?
|
This type of approach (shown by ΤΖΩΤΖΙΟΥ) will not guarantee a 10% sample. It will only give you the rows where RAND() evaluates to < 0.10, which will not consistently be 10% of the table.
Something like
```
select top 10 percent * from MyTable order by NEWID()
```
will do the trick.
**edit:** there is not really a good way to make RAND behave. This is what I've used in the past (kludge alert: it hurts that you can't call RAND() directly inside a UDF)
```
CREATE VIEW RandView AS
SELECT RAND() AS Val
GO
CREATE FUNCTION RandomFloat()
RETURNS FLOAT
AS
BEGIN
RETURN (SELECT Val FROM RandView)
END
```
Then you just have `select blah, dbo.RandomFloat() from table` in your query.
|
If your table has a column (perhaps even the **rowid** column) that is numeric in the general sense, like integer, floating point or SQL numeric, please try the following:
```
SELECT * FROM SomeTable WHERE SomeColumn='SomeCondition' AND 0*rowid+RAND() < 0.10
```
This evaluates `RAND()` once for *every row*, rather than once at *the start of your query*.
The query optimizer is to blame. Perhaps there is another way, but I believe this will work for you.
|
What am I doing wrong when using RAND() in MS SQL Server 2005?
|
[
"",
"sql",
"sql-server",
"random",
""
] |
I have the following code:
```
MemoryStream foo()
{
    MemoryStream ms = new MemoryStream();
    // write stuff to ms
    return ms;
}

void bar()
{
    MemoryStream ms2 = foo();
    // do stuff with ms2
    return;
}
```
Is there any chance that the MemoryStream that I've allocated will somehow fail to be disposed of later?
I've got a peer review insisting that I manually close this, and I can't find the information to tell if he has a valid point or not.
|
If something implements IDisposable, you should always Dispose it. You should be using a `using` statement in your `bar()` method to make sure `ms2` gets disposed.
It will eventually get cleaned up by the garbage collector, but it is always good practice to call Dispose. If you run FxCop on your code, it would flag it as a warning.
|
You won't leak anything - at least in the current implementation.
Calling Dispose won't clean up the memory used by MemoryStream any faster. It *will* stop your stream from being viable for Read/Write calls after the call, which may or may not be useful to you.
If you're absolutely sure that you *never* want to move from a MemoryStream to another kind of stream, it's not going to do you any harm to not call Dispose. However, it's generally good practice partly because if you ever *do* change to use a different Stream, you don't want to get bitten by a hard-to-find bug because you chose the easy way out early on. (On the other hand, there's the YAGNI argument...)
The other reason to do it anyway is that a new implementation *may* introduce resources which would be freed on Dispose.
|
Is a memory leak created if a MemoryStream in .NET is not closed?
|
[
"",
"c#",
".net",
"memory-leaks",
"memorystream",
""
] |
I have a 2D character array:
`char nm[MAX1][MAX2] = { "john", "bob", "david" };`
I want to swap two of these elements (without `std::swap`) by simply writing
`swapPointers(nm[0], nm[1]);`
where `swapPointers` looks like this
```
void swapPointers(char *&a, char *&b)
{
char *temp = a;
a = b;
b = temp;
}
```
However, this does not compile (and while adding casts makes it compile, the pointers end up pointing to wrong/weird locations).
Can anybody help?
Thanks!
|
Zan is close, but his problem is that his 'swap' function can take any pointer to characters. This can cause problems if misused. Here is a safer version:
```
void swap(char (&x)[MAX2], char (&y)[MAX2])
{
char temp[MAX2];
memcpy(temp, x, MAX2);
memcpy(x, y, MAX2);
memcpy(y, temp, MAX2);
}
```
There is also a misunderstanding on the part of the poster: 'nm' is a 2-dimensional array of characters. There are no pointers. nm[0], nm[1], etc. are not pointers either -- they are still (1-dimensional) arrays. The fact that 1-dimensional arrays are implicitly convertible to pointers causes this type of confusion among many C and C++ programmers.
In order to swap the data in the 2-dimensional array, you have to swap blocks of memory of size MAX2 -- as indicated by both 'swap' functions Zan and I wrote.
|
You cannot swap those pointers by reassigning the pointers, because those pointers point into a 2-D character array.
nm[a] and nm[b] are very strongly `const` because nm is a truly `const` object. If it wasn't, you could move C variables around in RAM by reassigning their names.
Just think of the havoc! So you can't do that. :-)
To swap what those pointers point to, you need to swap the values in those array locations.
```
void swap(char *a, char *b)
{
    char temp[MAX2];
    memcpy(temp, a, MAX2);
    memcpy(a, b, MAX2);
    memcpy(b, temp, MAX2);
}
```
|
Transparently swapping pointers to character arrays in C++
|
[
"",
"c++",
"pointers",
""
] |
I'm writing a routine that validates data before inserting it into a database, and one of the steps is to see if numeric values fit the precision and scale of a Numeric(x,y) SQL-Server type.
I have the precision and scale from SQL-Server already, but what's the most efficient way in C# to get the precision and scale of a CLR value, or at least to test if it fits a given constraint?
At the moment, I'm converting the CLR value to a string, then looking for the location of the decimal point with .IndexOf(). Is there a faster way?
|
```
System.Data.SqlTypes.SqlDecimal.ConvertToPrecScale( new SqlDecimal (1234.56789), 8, 2)
```
gives 1234.57. It will truncate extra digits after the decimal point, and will throw an error rather than try to truncate digits before the decimal point (e.g. `ConvertToPrecScale(12344234, 5, 2)` throws).
|
Without triggering an exception, you could use the following method to determine if the value fits the precision and scale constraints.
```
private static bool IsValid(decimal value, byte precision, byte scale)
{
var sqlDecimal = new SqlDecimal(value);
var actualDigitsToLeftOfDecimal = sqlDecimal.Precision - sqlDecimal.Scale;
var allowedDigitsToLeftOfDecimal = precision - scale;
return
actualDigitsToLeftOfDecimal <= allowedDigitsToLeftOfDecimal &&
sqlDecimal.Scale <= scale;
}
```
|
What is an efficient way to check the precision and scale of a numeric value?
|
[
"",
"c#",
"sql-server",
".net-3.5",
""
] |
How do you fix a names mismatch problem, if the client-side names are keywords or reserved words in the server-side language you are using?
The DOJO JavaScript toolkit has a QueryReadStore class that you can subclass to submit REST patterned queries to the server. I'm using this in conjunction w/ the FilteringSelect Dijit.
I can subclass the QueryReadStore and specify the parameters and arguments getting passed to the server. But somewhere along the way, a "start" and "count" parameter are being passed from the client to the server. I went into the API and discovered that the QueryReadStore.js is sending those parameter names.
I'm using Fiddler to confirm what's actually being sent and brought back. The server response is telling me I have a parameter names mismatch, because of the "start" and "count" parameters. The problem is, I can't use "start" and "count" in PL/SQL.
Workaround or correct implementation advice would be appreciated...thx.
//I tried putting the code snippet in here, but since it's largely HTML, that didn't work so well.
|
While it feels like the wrong thing to do, because I'm hacking at a well tested, nicely written JavaScript toolkit, this is how I fixed the problem:
I went into the DOJOX QueryReadStore.js and **replaced** the "start" and "count" references with acceptable (to the server-side language) parameter names.
I would have liked to handle the issue via my PL/SQL *(but I don't know how to get around reserved words)* or client-side code *(subclassing did not do the trick)*...without getting into the internals of the library. But it works, and I can move on.
|
As opposed to removing it from the API, as you mentioned, you can actually create a subclass with your own fetch, and remove start/count parameters (theoretically). Have a look at this URL for guidance:
<http://www.sitepen.com/blog/2008/06/25/web-service-data-store/>
Start and count are actually very useful because they allow you to pass params for the query that you can use to filter massive data sets, and it helps to manage client-side paging. I would try to subclass instead, intercept, and remove.
|
How Do You Fix A Parameter Names Mismatch - DOJO and PL/SQL
|
[
"",
"javascript",
"web-applications",
"plsql",
"dojo",
""
] |
How do I pass a parameter from a page's useBean in JSP to a servlet in Java? I have some data in a form that gets passed no problem with a submit button, but no way to send anything else. Please help? Here is my code:
```
<input name = "deleteGameButton" type = "submit" value = "Delete"
onclick = "submitToServlet('DeleteGameServlet');">
```
Here is the corresponding JavaScript:
```
function submitToServlet(newAction)
{
document.userGameForm.action = newAction;
}
```
I'd like the servlet to have access to userBean
```
<jsp:useBean id = "userBean" scope = "session" class = "org.project.User" />
```
|
You're mixing things up here.
onclick is JavaScript and executes on the client side. It has no (direct) way to update a session-scoped bean. That bean lives on the server side and was used when the HTML page was **generated**. To pass parameters back to the servlet you need to use good old form fields, and submit the form.
Add more fields to the form, set their values before submit, then submit.
In the servlet, call request.getParameter("name");
P.S. To automate this kind of thing, **USE STRUTS**. :-) Struts does exactly what you want: before passing the parameters to the action, it populates the bean with those parameters. Transparently.
|
It depends exactly what you are trying to do. The
`<jsp:useBean id = "userBean" scope = "session" class = "org.project.User" />`
tag will allow you to use the userBean attribute of the session in your jsp. If there is not a userBean attribute in the session, it will create a new one (using the default constructor for org.project.User) and place it in the session.
Then, when you get to the servlet, you can retrieve it with:
```
User user = (User)request.getSession().getAttribute("userBean");
```
|
How to pass parameter to servlet
|
[
"",
"java",
"html",
"forms",
"jsp",
"parameters",
""
] |
I want to start working with TDD but I don't really know where to start. We are coding in .NET (C#/ASP.NET).
|
See the questions [Why should I practice Test Driven Development and how should I start?](https://stackoverflow.com/questions/4303/why-should-i-practice-test-driven-development-and-how-should-i-start), [Moving existing code to Test Driven Development](https://stackoverflow.com/questions/167079/moving-existing-code-to-test-driven-development), [What is unit testing?](https://stackoverflow.com/questions/1383/what-is-unit-testing) and [What is TDD?](https://stackoverflow.com/questions/2260/what-is-tdd)
|
There's a good book called [Test Driven Development in Microsoft .NET](http://www.microsoft.com/MSPress/books/6778.aspx) that you might check out. It is essentially the same as the classic [Test Driven Development by Example](https://rads.stackoverflow.com/amzn/click/com/0321146530), but with the Microsoft platform in mind.
|
What are the best steps to start programming with TDD with C#?
|
[
"",
"c#",
".net",
"tdd",
""
] |
I'm currently developing a web service in Java using Axis2.
I designed my service as a POJO (Plain Old Java Object) with public methods throwing exceptions:
```
public class MyService {
public Object myMethod() throws MyException {
[...]
}
}
```
I then generated the WSDL using the Axis2 ant task. From the WSDL I generated a client stub to test my service. The generated code contains a "MyExceptionException", and "myMethod" in the stub declares that it throws it:
```
public class MyServiceStub extends org.apache.axis2.client.Stub {
[...]
public MyServiceStub.MyMethodResponse myMethod(MyServiceStub.MyMethod myMethod)
throws java.rmi.RemoteException, MyExceptionException0 {
[...]
}
[...]
}
```
But when the method is called inside a try/catch, "MyExceptionException" is never transmitted by the server, which transmits an AxisFault (a subclass of RemoteException) instead.
I assume the problem is server-side but can't find where. The service is deployed as an aar file in the axis2 webapp on a Tomcat 5.5 server. The services.xml looks like this:
```
<?xml version="1.0" encoding="UTF-8"?>
<service name="MyService" scope="application">
<description></description>
<messageReceivers>
<messageReceiver mep="http://www.w3.org/2004/08/wsdl/in-only"
class="org.apache.axis2.rpc.receivers.RPCInOnlyMessageReceiver"/>
<messageReceiver mep="http://www.w3.org/2004/08/wsdl/in-out"
class="org.apache.axis2.rpc.receivers.RPCMessageReceiver"/>
</messageReceivers>
<parameter name="ServiceClass">MyService</parameter>
<parameter name="ServiceTCCL">composite</parameter>
</service>
```
If this behavior is normal then I'll drop the use of exceptions (which is not vital to my project), but I'm puzzled as to why Java2WSDL generates custom `<wsdl:fault>` entries in the operation input & output declarations, and why WSDL2Java generates an Exception class (and declares it thrown in the stub method), if none of this is usable...
|
I don't really think there is a problem. Your client calls a method on the server. That method results in an exception. Axis transforms this exception into something which can be sent to the client to indicate the error.
All exceptions, as far as I know, are wrapped in an AxisFault, which is then transmitted to the client as (I believe) a SOAP fault message, with the exception message as its description.
In other words, the client should only ever see AxisFaults, as the exception class itself is not serialized and sent. Server exceptions become AxisFaults on the client side.
|
Have you tried using Axis2 with Lady4j? It solved this issue for us.
|
Web Service throwing exception using Axis2 Java
|
[
"",
"java",
"web-services",
"exception",
"axis",
""
] |
I have a collection of data stored in XDocuments and DataTables, and I'd like to address both as a single unified data space with XPath queries. So, for example, "/Root/Tables/Orders/FirstName" would fetch the value of the Firstname column in every row of the DataTable named "Orders".
Is there a way to do this without copying all of the records in the DataTable into the XDocument?
I'm using .Net 3.5
|
I eventually figured out the answer to this myself. I discovered a class in `System.Xml.Linq` called XStreamingElement that can create an XML structure on the fly from a LINQ expression. Here's an example of projecting a DataTable into an XML space.
```
Dictionary<string,DataTable> Tables = new Dictionary<string,DataTable>();
// ... populate dictionary of tables ...
var TableRoot = new XStreamingElement("Tables",
    from t in Tables
    select new XStreamingElement(t.Key,
        from DataRow r in t.Value.Rows
        select new XStreamingElement("row",
            from DataColumn c in t.Value.Columns
            select new XElement(c.ColumnName, r[c]))));
```
The result, TableRoot, has a structure similar to the following, assuming the dictionary contains one table called "Orders" with two rows.
```
<Tables>
<Orders>
<row>
<sku>12345</sku>
<quantity>2</quantity>
<price>5.95</price>
</row>
<row>
<sku>54321</sku>
<quantity>3</quantity>
<price>2.95</price>
</row>
</Orders>
</Tables>
```
That can be merged with a larger XElement/XDocument based hierarchy and queried with XPath.
|
.NET's XPath machinery operates on the IXPathNavigable interface. Every IXPathNavigable has a CreateNavigator() method that returns an XPathNavigator.
In order to expose all of your data sources as one large document, you would need to create a class implementing IXPathNavigable that contains all the XPath data sources. Its CreateNavigator method should return a custom XPathNavigator that exposes the contents as one large data source.
Unfortunately, implementing this navigator is somewhat fiddly, and care must be taken, especially when jumping between documents.
|
Abstracting storage data structures within XPath
|
[
"",
"c#",
"xml",
".net-3.5",
"datatable",
""
] |
I run a browser-based game at www.darknovagames.com. Recently, I've been working on reformatting the site with CSS, trying to get all of its pages to validate against the HTML standard.
I've been toying with the idea of having the navigation menu on the left load pages in via AJAX (rather than taking the user to a separate page each time, which forces a reload of the title and nav bar, even though they almost never change), and I know that if I do so, I will probably break the Forward/Back buttons in the browser. My question, I guess, is: should I go ahead and AJAX the site, thus requiring the user to use the site's navigation to play the game, or should I leave the site as it currently stands, using standard hyperlinks for navigation?
The reason I ask I guess is that I built a forums system into the site, and a lot of times I would want to link say to a particular topic within the forums.
I'm also open to suggestions. Is there a standard (preferably without traditional frames) way to make only the body area of the site reload, while still changing the URL so that users can bookmark and forward/back, etc? That could potentially solve my problem as well. I'm just asking for the best solution here, not an answer to a specific question. ^\_^
Thanks
|
If you're going to enable AJAX, don't do it at the expense of having accessible URLs to every significant page on your site. This is the backbone of a navigable site that people can use.
When you shovel all your functionality into AJAX calls and callbacks, you're basically forcing your users into a single path to access the features and content that they want -- which is totally against how the web is meant to function. People rely on the address bar and the back button. If you override all your links so that your site is essentially a single page that only updates through AJAX, you're limiting your users' ability to navigate your site and find what they need. It also stops your users from being able to share what they find (which, that's part of the point, right?).
Think about a user's mental map of your site. If they know they came in through the home page, then they went to search for something, then they landed on a games page, then they started playing a particular game, that's four distinct units of action that the user took. They might have done a few other smaller, more insignificant actions on each of these pages -- but these are the main units. When they click the Back button, they should expect to go back through the path they came in on. If you are loading all these pages through AJAX calls, you're providing a site whose functionality runs contrary to what the user expects.
Break your site out into every significant function (ie, search, home, profiles, games -- it'll be dictated by what your site is all about). Anywhere you link to these pages, do it through a regular link and a static URL.
AJAX is fine. But the art of it is knowing when to use it and when not to. If you keep to the model I've sketched out above, your users will appreciate it.
|
Use AJAX for the portions of the page that need to update, not the entire thing. For that you should use templates.
When you want to preserve the back button across your various state changes on the page, combine them with # anchors to alter the URL (without forcing the browser to issue another GET).
For example, gmail's looks like this:
mail.google.com/#inbox/message-1234
everything past the # was a page state change that happened via ajax. If I press Back, I'll go to the inbox again (again, without another browser GET)
|
AJAX and the Browser Back Button
|
[
"",
"javascript",
"html",
"ajax",
"navigation",
""
] |
In many symbolic math systems, such as Matlab or Mathematica, you can use a variable like `Ans` or `%` to retrieve the last computed value. Is there a similar facility in the Python shell?
|
Underscore.
```
>>> 5+5
10
>>> _
10
>>> _ + 5
15
>>> _
15
```
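Under the hood, the shell maintains `_` via `sys.displayhook`, which is why it only exists for expression results typed interactively; a plain script never triggers it. The mechanism can be observed from a script by calling the hook directly:

```python
import sys
import builtins

# The REPL routes every expression result through sys.displayhook,
# which prints the repr and stores the value in builtins._
sys.displayhook(5 + 5)            # prints 10, exactly as the shell would
print(builtins._)                 # prints 10
sys.displayhook(builtins._ + 5)   # prints 15
print(builtins._)                 # prints 15
```

This also means that assigning to `_` yourself in the shell shadows the interpreter-managed value until you delete the name.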
|
Just for the record, IPython takes this one step further: you can access every result with `_` followed by its output number:
```
In [1]: 10
Out[1]: 10
In [2]: 32
Out[2]: 32
In [3]: _
Out[3]: 32
In [4]: _1
Out[4]: 10
In [5]: _2
Out[5]: 32
In [6]: _1 + _2
Out[6]: 42
In [7]: _6
Out[7]: 42
```
And it is possible to edit ranges of lines with the %ed macro too:
```
In [1]: def foo():
...: print "bar"
...:
...:
In [2]: foo()
bar
In [3]: %ed 1-2
```
|
Get last result in interactive Python shell
|
[
"",
"python",
""
] |
I have a string containing a date, and another string containing the date format of the first string. Is there a function that I can call to convert that date into something like a SYSTEMTIME structure? Basically, I'd like the opposite of [GetDateFormat()](http://msdn.microsoft.com/en-us/library/ms776293(VS.85).aspx).
|
No, but you can write a function that converts the date format string into an `sscanf` format string and a series of variables, then copies the data from those variables into the `SYSTEMTIME` structure, and you're all set.
It's not trivial, but it is probably the most flexible arrangement if you must support many different date formats.
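The translate-then-parse idea generalizes beyond `sscanf`; here is a minimal sketch (Python used purely for illustration) that converts a handful of Windows-style picture tokens into `strptime` directives and parses once. The token map is deliberately incomplete; a real implementation would cover the full token set and quoted literals:

```python
from datetime import datetime

# Partial map from Windows-style date picture tokens to strptime
# directives; illustrative only, not the full picture syntax.
TOKENS = {"yyyy": "%Y", "MM": "%m", "dd": "%d",
          "HH": "%H", "mm": "%M", "ss": "%S"}

def parse_picture(value, picture):
    fmt = picture
    for token, directive in TOKENS.items():
        fmt = fmt.replace(token, directive)   # e.g. yyyy-MM-dd -> %Y-%m-%d
    return datetime.strptime(value, fmt)

print(parse_picture("2008-09-15", "yyyy-MM-dd"))  # 2008-09-15 00:00:00
```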
|
It doesn't do quite what you've described, but I'd start by using `COleDateTime::ParseDateTime`. It works with localized date-time formats.
|
Parse a Date from a String in Win32
|
[
"",
"c++",
"windows",
"winapi",
""
] |
I have a sorted collection of objects (it can be either SortedList or SortedDictionary, I will use it mainly for reading so add performance is not that important). How can I get the i-th value?
So e.g. when I have numbers 1, 2, 3, 4, 5 in the collection and I want the median (so 3 in this example), how can I do it?
|
Try something like this:
```
list.Values[list.Count / 2];
```
Note that a true median would average the two middle numbers when Count is even.
|
You can use code like
```
list.Values[index]
```
for a sorted list.
The easiest way with a SortedDictionary would be to use the ElementAt() method:
```
dict.ElementAt(index).Value
```
However, this is slower than in the list case.
In either case, you need to check your count. If it is odd, take index = (Count - 1) / 2. If it is even, take index1 = Count / 2 and index2 = Count / 2 - 1 and average the two values.
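The odd/even rule above is language-neutral; here is a brief sketch (Python used purely for concision) for a sequence that is already sorted, as the values of a SortedList are:

```python
def median_of_sorted(values):
    # values must already be in sorted order
    n = len(values)
    if n % 2 == 1:
        return values[(n - 1) // 2]                       # odd count: middle element
    return (values[n // 2 - 1] + values[n // 2]) / 2      # even count: average the two

print(median_of_sorted([1, 2, 3, 4, 5]))  # 3
print(median_of_sorted([1, 2, 3, 4]))     # 2.5
```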
|
Getting i-th value from a SortedList or SortedDictionary
|
[
"",
"c#",
".net",
"collections",
""
] |
My application deals with e-mails coming from different sources, e.g. Outlook and IMAP mailboxes. Before parsing them, I write them to the temporary directory (keeping them in memory is not an option). While parsing, I might be writing attachments to the temp directory (for example, if they are too large to keep in memory or for full-text extraction).
But in the wild, two things happen that seemed very strange in the first place but could all be traced back to virus scanner behaviour:
* I'm sometimes unable to open files which I've written myself a few milliseconds ago. They are obviously locked by virus scanners to ensure that they are clean. I get an Exception.
* If files are considered dangerous by the virus scanner, it deletes them at some point of time.
To cope with this behaviour, I've written a few methods that retry when an open fails, or that check whether files still exist, but I'm unable to use them in every part of the application (3rd-party code, for example filters). So things got better, but not 100% perfect, and my source code looks ugly in parts because of this.
How do you cope with virus scanners?
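One way to keep the retry logic from spreading through the code base is to centralize it in a single helper; a sketch of the pattern follows (shown in Python for brevity — the question's platform is C#, but the shape is the same):

```python
import time

def retry(operation, attempts=5, delay=0.05, retry_on=(OSError,)):
    """Run operation(), retrying with a short sleep while it raises one
    of the retry_on exceptions (e.g. a scanner still holding the file)."""
    for attempt in range(attempts):
        try:
            return operation()
        except retry_on:
            if attempt == attempts - 1:
                raise          # give up after the last attempt
            time.sleep(delay)

# usage sketch: handle = retry(lambda: open(path, "rb"))
```

The third-party-code problem remains, though: a wrapper only helps for the opens you control.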
|
Write your files with encryption. I would think you wouldn't need anything too tricky or involved. Also encrypt or mangle the filenames as the virus scanner might be triggered by that as well.
|
If changing the virus scanner configuration is not an option for you, could you keep the file open from its creation up to the end of your process? If you hold a handle on the file, it will not be available to the virus scanner.
|
Virus scanners locking and deleting temporary files - best way to cope with them?
|
[
"",
"c#",
"locking",
"temporary-files",
"antivirus",
""
] |
Is there a way to make a Python module load a DLL in my application directory rather than the version that came with the Python installation, without making changes to the Python installation itself? (That would require me to make an installer, and to be careful not to break other apps by overwriting Python modules and changing DLL versions globally...)
Specifically, I would like Python to use my version of sqlite3.dll, rather than the version that came with Python (which is older and doesn't appear to have the fts3 module).
|
OK, it turns out Python always loads the DLL from the same directory as the .pyd file, regardless of what the Python and OS paths are set to.
So I needed to copy \_sqlite3.pyd from python/v2.5/DLLs to my app's directory where the new sqlite3.dll is, making it load my new DLL rather than the one that comes with Python (since the .pyd files seem to follow the PYTHONPATH, even though the actual DLLs themselves don't).
|
If you're talking about Python module DLLs, then simply modifying `sys.path` should be fine. However, if you're talking about DLLs *linked* against those DLLs; i.e. a `libfoo.dll` which a `foo.pyd` depends on, then you need to modify your PATH environment variable. I wrote about [doing this for PyGTK a while ago](http://glyf.livejournal.com/7878.html), but in your case I think it should be as simple as:
```
import os
os.environ['PATH'] = 'my-app-dir' + os.pathsep + os.environ['PATH']
```
That will insert `my-app-dir` at the head of your Windows path, which I believe also controls the load-order for DLLs.
Keep in mind that you will need to do this before loading the DLL in question, i.e., before importing anything interesting.
sqlite3 may be a bit of a special case, though, since it is distributed with Python; it's obviously kind of tricky to test this quickly, so I haven't checked `sqlite3.dll` specifically.
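Whichever approach is used, there is a quick way to confirm which native library actually got picked up: the `sqlite3` module reports the version of the underlying SQLite DLL it loaded at runtime:

```python
import sqlite3

# sqlite_version describes the native SQLite library that was loaded,
# so it will change if your newer DLL is the one being picked up.
print(sqlite3.sqlite_version)        # e.g. "3.6.2"
print(sqlite3.sqlite_version_info)   # the same value as a tuple
```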
|
python module dlls
|
[
"",
"python",
"module",
""
] |
I have a "settings file" in my Winforms application called Settings.settings with a partial class for custom methods, etc. Is there a way to load / save dynamic settings based on arbitrary keys?
For example, I have some ListViews in my application in which I want to save / load the column widths; Instead of creating a width setting for each column for each list view I would like a simple method to load / save the widths automatically.
Below is an example of the **save** method I have tried:
```
internal sealed partial class Settings
{
public void SetListViewColumnWidths(ListView listView)
{
String baseKey = listView.Name;
foreach (ColumnHeader h in listView.Columns)
{
String key = String.Format("{0}-{1}", baseKey, h.Index);
this[key] = h.Width;
}
}
}
```
When running that code I get the error **"The settings property 'TestsListView-0' was not found."** Is there something I am missing?
|
Store your column width settings in an XML-serializable object, i.e., something that implements [IXmlSerializable](http://msdn.microsoft.com/en-us/library/system.xml.serialization.ixmlserializable.aspx), then create a single setting entry of that type in Settings.settings.
A good option would probably be an Xml Serializable Dictionary. A quick [google search](http://www.google.com.au/search?&q=xml+serializable+dictionary) found quite a few different blog posts that describe how to implement that.
As mentioned in other answers, you'll need to ensure that this object is a User setting. You may also need to initialize the setting instance, i.e., create an XmlSerializableDictionary() instance and assign it to the setting if the setting is null. The settings subsystem doesn't create default instances of complex setting objects.
Also, if you want these settings to persist between assembly versions (ie, be upgradable) you will need to upgrade the settings on application startup. This is described in detail on [Miha Markič's](http://cs.rthand.com/blogs/blog_with_righthand/archive/2005/12/09/246.aspx) blog and [Raghavendra Prabhu's](http://blogs.msdn.com/rprabhu/articles/433979.aspx) blog.
|
I think the error
> The settings property
> 'key' was not found.
occurs because the 'key' value does not exist in your settings file (fairly self-explanatory).
As far as I am aware, you can't add settings values programmatically, you might need to investigate adding all of the settings you need to the file after all, although once they are there, I think you'll be able to use the sort of code you've given to save changes.
To Save changes, you'll need to make sure they are 'User' settings, not 'Application'.
The Settings file is quite simple XML, so you might be able to attack the problem by writing the XML directly to the file, but I've never done it, so can't be sure it would work, or necessarily recommend that approach.
<http://msdn.microsoft.com/en-us/library/cftf714c.aspx> is the MSDN link to start with.
|
Winforms - Dynamic Load / Save Settings
|
[
"",
"c#",
"winforms",
"dynamic",
"settings",
""
] |
I need to import all ad groups in a few OUs into a table in SQL Server 2008. Once I have those I need to import all the members of those groups to a different table. I can use c# to do the work and pass the data to SQL server or do it directly in SQL server.
Suggestions on the best way to approach this?
|
Arry,
I don't know exactly, but I found some links that may help you. I think the most promising lead is this expression:
```
"(&(objectCategory=Person)(memberOf=CN=GroupName,OU=Org,DC=domain,DC=com))"
```
I found it in [LDAP Query for group members](http://www.houseoffusion.com/groups/cf-talk/thread.cfm/threadid:55298) on a ColdFusion community's site. I'm more or less sure the filter can easily be applied to your query. I'm sorry, but I cannot test it, because I have no AD around here.
This one could also be a bit (but less) interesting:
<http://forge.novell.com/pipermail/cldap-dev/2004-April/000042.html>
Hope this helps, cheers,
Matthias
|
Add a linked server to your SQL Server and query Active Directory via LDAP queries. This article describes it quite well:
[Create a SQL Server View of your AD Users, Brendan Tompkins (MVP)](http://codebetter.com/blogs/brendan.tompkins/archive/2003/12/19/4746.aspx)
|
Importing Active Directory Users and Groups into SQL Server (possibly via c#)
|
[
"",
"c#",
"sql-server",
"active-directory",
""
] |
I'm trying to write a web application using SpringMVC. Normally I'd just map some made-up file extension to Spring's front controller and live happily, but this time I'm going for REST-like URLs, with no file-name extensions.
Mapping everything under my context path to the front controller (let's call it "**app**") means I should take care of static files also, something I'd rather not do (why reinvent yet another wheel?), so some combination with Tomcat's default servlet (let's call it "**tomcat**") appears to be the way to go.
I got the thing to work doing something like
```
<servlet-mapping>
<servlet-name>app</servlet-name>
<url-pattern>/</url-pattern>
</servlet-mapping>
<servlet-mapping>
<servlet-name>tomcat</servlet-name>
<url-pattern>*.ext</url-pattern>
</servlet-mapping>
```
and repeating the latter for each one of the file extensions of my static content. I'm just wondering why the following setups, which to me are equivalent to the one above, don't work.
```
<!-- failed attempt #1 -->
<servlet-mapping>
<servlet-name>app</servlet-name>
<url-pattern>/*</url-pattern>
</servlet-mapping>
<servlet-mapping>
<servlet-name>tomcat</servlet-name>
<url-pattern>*.ext</url-pattern>
</servlet-mapping>
<!-- failed attempt #2 -->
<servlet-mapping>
<servlet-name>app</servlet-name>
<url-pattern>/</url-pattern>
</servlet-mapping>
<servlet-mapping>
<servlet-name>tomcat</servlet-name>
<url-pattern>/some-static-content-folder/*</url-pattern>
</servlet-mapping>
```
Can anyone shed some light?
|
I think I may know what is going on.
In your working web.xml you have set your servlet to be the default servlet (/ by itself is the default servlet called if there are no other matches), it will answer any request that doesn't match another mapping.
In Failed 1, your /\* mapping does appear to be a valid path mapping. With the /\* mapping in web.xml, it answers all requests except other path mappings. According to the specification, extension mappings are implicit mappings that are overridden by explicit mappings. That's why the extension mapping failed: everything was explicitly mapped to app.
In Failed 2, app is responsible for everything except content that matches the static content mapping. To show what is happening, here is an example from a quick test I set up: `/some-static-content-folder/` contains `test.png`.
Trying to access test.png I tried:
```
/some-static-content-folder/test.png
```
and the file was not found. However trying
```
/some-static-content-folder/some-static-content-folder/test.png
```
it comes up. So it seems that the Tomcat default servlet (6.0.16 at least) drops the servlet mapping and will try to find the file by using the remaining path. According to this post [Servlet for serving static content](https://stackoverflow.com/questions/132052) Jetty gives the behavior you and I were expecting.
Is there some reason you can't map a root directory for your REST calls? Something like app mapped to /rest\_root/\*: then you are responsible for anything that goes on in the rest\_root folder, but anywhere else is handled by Tomcat, unless you make another explicit mapping. I suggest setting your REST servlet to a path mapping, because it declares the intent better. Using / or /\* doesn't seem appropriate, since you then have to map out the exceptions. Using SO as an example, my REST mappings would be something like
> /users/\* for the user servlet
>
> /posts/\* for the posts servlet
Mapping order
1. Explicit (Path mappings)
2. Implicit (Extension mappings)
3. Default (/)
Please correct anything that I got wrong.
|
For reference, "failed attempt #2" is perfectly correct in versions of Tomcat >= 6.0.29.
The earlier behaviour was the result of a Tomcat bug that got fixed in version 6.0.29:
<https://issues.apache.org/bugzilla/show_bug.cgi?id=50026>
```
<!-- Correct for Tomcat >= 6.0.29 or other Servlet containers -->
<servlet-mapping>
<servlet-name>app</servlet-name>
<url-pattern>/</url-pattern>
</servlet-mapping>
<servlet-mapping>
<servlet-name>default</servlet-name>
<url-pattern>/some-static-content-folder/*</url-pattern>
</servlet-mapping>
```
|
Can anyone explain servlet mapping?
|
[
"",
"java",
"jakarta-ee",
"servlets",
"spring-mvc",
"web.xml",
""
] |
I am interested in making a Google Talk client using Python and would like to use the Twisted libraries Words module. I have looked at the examples, but they don't work with the current implementation of Google Talk.
Has anybody had any luck with this? Would you mind documenting a brief tutorial?
As a simple task, I'd like to create a client/bot that tracks the Online time of my various Google Talk accounts so that I can get an aggregate number. I figure I could friend the bot in each account and then use the XMPP presence information to keep track of the times that I can then aggregate.
Thanks.
|
wokkel is the future of twisted words. [metajack](http://metajack.im/) wrote a really nice [blog post](http://metajack.im/2008/09/25/an-xmpp-echo-bot-with-twisted-and-wokkel/) on getting started.
If you want a nice, functional sample project to start with, check out my [whatsup](http://github.com/dustin/whatsup) bot.
|
I have written a simple Jabber bot (and thus Google talk bot) using the `xmpppy` library, which works well. The examples on [xmpppy](http://xmpppy.sourceforge.net/) should get you started (specifically [`bot.py`](http://xmpppy.sourceforge.net/examples/bot.py))
As for something actually implemented in twisted.Words:
[Here](http://yoan.dosimple.ch/blog/2007/01/30/) is a simple tutorial on creating a bot that prints every received message to the local terminal (and a version that replies with the reverse of the received message). To track the online time of various accounts, you would add a callback for "presences" (going online/offline/away etc. are "presence changes", in Jabber terminology).
For a more complete system, [pownce-jabber-bot](http://code.google.com/p/pownce-jabber-bot/) uses twisted.words and [wokkel](http://wokkel.ik.nu/) for the jabber interface.
[The powncebot/\_\_init\_\_.py](http://code.google.com/p/pownce-jabber-bot/source/browse/trunk/powncebot/__init__.py?spec=svn15&r=15) file seems like a good place to start - it seems pretty simple.
|
How do you create a simple Google Talk Client using the Twisted Words Python library?
|
[
"",
"python",
"twisted",
"xmpp",
"google-talk",
""
] |
**Note: Mathematical expression evaluation is not the focus of this question. I want to compile and execute new code at runtime in .NET.** That being said...
I would like to allow the user to enter any equation, like the following, into a text box:
```
x = x / 2 * 0.07914
x = x^2 / 5
```
And have that equation applied to incoming data points. The incoming data points are represented by **x** and each data point is processed by the user-specified equation. I did this years ago, but I didn't like the solution because it required parsing the text of the equation for every calculation:
```
float ApplyEquation (string equation, float dataPoint)
{
// parse the equation string and figure out how to do the math
// lots of messy code here...
}
```
When you're processing boatloads of data points, this introduces quite a bit of overhead. I would like to be able to translate the equation into a function, on the fly, so that it only has to be parsed once. It would look something like this:
```
FunctionPointer foo = ConvertEquationToCode(equation);
....
x = foo(x); // I could then apply the equation to my incoming data like this
```
Function ConvertEquationToCode would parse the equation and return a pointer to a function that applies the appropriate math.
The app would basically be writing new code at run time. Is this possible with .NET?
|
Yes! Using methods found in the [Microsoft.CSharp](http://msdn.microsoft.com/en-us/library/microsoft.csharp.aspx), [System.CodeDom.Compiler](http://msdn.microsoft.com/en-us/library/system.codedom.compiler.aspx), and [System.Reflection](http://msdn.microsoft.com/en-us/library/system.reflection.aspx) name spaces. Here is a simple console app that compiles a class ("SomeClass") with one method ("Add42") and then allows you to invoke that method. This is a bare-bones example that I formatted down to prevent scroll bars from appearing in the code display. It is just to demonstrate compiling and using new code at run time.
```
using Microsoft.CSharp;
using System;
using System.CodeDom.Compiler;
using System.Reflection;
namespace RuntimeCompilationTest {
class Program
{
static void Main(string[] args) {
string sourceCode = @"
public class SomeClass {
public int Add42 (int parameter) {
return parameter += 42;
}
}";
var compParms = new CompilerParameters{
GenerateExecutable = false,
GenerateInMemory = true
};
var csProvider = new CSharpCodeProvider();
CompilerResults compilerResults =
csProvider.CompileAssemblyFromSource(compParms, sourceCode);
object typeInstance =
compilerResults.CompiledAssembly.CreateInstance("SomeClass");
MethodInfo mi = typeInstance.GetType().GetMethod("Add42");
int methodOutput =
(int)mi.Invoke(typeInstance, new object[] { 1 });
Console.WriteLine(methodOutput);
Console.ReadLine();
}
}
}
```
|
You might try this: [Calculator.Net](http://weblogs.asp.net/pwelter34/archive/2007/05/05/calculator-net-calculator-that-evaluates-math-expressions.aspx)
It will evaluate a math expression.
From the posting it will support the following:
```
MathEvaluator eval = new MathEvaluator();
//basic math
double result = eval.Evaluate("(2 + 1) * (1 + 2)");
//calling a function
result = eval.Evaluate("sqrt(4)");
//evaluate trigonometric
result = eval.Evaluate("cos(pi * 45 / 180.0)");
//convert inches to feet
result = eval.Evaluate("12 [in->ft]");
//use variable
result = eval.Evaluate("answer * 10");
//add variable
eval.Variables.Add("x", 10);
result = eval.Evaluate("x * 10");
```
[Download Page](http://www.loresoft.com/Applications/Calculator/Download/default.aspx)
And is distributed under the BSD license.
|
Is it possible to compile and execute new code at runtime in .NET?
|
[
"",
"c#",
".net",
"compilation",
"runtime",
""
] |
How do you create an application shortcut (.lnk file) in C# or using the .NET framework?
The result would be a .lnk file to the specified application or URL.
|
It's not as simple as I'd have liked, but there is a great class called [ShellLink.cs](http://www.vbaccelerator.com/home/NET/Code/Libraries/Shell_Projects/Creating_and_Modifying_Shortcuts/ShellLink_Code.html) at
[vbAccelerator](http://www.vbaccelerator.com/home/index.html)
This code uses interop, but does not rely on WSH.
Using this class, the code to create the shortcut is:
```
private static void configStep_addShortcutToStartupGroup()
{
using (ShellLink shortcut = new ShellLink())
{
shortcut.Target = Application.ExecutablePath;
shortcut.WorkingDirectory = Path.GetDirectoryName(Application.ExecutablePath);
        shortcut.Description = "My Shortcut Name Here";
shortcut.DisplayMode = ShellLink.LinkDisplayMode.edmNormal;
shortcut.Save(STARTUP_SHORTCUT_FILEPATH);
}
}
```
|
Nice and clean. (**.NET 4.0**)
```
Type t = Type.GetTypeFromCLSID(new Guid("72C24DD5-D70A-438B-8A42-98424B88AFB8")); //Windows Script Host Shell Object
dynamic shell = Activator.CreateInstance(t);
try{
var lnk = shell.CreateShortcut("sc.lnk");
try{
lnk.TargetPath = @"C:\something";
lnk.IconLocation = "shell32.dll, 1";
lnk.Save();
}finally{
Marshal.FinalReleaseComObject(lnk);
}
}finally{
Marshal.FinalReleaseComObject(shell);
}
```
That's it, no additional code needed. *CreateShortcut* can even load shortcut from file, so properties like *TargetPath* return existing information. [Shortcut object properties](http://msdn.microsoft.com/en-us/library/f5y78918%28v=vs.84%29.aspx).
The same is also possible this way for versions of .NET without support for dynamic types. (**.NET 3.5**)
```
Type t = Type.GetTypeFromCLSID(new Guid("72C24DD5-D70A-438B-8A42-98424B88AFB8")); //Windows Script Host Shell Object
object shell = Activator.CreateInstance(t);
try{
object lnk = t.InvokeMember("CreateShortcut", BindingFlags.InvokeMethod, null, shell, new object[]{"sc.lnk"});
try{
t.InvokeMember("TargetPath", BindingFlags.SetProperty, null, lnk, new object[]{@"C:\whatever"});
t.InvokeMember("IconLocation", BindingFlags.SetProperty, null, lnk, new object[]{"shell32.dll, 5"});
t.InvokeMember("Save", BindingFlags.InvokeMethod, null, lnk, null);
}finally{
Marshal.FinalReleaseComObject(lnk);
}
}finally{
Marshal.FinalReleaseComObject(shell);
}
```
|
Creating application shortcut in a directory
|
[
"",
"c#",
".net",
"file-io",
"shortcut",
""
] |
I have some linq entities that inherit something like this:
```
public abstract class EntityBase { public int Identifier { get; } }
public interface IDeviceEntity { int DeviceId { get; set; } }
public abstract class DeviceEntityBase : EntityBase, IDeviceEntity
{
public abstract int DeviceId { get; set; }
}
public partial class ActualLinqGeneratedEntity : DeviceEntityBase
{
}
```
In a generic method I am querying DeviceEnityBase derived entities with:
```
return unitOfWork.GetRepository<TEntity>().FindOne(x => x.DeviceId == evt.DeviceId);
```
where TEntity has a constraint that it is a DeviceEntityBase. This query always fails with an InvalidOperationException with the message "Class member DeviceEntityBase.DeviceId is unmapped", even if I add some mapping info in the abstract base class with
```
[Column(Storage = "_DeviceId", DbType = "Int", Name = "DeviceId", IsDbGenerated = false, UpdateCheck = UpdateCheck.Never)]
```
|
LINQ-to-SQL has *some* support for inheritance via a discriminator ([here](http://www.davidhayden.com/blog/dave/archive/2007/10/28/LINQToSQLEnumSupportExampleDiscriminatorColumnInheritanceMapping.aspx), [here](http://weblogs.asp.net/zeeshanhirani/archive/2008/06/25/inheritance-in-linq-to-sql.aspx)), but you can only query on classes that are defined in the LINQ model - i.e. data classes themselves, and (perhaps more importantly for this example) the query itself must be phrased in terms of data classes: although TEntity is a data class, LINQ-to-SQL knows that the property used here is declared on the unmapped entity base class.
One option might be dynamic expressions, if the classes themselves declared the property (i.e. lose the base class, but keep the interface) - but this isn't trivial.
The Expression work would be something like below, noting that you might want to either pass in the string as an argument, or obtain the primary key via reflection (if it is attributed):
```
static Expression<Func<T, bool>> BuildWhere<T>(int deviceId) {
var id = Expression.Constant(deviceId, typeof(int));
var arg = Expression.Parameter(typeof(T), "x");
var prop = Expression.Property(arg, "DeviceId");
return Expression.Lambda<Func<T, bool>>(
Expression.Equal(prop, id), arg);
}
```
|
Wow, looks like for once I may be able to one-up @MarcGravell!
I had the same problem, then I discovered [this answer](https://stackoverflow.com/questions/1021274/linq-to-sql-mapping-exception-when-using-abstract-base-classes/1068499#1068499), which solved the problem for me!
In your case, you would say:
```
return unitOfWork.GetRepository<TEntity>().Select(x => x).FindOne(x => x.DeviceId == evt.DeviceId);
```
and Bob's your uncle!
|
LinqToSql and abstract base classes
|
[
"",
"c#",
"linq-to-sql",
""
] |
How do you prepare your SQL deltas? do you manually save each schema-changing SQL to a delta folder, or do you have some kind of an automated diffing process?
I am interested in conventions for versioning database schema along with the source code. Perhaps a pre-commit hook that diffs the schema?
Also, what options for diffing deltas exist aside from [DbDeploy](http://dbdeploy.com/)?
**EDIT:** seeing the answers I would like to clarify that I am familiar with the standard scheme for running a database migration using deltas. My question is about creating the deltas themselves, preferably automatically.
Also, the versioning is for PHP and MySQL if it makes a difference. (No Ruby solutions please).
|
See
[Is there a version control system for database structure changes?](https://stackoverflow.com/questions/308/is-there-a-version-control-system-for-database-structure-changes)
[How do I version my MS SQL database in SVN?](https://stackoverflow.com/questions/173/how-do-i-version-my-ms-sql-database-in-svn)
and Jeff's article
[Get Your Database Under Version Control](http://blog.codinghorror.com/get-your-database-under-version-control/)
I feel your pain, and I wish there were a better answer. This might be closer to what you were looking for.
[Mechanisms for tracking DB schema changes](https://stackoverflow.com/questions/1607/mechanisms-for-tracking-db-schema-changes#22535)
Generally, I feel there is no adequate, accepted solution to this, and I roll my own in this area.
|
You might take a look at another, similar thread: [How do I version my MS SQL database in SVN?](https://stackoverflow.com/questions/173/how-do-i-version-my-ms-sql-database-in-svn).
|
How do you version your database schema?
|
[
"",
"sql",
"mysql",
"schema",
"versioning",
""
] |
I know how to find a method in java using a fixed string,
```
someClass.getMethod("foobar", argTypes);
```
but is there a way to use a regular expression rather than a fixed string to find a method on a given class?
An example of the usage might be if I wanted to find a method that was called either "foobar" or "fooBar". Using a regular expression like "foo[Bb]ar" would match either of these method names.
|
You should apply your regexp to the [getDeclaredMethods](http://java.sun.com/j2se/1.4.2/docs/api/java/lang/Class.html#getDeclaredMethods())() reflection method (or getMethods() if you want only the public ones).
[Warning: both methods will throw a SecurityException if there is a security manager.]
Apply it to the name of each method returned by getDeclaredMethods() and only memorize in a Collection the compliant Methods.
Something like:
```
try
{
    final Pattern aMethodNamePattern = Pattern.compile("foo[Bb]ar");
    final Method[] someMethods = aClass.getDeclaredMethods();
    final List<Method> someCompliantMethods = new ArrayList<Method>();
    for (final Method aMethod : someMethods)
    {
        final String aMethodName = aMethod.getName();
        final Matcher aMethodNameMatcher = aMethodNamePattern.matcher(aMethodName);
        if (aMethodNameMatcher.matches())
        {
            someCompliantMethods.add(aMethod);
        }
    }
}
catch (final SecurityException anException)
{
    // thrown by getDeclaredMethods() when a security manager denies access
}
```
|
Not directly. You could loop over all the methods and check each.
```
Pattern p = Pattern.compile("foo[Bb]ar");
for(Method m : someClass.getMethods()) {
if(p.matcher(m.getName()).matches()) {
return m;
}
}
```
|
Can I use regular expressions to find a method on a class in java?
|
[
"",
"java",
"reflection",
""
] |
[YUI Compressor](http://developer.yahoo.com/yui/compressor/) does not accept wildcard parameters, so I cannot run it like this:
```
C:>java -jar yuicompressor.jar *.js
```
But I have over 500 files and would rather not have to create a batch file like this:
```
C:>java -jar yuicompressor.jar file1.js -o deploy\file1.js
C:>java -jar yuicompressor.jar file2.js -o deploy\file2.js
...
C:>java -jar yuicompressor.jar file500.js -o deploy\file500.js
```
And of course my file names do *not* follow such a uniform pattern.
Is there way to automate this without writing any code? :)
|
I might go for a makefile (I think it would probably be more maintainable long term), but if you want a quick-n-dirty Windows batch command something like the following should work:
```
for %%a in (*.js) do @java -jar yuicompressor.jar "%%a" -o "deploy\%%a"
```
|
If you are geared towards Java, you can also use Ant for conversion. I've found a [blog entry](http://blog.gomilko.com/2007/11/29/yui-compression-tool-as-ant-task/) about an [Ant Task for the YUI Compressor](http://www.ubik-ingenierie.com/ubikwiki/index.php?title=Minifying_JS/CSS). Disclaimer: Never tried it - sorry
|
How to automate JavaScript files compression with YUI Compressor?
|
[
"",
"javascript",
"batch-file",
"yui",
""
] |
> **Possible Duplicate:**
> [Reference: Comparing PHP's print and echo](https://stackoverflow.com/questions/7094118/reference-comparing-phps-print-and-echo)
Is there any major and fundamental difference between these two functions in PHP?
|
From:
<http://web.archive.org/web/20090221144611/http://faqts.com/knowledge_base/view.phtml/aid/1/fid/40>
1. Speed. There is a difference between the two, but speed-wise it
should be irrelevant which one you use. echo is marginally faster
since it doesn't set a return value if you really want to get down to the
nitty gritty.
2. Expression. `print()` behaves like a function in that you can do:
`$ret = print "Hello World";` and `$ret` will be `1`. That means that print
can be used as part of a more complex expression where echo cannot. An
example from the PHP Manual:
```
$b ? print "true" : print "false";
```
print is also part of the precedence table which it needs to be if it
is to be used within a complex expression. It is just about at the bottom
of the precedence list though. Only `,` `AND` `OR` `XOR` are lower.
3. Parameter(s). The grammar is: `echo expression [, expression[, expression] ... ]`,
but `echo ( expression, expression )` is not valid. This would be valid:
`echo ("howdy"),("partner");`, the same as `echo "howdy","partner";`.
(Putting the brackets in that simple example serves no purpose, since
there is no operator precedence issue with a single term like that.)
So, echo without parentheses can take multiple parameters, which get
concatenated:
```
echo "and a ", 1, 2, 3; // comma-separated without parentheses
echo ("and a 123"); // just one parameter with parentheses
```
`print()` can only take one parameter:
```
print ("and a 123");
print "and a 123";
```
|
They are:
* print only takes one parameter, while echo can have multiple parameters.
* print returns a value (1), so can be used as an expression.
* echo is slightly faster.
|
How are echo and print different in PHP?
|
[
"",
"php",
"echo",
""
] |
I'm trying to come up with a Java regex that will match a filename only if it has a valid extension. For example it should match "foo.bar" and "foo.b", but neither "foo." nor "foo".
I've written the following test program
```
public static void main(String[] args) {
Pattern fileExtensionPattern = Pattern.compile("\\.\\w+\\z");
boolean one = fileExtensionPattern.matcher("foo.bar").matches();
boolean two = fileExtensionPattern.matcher("foo.b").matches();
boolean three = fileExtensionPattern.matcher("foo.").matches();
boolean four = fileExtensionPattern.matcher("foo").matches();
System.out.println(one + " " + two + " " + three + " " + four);
}
```
I expect this to print "true true false false", but instead it prints false for all 4 cases. Where am I going wrong?
Cheers,
Don
|
The [Matcher.matches()](http://java.sun.com/javase/6/docs/api/java/util/regex/Matcher.html#matches()) function tries to match the pattern against the entire input. Thus, you have to add `.*` to the beginning of your regex (and the `\\z` at the end is superfluous, too), or use the [find()](http://java.sun.com/javase/6/docs/api/java/util/regex/Matcher.html#find()) method.
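For illustration, a minimal sketch of both options against the question's own test strings (the class name is ours, not from the answer):

```java
import java.util.regex.Pattern;

class ExtensionMatch {
    public static void main(String[] args) {
        // Option 1: prefix with ".*" so matches() can cover the whole input.
        Pattern whole = Pattern.compile(".*\\.\\w+");
        System.out.println(whole.matcher("foo.bar").matches()); // true
        System.out.println(whole.matcher("foo.").matches());    // false
        System.out.println(whole.matcher("foo").matches());     // false

        // Option 2: keep the original pattern and use find(),
        // which searches for a match anywhere in the input.
        Pattern partial = Pattern.compile("\\.\\w+\\z");
        System.out.println(partial.matcher("foo.b").find());    // true
        System.out.println(partial.matcher("foo.").find());     // false
    }
}
```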
|
```
public boolean isFilename(String filename) {
    int i = filename.lastIndexOf('.');
    return i != -1 && i != filename.length() - 1;
}
```
This would be significantly faster, and regardless of what you do, putting it in a method would be more readable.
|
Java file extension regex
|
[
"",
"java",
"regex",
""
] |
Currently we are using prototype and jQuery as our js frameworks. Right now, jQuery is set to $j() to prevent conflicts from prototype.
In the past, we've used a lot of prototype's Element.down(), Element.next(), and Element.previous() to traverse the DOM. However, I need a simple way to retrieve the last child element. I know i can loop through an array by using Element.childElements() but I would like something inline that reads cleanly and can be pipelined.
Just thought I would ask before I go reinventing the wheel. Here's a snippet of code that has lastChild in it that needs to be replaced:
```
_find : function(rows, address) {
var obj = null;
for (var i=0; i < rows.length && obj == null; i++) {
if (rows[i].down().className == 'b')
obj = this._find(rows[i].lastChild.down().down().childElements(), address);
else if (rows[i].lastChild.getAttribute('tabAddress') == address)
return rows[i].lastChild;
}
return obj;
}
```
|
Guys, note that the selector functions return arrays of elements (not single elements), so you must address the element in the result array by index: [0].
Code in prototype
```
//if you only have the id of the parent
var lastChild = $$("#parent :last-child")[0];
//or
//if you have the actual DOM element
var lastChild = $(element).select(":last-child")[0];
```
Code in Jquery
```
//if you only have the id of the parent
var lastChild = $("#parent :last-child")[0];
//or
//if you have the actual DOM element
var lastChild = $(":last-child", element)[0];
```
Code in plain vanilla javascript
```
var element = document.getElementById("parent");
var lastChild = element.childNodes[element.childNodes.length - 1];
```
Also note that these can return null if the parent element has no child nodes.
* [Some info on the CSS :last-child selector](http://www.quirksmode.org/css/firstchild.html)
|
Try this it has always worked for me in jQuery
```
var lastChild = $("#parent :last-child");
```
<http://docs.jquery.com/Selectors/lastChild>
|
Best way to implement .lastChild using Prototype or jQuery
|
[
"",
"javascript",
"jquery",
"prototypejs",
""
] |
When overriding the MembershipProvider and calling it directly, is there a way to fill the NameValueCollection config parameter of the Initialize method without manually looking through the config file for the settings?
Obviously this Initialize is being called by asp.net and the config is being filled somewhere. I have implemented my own MembershipProvider and it works fine through the build in controls.
I would like to create a new instance of my provider and make a call to it directly, but I don't really want to parse the .config for the MembershipProvider, it's connection string name and then the connection string if it's already being done somewhere.
|
tvanfosson- Thanks for the help. (if I had the 15 points necessary I would vote you up)
From your link I was able to figure it out. It turns out the second parameter to the Initialize method was the list of parameters from the provider and could be reached in the following way:
```
string configPath = "~/web.config";
Configuration config = WebConfigurationManager.OpenWebConfiguration(configPath);
MembershipSection section = (MembershipSection)config.GetSection("system.web/membership");
ProviderSettingsCollection settings = section.Providers;
NameValueCollection membershipParams = settings[section.DefaultProvider].Parameters;
Initialize(section.DefaultProvider, membershipParams);
```
|
Not sure why you want to create a new one, but if you create it yourself, you'll need to read the web config and get the values yourself to pass to Initialize() as this is done outside the class. I'm sure, though, that there is already a section handler for this section so it should be just a matter of doing:
```
MembershipSection section = (MembershipSection)WebConfigurationManager.GetSection("system.web/membership");
```
Then find your provider and accessing its properties to construct the NameValueCollection. I don't think you will have to write any code to parse the configuration section.
Here is the [MembershipSection](http://msdn.microsoft.com/en-us/library/system.web.configuration.membershipsection.aspx) documentation at MSDN. Drill down from there.
|
Custom MembershipProvider Initialize method
|
[
"",
"c#",
".net",
"membership-provider",
""
] |
I have a DataSet consisting of XML data, I can easily output this to a file:
```
DataSet ds = new DataSet();
DataTable dt = new DataTable();
ds.Tables.Add(dt);
ds.Load(reader, LoadOption.PreserveChanges, ds.Tables[0]);
ds.WriteXml("C:\\test.xml");
```
However what I want to do is compress the XML into a ZIP or other type of compressed file and then just save this file to disk while splitting the ZIP file into 1MB chunks. I do not really want to save the uncompressed file, and then zip it, then split it.
**What I'm looking for specifically is:**
1. a suitable compression library that I can stream the XML to and have the zip file(s) saved to disk
2. some sample C# code that can show me how to do this.
|
I've managed to compress a DataSet's XML stream using .NET 2.0's gzip compression.
Here's the blog post I made a few years ago about it:
[Saving DataSets Locally With Compression](http://www.madprops.org/blog/saving-datasets-locally-with-compression)
... and here's the code I added to my DataSet's partial class to write the compressed file (the blog post has the reading code too):
```
public void WriteFile(string fileName)
{
using (FileStream fs = new FileStream(fileName, FileMode.Create))
{
Stream s;
if (Path.GetExtension(fileName) == ".cmx")
{
s = new GZipStream(fs, CompressionMode.Compress);
}
else if (Path.GetExtension(fileName) == ".cmz")
{
s = new DeflateStream(fs, CompressionMode.Compress);
}
else
{
s = fs;
}
WriteXml(s);
s.Close();
}
}
```
Note that this code uses different compression schemes based on the file's extension. That was purely so I could test one scheme against the other with my DataSet.
|
This works with streams or files, has a good license and source: <http://www.codeplex.com/DotNetZip>
Here's the code to do exactly what the original poster asked: write a DataSet into a zip that is split into 1mb chunks:
```
// get connection to the database
var c1= new System.Data.SqlClient.SqlConnection(connstring1);
var da = new System.Data.SqlClient.SqlDataAdapter()
{
SelectCommand= new System.Data.SqlClient.SqlCommand(strSelect, c1)
};
DataSet ds1 = new DataSet();
// fill the dataset with the SELECT
da.Fill(ds1, "Invoices");
// write the XML for that DataSet into a zip file (split into 1mb chunks)
using(Ionic.Zip.ZipFile zip = new Ionic.Zip.ZipFile())
{
zip.MaxOutputSegmentSize = 1024*1024;
zip.AddEntry(zipEntryName, (name,stream) => ds1.WriteXml(stream) );
zip.Save(zipFileName);
}
```
|
.NET - Stream DataSet (of XML data) to ZIP file?
|
[
"",
"c#",
"compression",
"zip",
""
] |
I have an app that has impersonation used throughout. But when a user is logged in as an admin, a few operation require them to write to the server itself. Now if these users do not have rights on the actual server (some don't) it will not let them write.
What I want to do is turn off impersonation for just a couple commands.
Is there a way to do something like this?
```
using(HostingEnvironment.Impersonate.Off())
//I know this isn't a command, but you get the idea?
```
Thank you.
|
Make sure the Application Pool does have the proper rights that you need.
Then, when you want to revert to the application pool identity... run the following:
```
private WindowsImpersonationContext context = null;
public void RevertToAppPool()
{
try
{
if (!WindowsIdentity.GetCurrent().IsSystem)
{
context = WindowsIdentity.Impersonate(System.IntPtr.Zero);
}
}
catch { }
}
public void UndoImpersonation()
{
try
{
if (context != null)
{
context.Undo();
}
}
catch { }
}
```
|
I am not sure if this is the preferred approach but when I wanted to do this I new'd up an instance of a [WindowsIdentity](http://msdn.microsoft.com/en-us/library/system.security.principal.windowsidentity.aspx) and called the [Impersonate](http://msdn.microsoft.com/en-us/library/system.security.principal.windowsidentity.impersonate.aspx) method. This allows subsequent code to impersonate a different Windows user. It returns a [WindowsImpersonationContext](http://msdn.microsoft.com/en-us/library/system.security.principal.windowsimpersonationcontext.aspx) that has an [Undo](http://msdn.microsoft.com/en-us/library/system.security.principal.windowsimpersonationcontext.undo.aspx) method which reverts the impersonation context back again.
|
Can I turn off impersonation just in a couple instances
|
[
"",
"c#",
"asp.net",
"impersonation",
""
] |
I want to write a C# application where it connects to a telnet server and communicates to the server. I was wondering how a telnet server sends information to the client. I have looked (skimmed) over the [Telnet RFC](http://www.faqs.org/rfcs/rfc854.html) and looked at the incoming packets coming in from the server and still a bit confused. Can somebody give me an example how the server moves the cursor around in the telnet client and how it can change colors of characters? (An example would be most appreciated)
**Update I**
Here's some additional VT100 resources:
1. [ANSI/VT100 Terminal Control](http://graphcomp.com/info/specs/ansi_col.html)
2. [vt100\_codes\_news.txt](http://www.cs.utk.edu/~shuford/terminal/vt100_codes_news.txt)
**Update II**
With much research and time, here what I found out: [Telnet Programming with C#](http://srand2.blogspot.com/2009/02/telnet-programming-with-c.html)
|
Moving the cursor and changing the color isn't done by the telnet protocol.
Your telnet client is emulating a [terminal](http://en.wikipedia.org/wiki/Computer_terminal), most probably a VT-100 variant. To move the cursor and change the color, the server sends escape sequences specific to the type of terminal being emulated (which one is sent in the telnet protocol negotiation).
If you don't want these escape sequences, telling the server on the telnet protocol negotiation you are a "dumb" terminal should be enough. If you want them (or if the server assumes everyone has a VT-100 and always sends them) you will have to implement a VT-100 terminal emulator (or at least enough of it to discard what you don't want).
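To make the escape-sequence idea concrete, here is a small sketch (shown in Java purely for illustration; the sequences themselves are language-agnostic, and the helper names are ours, not from any telnet library) that builds a few common ANSI/VT100 sequences:

```java
// Builds a few common ANSI/VT100 escape sequences of the kind a telnet
// server emits. Class and method names are illustrative only.
final class Vt100 {
    private static final char ESC = 0x1B; // the escape character

    // Move the cursor to a 1-based row/column: ESC [ row ; col H
    static String moveTo(int row, int col) {
        return ESC + "[" + row + ";" + col + "H";
    }

    // Set a display attribute, e.g. 31 = red foreground, 0 = reset: ESC [ n m
    static String setAttribute(int n) {
        return ESC + "[" + n + "m";
    }

    // Clear the whole screen: ESC [ 2 J
    static String clearScreen() {
        return ESC + "[2J";
    }

    public static void main(String[] args) {
        // On a VT100-capable terminal this clears the screen, then prints
        // "hello" in red at row 5, column 10, then resets the attributes.
        System.out.print(clearScreen() + moveTo(5, 10)
                + setAttribute(31) + "hello" + setAttribute(0));
    }
}
```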
|
A [simple Google search](http://www.google.com/search?q=c%23+telnet+client&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a) reveals many open source Telnet (and other network protocol) clients written in C#. You could just download the source code to one and see how they implement connection negotiation and commands.
|
How does Telnet server communicate to the client?
|
[
"",
"c#",
"terminal",
"client-server",
"telnet",
"vt100",
""
] |
Any gotchas I should be aware of? Can I store it in a text field, or do I need to use a blob?
(I'm not overly familiar with either pickle or sqlite, so I wanted to make sure I'm barking up the right tree with some of my high-level design ideas.)
|
If you want to store a pickled object, you'll need to use a blob, since it is binary data. However, you can, say, base64 encode the pickled object to get a string that can be stored in a text field.
Generally, though, doing this sort of thing is indicative of bad design: by storing opaque data you lose the ability to use SQL to do any useful manipulation on that data. Although without knowing what you're actually doing, I can't really make a moral call on it.
|
I needed to achieve the same thing too.
It turns out it caused me quite a headache before I finally figured out, [thanks to this post](http://coding.derkeiler.com/Archive/Python/comp.lang.python/2008-12/msg00352.html), how to actually make it work in a binary format.
### To insert/update:
```
pdata = cPickle.dumps(data, cPickle.HIGHEST_PROTOCOL)
curr.execute("insert into table (data) values (:data)", {"data": sqlite3.Binary(pdata)})
```
You must specify the second argument to dumps to force a binary pickling.
Also note the **sqlite3.Binary** to make it fit in the BLOB field.
### To retrieve data:
```
curr.execute("select data from table limit 1")
for row in curr:
data = cPickle.loads(str(row['data']))
```
When retrieving a BLOB field, sqlite3 gets a 'buffer' python type, that needs to be stringified using **str** before being passed to the loads method.
|
Can I pickle a python dictionary into a sqlite3 text field?
|
[
"",
"python",
"sqlite",
"pickle",
""
] |
1. How do I get a method's execution time?
2. Is there a `Timer` utility class for things like timing how long a task takes, etc?
Most of the searches on Google return results for timers that schedule threads and tasks, which is not what I want.
|
There is always the old-fashioned way:
```
long startTime = System.nanoTime();
methodToTime();
long endTime = System.nanoTime();
long duration = (endTime - startTime); //divide by 1000000 to get milliseconds.
```
|
I go with the simple answer. Works for me.
```
long startTime = System.currentTimeMillis();
doReallyLongThing();
long endTime = System.currentTimeMillis();
System.out.println("That took " + (endTime - startTime) + " milliseconds");
```
It works quite well. The resolution is obviously only to the millisecond, you can do better with System.nanoTime(). There are some limitations to both (operating system schedule slices, etc.) but this works pretty well.
Average across a couple of runs (the more the better) and you'll get a decent idea.
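A minimal sketch of that averaging idea (the class and method names are ours, and it assumes the work fits in a Runnable):

```java
// Times a task over several runs and reports the average duration, to
// smooth out scheduling noise. Names here are illustrative only.
class AverageTimer {
    static long averageNanos(Runnable task, int runs) {
        long total = 0;
        for (int i = 0; i < runs; i++) {
            long start = System.nanoTime();
            task.run();
            total += System.nanoTime() - start;
        }
        return total / runs;
    }

    public static void main(String[] args) {
        long avg = averageNanos(() -> {
            // stand-in for doReallyLongThing()
            for (int i = 0; i < 1_000_000; i++) { Math.sqrt(i); }
        }, 10);
        System.out.println("average: " + avg + " ns");
    }
}
```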
|
How do I time a method's execution in Java?
|
[
"",
"java",
"timing",
"execution-time",
""
] |
What is the most efficient way to convert a MySQL query to CSV in PHP please?
It would be best to avoid temp files as this reduces portability (dir paths and setting file-system permissions required).
The CSV should also include one top line of field names.
|
```
SELECT * INTO OUTFILE "c:/mydata.csv"
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY "\n"
FROM my_table;
```
(*the documentation for this is here: <http://dev.mysql.com/doc/refman/5.0/en/select.html>*)
or:
```
$select = "SELECT * FROM table_name";
$export = mysql_query ( $select ) or die ( "Sql error : " . mysql_error( ) );
$fields = mysql_num_fields ( $export );
$header = '';
$data = '';
for ( $i = 0; $i < $fields; $i++ )
{
$header .= mysql_field_name( $export , $i ) . "\t";
}
while( $row = mysql_fetch_row( $export ) )
{
$line = '';
foreach( $row as $value )
{
if ( ( !isset( $value ) ) || ( $value == "" ) )
{
$value = "\t";
}
else
{
$value = str_replace( '"' , '""' , $value );
$value = '"' . $value . '"' . "\t";
}
$line .= $value;
}
$data .= trim( $line ) . "\n";
}
$data = str_replace( "\r" , "" , $data );
if ( $data == "" )
{
$data = "\n(0) Records Found!\n";
}
header("Content-type: application/octet-stream");
header("Content-Disposition: attachment; filename=your_desired_name.xls");
header("Pragma: no-cache");
header("Expires: 0");
print "$header\n$data";
```
|
Check out this [question / answer](https://stackoverflow.com/questions/81934/easy-way-to-export-a-sql-table-without-access-to-the-server-or-phpmyadmin#81951). It's more concise than @Geoff's, and also uses the builtin fputcsv function.
```
$result = $db_con->query('SELECT * FROM `some_table`');
if (!$result) die('Couldn\'t fetch records');
$num_fields = $result->field_count;
$headers = array();
$fields = $result->fetch_fields();
for ($i = 0; $i < $num_fields; $i++) {
    $headers[] = $fields[$i]->name;
}
$fp = fopen('php://output', 'w');
if ($fp && $result) {
header('Content-Type: text/csv');
header('Content-Disposition: attachment; filename="export.csv"');
header('Pragma: no-cache');
header('Expires: 0');
fputcsv($fp, $headers);
while ($row = $result->fetch_array(MYSQLI_NUM)) {
fputcsv($fp, array_values($row));
}
die;
}
```
|
PHP code to convert a MySQL query to CSV
|
[
"",
"php",
"mysql",
"csv",
""
] |
If I create classes that are used at the moment only in a single thread, should I make them thread-safe, even if I don't need that at the moment? It could happen that I later use this class in multiple threads, and at that time I could get race conditions and may have a hard time finding them if I didn't make the class thread-safe in the first place. Or should I make the class not thread-safe, for better performance? But premature optimization is evil.
Differently asked: Should I make my classes thread-safe if needed (if used in multiple threads, otherwise not), or should I address this issue when needed (if I see that the synchronization eats up an important part of processing time)?
If I choose one of the both ways, are there methods to reduce the disadvantages? Or exists a third possibility, that I should use?
**EDIT**: I give the reason this question came to my mind. At our company we have written a very simple user-management that writes the data into property-files. I used it in a web-app, and after some work on it I got strange errors: the user-management forgot about properties of users (including name and password) and roles. That was very annoying but not consistently reproducible, so I think it was a race condition. Since I synchronized all methods reading and writing from/on disk, the problem disappeared. So I thought that I probably could have avoided all the hassle if we had written the class with synchronization in the first place.
**EDIT 2**: As I look over the tips of Pragmatic Programmer, I saw tip #41: Always Design for Concurrency. This doesn't say that all code should be thread-safe, but it says the design should have the concurrency in mind.
|
Start from the data. Decide which data is explicitly shared and protect it. If at all possible, encapsulate the locking with the data. Use pre-existing thread-safe concurrent collections.
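As an illustration of the "pre-existing thread-safe concurrent collections" point, a minimal sketch (the class and field names are ours, not from the answer):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Encapsulates shared state in a pre-built concurrent collection instead
// of hand-rolled locking. Names here are illustrative only.
class HitCounter {
    private final ConcurrentMap<String, Integer> hits = new ConcurrentHashMap<>();

    void record(String page) {
        hits.merge(page, 1, Integer::sum); // atomic update, no explicit lock
    }

    int count(String page) {
        return hits.getOrDefault(page, 0);
    }
}
```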
Whenever possible, use immutable objects. Make attributes final, set their values in the constructors. If you need to "change" the data consider returning a new instance. Immutable objects don't need locking.
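A minimal sketch of that immutable-object pattern (the Point class is our illustration, not from the answer): all fields final and set in the constructor, no setters, and "mutation" returns a new instance.

```java
// Immutable value object: safe to share between threads without locking.
final class Point {
    private final int x;
    private final int y;

    Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    int getX() { return x; }
    int getY() { return y; }

    // Instead of changing state, return a new instance.
    Point translate(int dx, int dy) {
        return new Point(x + dx, y + dy);
    }
}
```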
For objects that are not shared or thread-confined, do not spend time making them thread-safe.
Document the expectations in the code. The JCIP annotations are the best pre-defined choice available.
|
I used to try to make everything thread-safe - then I realised that the very meaning of "thread-safe" depends on the usage. You often just can't predict that usage, and the caller will *have* to take action anyway to use it in a thread-safe way.
These days I write almost everything assuming single threading, and put threading knowledge in the select few places where it matters.
Having said that, I do also (where appropriate) create immutable types, which are naturally amenable to multi-threading - as well as being easier to reason about in general.
|
Should I always make my java-code thread-safe, or for performance-reasons do it only when needed?
|
[
"",
"java",
"multithreading",
"performance",
"concurrency",
"thread-safety",
""
] |
What's the easiest way of me converting the simpler regex format that most users are used to into the correct re python regex string?
As an example, I need to convert this:
```
string = "*abc+de?"
```
to this:
```
string = ".*abc.+de.?"
```
Of course I could loop through the string and build up another string character by character, but that's surely an inefficient way of doing this?
|
Those don't look like regexps you're trying to translate, they look more like unix shell globs. Python has a [module](http://www.python.org/doc/2.5.2/lib/module-fnmatch.html) for doing this already. It doesn't know about the "+" syntax you used, but neither does my shell, and I think the syntax is nonstandard.
```
>>> import fnmatch
>>> fnmatch.fnmatch("fooabcdef", "*abcde?")
True
>>> help(fnmatch.fnmatch)
Help on function fnmatch in module fnmatch:
fnmatch(name, pat)
Test whether FILENAME matches PATTERN.
Patterns are Unix shell style:
* matches everything
? matches any single character
[seq] matches any character in seq
[!seq] matches any char not in seq
An initial period in FILENAME is not special.
Both FILENAME and PATTERN are first case-normalized
if the operating system requires it.
If you don't want this, use fnmatchcase(FILENAME, PATTERN).
>>>
```
|
Calling `.replace()` for each of the wildcards is the quick way, but what if the wildcarded string contains other regex special characters? eg. someone searching for 'my.thing\*' probably doesn't mean for that '.' to match any character. And in the worst case things like match-group-creating parentheses are likely to break your final handling of the regex matches.
re.escape can be used to put literal characters into regexes. You'll have to split out the wildcard characters first though. The usual trick for that is to use re.split with a matching bracket, resulting in a list in the form [literal, wildcard, literal, wildcard, literal...].
Example code:
```
import re

wildcards = re.compile('([?*+])')
escapewild = {'?': '.', '*': '.*', '+': '.+'}

def escapePart(parti, part):
    if parti % 2 == 0:  # even items are literals
        return re.escape(part)
    else:               # odd items are wildcards
        return escapewild[part]

def convertWildcardedToRegex(s):
    parts = [escapePart(i, part) for i, part in enumerate(wildcards.split(s))]
    return '^%s$' % ''.join(parts)
```
|
String Simple Substitution
|
[
"",
"python",
"string",
""
] |
Is there any good way to use a windows application written in C# to display/control a powerpoint slideshow? Ultimately I would like to show thumbnails in a form and clicking these thumbnails would advance the slides shown on a second monitor (similar to using Powerpoint itself to show a slideshow on a second monitor).
I would like to be able to use Powerpoint Viewer if Powerpoint is not installed.
There seems to be some ActiveX-controls that allows integration of Powerpoint in a form, but most of these seem to cost money, does anyone have experience using one of these controls?
Edit: I know that there is an object model accessible by adding a reference to Microsoft.Office.InterOp.Powerpoint, but I want to be able to distribute the resulting program without having Microsoft Office as a prerequisite; that was why I mentioned Powerpoint Viewer, because it can be distributed freely.
|
This kb lays out the basics for working with the powerpoint presentation viewer object model. I'd suggest you include the viewer when you distribute your application.
<http://support.microsoft.com/kb/265385>
Once you get a reference to the viewer (top level) object it is basically the same as working with the powerpoint.presentation object model, but with less functionality (i.e. editing, etc)
If you are working with Powerpoint 2007, then you can add editing functionality by using the System.XML and related namespaces to work with the presentation file as an open xml file.
Regarding the comments about UNO/openoffice.org, I think these miss the point, as you cannot use UNO for working with PowerPoint; it is for OpenOffice, and that was not the original requirement of the questioner.
There are 3rd party toolsets like Aspose, but your goal was to work with the PowerPoint Viewer component (free), so I'm guessing you want to avoid paying for dev tools? Either way, the viewer component's OM is perfectly suitable for displaying and previewing an existing slide show. (You will need one copy of PowerPoint so that you can author the presentation from scratch, plus Visual Studio to create the VSTO project.)
|
One of our software products here at work does that. Initially we used MS Office, but recently we switched to [OpenOffice.org Uno](http://udk.openoffice.org/) since it offers better control than MS Office and is easier to work with. It has [.NET CLI-bindings](http://wiki.services.openoffice.org/wiki/Uno/CLI).
To answer your question, yes it can be done but our engineers would recommend you use OpenOffice.org instead.
|
Using C# to display powerpoint
|
[
"",
"c#",
".net",
"powerpoint",
""
] |
I'm new to Windows programming and after reading the Petzold book I wonder: is it still good practice to use the `TCHAR` type and the `_T()` function to declare strings or should I just use the `wchar_t` and `L""` strings in new code?
I will target only modern Windows (as of this writing versions 10 and 11) and my code will be [i18n](http://en.wikipedia.org/wiki/Internationalization_and_localization) from the start up.
|
I would still use the TCHAR syntax if I was doing a new project today. There's not much practical difference between using it and the WCHAR syntax, and I prefer code which is explicit in what the character type is. Since most API functions and helper objects take/use TCHAR types (e.g.: CString), it just makes sense to use it. Plus it gives you flexibility if you decide to use the code in an ASCII app at some point, or if Windows ever evolves to Unicode32, etc.
If you decide to go the WCHAR route, I would be explicit about it. That is, use CStringW instead of CString, and casting macros when converting to TCHAR (eg: CW2CT).
That's my opinion, anyway.
|
The short answer: **NO**.
Like all the others already wrote, a lot of programmers still use TCHARs and the corresponding functions. In my humble opinion **the whole concept was a bad idea**. [UTF-16](http://en.wikipedia.org/wiki/UTF-16/UCS-2) string processing is a lot different than simple ASCII/MBCS string processing. If you use the same algorithms/functions with both of them (this is what the TCHAR idea is based on!), you get very bad performance on the UTF-16 version if you are doing a little bit more than simple string concatenation (like parsing etc.). The main reason is [Surrogates](http://msdn.microsoft.com/en-us/library/dd374069.aspx).
With the sole exception when you **really** have to compile your application for a system which doesn't support Unicode I see no reason to use this baggage from the past in a new application.
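To make the surrogate issue concrete, here is a small sketch (in portable C++11, using `char16_t` rather than the Windows `WCHAR`; the function name is my own) of counting code points in a UTF-16 string. A naive `strlen`-style loop over 16-bit units would get the count wrong for any character outside the Basic Multilingual Plane:

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Count Unicode code points in a UTF-16 string, pairing up surrogates.
// A plain "one unit = one character" loop would over-count here.
std::size_t count_code_points(const std::u16string& s) {
    std::size_t n = 0;
    for (std::size_t i = 0; i < s.size(); ++i) {
        char16_t c = s[i];
        if (c >= 0xD800 && c <= 0xDBFF && i + 1 < s.size()) {
            char16_t low = s[i + 1];
            if (low >= 0xDC00 && low <= 0xDFFF)
                ++i;  // high+low surrogate pair encodes a single code point
        }
        ++n;
    }
    return n;
}
```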
|
Is TCHAR still relevant?
|
[
"",
"c++",
"c",
"windows",
"unicode",
"wchar-t",
""
] |
How do you pass options to an executable? Is there an easier way than making the options boolean arguments?
EDIT: The last two answers have suggested using arguments. I know I can code a workable solution like that, but I'd rather have them be options.
EDIT2: Per requests for clarification, I'll use this simple example:
It's fairly easy to handle arguments because they automatically get parsed into an array.
```
./printfile file.txt 1000
```
If I want to know what the name of the file the user wants to print, I access it via argv[1].
Now about how this situation:
```
./printfile file.txt 1000 --nolinebreaks
```
The user wants to print the file with no line breaks. This is not required for the program to be able to run (as the filename and number of lines to print are), but the user has the option of using it if s/he would like. Now I could do this using:
```
./printfile file.txt 1000 true
```
The usage prompt would inform the user that the third argument is used to determine whether to print the file with line breaks or not. However, this seems rather clumsy.
|
You seem to think that there is some fundamental difference between "options" that start with "`--`" and "arguments" that don't. The only difference is in how you parse them.
It might be worth your time to look at GNU's `getopt()`/`getopt_long()` option parser. It supports passing arguments with options such as `--number-of-line-breaks 47`.
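As a rough sketch of that approach (assuming a POSIX/GNU system with `<getopt.h>`; the `Options` struct and `parse_args` name are mine, invented for the questioner's `printfile` example):

```cpp
#include <cassert>
#include <cstdlib>
#include <cstring>
#include <getopt.h>

// Hypothetical options for "./printfile file.txt 1000 --nolinebreaks".
struct Options {
    const char* filename = nullptr;
    long lines = 0;
    bool linebreaks = true;
};

Options parse_args(int argc, char** argv) {
    Options opts;
    static const option long_opts[] = {
        {"nolinebreaks", no_argument, nullptr, 'n'},
        {nullptr, 0, nullptr, 0}
    };
    optind = 1;  // reset getopt's global state in case we parse twice
    int c;
    while ((c = getopt_long(argc, argv, "n", long_opts, nullptr)) != -1) {
        if (c == 'n')
            opts.linebreaks = false;
    }
    // optind now indexes the first positional (non-option) argument.
    if (optind < argc) opts.filename = argv[optind++];
    if (optind < argc) opts.lines = std::strtol(argv[optind++], nullptr, 10);
    return opts;
}
```

Note that GNU `getopt_long` permutes `argv` so options may appear after the positional arguments; on BSD-style libcs, options must come first.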
|
Command-line arguments is the way to go. You may want to consider using Boost.ProgramOptions to simplify this task.
|
C++ Passing Options To Executable
|
[
"",
"c++",
"arguments",
""
] |
Suppose I have a hash\_map and a code like
```
// i is an iterator
i = hash_map.erase(i)
```
But GCC's STL doesn't return iterator in erase, but a void. Now is a code like
```
hash_map.erase(i++)
```
safe (i.e. does not invalidate the iterator or does any other unexpected or unpleasant things)? Please note this is a hash\_map.
|
Yes, this is safe, because the value of `i` will have been set to the next value, before the current value is erased.
According to the [SGI documentation about hashed containers](http://www.sgi.com/tech/stl/HashedAssociativeContainer.html) invalidation does not occur for non-erased elements, nor even for resizing (there is no word on whether insertions cause resizing, so to be careful I admit that as a possibility)---but in the latter case, the iteration order will be changed. But this doesn't apply here, unless you go out of your way to resize the container during traversal or something. :-)
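The same idiom, sketched with `std::map` (`hash_map` is a non-standard extension, but the erase semantics for node-based associative containers are the same; the helper function is my own illustration):

```cpp
#include <cassert>
#include <map>

// Erase all entries with even keys while iterating.
// i++ advances the iterator *before* erase() invalidates the old position.
int erase_even_keys(std::map<int, int>& m) {
    int erased = 0;
    for (std::map<int, int>::iterator i = m.begin(); i != m.end(); ) {
        if (i->first % 2 == 0) {
            m.erase(i++);   // safe: i already points at the next element
            ++erased;
        } else {
            ++i;
        }
    }
    return erased;
}
```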
|
You can encapsulate erasing to provide the same interface for all containers you use:
```
namespace detail {
template<typename Container, typename R>
struct SelectErase {
// by default, assume the next iterator is returned
template<typename Iterator>
Iterator erase(Container& c, Iterator where) {
return c.erase(where);
}
};
// specialize on return type void
template<typename Container>
struct SelectErase<Container, void> {
template<typename Iterator>
Iterator erase(Container& c, Iterator where) {
Iterator next (where);
++next;
c.erase(where);
return next;
}
};
template<typename I, typename Container, typename R>
SelectErase<Container,R> select_erase(R (Container::*)(I)) {
return SelectErase<Container,R>();
}
} // namespace detail
template<typename Container, typename Iterator>
Iterator erase(Container& container, Iterator where) {
return detail::select_erase<Iterator>(&Container::erase).erase(container, where);
}
```
This requires either:
1. c.erase returns the iterator for the next item. This is how vector, deque, and list work.
2. c.erase returns void and does not invalidate the next iterator. This is how map, set, and (non-stdlib) hash\_map work.
|
What is a safe equivalent of non-void STL erase?
|
[
"",
"c++",
"stl",
"iterator",
"hashmap",
""
] |
I'm trying to make a simple blackjack program. Sadly, I'm having problems right off the bat with generating a deck of cards.
```
#include <iostream>
#include <vector>
using namespace std;
int main() {
vector<char> deck;
char suit[] = {'h','d','c','s'};
char card[] = {'2','3','4','5','6','7','8','9','10','J','Q','K','A'};
for (int j=0; j<13; j++) {
for (int i=0; i<4; i++) {
deck.push_back(card[j] suit[i]);
}
}
return 0;
}
```
I know my problem begins with me trying to assign the value '10' to a char. Obviously I couldn't get this to compile but I'm sure when I try to assign the card values to the vector deck I'll also get an error since I used variable type 'char'. Knowing what kind of variable type to use seems to be killing me. Also, would 'deck.push\_back(card[j] suit[i]);' be the correct code to combine the card and suit, or do you have to put something between card[j] and suit[i]? I'd appreciate it if any of you could lead me in the right direction. Also as a little side note, this is part of a homework assignment so please don't just give me entire blocks of code. Thanks for your help.
|
Try creating a `Card` class with suit and card as members, and use it as the vector's element type:

```
class Card {
public:
    Card(char suit, char card) : suit(suit), card(card) {}
    char suit, card;
};

int main() {
    vector<Card> deck;
    char suit[] = {'h','d','c','s'};
    char card[] = {'2','3','4','5','6','7','8','9','T','J','Q','K','A'};
    for (int j=0; j<13; j++) {
        for (int i=0; i<4; i++) {
            deck.push_back(Card(card[j], suit[i]));
        }
    }
    return 0;
}
```

Using enums instead of chars for suit and card would also make it clearer.
|
I think what you are looking to use is an enumeration. It will make your code clearer and resolve your problem.
```
enum SUIT { HEART, CLUB, DIAMOND, SPADE };
enum VALUE { ONE, TWO, THREE, ..., TEN, JACK, QUEEN, KING};
```
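Since this is homework, here is only a structural sketch (names and loop are my own) of how those enums might combine into a deck, leaving the game logic to you:

```cpp
#include <cassert>
#include <vector>

enum Suit { HEART, DIAMOND, CLUB, SPADE };
enum Rank { TWO, THREE, FOUR, FIVE, SIX, SEVEN, EIGHT,
            NINE, TEN, JACK, QUEEN, KING, ACE };

// A card is just a (suit, rank) pair; no chars involved, so the
// two-character '10' problem disappears entirely.
struct Card {
    Suit suit;
    Rank rank;
};

std::vector<Card> make_deck() {
    std::vector<Card> deck;
    for (int s = HEART; s <= SPADE; ++s)
        for (int r = TWO; r <= ACE; ++r)
            deck.push_back(Card{static_cast<Suit>(s), static_cast<Rank>(r)});
    return deck;
}
```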
|
Generating a Deck of Cards
|
[
"",
"c++",
""
] |
Ok, so it's *almost* as easy as pie already. But it really should be easier than it is.
I think I should be able to connect to another database just by putting a JDBC connection string into TNSNAMES. Every database vendor has a type-4 JDBC driver and there's usually a good, free alternative.
With Oracle being such keen Java fans, and with a JVM built-in to the database I'd have thought a JDBC-based linking technology would have been a no-brainer. It seems a natural extension to have a JDBC connection string in TNSNAMES and everything would "just work" - you could "sql\*plus" to anything.
But it doesn't work this way. If you want to connect to another, non-Oracle database, you have to buy something called Oracle Gateways or mess around with ODBC (through something called Generic Connectivity).
[**Originality warning**... This is related to a [previous question](https://stackoverflow.com/questions/186443/what-options-are-available-for-connecting-to-a-microsoft-sql-server-database-fr) of mine but someone suggested I enter a supplementary comment as a separate question. Who am I to argue?]
|
It's a real question - perhaps slightly jokey but certainly not rhetorical. It is entirely in Oracle's interest to make it really easy to access other people's data. At the moment there's lots of ways to do it but none sufficiently straightforward. There's a JVM in the database and JDBC drivers to every other database - it should "just work" so I can only assume it hasn't been made to "just work" for a reason.
My question is whether anyone knows that reason. Does anyone?
|
The answer is the same as for the following questions:
1. Why doesn't Oracle provide an efficient way of unloading data into a non-proprietary format (e.g. comma-delimited or XML)
2. Why do most Oracle non-DB products only work with the Oracle RDBMS? (without having to use Oracle Database Gateways)
You've ever heard of the concept of [Vendor lock-in](http://en.wikipedia.org/wiki/Vendor_lock-in)?
|
Why isn't querying a JDBC-compliant database from Oracle as easy as pie?
|
[
"",
"java",
"oracle",
"jdbc",
""
] |
I'm looking for recommendations on books about MVC on the desktop. If they use Java, that is a bonus.
Some background:
I'm writing a desktop application in Java. It's an audio application that has a number of views and a central model called a Library with playlists, effects lists and a folder structure to organize them. In this application I'd like to have menus, context-menus and drag and drop support for various user actions. I've been struggling with how to achieve this using MVC.
I started with all the logic/controllers in the main class but have started to separate them out into their own classes. Now I need to start using listeners and observers to handle messages between the views and the controller. This led to me creating a number of interfaces and looping through my listeners in several places to fire off various messages. But that loop code keeps getting repeated (not DRY), so I'm assuming that now I should create different types of Event classes, create those events in my views and use a single method within the view to fire it off to the various listeners.
**Update:** Arguably it shouldn't matter much, but I'm using SWT, not Swing.
|
I've had the same problem: it really takes a lot of discipline to write a (non-trivial) swing app, because all the listeners, events, and asynchronous processing quickly add up to a big pile of unmaintainable code.
I found that classic MVC isn't enough, you have to look into more specific patterns like [Presentation Model](http://martinfowler.com/eaaDev/PresentationModel.html) and such. The only book I found covering this patterns when applied to desktop applications is [Desktop Java Live](http://www.sourcebeat.com/books/desktopjavalive.html), by Scott Delap. While the majority of swing books deal with techniques to solve specific problems (how to make a gridless jtable, how to implement a round button, ...), Delap's book will help you architect a medium-sized swing application, best practices, etc.
|
In C# rather then Java, but Jeremy Miller has a [bunch of posts](http://codebetter.com/blogs/jeremy.miller/archive/tags/Build+your+own+CAB/default.aspx) regarding desktop apps and MVP/MVC (and a whole bunch of other related stuff).
|
Recommended books on Desktop Application development using MVC
|
[
"",
"java",
"model-view-controller",
"desktop",
""
] |
Is there any class in the .NET framework that can read/write standard .ini files:
```
[Section]
<keyname>=<value>
...
```
Delphi has the `TIniFile` component and I want to know if there is anything similar for C#?
|
The creators of the .NET framework want you to use XML-based config files, rather than INI files. So no, there is no built-in mechanism for reading them.
There are third party solutions available, though.
* INI handlers can be obtained as [NuGet packages](https://www.nuget.org/packages?q=ini), such as [INI Parser](https://www.nuget.org/packages/ini-parser/).
* You can write your own INI handler, which is the old-school, laborious way. It gives you more control over the implementation, which you can use for bad or good. See e.g. [an INI file handling class using C#, P/Invoke and Win32](http://www.codeproject.com/KB/cs/cs_ini.aspx).
|
## Preface
Firstly, read this MSDN blog post on [the limitations of INI files](https://devblogs.microsoft.com/oldnewthing/20071126-00/?p=24383). If it suits your needs, read on.
This is a concise implementation I wrote, utilising the original Windows P/Invoke, so it is supported by all versions of Windows with .NET installed (i.e. Windows 98 - Windows 11). I hereby release it into the public domain - you're free to use it commercially without attribution.
## The tiny class
Add a new class called `IniFile.cs` to your project:
```
using System.IO;
using System.Reflection;
using System.Runtime.InteropServices;
using System.Text;
// Change this to match your program's normal namespace
namespace MyProg
{
class IniFile // revision 11
{
string Path;
string EXE = Assembly.GetExecutingAssembly().GetName().Name;
[DllImport("kernel32", CharSet = CharSet.Unicode)]
static extern long WritePrivateProfileString(string Section, string Key, string Value, string FilePath);
[DllImport("kernel32", CharSet = CharSet.Unicode)]
static extern int GetPrivateProfileString(string Section, string Key, string Default, StringBuilder RetVal, int Size, string FilePath);
public IniFile(string IniPath = null)
{
Path = new FileInfo(IniPath ?? EXE + ".ini").FullName;
}
public string Read(string Key, string Section = null)
{
var RetVal = new StringBuilder(255);
GetPrivateProfileString(Section ?? EXE, Key, "", RetVal, 255, Path);
return RetVal.ToString();
}
public void Write(string Key, string Value, string Section = null)
{
WritePrivateProfileString(Section ?? EXE, Key, Value, Path);
}
public void DeleteKey(string Key, string Section = null)
{
Write(Key, null, Section ?? EXE);
}
public void DeleteSection(string Section = null)
{
Write(null, null, Section ?? EXE);
}
public bool KeyExists(string Key, string Section = null)
{
return Read(Key, Section).Length > 0;
}
}
}
```
## How to use it
Open the INI file in one of the 3 following ways:
```
// Creates or loads an INI file in the same directory as your executable
// named EXE.ini (where EXE is the name of your executable)
var MyIni = new IniFile();
// Or specify a specific name in the current dir
var MyIni = new IniFile("Settings.ini");
// Or specify a specific name in a specific dir
var MyIni = new IniFile(@"C:\Settings.ini");
```
You can write some values like so:
```
MyIni.Write("DefaultVolume", "100");
MyIni.Write("HomePage", "http://www.google.com");
```
To create a file like this:
```
[MyProg]
DefaultVolume=100
HomePage=http://www.google.com
```
To read the values out of the INI file:
```
var DefaultVolume = MyIni.Read("DefaultVolume");
var HomePage = MyIni.Read("HomePage");
```
Optionally, you can set `[Section]`'s:
```
MyIni.Write("DefaultVolume", "100", "Audio");
MyIni.Write("HomePage", "http://www.google.com", "Web");
```
To create a file like this:
```
[Audio]
DefaultVolume=100
[Web]
HomePage=http://www.google.com
```
You can also check for the existence of a key like so:
```
if(!MyIni.KeyExists("DefaultVolume", "Audio"))
{
MyIni.Write("DefaultVolume", "100", "Audio");
}
```
You can delete a key like so:
```
MyIni.DeleteKey("DefaultVolume", "Audio");
```
You can also delete a whole section (including all keys) like so:
```
MyIni.DeleteSection("Web");
```
Please feel free to comment with any improvements!
|
Reading/writing an INI file
|
[
"",
"c#",
".net",
"ini",
""
] |
*Note: I found this ["Creating a Word Doc in C#.NET"](https://stackoverflow.com/questions/10412/creating-a-word-doc-in-cnet), but that is not what I want.*
Do you know how to create a **.odt** file from C# .NET?
Is there a .NET component or wrapper for an OpenOffice.org library to do this?
|
Have a look at AODL (see <http://odftoolkit.org/projects/odftoolkit/pages/AODL>).
* fully managed .NET 1.1 (so it runs on MS.Net and Mono)
* support for text and spreadsheet documents
* create, read, edit, save documents
* ...
EDIT by kame:
New link
[AODL-Wiki](https://wiki.openoffice.org/wiki/AODL)
|
You can check out the [OASIS Standards site](http://www.oasis-open.org/specs/index.php#opendocument) for information on the ODT standard. From what I've seen, they're using an XML based standard and have an XSD available for the the document standard, so you could use that in conjunction with your own code to build a document file in the proper format.
|
How to create .odt files with C#.NET?
|
[
"",
"c#",
"document",
"openoffice.org",
""
] |
Alright, I'm trying to read a comma delimited file and then put that into a ListView (or any grid, really). I have the delimiting part of the job taken care of, with the fields of the file being put into a multidimensional string array. The problem is trying to get it into the ListView.
It appears that there isn't a real way of adding columns or items dynamically, since each column and item needs to be manually declared. This poses a problem, because I need the ListView to be as large as the file is, whose size isn't set. It could be huge one time, and small another.
Any help with this would be appreciated.
---
In response to Jeffrey's answer.
I would do exactly that, but the problem that I'm running into is a basic one. How can I create these objects without naming them. Noobie question, but a problem for me, sadly. This is what I have so far.
```
int x = 0;
int y = 0;
while (y < linenum)
{
while (x < width)
{
ListViewItem listViewItem1 = new ListViewItem(list[y,x]);
x++;
}
y++;
x = 0;
}
```
What should I do for the name of listViewItem1?
|
Just loop through each of the arrays that you've created and create a new ListViewItem object (there is a constructor that takes an array of strings, I believe). Then pass the ListViewItem to the ListView.Items.Add() method.
|
You can [load a csv file with ado.net](http://www.dotnetspider.com/resources/646-Handling-CSV-Files-ADO.aspx) and bind it to a datagrid's data source. Or you could use [a LINQ to CSV library](http://weblogs.asp.net/pleloup/archive/2008/04/12/linq-to-csv-library.aspx) to parse the file and bind those results to a datagrid's data source property.
|
Reading Comma Delimited File and Putting Data in ListView - C#
|
[
"",
"c#",
"listview",
"user-interface",
""
] |
In a previous question, I asked about various ORM libraries. It turns out Kohana looks very clean yet functional for the purposes of ORM. I already have an MVC framework that I am working in though. If I don't want to run it as a framework, what is the right fileset to include to just give me the DB and ORM base class files?
Update:
I jumped in and started looking at the ORM source code. One thing was immediately confusing to me: all the ORM classes have the class name appended with \_Core, i.e. ORM\_Core, ORM\_Iterator\_Core, but the code everywhere is extending the ORM class. Problem is, I've searched the whole code base 6 different ways, and I've never seen a plain ORM class def nor an ORM interface def or anything. Could someone enlighten me on where that magic happens?
|
Why not just have a
```
class ORM extends ORM_Core {}
```
somewhere in your code? This removes the need to use any of the loader code.
You'll also need Kohana\_Exception, the Database library (and appropriate driver), and the Kohana::config(), Kohana::auto\_load(), Kohana::log() methods (search Database.php for those).
Kohana is a great MVC framework, but not really designed to be taken apart in chunks like that. You may want to also investigate [Doctrine](http://www.doctrine-project.org/), another ORM for PHP (that IS designed to be stand-alone)
|
It turns out that Kohana uses magic class loading: if a class with an \_Core suffix exists but the plain class doesn't (i.e. ORM\_Core exists, but ORM doesn't), Kohana will magically define an ORM class. The package uses 100% magic class loading.
In case anyone is interested, I'm documenting my finds here so everyone can find it later:
From Kohana.php in the system directory:

```
// <-- snip -->
if ($extension = self::find_file($type, self::$configuration['core']['extension_prefix'].$class))
{
    // Load the extension
    require $extension;
}
elseif ($suffix !== 'Core' AND class_exists($class.'_Core', FALSE))
{
    // Class extension to be evaluated
    $extension = 'class '.$class.' extends '.$class.'_Core { }';
// <-- snip -->

// <-- snip -->
    // Transparent class extensions are handled using eval. This is
    // a disgusting hack, but it gets the job done.
    eval($extension);
// <-- snip -->
```
So it does an eval..
|
Is it possible to install Kohana libraries for ORM in PHP without using the entire framework?
|
[
"",
"php",
"orm",
"kohana",
""
] |
I am planning a PHP application that needs to store date/times in an MSSQL database. (For the curious, it is a calendar application.) What is the preferred format to store this information?
MSSQL has its own datetime data type, which works well in the database itself and is very readable. However, there aren't any MSSQL functions to translate datetime values to PHP's preferred format--UNIX timestamp. This makes it a bit more painful to use with PHP. UNIX timestamp is attractive because that's what PHP likes, but it's certainly not as readable and there aren't a bunch of nice built-in MSSQL functions for working with the data.
Would you store this information as datetime data type, as UNIX timestamps (as int, bigint, or varchar datatype), as both formats side by side, or as something else entirely?
|
I would store the dates in the MS-SQL format to assist in using the date manipulation functions in T-SQL to their fullest. It's easier to write and read
```
SELECT * FROM Foo
WHERE DATEDIFF(d, field1, GETDATE()) < 1
```
Than to try and perform the equivalent operation by manipulating integers
To convert a MsSQL date into a unix timestamp use dateDiff:
```
SELECT DATEDIFF(s,'1970-01-01 00:00:00',fieldName) as fieldNameTS
FROM TableName
WHERE fieldName between '10/1/2008' and '10/31/2008'
```
To Convert an Unix Timestamp into a MsSQL Date, you can either do it in PHP:
```
$msSQLDate = date("Y-m-d H:i:s", $unixDate );
```
or in MsSQL
```
INSERT INTO TableName (
fieldName
) VALUES (
DATEADD(s, ?, '1970-01-01 00:00:00')
)
```
Where parameter one is int($unixDate)
|
I'd recommend the same as I do for all dates in any DB engine: the DB-native type (DATETIME).
Just use "YYYY-MM-DD HH:MM:SS" for inserting in php: `date('Y-m-d H:i:s', $myTimeStampInSeconds);`
-edit in response to comments below here -
1. For selected columns you can use `$timestamp = strtotime( $yourColumnValue );`
2. I recommend storing in the database-native format because you can then compare records in SQL using date/time functions like DATEADD() etc.
|
What is the preferred format to store date/times in a SQL Server database when PHP is your primary language?
|
[
"",
"php",
"sql-server",
"datetime",
"time",
""
] |
Is there a way to merge two primary keys into one and then cascade update all affected relationships? Here's the scenario:
Customers (idCustomer int PK, Company varchar(50), etc)
CustomerContacts (idCustomerContact int PK, idCustomer int FK, Name varchar(50), etc)
CustomerNotes (idCustomerNote int PK, idCustomer int FK, Note Text, etc)
Sometimes customers need to be merged into one. For example, you have a customer with the id of 1 and another with the id of 2. You want to merge both, so that everything that was 2 is now 1. I know I could write a script that updates all affected tables one by one, but I'd like to make it more future proof by using the cascade rules, so I don't have to update the script every time there is a new relationship added.
Any ideas?
|
There's no automatic way to do it, but you have a couple of options: you can manually write the procedures, or you can either code-generate the merge on a regular basis or dynamically generate it at run-time. To do this, you can use `INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS`, `INFORMATION_SCHEMA.KEY_COLUMN_USAGE`, `INFORMATION_SCHEMA.TABLE_CONSTRAINTS`, `INFORMATION_SCHEMA.COLUMNS`, and `INFORMATION_SCHEMA.TABLES` to build the procedure dynamically.
You can also simply wrap the entire operation in a transaction (a good idea anyway). The last step will be to remove the customer being merged out of, so if there is RI on a table you never added and you try to do a merge, it will fail because you cannot remove the customer being merged out of since there are dependent records in a table which wasn't already added to the merge procedure.
|
Any more recent solution to this?
I have kind of the same problem and at the moment building the procedures dynamically seems too complicated. Here's how it could work in theory, but I guess it doesn't?
In one transaction:
1) Temporarily disable the primary key constraint on Customers
2) Update the primary ID of the duplicate customer (id 2 in the example) to '1', letting the relationship's update cascade rule take care of the children
3) Use a secondary key field to delete (only) the duplicate customer
4) Re-enable the primary key constraint on Customers
Looking forward to something like this in future T-SQL:
DELETE WITH UPDATE idCustomer = 1 FROM Customers WHERE idCustomer = 2;
;-)
|
Merge Primary Keys - Cascade Update
|
[
"",
"sql",
"sql-server",
"cascade",
""
] |
My MFC application using the "ESRI MapObjects LT2" ActiveX plugin throws an ASSERT at me when closing it.
The error occurs in `cmdtarg.cpp`:
```
CCmdTarget::~CCmdTarget()
{
#ifndef _AFX_NO_OLE_SUPPORT
if (m_xDispatch.m_vtbl != 0)
((COleDispatchImpl*)&m_xDispatch)->Disconnect();
ASSERT(m_dwRef <= 1); //<--- Fails because m_dwRef is 3
#endif
m_pModuleState = NULL;
}
```
I built the (native C++) application with VC9.
When I compile the application with VC6, it behaves nicely.
What could be the reason for this?
|
The following solved it for me:
In the window that contains the control, add an OnDestroy() handler:
```
void CMyWnd::OnDestroy()
{
// Apparently we have to disconnect the (ActiveX) Map control manually
// with this undocumented method.
COleControlSite* pSite = GetOleControlSite(MY_DIALOG_CONTROL_ID);
if(NULL != pSite)
{
pSite->ExternalDisconnect();
}
CWnd::OnDestroy();
}
```
|
That looks like a reference count. Could this "target" be referenced by something else, something that's not releasing it?
|
ActiveX plugin causes ASSERT to fail on application exit in VS2008
|
[
"",
"c++",
"activex",
"visual-c++-6",
"visual-c++-2008",
""
] |
I am a big fan of letting the compiler do as much work for you as possible. When writing a simple class the compiler can give you the following for 'free':
* A default (empty) constructor
* A copy and move constructor
* A destructor
* Assignment operators (`operator=`)
But it cannot seem to give you any comparison operators - such as `operator==` or `operator!=`. For example:
```
class foo
{
public:
std::string str_;
int n_;
};
foo f1; // Works
foo f2(f1); // Works
foo f3;
f3 = f2; // Works
if (f3 == f2) // Fails
{ }
if (f3 != f2) // Fails
{ }
```
Is there a good reason for this? Why would performing a member-by-member comparison be a problem? Obviously if the class allocates memory then you'd want to be careful, but for a simple class surely the compiler could do this for you?
|
The compiler wouldn't know whether you wanted a pointer comparison or a deep (internal) comparison.
It's safer to just not implement it and let the programmer do that themselves. Then they can make all the assumptions they like.
|
The argument that if the compiler can provide a default copy constructor, it should be able to provide a similar default `operator==()` makes a certain amount of sense. I think that the reason for the decision not to provide a compiler-generated default for this operator can be guessed by what Stroustrup said about the default copy constructor in "The Design and Evolution of C++" (Section 11.4.1 - Control of Copying):
> I personally consider it unfortunate
> that copy operations are defined by
> default and I prohibit copying of
> objects of many of my classes.
> However, C++ inherited its default
> assignment and copy constructors from
> C, and they are frequently used.
So instead of "why doesn't C++ have a default `operator==()`?", the question should have been "why does C++ have a default assignment and copy constructor?", with the answer being those items were included reluctantly by Stroustrup for backwards compatibility with C (probably the cause of most of C++'s warts, but also probably the primary reason for C++'s popularity).
For my own purposes, in my IDE the snippet I use for new classes contains declarations for a private assignment operator and copy constructor so that when I gen up a new class I get no default assignment and copy operations - I have to explicitly remove the declaration of those operations from the `private:` section if I want the compiler to be able to generate them for me.
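For readers wondering what the missing default would look like: the hand-written version is entirely mechanical (and, as of C++20, the language finally lets you ask for it with `bool operator==(const foo&) const = default;`). A sketch using the question's own `foo`:

```cpp
#include <string>

struct foo {
    std::string str_;
    int n_;
};

// The member-by-member comparison the compiler declines to write for you.
bool operator==(const foo& a, const foo& b) {
    return a.str_ == b.str_ && a.n_ == b.n_;
}

// operator!= is conventionally defined in terms of operator==.
bool operator!=(const foo& a, const foo& b) {
    return !(a == b);
}
```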
|
Why don't C++ compilers define operator== and operator!=?
|
[
"",
"c++",
"operators",
"language-design",
"comparison-operators",
"default-comparisons",
""
] |
I recently stumbled across [this entry in the google testing blog](http://googletesting.blogspot.com/2008/08/by-miko-hevery-so-you-decided-to.html) about guidelines for writing more testable code. I was in agreement with the author until this point:
> Favor polymorphism over conditionals: If you see a switch statement you should think polymorphisms. If you see the same if condition repeated in many places in your class you should again think polymorphism. Polymorphism will break your complex class into several smaller simpler classes which clearly define which pieces of the code are related and execute together. This helps testing since simpler/smaller class is easier to test.
I simply cannot wrap my head around that. I can understand using polymorphism instead of RTTI (or DIY-RTTI, as the case may be), but that seems like such a broad statement that I can't imagine it actually being used effectively in production code. It seems to me, rather, that it would be easier to add additional test cases for methods which have switch statements, rather than breaking down the code into dozens of separate classes.
Also, I was under the impression that polymorphism can lead to all sorts of other subtle bugs and design issues, so I'm curious to know if the tradeoff here would be worth it. Can someone explain to me exactly what is meant by this testing guideline?
|
Actually, this makes both the tests and the code easier to write.
If you have one switch statement based on an internal field you probably have the same switch in multiple places doing slightly different things. This causes problems when you add a new case as you have to update all the switch statements (if you can find them).
By using polymorphism, virtual functions give you the same functionality, and because a new case is a new class, you don't have to search your code for things that need to be checked; it is all isolated in each class.
```
class Animal
{
public:
    Noise warningNoise();
    Noise pleasureNoise();
private:
    AnimalType type;
};

Noise Animal::warningNoise()
{
    switch(type)
    {
        case Cat: return Hiss;
        case Dog: return Bark;
    }
}

Noise Animal::pleasureNoise()
{
    switch(type)
    {
        case Cat: return Purr;
        case Dog: return Bark;
    }
}
```
In this simple case, every new animal requires both switch statements to be updated.
You forget one? What is the default? BANG!!
Using polymorphism
```
class Animal
{
public:
    virtual Noise warningNoise() = 0;
    virtual Noise pleasureNoise() = 0;
};

class Cat: public Animal
{
    // Compiler forces you to define both methods.
    // Otherwise you can't have a Cat object.
    // All code local to the cat belongs to the cat.
};
```
By using polymorphism you can test the Animal class.
Then test each of the derived classes separately.
Also this allows you to ship the Animal class (**Closed for alteration**) as part of your binary library. But people can still add new Animals (**Open for extension**) by deriving new classes from the Animal header. If all this functionality had been captured inside the Animal class then all animals would need to be defined before shipping (Closed/Closed).
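Filled in as a complete (if toy) fragment, with `Noise` assumed to be a plain string for illustration, the polymorphic version might look like:

```cpp
#include <string>

typedef std::string Noise;

class Animal {
public:
    virtual ~Animal() {}
    virtual Noise warningNoise() = 0;
    virtual Noise pleasureNoise() = 0;
};

class Cat : public Animal {
public:
    Noise warningNoise() { return "Hiss"; }
    Noise pleasureNoise() { return "Purr"; }
};

class Dog : public Animal {
public:
    Noise warningNoise() { return "Bark"; }
    Noise pleasureNoise() { return "Bark"; }
};
```

Adding a new animal is now just a new class; forget to override a pure virtual method and the compiler refuses to let you instantiate it, so there is no switch statement left to go stale.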
|
## Do not fear...
I guess your problem lies with familiarity, not technology. Familiarize yourself with C++ OOP.
## C++ is an OOP language
Among its multiple paradigms, it has OOP features and is more than able to stand comparison with most pure OO languages.
Don't let the "C part inside C++" make you believe C++ can't deal with other paradigms. C++ can handle a lot of programming paradigms quite gracefully. And among them, OOP is the most mature of C++'s paradigms after the procedural paradigm (i.e. the aforementioned "C part").
## Polymorphism is Ok for production
There is no "subtle bugs" or "not suitable for production code" thing. There are developers who remain set in their ways, and developers who'll learn how to use tools and use the best tools for each task.
## switch and polymorphism are [almost] similar...
... But polymorphism removes most of the errors.
The difference is that you must handle the switches manually, whereas polymorphism is more natural, once you get used to inheritance and method overriding.
With switches, you'll have to compare a type variable with different types, and handle the differences. With polymorphism, the variable itself knows how to behave. You only have to organize the variables in logical ways, and override the right methods.
But in the end, if you forget to handle a case in switch, the compiler won't tell you, whereas you'll be told if you derive from a class without overriding its pure virtual methods. Thus most switch-errors are avoided.
All in all, the two features are about making choices. But polymorphism enables you to make more complex, and at the same time more natural and thus easier, choices.
## Avoid using RTTI to find an object's type
RTTI is an interesting concept, and can be useful. But most of the time (i.e. 95% of the time), method overriding and inheritance will be more than enough, and most of your code should not even know the exact type of the object handled, but trust it to do the right thing.
If you use RTTI as a glorified switch, you're missing the point.
(Disclaimer: I am a great fan of the RTTI concept and of dynamic\_casts. But one must use the right tool for the task at hand, and most of the time RTTI is used as a glorified switch, which is wrong)
## Compare dynamic vs. static polymorphism
If your code does not know the exact type of an object at compile time, then use dynamic polymorphism (i.e. classic inheritance, virtual methods overriding, etc.)
If your code knows the type at compile time, then perhaps you could use static polymorphism, i.e. the CRTP pattern <http://en.wikipedia.org/wiki/Curiously_Recurring_Template_Pattern>
The CRTP will enable you to have code that smells like dynamic polymorphism, but whose every method call will be resolved statically, which is ideal for some very critical code.
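A minimal CRTP sketch (all names invented): the base template calls a method on the derived class through a `static_cast`, so the call is resolved at compile time with no virtual dispatch:

```cpp
template <typename Derived>
class Shape {
public:
    // Statically dispatched: no vtable, resolved at compile time.
    int area() { return static_cast<Derived*>(this)->computeArea(); }
};

class Square : public Shape<Square> {
public:
    Square(int side) : side_(side) {}
    int computeArea() { return side_ * side_; }
private:
    int side_;
};
```

Code written against `Shape<Derived>&` gets the polymorphic "smell" while every call is inlined and resolved statically.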
## Production code example
A code similar to this one (from memory) is used in production.
The earlier solution revolved around the procedure called by the message loop (a WndProc in Win32; I wrote a simpler version here for simplicity's sake). To summarize, it was something like:
```
void MyProcedure(int p_iCommand, void *p_vParam)
{
    // A LOT OF CODE ???
    // Each case has a lot of code, with both similarities
    // and differences, and of course, casting p_vParam
    // into something depending on p_iCommand, hoping no one
    // made a mistake associating the wrong command with
    // the wrong data type in p_vParam.
    switch(p_iCommand)
    {
        case COMMAND_AAA: { /* A LOT OF CODE (see above) */ } break ;
        case COMMAND_BBB: { /* A LOT OF CODE (see above) */ } break ;
        // etc.
        case COMMAND_XXX: { /* A LOT OF CODE (see above) */ } break ;
        case COMMAND_ZZZ: { /* A LOT OF CODE (see above) */ } break ;
        default: { /* call default procedure */ } break ;
    }
}
```
Each new command added a case.
The problem was that some commands were similar and partly shared their implementation.
So mixing the cases was a risk as the code evolved.
I resolved the problem by using the Command pattern, that is, creating a base Command object, with one process() method.
So I re-wrote the message procedure, minimizing the dangerous code (i.e. playing with void \*, etc.) to a minimum, and wrote it to be sure I would never need to touch it again:
```
void MyProcedure(int p_iCommand, void *p_vParam)
{
    switch(p_iCommand)
    {
        // Only one case. Isn't it cool?
        case COMMAND:
        {
            Command * c = static_cast<Command *>(p_vParam) ;
            c->process() ;
        }
        break ;
        default: { /* call default procedure */ } break ;
    }
}
```
And then, for each possible command, instead of adding code in the procedure, and mixing (or worse, copy/pasting) the code from similar commands, I created a new command, and derived it either from the Command object, or one of its derived objects:
This led to the hierarchy (represented as a tree):
```
[+] Command
|
+--[+] CommandServer
| |
| +--[+] CommandServerInitialize
| |
| +--[+] CommandServerInsert
| |
| +--[+] CommandServerUpdate
| |
| +--[+] CommandServerDelete
|
+--[+] CommandAction
| |
| +--[+] CommandActionStart
| |
| +--[+] CommandActionPause
| |
| +--[+] CommandActionEnd
|
+--[+] CommandMessage
```
Now, all I needed to do was to override process for each object.
Simple, and easy to extend.
For example, say the CommandAction was supposed to do its process in three phases: "before", "while" and "after". Its code would be something like:
```
class CommandAction : public Command
{
    // etc.
    virtual void process() // overriding Command::process pure virtual method
    {
        this->processBefore() ;
        this->processWhile() ;
        this->processAfter() ;
    }

    virtual void processBefore() = 0 ; // To be overridden
    virtual void processWhile()
    {
        // Do something common for all CommandAction objects
    }
    virtual void processAfter() = 0 ; // To be overridden
} ;
```
And, for example, CommandActionStart could be coded as:
```
class CommandActionStart : public CommandAction
{
    // etc.
    virtual void processBefore()
    {
        // Do something common for all CommandActionStart objects
    }
    virtual void processAfter()
    {
        // Do something common for all CommandActionStart objects
    }
} ;
```
As I said: Easy to understand (if commented properly), and very easy to extend.
The switch is reduced to its bare minimum (i.e. if-like, because we still needed to delegate Windows commands to Windows default procedure), and no need for RTTI (or worse, in-house RTTI).
The same code inside a switch would be quite amusing, I guess (if only judging by the amount of "historical" code I saw in our app at work).
|
Do polymorphism or conditionals promote better design?
|
[
"",
"c++",
"oop",
"tdd",
"polymorphism",
""
] |
I have a collection of elements that I need to operate over, calling member functions on the collection:
```
std::vector<MyType> v;
... // vector is populated
```
For calling functions with no arguments it's pretty straight-forward:
```
std::for_each(v.begin(), v.end(), std::mem_fun(&MyType::myfunc));
```
A similar thing can be done if there's one argument to the function I wish to call.
My problem is that I want to call a function on elements in the vector if it meets some condition. `std::find_if` returns an iterator to the first element meeting the conditions of the predicate.
```
std::vector<MyType>::iterator it =
std::find_if(v.begin(), v.end(), MyPred());
```
I wish to find **all** elements meeting the predicate and operate over them.
I've been looking at the STL algorithms for a "`find_all`" or "`do_if`" equivalent, or a way I can do this with the existing STL (such that I only need to iterate once), rather than rolling my own or simply do a standard iteration using a for loop and comparisons.
|
Boost Lambda makes this easy.
```
#include <boost/lambda/lambda.hpp>
#include <boost/lambda/bind.hpp>
#include <boost/lambda/if.hpp>
std::for_each( v.begin(), v.end(),
if_( MyPred() )[ std::mem_fun(&MyType::myfunc) ]
);
```
You could even do away with defining MyPred(), if it is simple. This is where lambda really shines. E.g., if MyPred meant "is divisible by 2":
```
std::for_each( v.begin(), v.end(),
if_( _1 % 2 == 0 )[ std::mem_fun( &MyType::myfunc ) ]
);
```
---
**Update:**
Doing this with the C++0x lambda syntax is also very nice (continuing with the predicate as modulo 2):
```
std::for_each( v.begin(), v.end(),
    [](MyType& mt) mutable
    {
        if( mt % 2 == 0 )
        {
            mt.myfunc();
        }
    } );
```
At first glance this looks like a step backwards from boost::lambda syntax, however, it is better because more complex functor logic is trivial to implement with c++0x syntax... where anything very complicated in boost::lambda gets tricky quickly. Microsoft Visual Studio 2010 beta 2 currently implements this functionality.
|
I wrote a `for_each_if()` and a `for_each_equal()` which do what I think you're looking for.
`for_each_if()` takes a predicate functor to evaluate equality, and `for_each_equal()` takes a value of any type and does a direct comparison using `operator ==`. In both cases, the function you pass in is called on each element that passes the equality test.
```
/* ---
For each
25.1.1
template< class InputIterator, class Function, class T>
Function for_each_equal(InputIterator first, InputIterator last, const T& value, Function f)
template< class InputIterator, class Function, class Predicate >
Function for_each_if(InputIterator first, InputIterator last, Predicate pred, Function f)
Requires:
T is of type EqualityComparable (20.1.1)
Effects:
Applies f to each dereferenced iterator i in the range [first, last) where one of the following conditions hold:
1: *i == value
2: pred(*i) != false
Returns:
f
Complexity:
At most last - first applications of f
--- */
template< class InputIterator, class Function, class Predicate >
Function for_each_if(InputIterator first,
                     InputIterator last,
                     Predicate pred,
                     Function f)
{
    for( ; first != last; ++first)
    {
        if( pred(*first) )
            f(*first);
    }
    return f;
}

template< class InputIterator, class Function, class T>
Function for_each_equal(InputIterator first,
                        InputIterator last,
                        const T& value,
                        Function f)
{
    for( ; first != last; ++first)
    {
        if( *first == value )
            f(*first);
    }
    return f;
}
```
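For illustration, a use of `for_each_if()` might look like the sketch below; the template is repeated so the example stands alone, and `collect_even` is an invented helper:

```cpp
#include <vector>

template <class InputIterator, class Function, class Predicate>
Function for_each_if(InputIterator first, InputIterator last,
                     Predicate pred, Function f) {
    for (; first != last; ++first)
        if (pred(*first))
            f(*first);
    return f;
}

struct IsEven {
    bool operator()(int n) const { return n % 2 == 0; }
};

struct Collector {
    std::vector<int>* out;
    void operator()(int n) const { out->push_back(n); }
};

// Invented helper: gather the even values of v into a new vector.
inline std::vector<int> collect_even(const std::vector<int>& v) {
    std::vector<int> evens;
    Collector c = { &evens };
    for_each_if(v.begin(), v.end(), IsEven(), c);
    return evens;
}
```

The whole range is still walked exactly once; only the elements passing the predicate reach the function object.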
|
using STL to find all elements in a vector
|
[
"",
"c++",
"stl",
""
] |
I need to [serialize](https://en.wikipedia.org/wiki/Serialization) an object to [JSON](https://www.json.org/). I'm using [jQuery](https://api.jquery.com/). Is there a "standard" way to do this?
My specific situation: I have an array defined as shown below:
```
var countries = new Array();
countries[0] = 'ga';
countries[1] = 'cd';
...
```
and I need to turn this into a string to pass to [`$.ajax()`](https://www.w3schools.com/jquery/ajax_ajax.asp) like this:
```
$.ajax({
type: "POST",
url: "Concessions.aspx/GetConcessions",
data: "{'countries':['ga','cd']}",
...
```
|
[JSON-js](https://github.com/douglascrockford/JSON-js) - JSON in JavaScript.
To convert an object to a string, use `JSON.stringify`:
```
var json_text = JSON.stringify(your_object, null, 2);
```
To convert a JSON string to object, use `JSON.parse`:
```
var your_object = JSON.parse(json_text);
```
It was recently recommended by [John Resig](http://ejohn.org/blog/ecmascript-5-strict-mode-json-and-more/):
> ...PLEASE start migrating
> your JSON-using applications over to
> Crockford's json2.js. It is fully
> compatible with the ECMAScript 5
> specification and gracefully degrades
> if a native (faster!) implementation
> exists.
>
> In fact, I just landed a change in jQuery yesterday that utilizes the
> JSON.parse method if it exists, now
> that it has been completely specified.
I tend to trust what he says on JavaScript matters :)
[All modern browsers](http://caniuse.com/json) (and many older ones which aren't ancient) support the [JSON object](http://ecma262-5.com/ELS5_Section_15.htm#Section_15.12) natively. The current version of Crockford's JSON library will only define `JSON.stringify` and `JSON.parse` if they're not already defined, leaving any browser native implementation intact.
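Applied to the question's `countries` array, the round trip might look like this (building the object to serialize rather than hand-writing the JSON string):

```javascript
var countries = ['ga', 'cd'];

// Serialize an object wrapping the array, instead of
// concatenating the JSON text by hand.
var payload = JSON.stringify({ countries: countries });
// payload is now '{"countries":["ga","cd"]}'

// And back again:
var parsed = JSON.parse(payload);
```

`payload` can then be passed as the `data` option of `$.ajax()`.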
|
I've been using [jquery-json](https://code.google.com/p/jquery-json/) for 6 months and it works great. It's very simple to use:
```
var myObj = {foo: "bar", "baz": "wockaflockafliz"};
$.toJSON(myObj);
// Result: {"foo":"bar","baz":"wockaflockafliz"}
```
|
Serializing to JSON in jQuery
|
[
"",
"javascript",
"jquery",
"ajax",
"json",
"serialization",
""
] |
I want to be able to do the following:
```
$normal_array = array();
$array_of_arrayrefs = array(&$normal_array);
// Here I want to access the $normal_array reference **as a reference**,
// but that doesn't work obviously. How to do it?
end($array_of_arrayrefs)["one"] = 1; // choking on this one
print $normal_array["one"]; // should output 1
```
|
`end()` doesn't return a reference to the last value, but rather the last value itself. Here is a workaround:
```
$normal_array = array();
$array_of_arrayrefs = array( &$normal_array );
$refArray = &end_byref( $array_of_arrayrefs );
$refArray["one"] = 1;
print $normal_array["one"]; // should output 1
function &end_byref( &$array ) {
    $keys = array_keys($array);
    $lastKey = end($keys);
    return $array[$lastKey];
}
```
|
Here are a couple of approaches, neither of which I find particularly satisfying.
I'm sure there's a better way..
```
<?php
$normal_array = array();
$array_of_arrayrefs = array( "blah", &$normal_array );
foreach ($array_of_arrayrefs as &$v);
$v["one"] = 1;
echo $normal_array["one"]; //prints 1
?>
<?php
$normal_array = array();
$array_of_arrayrefs = array( "blah", &$normal_array );
$lastIndex = @end(array_keys($array_of_arrayrefs)); //raises E_STRICT because end() expects referable.
$array_of_arrayrefs[$lastIndex]["one"] = 1;
echo $normal_array["one"]; //prints 1
?>
```
|
Set element on last reference in an array of references
|
[
"",
"php",
"arrays",
"reference",
"pass-by-reference",
""
] |
I'm looking into some possible options for unit testing C++ classes.
So, short and to the point, what are you using?
|
I'm using [cppunit](http://sourceforge.net/projects/cppunit). It is a pretty good port of the iconic JUnit to c++.
|
[UnitTest++](http://unittest-cpp.sourceforge.net/). In the past I used Boost Test, which is also pretty good, but I ran across a problem where boost test wanted an operator<< defined and it wouldn't accept my overloaded operator<<. UnitTest++ didn't flinch a bit.
|
What are you using to unit test your C++ code?
|
[
"",
"c++",
"unit-testing",
""
] |
I have an XML file loaded into a DOM document,
I wish to iterate through all 'foo' tags, getting values from every tag below it. I know I can get values via
```
$element = $dom->getElementsByTagName('foo')->item(0);
foreach($element->childNodes as $node){
    $data[$node->nodeName] = $node->nodeValue;
}
```
However, what I'm trying to do, is from an XML like,
```
<stuff>
<foo>
<bar></bar>
<value/>
<pub></pub>
</foo>
<foo>
<bar></bar>
<pub></pub>
</foo>
<foo>
<bar></bar>
<pub></pub>
</foo>
</stuff>
```
iterate over every *foo* tag, and get specific *bar* or *pub*, and get values from there.
Now, how do I iterate over *foo* so that I can still access specific child nodes by name?
|
Not tested, but what about:
```
$elements = $dom->getElementsByTagName('foo');
$data = array();
foreach($elements as $node){
    foreach($node->childNodes as $child) {
        $data[] = array($child->nodeName => $child->nodeValue);
    }
}
```
|
It's generally much better to use XPath to query a document than it is to write code that depends on knowledge of the document's structure. There are two reasons. First, there's a lot less code to test and debug. Second, if the document's structure changes it's a lot easier to change an XPath query than it is to change a bunch of code.
Of course, you have to learn XPath, but (most of) XPath isn't rocket science.
PHP's DOM extension performs XPath queries through the `DOMXPath` class (`DOMXPath::query`). It's documented [here](https://www.php.net/manual/en/class.domxpath.php), and the user notes include some pretty good examples.
|
How do I iterate through DOM elements in PHP?
|
[
"",
"php",
"xml",
"dom",
""
] |
I need to be able to take a string like:
```
'''foo, bar, "one, two", three four'''
```
into:
```
['foo', 'bar', 'one, two', 'three four']
```
I have a feeling (with hints from #python) that the solution is going to involve the shlex module.
|
A solution using the shlex module handles escaped quotes, quotes nested inside other quotes, and all the fancy stuff the shell supports.
```
>>> import shlex
>>> my_splitter = shlex.shlex('''foo, bar, "one, two", three four''', posix=True)
>>> my_splitter.whitespace += ','
>>> my_splitter.whitespace_split = True
>>> print list(my_splitter)
['foo', 'bar', 'one, two', 'three', 'four']
```
escaped quotes example:
```
>>> my_splitter = shlex.shlex('''"test, a",'foo,bar",baz',bar \xc3\xa4 baz''',
posix=True)
>>> my_splitter.whitespace = ',' ; my_splitter.whitespace_split = True
>>> print list(my_splitter)
['test, a', 'foo,bar",baz', 'bar \xc3\xa4 baz']
```
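For reference, a sketch of the same approach wrapped in a function (Python 3 syntax; the token list is the same as above):

```python
import shlex

def split_quoted(s):
    """Split on commas and whitespace, respecting shell-style quotes."""
    lexer = shlex.shlex(s, posix=True)
    lexer.whitespace += ','          # treat commas as separators too
    lexer.whitespace_split = True
    return list(lexer)

print(split_quoted('foo, bar, "one, two", three four'))
# ['foo', 'bar', 'one, two', 'three', 'four']
```

Note that `"one, two"` survives as a single token while `three four` still splits on the space, exactly as in the interactive session above.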
|
It depends how complicated you want to get... do you want to allow more than one type of quoting? How about escaped quotes?
Your syntax looks very much like the common CSV file format, which is supported by the Python standard library:
```
import csv
reader = csv.reader(['''foo, bar, "one, two", three four'''], skipinitialspace=True)
for r in reader:
print r
```
Outputs:
```
['foo', 'bar', 'one, two', 'three four']
```
HTH!
|
How can I parse a comma-delimited string into a list (caveat)?
|
[
"",
"python",
"split",
"escaping",
"quotes",
""
] |
If I were to use more than one, what order should I use modifier keywords such as:
`public`, `private`, `protected`, `virtual`, `abstract`, `override`, `new`, `static`, `internal`, `sealed`, and any others I'm forgetting.
|
I had a look at Microsoft's [Framework Design Guidelines](https://msdn.microsoft.com/en-us/library/ms229042%28v=vs.100%29) and couldn't find any references to what order modifiers should be put on members. Likewise, a look at the [C# 5.0 language specification](https://www.microsoft.com/en-gb/download/details.aspx?id=7029) proved fruitless. There were two other avenues to follow, though: [EditorConfig files](https://learn.microsoft.com/en-us/visualstudio/ide/create-portable-custom-editor-options?view=vs-2017) and [ReSharper](https://www.jetbrains.com/resharper).
---
# .editorconfig
The MSDN page, [.NET coding convention settings for EditorConfig](https://learn.microsoft.com/en-us/visualstudio/ide/editorconfig-code-style-settings-reference?view=vs-2017) says:
> In Visual Studio 2017, you can define and maintain consistent code style in your codebase with the use of an [EditorConfig](https://learn.microsoft.com/en-us/visualstudio/ide/create-portable-custom-editor-options?view=vs-2017) file.
>
> # Example EditorConfig file
>
> To help you get started, here is an example .editorconfig file with the default options:
>
> ```
> ###############################
> # C# Code Style Rules #
> ###############################
>
> # Modifier preferences
> csharp_preferred_modifier_order = public,private,protected,internal,static,extern,new,virtual,abstract,sealed,override,readonly,unsafe,volatile,async:suggestion
> ```
In other words: the default order for modifiers, following the default editorconfig settings is:
```
{ public / private / protected / internal / protected internal / private protected } // access modifiers
static
extern
new
{ virtual / abstract / override / sealed override } // inheritance modifiers
readonly
unsafe
volatile
async
```
---
## ReSharper
[ReSharper](https://www.jetbrains.com/resharper/), however, is more forthcoming. The defaults for ReSharper 2018.1¹, with access modifiers (which are mutually exclusive) and inheritance modifiers (also mutually exclusive) grouped together, are:
```
{ public / protected / internal / private / protected internal / private protected } // access modifiers
new
{ abstract / virtual / override / sealed override } // inheritance modifiers
static
readonly
extern
unsafe
volatile
async
```
This is stored in the `{solution}.dotsettings` file under the
```
"/Default/CodeStyle/CodeFormatting/CSharpFormat/MODIFIERS_ORDER/@EntryValue"
```
node - the ReSharper default² is:
```
<s:String x:Key="/Default/CodeStyle/CodeFormatting/CSharpFormat/MODIFIERS_ORDER/@EntryValue">
public protected internal private new abstract virtual sealed override static readonly extern unsafe volatile async
</s:String>
```
¹ [ReSharper 2018.1](https://www.jetbrains.com/resharper/whatsnew/#v2018-1) says that it has "*Full understanding of C# 7.2*" and explicitly mentions the `private protected` access modifier.
² ReSharper only saves settings which differ from the default, so in general this node, as it is, will not be seen in the `dotsettings` file.
---
## `new static` vs `static new`
The MSDN page for [Compiler Warning CS0108](https://msdn.microsoft.com/en-us/library/3s8070fc.aspx) gives the example of a public field `i` on a base class being hidden by a public static field `i` on a derived class: their suggestion is to change `static` to `static new`:
> ```
> public class clx
> {
> public int i = 1;
> }
>
> public class cly : clx
> {
> public static int i = 2; // CS0108, use the new keyword
> // Use the following line instead:
> // public static new int i = 2;
> }
> ```
Likewise, the IntelliSense in Visual Studio 2015 also suggests changing `static` to `static new`
[](https://i.stack.imgur.com/7vWEY.png)
which is the same if the field `i` in the base class is also `static`.
That said, a cursory search on GitHub found that some projects override this default to put `static` *before*, not *after* `new`, the inheritance modifiers and `sealed`, e.g.
[the ReSharper settings for StyleCop GitHub project](https://github.com/StyleCop/StyleCop/blob/master/Project/Src/AddIns/ReSharper/StyleCop.dotSettings#L75):
```
<s:String x:Key="/Default/CodeStyle/CodeFormatting/CSharpFormat/MODIFIERS_ORDER/@EntryValue">
public protected internal private static new abstract virtual override sealed readonly extern unsafe volatile async
</s:String>
```
however since `static` cannot be used in conjunction with the inheritance modifiers or `sealed`, this is just a distinction between `new static` (the default, and suggested by the default editorconfig file) and `static new` (suggested by ReSharper).
Personally I prefer the latter, but Google searches in [referencesource.microsoft.com](http://referencesource.microsoft.com/) for [`new static`](https://www.google.co.uk/?gws_rd=ssl#q=inurl:referencesource.microsoft.com%2F+%22new+static%22) vs [`static new`](https://www.google.co.uk/?gws_rd=ssl#q=inurl:referencesource.microsoft.com%2F+%22static+new%22) in 2015 and 2018 gave:
```
(in 2015) (in 2018)
new static 203 427
static new 10 990
```
which implies that the preference at Microsoft is `static new`.
|
[StyleCop](https://stylecop.codeplex.com) is available as a [Visual Studio extension](https://marketplace.visualstudio.com/items?itemName=ChrisDahlberg.StyleCop) or a [NuGet package](https://www.nuget.org/packages/StyleCop.MSBuild) and can validate your source code against the rules some teams in Microsoft use. StyleCop likes the access modifier to come first.
EDIT: Microsoft isn't itself totally consistent; different teams use different styles. For example StyleCop suggests putting using directives in the namespace, but this is not followed in the Roslyn source code.
|
Is there a convention to the order of modifiers in C#?
|
[
"",
"c#",
""
] |
Is it possible to define a class in C# such that
```
class GenericCollection<T> : SomeBaseCollection<T> where T : Delegate
```
I couldn't for the life of me accomplish this last night in .NET 3.5. I tried using
`delegate, Delegate, Action<T> and Func<T, T>`
It seems to me that this should be allowable in some way. I'm trying to implement my own EventQueue.
I ended up just doing this [primitive approximation mind you].
```
internal delegate void DWork();
class EventQueue {
    private Queue<DWork> eventq;
}
```
But then I lose the ability to reuse the same definition for different types of functions.
Thoughts?
|
A number of classes are unavailable as generic constraints - Enum being another.
For delegates, the closest you can get is ": class", perhaps using reflection to check (for example, in the static constructor) that T *is* a delegate:
```
static GenericCollection()
{
    if (!typeof(T).IsSubclassOf(typeof(Delegate)))
    {
        throw new InvalidOperationException(typeof(T).Name + " is not a delegate type");
    }
}
```
|
Yes, it's possible as of C# 7.3: the family of constraints was extended to include `Enum`, `Delegate` and `unmanaged` types.
You can write this code without a problem:
```
void M<D, E, T>(D d, E e, T* t) where D : Delegate where E : Enum where T : unmanaged
{
}
```
[From Docs](https://learn.microsoft.com/en-us/dotnet/csharp/programming-guide/generics/constraints-on-type-parameters#unmanaged-constraint):
> Beginning with C# 7.3, you can use the unmanaged constraint to specify
> that the type parameter must be a non-nullable unmanaged type. The
> unmanaged constraint enables you to write reusable routines to work
> with types that can be manipulated as blocks of memory
Useful links:
[The future of C#](https://www.youtube.com/watch?v=QZ0rWLaMZeI&t=650s), from Microsoft Build 2018
[What's new in C# 7.3?](https://dev.to/borrrden/whats-new-in-c-73-26fk)
|
C# Generics won't allow Delegate Type Constraints
|
[
"",
"c#",
"generics",
"events",
"delegates",
"constraints",
""
] |
What is the correct way to do this? For example, how would I change a stored procedure with this signature:
```
CREATE PROCEDURE dbo.MyProcedure
@Param BIT = NULL
AS
SELECT *
FROM dbo.SomeTable T
WHERE T.SomeColumn = @Param
```
So that giving @Param with a value of 1 or 0 performs the filter, but not specifying it or passing NULL performs no filtering?
|
Assuming that NULL means "don't care" then use
```
CREATE PROCEDURE dbo.MyProcedure
@Param BIT = NULL
AS
SELECT *
FROM dbo.SomeTable T
WHERE T.SomeColumn = @Param OR @Param IS NULL
```
|
There's more than one way. Here's one:
```
SELECT *
FROM dbo.SomeTable T
WHERE T.SomeColumn = COALESCE(@Param, T.SomeColumn)
```
but this will not include rows for which T.SomeColumn is NULL.
The following alternative will include those rows:
```
SELECT *
FROM dbo.SomeTable T
WHERE T.SomeColumn = @Param OR @Param IS NULL
```
but it has the disadvantage of the repeated parameter, which is not nice in case you're using another way to pass in parameters, for example, using a placeholder.
|
Using a tristate parameter in a stored procedure
|
[
"",
"sql",
"stored-procedures",
""
] |
The SQL implementation of relational databases has been around in its current form for something like 25 years (since System R and Ingres). Even the main (loosely adhered-to) standard, ANSI-92 (although there were later updates), is a good 15 years old.
What innovations can you think of in SQL-based databases over the last ten years or so? I am specifically excluding OLAP, columnar, and other non-relational (or at least non-SQL) innovations. I also want to exclude 'application server' type features and bundling (like reporting tools).
Although the basic approach has remained fairly static, I can think of:
* Availability
* Ability to handle larger sets of data
* Ease of maintenance and configuration
* Support for more advanced data types (blob, xml, unicode etc)
Any others that you can think of?
|
* Hash joins
* Cost-based optimizers (pretty much turned query-writing on its head)
* Partitioning (enables much better VLDB management)
* Parallel (multi-threaded) query processing
* Clustering (not just availability but scalability too)
* More flexibility in SQL as well as easier integration of SQL with 3GL languages
* Better diagnostics capabilities
|
Analytic functions like RANK
|
What relational database innovations have there been in the last 10 years
|
[
"",
"sql",
"database",
"relational",
""
] |
Should I create two `CFile` objects and copy one into the other character by character? Or is there something in the library that will do this for me?
|
I would just use the [`CopyFile`](https://learn.microsoft.com/en-us/windows/win32/api/winbase/nf-winbase-copyfile) Win32 API function, but the example code in the [`CFile::Open`](https://learn.microsoft.com/en-us/cpp/mfc/reference/cfile-class?view=msvc-160#open) documentation shows how to copy files with `CFile` (using pretty much the method you suggest).
|
It depends on what you want to do. There are a number of ways to copy files:
* `CopyFile()`
* `CopyFileEx()`
* `SHFileOperation()`
* `IFileOperation` (replaces `SHFileOperation()` in Vista)
|
In an MFC application, what's the easiest way to copy a file from one directory to another?
|
[
"",
"c++",
"windows",
"mfc",
"file",
""
] |
In Eclipse (Ganymede) I'm debugging some code that uses Apache Commons HttpClient and would like to step into the HttpClient code. I've downloaded the source code and tried to attach it in the normal fashion (CTRL-click on the method name and use the Attach Source button). I've tried to attach both as external file and external folder with no success. I've attached source before with no issues and can currently step into Hibernate source code successfully.
I've even tried editing the .classpath file directly to add sourcepath manually. Still no luck. Refreshing the project, doing a clean build, closing and re-opening Eclipse do not solve the issue. Frustratingly, Eclipse provides no error message; it just does not attach the source.
Here are the entries in .classpath:
```
<!-- Hibernate. Works -->
<classpathentry kind="lib" path="/myEAP/EarContent/APP-INF/lib/hibernate.jar" sourcepath="D:/Data/Download/hibernate-3.2.2.ga/hibernate-3.2/src"/>
<!-- Commons HttpClient. Will not attach -->
<classpathentry kind="lib" path="/myEAP/EarContent/APP-INF/lib/commons-httpclient.jar" sourcepath="D:/Data/Download/commons-httpclient-3.1/src/java"/>
```
I've tried changing the path to D:/Data/Download/commons-httpclient-3.1/src and that does not work either.
The directory structure is:
```
D
Data
Download
commons-httpclient-3.1
src
java
org
apache
commons
httpclient
AutoCloseInputStream.java
... (and so forth)
```
|
Try pointing it at the directory that directly contains the top-level package — "D:/Data/Download/commons-httpclient-3.1/src/java" in your case. What worked for me was creating a new src zip file containing the "org" folder and everything beneath it.
Here's my .classpath entry, (which works for me) in case it helps:
```
<classpathentry kind="lib" path="/blib/java/commons-httpclient-3.1/commons-httpclient-3.1.jar" sourcepath="/blib/java/commons-httpclient-3.1/commons-httpclient-3.1-src.zip"/>
```
|
I've found that sometimes you point to the directory you'd assume was correct, and Eclipse still reports that it can't find the file in the attached source.
In those cases, I've noticed the last path element was "src". Just removing that element (thus pointing one level above the folder that actually contains "org" or "com") magically makes it work.
Somehow Eclipse seems to imply the "src" path element if it's present, and if you then also include it in the source path, Eclipse chokes. Or something like that.
|
Attach Source Issue in Eclipse
|
[
"",
"java",
"eclipse",
"eclipse-3.4",
"ganymede",
""
] |