| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
It is possible to create an array at compile time like this:
```
int[] myValues = new int[] { 1, 2, 3 } ;
```
But I would like to do something like this:
```
List<int> myValues = new List<int>() { 1, 2, 3 };
```
The compiler says No. Is there a way to do this (C# 2.0) without using LINQ (C# 3.0)? | ```
List<int> myValues = new List<int>(new int[] { 1, 2, 3 } );
```
This will create an intermediate array, however, so there may be a more efficient way of doing the same thing.
EDIT:
John Feminella suggested creating a factory method to accept a list of parameters and return a List which you could implement as follows:
```
List<T> CreateList<T>(params T[] values)
{
return new List<T>(values);
}
```
which you can use as follows:
```
List<int> myValues = CreateList(1, 2, 3);
``` | Patrick has the answer you are looking for. But I wanted to add a little bit. If you want to do longer ranges of numbers and don't feel like typing them out by hand, you should look at the Enumerable.Range method. It can be used to generate a range of sequential numbers at runtime. For instance, your sample could have been written as follows
```
var list = Enumerable.Range(1,3).ToList();
``` | Create List<int> with values at compile time | [
"",
"c#",
"generics",
"c#-2.0",
""
] |
I am writing a Japanese language quiz program and I don't want to require people to install the East Asian language pack for Windows XP. I am using GDI+ for drawing text. I tried downloading a free Unicode font and using that to draw text. I tested it on my computer (with the East Asian language pack installed) and it displayed the Japanese characters correctly using the downloaded font, but on another computer without the language pack it didn't work.
I am looking for suggestions on how to make this work. The program is pretty simple and is written in C++ using only Win32 API, GDI+, and STL. If possible I would like to stick to just these libraries. I do have UNICODE #defined.
The code that I am basically using for drawing text is pasted below:
```
#include <gdiplus.h>
using namespace Gdiplus ;
Graphics * gi ;
PrivateFontCollection _fonts ;
_fonts.AddFontFile(fontFilename) ;
Font * currentFont = new Font(fontName, fontSize, FontStyleRegular, UnitPoint, &_fonts) ;
std::wstring text = L"Some text" ;
gi->DrawString(text.c_str(), (INT)text.size(), currentFont, rectangle, &format, &brush) ;
``` | I was wrong. If you have a font that includes Japanese characters, it will display correctly on Windows XP even if the East Asian Language Pack is not installed.
If you have the East Asian Language Pack installed and your font doesn't support Japanese characters, Windows will pick one of two fonts that it thinks match your font the closest. This led me to think that the fonts I was using included Japanese characters, since the way the characters were drawn changed when I switched between my font and the system font.
Microsoft has a free-to-install Japanese font called [Meiryo Collection Version 5.00](http://www.microsoft.com/downloads/details.aspx?FamilyID=f7d758d2-46ff-4c55-92f2-69ae834ac928&displaylang=en). There is also a [page](http://www.wazu.jp/gallery/Fonts_Japanese.html) listing more Japanese fonts; the only problem with the fonts listed there is that the licensing is not clear. | The solution is fairly easy. Since you don't want to ship the font, and you can't count on any font being available, you will have to ship images of all the characters you want to use. I suggest using PNG, as JPEG is aimed at photos (lots of small gradients). | Need help to display Japanese Text using GDI+ without installing East Asian Language pack in Windows XP | [
"",
"c++",
"windows",
"winapi",
"unicode",
"gdi+",
""
] |
Why does SQL Server insist that the temp table already exists? Only one branch or the other will ever run, so the table can never be created twice.
```
declare @checkvar varchar(10)
declare @tbl TABLE( colx varchar(10) )
set @checkvar ='a'
INSERT INTO @tbl (colx) VALUES('a')
INSERT INTO @tbl (colx) VALUES('b')
INSERT INTO @tbl (colx) VALUES('c')
INSERT INTO @tbl (colx) VALUES('d')
IF @checkvar is null select colx INTO #temp1 FROM @tbl
ELSE select colx INTO #temp1 FROM @tbl WHERE colx =@checkvar
```
The error is: There is already an object named '#temp1' in the database.
Is there an elegant way around this?
If @checkvar is null, I want the whole table;
otherwise, give me just the rows where colx = @checkvar.
EDIT: the column is a varchar, not an int. | Can't you just rewrite the statement?
```
SELECT colx INTO #temp1 FROM @tbl WHERE (@checkvar IS NULL) OR (colx = @checkVar)
``` | You can create an empty temp table with the desired structure by using `WHERE 1=0`. Then insert the desired records with your original code
```
SELECT colx INTO #temp1
FROM @tbl
WHERE 1 = 0 -- this is never true
IF @checkvar IS NULL
BEGIN
INSERT INTO #temp1 (colx)
SELECT colx FROM @tbl
END
ELSE
BEGIN
INSERT INTO #temp1 (colx)
SELECT colx
FROM @tbl
WHERE colx = @checkvar
END
``` | Using temp tables in IF .. ELSE statements | [
"",
"sql",
"sql-server",
"sql-server-2005",
"t-sql",
""
] |
The [Same Origin Policy Documentation](https://developer.mozilla.org/en/Same_origin_policy_for_JavaScript) says this:
> There is one exception to the same origin rule. A script can set the value of document.domain to a suffix of the current domain. If it does so, the shorter domain is used for subsequent origin checks. For example, assume a script in the document at <http://store.company.com/dir/other.html> executes the following statement:
>
> document.domain = "company.com";
>
> After that statement executes, the page would pass the origin check with <http://company.com/dir/page.html>. However, by the same reasoning, company.com could not set document.domain to othercompany.com.
Do all popular browsers support this? If not, which ones don't? | Firefox 2 and 3, IE 6, 7 and 8, Chrome, Safari 2 and 3, and Opera 9 all support document.domain.
Other "newer" browsers likely will as well, however those are the ones that I've actually tested my code (which makes use of document.domain) | Document domain should be lowercase and the rules are like this
```
// Actual domain is "www.foo.com"
document.domain = "foo.com"; // this is valid
// Actual domain is "bar.foo.com"
document.domain = "www.foo.com"; // this is invalid, "bar.foo.com" is not a subdomain of "www.foo.com"
// Actual domain is "blah.bar.foo.com"
document.domain = "bar.foo.com" // Ok
document.domain = "foo.com" // Still ok
document.domain = "bar.foo.com" // Invalid, you can't change it back to a more specific domain.
``` | Does setting document.domain work in all (most) browsers? | [
"",
"javascript",
"browser",
"cross-browser",
""
] |
In my html file I put as
```
<script src='http://otherdomain/file.js' ></script>
```
When I run my webpage, that other JavaScript creates some HTML content using the `document.write` command.
How can I read that dynamically generated HTML using the JavaScript on my own page?
Is it possible? | You could overwrite `document.write` to just *store* the input and not write it. When you're done, restore it to normal behavior:
```
<script>
var outputBuffer = "", old = document.write;
document.write = function(buffer) {
outputBuffer += buffer; // store the contents for later use
};
</script>
<script src="http://otherdomain.com/file.js"></script>
<script>
document.write = old; // restore document.write to normal behavior
document.write(outputBuffer); // if you want to write it out
</script>
```
A working demo: <http://jsbin.com/isuqi> | If you know where the other JavaScript writes its content it's easy to read it. If for example everything is written into the following div-container:
```
<div id="updatable"></div>
```
Then you could retrieve it by calling something like `document.getElementById("updatable").childNodes` or `document.getElementById("updatable").innerHTML` | read javascript source | [
"",
"javascript",
""
] |
How do I create a decorator that applies to classes?
Specifically, I want to use a decorator `addID` to add a member `__id` to a class, and change the constructor `__init__` to take an `id` argument for that member.
```
def getId(self): return self.__id
classdecorator addID(cls):
def __init__(self, id, *args, **kws):
self.__id = id
self.getId = getId
cls.__init__(self, *args, **kws)
@addID
class Foo:
def __init__(self, value1):
self.value1 = value1
```
The above should be equivalent to:
```
class Foo:
def __init__(self, id, value1):
self.__id = id
self.value1 = value1
def getId(self): return self.__id
``` | I would second the notion that you may wish to consider a subclass instead of the approach you've outlined. However, not knowing your specific scenario, YMMV :-)
What you're thinking of is a metaclass. The `__new__` function in a metaclass is passed the full proposed definition of the class, which it can then rewrite before the class is created. You can, at that time, sub out the constructor for a new one.
Example:
```
def substitute_init(self, id, *args, **kwargs):
pass
class FooMeta(type):
def __new__(cls, name, bases, attrs):
attrs['__init__'] = substitute_init
return super(FooMeta, cls).__new__(cls, name, bases, attrs)
class Foo(object):
__metaclass__ = FooMeta
def __init__(self, value1):
pass
```
Replacing the constructor is perhaps a bit dramatic, but the language does provide support for this kind of deep introspection and dynamic modification. | Apart from the question whether class decorators are the right solution to your problem:
In Python 2.6 and higher, there are class decorators with the @-syntax, so you can write:
```
@addID
class Foo:
pass
```
In older versions, you can do it another way:
```
class Foo:
pass
Foo = addID(Foo)
```
Note however that this works the same as for function decorators, and that the decorator should return the new (or modified original) class, which is not what you're doing in the example. The addID decorator would look like this:
```
def addID(original_class):
orig_init = original_class.__init__
# Make copy of original __init__, so we can call it without recursion
def __init__(self, id, *args, **kws):
self.__id = id
self.getId = getId.__get__(self)  # bind getId so foo.getId() is callable
orig_init(self, *args, **kws) # Call the original __init__
original_class.__init__ = __init__ # Set the class' __init__ to the new one
return original_class
```
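A quick, self-contained sanity check of this decorator (Python 2.6+ syntax; note that a plain `self.getId = getId` would store an unbound function on the instance, so this sketch binds it with `__get__` to make `foo.getId()` callable):

```
def getId(self):
    return self.__id

def addID(original_class):
    orig_init = original_class.__init__

    def __init__(self, id, *args, **kws):
        self.__id = id
        self.getId = getId.__get__(self)  # bind getId to this instance
        orig_init(self, *args, **kws)     # call the original __init__

    original_class.__init__ = __init__
    return original_class

@addID
class Foo(object):
    def __init__(self, value1):
        self.value1 = value1

foo = Foo(42, "hello")
print(foo.getId())   # 42
print(foo.value1)    # hello
```

The same `addID` works with the pre-2.6 `Foo = addID(Foo)` spelling as well.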
You could then use the appropriate syntax for your Python version as described above.
But I agree with others that inheritance is better suited if you want to override `__init__`. | How to decorate a class? | [
"",
"python",
"decorator",
""
] |
When comparing similar lines, I want to highlight the differences on the same line:
```
a) lorem ipsum dolor sit amet
b) lorem foo ipsum dolor amet
lorem <ins>foo</ins> ipsum dolor <del>sit</del> amet
```
While difflib.HtmlDiff appears to do this sort of inline highlighting, it produces very verbose markup.
Unfortunately, I have not been able to find another class/method which does not operate on a line-by-line basis.
Am I missing anything?
Any pointers would be appreciated! | For your simple example:
```
import difflib
def show_diff(seqm):
"""Unify operations between two compared strings
seqm is a difflib.SequenceMatcher instance whose a & b are strings"""
output= []
for opcode, a0, a1, b0, b1 in seqm.get_opcodes():
if opcode == 'equal':
output.append(seqm.a[a0:a1])
elif opcode == 'insert':
output.append("<ins>" + seqm.b[b0:b1] + "</ins>")
elif opcode == 'delete':
output.append("<del>" + seqm.a[a0:a1] + "</del>")
elif opcode == 'replace':
raise NotImplementedError("what to do with 'replace' opcode?")
else:
raise RuntimeError("unexpected opcode")
return ''.join(output)
>>> sm= difflib.SequenceMatcher(None, "lorem ipsum dolor sit amet", "lorem foo ipsum dolor amet")
>>> show_diff(sm)
'lorem<ins> foo</ins> ipsum dolor <del>sit </del>amet'
```
This works with strings. You should decide what to do with "replace" opcodes. | Here's an inline differ inspired by @tzot's [answer above](https://stackoverflow.com/a/788780/37020) (also Python 3 compatible):
```
def inline_diff(a, b):
import difflib
matcher = difflib.SequenceMatcher(None, a, b)
def process_tag(tag, i1, i2, j1, j2):
if tag == 'replace':
return '{' + matcher.a[i1:i2] + ' -> ' + matcher.b[j1:j2] + '}'
if tag == 'delete':
return '{- ' + matcher.a[i1:i2] + '}'
if tag == 'equal':
return matcher.a[i1:i2]
if tag == 'insert':
return '{+ ' + matcher.b[j1:j2] + '}'
assert False, "Unknown tag %r"%tag
return ''.join(process_tag(*t) for t in matcher.get_opcodes())
```
It's not perfect, for example, it would be nice to expand 'replace' opcodes to recognize the full word replaced instead of just the few different letters, but it's a good place to start.
Sample output:
```
>>> a='Lorem ipsum dolor sit amet consectetur adipiscing'
>>> b='Lorem bananas ipsum cabbage sit amet adipiscing'
>>> print(inline_diff(a, b))
Lorem{+ bananas} ipsum {dolor -> cabbage} sit amet{- consectetur} adipiscing
``` | Python difflib: highlighting differences inline? | [
"",
"python",
"diff",
""
] |
I'm writing unit-tests for an app that uses a database, and I'd like to be able to run the app against some sample/test data - but I'm not sure of the best way to setup the initial test data for the tests.
What I'm looking for is a means to run the code-under-test against the same database (or schematically identical) that I currently use while debugging - and before each test, I'd like to ensure that the database is reset to a clean slate prior to inserting the test data.
I realize that using an IRepository pattern would allow me to remove the complexity of testing against an actual database, but I'm not sure that will be possible in my case.
Any suggestions or articles that could point me in the right direction?
Thanks!
--EDIT--
Thanks everyone, those are some great suggestions! I'll probably go the route of mocking my data access layer, combined with some simple set-up classes to generate exactly the data I need per test. | Here's the general approach I try to use. I conceive of tests at about three or four levels: unit tests, interaction tests, integration tests, acceptance tests.
At the unit test level, it's just code. Any database interaction is mocked out, either manually or using one of the popular frameworks, so loading data is not an issue. They run quick, and make sure the objects work as expected. This allows for very quick write-test/write code/run test cycles. The mock objects serve up the data that is needed by each test.
Interaction tests test the interactions of non-trivial class interactions. Again, no database required, it's mocked out.
Now at the integration level, I'm testing integration of components, and that's where real databases, queues, services, yada yada, get thrown in. If I can, I'll use one of the popular in-memory databases, so initialization is not an issue. It always starts off empty, and I use utility classes to scrub the database and load exactly the data I want before each test, so that there's no coupling between the tests.
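That scrub-then-load pattern might look like the following with Python's sqlite3 in-memory database and unittest (the schema and names here are illustrative, not from any particular project):

```
import sqlite3
import unittest

class UserRepositoryTest(unittest.TestCase):
    def setUp(self):
        # A fresh in-memory database per test: it always starts empty,
        # so there is no coupling between tests.
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
        # Load exactly the data this test needs.
        self.db.executemany("INSERT INTO users (name) VALUES (?)",
                            [("alice",), ("bob",)])

    def tearDown(self):
        self.db.close()

    def test_count(self):
        (n,) = self.db.execute("SELECT COUNT(*) FROM users").fetchone()
        self.assertEqual(n, 2)

if __name__ == "__main__":
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(UserRepositoryTest)
    unittest.TextTestRunner(verbosity=0).run(suite)
```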
The problem I've hit using in-memory databases is that they often don't support all the features I need. For example, perhaps I require an outer join, and the in-memory DB doesn't support that. In that case, I'll typically test against a local conventional database such as MySQL, again, scrubbing it before each test. Since the app is deployed to production in a separate environment, that data is untouched by the testing cycle. | The best way I've found to handle this is to use a static test database with known data, and use transactions to ensure that your tests don't change anything.
In your test setup you would start a transaction, and in your test cleanup, you would roll the transaction back. This lets you modify data in your tests but also makes sure everything gets restored to its original state when the test completes. | Unit-Testing: Database set-up for tests | [
"",
"c#",
"database",
"unit-testing",
""
] |
While using the std::for_each algorithm, how do I break out when a certain condition is satisfied? | You can use the [`std::find_if`](https://en.cppreference.com/w/cpp/algorithm/find) algorithm, which will stop and return an iterator to the first element for which the predicate returns `true`. So your predicate should be changed to return a boolean that serves as the continue/break condition.
However, this is something of a hack just so you can stay within the standard algorithms.
Another way is to use BOOST_FOREACH. | You can use std::any_of (or std::all_of or std::none_of), e.g. like this:
```
std::vector<int> a;
// ...
std::all_of(a.begin(), a.end(), [&](int val) {
// return false if you want to break, true otherwise
});
```
However, this is a wasteful solution (the return values are not really used for anything), and you're better off writing your own loop. | Breaking in std::for_each loop | [
"",
"c++",
"stl",
""
] |
For fun, I've been toying around with writing a load balancer in Python and have been trying to figure out the best (correct?) way to test whether a port is available and the remote host is still there.
I'm finding that, once connected, it becomes difficult to tell when the remote host goes down. I've turned keep-alive on, but can't get it to recognize a downed connection sooner than a minute (I realize polling more often than once a minute might be overkill, but let's say I wanted to), even after setting the various TCP_KEEPALIVE options to their lowest.
When I use nonblocking sockets, I've noticed that a recv() will return an error ("resource temporarily unavailable") when it reads from a live socket, but returns "" when reading from a dead one (a send and recv of 0 bytes, which might be the cause?). That seems like an odd way to test for connectedness, though, and it makes it impossible to tell whether the connection died only *after* sending some data.
Aside from connecting/disconnecting for every check, is there something I can do? Can I manually send a tcp keepalive, or can I establish a lower level connection that will let me test the connectivity without sending real data the remote server would potentially process? | I'd recommend not leaving your (single) test socket connected - make a new connection each time you need to poll. Every load balancer / server availability system I've ever seen uses this method instead of a persistent connection.
If the remote server hasn't responded within a reasonable amount of time (e.g. 10s) mark it as "down". Use timers and signals rather than function response codes to handle that timeout. | "it becomes difficult to tell when the remote host goes down"
Correct. This is a feature of TCP. The whole point of TCP is to have an enduring connection between ports. Theoretically an application can drop and reconnect to the port through TCP (the socket libraries don't provide a lot of support for this, but it's part of the TCP protocol). | Monitoring a tcp port | [
"",
"python",
"tcp",
"monitoring",
"port",
""
] |
I'm wondering how to set up a clever way to keep all my input 'clean': a procedure to run at the beginning of every script.
I thought of creating a class to do that, and then adding a two-letter prefix to the beginning of every input name to identify the kind of input, for example:
```
in-mynumber
tx-name
ph-phone
em-email
```
So, at the top of my scripts, I just run a function (for example):
```
function cleanInputs(){
foreach($_GET AS $taintedKey => $taintedValue){
$prefix = substr($taintedKey, 0, 2);
switch($prefix){
case 'in':
//I assume this input is an integer
$cGet[$taintedKey] = intval($taintedValue);
break;
case 'tx':
//I assume this input is normal text;
//it can contain only letters, numbers and a few symbols
if(preg_match($regExp, $taintedValue)){
$cGet[$taintedKey] = $taintedValue;
}else{
$cGet[$taintedKey] = false;
}
break;
case 'em':
//I assume this input is a valid email
if(preg_match('/^[a-zA-Z0-9-_.]+@[a-zA-Z0-9-_.]+.[a-zA-Z]{2,4}$/', $taintedValue)){
$cGet[$taintedKey] = $taintedValue;
}else{
$cGet[$taintedKey] = false;
}
break;
}
}
}
```
So I'll create two other arrays, $cGet and $cPost, with the cleaned data from $_GET and $_POST respectively, and in my scripts I'll use those arrays and forget about $_GET/$_POST completely.
I'm even thinking about adding a second prefix to determine the input's max length... for example:
tx-25-name
But I'm not really sure about that, and if I go this way, maybe an OOP approach would be better.
What do you think about that?
Does it seem like a good approach to use?
The negative points I can see so far (I haven't actually used it yet; it's just this morning's idea):
1. The prefixes, and thus the procedures, must be numerous if I don't want my application to be too restrictive;
2. The variable names I send will become a little longer (but we are talking about 3-6 chars, so it shouldn't be a problem)
Any suggestion is really appreciated!
**EDIT:**
I'm not trying to reinvent the wheel; my post wasn't about the system for sanitizing input, but about the procedure for doing it. I use HTML Purifier to clean possible XSS injection in HTML data, and of course I use parameterized queries. I'm just wondering whether it is better to handle things input by input, or to sanitize them all at the beginning and consider them clean in the rest of the script.
The method I thought of is not miraculous and nothing new under the sun, but I think that truncating the input when it is not in the format I expect can be useful...
Why check for SQL injection in the 'name' field, which must contain just letters and the apostrophe character?
Just remove everything that is not a letter or an apostrophe, escape the latter, and run it through a parameterized query.
Then, if you expect an email, just delete everything that is not a valid email. | The idea is fine in itself; however, I wonder if it will really be very useful.
For one thing, SQL injections and HTML injections can (should) be protected against in other ways. SQL injections are prevented by parameterized queries (a must-have in this day and age), and HTML injections are prevented by the `htmlspecialchars()` function, which should be called **right before outputting the string to the user**. Don't store encoded strings in the DB or (even worse) encode them as soon as you receive them; working with them will be hell later.
Other than these two injection attacks, what will your method do? Well, it can do some regexps for stuff like numbers, phone numbers, emails, names and dates. But that's about it. Unfortunately that's just a part of all the validations you will have to do. Other common cases that you cannot validate there are cross-checking of inputs (start date before end date), and checking that a value is in a list of allowed predefined values (say, for a `<select>` element). And there are an infinite number of custom validation steps that you will have in your application as well. Is it worth it to split all validation into "generic type validation" and "custom rule validation"? I don't know. Perhaps. Or perhaps this will just make a bigger mess. | There are many [well-made, tested PHP classes](http://www.google.fr/search?q=php+sanitize&btnG=Rechercher&meta=) that already sanitize inputs. Why make another one? Besides, sanitizing input is more than just verifying data types. It implies checking for SQL injections, XSS attacks, etc... | Can this be an efficient and reliable way to purify user input? | [
"",
"php",
"validation",
""
] |
I have looked for a comprehensive article on this and have yet to find one that is good enough.
Can someone explain to me the workings of the transaction types
(Required, RequiresNew, Mandatory, Never...), and mainly how these affect service calls between services with different types? | [A developer's guide to EJB transaction management](http://www.kevinboone.com/ejb-transactions.html) seems like a pretty comprehensive guide to me. | Enterprise Java Beans 3.0 by Bill Burke and Richard Monson Haefel is an excellent reference for anything to do with EJB3.
The txn attributes are pretty easy, in fact. Basically you apply them at method level in EJBs using annotations.
1. Required means that a txn is required to run the method. If the caller is in a txn, it will be used; if not, a new txn will be created and used.
2. RequiresNew means a new txn is needed to run that method. If the caller was in a txn, it will be suspended and a new txn created and used. If not, a new txn will be created and used.
3. Mandatory means the caller needs to be in a txn when calling the said method. Otherwise an error is raised.
4. Supports means the method doesn't care whether the caller was in a txn or not.
5. NotSupported - if the caller was in a txn, that txn will be suspended and the method will be run without a txn.
6. Never - the same as NotSupported, except that it will raise an error if the caller has a txn (NotSupported won't). | EJB Transaction Type | [
"",
"java",
"transactions",
"ejb-3.0",
"ejb",
""
] |
'[Strict aliasing](https://stackoverflow.com/questions/98650/what-is-the-strict-aliasing-rule)' optimization needs special care in the source code, such as using a union instead of pointer casts. Is there a way to detect with preprocessor directives (#if/else) whether the compiler will perform such optimizations?
I would like to maintain the old and non-strict-aliasing-prepared code path for processors and compilers that don't care. It seems faster.
Edit: [GCC predefined macros](http://gcc.gnu.org/onlinedocs/cpp/Common-Predefined-Macros.html#Common-Predefined-Macros) does not seem to have anything about aliasing. In other words, I'm mostly interested in gcc 4.x, but also in a general solution (which does not seem to exist). | Completely implementation-dependent - you need to check the docs for your specific compiler(s). And when asking questions like this, it's a good idea to mention which compiler(s) you are using.
A semi-portable way to do this is from your Makefile - define different targets for aliased & unaliased versions and define your own STRICT_ALIASING (or whatever) preprocessor symbol for the aliased version. | [For GCC, see the "may_alias" attribute](http://gcc.gnu.org/onlinedocs/gcc-4.3.3/gcc/Type-Attributes.html) | How to detect 'strict aliasing' at compile time? | [
"",
"c++",
"c",
"optimization",
"pointers",
""
] |
In other words, how does the implementation keep track of the count?
Is there a map-like object, accessible by all the `shared_ptr` instances, whose key is the pointer's address and whose value is the number of references? If I had to implement a `shared_ptr`, this is the first idea that would come to my mind.
Is there a possibility for a memory leak in case of these reference-counting smart pointers? If so, how can I avoid them? | I've seen two different non-intrusive approaches to this:
1. The smart pointer allocates a small block of memory to contain the reference counter. Each copy of the smart pointer then receives a pointer to the actual object and a pointer to the reference count.
2. In addition to an object pointer, each smart pointer contains a previous and next pointer, thereby forming a doubly-linked list of smart pointers to a particular object. The reference count is implicit in the list. When a smart pointer is copied, it adds itself to the list. Upon destruction, each smart pointer removes itself from the list. If it's the last one in the list, it then frees the referenced object as well.
If you go [here](http://ootips.org/yonat/4dev/smart-pointers.html) and scroll to the bottom, there is an excellent diagram which explains these methods much more clearly. | Many answers address the way the reference count is stored (in a block of memory shared by all shared_ptr instances that hold the same native pointer), but most gloss over the problem of leaks.
The easiest way of leaking memory with reference counted pointers is creating cycles. As an example, a doubly linked list with at least two elements where all the pointers are shared_ptr is guaranteed not to be deleted. Even if the external pointers are freed, the internal pointers will still count, and the reference count will not reach 0. That is, at least, with the most naïve implementation.
The easiest solution to the cycle problem is mixing shared_ptr (reference counted pointers) with weak pointers that do not share the ownership of the object.
Shared pointers will share both the resource (pointer) and the additional reference_count information. When you use weak pointers, the reference count is doubled: there is a shared pointer reference count and a weak pointer reference count. The resource is released whenever the shared pointer count reaches 0, but the reference_count information is left alive until the last weak pointer is released.
In the doubly linked list, the external reference is held in a shared_ptr, while the internal links are just weak_ptr. Whenever there are no external references (shared_ptr), the elements of the list are released, deleting the weak references. At the end all weak references have been deleted and the last weak pointer to each resource frees the reference_count information.
It is less confusing than the above text seems... I'll try again later. | How does a reference-counting smart pointer's reference counting work? | [
"",
"c++",
"memory-management",
"memory-leaks",
"smart-pointers",
"shared-ptr",
""
] |
I am trying to make a Django view that will give a JSON response with the earliest and latest objects. But unfortunately it fails with this error.
```
'str' object has no attribute '_meta'
```
I have other serialization and it works.
Here is the code.
```
def get_calendar_limits(request):
result = serializers.serialize("json", Session.objects.aggregate(Max('date'), Min('date')), ensure_ascii=False)
return HttpResponse(result, mimetype="application/javascript")
```
Thanks a lot beforehand. | Take a look at the following:
```
objects = Session.objects.aggregate(Max('date'), Min('date'))
print [type(o) for o in objects]
result = serializers.serialize("json", objects, ensure_ascii=False)
```
You might want to just run the above in interactive Python as an experiment.
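Concretely, `aggregate()` returns a plain `dict` (something like `{'date__max': ..., 'date__min': ...}`), not a queryset of model instances, which is why `serializers.serialize` raises. A likely fix, sketched here without Django so it can stand alone, is to serialize the dict with the `json` module (in the view you would then pass `json.dumps(...)` to `HttpResponse` instead):

```
import datetime
import json

def limits_to_json(limits):
    # aggregate() gives something like this, not a queryset:
    # {'date__max': datetime.date(...), 'date__min': datetime.date(...)}
    return json.dumps(limits, default=str)  # default=str handles date objects

sample = {
    "date__max": datetime.date(2009, 5, 1),
    "date__min": datetime.date(2009, 1, 1),
}
print(limits_to_json(sample))
```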
What type are your objects? Is that type serializable? | I get the same error when trying to serialize an object that is not derived from Django's Model | Django serializer gives 'str' object has no attribute '_meta' error | [
"",
"python",
"django",
"json",
""
] |
We have a WPF application where parts of it may throw exceptions at runtime. I'd like to globally catch any unhandled exceptions and log them, but otherwise continue program execution as if nothing happened (kinda like VB's `On Error Resume Next`).
Is this possible in C#? And if so, where exactly would I need to put the exception handling code?
Currently I can't see any single point where I could wrap a `try`/`catch` around and which would catch all exceptions that could occur. Even then, I would have left whatever has been executed because of the catch. Or am I thinking in horribly wrong directions here?
**ETA:** Because many people below pointed it out: The application is not for controlling nuclear power plants. If it crashes, it's not that big a deal, but it throws random exceptions that are mostly UI-related that are a nuisance in the context where it would be used. There were (and probably still are) a few of those and since it uses a plugin architecture and may be extended by others (also students in that case; so *no* experienced developers that are able to write completely error-free code).
As for the exceptions that get caught: I do log them to a log file, including the complete stack trace. That was the whole point of that exercise. Just to counter those people that were taking my analogy to VB's OERN too literally.
I know that blindly ignoring certain classes of errors is dangerous and might corrupt my application instance. As said before, this program isn't mission-critical for anyone. No-one in their right mind would bet the survival of the human civilization on it. It's simply a little tool for testing certain design approaches wrt. software engineering.
For the immediate use of the application there are not many things that can happen on an exception:
* No exception handling – error dialog and application exit. Experiment has to be repeated, though likely with another subject. No errors have been logged, which is unfortunate.
* Generic exception handling – benign error trapped, no harm done. This should be the common case judged from all errors we were seeing during development. Ignoring this kind of errors should have no immediate consequences; the core data structures are tested well enough that they will easily survive this.
* Generic exception handling – serious error trapped, possibly crash at a later point. This may happen rarely. We've never seen it so far. The error is logged anyway and a crash might be inevitable. So this is conceptually similar to the very first case, except that we have a stack trace. And in the majority of cases the user won't even notice.
As for the experiment data generated by the program: A serious error would at worst just cause no data to be recorded. Subtle changes that change the result of the experiment ever so slightly are pretty unlikely. And even in that case, if the results seem dubious the error was logged; one can still throw away that data point if it's a total outlier.
To summarize: Yes, I consider myself still at least partially sane and I don't consider a global exception handling routine which leaves the program running to be necessarily totally evil. As said twice before, such a decision might be valid, depending on the application. In this case it was judged a valid decision and not total and utter bullshit. **For any other application that decision might look different.** But please don't accuse me or the other people who worked on that project to potentially blow up the world just because we're ignoring errors.
Side note: There is exactly one user for that application. It's not something like Windows or Office that gets used by millions where the cost of having exceptions bubble to the user at all would be very different in the first place already. | Use the [`Application.DispatcherUnhandledException Event`](http://msdn.microsoft.com/en-us/library/system.windows.application.dispatcherunhandledexception.aspx). See [this question](https://stackoverflow.com/questions/1472498/wpf-global-exception-handler) for a summary (see [Drew Noakes' answer](https://stackoverflow.com/a/1472562/873263)).
Be aware that there'll still be exceptions which preclude successfully resuming your application, like after a stack overflow, exhausted memory, or lost network connectivity while you're trying to save to the database. | Example code using **NLog** that will catch exceptions thrown from **all threads in the AppDomain**, from the **UI dispatcher thread** and from the **async functions**:
### App.xaml.cs :
```
public partial class App : Application
{
private static Logger _logger = LogManager.GetCurrentClassLogger();
protected override void OnStartup(StartupEventArgs e)
{
base.OnStartup(e);
SetupExceptionHandling();
}
private void SetupExceptionHandling()
{
AppDomain.CurrentDomain.UnhandledException += (s, e) =>
LogUnhandledException((Exception)e.ExceptionObject, "AppDomain.CurrentDomain.UnhandledException");
DispatcherUnhandledException += (s, e) =>
{
LogUnhandledException(e.Exception, "Application.Current.DispatcherUnhandledException");
e.Handled = true;
};
TaskScheduler.UnobservedTaskException += (s, e) =>
{
LogUnhandledException(e.Exception, "TaskScheduler.UnobservedTaskException");
e.SetObserved();
};
}
private void LogUnhandledException(Exception exception, string source)
{
string message = $"Unhandled exception ({source})";
try
{
System.Reflection.AssemblyName assemblyName = System.Reflection.Assembly.GetExecutingAssembly().GetName();
message = string.Format("Unhandled exception in {0} v{1}", assemblyName.Name, assemblyName.Version);
}
catch (Exception ex)
{
_logger.Error(ex, "Exception in LogUnhandledException");
}
finally
{
_logger.Error(exception, message);
}
}
``` | Globally catch exceptions in a WPF application? | [
"",
"c#",
"wpf",
"exception",
""
] |
I recently read that for a faster web page load it's a good practice to put the JavaScript links at the end. I did, but now the functions of the referenced file doesn't work. If I put the link at the beginning of the page, everything is fine.
Does this thing of putting JavaScript at the end work only under certain circumstances? | I went through some testing with this as well. If you are loading a Javascript file it is faster to put it at the end BUT it does come with some important caveats.
The first is that doing this often made the delay in some of my visual effects noticeable. For example, if I was using jQuery to format a table, the table would come up unformatted and then the code would run to reformat it. I didn't find this to be a good user experience and would rather the page came up complete.
Secondly, putting it at the end made it hard to put code in your pages because often functions didn't exist yet. If you have this in your page:
```
$(function() {
// ...
});
```
Well that won't work until the jQuery object is defined. If it's defined at the end of your page the above will produce an error.
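To make that concrete, here is a minimal, framework-free sketch of the guarded-call idea. (`callWhenDefined` and the bare `scope` object are hypothetical stand-ins for `window` and your jQuery-dependent setup code; they are not part of any library.)

```javascript
// Only run the setup callback once the named dependency has been defined.
function callWhenDefined(name, scope, fn) {
  if (typeof scope[name] === "function") {
    fn();
    return true;
  }
  return false; // dependency not loaded yet
}

const scope = {}; // simulates window before the late script tag has run
let ran = false;

callWhenDefined("$", scope, () => { ran = true; }); // too early: no-op

scope.$ = function () {}; // the script tag at the end of the page runs

callWhenDefined("$", scope, () => { ran = true; }); // now it runs

console.log(ran); // true
```

In a real page you would either poll with `setTimeout` until the guard succeeds, or simply move the dependent calls below the script include, which is what the rest of this answer argues for.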
Now you could argue that all that styling code could be put in the external file but that would be a mistake for performance reasons. I started off doing that on a project and then found my page took a second to run through all the Javascript that had been centralized. So I created functions for the relevant behaviour and then called the appropriate ones in each page, reducing the Javascript load run time to 50-200ms.
Lastly, you can (and should) minimize the load time of Javascript by versioning your Javascript files and then using far-futures Expires headers so they're only loaded once (each time they're changed), at which point where they are in the file is largely irrelevant.
So all in all I found putting the Javascript files at the end of the file to be cumbersome and ultimately unnecessary. | You do have to pay attention to the ordering, but libraries like JQuery make it easy to do it right. At the end of the page, include all the .JS files you need, and then, either in a separate file or in the page itself, put the Jquery calls to act on the page contents.
Because JQ deals with css-style selectors, it's very easy to avoid any Javascript in the main body of the page - instead you attach them to IDs and classes.
This is called [Unobtrusive Javascript](http://en.wikipedia.org/wiki/Unobtrusive_JavaScript) | Putting JavaScript at the end of the page produces an error | [
"",
"javascript",
""
] |
How can I test in jQuery when a JavaScript function is fully loaded?
I would like to use a gif, which displays loading, while the javascript function loads, and hide it when the function is fully loaded? | ```
$(function(){
$("#loadingGIF").show();
WaitForFunction();
});
function WaitForFunction()
{
if (!$.isFunction(FUNCTION_TO_WAIT_ON_HERE)) {
setTimeout( WaitForFunction, 100);
return;
}
Function_Loaded();
}
function Function_Loaded(){
$("#loadingGIF").hide();
}
``` | Just call yourself after defining the function. The statements after the function definition will only be executed after the preceding source text is read (and thus executed). | Wait until function is loaded | [
"",
"javascript",
"jquery",
""
] |
Having two InputStreams in Java, is there a way to merge them so you end with one InputStream that gives you the output of both streams? How? | As commented, it's not clear what you mean by merge.
Taking available input "randomly" from either is complicated by `InputStream.available` not necessarily giving you a useful answer, and by the blocking behaviour of streams. You would need two threads to be reading from the streams and then passing back data through, say, `java.io.Piped(In|Out)putStream` (although those classes have issues). Alternatively for some types of stream it may be possible to use a different interface, for instance `java.nio` non-blocking channels.
If you want the full contents of the first input stream followed by the second: `new java.io.SequenceInputStream(s1, s2)`. | `java.io.SequenceInputStream` might be what you need. It accepts an enumeration of streams, and will output the contents of the first stream, then the second, and so on until all streams are empty. | How do you merge two input streams in Java? | [
"",
"java",
"io",
"inputstream",
""
] |
Pretty straightforward question. I'm building an app which reads data from a SQL Server 2005 instance. I wanted to run some tests on my laptop (which does not have SQL 2005) so I was looking at swapping in a local database for the purpose of the tests.
I'm using VS2008 so the Compact Edition DB seemed a natural choice. I had hoped to just swap out my connection string, but it seems it will only let me connect to the CE database using the SqlCeConnection and not SqlConnection. Any way around this, modifiers I can use in the connection string perhaps? | It's actually very possible to use SQL CE instead of full-blown SQL Server by only modifying configuration parameters: change the connection string and use `IDbXXX` family interfaces wherever possible instead of platform-specific `SqlXXX` and `SqlCeXXX` ones. See [DbProviderFactories](http://msdn.microsoft.com/en-us/library/system.data.common.dbproviderfactories.aspx).
Be advised, however, of differences in SQL dialects of these two platforms. | Yes you can use SQL Compact and/or SQL Server by referring to the base classes instead. For example:
```
IDbConnection Connection;
if (Compact)
Connection = new SqlCeConnection();
else
Connection = new SqlConnection();
```
The connection string needs to point at your data file for SQL Compact, for example: `"Data Source=urData.sdf;Persist Security Info=False;"`. More [examples here](http://www.c-sharpcorner.com/uploadfile/nipuntomar/connection-strings-for-sql-server-compact-edition/).
This link will explain what [differences there are between SQL Server and SQL Compact](http://erikej.blogspot.co.uk/2011/01/comparison-of-sql-server-compact-4-and.html), as it's not identical. | Connecting to SQL CE db using SQLConnection | [
"",
"c#",
"sql-server-ce",
"connection-string",
""
] |
I am working on some legacy code and have come across something that I'm not sure of. We have a `class y` that is declared inside of another `class x`. `Class y` is only ever used inside of `class x`, but my question is why wouldn't you create a separate class file and put `class y` in there instead of declaring it inside of `class x`? Isn't this violating OOP principles, or is it just a matter of style since it is only ever used inside of this class? I'm refactoring some of this code and my first reaction would be to separate `class y` out into its own file.
```
namespace Library
{
public class x
{
// methods, properties, local members of class x
class y
{
// methods, properties, local members of class y
}
}
}
``` | You create an inner class because it is only ever used within the scope of class x and it logically fits in the factoring/architecture of class x.
Class y might also be privy to implementation details of class x that are not meant to be known to the public. | This has permissions implications. A top-level "class y" would be "internal" - however, here "y" is private to "x". This approach is helpful for implementation details (for example cache rows etc). Likewise, `y` has access to all private state of `x`.
There are also implications with generics; `x<T>.y` is generic "of T", inherited from the outer class. You can see this here, where `Bar` has full use of `T` - and note that any static fields of `Bar` are scoped per-`T`.
```
class Foo<T> {
void Test(T value) {
Bar bar = new Bar();
bar.Value = value;
}
class Bar {
public T Value { get; set; }
}
}
```
Often people incorrectly think they need to define `Bar` as `Bar<T>` - this is now (effectively) *doubly* generic - i.e. `Foo<TOld, T>` - where `TOld` is the (now unavailable) `T` from `Foo<T>`. So don't do that! Or if you **want** it to be doubly-generic, pick different names. Fortunately, the compiler warns you about this... | Class declared inside of another class in C# | [
"",
"c#",
"oop",
"class",
""
] |
Hey. I've got a dumb, but nasty problem.
If I've got this (simplified) situation:
```
<div onclick="doSomething(); return false;">
lorem ipsum <a href="somewhere.html">dolor sit</a> amet
</div>
```
...is there any (convenient) way to prevent the onclick-handler of the parent div from being triggered when the link is clicked.
In my scenario, I've got a big wrapping div that is made into a "link" with an onclick-handler, but I'd like to overlay some text data with links in it, but at least in Firefox when I click on the links in the text, the onclick of the parent is executed first. | You get around this by testing the original target of the event in your handler. Something along the lines of:
```
function doSomething ( e ) {
// get event object and target element
var e = e || window.event;
var target = e.srcElement || e.target;
// is target a div?
if ( target && /^div$/i.test( target.nodeName ) ) {
// do your magic in here ...
}
}
``` | With jQuery you can add a click handler to each of your anchors and use [event.stopPropagation()](http://docs.jquery.com/Events/jQuery.Event#event.stopPropagation.28.29) to keep the event from bubbling up to the DIV.
```
$('div#wrapperDiv > a').click( function(event) {
event.stopPropagation();
});
``` | Links on top of elements with onclick-handlers | [
"",
"javascript",
"html",
""
] |
I have a hierarchy of three interfaces, grandparent, parent and child. **Parent and child have a method "add", which requires different input parameters in the child.** While it's no problem to add the required signature **in the child, the inherited method will be pointless**, so is there a way to not have it in there at all? **The other methods work fine.**
Maybe, to achieve what I want, I can improve the design altogether, so I'll shortly outline what the interfaces are about:
I collect meter readings that consist of a time and a value. The grandparent interface is for a single reading. I also have classes that represent a number of consecutive readings (a series), and one that contains multiple series running over the same period of time (let's just call that a table).
The table can be viewed as a series (which aggregates the values orthogonally to the time axis), and both table and series can be viewed as a single reading (the implementations providing different means of aggregation), hence the inheritance. This seems to work out fine, but for the add method. (I can add a single point to the series, but for the table I need an additional parameter to tell me to which series it belongs.) | Maybe it would make sense to break the interface inheritance altogether. Just have specific interfaces for specific types of behaviors. Whatever classes you have that implement these interfaces can just pick the ones that make sense, and won't have to worry about implementing methods that don't make sense. | No, you cannot avoid inheriting a method, since doing so would violate the [Liskov substitution principle](http://en.wikipedia.org/wiki/Liskov_substitution_principle).
In practice, you could have implementations throw an `UnsupportedOperationException`, but that would be pretty nasty.
Can't you implement the inherited method with some sort of default value for the series? | In a Java interface, how can I *not* use one particular method inherited from a parent interface? | [
"",
"java",
"interface",
""
] |
I'm a little confused here now. Let me explain:
I've seen people talking about adding a button or some other control to the page in asp.net (3.5) and when the control renders it changes the Id of that control, eg. Button1 becomes Button1\_somethingsomething which prevents them from using jQuery and what they end up using is something such as `<%controlId.ClientId %>`
So I did a little test
```
1. I added a button on the page:
<form id="form1" runat="server">
<div>
<asp:Button ID="Button1" runat="server" Text="Button" />
</div>
2. Then I added a JavaScript and jQuery:
<script type="text/javascript">
$(document).ready(function() {
$("#Button1").click(function() {
alert("Hello world!");
});
});
</script>
3. The generated html is this:
<div>
<input type="submit" name="Button1" value="Button" id="Button1" />
</div>
```
Now, I don't see ASP.NET (asp.net 3.5) changing the ids. Why do I see different behavior?
Btw. This does work when I hit the button!
Thanks. | ASP.NET only changes the IDs when the control is inside of a [Naming Container](http://msdn.microsoft.com/en-us/library/system.web.ui.inamingcontainer.aspx). This could include certain user controls, ContentPlaceHolders from a master page, and repeating controls (Repeater, GridView, etc.) | You may not always need to use YourControl.ClientID, but it is good practice so that if and when your control DOES end up inside a container, you wont have to go back and fix it. | why I don't see generated IDs for server side controls? | [
"",
".net",
"asp.net",
"javascript",
"jquery",
"asp.net-3.5",
""
] |
I did a build of my SharePoint site template solution assembly and successfully deployed it to SharePoint, it was version 6.4.0.2032. I did some testing and found a couple of problems with my code. I fixed the issues. Uninstalled my solution via "setup.bat /uninstall". Rebuilt my assembly to version 6.4.0.2033. I again installed my new template successfully, but when attempting to add one of my webparts to the page, SharePoint continues to look for the old version of my assembly.
Am I missing a step?
Here is the snippet from the log in C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\LOGS:
```
04/08/2009 13:04:58.18 w3wp.exe (0x0AA8) 0x0BE4 Windows SharePoint Services Web Parts 8l4f Monitorable Error importing WebPart. Assembly SharePoint.Site, Version=6.4.0.2032, Culture=neutral, PublicKeyToken=db45c0486d0dc06d, TypeName. SharePoint.Site.MetadataSearch, SharePoint.Site, Version=6.4.0.2032, Culture=neutral, PublicKeyToken=db45c0486d0dc06d
``` | When a previous version is removed using "setup.bat /uninstall" I've noticed that the corresponding ".webpart" files for the WebParts do not get removed. When the new version is re-deployed these ".webpart" files do not get updated and continue to point to the previous assembly version.
To see what assembly version your Web Parts are referencing:
1. Open the top level site settings in SharePoint (Site Actions > Site Settings > Go to top level site settings)
2. Click "Web Parts" under the "Galleries" column
3. Click the "Edit" icon next to your Web Parts
4. Click the "View XML" tool bar button
You should be able to find the corresponding new ".webpart" file (which should reference your new assembly) within your compiled solution. Then just upload it to this Web Part Gallery list (remember to check "overwrite the existing files")
If you can't find the ".webpart" file, you can always just download the copy from the "Web Part Gallery" and manually modify it.
Hope that helps. | In SharePoint you have a lot of references to the assemblies. Some are stored in files on the disk (page references in the layout files) and others are stored in the content database (page references in content files). SharePoint also add SafeControls to the web.config file when you deploy using the solutions framework. These entries reference assemblies by their strong names.
My experience is that you should avoid changing assembly versions for SharePoint solutions - it will save you all kind of troubles. To keep track of the assembly versions, you should use the assembly file version instead. This will not cause errors with SharePoint.
Did I mention solution upgrades? Just think about upgrading an assembly in a farm where you web part has already been added to dozens of pages. All these pages would reference the old assembly and would probably cause unhandled errors after the upgrade.
The assembly file version property is set in the AssemblyInfo.cs file:
```
[assembly: AssemblyVersion("6.0.0.0")]
[assembly: AssemblyFileVersion("6.4.0.2033")]
``` | Unable to deploy new SharePoint site template assembly version | [
"",
"c#",
"visual-studio-2008",
"sharepoint",
"assemblies",
"deployment",
""
] |
I'm using a [coda slider](http://jqueryfordesigners.com/coda-slider-effect/)-like construct on one of my pages. Naturally, the anchor ("#currentTab") information is lost after a postback. This is annoying because when you press a button on a certain tab, you always end up on the first tab after the postback.
What is the best way of letting this information survive a postback? | Either execute your postback as an AJAX request, or add some javascript to the form that will send the anchor value to the server
Rough example
```
<form onsubmit="this.anchor.value=top.location.hash">
<input name="anchor" type="hidden" value="">
<!-- rest of form -->
</form>
```
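The same round trip can be sketched as two plain functions — capture the hash before submit, then rebuild the URL after the postback. (The function names here are illustrative only, not part of any framework.)

```javascript
// Value to store in the hidden input before the form submits.
function captureAnchor(locationHash) {
  return locationHash || "";
}

// URL to navigate to (or assign to location.hash) after the postback.
function restoreAnchor(baseUrl, storedAnchor) {
  return storedAnchor ? baseUrl + storedAnchor : baseUrl;
}

console.log(captureAnchor("#currentTab"));               // "#currentTab"
console.log(restoreAnchor("/page.aspx", "#currentTab")); // "/page.aspx#currentTab"
console.log(restoreAnchor("/page.aspx", ""));            // "/page.aspx"
```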
Then you'll need a convention to return it to the client and perform the appropriate action. | Try this in your `page_load` event
```
Me.Form.Attributes("onsubmit") = "this.action+=top.location.hash;"
``` | Retain anchor after postback in asp.net | [
"",
"asp.net",
"javascript",
""
] |
I need to create a table on the fly in a C#.NET program. The task is to export an Excel file to a SQL database. How do I write a stored procedure that creates a table when the column names and table name are passed as parameters? I tried to write the following SP.
```
ALTER PROCEDURE dbo.CreateTempTable
@Param1 VarChar(50),
@Param2 VarChar(50),
@Param3 VarChar(50),
@Param4 VarChar(50)
AS
if exists(select * from sys.objects where object_id = OBJECT_ID('[dbo].[Temp]') AND type='U')
begin
Drop Table [dbo].[Temp]
end
Create Table [dbo].[Temp]([@CountryID] [int],
[@Param1] [VarChar](150),
[@Param2] [DateTime],
[@Param3] [VarChar](50),
[@Param4] [VarChar](50));
RETURN
```
While executing if I give input as
```
@Param1 - 'CountryId'
@Param2 - 'CountryName'
@Param3 - 'CreateDate'
@Param4 - 'CreatedBy'
```
The column created as @Param1, @Param2, @Param3, @Param4 instead of CountryId... etc. | The body of your sproc should be
```
exec ('CREATE TABLE [dbo].[Temp] (' + @Param1 + ' int, ' + @Param2 + ' [DateTime], ' + @Param3 + ' VarChar(50), ' + @Param4 + ' VarChar(50))')
```
Although, as you're calling it from .net, you might as well just build up a string in .net and execute that as inline SQL...
eg:
```
string sql = string.Format("CREATE TABLE [dbo].[Temp] ({0} int, {1} [DateTime], {2} VarChar(50), {3} VarChar(50))", fieldName1, fieldName2, fieldName3, fieldName4);
```
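One caveat with both variants: the field names are spliced straight into the SQL text, so they should be whitelisted before use. A minimal identifier check might look like this (shown in JavaScript for brevity; the same regular expression works in C#):

```javascript
// Whitelist check for identifiers before splicing them into dynamic SQL.
// Allows letters, digits and underscores, starting with a letter.
function isSafeIdentifier(name) {
  return /^[A-Za-z][A-Za-z0-9_]*$/.test(name);
}

console.log(isSafeIdentifier("CountryId"));          // true
console.log(isSafeIdentifier("Name]; DROP TABLE x")); // false
```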
And then just execute that as SQL. | You may use the [sp\_executesql](http://msdn.microsoft.com/en-us/library/ms175170.aspx) system stored procedure with a custom string that you have to build in your program. | Creating Table in SQL2005 through .net program | [
"",
"c#",
".net",
"sql-server-2005",
""
] |
A textbook I recently read discussed row major & column major arrays. The book primarily focused on 1 and 2 dimensional arrays but didn't really discuss 3 dimensional arrays. I'm looking for some good examples to help solidify my understanding of addressing an element within a multi-dimensional array using row major & column major arrays.
```
+--+--+--+ |
/ / / /| |
+--+--+--+ + | +---+---+---+---+
/ / / /|/| | / / / / /|
+--+--+--+ + + | +---+---+---+---+ +
/ / / /|/|/| | / / / / /|/|
+--+--+--+ + + + | +---+---+---+---+ + +
/ / / /|/|/|/| | / / / / /|/|/|
+--+--+--+ + + + + | +---+---+---+---+ + + +
/ / / /|/|/|/|/ | |000|001|002|003|/|/|/|
+--+--+--+ + + + + | +---+---+---+---+ + + +
|00|01|02|/|/|/|/ | |004|005|006|007|/|/|/|
+--+--+--+ + + + | +---+---+---+---+ + + +
|03|04|05|/|/|/ | |008|009|00A|00B|/|/|/
+--+--+--+ + + | +---+---+---+---+ + +
|06|07|08|/|/ | |00C|00D|00E|00F|/|/
+--+--+--+ + | +---+---+---+---+ +
|09|0A|0B|/ | |010|011|012|013|/
+--+--+--+ | +---+---+---+---+
arr[5][3][4] | arr[3][4][5]
```
**NOTE:** Original question incorrectly represented arr[3][4][5]. I have learned that the original subscript represents depth. The data has been corrected to reflect intended array representation.
```
Example hex data
+---+---+---+---+ +---+---+---+---+ +---+---+---+---+
|000|001|002|003| |100|101|102|103| |200|201|202|203|
+---+---+---+---+ +---+---+---+---+ +---+---+---+---+
|004|005|006|007| |104|105|106|107| |204|205|206|207|
+---+---+---+---+ +---+---+---+---+ +---+---+---+---+
|008|009|00A|00B| |108|109|10A|10B| |208|209|20A|20B|
+---+---+---+---+ +---+---+---+---+ +---+---+---+---+
|00C|00D|00E|00F| |10C|10D|10E|10F| |20C|20D|20E|20F|
+---+---+---+---+ +---+---+---+---+ +---+---+---+---+
|010|011|012|013| |110|111|112|113| |210|211|212|213|
+---+---+---+---+ +---+---+---+---+ +---+---+---+---+
slice 0 slice 1 slice 2
short Arr[3][4][5]; // assume array is filled with hex test data
arr[1][2][3] = 0x10B use slice 1, row 2, col 3
arr[2][3][4] = 0x210 use slice 2, row 3, col 4
resolves to row 4, col 0
```
```
row major
{000,001,002,003,004,005,006,007,008,009,00A,00B,00C,00D,00E,00F,010,011,012,013,
 100,101,102,103,104,105,106,107,108,109,10A,10B,10C,10D,10E,10F,110,111,112,113,
 200,201,202,203,204,205,206,207,208,209,20A,20B,20C,20D,20E,20F,210,211,212,213}
column major
{000,004,008,00C,010,001,005,009,00D,011,002,006,00A,00E,012,003,007,00B,00F,013,
 100,104,108,10C,110,101,105,109,10D,111,102,106,10A,10E,112,103,107,10B,10F,113,
 200,204,208,20C,210,201,205,209,20D,211,202,206,20A,20E,212,203,207,20B,20F,213}
```

```
Calculation offset for arr[1][2][3] using row major offset?
Calculation offset for arr[1][2][3] using column major offset?
``` | When I asked this question I was hoping to find some good 3 dimensional array examples. Especially code examples. Since I didn't find anything that was understandable, I decided to create a little C program to help display the concept. It uses the same test data in a 3x4x5 array. It also includes test data for a 5x5x5 array. It creates a column major array from the row major array so the offset calculations can be verified.
The array offset methods are:

- `char *calc_RowMajor(char *Base, int elemSz, int depth_idx, int row_idx, int col_idx)`
- `char *calc_ColMajor(char *Base, int elemSz, int depth_idx, int col_idx, int row_idx)`
I added comments in the code where applicable to help clarify what the code is doing.
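As a quick sanity check of the two addressing formulas before the full listing, here is a small sketch. (JavaScript is used here purely for illustration; indices are zero-based, and the dimension order is the plain [3][4][5] of the declaration, which differs slightly from the row/column convention used in the diagrams above.)

```javascript
// Generic row-major / column-major offset formulas for an n-d array.
// dims = [d0, d1, ..., dk]; idx = [i0, i1, ..., ik] (zero-based).

function rowMajorOffset(idx, dims) {
  // Last index varies fastest: ((i0*d1 + i1)*d2 + i2) ...
  return idx.reduce((acc, i, k) => acc * dims[k] + i, 0);
}

function colMajorOffset(idx, dims) {
  // First index varies fastest: i0 + d0*(i1 + d1*(i2 + ...))
  return idx.reduceRight((acc, i, k) => acc * dims[k] + i, 0);
}

// For a [3][4][5] array, element [1][2][3]:
console.log(rowMajorOffset([1, 2, 3], [3, 4, 5])); // 33 = (1*4 + 2)*5 + 3
console.log(colMajorOffset([1, 2, 3], [3, 4, 5])); // 43 = 1 + 3*(2 + 4*3)
```

Multiplying such an element offset by the element size (`sizeof(short)` here) and adding it to the base address gives the byte address, which is what the two `calc_*` functions in the listing below compute.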
```
//
// Arrays.cpp :
// Purpose: Display rowMajor & colMajor data and calculations.
//
#include "stdafx.h"
#define _show_Arrays 1 // 1=display rowMajor & colMajor arrays
#define _square_array 0 // 1=use arr[5][5][5], 0=use arr[3][4][5]
#if (_square_array == 1)
const int depthSz = 5;
const int rowSz = 5;
const int colSz = 5;
/*
+---+---+---+---+---+
|x00|x01|x02|x03|x04|
+---+---+---+---+---+
|x05|x06|x07|x08|x09|
+---+---+---+---+---+
|x0A|x0B|x0C|x0D|x0E|
+---+---+---+---+---+
|x0F|x10|x11|x12|x13|
+---+---+---+---+---+
|x14|x15|x16|x17|x18|
+---+---+---+---+---+
slice x
*/
short row_arr[depthSz][colSz][rowSz] = {
{ /* slice 0 */
{0x000,0x001,0x002,0x003,0x004},
{0x005,0x006,0x007,0x008,0x009},
{0x00A,0x00B,0x00C,0x00D,0x00E},
{0x00F,0x010,0x011,0x012,0x013},
{0x014,0x015,0x016,0x017,0x018}},
{ /* slice 1 */
{0x100,0x101,0x102,0x103,0x104},
{0x105,0x106,0x107,0x108,0x109},
{0x10A,0x10B,0x10C,0x10D,0x10E},
{0x10F,0x110,0x111,0x112,0x113},
{0x114,0x115,0x116,0x117,0x118}},
{ /* slice 2 */
{0x200,0x201,0x202,0x203,0x204},
{0x205,0x206,0x207,0x208,0x209},
{0x20A,0x20B,0x20C,0x20D,0x20E},
{0x20F,0x210,0x211,0x212,0x213},
{0x214,0x215,0x216,0x217,0x218}},
{ /* slice 3 */
{0x300,0x301,0x302,0x303,0x304},
{0x305,0x306,0x307,0x308,0x309},
{0x30A,0x30B,0x30C,0x30D,0x30E},
{0x30F,0x310,0x311,0x312,0x313},
{0x314,0x315,0x316,0x317,0x318}},
{ /* slice 4 */
{0x400,0x401,0x402,0x403,0x404},
{0x405,0x406,0x407,0x408,0x409},
{0x40A,0x40B,0x40C,0x40D,0x40E},
{0x40F,0x410,0x411,0x412,0x413},
{0x414,0x415,0x416,0x417,0x418}}
};
#else
const int depthSz = 3;
const int rowSz = 4;
const int colSz = 5;
/*
+---+---+---+---+
|000|001|002|003|
+---+---+---+---+
|004|005|006|007|
+---+---+---+---+
|008|009|00A|00B|
+---+---+---+---+
|00C|00D|00E|00F|
+---+---+---+---+
|010|011|012|013|
+---+---+---+---+
slice x
*/
short row_arr[depthSz][colSz][rowSz] = {
{ /* slice 0 */
{0x000,0x001,0x002,0x003},
{0x004,0x005,0x006,0x007},
{0x008,0x009,0x00A,0x00B},
{0x00C,0x00D,0x00E,0x00F},
{0x010,0x011,0x012,0x013}},
{ /* slice 1 */
{0x100,0x101,0x102,0x103},
{0x104,0x105,0x106,0x107},
{0x108,0x109,0x10A,0x10B},
{0x10C,0x10D,0x10E,0x10F},
{0x110,0x111,0x112,0x113}},
{ /* slice 2 */
{0x200,0x201,0x202,0x203},
{0x204,0x205,0x206,0x207},
{0x208,0x209,0x20A,0x20B},
{0x20C,0x20D,0x20E,0x20F},
{0x210,0x211,0x212,0x213}}
};
#endif
short col_arr[depthSz*colSz*rowSz]; //
char *calc_RowMajor(char *Base, int elemSz, int depth_idx, int row_idx, int col_idx)
{ // row major slice is navigated by rows
char *address;
int lbound = 0; // lower bound (0 for zero-based arrays)
address = Base /* use base passed */
+ ((depth_idx-lbound)*(colSz*rowSz*elemSz)) /* select slice */
+ ((row_idx-lbound)*rowSz*elemSz) /* select row */
+ ((col_idx-lbound)*elemSz); /* select col */
return address;
}
char *calc_ColMajor(char *Base, int elemSz, int depth_idx, int col_idx, int row_idx)
{ // col major slice is navigated by columns
char *address;
int lbound = 0; // lower bound (0 for zero-based arrays)
int pageSz = colSz*rowSz*elemSz;
int offset;
offset = (col_idx-lbound)*(colSz*elemSz) /* select column */
+ (row_idx-lbound)*(elemSz); /* select row */
if (offset >= pageSz)
{ // page overflow, rollover
offset -= (pageSz-elemSz); /* ajdust offset back onto page */
}
address = Base /* use base passed */
+ ((depth_idx-lbound)*pageSz) /* select slice */
+ offset;
return address;
}
void disp_slice(char *pStr, short *pArr,int slice,int cols, int rows)
{
printf("== %s slice %d == %p\r\n",pStr, slice,pArr+(slice*rows*cols));
for(int x=0;x<rows;x++)
{
for(int y=0;y<cols;y++)
printf("%03X ",*(pArr+(slice*rows*cols)+(x*cols)+y));
printf("\r\n");
}
}
int _tmain(int argc, _TCHAR* argv[])
{
// initialize col based array using row based array data
{ // convert row_arr into col_arr
short *pSrc = &row_arr[0][0][0];
short *pDst = &col_arr[0];
for(int d=0;d<depthSz;d++)
for(int r=0;r<rowSz;r++)
for(int c=0;c<colSz;c++)
{
*pDst++ = *(pSrc+((d*rowSz*colSz)+(c*rowSz)+r));
}
}
printf("Using Array[%d][%d][%d]\r\n",depthSz,rowSz,colSz);
#if (_show_Arrays == 1)
{ for(int x=0;x<depthSz;x++) {disp_slice("rowMajor",&row_arr[0][0][0],x,rowSz,colSz);}}
{ for(int x=0;x<depthSz;x++) {disp_slice("colMajor",&col_arr[0],x,rowSz,colSz);}}
#endif
int d = 2; // depth
int r = 3; // row
int c = 4; // column
for(d=0;d<depthSz;d++)
{
c = r = d; // simple access test pattern arr[0][0][0],arr[1][1][1],arr[2][2][2],...
{ // retrieve Array element
printf(" row_arr[%d][%d][%d] = %x\t",d,r,c,row_arr[d][r][c]);
printf("&row_arr[%d][%d][%d] = %p\r\n",d,r,c,&row_arr[d][r][c]);
}
{ // retrieve RowMajor element
short *pRowMajor = (short*)calc_RowMajor((char*)&row_arr[0][0][0],sizeof(short),d,r,c);
printf("calc_RowMajor(%d,%d,%d) = %x\t\t",d,r,c,*pRowMajor);
printf("pRowMajor = %p\r\n",pRowMajor);
}
{ // retrieve ColMajor element
short *pColMajor = (short*)calc_ColMajor((char*)&col_arr[0],sizeof(short),d,c,r);
printf("calc_ColMajor(%d,%d,%d) = %x\t\t",d,r,c,*pColMajor);
printf("pColMajor = %p\r\n",pColMajor);
}
} // for
getchar(); // just to hold the console while looking at the information
return 0;
}
``` | Don't artificially constrain yourself by focusing on 3-dimensional and 2-dimensional. Instead **focus on learning the expression for addressing n-dimensional arrays**.
Expressing n-dimensional addressing will solidify your grasp on this subject, and it is easier to remember one formula rather than separate formulas for 2d and 3d addressing.
---
Here's my attempt at n-dimensional addressing:
```
#include <stdio.h>
#include <math.h>

#define LEN 10
int getValue_nDimensions( int * baseAddress, int * indexes, int nDimensions ) {
int i;
int offset = 0;
for( i = 0; i < nDimensions; i++ ) {
offset += pow(LEN,i) * indexes[nDimensions - (i + 1)];
}
return *(baseAddress + offset);
}
int main() {
int i;
int * baseAddress;
int val1;
int val2;
// 1 dimensions
int array1d[LEN];
int array1d_indexes[] = {2};
int array1d_nDimensions = 1;
baseAddress = &array1d[0];
for(i = 0; i < LEN; i++) { baseAddress[i] = i; }
val1 = array1d[2];
val2 = getValue_nDimensions( // Equivalent to: val1 = array1d[2];
baseAddress,
&array1d_indexes[0],
array1d_nDimensions
);
printf("SANITY CHECK: %d %d\n",val1,val2);
// 3 dimensions
int array3d[LEN][LEN][LEN];
int array3d_indexes[] = {2,3,4};
int array3d_nDimensions = 3;
baseAddress = &array3d[0][0][0];
for(i = 0; i < LEN*LEN*LEN; i++) { baseAddress[i] = i; }
val1 = array3d[2][3][4];
val2 = getValue_nDimensions( // Equivalent to: val1 = array3d[2][3][4];
baseAddress,
&array3d_indexes[0],
array3d_nDimensions
);
printf("SANITY CHECK: %d %d\n",val1,val2);
// 5 dimensions
int array5d[LEN][LEN][LEN][LEN][LEN];
int array5d_indexes[] = {2,3,4,5,6};
int array5d_nDimensions = 5;
baseAddress = &array5d[0][0][0][0][0];
for(i = 0; i < LEN*LEN*LEN*LEN*LEN; i++) { baseAddress[i] = i; }
val1 = array5d[2][3][4][5][6];
val2 = getValue_nDimensions( // Equivalent to: val1 = array5d[2][3][4][5][6];
baseAddress,
&array5d_indexes[0],
array5d_nDimensions
);
printf("SANITY CHECK: %d %d\n",val1,val2);
return 0;
}
```
Output:
```
SANITY CHECK: 2 2
SANITY CHECK: 234 234
SANITY CHECK: 23456 23456
``` | array offset calculations in multi dimensional array (column vs row major) | [
"",
"c++",
"c",
"arrays",
""
] |
Would you write xml-doc for a namespace? And if yes, how and where?
I would think, if it is possible, maybe an almost empty file like this:
```
/// <summary>
/// This namespace contains stuff
/// </summary>
namespace Some.Namespace
{
}
```
But will that work? Since you... "declare", or at least use the namespace in all the other files as well... and what would happen if you wrote an xml-documentation thing somewhere else on the same namespace? Would one be gone? Or would they be merged somehow? | NDoc supports this by recognising a special `NamespaceDoc` class located in each namespace, and using the documentation from that. I haven't tried it, but Sandcastle appears to support the same trick.
**Edit:**
For example:
```
namespace Some.Namespace
{
/// <summary>
/// This namespace contains stuff
/// </summary>
public static class NamespaceDoc
{
}
}
``` | Sandcastle does not support the NamespaceDoc directly, but if you use [Sandcastle Help File Builder](http://www.codeplex.com/SHFB) you can use the NamespaceDoc class mentioned by Tim.
```
namespace Example
{
/// <summary>
/// <para>
/// Summary
/// </para>
/// </summary>
/// <include file='_Namespace.xml' path='Documentation/*' />
internal class NamespaceDoc
{
}
}
```
SHFB also extends the syntax slightly and allows embedding code examples straight from code files. An example \_Namespace.xml:
```
<?xml version="1.0" encoding="utf-8" ?>
<Documentation>
<summary>
<h1 class="heading">Example Namespace</h1>
<para>
This namespace is used in the following way:
</para>
<code source="Examples\Class.cs" lang="cs"></code>
<code source="Examples\Class.vb" lang="vbnet"></code>
<para>
Hopefully this helps!
</para>
</summary>
</Documentation>
```
Including documentation in XML file allows you to write short summary in code and larger description in a separate XML file for the help file. This way the code isn't cluttered with all the details and remains easily readable. | XML-documentation for a namespace | [
"",
"c#",
"namespaces",
"xml-documentation",
""
] |
In a Java web application, suppose I want to get the InputStream of an XML file which is placed in the CLASSPATH (i.e. inside the *sources* folder), how do I do it? | [`ClassLoader.getResourceAsStream()`](http://download.oracle.com/javase/6/docs/api/java/lang/ClassLoader.html#getResourceAsStream(java.lang.String)).
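As a minimal, hedged sketch of both lookup styles: the snippet loads `java/lang/String.class` only because that resource is guaranteed to exist on any classpath; in a real webapp you would substitute the name of your own XML file.

```java
import java.io.InputStream;

public class ResourceDemo {
    public static void main(String[] args) throws Exception {
        // Class.getResourceAsStream: a leading '/' makes the name absolute,
        // resolved from the classpath root instead of this class's package.
        InputStream a = ResourceDemo.class.getResourceAsStream("/java/lang/String.class");

        // Context ClassLoader: names are always resolved from the classpath
        // root, so there is no leading '/'.
        InputStream b = Thread.currentThread().getContextClassLoader()
                              .getResourceAsStream("java/lang/String.class");

        System.out.println(a != null);
        System.out.println(b != null);
        if (a != null) a.close();
        if (b != null) b.close();
    }
}
```

Both calls return null rather than throwing when the resource is missing, so a null check is worthwhile.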
As stated in the comment below, if you are in a multi-`ClassLoader` environment (such as unit testing, webapps, etc.) you may need to use `Thread.currentThread().getContextClassLoader()`. See <http://stackoverflow.com/questions/2308188/getresourceasstream-vs-fileinputstream/2308388#comment21307593_2308388>. | ```
ClassLoader.class.getResourceAsStream("/path/file.ext");
``` | Getting the inputstream from a classpath resource (XML file) | [
"",
"java",
"file-io",
"inputstream",
""
] |
How do I start using Mono in Linux as a beginner when I want to switch from Visual Studio?
Is there some easy way to install it like Visual Studio and get started?
So far, with what I've seen, it looks complex to even get started.
Installing and configuring Mono in Linux is a lot of work, right?
Or is there some distro which I can directly install and get started with applications in Linux? | I recently started to dabble in Mono myself and have so far realized that the MonoProject has made huge advancements in this area. It's well worth it to investigate.
With that said, the easiest method is to get setup with a Linux distro that is Mono friendly such as Suse, or Ubuntu. Personally, I tried it using Ubuntu 8.10.
Once you've got your Linux distro setup properly download and install MonoDevelop. This is an open source IDE that's tightly integrated to work with the Mono platform. MonoDevelop was taken as a branch of SharpDevelop and designed to work with the Mono compiler from the ground up.
This is by far the easiest and fastest way to get setup with Mono. The MonoDevelop IDE is very similar to that of Visual C# Express even. It comes complete with Project/Solution management, GUI development using the GTK# framework, an integrated debugger and a host of other features you would expect in an IDE such as code-completion, line numbers, code-folding etc.
The folks at the MonoProject are on to something with this suite of tools.
Hope this helps you get started.
[Mono Project Homepage](http://mono-project.com/Main_Page)
[Mono Develop Homepage](http://monodevelop.com/) | There are a few interesting books on Mono, although they're probably a little bit old. Still, probably it's worth to grab one and take a look in order to start up.
1. [Practical Mono](https://rads.stackoverflow.com/amzn/click/com/1590595483)
2. [Mono: A developer's notebook](https://rads.stackoverflow.com/amzn/click/com/0596007922)
3. [Cross-Platform .NET development](https://rads.stackoverflow.com/amzn/click/com/1590593308)
4. [Mono Kick-start](https://rads.stackoverflow.com/amzn/click/com/0672325799)
Then, I'd install latest Mono (2.4) on a Linux box (OpenSuse is the one they use, so it will always go smoothly for development, but we also use Ubuntu internally) and start playing around with the compiler, MonoDevelop (which is quite good since 2.0) and so on.
The only tough point will be writing GUI applications, although my team make extensively use of MWF on different Unix flavors. But everything else will go as you'd expect. I'm specially happy with how great remoting works, for instance.
If you're used to Linux then it will be much easier, otherwise I'd also recommend you getting used to it following some tutorial.
Remember tools such us NUnit and NAnt will be also available, so you can start writing your code on both Windows and Linux and testing and compiling on both platforms. | How to get started with Mono in Linux for a beginner? | [
"",
"c#",
"linux",
"mono",
""
] |
I usually add an `m_` in front of `private` fields and an `s_` before `static` members.
With a code like
```
protected static readonly Random s_Random = new Random ();
```
I get the following warnings by VS2008's Code Analysis:
* `CA1709`: Microsoft.Naming : Correct the casing of 's' in member name 'Bar.s\_Random' by changing it to 'S'.
* `CA1707`: Microsoft.Naming : Remove the underscores from member name 'Bar.s\_Random'.
How to resolve this issue? Should I simply remove the `s_`? Or add a global suppression for this warning?
**Edit:** My company lacks coding standards, so it's up to me to define them for my code. (Yea I know...)
If you think `s_` should be removed in general, I'd be glad if you could provide official sources. | You are not following Microsoft's .NET naming convention that tells you not to prefix stuff with anything. If this is really what you want, add a suppression. Otherwise, follow the guideline by getting rid of `s_` and other similar prefixes.
From [Names of Type Members](http://msdn.microsoft.com/en-us/library/ms229012.aspx):
"Names of Fields" Section: "Do not use a prefix for field names. For example, do not use g\_ or s\_ to distinguish static versus non-static fields." | 1. Unload the project
2. Right click on the unloaded project => Edit csproj
3. Set RunCodeAnalysis to false
4. Save and reload project | C# Code Analysis dislikes protected static s_Foo (CA1709, CA1707) | [
"",
"c#",
"visual-studio-2008",
"naming-conventions",
"code-analysis",
"fxcop",
""
] |
I am working on a SSL client server program and I have to reuse the following method.
```
private boolean postMessage(String message){
try{
String serverURLS = getRecipientURL(message);
serverURLS = "https:\\\\abc.my.domain.com:55555\\update";
if (serverURLS != null){
serverURL = new URL(serverURLS);
}
HttpsURLConnection conn = (HttpsURLConnection)serverURL.openConnection();
conn.setHostnameVerifier(new HostnameVerifier() {
public boolean verify(String arg0, SSLSession arg1) {
return true;
}
});
conn.setDoOutput(true);
OutputStream os = conn.getOutputStream();
OutputStreamWriter wr = new OutputStreamWriter(os);
wr.write(message);
wr.flush();
if (conn.getResponseCode() != HttpsURLConnection.HTTP_OK)
return false;
else
return true;
}
```
Here ServerURL is initialized as
```
private URL serverURL = null;
```
When I try to execute this method I get an exception at Line,
> OutputStream os = conn.getOutputStream();
The exception is
```
java.lang.IllegalArgumentException: protocol = https host = null
```
What is the reason for this? | URLs use forward slashes (/), not backward ones (as windows). Try:
```
serverURLS = "https://abc.my.domain.com:55555/update";
```
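A small illustrative sketch of what `java.net.URL` extracts from each spelling (the host and path mirror the question):

```java
import java.net.URL;

public class UrlDemo {
    public static void main(String[] args) throws Exception {
        // Forward slashes: the "//authority" section parses as expected.
        URL ok = new URL("https://abc.my.domain.com:55555/update");
        System.out.println(ok.getHost());
        System.out.println(ok.getPort());
        System.out.println(ok.getPath());

        // Backslashes: there is no "//" after the scheme, so no authority is
        // parsed and the host stays null -- the value openConnection() later
        // complains about.
        URL bad = new URL("https:\\\\abc.my.domain.com:55555\\update");
        System.out.println(bad.getHost() == null);
    }
}
```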
The reason why you get the error is that the URL class can't parse the host part of the string and therefore, `host` is `null`. | This code seems completely unnecessary:
```
String serverURLS = getRecipientURL(message);
serverURLS = "https:\\\\abc.my.domain.com:55555\\update";
if (serverURLS != null){
serverURL = new URL(serverURLS);
}
```
1. `serverURLS` is assigned the result of `getRecipientURL(message)`
2. Then immediately you overwrite the value of `serverURLS`, making the previous statement a [dead store](http://en.wikipedia.org/wiki/Dead_store)
3. Then, because `if (serverURLS != null)` evaluates to `true`, since you **just** assigned the variable a value in the preceding statement, you assign a value to `serverURL`. It is impossible for `if (serverURLS != null)` to evaluate to `false`!
4. You never actually use the variable `serverURLS` beyond the previous line of code.
You could replace all of this with just:
```
serverURL = new URL("https:\\\\abc.my.domain.com:55555\\update");
``` | How should I resolve java.lang.IllegalArgumentException: protocol = https host = null Exception? | [
"",
"java",
"exception",
"ssl",
"stream",
""
] |
As far as I know the JVM uses escape analysis for some performance [optimisations](http://www.ibm.com/developerworks/java/library/j-jtp10185/) like lock coarsening and lock elision.
I'm interested if there is a possibility for the JVM to decide that any particular object can be allocated on stack using escape analysis.
Some [resources](http://www.ibm.com/developerworks/java/library/j-jtp09275.html) make me think that I am right. Are there JVMs that actually do it? | I don't think it does escape analysis for stack allocation. Example:
```
public class EscapeAnalysis {
private static class Foo {
private int x;
private static int counter;
public Foo() {
x = (++counter);
}
}
public static void main(String[] args) {
System.out.println("start");
for (int i = 0; i < 10000000; ++i) {
Foo foo = new Foo();
}
System.out.println(Foo.counter);
}
}
```
with `-server -verbose:gc -XX+DoEscapeAnalysis`:
```
start
[GC 3072K->285K(32640K), 0.0065187 secs]
[GC 3357K->285K(35712K), 0.0053043 secs]
[GC 6429K->301K(35712K), 0.0030797 secs]
[GC 6445K->285K(41856K), 0.0033648 secs]
[GC 12573K->285K(41856K), 0.0050432 secs]
[GC 12573K->301K(53952K), 0.0043682 secs]
[GC 24877K->277K(53952K), 0.0031890 secs]
[GC 24853K->277K(78528K), 0.0005293 secs]
[GC 49365K->277K(78592K), 0.0006699 secs]
10000000
```
Allegedly [JDK 7 supports stack allocation](http://blog.juma.me.uk/2008/12/17/objects-with-no-allocation-overhead/). | With this version of java -XX:+DoEscapeAnalysis results in far less gc activity and 14x faster execution.
```
$ java -version
java version "1.6.0_14"
Java(TM) SE Runtime Environment (build 1.6.0_14-b08)
Java HotSpot(TM) Client VM (build 14.0-b16, mixed mode, sharing)
$ uname -a
Linux xxx 2.6.18-4-686 #1 SMP Mon Mar 26 17:17:36 UTC 2007 i686 GNU/Linux
```
Without escape analysis,
```
$ java -server -verbose:gc EscapeAnalysis|cat -n
1 start
2 [GC 896K->102K(5056K), 0.0053480 secs]
3 [GC 998K->102K(5056K), 0.0012930 secs]
4 [GC 998K->102K(5056K), 0.0006930 secs]
--snip--
174 [GC 998K->102K(5056K), 0.0001960 secs]
175 [GC 998K->102K(5056K), 0.0002150 secs]
176 10000000
```
With escape analysis,
```
$ java -server -verbose:gc -XX:+DoEscapeAnalysis EscapeAnalysis
start
[GC 896K->102K(5056K), 0.0055600 secs]
10000000
```
The execution time reduces significantly with escape analysis. For this the loop was changed to 10e9 iterations,
```
public static void main(String [] args){
System.out.println("start");
for(int i = 0; i < 1000*1000*1000; ++i){
Foo foo = new Foo();
}
System.out.println(Foo.counter);
}
```
Without escape analysis,
```
$ time java -server EscapeAnalysis
start
1000000000
real 0m27.386s
user 0m24.950s
sys 0m1.076s
```
With escape analysis,
```
$ time java -server -XX:+DoEscapeAnalysis EscapeAnalysis
start
1000000000
real 0m2.018s
user 0m2.004s
sys 0m0.012s
```
So with escape analysis the example ran about 14x faster than the non-escape analysis run. | Escape analysis in Java | [
"",
"java",
"stack",
"allocation",
"escape-analysis",
""
] |
I'm creating a process on Windows from Java. My problem is that this process doesn't terminate. Here's a sample program:
```
import java.io.IOException;
public class Test {
/**
* @param args
* @throws IOException
* @throws InterruptedException
*/
public static void main(String[] args) throws IOException,
InterruptedException {
Process process = Runtime.getRuntime().exec("cmd /c dir");
process.waitFor();
}
}
```
For reasons beyond my understanding, this program never completes. This is true if "cmd /c dir" is replaced with ipconfig as well as other things.
I can see using ProcessExplorer that java creates the cmd process. This sample is obviously a simplification; in my original program I found that if I call process.destroy() after a while and check the cmd process output, the command is executed successfully.
I've tried this with various releases of Java 1.5 and 1.6. My OS is Windows XP Pro, SP 2. | Likely that you just need to read the stdout and stderr of the process, or it will hang since its output buffer is full. This is easiest if you redirect stderr to stdout, just to be safe:
```
public static void main(String[] args) throws IOException,
InterruptedException {
String[] cmd = new String[] { "cmd.exe", "/C", "dir", "2>&1" };
Process process = Runtime.getRuntime().exec(cmd);
InputStream stdout = process.getInputStream();
while( stdout.read() >= 0 ) { ; }
process.waitFor();
}
``` | See this [link](http://www.javaworld.com/javaworld/jw-12-2000/jw-1229-traps.html?page=2) for an explanation.
You need to read the input stream. Also, the Java process doesn't work like a DOS shell. You need to pass the arguments yourself:
```
String[] cmd = new String[3];
cmd[0] = "cmd.exe" ;
cmd[1] = "/C" ;
cmd[2] = "dir";
Runtime rt = Runtime.getRuntime();
Process proc = rt.exec(cmd);
``` | Windows process exec'ed from Java not terminating | [
"",
"java",
"windows",
""
] |
This has probably been asked before, but a quick search only brought up the same question asked for C#. [See here.](https://stackoverflow.com/questions/410227/test-if-object-implements-interface)
What I basically want to do is to check whether a given object implements a given interface.
I kind of figured out a solution, but it is just not comfortable enough to use frequently in if or case statements, and I was wondering whether Java has a built-in solution.
```
public static Boolean implementsInterface(Object object, Class interf){
for (Class c : object.getClass().getInterfaces()) {
if (c.equals(interf)) {
return true;
}
}
return false;
}
```
---
EDIT: Ok, thanks for your answers. Especially to Damien Pollet and Noldorin, you made me rethink my design so I don't test for interfaces anymore. | The [`instanceof`](http://java.sun.com/docs/books/jls/third_edition/html/expressions.html#15.20.2) operator does the work in a [`NullPointerException`](https://docs.oracle.com/javase/8/docs/api/java/lang/NullPointerException.html) safe way. For example:
```
if ("" instanceof java.io.Serializable) {
// it's true
}
```
yields true. Since:
```
if (null instanceof AnyType) {
// never reached
}
```
yields false, the `instanceof` operator is null safe (the code you posted isn't).
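A runnable sketch of that null-safety behaviour (the class name is invented for the demo):

```java
public class InstanceofDemo {
    public static void main(String[] args) {
        Object s = "a test string";
        Object nothing = null;

        System.out.println(s instanceof java.io.Serializable);        // true
        System.out.println(nothing instanceof java.io.Serializable);  // false, no NPE

        // Reflective equivalent for when the type is only known at runtime:
        System.out.println(java.io.Serializable.class.isInstance(s));
        System.out.println(java.io.Serializable.class.isInstance(nothing));
    }
}
```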
**instanceof** is the built-in, compile-time safe alternative to [Class#isInstance(Object)](http://java.sun.com/javase/6/docs/api/java/lang/Class.html#isInstance(java.lang.Object)) | This should do:
```
public static boolean implementsInterface(Object object, Class interf){
return interf.isInstance(object);
}
```
For example,
```
java.io.Serializable.class.isInstance("a test string")
```
evaluates to `true`. | Test if object implements interface | [
"",
"java",
""
] |
Has anyone come across any good tools (preferably but not necessarily FOSS) that can read a SQL Server (2005/2008) trace file and execute the commands against another database. We are attempting to perform some performance testing on our SQL servers and would like to replicate an actual load.
I have come across but not yet used:
* JMeter
* ReplayML
Preferably, the application would be able to use threading to mimic user connections and query execution on the SQL Server. | You can replay a SQL Server Profiler trace against another server using the SQL Server Profiler itself.
See the following Microsoft Reference as a starting point.
<http://msdn.microsoft.com/en-us/library/ms189604.aspx>
Quest Software also have a tool called Benchmark Factory that can be used to perform SQL Server load testing.
<http://www.quest.com/benchmark-factory/> | One of the best tools is actually freely available from Microsoft. The RML Utilities are targeted at SQL2005 & SQL2008 and are specifically designed for this type of testing.
You can download the tools from <http://www.microsoft.com/downloads/details.aspx?FamilyId=7EDFA95A-A32F-440F-A3A8-5160C8DBE926&displaylang=en>
We have used them to solve several performance and locking issues.
Note: Capturing trace files using the SQL Profiler GUI can add to performance problems due to the way the GUI and Trace backend interact. The RML Utilites include a script that can capture traces directly from the SQL Server without using the GUI. | Tools to Replay Load on a SQL Server | [
"",
"sql",
"sql-server",
"stress-testing",
""
] |
My understanding is that if you are using a generic list (List) in C#, that it can support multiple concurrent readers but only one writer. And when you introduce a writer into the mix, you must also provide synchronization constructs to make the operations thread safe.
Is List.Contains considered a read operation? In other words, if I am calling this method, do I need to worry that a writer may be writing to this List at the same time? | Yes, you should. Basically I would synchronize for *any* operation if the list might be used for writing at the same time.
Generally I find collections fall into two categories - ones which are created, initialized and then never changed again (thread-safe), and ones which are mutated over time (not thread-safe, lock for all access). | If you use Reflector to inspect the code, you get something like this:
```
public bool Contains(T item)
{
if (item == null)
{
for (int j = 0; j < this._size; j++)
{
if (this._items[j] == null)
{
return true;
}
}
return false;
}
EqualityComparer<T> comparer = EqualityComparer<T>.Default;
for (int i = 0; i < this._size; i++)
{
if (comparer.Equals(this._items[i], item))
{
return true;
}
}
return false;
}
```
As you can see, it is a simple iteration over the items which is definitely a 'read' operation. If you are using it only for reads (and nothing mutates the items), then no need to lock. If you start modifying the list in a separate thread, then you most decidedly need to synchronize access. | Is List<T>.Contains() a Threadsafe call - C# | [
"",
"c#",
"thread-safety",
""
] |
I have a timezone-aware `timestamptz` field in PostgreSQL. When I pull data from the table, I then want to subtract the time right now so I can get its age.
The problem I'm having is that both `datetime.datetime.now()` and `datetime.datetime.utcnow()` seem to return timezone unaware timestamps, which results in me getting this error:
```
TypeError: can't subtract offset-naive and offset-aware datetimes
```
Is there a way to avoid this (preferably without a third-party module being used).
EDIT: Thanks for the suggestions, however trying to adjust the timezone seems to give me errors.. so I'm just going to use timezone unaware timestamps in PG and always insert using:
```
NOW() AT TIME ZONE 'UTC'
```
That way all my timestamps are UTC by default (even though it's more annoying to do this). | Have you tried to remove the timezone awareness?
From <http://pytz.sourceforge.net/>
```
naive = dt.replace(tzinfo=None)
```
You may have to add time zone conversion as well.
edit: Please be aware the age of this answer. An answer involving ADDing the timezone info instead of removing it in python 3 is below. <https://stackoverflow.com/a/25662061/93380> | The correct solution is to *add* the timezone info e.g., to get the current time as an aware datetime object in Python 3:
```
from datetime import datetime, timezone
now = datetime.now(timezone.utc)
```
On older Python versions, you could define the `utc` tzinfo object yourself (example from datetime docs):
```
from datetime import tzinfo, timedelta, datetime
ZERO = timedelta(0)
class UTC(tzinfo):
def utcoffset(self, dt):
return ZERO
def tzname(self, dt):
return "UTC"
def dst(self, dt):
return ZERO
utc = UTC()
```
then:
```
now = datetime.now(utc)
``` | Can't subtract offset-naive and offset-aware datetimes | [
"",
"python",
"postgresql",
"datetime",
"timezone",
""
] |
I am trying to create an OSGi service that wraps another jar. I added the jar to the project, the classpath and the binary build. I registered the service in the Activator but when the consuming bundle calls the service I get a java.lang.NoClassDefFoundError on the wrapper jar. Does anyone have any idea what I am doing wrong here?
Thanks in advance. | Are you exporting the packages that are required by the consumer, as well as all those that the implementation requires? The consumer will need to import everything that will be referenced.
As a side note, creating a bundle this way doesn't work well in Eclipse for development (works fine for runtime). If you try to reference a class or interface in the jar from another OSGi project, the IDE won't resolve anything since it cannot 'see' the files in the jar. The jar has to be expanded within the bundle for everything to be visible (within the IDE). Eclipse automagically creates the appropriate classpath references based on the imports and exports for build purposes. Without the jar file in the bundle, you will have to explicitly maintain this classpath. | There can be multiple reasons for your behavior. To make sure, I would check for the following:
* assuming you work with Eclipse check if you have included the jar in your "Build" tab of the manifest editor, as well as pointed to this very jar within the "Runtime" tab under "Classpath".
* the created bundle: does it contain the jar? Does it have the "Bundle-ClassPath" header pointing to the jar, like: "Bundle-ClassPath: lib/myLibrary.jar,." (the last . is required to include the classes coming from the root directory of the bundle - your activator f.i.)
* make sure, the jar actually contains all required dependencies or expresses them via Import-Package headers in the wrapping bundle. Eclipse has a "Import Wizard" for just that. The before mentioned bnd tool does the same by the way. Hope that helps... | OSGi Service wrapping a jar | [
"",
"java",
"osgi",
""
] |
According to my book all that is needed to start using automatic paging is to set GridView.AllowPaging to true. But when I try to navigate to another page, I get *GridView fired event PageIndexChanging which wasn't handled exception*. I then have to create event handler for PageIndexChanging event, but then when I navigate to next page, GridView doesn’t display anything.
Q1 - What am I doing wrong?
Q2 - The book is written for ASP.NET 3.5, but none of the behavior described above is mentioned by the author?! Any thoughts why my GridView behaves so differently?
thanx
EDIT:
I'm embarrassed to say, but the reason it didn't work is because I forgot to remove the line
```
if(IsPostBack) GridView.DataSourceID="";
```
Sorry for taking your time and thanx for helping me | When you post back, you'll have to rebind the data to the gridview.
You may also need to set the page number like:
```
GridView1.PageIndex = e.NewPageIndex;
``` | It's been a while, but don't you have to set the page and rebind your data? From memory, something like ...
```
gv.PageIndex = e.NewPageIndex
// Rebind Grid
``` | GridView's Automatic paging doesn't work | [
"",
"c#",
"asp.net",
"gridview",
""
] |
It has been a long time since I've used C++. I want to start programming in C++ again but I am looking for a particular resource and wondering if anyone here has any good references. I am looking for a book, web tutorial, video...whatever that teaches you C++ while you're building an application. So, for example, a book might start the book by starting the application.. and so on. Does anyone have a good reference? Thanks | A great free tutorial is the [C++ language tutorial at cplusplus.com](http://cplusplus.com/doc/tutorial/) and its [other resources](http://cplusplus.com/).
You could also checkout popular C++ books like: The C++ Programming Language by Bjarne Stroustrup or Effective C++ by Scott Meyers, | I find [The C++ FAQ Lite](http://www.parashift.com/c%2B%2B-faq-lite/index.html) to be one of the best sites for C++ advice. It is not oriented toward beginners, but then, neither is the C++ language. :)
I'll also second Brian's recommendation of "Effective C++" by Scott Meyers. That is one of the best C++ books you can read. | Resource for C++ | [
"",
"c++",
""
] |
Does the Sun JVM slow down when more memory is available and used via -Xmx? (Assumption: The machine has enough physical memory so that virtual memory swapping is not a problem.)
I ask because my production servers are to receive a memory upgrade. I'd like to bump up the -Xmx value to something *decadent*. The idea is to prevent any heap space exhaustion failures due to my own programming errors that occur from time to time. Rare events, but they could be avoided with my rapidly evolving webapp if I had an obscene -Xmx value, like 2048mb or higher. The application is heavily monitored, so unusual spikes in JVM memory consumption would be noticed and any flaws fixed.
Possible important details:
* Java 6 (running in 64-bit mode)
* 4-core Xeon
* RHEL4 64-bit
* Spring, Hibernate
* High disk and network IO
***EDIT***: I tried to avoid posting the configuration of my JVM, but clearly that makes the question ridiculously open ended. So, here we go with relevant configuration parameters:
```
-Xms256m
-Xmx1024m
-XX:+UseConcMarkSweepGC
-XX:+AlwaysActAsServerClassMachine
-XX:MaxGCPauseMillis=1000
-XX:MaxGCMinorPauseMillis=1000
-XX:+PrintGCTimeStamps
-XX:+HeapDumpOnOutOfMemoryError
``` | By adding more memory, it will take longer for the heap to fill up. Consequently, it will reduce the frequency of garbage collections. However, depending on how mortal your objects are, you may find that how long it takes to do any single GC increases.
The primary factor for how long a GC takes is how many *live* objects there are. Thus, if virtually all of your objects die young and once you get established, none of them escape the young heap, you may not notice much of a change in how long it takes to do a GC. However, whenever you have to cycle the tenured heap, you may find everything halting for an unreasonable amount of time since most of these objects will still be around. Tune the sizes accordingly. | If you just throw more memory at the problem, you will have better throughput in your application, but your responsiveness can go down if you're not on a multi core system using the CMS garbage collector. This is because fewer GCs will occur, but they will have more work to do. The upside is that you will get more memory freed up with your GCs, so allocation will continue to be very cheap, hence the higher througput.
You seem to be confusing -Xmx and -Xms, by the way. -Xms just sets the initial heap size, whereas -Xmx is your max heap size. | Does the Sun JVM slow down when more memory is allocated via -Xmx? | [
"",
"java",
"jvm",
"performance",
""
] |
I have a bizarre problem that is doing my head in.
I have the following classes defined in a single project:
```
public abstract class AbstractUnitModel {
public void executeRemoteModel(){}
}
//this class also implements a separate interface, but I don't think that is the issue
public class BlastFurnaceUnitModel : AbstractUnitModel, IUnitModel {}
```
Now if I try something like this, it works as expected:
```
class Class1
{
public void method1() {
BlastFurnaceUnitModel b = new BlastFurnaceUnitModel();
method2(b);
}
public void method2(AbstractUnitModel a) {}
}
```
Now I have another project that exposes a web method. This method takes an AbstractUnitModel and executes it remotely, then sends the results back to the client. So on the server I have this:
```
[WebMethod]
public AbstractUnitModel remotelyExecuteUnitModel(UnitModelWrapperInterface.AbstractUnitModel unitModel)
{
unitModel.executeRemoteModel();
return unitModel;
}
```
And on the client I have this:
```
public void remoteExecution() {
var unitModelWebService = new UnitModelRemoteServer.RemoteModelExecutionWebService();
unitModelWebService.remotelyExecuteUnitModelCompleted += new UnitModelRemoteServer.remotelyExecuteUnitModelCompletedEventHandler(remoteExecutionCompleted);
unitModelWebService.remotelyExecuteUnitModelAsync(this.remoteBF);
}
```
But my project will not compile, and I get these errors:
*Error 109 The best overloaded method match for 'CalibrationClient.UnitModelRemoteServer.RemoteModelExecutionWebService.remotelyExecuteUnitModelAsync(CalibrationClient.UnitModelRemoteServer.AbstractUnitModel)' has some invalid arguments*
*Error 110 Argument '1': cannot convert from 'UnitModelWrapperInterface.BlastFurnaceUnitModel' to 'CalibrationClient.UnitModelRemoteServer.AbstractUnitModel'*
I can not figure out why this is happening. I have references in the server project to the namespace where AbstractUnitModel is defined. The only thing that looked a little funny to me is that it is using AbstractUnitModel from the 'CalibrationClient' namespace rather than the UnitModelWrapperInterface. It seems when VS generates the proxy for a webservice on the client it creates a partial abstract implementation of AbstractUnitModel. Is this the source of my problem? If so, how might I go about fixing it?
edit for solution: As pointed out, the client needs to know about all classes that could be sent across the wire. I ended up solving this by removing the generated proxy classes and referencing the common library. Not ideal but good enough in this situation. | You might try `[XmlInclude]`:
```
[XmlInclude(typeof(BlastFurnaceUnitModel))]
public abstract class AbstractUnitModel {...}
```
Worth a try, at least...
(edit) Or at the method level:
```
[WebMethod(), XmlInclude(typeof(BlastFurnaceUnitModel))]
public AbstractUnitModel remotelyExecuteUnitModel(...) {...}
```
(less sure about the second) | This happens because the WSDL tool creates proxy classes (open the service code file and you'll see them) which are the classes used to instantiate when objects come from the service.
If you want to avoid this, it's best to use WCF. This also deals with polymorphic return values, since web services otherwise can't handle polymorphism (so the return type of the remotelyExecuteUnitModel method is always AbstractUnitModel). | Passing a derived class to a web service method that takes an abstract type | [
"",
"c#",
"web-services",
"abstract-class",
""
] |
Since I have started learning about rendering to a texture I grew to understand that `glCopyTexSubImage2D()` will copy the designated portion of the screen upside down. I tried a couple of simple things that came to mind in order to prevent/work around this but couldn't find an elegant solution.
* There are two problems with doing a ::glScalef(1.0f, -1.0f, 1.0f) before rendering the texture to the screen:
	1. I have to do this every time I'm using the texture.
	2. I'm mostly working with 2D graphics and have backface culling turned off for GL\_BACK sides. As much as possible, I'd love to avoid switching this on and off.
* I tried switching the matrix mode to GL\_TEXTURE and doing the ::glScalef(1.0f, -1.0f, 1.0f) transformation on capture, but the results were the same. (I'm guessing the texture matrix only has an effect on glTexCoord calls?)
So, How can I fix the up-down directions of textures captured with `glCopyTexSubImage2D`? | What are you going to be using the texture images for? Actually trying to render them upside down would usually take more work than moving that code somewhere else.
If you're trying to use the image without exporting it, just flipping the texture coordinates wherever you're using the result would be the most efficient way.
If you're trying to export it, then you'll want to flip the pixels yourself after rendering.
On a related note, if you are making a 2D game, why is backface culling turned on? | If my understanding is correct, it depends on where your origin is. That function seems to assume your origin is in the bottom-left, whereas most 2D stuff assumes an origin of the top-left, which is why it seems upside down.
I suppose you could change the origin to the bottom-left, do the capture, then change the origin back to the top-left and render again before doing the swap. But that's a horrible solution since you're effectively rendering twice, but it might be fine if you don't plan to do it every frame. | fixing glCopyTexSubImage2D upside down textures | [
"",
"c++",
"opengl",
"capture",
"render",
"textures",
""
] |
I'm currently playing about with some XNA stuff learning to program AI. Anyway, here's my situation: Class A has a function which takes a reference to an instance of class B, does some stuff to to it, and returns it. Class B contains an instance of Class A, and then calls the function from it.
Example in code:
```
Class A
{
B classB;
public A()
{
classB = new B();
}
public void Act()
{
this = B.Do(ref this);
}
}
Class B
{
public A Do(ref A classA)
{
//Manipulate
return classA;
}
}
```
I've tried passing a memberwise clone .. but that didn't work, obviously, because "this" is read-only. I've no idea with this. I'm really stuck. Does anybody have any ideas? I'd ideally like to avoid having to pass every single variable in the object as a separate argument, really.
Andy. | Classes are reference types, so doing
```
class B
{
public void Do(A classA)
{
//Manipulate
}
}
```
should manipulate the object classA references. Then in A,
```
class A
{
B classB;
public A()
{
classB = new B();
}
public void Act()
{
classB.Do(this);
}
}
```
Note: "This does have the side effect that the reference of A that you pass cannot be set to null (it will only set the local variable to null)" - [JulianR](https://stackoverflow.com/users/61632/julianr) | Your B.Do() method modifies an A twice over, by using both the ref and a return value. Neither is needed to modify classA. And because you target `this` it won't work anyway: you can never assign to `this`; it makes no sense.
So the Simple answer is
```
class B
{
void Do(A anA) { anA.PublicProp = 1; }
}
```
But the circular referencing does make it a dubious design. | Passing "this" as an argument - C# | [
"",
"c#",
"xna",
""
] |
My adventures in Python continue and my favorite books are silent again. Python offers a built-in way to test if a variable is inside an iterable object, using the 'in' keyword:
```
if "a" in "abrakadabra" :
print "it is definitely here"
```
But is it possible to test if more than one item is in the list (any one)?
Currently, I'm using the syntax below, but it is kinda long:
```
if "// @in " in sTxt or "// @out " in sTxt or "// @ret " in sTxt or <10 more>
print "found."
```
Of course regexes can help, but using regexes will take lots of verbose code and will
not be as clear as "a in b". Are there any other Pythonic ways? | ```
alternatives = ("// @in ", "// @out ", "// @ret ")
if any(a in sTxT for a in alternatives):
print "found"
if all(a in sTxT for a in alternatives):
print "found all"
```
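To make the behaviour concrete, a quick self-contained check (the sample string is mine):

```python
alternatives = ("// @in ", "// @out ", "// @ret ")
sTxt = "// @in x // @ret y"

print(any(a in sTxt for a in alternatives))  # True: at least one marker found
print(all(a in sTxt for a in alternatives))  # False: "// @out " is missing
```

Both short-circuit: `any()` stops at the first hit, `all()` at the first miss.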
[`any()`](http://docs.python.org/library/functions.html#any) and [`all()`](http://docs.python.org/library/functions.html#all) take an iterable and check whether any/all of its elements evaluate to a true value. Combine that with a generator expression, and you can check multiple items. | `any(snippet in text_body for snippet in ("hi", "foo", "bar", "spam"))` | python, "a in b" keyword, how about multiple a's? | [
"",
"python",
""
] |
How can I implement a threaded UDP based server in Java?
Basically, what I want is to connect multiple clients to the server and let each client have its own thread. The only problem is that I don't know how to check whether a client is trying to connect to the server, so that I can spawn a new thread for it.
```
boolean listening = true;
System.out.println("Server started.");
while (listening)
new ServerThread().start();
```
In this case the server will spawn new threads until it runs out of memory.
Here's the code for the ServerThread (I think I need a mechanism here that stalls the creation of the ServerThread until a client tries to connect):
```
public ServerThread(String name) throws IOException
{
super(name);
socket = new DatagramSocket();
}
```
So fathers of Java programming please help. | The design for this depends to a certain extent on whether each complete UDP "dialog" is just a single request with an immediate response, a single request/response with retransmissions, or an exchange that needs to process many packets per client.
The RADIUS server I wrote had the single request + retransmit model and spawned a thread for each incoming packet.
As each `DatagramPacket` was received it was passed to a new thread, and then that thread was responsible for sending back the response. This was because the computation and database accesses involved in generating each response could take a relatively long time and it's easier to spawn a thread than to have some other mechanism to handle new packets that arrive whilst old packets are still being processed.
```
public class Server implements Runnable {
    DatagramSocket socket;

    public void run() {
        while (true) {
            try {
                byte[] buffer = new byte[2048];
                DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
                socket.receive(packet); // blocks until a datagram arrives
                new Thread(new Responder(socket, packet)).start();
            } catch (IOException e) {
                // log and keep listening
            }
        }
    }
}

public class Responder implements Runnable {
    DatagramSocket socket = null;
    DatagramPacket packet = null;

    public Responder(DatagramSocket socket, DatagramPacket packet) {
        this.socket = socket;
        this.packet = packet;
    }

    public void run() {
        byte[] data = makeResponse(); // code not shown
        DatagramPacket response = new DatagramPacket(data, data.length,
                packet.getAddress(), packet.getPort());
        try {
            socket.send(response);
        } catch (IOException e) {
            // log the failure
        }
    }
}
``` | Since UDP is a connectionless protocol, why do you need to spawn a new thread for each connection? When you receive a UDP packet maybe you should spawn a new thread to take care of dealing with the message received.
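To make the thread-per-packet pattern concrete, here is a self-contained echo sketch (the class name, buffer size, and echo behavior are mine; a real server would replace the echo with its own handler):

```java
import java.io.IOException;
import java.net.*;

// Minimal thread-per-packet UDP echo: one listener loop, one worker thread per datagram.
public class UdpEchoDemo {
    public static void main(String[] args) throws Exception {
        final DatagramSocket server = new DatagramSocket(0); // any free port

        Thread listener = new Thread(new Runnable() {
            public void run() {
                try {
                    while (true) {
                        byte[] buf = new byte[1024];
                        final DatagramPacket p = new DatagramPacket(buf, buf.length);
                        server.receive(p); // blocks until a client sends something
                        new Thread(new Runnable() {
                            public void run() {
                                try {
                                    server.send(new DatagramPacket(p.getData(), p.getLength(),
                                            p.getAddress(), p.getPort()));
                                } catch (IOException ignored) {}
                            }
                        }).start();
                    }
                } catch (IOException closed) { /* socket closed: exit loop */ }
            }
        });
        listener.setDaemon(true);
        listener.start();

        // Self-test client: send one datagram and read the echo back.
        DatagramSocket client = new DatagramSocket();
        byte[] msg = "ping".getBytes();
        client.send(new DatagramPacket(msg, msg.length,
                InetAddress.getByName("127.0.0.1"), server.getLocalPort()));
        byte[] reply = new byte[1024];
        DatagramPacket r = new DatagramPacket(reply, reply.length);
        client.setSoTimeout(2000);
        client.receive(r);
        if (!"ping".equals(new String(r.getData(), 0, r.getLength())))
            throw new AssertionError("echo mismatch");
        System.out.println("echoed: ping");
        client.close();
        server.close();
    }
}
```

Note there is no accept step anywhere: `receive()` is the only "is a client talking to me?" check UDP offers.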
UDP connections are not like TCP connections. They do not remain active and such is the design of UDP.
The handlePacket() method of this next code block can do whatever it wants with the data received. And many clients can send multiple packets to the same UDP listener. Maybe it will help you.
```
public void run() {
DatagramSocket wSocket = null;
DatagramPacket wPacket = null;
byte[] wBuffer = null;
try {
wSocket = new DatagramSocket( listenPort );
wBuffer = new byte[ 2048 ];
wPacket = new DatagramPacket( wBuffer, wBuffer.length );
} catch ( SocketException e ) {
log.fatal( "Could not open the socket: \n" + e.getMessage() );
System.exit( 1 );
}
while ( isRunning ) {
try {
wSocket.receive( wPacket );
handlePacket( wPacket, wBuffer );
} catch ( Exception e ) {
log.error( e.getMessage() );
}
}
}
``` | How can I implement a threaded UDP based server in Java? | [
"",
"java",
"multithreading",
"udp",
""
] |
Imagine I have the following 3 tables in SqlServer:
```
Customer (CustomerID, FirstName, LastName)
Address (AddressID, CustomerID, Line1, City, State)
Product (ProductID, CustomerID, Description)
```
A customer can have multiple delivery addresses and multiple products.
What I would like to do is to list the number of customers for each State, where the State is determined by the most recent Address record. Such as "How many customers last received a product in each State?". Therefore I'm not interested in any previous Address records for the Customer, only the most recent (determined by AddressID).
```
State | Number of Customers
--------------------------
CA | 32
GA | 12
TX | 0
OH | 18
```
I would normally do something like:
```
SELECT a.State, count(c.CustomerID)
FROM Product p
INNER JOIN Customer c ON c.CustomerID = p.CustomerID
LEFT JOIN Address a ON a.CustomerID = c.CustomerID
WHERE p.ProductID = 101
GROUP BY a.State
```
However, as a Customer may have multiple Addresses, will the customer only be counted in the State of the most recent Address record?
P.S. The above is purely an example scenario to easily explain the joins I am trying to achieve and does not reflect an actual system design. | Try this:
```
SELECT a.State, count(c.CustomerID)
FROM Product p
INNER JOIN Customer c ON c.CustomerID = p.CustomerID
LEFT JOIN Address a ON a.CustomerID = c.CustomerID
AND a.AddressID =
(
SELECT MAX(AddressID)
FROM Address z
WHERE z.CustomerID = a.CustomerID
)
WHERE p.ProductID = 101
GROUP BY a.State
``` | You could also try (assuming I remember my SQLServer syntax correctly):
```
SELECT state, count(CustomerID)
FROM (
    SELECT
        p.CustomerID
        , (SELECT TOP 1 State FROM Address WHERE Address.CustomerID = p.CustomerID ORDER BY Address.AddressID DESC) AS state
    FROM Product p
    WHERE p.ProductID = 101) AS latest
GROUP BY state
``` | Most recent record in a left join | [
"",
"sql",
"sql-server",
"sql-server-2005",
""
] |
I have a rather large solution consisting of about 10 different projects. Until now we have shipped the entire solution as a whole to customers, but we are looking into shipping a stripped version of our software.
To this end I would like to exclude several projects from the solution. I know that you can prevent projects from being built in the solution configuration manager. Using macros all code references can be disabled when the stripped configuration is chosen. Unfortunately this does not take care of the project references. Can I make these references conditional depending on the chosen configuration? | It should be a pretty simple matter to remove the project references from the project file using a small script - it would just be a case of removing lines adding those references. The project file format is quite simple.
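For reference, those lines are `ProjectReference` items in the .csproj, and MSBuild also accepts a `Condition` attribute on them, which avoids scripting entirely. A sketch (the project name, GUID, and the 'Stripped' configuration name are placeholders of mine):

```xml
<ItemGroup>
  <ProjectReference Include="..\ExtraFeatures\ExtraFeatures.csproj"
                    Condition=" '$(Configuration)' != 'Stripped' ">
    <Project>{11111111-2222-3333-4444-555555555555}</Project>
    <Name>ExtraFeatures</Name>
  </ProjectReference>
</ItemGroup>
```

Be aware that the IDE may not re-evaluate the condition when you switch configurations; command-line MSBuild builds honor it reliably.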
I suspect that's likely to be the easiest solution. | Your best bet is to create separate projects for your "stripped down" solution that references only those other projects you want. Reference the same code. Create a separate solution to hold those projects together.
They can all live together in the same folder structure, too.
For example:
* MySolution/MySolution.sln
* MySolution/MyStrippedDownSolution.sln
* MySolution/MyProject1/MyProject1.csproj
* MySolution/MyProject1/MyStrippedDownProject1.csproj
* MySolution/MyProject1/MyClass1.cs
* MySolution/MyProject2/MyProject2.csproj
* MySolution/MyProject2/MyStrippedDownProject2.csproj
* MySolution/MyProject2/MyClass2.cs
* MySolution/MyProject3/MyProject3.csproj
* MySolution/MyProject3/MyClass3.cs
+ MyProject1 and MyStrippedDownProject1 reference MyClass1
+ MyProject2 and MyStrippedDownProject2 reference MyClass2
+ MyProject3 and MyStrippedDownProject3 reference MyClass3
* MySolution references MyProject1 and MyProject2 and MyProject3
* MyStrippedDownSolution references MyStrippedDownProject1 and MyStrippedDownProject2
* MyProject1 references MyProject2 and MyProject3
* MyStrippedDownProject1 only references MyStrippedDownProject2 -- it does not reference MyProject3 | conditional assembly references based on solution | [
"",
"c#",
"visual-studio-2008",
"projects-and-solutions",
""
] |
Being able to create javascript code on the fly is pretty cool.
That is by using
```
HtmlPage.Window.Eval("alert('Hello World')");
```
But is there a way to do the same thing, with a C# method? Let's say something like
```
void MethodEval("MessageBox.Show('Hello World')");
```
Is it even possible, without having to recompile your code? | You can do it right now. The [ag DLR (Silverlight Dynamic Languages Runtime)](http://www.codeplex.com/sdlsdk) can host javascript.
While JavaScript cannot be hosted with the DLR outside the browser, Ruby and Python can. Here's an example of a C# snippet using the [DLR](http://dlr.codeplex.com/Wiki/View.aspx?title=Docs%20and%20specs&referringTitle=Home) and hosting a piece of Python code to demonstrate.
```
using IronPython.Hosting;
PythonEngine pythonEngine = new PythonEngine();
string script = @"import clr
clr.AddReference(""System.Windows.Forms"")
import System.Windows.Forms as WinForms
WinForms.MessageBox.Show(""Hello"", ""Hello World"")";
pythonEngine.Execute(script);
``` | It's possible using tricks posted by others. **However, it's usually a very bad idea.**
.Net code typically runs in a more trusted context than a JavaScript browser sandbox, and has access to a much richer, and therefore potentially damaging, API.
Instead, use the System.AddIn namespace to provide a very strict interface for extensions, plugins, and the like. If you're just trying to use a more "fluid" or functional programming environment, you can use fun features like lambda expressions and closures to pass functionality around internally. | String to method | [
"",
"c#",
""
] |
Working my way through Effective STL at the moment. Item 5 suggests that it's usually preferable to use range member functions over their single-element counterparts. I currently wish to copy all the values in a map (i.e. I don't need the keys) to a vector.
What is the cleanest way to do this? | You can't easily use a range here because the iterator you get from a map refers to a std::pair, whereas the iterators you would use to insert into a vector refer to an object of the type stored in the vector, which is (if you are discarding the key) not a pair.
I really don't think it gets much cleaner than the obvious:
```
#include <map>
#include <vector>
#include <string>
using namespace std;
int main() {
typedef map <string, int> MapType;
MapType m;
vector <int> v;
// populate map somehow
for( MapType::iterator it = m.begin(); it != m.end(); ++it ) {
v.push_back( it->second );
}
}
```
which I would probably re-write as a template function if I was going to use it more than once. Something like:
```
template <typename M, typename V>
void MapToVec( const M & m, V & v ) {
for( typename M::const_iterator it = m.begin(); it != m.end(); ++it ) {
v.push_back( it->second );
}
}
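// Usage sketch (my addition, not from the original answer):
//   std::map<std::string, int> m;
//   m["a"] = 1;  m["b"] = 2;
//   std::vector<int> v;
//   MapToVec( m, v );   // v now holds {1, 2}, in key order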
``` | You could probably use [`std::transform`](http://en.cppreference.com/w/cpp/algorithm/transform) for that purpose. I would maybe prefer Neils version though, depending on what is more readable.
---
Example by [xtofl](https://stackoverflow.com/users/6610/xtofl) (see comments):
```
#include <map>
#include <vector>
#include <algorithm>
#include <iostream>
template< typename tPair >
struct second_t {
typename tPair::second_type operator()( const tPair& p ) const { return p.second; }
};
template< typename tMap >
second_t< typename tMap::value_type > second( const tMap& m ) { return second_t< typename tMap::value_type >(); }
int main() {
std::map<int,bool> m;
m[0]=true;
m[1]=false;
//...
std::vector<bool> v;
std::transform( m.begin(), m.end(), std::back_inserter( v ), second(m) );
std::transform( m.begin(), m.end(), std::ostream_iterator<bool>( std::cout, ";" ), second(m) );
}
```
Very generic, remember to give him credit if you find it useful. | Copy map values to vector in STL | [
"",
"c++",
"stl",
"containers",
""
] |
Any suggestions for my stack based allocator?
(Except for suggestions to use a class with private/public members)
```
struct Heap
{
void* heap_start;
void* heap_end;
size_t max_end;
Heap(size_t size)
{
heap_start = malloc(size);
heap_end = heap_start;
max_end = size + (size_t) heap_start;
}
~Heap()
{
::free(heap_start);
}
void* allocate(size_t bytes)
{
size_t new_end = ((size_t) heap_end) + bytes;
if( new_end > max_end )
throw std::bad_alloc();
void* output = heap_end;
heap_end = (void*) new_end;
return output;
}
}
``` | You've implemented a stack-based allocator. You can't free individual allocations without leaving gaps. Usually a pool refers to a block of contiguous memory with fixed-size slots, which are doubly linked to allow constant-time add and delete.
[Here's one](https://raw.githubusercontent.com/justinhj/astar-algorithm-cpp/master/cpp/fsa.h) you can use as a guide. It's along the same lines as yours but includes basic iterators over allocated nodes, and uses templates to be type aware. | ```
size_t new_end = ((size_t) heap_end) + bytes;
```
**Not good**: never do things like that. You assume that sizeof(size\_t)==sizeof(void\*), and what happens if `bytes==(size_t)(-1)`? This would not work.
**Additionally,** you need to make sure that the pointers you return are aligned.
Otherwise you will have problems, so you need to make sure that `bytes` is a multiple of 4 or 8 according to your platform.
```
class {...
char *max_end,*heap_end,*heap_start;
};
...
max_end=heap_start+size;
...
bytes=align_to_platform_specific_value(bytes);
if(max_end-heap_end >= bytes) {
void* output = (void*)heap_end;
heap_end+=bytes;
return output;
}
throw std::bad_alloc();
```
Suggestion? Do not reinvent the wheel. There are many and good pool libraries. | Improvements for this C++ stack allocator? | [
"",
"c++",
"memory",
"memory-management",
""
] |
hey, I have this code that should save a `java.util.Vector` of custom serializable classes:
```
if(filename.equals("")){
javax.swing.JFileChooser fc = new javax.swing.JFileChooser();
if(fc.showSaveDialog(this) == javax.swing.JFileChooser.APPROVE_OPTION){
filename = fc.getSelectedFile().toString();
}
}
try{
java.io.FileOutputStream fos = new java.io.FileOutputStream(filename);
java.io.ByteArrayOutputStream baos = new java.io.ByteArrayOutputStream();
java.io.ObjectOutputStream oos = new java.io.ObjectOutputStream(baos);
oos.writeObject((Object)tl.entities);
baos.writeTo(fos);
oos.close();
fos.close();
baos.close();
}catch(java.io.FileNotFoundException e){
javax.swing.JOptionPane.showMessageDialog(this, "FileNotFoundException: Could not save file: "+e.getCause()+" ("+e.getMessage()+")", "Error", javax.swing.JOptionPane.ERROR_MESSAGE);
}catch(java.io.IOException e){
javax.swing.JOptionPane.showMessageDialog(this, "IOException: Could not save file: "+e.getCause()+" ("+e.getMessage()+")", "Error", javax.swing.JOptionPane.ERROR_MESSAGE);
}
```
But when saving, it shows one of the defined dialog errors saying: `IOException: Could not save file: null (com.sun.java.swing.plaf.windows.WindowsFileChooserUI)` and there's an `NullPointerException` in the command line at `javax.swing.plaf.basic.BasicListUI.convertModelToRow(BasicListUI.java:1251)` | I found the error, the save dialog script itself worked perfectly but the class in the vector had a null pointer that caused the error.
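For what it's worth, the write path itself can also be collapsed to an ObjectOutputStream wrapped directly around the FileOutputStream, with no intermediate ByteArrayOutputStream. A hedged round-trip sketch (class and method names are mine; the real element type must implement Serializable, here I just use String):

```java
import java.io.*;
import java.util.Vector;

public class SaveDemo {
    static void save(String filename, Vector<String> entities) throws IOException {
        ObjectOutputStream oos = new ObjectOutputStream(new FileOutputStream(filename));
        try {
            oos.writeObject(entities); // Vector itself is Serializable
        } finally {
            oos.close();
        }
    }

    @SuppressWarnings("unchecked")
    static Vector<String> load(String filename) throws IOException, ClassNotFoundException {
        ObjectInputStream ois = new ObjectInputStream(new FileInputStream(filename));
        try {
            return (Vector<String>) ois.readObject();
        } finally {
            ois.close();
        }
    }

    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("vec", ".ser");
        Vector<String> v = new Vector<String>();
        v.add("a");
        v.add("b");
        save(f.getPath(), v);
        if (!load(f.getPath()).equals(v)) throw new AssertionError("round trip failed");
        System.out.println("ok");
    }
}
```

Round-tripping like this in a unit test is also a quick way to find which element of the vector carries the offending null.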
But thanks for all the replies, I could use some of them :) | Maybe you can do a better check for the filename:
```
if (filename == null || "".equals(filename)){
javax.swing.JFileChooser fc = new javax.swing.JFileChooser();
if(fc.showSaveDialog(this) == javax.swing.JFileChooser.APPROVE_OPTION){
filename = fc.getSelectedFile().toString();
}
if (filename == null || "".equals(filename)) {
// Display a message or anything else
return;
}
}
try {
...
}
``` | Java save function doesn't work | [
"",
"java",
"dialog",
"save",
"ioexception",
""
] |
It seems like building a jar or zip from the exact same source files will always yield a different file. I tried this both using the java jar command, and the jar and zip tasks in ant.
It appears to be due to the fact that new jars/zips have the timestamp set to the current time on each stored file.
Is there a way to force a zip tool to simply use the timestamp on the file on the filesystem to ensure that a jar built from the exact same source will appear exactly the same? | Ok, a coworker and I came up with a solution that works for us.
Instead of reengineering our entire build process to not delete any class or jar files, we use this procedure:
1. Build new artifacts.
2. Use jardiff (part of [jnlp](http://java.sun.com/j2se/1.4.2/docs/guide/jws/downloadservletguide.html)) to compare changes from previous build.
3. If the diff jar that jardiff produces has no changes, get artifact from previous build.
Yeah, I know it sounds kludgy, but it sure beats rewriting the build script to take this into account. Also, we can do a completely clean build on a fresh machine (in the case of server failure), and this process will ensure that only actually updated jars are produced. | The binary difference is because of the timestamp of the manifest files.
If you let jar create a manifest itself, it will create the manifest on the fly and give it a timestamp of currentTimeMillis().
You can solve it by:
1. Do not add a manifest (if you're using ant you must use the zip task instead of jar)
2. Add the manifest like you add normal files. (So the manifest is a file on your filesystem and it isn't created on the fly) | Binary Difference in Zip/Jar file | [
"",
"java",
"jar",
"zip",
""
] |
Let's say I have a table CUSTOMERS with a key of CUSTOMER\_ID. I would like to return multiple rows based on a query:
```
SELECT NAME FROM CUSTOMERS WHERE CUSTOMER_ID in (130,130);
```
This always returns a single row that matches although I would like it to appear twice. My motivation is a bit convoluted and far more complex than the tiny example above. The spreadsheet I would like to cut and paste this information into was the result of a complex join that I do not have access to. The list that I have has 43,000 records. I have done this in the past by unloading the row into a file and looping over my input list and grepping the output. Is there a neat SQL trick I could use? | Whatever logic is generating the "IN (130, 130)" could presumably as easily generate
SELECT NAME FROM CUSTOMERS WHERE CUSTOMER\_ID = 130
SELECT NAME FROM CUSTOMERS WHERE CUSTOMER\_ID = 130
which would give it to you twice (and probably be easier to generate). | ## Method 1:
You can `UNION ALL` your query to itself. That should double the rows.
```
SELECT NAME FROM CUSTOMERS WHERE CUSTOMER_ID IN (130,131,...)
UNION ALL
SELECT NAME FROM CUSTOMERS WHERE CUSTOMER_ID IN (130,131,...)
```
That ought to do it. I hope you have a good reason for this, because it's fundamentally wrong.
## Method 2:
Create a dummy table, with one column and two rows. Join it, without any restrictions, allowing the database to create the Cartesian product:
```
CREATE TABLE dummy ( foo INTEGER PRIMARY KEY );
INSERT INTO dummy VALUES (1);
INSERT INTO dummy VALUES (2);
SELECT NAME FROM CUSTOMERS JOIN dummy WHERE CUSTOMER_ID IN (130,131,...)
```
You can use the dummy table with a bunch of values 1..*n* to generate 1 through *n* copies of each row by changing the join criteria/where clause. | Returning a row more than once in SQL from a single table? | [
"",
"sql",
""
] |
I have a BUNCH of $\_POST variables being sent in via a long form and instead of hard coding each one with a `mysql_escape_string()` is it ok for me to do the following? I don't know if this is actually safe and/or viable code.
```
foreach ($_POST as &$post_item){
$post_item = mysql_escape_string($post_item);
}
```
I'm fairly certain that because I'm using the &, it's passing it in by reference, not value, so I'm actually changing the value in the $\_POST.
Also, should I use `mysql_real_escape_string()` instead?
EDIT: I am using PDO and prepare() along with the above method. Does this take care of it for me? | Why not use `array_map()`?
```
$_POST = array_map('mysql_real_escape_string', $_POST);
```
But in reality you should be using parametrized/prepared statements.
`mysql_real_escape_string()` takes the current database character set into account, `mysql_escape_string()` does not. So the former is the better alternative in comparison.
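For reference, the prepared-statement route looks like this with PDO (the table and column names here are placeholders, not from the question):

```php
$stmt = $pdo->prepare('INSERT INTO users (name, email) VALUES (:name, :email)');
$stmt->execute(array(
    ':name'  => $_POST['name'],
    ':email' => $_POST['email'],
));
```

No escaping is needed at all; the driver transmits the values separately from the SQL text.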
Edit (following up the OP's edit to the question):
Since you already do PDO prepared statements, there is no need to modify your values. PDO takes care of everything, that's the whole point of it (**If you really put all data in parameters, that is** - just concatenating strings to build SQL statements leads to disaster with PDO or without). Escaping the values beforehand would lead to escaped values in the database. | Yes, you should be using `mysql_real_escape_string()`, if you're going to go that route. But the correct way to make sure the variables are safe to send to the database is using [Parameterized Queries](http://en.wikipedia.org/wiki/SQL_injection#Using_Parameterized_Statements) which are provided in PHP through either the [mysqli](http://php.net/mysqli) functions or [PDO](http://php.net/pdo). | is this at least mildly secure php code? | [
"",
"php",
"mysql",
"security",
"post",
"mysql-real-escape-string",
""
] |
I have seen many projects using the **`simplejson`** module instead of the **`json`** module from the Standard Library. Also, there are many different `simplejson` modules. Why would anyone use these alternatives instead of the one in the Standard Library? | `json` [is](http://docs.python.org/whatsnew/2.6.html#the-json-module-javascript-object-notation) `simplejson`, added to the stdlib. But since `json` was added in 2.6, `simplejson` has the advantage of working on more Python versions (2.4+).
`simplejson` is also updated more frequently than Python, so if you need (or want) the latest version, it's best to use `simplejson` itself, if possible.
A good practice, in my opinion, is to use one or the other as a fallback.
```
try:
import simplejson as json
except ImportError:
import json
``` | I have to disagree with the other answers: the built in **`json`** library (in Python 2.7) is not necessarily slower than **`simplejson`**. It also doesn't have [this annoying unicode bug](https://code.google.com/p/simplejson/issues/detail?id=40).
Here is a simple benchmark:
```
import json
import simplejson
from timeit import repeat
NUMBER = 100000
REPEAT = 10
def compare_json_and_simplejson(data):
"""Compare json and simplejson - dumps and loads"""
compare_json_and_simplejson.data = data
compare_json_and_simplejson.dump = json.dumps(data)
assert json.dumps(data) == simplejson.dumps(data)
result = min(repeat("json.dumps(compare_json_and_simplejson.data)", "from __main__ import json, compare_json_and_simplejson",
repeat = REPEAT, number = NUMBER))
print " json dumps {} seconds".format(result)
result = min(repeat("simplejson.dumps(compare_json_and_simplejson.data)", "from __main__ import simplejson, compare_json_and_simplejson",
repeat = REPEAT, number = NUMBER))
print "simplejson dumps {} seconds".format(result)
assert json.loads(compare_json_and_simplejson.dump) == data
result = min(repeat("json.loads(compare_json_and_simplejson.dump)", "from __main__ import json, compare_json_and_simplejson",
repeat = REPEAT, number = NUMBER))
print " json loads {} seconds".format(result)
result = min(repeat("simplejson.loads(compare_json_and_simplejson.dump)", "from __main__ import simplejson, compare_json_and_simplejson",
repeat = REPEAT, number = NUMBER))
print "simplejson loads {} seconds".format(result)
print "Complex real world data:"
COMPLEX_DATA = {'status': 1, 'timestamp': 1362323499.23, 'site_code': 'testing123', 'remote_address': '212.179.220.18', 'input_text': u'ny monday for less than \u20aa123', 'locale_value': 'UK', 'eva_version': 'v1.0.3286', 'message': 'Successful Parse', 'muuid1': '11e2-8414-a5e9e0fd-95a6-12313913cc26', 'api_reply': {"api_reply": {"Money": {"Currency": "ILS", "Amount": "123", "Restriction": "Less"}, "ProcessedText": "ny monday for less than \\u20aa123", "Locations": [{"Index": 0, "Derived From": "Default", "Home": "Default", "Departure": {"Date": "2013-03-04"}, "Next": 10}, {"Arrival": {"Date": "2013-03-04", "Calculated": True}, "Index": 10, "All Airports Code": "NYC", "Airports": "EWR,JFK,LGA,PHL", "Name": "New York City, New York, United States (GID=5128581)", "Latitude": 40.71427, "Country": "US", "Type": "City", "Geoid": 5128581, "Longitude": -74.00597}]}}}
compare_json_and_simplejson(COMPLEX_DATA)
print "\nSimple data:"
SIMPLE_DATA = [1, 2, 3, "asasd", {'a':'b'}]
compare_json_and_simplejson(SIMPLE_DATA)
```
And the results on my system (Python 2.7.4, Linux 64-bit):
> Complex real world data:
> json dumps 1.56666707993 seconds
> simplejson dumps 2.25638604164 seconds
> json loads 2.71256899834 seconds
> simplejson loads 1.29233884811 seconds
>
> Simple data:
> json dumps 0.370109081268 seconds
> simplejson dumps 0.574181079865 seconds
> json loads 0.422876119614 seconds
> simplejson loads 0.270955085754 seconds
For dumping, `json` is faster than `simplejson`.
For loading, `simplejson` is faster.
Since I am currently building a web service, `dumps()` is more important—and using a standard library is always preferred.
Also, `cjson` was not updated in the past 4 years, so I wouldn't touch it. | What are the differences between json and simplejson Python modules? | [
"",
"python",
"json",
"simplejson",
""
] |
I'm trying to insert some value pairs into a std::map.
In the first case, I receive a pointer to the map, dereference it and use the subscript operator to assign a value. i.e.
```
(*foo)[index] = bar;
```
Later, when I try to iterate over the collection, I'm returned key/value pairs which contain null for the value attribute in all cases except for the first (map.begin()) item. The weird thing is, if I do the insertion via the map's insert function, everything is well, i.e:
```
foo->insert(std::pair<KeyType,ValueType>(myKey, myValue));
```
Why would this be? Aren't the two methods functionally equivalent? I've pasted some snippets of actual code below for context
```
...
typedef std::map<int, SCNode*> SCNodeMap;
...
void StemAndCycle::getCycleNodes(SCNodeMap* cycleNodes)
{
(*cycleNodes)[root->getId()] = root;
SCNode* tmp = root->getSucc();
while(tmp->getId() != root->getId())
{
// (*cycleNodes)[tmp->getId()] == tmp; // crashes (in loop below)
cycleNodes->insert(std::pair<int, SCNode*>(tmp->getId(), tmp));//OK
std::pair<int, SCNode*> it = *(cycleNodes->find(tmp->getId()));
tmp = tmp->getSucc();
}
// debugging; print ids of all the SCNode objects in the collection
std::map<int, SCNode*>::iterator it = cycleNodes->begin();
while(it != cycleNodes->end())
{
std::pair<int, SCNode*> p = (*it);
SCNode* tmp = (*it).second; // null except for it = cycleNodes->begin()
std::cout << "tmp node id: "<<tmp->getId()<<std::endl;
it++;
}
}
```
I'm all out of ideas. Does anyone have a suggestion please? | In your actual code you have:
```
(*cycleNodes)[tmp->getId()] == tmp;
```
This will not assign tmp into the map, but will instead reference into the map creating an empty value (see @Neil Butterworth) - you have == instead of =. What you want is:
```
(*cycleNodes)[tmp->getId()] = tmp;
``` | You should be aware that operator[] for std::map will insert a value into the map if one does not exist when used in expressions like this:
```
if ( amap[x] == 42 ) {
...
}
```
If the value x does not exist, one will be created and assigned the value created by the value type's default constructor, or zero for the built-in types. This is almost never what you want, and you should generally avoid the use of operator[] with maps. | Weird bug while inserting into C++ std::map | [
"",
"c++",
"dictionary",
"stdmap",
"std",
""
] |
Which is the best Java memcached client, and why? | As the author of [spymemcached](http://code.google.com/p/spymemcached/), I'm a bit biased, but I'd say it's mine for the following reasons:
## Designed from scratch to be non-blocking everywhere possible.
When you ask for data, issue a set, etc... there's one tiny concurrent queue insertion and you get a Future to block on results (with some convenience methods for common cases like get).
## Optimized Aggressively
You can read more on my [optimizations](http://code.google.com/p/spymemcached/wiki/Optimizations) page, but I do whole-application optimization.
I still do pretty well in micro-benchmarks, but to compare fairly against the other client, you have to contrive unrealistic usage patterns (for example, waiting for the response on every set operation or building locks around gets to keep them from doing packet optimization).
## Tested Obsessively
I maintain a pretty rigorous test suite with [coverage reports](http://dustin.github.com/java-memcached-client/emma/) on every release.
Bugs still slip in, but they're usually pretty minor, and the client just keeps getting better. :)
## Well Documented
The [examples](http://code.google.com/p/spymemcached/wiki/Examples) page provides a quick introduction, but the [javadoc](http://dustin.github.com/java-memcached-client/apidocs/) goes into tremendous detail.
## Provides High-level Abstractions
I've got a Map interface to the cache as well as a functional CAS abstraction. Both binary and text support an incr-with-default mechanism (provided by the binary protocol, but rather tricky in text).
## Keeps up with the Specs
I do a [lot of work](http://cloud.github.com/downloads/dustin/memcached/changelog.html) on the server itself, so I keep up with protocol changes.
I did the first binary protocol server implementations (both a test server and in memcached itself), and this was the first production-ready client to support it, and does so first-class.
I've also got support for several hash algorithms and node distribution algorithms, all of which are well-tested for every build. You can do a stock ketama consistent hash, or a derivative using FNV-1 (or even java's native string hashing) if you want better performance. | I believe **memcached java client** is the best client.
## Features
* **Binary protocol support**. The fastest way to access the key/value pairs stored in the memcached server.
* **UDP protocol support**. You can set a key with the TCP protocol and get it with the UDP protocol. Actually, some big corporations do it like this.
* Support customized serialization and deserialization.
* Connection pool with NIO and direct buffers. The pool dynamically adds connections when it runs out.
## Performance
* Refer to [performance](http://wiki.github.com/gwhalin/Memcached-Java-Client/performance) for a benchmark test of existing popular memcached java clients.
* Deserializing while receiving the response
* Performance tuning applied to every line of the source code.
"",
"java",
"web-applications",
"memcached",
""
] |
**Note:**
```
/*
* Trivial code
*/
wchar_t *greeting = L"Hello World!";
char *greeting_ = "Hello World!";
```
**WinDbg:**
```
0:000> ?? greeting
wchar_t * 0x00415810
"Hello World!"
0:000> ?? greeting_
char * 0x00415800
"Hello World!"
0:000> db 0x00415800
00415800 48 65 6c 6c 6f 20 57 6f-72 6c 64 21 00 00 00 00 Hello World!....
00415810 48 00 65 00 6c 00 6c 00-6f 00 20 00 57 00 6f 00 H.e.l.l.o. .W.o.
00415820 72 00 6c 00 64 00 21 00-00 00 00 00 00 00 00 00 r.l.d.!.........
```
**Question:**
* What is the purpose of the **NULL** character: `00` between ASCII characters in [`wchar_t`](http://en.wikipedia.org/wiki/Wide_character) - **Win32**? | `wchar_t` is a wide-character string, so each character takes 2 bytes of storage. 'H' as `a wchar_t` is 0x0048. Since x86 is little-endian, you see the bytes in memory in order 48 00.
db in windbg will dump the bytes and provide how its viewed as an ASCII string, hence the H.E.L. ... output you see. You can use 'du' to dump the memory as a unicode string. | The answer is that wchar\_t characters are 16-bit quantities, thus requiring two bytes each. Each represents a [UTF-16](http://en.wikipedia.org/wiki/UTF-16) character. Since the letters you're using are within the ASCII range, they have values < 256, so the high byte is zero for each 2-byte pair. | Representation of wchar_t and char in WinDbg | [
"",
"c++",
"winapi",
"x86",
"windbg",
""
] |
I am trying to create a class that lets me call methods with a specified timeout. I have the actual timeout implementation working, but am having trouble encapsulating this into a class successfully.
I have decided at this time to limit myself to working with just the 5 Action delegates to put a timeout on methods that take 0 to 4 arguments. For the purposes of this question I will use the one-argument variant of Action.
My Ideal calling syntax would be something like this:
```
Action<int> a = new TimeoutAction(i => i + 1)
{
Timeout = TimeSpan.FromSeconds(10)
}
```
or this:
```
Action<int> a = (x => x + 1).Timeout(TimeSpan.FromSeconds(10))
```
I am almost sure that the first version isn't possible, as there is no way to inherit from a delegate. The second may be possible by using an extension method on the `Delegate` type.
A final fallback would be to create static methods that take in the appropriate delegate type and return a Delegate with the same signature but with the timeouts included.
Any suggestions or ideas? | Your last suggestion would be the right way:
> A final fallback would be to create
> static methods that take in the
> appropriate delegate type and return a
> Delegate with the same signature but
> with the timeouts included.
In other words:
```
public static Action<T> Timeout<T>(this Action<T> action, TimeSpan timeSpan);
public static Action<T1, T2> Timeout<T1, T2>(this Action<T1, T2> action, TimeSpan timeSpan);
public static Func<T, TResult> Timeout<T, TResult>(this Func<T, TResult> action, TimeSpan timeSpan);
public static Func<T1, T2, TResult> Timeout<T1, T2, TResult>(this Func<T1, T2, TResult> action, TimeSpan timeSpan);
/* snip the rest of the Action<...> and Func<...> overloads */
```
*"Why can't I just declare one method?"*
You could declare one method that accepts a `Delegate` and returns a `Delegate`, but then you'd lose the delegate type information (your second example wouldn't compile).
Unfortunately, the following example isn't valid C# -- you can't use `Delegate` as a type parameter constraint:
```
public static TDelegate Timeout<TDelegate>(this TDelegate d, TimeSpan timeSpan) where TDelegate : Delegate;
``` | Use [AOP](http://en.wikipedia.org/wiki/Aspect-oriented_programming) for that. Either [PostSharp](http://www.postsharp.org/) or [DynamicProxy](http://www.castleproject.org/dynamicproxy/index.html) | Creating a reusable method timeout class in C# | [
"",
"c#",
"delegates",
""
] |
How can I add a checkbox to a ListBox? I want to do it in ASP.NET. | You probably want to use [CheckBoxList](http://www.w3schools.com/aspnet/control_checkboxlist.asp). | Maybe [CheckBoxList](http://msdn.microsoft.com/en-us/library/system.web.ui.webcontrols.checkboxlist.aspx) can solve the problem? | How can I add a checkbox in a ListBox (ASP.NET)? | [
"",
"c#",
".net",
"asp.net",
""
] |
What is the best way to search a large binary file for a certain substring in C#?
To provide some specifics, I'm trying to extract the DWARF information from an executable, so I only care about certain parts of the binary file (namely the sections starting with the strings `.debug_info`, `.debug_abbrev`, etc.)
I don't see anything obvious in `Stream`, `FileStream`, or `BinaryReader`, so it looks like I'll have to read chunks in and search through the data for the strings myself.
Is there a better way? | There's nothing built into .NET that will do the search for you, so you're going to need to read in the file chunk by chunk and scan for what you want to find.
You can speed up the search in two ways.
Firstly, use buffered IO and transfer large chunks at a time - don't read byte by byte, read 64KB, 256KB or 1MB chunks.
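The chunked scan looks roughly like this (sketched in Python to keep it short; the helper name, chunk size, and overlap handling are illustrative, and a C# version over `FileStream` would follow the same shape):

```python
import io

def find_in_stream(stream, pattern, chunk_size=64 * 1024):
    """Return the absolute offset of the first match of `pattern`, or -1.
    Carries len(pattern) - 1 trailing bytes between reads so matches
    straddling a chunk boundary are not missed."""
    overlap = len(pattern) - 1
    tail = b""
    pos = 0  # absolute offset of the start of `tail + chunk`
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            return -1
        buf = tail + chunk
        idx = buf.find(pattern)
        if idx != -1:
            return pos + idx
        tail = buf[-overlap:] if overlap else b""
        pos += len(buf) - len(tail)

data = b"\x00" * 100000 + b".debug_info" + b"\x00" * 100
print(find_in_stream(io.BytesIO(data), b".debug_info"))  # 100000
```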
Secondly, don't do a linear scan for the piece you want - check out the [Boyer-Moore](http://en.wikipedia.org/wiki/Boyer-Moore_string_search_algorithm) (wikipedia link) algorithm for string searches - you can apply this to searching for the DWARF information you want. | There must be a DWARF C library you could compile and use interop with? I did some searching and found [this](http://reality.sgiweb.org/davea/dwarf.html). If a library from there could be compiled into a DLL on Windows (I assume you're using Windows), then you could use System.Runtime.InteropServices to interact with the DLL and extract your information from there.
Perhaps? | C# - Search Binary File for a Pattern | [
"",
"c#",
"file",
"binary",
"find",
"substring",
""
] |
I have a web application that issues requests to 3 databases in the DAL. I'm writing some integration tests to make sure that the overall functionality round trip actually does what I expect it to do. This is completely separate from my unit tests, just FYI.
The way I was intending to write these tests was something to the effect of this:
```
[Test]
public void WorkflowExampleTest()
{
using (var transaction = new TransactionScope())
{
Presenter.ProcessWorkflow();
}
}
```
The Presenter in this case has already been set up. The problem comes into play inside the ProcessWorkflow method because it calls various Repositories which in turn access different databases, and my SQL Server box does not have MSDTC enabled, so I get an error whenever I try to either create a new SQL connection or change a cached connection's database to target a different one.
For brevity the Presenter resembles something like:
```
public void ProcessWorkflow()
{
LogRepository.LogSomethingInLogDatabase();
var l_results = ProcessRepository.DoSomeWorkOnProcessDatabase();
ResultsRepository.IssueResultstoResultsDatabase(l_results);
}
```
I've attempted numerous things to solve this problem.
1. Caching one active connection at all times and changing the target database
2. Caching one active connection for each target database (this was kind of useless because pooling should do this for me, but I wanted to see if I got different results)
3. Adding additional TransactionScopes inside each repository so that they have their own transactions using the TransactionScopeOption "RequiresNew"
My 3rd attempt on the list looks something like this:
```
public void LogSomethingInLogDatabase()
{
using (var transaction =
new TransactionScope(TransactionScopeOption.RequiresNew))
{
//do some database work
transaction.Complete();
}
}
```
And the 3rd thing I tried actually got the unit tests to work, but all the transactions that completed actually HIT my database! So that was an utter failure, since the entire point is to NOT affect my database.
My question therefore is, what other options are out there to accomplish what I'm trying to do given the constraints I've laid out?
**EDIT:**
This is what "//do some database work" would look like
```
using (var l_context = new DataContext(TargetDatabaseEnum.SomeDatabase))
{
//use a SqlCommand here
//use a SqlDataAdapter inside the SqlCommand
//etc.
}
```
and the DataContext itself looks something like this
```
public class DataContext : IDisposable
{
static int ConnectionReferences { get; set; }
static SqlConnection Connection { get; set; }
TargetDatabaseEnum OriginalDatabase { get; set; }
public DataContext(TargetDatabaseEnum database)
{
if (Connection == null)
Connection = new SqlConnection();
if (Connection.Database != DatabaseInfo.GetDatabaseName(database))
{
OriginalDatabase =
DatabaseInfo.GetDatabaseEnum(Connection.Database);
Connection.ChangeDatabase(
DatabaseInfo.GetDatabaseName(database));
}
if (Connection.State == ConnectionState.Closed)
{
Connection.Open(); // <- ERROR HAPPENS HERE
}
ConnectionReferences++;
}
public void Dispose()
{
if (Connection.State == ConnectionState.Open)
{
Connection.ChangeDatabase(
DatabaseInfo.GetDatabaseName(OriginalDatabase));
}
if (Connection != null && --ConnectionReferences <= 0)
{
if (Connection.State == ConnectionState.Open)
Connection.Close();
Connection.Dispose();
}
}
}
``` | Ok, I found a way around this issue. The only reason I'm doing it this way is because I couldn't find ANY other way to fix this problem, and because it's in my integration tests, so I'm not concerned about this having adverse effects in production code.
I had to add a property to my DataContext to act as a flag to keep track of whether or not to dispose of the connection object when my DataContext is being disposed. This way, the connection is kept alive throughout the entire transaction scope, and therefore no longer bothers DTC.
Here's a sample of my new Dispose:
```
internal static bool SuppressConnectionDispose { get; set; }
public void Dispose()
{
if (Connection.State == ConnectionState.Open)
{
Connection.ChangeDatabase(
DatabaseInfo.GetDatabaseName(OriginalDatabase));
}
if (Connection != null
&& --ConnectionReferences <= 0
&& !SuppressConnectionDispose)
{
if (Connection.State == ConnectionState.Open)
Connection.Close();
Connection.Dispose();
}
}
```
This allows my integration tests to take the form of:
```
[Test]
public void WorkflowExampleTest()
{
using (var transaction = new TransactionScope())
{
DataContext.SuppressConnectionDispose = true;
Presenter.ProcessWorkflow();
}
}
```
I would not recommend utilizing this in production code, but for integration tests I think it is appropriate. Also keep in mind this only works for connections where the server is always the same, as well as the user.
I hope this helps anyone else who runs into the same problem I had. | 1. Set **Enlist=false** on connection string to avoid auto enlistment on transaction.
2. Manually enlist connections as participants in the transaction scope. (<http://msdn.microsoft.com/en-us/library/ms172153%28v=VS.80%29.aspx>) | How do you get around multiple database connections inside a TransactionScope if MSDTC is disabled? | [
"",
"c#",
"sql-server",
"transactions",
"asp.net-2.0",
""
] |
Someone recently told me,
"In the past, Google never indexed PHP pages".
I don’t believe that for several reasons. But I’m no SEO expert, or even a novice, so I wonder. Before I file that person under “unreliable”, I thought I’d ask the SO community: Is there anything to that?
Thanks. | Ridiculous. Some argue (correctly) that Google tends to favor static content because it rarely changes, but I'm not sure how true that is anymore.
Get it from the horse's mouth:
See: <http://googlewebmastercentral.blogspot.com/2008/09/dynamic-urls-vs-static-urls.html>
Also, it is true that JavaScript-generated content (or content pulled in by Ajax) is completely ignored. | Actually, I think "Google never indexed PHP pages" is absolutely a true statement. Google doesn't have access to the PHP code, so it can't index it. Google also doesn't index Java, Ruby, Perl, Python, or any other backend code.
Google only indexes output; it doesn't matter what generated that output. Google can't even tell what type of language generated the page (although it can guess by the extension). You could easily change Apache to treat all files ending in .asp as PHP files. | SEO History and PHP | [
"",
"php",
"seo",
""
] |
I keep getting this error after my application has been running for 2 days.
I've been told it's some kind of buffer overflow, but is that the only possibility?
The app is written in C++ using Visual C++ 6.0. | In a debug build, when you allocate a dynamic buffer with `new`, special guard bytes are inserted before and after the buffer.
Ex:
```
<Guard>=====buffer allocated on heap of required size=======<Guard>
```
If you overrun the buffer, the inserted guard gets corrupted, and when you try to delete the buffer, the debugger asserts after detecting the overrun.
It's a bit difficult to find a buffer overrun in a large code base. I would suggest a couple of approaches that can help you detect this scenario:
* Using tools like [Rational Purify](http://www-01.ibm.com/software/awdtools/purify/):
It's a good tool for detecting memory corruption.
* Debugging with WinDbg and GFlags enabled:
Refer to my [answer](https://stackoverflow.com/questions/781821/debug-visual-c-memory-allocation-problems/782085#782085) to a similar question here. | The simplest way to reproduce this is something like this:
```
//Allocate space for holding 10 ints
int *p = new int[10];
//Overwrite the memory.. doesn't crash here
p[10] = 8;
//Try to delete..crashes..
delete[] p;
```
Check whether you are writing to the memory location beyond its allocated space. | What could cause DAMAGE: after normal block error? | [
"",
"c++",
"visual-c++",
""
] |
I have been asking a lot of questions about a project I have been working on recently. Here is the scenario I am in and any help or point in the right direction would help a lot...
This is a network program built with a server and multiple clients. Each client has a GUI which must act according to commands sent from the server. Each client is wrapped within a class called `Player`. This `Player` has a GUI (extends `JFrame`) and a main method, and the Server has a main method only (no GUI). At first, this class was created in the main thread like this:
```
EventQueue.invokeLater(new Runnable()
{
public void run()
{
new Player().setVisible(true);
}
});
```
This was working fine, until I realized that the entire `Player` class was now executing in the EDT. So, when I wait for commands from the server the entire GUI locks up until that command is sent and proper actions are executed. As you can imagine, this is a horrible design and proved to be a real pain of a coding environment when every time you want to check for something you must find some crazy work around so that the GUI still remains intact.
Obviously, I must check for the commands from the Server in a separate thread and run the GUI components in the EDT. My second implementation had two classes - one for the GUI and one for the `Player`. The idea was that the `Player` had a variable that held the GUI so that I could access the GUI from the `Player` class, something like this:
```
class Player
{
public GUI gui;
...
// And then start this gui inside of the EDT.
EventQueue.invokeLater(new Runnable()
{
public void run()
{
this.gui = new GUI().setVisible(true);
}
});
```
This doesn't work either, because `this` inside of the new `Runnable` object refers to the `Runnable` object, not the `Player`.
How can I communicate between the `Player` class in one thread and the corresponding GUI class in the EDT thread? | * [Writing Responsive UIs with Swing](http://java.sun.com/docs/books/performance/1st_edition/html/JPSwingThreads.fm.html)
* [Worker Threads: Swing Worker](http://java.sun.com/docs/books/tutorial/uiswing/concurrency/index.html) | To handle your problem with the `this` pointer, you should write:
```
class Player
{
public GUI gui;
...
// And then start this gui inside of the EDT.
EventQueue.invokeLater(new Runnable()
{
public void run()
{
Player.this.gui = new GUI();
Player.this.gui.setVisible(true);
}
});
}
``` | Communication between the EDT and main threads | [
"",
"java",
"multithreading",
"swing",
"event-dispatch-thread",
""
] |
I quickly checked NumPy but it looks like it's using arrays as vectors? I am looking for a proper Vector3 type that I can instantiate and work on. | [ScientificPython](http://dirac.cnrs-orleans.fr/plone/software/scientificpython/) has a [Vector](http://dirac.cnrs-orleans.fr/ScientificPython/ScientificPythonManual/Scientific.Geometry.Vector-class.html) class. For example:
```
In [1]: from Scientific.Geometry import Vector
In [2]: v1 = Vector(1, 2, 3)
In [3]: v2 = Vector(0, 8, 2)
In [4]: v1.cross(v2)
Out[4]: Vector(-20.000000,-2.000000,8.000000)
In [5]: v1.normal()
Out[5]: Vector(0.267261,0.534522,0.801784)
In [6]: v2.cross(v1)
Out[6]: Vector(20.000000,2.000000,-8.000000)
In [7]: v1*v2 # dot product
Out[7]: 22.0
``` | I don't believe there is anything standard (but I could be wrong, I don't keep up with python that closely).
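A hand-rolled minimal version (purely illustrative; the class name and method set are arbitrary) that mirrors the cross/dot operations shown above:

```python
import math

class Vector3:
    """Bare-bones 3-D vector, just enough to mirror the operations above."""
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z
    def dot(self, o):
        return self.x * o.x + self.y * o.y + self.z * o.z
    def cross(self, o):
        return Vector3(self.y * o.z - self.z * o.y,
                       self.z * o.x - self.x * o.z,
                       self.x * o.y - self.y * o.x)
    def length(self):
        return math.sqrt(self.dot(self))
    def __repr__(self):
        return "Vector3(%g, %g, %g)" % (self.x, self.y, self.z)

v1, v2 = Vector3(1, 2, 3), Vector3(0, 8, 2)
print(v1.dot(v2))    # 22
print(v1.cross(v2))  # Vector3(-20, -2, 8)
```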
It's very easy to implement though, and you may want to build on top of the numpy array as a container for it anyway, which gives you lots of good (and efficient) bits and pieces. | Is there a Vector3 type in Python? | [
"",
"python",
"vector",
""
] |
I know that if you leave a member out of an initialization list in a no-arg constructor, the default constructor of that member will be called.
Do copy constructors likewise call the copy constructor of the members, or do they also call the default constructor?
```
class myClass {
private:
someClass a;
someOtherClass b;
public:
myClass() : a(DEFAULT_A) {} //implied is b()
myClass(const myClass& mc) : a(mc.a) {} //implied is b(mc.b)??? or is it b()?
};
``` | Explicitly-defined copy constructors do not call copy constructors for the members.
When you enter the body of a constructor, every member of that class will be initialized. That is, once you get to `{` you are guaranteed that all your members have been initialized.
**Unless specified, members are default-initialized in the order they appear in the class.** (And if they can't be, the program is ill-formed.) So if you define your own copy constructor, it's now up to you to call any member copy constructors as desired.
Here is a small program you can copy-paste somewhere and mess around with:
```
#include <iostream>
class Foo {
public:
Foo() {
std::cout << "In Foo::Foo()" << std::endl;
}
Foo(const Foo& rhs) {
std::cout << "In Foo::Foo(const Foo&)" << std::endl;
}
};
class Bar {
public:
Bar() {
std::cout << "In Bar::Bar()" << std::endl;
}
Bar(const Bar& rhs) {
std::cout << "In Bar::Bar(const Bar&)" << std::endl;
}
};
class Baz {
public:
Foo foo;
Bar bar;
Baz() {
std::cout << "In Baz::Baz()" << std::endl;
}
Baz(const Baz& rhs) {
std::cout << "In Baz::Baz(const Baz&)" << std::endl;
}
};
int main() {
Baz baz1;
std::cout << "Copying..." << std::endl;
Baz baz2(baz1);
}
```
As-is, this prints:
```
In Foo::Foo()
In Bar::Bar()
In Baz::Baz()
Copying...
In Foo::Foo()
In Bar::Bar()
In Baz::Baz(const Baz&)
```
Note that it's default-initializing the members of `Baz`.
By commenting out the explicit copy constructor, like:
```
/*
Baz(const Baz& rhs) {
std::cout << "In Baz::Baz(const Baz&)" << std::endl;
}
*/
```
The output will become this:
```
In Foo::Foo()
In Bar::Bar()
In Baz::Baz()
Copying...
In Foo::Foo(const Foo&)
In Bar::Bar(const Bar&)
```
It calls the copy-constructor on both.
And if we reintroduce `Baz`'s copy constructor and explicitly copy a single member:
```
Baz(const Baz& rhs) :
foo(rhs.foo)
{
std::cout << "In Baz::Baz(const Baz&)" << std::endl;
}
```
We get:
```
In Foo::Foo()
In Bar::Bar()
In Baz::Baz()
Copying...
In Foo::Foo(const Foo&)
In Bar::Bar()
In Baz::Baz(const Baz&)
```
As you can see, once you explicitly declare a copy-constructor *you* are responsible for the copying of all class members; it's your constructor now.
This applies for all constructors, including move constructors. | For any member variable having a default constructor that default constructor is invoked if you have not explicitly added any other constructor call for that member variable into the initialization list. | Copy constructor initialization lists | [
"",
"c++",
"copy-constructor",
""
] |
I was looking for a way to print a string backwards, and after a quick search on google, I found this method:
Suppose `a` is a string variable. This will return the `a` string backwards:
```
a[::-1]
```
Can anyone explain how that works? | Sure, the `[::]` is the [extended slice](https://docs.python.org/whatsnew/2.3.html#extended-slices) operator. It allows you to take substrings. Basically, it works by specifying which elements you want as [begin:end:step], and it works for all sequences. Two neat things about it:
* You can omit one or more of the elements and it does "the right thing"
* Negative numbers for begin, end, and step have meaning
For begin and end, if you give a negative number, it means to count from the end of the sequence. For instance, if I have a list:
```
l = [1,2,3]
```
Then `l[-1]` is 3, `l[-2]` is 2, and `l[-3]` is 1.
For the `step` argument, a negative number means to work *backwards* through the sequence. So for a list::
```
l = [1,2,3,4,5,6,7,8,9,10]
```
You could write `l[::-1]` which basically means to use a step size of -1 while reading through the list. Python will "do the right thing" when filling in the start and stop so it iterates through the list backwards and gives you `[10,9,8,7,6,5,4,3,2,1]`.
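Putting the pieces together in a quick runnable check:

```python
a = "12345"
l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

print(a[-1])    # 5   (negative index counts from the end)
print(l[::-1])  # [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]
print(a[::2])   # 135 (every other character)
print(a[::-1])  # 54321
```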
I've given the examples with lists, but strings are just another sequence and work the same way. So `a[::-1]` means to build a string by joining the characters you get by walking backwards through the string. | The "-1" part represents the "step" part of the slicing—in this case, it goes through the string 1 character at a time, but backwards (a negative step means start from the end of the string). If you specify the step to be 2, for instance, you would get every other character of the string, starting with the first one. If you specify a step of -2, then you'd get every other character of the string, starting with the last character and working backwards.
So, in a nutshell, if `a = '12345'`:
* `a[::2]` becomes `135`
* `a[::-1]` becomes `54321`
* `a[::-2]` becomes `531` | Understanding string reversal via slicing | [
"",
"python",
"string",
"slice",
""
] |
I have a table of products. I have listed the items with pagination using the rand() function. When using rand(), it displays the items in random order, but records are repeated as we go from one page to another.
Is it possible to get non-repeated items on each page using rand()? If yes, how? If not, what are the alternatives? | In a situation like that, I might see if there's a way to create a list of the product IDs (I'm assuming each product has some sort of unique ID) and shuffle it. The list might be stored as a temporary database table or it might be stored as a session variable in your web application, for example... that way you could get access to any part of the randomly ordered list without having to recompute the whole thing from the beginning.
This is obviously best if you have few people accessing the site at any given time, but you expect each of them to go through several pages of the results. | Well, you could generate a random seed when doing the first query (in your app code), and then keep that seed between requests - and supply this seed to the `rand([seed])` function. That should do it...
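The application-side version of that idea can be sketched like this (hypothetical helper; with MySQL you would instead pass the stored seed to `RAND(seed)` in the query):

```python
import random

def page_of_ids(all_ids, seed, page, per_page):
    """Same seed -> same shuffled order on every request, so pages
    never repeat items as the user moves through them."""
    ids = list(all_ids)
    random.Random(seed).shuffle(ids)  # private RNG; global state untouched
    start = page * per_page
    return ids[start:start + per_page]

ids = range(1, 101)
page0 = page_of_ids(ids, seed=42, page=0, per_page=10)
page1 = page_of_ids(ids, seed=42, page=1, per_page=10)
```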
Of course, the specifics would depend on how exactly your SQL code is working at the moment; you *might* need to loop to consume the first page (or so) of random numbers to ignore. | how to get random data from database? | [
"",
"sql",
"random",
""
] |
When trying to figure out if a string is null or empty, I usually have the string already. That's why I would have expected a utility function such as String.IsNullOrEmpty() to work without parameters:
```
String myString;
bool test=myString.IsNullOrEmpty();
```
However, this does not work, because IsNullOrEmpty expects a String parameter. Instead, I have to write:
```
String myString;
bool test=String.IsNullOrEmpty(myString);
```
Why is this so? It seems unnecessarily clunky. Of course I can easily write my own extension method for this, but it seems like a very obvious omission, so I am wondering if there is any good reason for this. I can't believe that the parameterless overload of this function has just been forgotten by Microsoft. | This method has been around long before extension methods were added to C#, and before extension methods, there was no way to define an instance method/property such as `xyz.IsNullOrEmpty()` that you could still call if `xyz` was `null`. | If the String were `null`, calling `IsNullOrEmpty()` would cause a `NullReferenceException`.
```
String test = null;
test.IsNullOrEmpty(); // Instance method causes NullReferenceException
```
Now we have extension methods and we can implement this with an extension method and avoid the exception. But always keep in mind that this only works because extension methods are nothing more than syntactical sugar for static methods.
```
public static class StringExtension
{
public static Boolean IsNullOrEmpty(this String text)
{
return String.IsNullOrEmpty(text);
}
}
```
With this extension method the following will never throw an exception
```
String test = null;
test.IsNullOrEmpty(); // Extension method causes no NullReferenceException
```
because it is just syntactical sugar for this.
```
StringExtension.IsNullOrEmpty(test);
``` | Why is there no IsNullOrEmpty overload method without parameters? | [
"",
"c#",
""
] |
I need to sort an array containing a list of words and search the same using binarysearch. For certain reasons, the word-list must always be sorted using the sorting-rules of "en-US" i.e. American Regional Settings. The code will run under various international Operating Systems and of course, this will mean that the word-list will be sorted differently according to the local Regional Settings in use. One problem could arise on a computer/device running with Lithuanian Regional Settings. Why? Because the letter "**Y**" in most languages is sorted like X-**Y**-Z while in Lithuanian, the sort order is I-**Y**-J. This behavior would create havoc to my program.
On a desktop-PC, I could change momentarily the Regional Settings into American English by using:
Thread.CurrentThread.CurrentCulture = CultureInfo.CreateSpecificCulture("en-US")
However, since I am developing for Windows Mobile (CF.NET), this piece of code is not possible to implement.
I found some hacks which could let me change the Regional Settings on the device programmatically but they are not "official" and considered risky so I prefer avoiding these.
So my question is: how can I force Array.Sort and Array.BinarySearch to use CultureInfo = "en-US" while sorting and searching regardless of the Regional Settings set on the device?
I believe I could use:
```
Public Shared Function BinarySearch(Of T) ( _
array As T(), _
value As T, _
comparer As IComparer(Of T) _
) As Integer
```
and implement Comparer to take into consideration CultureInfo (and set it to "en-US") but I don't know how to do that despite trying hard. If anyone could post some sample-code in VB.Net or C# or an explanation of how to do it, I would be very grateful.
If you are aware of any alternative solution which works in CF.Net, then, of course, I am all ears.
Thanks.
EDIT:
I will consider Twanfosson's answer as the accepted solution since my question clearly stated that I wanted to maintain an association with the English language.
However, in terms of flexibility, I believe Guffa's answer is the best one. Why? Let's use another example: In German, the letter **Ö** is sorted **Ö**-X-Z while in Swedish and Finnish, the order is X-Z-**Ö**. In Estonian the sort order is Z-**Ö**-X. Complicated, isn't it? Guffa's solution will let me force Swedish sorting-order (changing CultureInfo) on a device running under German Regional settings. Using **Comparer.DefaultInvariant** with its association to English wouldn't help in this case; the letter **Ö** would probably end up next to O. Therefore my vote will go to Guffa. | Is it not possible to use the [Invariant](http://msdn.microsoft.com/en-us/library/system.globalization.cultureinfo.invariantculture.aspx) culture?
> InvariantCulture retrieves an instance
> of the invariant culture. It is
> associated with the English language
> but not with any country/region.
Using the invariant culture would make this trivial.
```
Array.Sort( myArray, Comparer.DefaultInvariant );
Array.BinarySearch( myArray, myString, Comparer.DefaultInvariant );
``` | Well, the answer to both is to implement a comparer. Create a class that implements the `IComparer(Of String)` interface and has it's own `CultureInfo` object that it uses to compare the strings:
```
Public Class StringComparerEnUs
Implements IComparer(Of String)
Private _culture As CultureInfo
Public Sub New()
_culture = New CultureInfo("en-US")
End Sub
    Public Function Compare(ByVal x As String, ByVal y As String) As Integer Implements IComparer(Of String).Compare
        Return String.Compare(x, y, False, _culture)
End Function
End Class
```
Now you can use it to sort the strings:
```
Array.Sort(theArray, New StringComparerEnUs())
```
and to find them:
```
pos = Array.BinarySearch(theArray, "Dent, Arthur", New StringComparerEnUs())
```
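The same idea, a comparer pinned to one explicit collation instead of the device's regional settings, is language-agnostic. A minimal Python sketch (the alphabet table is illustrative, hard-coding the Swedish rule that **Ö** sorts after Z):

```python
# Sketch: sort with a fixed, explicit alphabet order instead of the
# machine's locale, mirroring a culture-pinned IComparer.
SWEDISH_ORDER = "ABCDEFGHIJKLMNOPQRSTUVWXYZÅÄÖ"
RANK = {ch: i for i, ch in enumerate(SWEDISH_ORDER)}

def swedish_key(word):
    # Characters outside the table sort last.
    return [RANK.get(ch.upper(), len(SWEDISH_ORDER)) for ch in word]

names = ["Öberg", "Zander", "Xu"]
print(sorted(names, key=swedish_key))  # Swedish order: X, Z, Ö
```

The key function plays the role of the comparer: the ordering is baked into the code, so it gives the same result on any machine.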
(The class can, of course, be made a bit more general by accepting a culture string in the constructor, and you can also add a variable to make use of the `ignoreCase` parameter in the `String.Compare` call.) | .net - Problems with Array.Sort and Array.BinarySearch - culture and globalization | [
"",
"c#",
".net",
"vb.net",
"windows-mobile",
"compact-framework",
""
] |
I recently have taken the support and programming of a web system written in JSF. The code is kind of messy and redundant, and yes, no documentation exists.
The system has over 40 jar libraries, and most of them are redundant due to old versions and testing. To remove one jar, I must check that it's not imported in the code, so I searched the code for the jar import path (I'm using IntelliJ IDE), made sure that it's not used, and removed it.
However, after compiling the code, a number of run-time errors occurred during testing. I figured out that I removed some jars which are used by other existing jars.
The problem, how do I make sure before removing a jar that it's not used by another jar/java class?
Even though jars contain compiled classes, those classes *do* retain the import paths of the libraries they require. But I can't search them with IntelliJ (it does not search inside jar files).
The only way that I'm doing now is to test the system every time I remove a jar and see if I can crash it! This is totally not an easy way due to the huge number of features to be tested.
I wish there were a tool where I could submit a number of java files/jars and have it display the dependencies between them. | I know that there was a tool coming out of the JBoss project called JBoss TattleTale, might be worth taking a look:
<http://www.jboss.org/tattletale> | [JDepend](http://clarkware.com/software/JDepend.html) will analyze dependencies for you for any number of JARs, class files, etc. Relating the packages it reports to those JARs should be a trivial extra step. | How To Check Dependencies Between Jar Files? | [
"",
"java",
"jar",
"dependencies",
"redundancy",
""
] |
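Related to the jar question above: listing the classes a jar provides is just reading zip entry names, which is one half of the wished-for tool. A hedged Python sketch (this is not TattleTale or JDepend; real dependency analysis would also read each class file's constant pool):

```python
import io
import zipfile

def classes_in_jar(jar_bytes):
    """Return the fully-qualified class names contained in a jar (a zip)."""
    with zipfile.ZipFile(io.BytesIO(jar_bytes)) as z:
        return {name[:-len(".class")].replace("/", ".")
                for name in z.namelist() if name.endswith(".class")}

# Build a tiny fake jar in memory just for illustration.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("com/example/Foo.class", b"\xca\xfe\xba\xbe")
    z.writestr("com/example/util/Bar.class", b"\xca\xfe\xba\xbe")

print(sorted(classes_in_jar(buf.getvalue())))
```

Indexing every jar this way at least tells you which classes would disappear if a given jar were removed.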
I want to write a **simple P2P** test app which uses HTTP as
underlying protocol.
The app has to **decide on demand**, if it should
act as a **HTTP server**, or as a **HTTP client**.
The classic way which I am aware of, would be to deploy the app on
some **existing** HTTP server. But this is the wrong way for my intention.
It has to be the other way: the server is started by the app (and only
if it decides to).
Is there a way to execute the HTTP server part **on demand**
(without deploying the app itself on a server)?
Can I do it by linking Glassfish or Tomcat libraries
and call some "main" method?
**Edit: IT WORKS!!!**
I now tried **Jetty** along with **JAX-RS (Jersey)** successfully!
Thank you all.
(the REST class is trivial and left out)
```
package my.p2p;
import com.sun.jersey.spi.container.servlet.*;
import org.mortbay.jetty.*;
import org.mortbay.jetty.servlet.*;
public class Main {
public static void main(String[] args)
throws Exception {
ServletHolder holder = new ServletHolder(ServletContainer.class);
holder.setInitParameter("com.sun.jersey.config.property.resourceConfigClass",
"com.sun.jersey.api.core.PackagesResourceConfig");
holder.setInitParameter("com.sun.jersey.config.property.packages", "my.p2p.rest");
Server server = new Server(8000);
Context context = new Context(server, "/", Context.SESSIONS);
context.addServlet(holder, "/*");
server.start();
}
}
``` | You may use [Jetty](http://www.mortbay.org/jetty/) or some other embeddable HTTP server. From Jetty's web site:
> Jetty is an open-source project providing a HTTP server, HTTP client and javax.servlet container. These 100% java components are full-featured, standards based, small foot print, embeddable, asynchronous and enterprise scalable. Jetty is dual licensed under the Apache License 2.0 and/or Eclipse Public License 1.0. Jetty is free for commercial use and distribution under the terms of either of those licenses. | Beside the other answers there also is [NanoHTTPD](http://elonen.iki.fi/code/nanohttpd/). Its very tiny, in just one class and maybe the right thing if yout don't need more or just want to keep things minimal. | Java: Run HTTP server on demand from my app? | [
"",
"java",
"http",
"jakarta-ee",
"p2p",
""
] |
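The pattern the question above asks for (the app itself starts an HTTP server only when it decides to) can be sketched with Python's standard library; the handler and message here are made up for illustration:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello from on-demand server"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the demo quiet

# The app decides, at runtime, to become a server; port 0 picks a free port.
server = HTTPServer(("127.0.0.1", 0), Hello)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/" % server.server_address[1]
reply = urllib.request.urlopen(url).read().decode()
print(reply)
server.shutdown()
```

No deployment step is involved: the server object is created, runs, and is shut down entirely under the application's control, just like the embedded Jetty example.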
Is it possible to create some Linq that generates a List containing all possible combinations of a series of numbers??
If you enter "21" it would generate a list with the elements:
```
list[0] = "21"
list[1] = "22"
list[2] = "11"
list[3] = "12"
```
(Not nessesarily in that order)
I understand you can use range to do things like:
```
List<char> letterRange = Enumerable.Range('a', 'z' - 'a' + 1).Select(i => (Char)i).ToList(); //122 - 97 + 1 = 26 letters/iterations
```
Which generates the alphabet from a-z. But I cannot seem to transfer this knowledge into building a combination generator.
I have been able to figure it out with the following code, but it seems way too bulky and I am sure it can be done in a few lines. It really does feel like a bad solution.
Imagine I have called `GetAllCombinations("4321")` if it helps
```
public static String[] GetAllCombinations(String s)
{
var combinations = new string[PossibleCombinations(s.Length)];
int n = PossibleCombinations(s.Length - 1);
for (int i = 0; i < s.Length; i++)
{
String sub;
String[] subs;
if (i == 0)
{
sub = s.Substring(1); //Get the first number
}
else if (i == s.Length - 1)
{
sub = s.Substring(0, s.Length - 1);
}
else
{
sub = s.Substring(0, i) + s.Substring(i + 1);
}
subs = GetAllCombinations(sub);
for (int j = 0; j < subs.Length; j++)
{
combinations[i * n + j] = s[i] + subs[j];
}
}
return combinations;
}
public static int PossibleCombinations(int n) //Combination possibilities. e.g 1-2-3-4 have 24 different combinations
{
int result = 1;
for (int i = 1; i <= n; i++)
result *= i;
return result;
}
``` | For what it's worth, try something like this:
```
public static IEnumerable<string> GetPermutations(string s)
{
if (s.Length > 1)
return from ch in s
from permutation in GetPermutations(s.Remove(s.IndexOf(ch), 1))
select string.Format("{0}{1}", ch, permutation);
else
return new string[] { s };
}
``` | For the record: Josh's answer the generic way:
```
public static IEnumerable<IEnumerable<T>> GetPermutations<T>(IEnumerable<T> items) {
    if (items.Count() > 1) {
        // note: Where(i => !i.Equals(item)) removes *all* equal items, so this assumes the elements are distinct
        return items.SelectMany(item => GetPermutations(items.Where(i => !i.Equals(item))),
(item, permutation) => new[] { item }.Concat(permutation));
} else {
return new[] {items};
}
}
``` | Combination Generator in Linq | [
"",
"c#",
"linq",
"combinations",
""
] |
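The pick-one-then-recurse scheme used in both answers above is easy to cross-check against a standard library. A Python sketch (the slicing removes exactly one occurrence, so duplicate characters are handled):

```python
from itertools import permutations

def perms(s):
    # Mirror of the recursive answers: pick each character, permute the rest.
    if len(s) <= 1:
        return [s]
    return [ch + rest
            for i, ch in enumerate(s)
            for rest in perms(s[:i] + s[i + 1:])]

result = sorted(perms("432"))
print(result)
print(result == sorted("".join(p) for p in permutations("432")))
```

A string of length n yields n! results, matching the `PossibleCombinations` factorial in the question.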
I've read some articles on how to create anonymous types in C#.
What are some use cases for these things? To me, it seems like it might make things a little more difficult to understand declaring objects and their members inline.
When does it make sense to use Anonymous Types? | I like to use anonymous types when I need to bind to a collection which doesn't exactly fit what I need. For example here's a sample from my app:
```
var payments = from terms in contract.PaymentSchedule
select new
{
Description = terms.Description,
Term = terms.Term,
Total = terms.CalculatePaymentAmount(_total),
Type=terms.GetType().Name
};
```
Here I then bind a datagrid to payments.ToList(). The thing here is that I can aggregate multiple objects without having to define an intermediary. | I often use them when databinding to complex controls -- like grids.
It gives me an easy way to format the data I'm sending to the control to make the display of that data easier for the control to handle.
```
GridView1.DataSource = myData.Select(x => new {Name=x.Description, Date = x.Date.ToShortDate() });
```
But, later on, after the code is stable, I will convert the anonymous classes to named classes.
I also have cases (Reporting Services) where I need to load them using non-relational data, and Reporting Services requires the data to be FLAT! I use LINQ/Lambda to flatten the data easily for me. | What are some examples of how anonymous types are useful? | [
"",
"c#",
".net",
"anonymous-types",
""
] |
I am creating an auto-suggest function for my website's search box. Every time the user presses a new key, JavaScript calls a web service on the server side to get the 10 most relevant keywords from the DB, and JavaScript then populates the search autosuggest list.
My function is not too slow, but compared to what live.com or google.com are doing it is very slow; I tested them, and it really feels as if they are getting the keywords from my PC rather than from their servers.
How do they get keywords this fast, when surely they have millions of times more keywords than I do?
Is there a well-known technique for doing this?
Also, using Firebug, I found that they are not calling a web service (or maybe they are calling one in a way I am not aware of), but I did find in the Net tab that a new GET request happens. | Not sure where you're looking, but certainly on live.com I get a request for each letter:
[](https://i.stack.imgur.com/wxooV.png)
As you can see, there's very little coming back across the wire - 500B - that's what you're aiming for - a lean web service that returns the minimum you need to display that back to the user.
Then on top of that, as the others have said, cache previous responses, etc.
Not that the results often aren't alphabetical, and so if you don't display your ordering criteria, you can work on the principle of "Something now is better than completely accurate later". | Rather than making a request every keypress, what if you just make a request every certain amount of time if there has been a keypress in the interim? If you did something like 100ms intervals, it would still look "instant" but could potentially be much less load on your server. Also, are you having the client cache the keywords? If the user backspaces in the search field, it shouldn't have to re-contact the server to get the keywords. Also, you can immediately filter your current list of keywords on every keypress without contacting the server (you'll just end up with less than 10 of them because some/all of the ones you already have will not contain the letter just typed). That can fill in the "gaps" between actual requests for data to make it seem more instant. | How to create autosuggest to get keywords as fast as google search or live search | [
"",
"asp.net",
"javascript",
"autocomplete",
""
] |
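The client-side caching idea from the second answer above (once a batch of keywords is fetched, further keypresses can narrow it locally without another round trip) is simple to sketch:

```python
def filter_suggestions(cached, typed, limit=10):
    """Narrow an already-fetched suggestion list as the user keeps typing."""
    typed = typed.lower()
    return [kw for kw in cached if kw.lower().startswith(typed)][:limit]

cached = ["stack", "stackoverflow", "standard", "static", "statistics"]
print(filter_suggestions(cached, "sta"))
print(filter_suggestions(cached, "stat"))
```

The server is only consulted again when the local list becomes too sparse (or the user backspaces past the prefix the cache was fetched for).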
### Duplicate:
> [How to do a Select in a Select](https://stackoverflow.com/questions/760950/how-to-do-a-select-in-a-select)
I have 2 tables:
```
TABLE1
Table1Id
TABLE2
Table2Id
Table1Id
UserId
```
TABLE2 has thousands of entries in it. I want to return a list of TABLE1 entries where there isn't an entry in TABLE2 for it for a particular user. So, where there isn't a foreign key entry in TABLE2. A query like:
```
select count(*) from TABLE1 where Table1Id not in (
select Table1Id from TABLE2 where id_user = 1)
```
However, that query runs very slowly. What would be the most efficient way of getting the results I require? | There is a similar [question](https://stackoverflow.com/questions/760950/how-to-do-a-select-in-a-select)
I think it would be better
```
SELECT COUNT(*)
FROM TABLE1
WHERE NOT EXISTS (SELECT Table1Id FROM TABLE2 WHERE TABLE2.Table1Id = TABLE1.Table1Id AND UserID = 1)
```
I would check the indexes also, as ck suggested | What about
```
select Table1Id from TABLE1
minus
select Table1Id from TABLE2 where id_user = 1
```
I am not sure whether MS SQL supports MINUS (SQL Server's equivalent is EXCEPT). If not, you should try a correlated subquery. | SQL Query Help | [
"",
"sql",
"t-sql",
""
] |
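Both query shapes above (NOT IN and NOT EXISTS) compute an anti-join, which in plain terms is a set difference. A small sketch with made-up rows:

```python
table1_ids = {1, 2, 3, 4, 5}
table2 = [  # (table2_id, table1_id, user_id)
    (10, 1, 1),
    (11, 3, 1),
    (12, 4, 2),  # belongs to a different user
]

# TABLE1 rows referenced by TABLE2 for user 1
used_by_user_1 = {t1 for (_, t1, uid) in table2 if uid == 1}

# the anti-join: TABLE1 rows with no matching TABLE2 row for user 1
unmatched = table1_ids - used_by_user_1
print(len(unmatched), sorted(unmatched))
```

An index on `TABLE2 (UserId, Table1Id)` lets the database compute this difference without scanning all of TABLE2.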
I have a case where I'm returning database results and displaying them on a page based on a search term. This part is working fine, but I want to highlight these search terms by wrapping them in span tags. I started to write a function that I called on each result that used the str_replace function, but then it dawned on me that this would also affect any text that is in HTML tags. Does anybody have a function they use to do this effectively? I'm using PHP 4.
Thanks! | If you are returning the results *and* displaying them, then shouldn't you have access to the raw data before it gets churned out in html? If so, then do your modifying on the raw data by adding the span tags in there.
If you are saying that your raw data may *already* have html in it, you may be able to use a regex to detect whether you are inside an html tag, and use preg\_replace instead of str\_replace.
Other options:
- load results into a DOM parser and only do replace operations on the leaf text nodes
- write your own parser to keep track of whether you are inside an HTML tag or not
- yank out all the HTML tags from your results, put in placeholders like
"[[[tag1]]] blah blah blah [[[tag2]]]", then do your replace on the remaining text, then substitute your tags back in | I'd go for highlight with javascript
<http://johannburkard.de/blog/programming/javascript/highlight-javascript-text-higlighting-jquery-plugin.html>
```
$(document).ready(function(){
$('#your_results_list').removeHighlight().highlight('search_word');
});
```
this way you don't mess with the source, user can turn highlight if you want, etc. | Highlighting Search Terms | [
"",
"php",
"php4",
""
] |
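The 'only replace outside HTML tags' requirement from the question above can be sketched by splitting on tags and substituting only in the text segments. This assumes reasonably well-formed HTML; a real parser is safer:

```python
import re

def highlight(html, term):
    # Split into tag and non-tag segments; wrap the term only in text parts.
    parts = re.split(r"(<[^>]*>)", html)
    pattern = re.compile(re.escape(term), re.IGNORECASE)
    return "".join(
        part if part.startswith("<")
        else pattern.sub(r'<span class="hl">\g<0></span>', part)
        for part in parts
    )

out = highlight('<a href="span.html">a span of text</a>', "span")
print(out)
```

Note how the "span" inside the `href` attribute is left untouched, which is exactly the failure mode a naive str_replace would hit.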
I've inherited an application with a glaring security hole.
It has session-based security, but file uploads (which are user specific) are not secured in any way and they are stored in the public file tree.
Filenames do not follow any convention as such, making them hard to guess, but the data is sensitive and thus I need to implement a security measure to prevent unauthorized file access.
Moving the location of the files is not really an option, so I'm looking at a htaccess solution to forward requests to a php handler script.
Does anyone have experience in implementing this type of thing or any good alternative solutions? Specific examples of .htaccess syntax greatly appreciated, as I'm struggling in this area. | Don't really understand why moving them isn't an option, since pushing requests for them to a handler means it no longer matters where they're stored. But you're the man on the scene.
.htaccess looks like:
```
RewriteEngine on
RewriteRule path/to/where/these/files/live/(.*) /handlerscript.php/$1
```
Then you pick up the remaining file path and name from `$_SERVER['PATH_INFO']`. | Well, you could make apache parse .jpg file's for a certain folder adding the following to your .htaccess
```
AddHandler php5-cgi .jpg
```
then you could set up a PHP file to parse the request the way chaos was recommending, perform the required validation, and then just return JPEG headers along with the correct picture you'd like to display
here's an example
```
<?php
if($validUser)
{
header("Cache-control: No-cache");
header("Pragma: No-cache");
header("Content-Type: image/jpeg");
//correct picture address
$img = imagecreatefromjpeg("2326_b_lil.jpg");
imagejpeg($img);
}
else
{
//code for error image
}
?>
```
please let me know if you want a more extensive example | Restrict file access to authorized php users | [
"",
"php",
"security",
"session",
".htaccess",
"file",
""
] |
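Whatever language the handler script above is written in, it must refuse requested names that escape the upload directory. A sketch of that check (the directory path and helper name are hypothetical):

```python
import os.path

UPLOAD_DIR = "/var/app/uploads"  # hypothetical location

def resolve_upload(requested):
    """Map a user-supplied name to a path inside UPLOAD_DIR, or None."""
    full = os.path.normpath(os.path.join(UPLOAD_DIR, requested))
    if not full.startswith(UPLOAD_DIR + os.sep):
        return None  # "../" tricks would land outside the upload directory
    return full

print(resolve_upload("report.pdf"))
print(resolve_upload("../../etc/passwd"))
```

Combined with the session check, this closes both holes: unauthenticated access and path traversal.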
I want an event handler that fires when the user hits reload. Is onrefresh or onreload the correct handler to add to ? Also, will this even fire before or after onunload? Are there an browser inconsistencies? Thanks. | I don't think there are events called onrefresh or onreload. You can know when the page is unloading, but knowing why (i.e. where the user is going next) is outside JavaScript's security sandbox. The only way to know whether the page has been reloaded is to know where the user was on the last page request, which is also outside the scope of JavaScript. You can sometimes get that via `document.referrer`, but it relies on the browser's security settings to permit access to that information. | The WindowEventHandlers.onbeforeunload event handler property contains the code executed when the beforeunload is sent. This event fires when a window is about to unload its resources.
```
window.onbeforeunload = function () {
return 'Are you sure you want to leave?';
}
```
This will show a confirm dialog to the user with the message you returned in your function. It will give the user a leave this page or cancel option.
There is no way around the confirm as it could be used for malicious reasons.
<https://developer.mozilla.org/en-US/docs/Web/API/WindowEventHandlers/onbeforeunload> | Javascript: onrefresh or onreload? | [
"",
"javascript",
""
] |
I have a DLL that I need to access methods from.
In most cases like this I just use [DllImport] to access methods from unmanaged assemblies, but the problem with that in this situation is that it requires the path to the DLL at instantiation time, so a constant string.
This particular DLL is one that gets installed with my application and I can't guarantee where it will be after the program is installed (I'd rather not put it somewhere static like %SystemRoot%).
So is there a way in C# that I can declare and use a method from a DLL at runtime with a variable path?
Any ideas or suggestions would be greatly appreciated! | This is a bit of hack, but since you say that you can find the path to the dll at runtime, why not copy it to your current working directory before you use any of the functions? That way, the dll will exist next to your exe and will be found by LoadLibrary. No need for any additional path in your DllImport.
The only other way to use a method from a dynamic path is to do this:
1) Do the necessary P/Invoke signatures for [LoadLibrary](http://msdn.microsoft.com/en-us/library/ms684175(VS.85).aspx) & [GetProcAddress](http://msdn.microsoft.com/en-us/library/ms683212(VS.85).aspx)
2) Load the library from the desired path (LoadLibrary)
3) Find the desired function (GetProcAddress)
4) Cast the pointer to a delegate [Marshal.GetDelegateForFunctionPointer](http://msdn.microsoft.com/en-us/library/system.runtime.interopservices.marshal.getdelegateforfunctionpointer(vs.80).aspx)
5) Invoke it.
Of course, you will need to declare a delegate for each function you want to "import" in this way since you have to cast the pointer to a delegate. | Don't use a path at all. Windows uses a default method of searching for DLLs when trying to dynamically or statically load a function from it.
The exact search logic is documented at MSDN in the docs for [LoadLibrary](http://msdn.microsoft.com/en-us/library/ms684175(VS.85).aspx) - basically, if the DLL is just used by your app, put in the same folder as your application during the install and don't worry about it. If it's a commonly used DLL, put it somewhere in the folder structure searched by LoadLibrary() and it'll get found. | Call function from DLL with non-static path | [
"",
"c#",
".net",
"dll",
"interop",
"runtime",
""
] |
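The load, resolve, then cast sequence in the accepted answer above (LoadLibrary, GetProcAddress, delegate) is the same pattern Python's ctypes exposes, sketched here against the already-loaded C runtime on a POSIX system:

```python
import ctypes

# 1) load a library at a runtime-chosen path (None means the running
#    process's C runtime; a real case would pass a computed file path)
libc = ctypes.CDLL(None)

# 2) resolve the symbol and declare its signature (GetProcAddress analogue)
strlen = libc.strlen
strlen.restype = ctypes.c_size_t
strlen.argtypes = [ctypes.c_char_p]

# 3) invoke through the resolved function pointer
print(strlen(b"hello"))
```

The path passed to the loader is an ordinary runtime value, which is precisely what `[DllImport]`'s compile-time constant string cannot be.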
I used `List<T>.First()` as well as `List<T>.Last()` in my VS 2008 C# project, but when I downgraded to 2005 I got errors like this:
'System.Collections.Generic.List' does not contain a definition for 'First'
As far as I understand, if still there are **Find()** and **FindLast()** methods there MUST be a very very simple way to get iterators to the first and last values, am I right? But I'm stuck with this and can't find anything useful :( | `First()` and `Last()` are part of LINQ which is why they're not being found in your VS 2005 project.
If you're using a `List<T>` it's really, *really* easy to find the first and last values, assuming the list is non-empty:
```
T first = list[0];
T last = list[list.Count-1];
```
If you really need to use iterators you can implement the LINQ methods really easily:
```
public static T First<T>(IEnumerable<T> source)
{
foreach (T element in source)
{
return element;
}
throw new InvalidOperationException("Empty list");
}
public static T Last<T>(IEnumerable<T> source)
{
T last = default(T);
bool gotAny = false;
foreach (T element in source)
{
last = element;
gotAny = true;
}
if (!gotAny)
{
throw new InvalidOperationException("Empty list");
}
return last;
}
```
(I suspect the real implementation of `Last` checks whether `source` is an `IList<T>` or not and returns `list[list.Count-1]` if so, to avoid having to iterate through the whole collection.)
As pointed out in the comments, these aren't extension methods - you'd write:
```
// Assuming the method is in a CollectionHelpers class.
Foo first = CollectionHelpers.First(list);
```
instead of
```
Foo first = list.First();
```
but the effect is the same. | There are 2 problems here
1. Both First and Last are extension methods. The compiler included in VS2005 does not support extension methods, so there is no way to bind to them
2. The First and Last methods are included in the 3.5 framework which is not usable from VS2005. | Visual studio 2005: List<T>.First() List<T>.Last() methods in C#? | [
"",
"c#",
".net",
"visual-studio-2005",
""
] |
If I have this class:
```
<?php
class Model
{
var $db;
function Model()
{
$this->db=new Db_Class();
}
}
?>
```
and a second class that extended the parent class:
```
<?php
class LessonModel extends Model
{
public function LessonModel()
{
//code here
}
public function getTitle($id)
{
$this->db->setTable('myTable');
return $this->db->get('title',$id);
}
}
?>
```
Is it safe to assume that the `$LessonModel->db` field would have been instantiated by the parent `Model` class's constructor, or do I need to run it manually using something like `parent::Model();`? | You cannot assume that the parent constructor has been called because you have overridden this in your subclass. You would need to call `parent::Model()` as you suggest. If you change the class you are inheriting from you would obviously need to change this.
If you are using PHP5 then you can name your constructors `__construct()`. This has the benefit of letting you easily call a parent constructor by doing `parent::__construct()` in any derived class without specifying the parent class name. You can then rearrange your classes' inheritance with less hassle and less danger of introducing obscure bugs. | In the given example, you can assume that.
However, if your child class defines a constructor, PHP will not implicitly call it's parent constructor.
In order to do that, call:
```
parent::__construct();
``` | Is Superclass's constructor run inside child constructor in PHP? | [
"",
"php",
"oop",
""
] |
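Python happens to have exactly the same rule as PHP here, which makes it a handy way to observe the behavior: if a subclass defines its own constructor, the parent's is not run unless chained explicitly:

```python
class Model:
    def __init__(self):
        self.db = "db connection"

class LessonModelNoChain(Model):
    def __init__(self):
        pass  # parent __init__ is NOT called implicitly

class LessonModel(Model):
    def __init__(self):
        super().__init__()  # explicit chain, like parent::__construct()

print(hasattr(LessonModelNoChain(), "db"), hasattr(LessonModel(), "db"))
```

Without the explicit chain, the `db` field simply never exists, which is the runtime error the question is heading toward.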
I have a table which may have anywhere from 10,000 to 10 million rows during the life of the application. This table contains NACSZ info, among other things, and I need to write a query that checks for matching NACSZ like so:
```
select
*
from
Profiles
where
FirstName = 'chris' and
LastName = 'test' and
Address1 = '123 main st' and
City = 'chicago' and
State = 'il' and
Zip = '11111'
```
I'm trying to optimize the table for this query which will be run inside an "if exists()" construct, and not having much luck. With no index, and ~110,000 rows of test data, I get:
> Table 'Profiles'. Scan count 1,
> logical reads 2021, physical reads 0,
> read-ahead reads 0, lob logical reads
> 0, lob physical reads 0, lob
> read-ahead reads 0.
>
> (1 row(s) affected)
>
> SQL Server Execution Times: CPU
> time = 16 ms, elapsed time = 70 ms.
>
> SQL Server Execution Times: CPU
> time = 0 ms, elapsed time = 1 ms.
The execution plan indicates a clustered index scan on the primary key. I thought about adding an index for all the values, but that would create a rather large key and I'd like to try to avoid that if possible. My next thought was to index something that would be reasonably different between rows, so I tried indexing on first name (though I could use address line 1, or zip, for example), so I created the index:
```
create index ix_profiles_firstName on profiles(FirstName)
```
Now, when I run the same query, I get
> Table 'Profiles'. Scan count 1,
> logical reads 171, physical reads 0,
> read-ahead reads 0, lob logical reads
> 0, lob physical reads 0, lob
> read-ahead reads 0.
>
> (1 row(s) affected)
>
> SQL Server Execution Times: CPU
> time = 0 ms, elapsed time = 52 ms.
>
> SQL Server Execution Times: CPU
> time = 0 ms, elapsed time = 1 ms.
Obviously, indexing first name made a HUGE difference. My question is, how do I decide whether I should index first name vs last name vs address vs. zip code? Is there a command I can run over my sample data that will tell me uniqueness of values in each field? My understanding is that I should try to index the column with the most uniqueness to make the index work best, correct? | For your query, you should create a composite index on all columns: `(FirstName, LastName, address1, city, state, zip)`
If you want a certain index to be used in `SQL Server`, issue:
```
SELECT *
FROM Profiles WITH (INDEX (index_name))
WHERE
FirstName = 'chris' and
LastName = 'test' and
Address1 = '123 main st' and
City = 'chicago' and
State = 'il' and
Zip = '11111'
```
> My question is, how do I decide whether I should index first name vs last name vs address vs. zip code?
Index all these values you are filtering on.
Note that you can efficiently filter on the first columns from the index, like:
```
SELECT *
FROM Profiles
WHERE FirstName = 'chris'
```
will use the index to search on `FirstName`,
```
SELECT *
FROM Profiles
WHERE FirstName = 'chris'
AND LastName = 'test'
```
will use the index to search on both `FirstName` and `LastName`,
```
SELECT *
FROM Profiles
WHERE FirstName = 'chris'
AND City = 'chicago'
```
will use the index to search only on `FirstName` (you don't filter on `LastName`, there is a gap, and the index cannot be used to search on other columns)
> Is there a command I can run over my sample data that will tell me uniqueness of values in each field?
```
SELECT COUNT(DISTINCT FirstName) * 1.0 / COUNT(*)
FROM Profiles
```
will show you the selectivity of `FirstName` (the `* 1.0` avoids integer division in SQL Server).
The higher this value, the more unique the column's values and the more efficient an index on it will be.
> My understanding is that I should try to index the column with the most uniqueness to make the index work best, correct?
Yes.
Again, in your case you should index all columns. Most uniqueness is for sure on all columns taken together. | > My question is, how do I decide whether I should index first name vs last name vs address vs. zip code?
Gather all queries you intend to use (if this is the only one, you're done). Then turn over the queries as a workload to the Index Tuning Wizard, and look at the recommendations.
> My understanding is that I should try to index the column with the most uniqueness to make the index work best, correct?
The more unique an index is, the fewer results will be looked up out of the actual table.
The narrower the index is, the faster it can be read. (this rule shows why a composite index on all criteria columns is no good). | SQL Server index question - address lookup | [
"",
"sql",
"sql-server",
""
] |
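The uniqueness measure from the accepted answer above, COUNT(DISTINCT col) / COUNT(*), is simple enough to sketch over sample rows:

```python
def selectivity(rows, column):
    """distinct values / total rows; closer to 1.0 = more selective column."""
    values = [row[column] for row in rows]
    return len(set(values)) / len(values)

rows = [
    {"FirstName": "chris", "State": "il"},
    {"FirstName": "pat",   "State": "il"},
    {"FirstName": "sam",   "State": "il"},
    {"FirstName": "chris", "State": "ny"},
]
print(selectivity(rows, "FirstName"), selectivity(rows, "State"))
```

Here `FirstName` (0.75) is more selective than `State` (0.5), so an index leading with `FirstName` narrows the search faster; a composite over all filtered columns is still the best fit for the query.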
Do you know the answer to the following question?
> Let us say that MyMethod() is declared
> as partial method in MyPartialClass in
> MyPartialClass.cs. I have also
> provided body of MyMethod() in
> MyPartialClass in MyPartialClass2.cs.
> I use a “Magic”
> code generator which has actually
> generated MyPartialClass.cs, let us
> say based on some DB schema. Some
> innocent guy changes schema for good
> and then runs “Magic”.
> MyPartialClass.cs is re-generated but
> with MyMethod2() instead of MyMethod()
> declaration. Think of me. I am
> thinking that I have implemented
> MyMethod() which is used by “Magic”
> but in reality, “Magic” is using
> MyMethod2(). C# compiler does not tell
> me that I have partial method
> implemented without any declaration
> and my implementation is not used at
> all!
>
> Any solution?
I think it is a problem without an answer.
**EDIT** I got an answer :-). I had a typo in my code and that is why the compiler was not flagging an error. Jon has already pointed that out. | You should get error CS0759. Test case:
```
partial class MyClass
{
partial void MyMethod()
{
Console.WriteLine("Ow");
}
}
partial class MyClass
{
partial void MyMethod2();
}
```
Compilation results:
```
Test.cs(6,18): error CS0759: No defining declaration found for implementing
declaration of partial method 'MyClass.MyMethod()'
```
Does that not do what you want it to? | In short, no; that is the point of partial methods - the declaring code **doesn't** need to know whether an implementation is provided or not.
Of course - you could just not declare the partial method: consume it *assuming* it exists; if you don't provide it, the compiler will complain of a missing method.
There is a *hacky* way to check at runtime (with `partial` methods), which is to have the other half update a `ref` variable:
```
partial void Foo(ref int chk);
partial void Foo(ref int chk) { chk++; }
```
(and verify it changes) - but in general, partial methods are designed to not know if they are called.
Another approach is a base-class with an `abstract` method - then it is forced by the compiler to be implemented. | Problem with partial method C# 3.0 | [
"",
"c#",
".net",
"c#-3.0",
""
] |
Under `IPv4` I have been parsing the string representation of IP addresses to `Int32` and storing them as `INT` in the `SQL Server`.
Now, with `IPv6` I'm trying to find out if there's a standard or accepted way to parse the string representation of `IPv6` to two `Int64` using `C#`?
Also how are people storing those values in the `SQL Server` - as two fields of `BIGINT`? | Just as an IPv4 address is really a 32 bit number, an IPv6 address is really a 128 bit number. There are different string representations of the addresses, but the actual address is the number, not the string.
So, you don't convert an IP address to a number, you parse a string representation of the address into the actual address.
Not even a `decimal` can hold a 128 bit number, so that leaves three obvious alternatives:
* store the numeric value split into two `bigint` fields
* store a string representation of the address in a `varchar` field
* store the numeric value in a 16 byte `binary` field
Neither is as convenient as storing an IPv4 address in an `int`, so you have to consider their limitations against what you need to do with the addresses. | The simplest route is to get the framework to do this for you. Use [`IPAddress.Parse`](http://msdn.microsoft.com/library/system.net.ipaddress.parse) to parse the address, then [`IPAddress.GetAddressBytes`](http://msdn.microsoft.com/library/system.net.ipaddress.getaddressbytes) to get the "number" as byte[].
Finally, divide the array into the first and second 8 bytes for conversion to two Int64s, e.g. by creating a `MemoryStream` over the byte array and then reading via a `BinaryReader`.
This avoids needing to understand all the available short cut representations for IPv6 addresses. | Formatting IPv6 as an int in C# and storing it in SQL Server | [
"",
"c#",
"sql-server",
"algorithm",
"ipv6",
""
] |
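Splitting the 128-bit value into two 64-bit halves, as discussed above, is a shift and a mask once the address is parsed. A Python sketch (note the halves are unsigned here; storing them in signed BIGINT columns would need a reinterpretation step):

```python
import ipaddress

MASK64 = (1 << 64) - 1

def ipv6_to_halves(text):
    """Parse an IPv6 string and return its (high, low) 64-bit halves."""
    n = int(ipaddress.IPv6Address(text))
    return n >> 64, n & MASK64

high, low = ipv6_to_halves("2001:db8::1")
print(hex(high), hex(low))
```

Reassembling `(high << 64) | low` recovers the original 128-bit number, so the two-column representation is lossless.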
Given a list of objects, I need to return a list consisting of the objects and the running sum of a property over all objects in the list seen so far.
More generally given
```
var input = new int[] { 1, 2, 3 };
```
I would like to have the output of
```
// does not compile but did not want to include extra classes.
var output = { (1,1), (2,3), (3,6) };
```
What is the "right" functional way to do this? I can do it in a standard iterative approach of course but I am looking for how this would be done in a functional, lazy way.
Thanks | in functional terms this is a combination of :
### zip
take two sequences and create a sequence of tuples of the elements
and
### map
Take a function `f` and a sequence and return a new sequence which is f(x) for each x in the original sequence
The zip is [trivial in c# 4.0](http://bartdesmet.net/blogs/bart/archive/2008/11/03/c-4-0-feature-focus-part-3-intermezzo-linq-s-new-zip-operator.aspx)
Taking the simplistic implementation from there we have
```
static class Enumerable
{
public static IEnumerable<TResult> Zip<TFirst, TSecond, TResult>(
this IEnumerable<TFirst> first,
IEnumerable<TSecond> second,
Func<TFirst, TSecond, TResult> func)
{
var ie1 = first.GetEnumerator();
var ie2 = second.GetEnumerator();
while (ie1.MoveNext() && ie2.MoveNext())
yield return func(ie1.Current, ie2.Current);
}
}
```
We then need the map. We already have it, it's what we call Select in c#
```
IEnumerable<int> input = new[] { 1, 2, 3, 4 };
int a = 0;
var accumulate = input.Select(x =>
{
a += x;
return a;
});
```
But it is safer to bake this into its own method (no currying in c#) and allow support for arbitrary types/accumulations.
```
static class Enumerable
{
public static IEnumerable<T> SelectAccumulate<T>(
this IEnumerable<T> seq,
Func<T,T,T> accumulator)
{
var e = seq.GetEnumerator();
T t = default(T);
while (e.MoveNext())
{
t = accumulator(t, e.Current);
yield return t;
}
}
}
```
Then we can put them together like so
```
var input = new int[] {1,2,3};
var mapsum = input.Zip(
input.SelectAccumulate((x,y) => x+y),
(a,b) => new {a,b});
```
This will iterate over the sequence twice, but is more general. You could choose to do the accumulator yourself within a standard select and a simple closure but it is no longer so useful as a 'building block' which is one of the driving forces behind functional programming.
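As a side note (not part of the C# answer): in a language with a built-in scan, the whole zip-plus-running-sum composition collapses to a couple of lines, e.g. with Python's `itertools.accumulate`:

```python
from itertools import accumulate

values = [1, 2, 3]
pairs = list(zip(values, accumulate(values)))  # accumulate is the lazy running sum
# pairs == [(1, 1), (2, 3), (3, 6)]
```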
Tuple support is a pain except within a method as the anonymous types don't traverse method boundaries without quite a bit of hassle. A few basic tuples should be included in c# 4.0. Assuming a tuple class/struct called `Pair<T,U>`, you could do:
```
public static IEnumerable<Pair<T,T>> ZipMapAccumulate<T>(
this IEnumerable<T> input,
Func<T,T,T> accumulator)
{
return input.Zip(
input.SelectAccumulate((x,y) => accumulator (x,y)),
(a,b) => new Pair<T,T>(a,b));
}
//get an int specific one
public static Func<IEnumerable<int>, IEnumerable<Pair<int,int>>>
ZipMapSum()
{
return input => Enumerable.ZipMapAccumulate(
input,
(i,j) => i + j);
}
```
Where c# linq becomes much more cumbersome than languages like f# is the poor support for operators, currying and tuples unless you keep everything inside one function and 'reconstruct it' each and every time for each type. | I think this is the shortest approach:
```
int sum = 0;
var result = input.Select(i => new { i, S = sum += i });
``` | How to get list of intermediate sums in a functional way? Using LINQ? | [
"",
"c#",
"linq",
"functional-programming",
""
] |
I'm trying to download an image from a url.
The process I wrote works for everyone except for ONE content provider that we're dealing with.
When I access their JPGs via Firefox, everything looks kosher (happy Passover, btw). However, when I use my process I either:
A) get a 404
or
B) in the debugger when I set a break point at the URL line (URL url = new URL(str);)
then after the connection I DO get a file but it's not a .jpg, but rather some HTML that they're producing with generic links and stuff. I don't see a redirect code, though! It comes back as 200.
Here's my code...
```
URL url = new URL(urlString);
URLConnection uc = url.openConnection();
String val = uc.getHeaderField(0);
System.out.println("FOUND OBJECT OF TYPE: " + val);
if(!val.contains("200")){
//problem
}
else{
is = uc.getInputStream();
}
```
Has anyone seen anything of this nature? I'm thinking maybe it's some mime type issue, but that's just a total guess... I'm completely stumped. | ```
if(!val.contains("200")) // ...
```
First of all, I would suggest you use this useful class called
**HttpURLConnection**, which provides the method **getResponseCode()**
Searching the whole data for some '200' implies
1. performance issues, and
2. inconsistency (binary files **can** contain some '200') | Maybe the site is just using some kind of protection to prevent others from hotlinking their images or to disallow mass downloads.
They usually check either the HTTP referrer (it must be from their own domain), or the user agent (must be a browser, not a download manager). Set both and try it again. | strange problem with java.net.URL and java.net.URLConnection | [
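If you want to test that theory quickly, here is a hedged Python sketch that sets both headers before fetching (the header values are illustrative):

```python
import urllib.request

def browser_like_request(url, referer):
    # Some hosts return a generic HTML page (with status 200!) when these
    # headers don't look like a normal browser; set both before fetching.
    return urllib.request.Request(url, headers={
        "User-Agent": "Mozilla/5.0",
        "Referer": referer,
    })

req = browser_like_request("http://example.com/a.jpg", "http://example.com/")
# pass req to urllib.request.urlopen(req) to perform the actual download
```

If the browser-like request returns the JPG while the plain one returns HTML, you have found the protection.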
"",
"java",
"url",
""
] |
When I'm trying to connect to an Exchange 2007 server over IMAP in PHP5 I get the following error message.
```
Kerberos error: No credentials cache found (try running kinit) for smtp.domain01.net
```
I was wondering if somebody found a way around this issue?
Related info:
<http://bugs.php.net/bug.php?id=33500> | Just stumbled upon this question and thought I'd answer this one since no one else has. The following page gives a very direct and accurate answer on how to solve the problem: <http://forums.kayako.com/threads/fix-kerberos-error-on-email-parser.29626/>
Basically what's happening here (according to <http://social.technet.microsoft.com/Forums/en-US/exchangesvradmin/thread/43aef3d6-3e91-4e41-a788-ae073393ad37/>) is that Microsoft Exchange 2007 broadcasts malformed Kerberos tokens, which causes the PHP IMAP driver to kill the stream. Some other sources claim that this is [a PHP bug](https://bugs.php.net/bug.php?id=33500); either way, the solution is to re-compile the PHP-IMAP extension with Kerberos disabled. This will force PHP IMAP to use plain-text authentication and will fix your problem.
Hope this helps. | Exchange does have the IMAP protocol enabled by default. Even when it does, the Exchange implementation of IMAP may or may not really be IMAP. That said, [this Technet thread](http://social.technet.microsoft.com/Forums/en-US/exchangesvrtransport/thread/1a84a06a-f1c8-40b4-ace8-1e264f218aa1) may apply to your situation. | Problem connecting to a Exchange 2007 server in PHP5 with imap_open | [
"",
"php",
"exchange-server-2007",
"imap-open",
""
] |
I just found out about [BuddyPress](http://buddypress.org/) (a collection of plugins that convert a [WordPress MU](http://mu.wordpress.org/) install into a social network) and now I was wondering if there are any Digg-like voting plugins for WordPress. This would eventually integrate into a BuddyPress website, where the site members would submit, vote and comment on stories (much like Digg).
I have a feeling I will end up having to build this from scratch, but since the site will be built on WordPress, I was wondering if there were any plugins already available that add this functionality. So far I have come up empty in my search. I did find a Wordpress blog that had this functionality [WpVote](http://wpvote.com/). It even creates thumbnails of the story webpage automatically (I'm assuming) using [websnapr](http://www.websnapr.com/). I browsed through the page source and didn't seem to find any hints of a WP plugin that they are using. | BuddyPress Links is what you're looking for:
<http://wordpress.org/extend/plugins/buddypress-links/> | I've successfully used [TDO Mini Forms](http://wordpress.org/extend/plugins/tdo-mini-forms/) for user submissions combined with [Vote it up](http://www.tevine.com/projects/voteitup/) to build a digg-style site.
You can create a "top votes" page using the *MostVotedAllTime\_Widget()* function from Vote it up, or use the SortVotes() or GetVoteArray() functions to build and style your own top votes list in a customised page template.
You'll find the full list of available functions in /vote-it-up/votingfunctions.php | Wordpress Digg-Like Voting System Plugin | [
"",
"php",
"wordpress",
""
] |
So I am thinking about building a hobby project, one off kind of thing, just to brush up on my programming/design.
It's basically a multi threaded web spider, updating the same data structure object->int.
So it is definitely overkill to use a database for this, and the only thing I could think of is a thread-safe singleton used to contain my data structure. <http://web.archive.org/web/20121106190537/http://www.ibm.com/developerworks/java/library/j-dcl/index.html>
Is there a different approach I should look in to? | Double-checked locking has been proven to be incorrect and flawed (as least in Java). Do a search or look at [Wikipedia's entry](http://en.wikipedia.org/wiki/Double_checked_locking_pattern) for the exact reason.
First and foremost is program correctness. If your code is not thread-safe (in a multi-threaded environment) then it's broken. Correctness comes first before performance optimization.
To be correct you'll have to synchronize the whole `getInstance` method
```
public static synchronized Singleton getInstance() {
if (instance==null) ...
}
```
or statically initialize it
```
private static final Singleton INSTANCE = new Singleton();
``` | Using lazy initialization for the database in a web crawler is probably not worthwhile. Lazy initialization adds complexity and an ongoing speed hit. One case where it is justified is when there is a good chance the data will never be needed. Also, in an interactive application, it can be used to reduce startup time and give the *illusion* of speed.
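As an aside, the "synchronize the whole accessor" fix translates directly to other languages; in Python it is an explicit lock around both the check and the construction (an illustrative sketch, not the Java code above):

```python
import threading

class Singleton:
    _instance = None
    _lock = threading.Lock()

    @classmethod
    def instance(cls):
        # The lock spans both the check and the construction, which is the
        # safe equivalent of declaring the whole getInstance() synchronized.
        with cls._lock:
            if cls._instance is None:
                cls._instance = cls()
            return cls._instance
```

Every caller pays for the lock, which is exactly why eager initialization is simpler when the instance is always needed.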
For a non-interactive application like a web-crawler, which will surely need its database to exist right away, lazy initialization is a poor fit.
On the other hand, a web-crawler is easily parallelizable, and will benefit greatly from being multi-threaded. Using it as an exercise to master the `java.util.concurrent` library would be extremely worthwhile. Specifically, look at [`ConcurrentHashMap`](http://java.sun.com/javase/6/docs/api/java/util/concurrent/ConcurrentHashMap.html) and [`ConcurrentSkipListMap`,](http://java.sun.com/javase/6/docs/api/java/util/concurrent/ConcurrentSkipListMap.html) which will allow multiple threads to read and update a shared map.
When you get rid of lazy initialization, the simplest Singleton pattern is something like this:
```
class Singleton {
static final Singleton INSTANCE = new Singleton();
private Singleton() { }
...
}
```
The keyword `final` is the key here. Even if you provide a `static` "getter" for the singleton rather than allowing direct field access, making the singleton `final` helps to ensure correctness and allows more aggressive optimization by the JIT compiler. | proper usage of synchronized singleton? | [
"",
"java",
"multithreading",
"singleton",
"synchronized",
""
] |
I'm working on a PostgreSQL 8.1 SQL script which needs to delete a large number of rows from a table.
Let's say the table I need to delete from is Employees (~260K rows).
It has primary key named id.
The rows I need to delete from this table are stored in a separate temporary table called EmployeesToDelete (~10K records) with a foreign key reference to Employees.id called employee\_id.
Is there an efficient way to do this?
At first, I thought of the following:
```
DELETE
FROM Employees
WHERE id IN
(
SELECT employee_id
FROM EmployeesToDelete
)
```
But I heard that using the "IN" clause and subqueries can be inefficient, especially with larger tables.
I've looked at the PostgreSQL 8.1 documentation, and there's mention of
DELETE FROM ... USING but it doesn't have examples so I'm not sure how to use it.
I'm wondering if the following works and is more efficient?
```
DELETE
FROM Employees
USING Employees e
INNER JOIN
EmployeesToDelete ed
ON e.id = ed.employee_id
```
Your comments are greatly appreciated.
Edit:
I ran EXPLAIN ANALYZE and the weird thing is that the first DELETE ran pretty quickly (within seconds), while the second DELETE took so long (over 20 min) I eventually cancelled it.
Adding an index to the temp table helped the performance quite a bit.
Here's a query plan of the first DELETE for anyone interested:
```
Hash Join (cost=184.64..7854.69 rows=256482 width=6) (actual time=54.089..660.788 rows=27295 loops=1)
Hash Cond: ("outer".id = "inner".employee_id)
-> Seq Scan on Employees (cost=0.00..3822.82 rows=256482 width=10) (actual time=15.218..351.978 rows=256482 loops=1)
-> Hash (cost=184.14..184.14 rows=200 width=4) (actual time=38.807..38.807 rows=10731 loops=1)
-> HashAggregate (cost=182.14..184.14 rows=200 width=4) (actual time=19.801..28.773 rows=10731 loops=1)
-> Seq Scan on EmployeesToDelete (cost=0.00..155.31 rows=10731 width=4) (actual time=0.005..9.062 rows=10731 loops=1)
Total runtime: 935.316 ms
(7 rows)
```
At this point, I'll stick with the first DELETE unless I can find a better way of writing it. | Don't guess, measure. Try the various methods and see which one is the shortest to execute. Also, use [EXPLAIN](http://www.postgresql.org/docs/8.1/interactive/sql-explain.html) to know what PostgreSQL will do and see where you can optimize. Very few PostgreSQL users are able to guess **correctly** the fastest query... | > I'm wondering if the following works and is more efficient?
```
DELETE
FROM Employees e
USING EmployeesToDelete ed
WHERE id = ed.employee_id;
```
This totally depend on your index selectivity.
`PostgreSQL` tends to employ `MERGE IN JOIN` for `IN` predicates, which has stable execution time.
It's not affected by how many rows satisfy this condition, provided that you already have an ordered resultset.
An ordered resultset requires either a sort operation or an index. Full index traversal is very inefficient in `PostgreSQL` compared to `SEQ SCAN`.
The `JOIN` predicate, on the other hand, may benefit from using `NESTED LOOPS` if your index is very selective, and from using `HASH JOIN` if it's unselective.
`PostgreSQL` should select the right one by estimating the row count.
Since you have `10k` rows against `260K` rows, I expect `HASH JOIN` to be more efficient, and you should try to build a plan on a `DELETE ... USING` query.
To make sure, please post execution plan for both queries. | How to efficiently delete rows from a Postgresql 8.1 table? | [
"",
"sql",
"database",
"postgresql",
""
] |
I have a class called communicator. This class is a listener of a thread receiving events from another program. Also this class has a method called refresh that sends an action to the program and waits for the response that comes through the listener.
Both methods are in the same class but called by different threads.
```
public void processRefreshEvent(ManagerEvent event){
//processing event
//...
//I'm done
notify();
}
public synchronized void refresh() throws Exception {
isRefreshing = true;
try {
manager.send(new refresh());
} catch (ManagerException e) {
isRefreshing = false;
}
try {
wait(5000);
} catch (InterruptedException e) {
} finally{
isRefreshing = false;
}
}
```
when executing the code above I get the follow exception:
> ```
> java.lang.IllegalMonitorStateException: current thread not owner
> at java.lang.Object.wait(Native Method)
> at Communicator.refresh(Communicator.java:203)
> ...
> ```
What is the proper way to "wait" for another thread to finish? Thanks.
```
public void processRefreshEvent(ManagerEvent event){
//processing event
//...
//I'm done
synchronized(this) {
notify(); // you are basically notifying any thread who has blocked
// on this monitor - in our case, the instance of this object
}
}
public synchronized void refresh() throws Exception {
isRefreshing = true;
try {
manager.send(new refresh());
} catch (ManagerException e) {
isRefreshing = false;
}
try {
synchronized(this) {
wait(5000); // wait will give up the monitor
}
} catch (InterruptedException e) {
} finally{
isRefreshing = false;
}
}
``` | The methods [`wait()`](http://java.sun.com/j2se/1.5.0/docs/api/java/lang/Object.html#wait()) and [`notify()`](http://java.sun.com/j2se/1.5.0/docs/api/java/lang/Object.html#notify()) may only be called from a thread that is currently synchronized on their instance.
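The same rule applies in other languages: Python's `threading.Condition`, for instance, raises a `RuntimeError` if you call `notify()` or `wait()` without holding the lock. A minimal sketch of the correct pattern (illustrative, not the Java fix itself):

```python
import threading

cond = threading.Condition()
results = []

def on_event():
    with cond:                  # acquire the monitor before calling notify()
        results.append("refreshed")
        cond.notify()

worker = threading.Thread(target=on_event)
with cond:                      # acquire the monitor before calling wait()
    worker.start()
    cond.wait(timeout=5)        # releases the lock while waiting
worker.join()
```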
Declare "processRefreshEvent" `synchronized`, or better yet, just the block of code that modifies the state that is used by the `refresh` method, together with the `notify()` call.
```
public void processRefreshEvent(ManagerEvent event){
// processing event
synchronized (this) {
// modify shared state with results of processing.
notify();
}
}
``` | How to deal with java threads | [
"",
"java",
"multithreading",
""
] |
I am working on some Python socket code that's using the [`socket.fromfd()`](http://docs.python.org/library/socket.html#socket.fromfd) function.
However, this method is not available on all platforms, so I am writing some fallback code in the case that the method is not defined.
**What's the best way to determine if a method is defined at runtime?** Is the following sufficient or is there a better idiom?
```
if 'fromfd' in dir(socket):
sock = socket.fromfd(...)
else:
sock = socket.socket(...)
```
I'm slightly concerned that the documentation for [`dir()`](http://docs.python.org/library/functions.html#dir) seems to discourage its use. Would [`getattr()`](http://docs.python.org/library/functions.html#getattr) be a better choice, as in:
```
if getattr(socket, 'fromfd', None) is not None:
sock = socket.fromfd(...)
else:
sock = socket.socket(...)
```
Thoughts?
**EDIT** As [Paolo](https://stackoverflow.com/questions/763971/determine-if-a-function-is-available-in-a-python-module/763975#763975) pointed out, this question is [nearly a duplicate](https://stackoverflow.com/questions/610883) of a question about determining attribute presence. However, since the terminology used is disjoint (lk's *"object has an attribute"* vs my *"module has a function"*) it may be helpful to preserve this question for searchability unless the two can be combined. | [`hasattr()`](http://docs.python.org/library/functions.html#hasattr) is the best choice. Go with that. :)
```
if hasattr(socket, 'fromfd'):
pass
else:
pass
```
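It is easy to check that the idioms discussed in this question agree with each other:

```python
import socket

# hasattr(), the getattr() sentinel, and try/except all answer the same question:
via_hasattr = hasattr(socket, "fromfd")
via_getattr = getattr(socket, "fromfd", None) is not None
try:
    socket.fromfd               # just touch the attribute
    via_try = True
except AttributeError:
    via_try = False

assert via_hasattr == via_getattr == via_try
```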
**EDIT**: Actually, according to the docs all hasattr is doing is calling getattr and catching the exception. So if you want to cut out the middle man you should go with marcog's answer.
**EDIT**: I also just realized this question is actually a [duplicate](https://stackoverflow.com/questions/610883/how-to-know-if-an-object-has-an-attribute-in-python/610893). One of the answers there discusses the merits of the two options you have: catching the exception ("easier to ask for forgiveness than permission") or simply checking before hand ("look before you leap"). Honestly, I am more of the latter, but it seems like the Python community leans towards the former school of thought. | Or simply use a try..except block:
```
try:
sock = socket.fromfd(...)
except AttributeError:
sock = socket.socket(...)
``` | Determine if a function is available in a Python module | [
"",
"python",
""
] |
I'm working on a Bruce Eckel exercise on how to take a keyboard command line input and put it into an array. It's supposed to take 3 separate inputs and place them into an array and print them back out.
Here is the code I have so far:
```
//: object/PushtoArray
import java.io.*;
import java.util.*;
class PushtoArray {
public static void main(String[] args) {
String[] s = new String[3];
int i = 0;
//initialize console
Console c = System.console();
while ( i <= 2 ) {
//prompt entry:
//System.out.println("Enter entry #" + i + ": ");
//readin
s[i] = c.readLine("enter entry #" + i + ": ");
//increment counter
i++;
} //end while
//reset counter
i = 0;
//print out array
while ( i <= 2 ) {
System.out.println(s[i]);
i++;
} //end while
}//end main
} //end class
```
**UPDATE:** Now I get a different error:
```
PushtoArray.java:15: readline(boolean) in java.io.Console cannot be applied to (java.lang.string)
s[i]= c.readline("Enter entry #" + i + ": ");
```
I'm trying to read in from the command prompt but it isn't prompting at all. It compiles correctly when I javac the java file.
Am I using the wrong function? Should I be using a push method instead of an assignment? | Are you sure you're running `javac` on the right file? There is no way that file could compile.
1. You need a semicolon on the import statement:
```
import java.io.*;
```
2. You forgot an end brace:
```
} // end main()
```
3. There is no such method as `readline()`. You need to read from `System.in`. The easiest way is to make a `Scanner` before the loop, then read from it in the loop. See the Javadocs for [`Scanner`](http://java.sun.com/javase/6/docs/api/java/util/Scanner.html) for an example.
**Edit:** See the [Java Tutorials](http://java.sun.com/docs/books/tutorial/essential/io/cl.html) for more on reading from the command line. The [`Console` class](http://java.sun.com/javase/6/docs/api/java/io/Console.html) (introduced in Java 6) looks like it has the `readLine()` method that you wanted.
**Edit 2:** You need to capitalize Line. You wrote "`readline`", but it should be "`readLine`". | Try
```
BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
```
and then in your loop:
```
in.readLine();
```
I would push the received lines into an ArrayList or similar. That way you can accept more/less than 3 lines of data (but that's only necessary if the number of lines you want to accept is variable)
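For what it's worth, the same collect-lines-into-a-growable-list idea sketched in Python (a `StringIO` stands in for the real stdin so the sketch is self-contained):

```python
import io

def read_entries(stream, n=3):
    # In real use, pass sys.stdin; a StringIO makes the sketch testable.
    return [stream.readline().rstrip("\n") for _ in range(n)]

entries = read_entries(io.StringIO("a\nb\nc\n"))
# entries == ['a', 'b', 'c']
```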
EDIT: I thought originally the NoSuchMethod exception was highlighting a scoping problem. Obviously not, and thx to those who pointed that out! | Turn commandline input into an array | [
"",
"java",
"arrays",
"command-line",
""
] |
I know ASP.NET does this automatically, but for some reason I can't seem to find the method.
Help anyone? Just as the title says.
If I do a Response.Redirect("~/Default.aspx"), it works, but I don't want to redirect the site. I just want the full URL.
Can anyone help me out? | For the "/#{path}/Default.aspx" part, use:
```
Page.ResolveUrl("~/Default.aspx")
```
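If you ever need to assemble the absolute form yourself, the composition is just scheme, host (plus optional port), and the app-rooted path; a Python sketch for illustration (the helper name is made up):

```python
from urllib.parse import urlunsplit

def absolute_url(scheme, host, path, port=None):
    # "~/Default.aspx" resolves to an app-rooted path such as "/default.aspx"
    netloc = f"{host}:{port}" if port else host
    return urlunsplit((scheme, netloc, path, "", ""))

absolute_url("http", "www.website.com", "/default.aspx")
# -> 'http://www.website.com/default.aspx'
```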
If you need more:
```
Request.Url.Scheme + "://" + Request.Url.Host + ":" + Request.Url.Port
``` | In a web control, the method is `ResolveUrl("~/Default.aspx")` | How to Convert "~/default.aspx" to "http://www.website.com/default.aspx" C#? | [
"",
"c#",
"methods",
""
] |
I'm looking for an example Spring MVC 2.5 web app that I can easily:
* Setup as a project in Eclipse
* Deploy to a local app server (using Ant/Maven)
There are a couple of example applications included with the Spring distribution ('petclinic' and 'jpetstore'), but they don't provide any Eclipse project files (or a way to generate them). They also seem a bit complicated for my needs, e.g. require a local database to be setup. | While not specifically an app you can download, [Developing a Spring Framework MVC application step-by-step](http://static.springframework.org/docs/Spring-MVC-step-by-step/) covers creating a spring application in Eclipse with an ant build script, complete with unit tests.
This meets the following requirements:
* Spring MVC 2.5
* Project in Eclipse
* Deploy to a local app server using Ant
* Uses HSQL (no need to install a local DB) | The easiest way to get up and running with a Spring MVC project is to use [SpringSource Tool Suite](http://www.springsource.com/products/springsource-tool-suite-download), which is another free IDE based on Eclipse.
The integration between the IDE and Spring/Maven is tight, and it comes with an application server already setup for you to deploy your web app.
Follow these steps to get a working Spring MVC web app.
1. To setup a new project in STS: Click File -> New -> Spring Template Project -> Spring MVC Project
2. To pull in dependencies and compile your project: Right click your new project -> Run As -> Maven install
3. To run your project inside an application server: Right click your new project -> Run As -> Run on Server -> SpringSource tc Server
If it works, you'll see a web page saying "Congratulations! You're running Spring!" | spring MVC sample web app | [
"",
"java",
"spring-mvc",
"spring",
"sample",
""
] |