Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
My first time is 12:10:20 PM and my second time is 7:10:20 AM of the same day. How can I find the difference between them?
My idea is to convert both times to seconds, find the difference, and then convert the result back to a time.
Is this a good approach, or is there something better? | Not necessarily the best way, but if you wish to use what's available on the system, difftime() and mktime() can help -
```
#include <time.h>
struct tm Time1 = { 0 }; // Make sure everything is initialized to start with.
/* 12:10:20 */
Time1.tm_hour = 12;
Time1.tm_min = 10;
Time1.tm_sec = 20;
/* Give the function a sane date to work with (01/01/2000, here). */
Time1.tm_mday = 1;
Time1.tm_mon = 0;
Time1.tm_year = 100;
struct tm Time2 = Time1; // Base Time2 on Time1, to get the same date...
/* 07:10:20 */
Time2.tm_hour = 7;
Time2.tm_min = 10;
Time2.tm_sec = 20;
/* Convert to time_t. */
time_t TimeT1 = mktime( &Time1 );
time_t TimeT2 = mktime( &Time2 );
/* Use difftime() to find the difference, in seconds. */
double Diff = difftime( TimeT1, TimeT2 );
``` | You want the [`difftime`](http://linux.die.net/man/3/difftime) function.
**Edit**
If you don't have `difftime` available, I would suggest just converting from whatever format you're in to seconds from the epoch, doing your calculations, and converting back to whatever format you need. The following group of functions can help you with all those conversions:
[asctime, ctime, gmtime, localtime, mktime, asctime\_r, ctime\_r, gmtime\_r, localtime\_r - transform date and time to broken-down time or ASCII](http://linux.die.net/man/3/localtime)
[timegm, timelocal - inverses for gmtime and localtime ( may not be available on all systems )](http://linux.die.net/man/3/timelocal) | How to find the difference between two times in c? | [
"",
"c++",
"c",
""
] |
Global scope allows you to use a variable in a function even though it was defined outside the function, e.g.
```
$a=1;
function $test(){
    echo $a;
}
//outputs 1
```
But why is it that if I assign an array to a variable, I cannot use it the same way?
```
$test = array(
    0=>'zero',
    1=>'one',
    2=>'two',
    3=>'three',
);

function doesntWork($something){
    echo "My favorite number is " . $test[$something];
}
//outputs My favorite number is 0
```
How do I pass the array into the function without having to re-copy the array into the function itself?
Any explanation would be appreciated.
Thanks | Script #1 is not correct. Neither does it work (function \*\*$\*\*test() {...}), nor does it output "1". And globals ARE bad practice; wrapping them in a class has nothing to do with it. *Classes are not a solution for random problems not related to object orientation.*
Just pass $a as a parameter:
```
<?php
$a=1;
function test($foo) {
    echo 'number ' . $foo;
};
test($a);
// -> "number 1".
?>
```
Script #2:
```
<?php
$test = array(
    0=>'zero',
    1=>'one',
    2=>'two',
    3=>'three',
);

function doesntWork($test, $something){
    echo "My favorite number is " . $test[$something];
}

doesntWork($test, mt_rand(0,3));
?>
``` | Your first example should not output 1. The only way to make variables global in a particular function is to use the global keyword like this:
```
function test() {
    global $a;
    echo $a;
}

function doesWork($something) {
    global $test;
    echo "My favorite number is " . $test[$something];
}
```
More info here: <https://www.php.net/manual/en/language.variables.scope.php> | How to call an array in a function? PHP | [
"",
"php",
"arrays",
""
] |
I have about 4-5 years of background in programming, some in C# and some in C++. I recently got an internship and have been using C# daily. I am confident in my work, and don't have any problem remembering syntax or anything like that. So I was wondering what you think about getting ReSharper? I was going to demo it first, but I just want to hear other opinions about it. Basically, what I'm trying to ask is: should I wait to get it until I'm more experienced and have more practice with just Visual Studio and its built-in IntelliSense, or would it be alright to get it now? | I'm a developer who has been using C# since the .NET 1.0 days on a daily basis. I usually like to keep to the bare bones of an installation, so that if my dev environment is somehow destroyed I can be back up and running as fast as possible. However, I recently did some pairing with another developer which required installing ReSharper. What I found was:
1. I learnt a lot about C# features I never knew existed.
2. It had a much better test runner than VS.
3. It had much better refactoring tools.
About the only thing I didn't like was that it rearranged some of the default VS keyboard shortcuts; however, after a bit of tinkering in the options dialog I was able to turn off the features I didn't need.
I'm missing it now, after the trial. I'm unfortunately unable to afford the cost of a licence (a cash-flow problem, not an "it isn't worth it" problem) at present, and can't convince my bosses to get it for me. | I wouldn't work without it. It duplicates a bunch of the Visual Studio stuff, and adds a whole lot more. It simply improves Visual Studio, and will not impair your learning of C# at all.
"",
"c#",
"visual-studio",
""
] |
I don't understand what I have to install to make EmguCV work. I opened the sample source provided with EmguCV, but I get errors in Visual Studio 2008.
I also tried other .NET wrappers for OpenCV, but none of them worked (including the examples). A message "(sample program) has encountered a problem and needs to close. Please report this error to Microsoft bla bla bla" is shown after I run any program that uses OpenCV.
How do I make this work in C#? | Are all the required .dlls in place? Try copying all the Emgu .dlls to the directory containing the .exe.
To automate this I added a post-build event in Visual Studio:
```
xcopy /Y /D /F $(ProjectDir)Emgu.CV.Binary-2.0.1.0\*.dll $(TargetDir)
```
You need to set the correct source directory (I like to keep my libs along with the sources). | I had problems with examples when I used Visual Studio 2010, and this is what I did:
1. Removed References: Emgu.CV.dll Emgu.UI.dll Emgu.Util.dll
2. From the 'Project' menu -> Preferences, changed the .NET Framework target to 3.5
3. I added the DLLs that I have downloaded (the ones with the same name I deleted before) | How to use EmguCV (OpenCV in C#)? | [
"",
"c#",
"opencv",
""
] |
In class we are doing sorting algorithms and, although I understand them fine when talking about them and writing pseudocode, I am having problems writing actual code for them.
This is my attempt in Python:
```
mylist = [12, 5, 13, 8, 9, 65]

def bubble(badList):
    length = len(badList) - 1
    unsorted = True
    while unsorted:
        for element in range(0, length):
            unsorted = False
            if badList[element] > badList[element + 1]:
                hold = badList[element + 1]
                badList[element + 1] = badList[element]
                badList[element] = hold
                print badList
            else:
                unsorted = True

print bubble(mylist)
```
Now, this (as far as I can tell) sorts correctly, but once it finishes it just loops indefinitely.
How can this code be fixed so the function finishes properly and correctly sorts a list of any (reasonable) size?
P.S. I know I should not really have prints in a function and I should have a return, but I just have not done that yet as my code does not really work yet. | To explain why your script isn't working right now, I'll rename the variable `unsorted` to `sorted`.
At first, your list isn't yet sorted. Of course, we set `sorted` to `False`.
As soon as we start the `while` loop, we assume that the list is already sorted. The idea is this: as soon as we find two elements that are not in the right order, we set `sorted` back to `False`. `sorted` will remain `True` *only if there were no elements in the wrong order*.
```
sorted = False  # We haven't started sorting yet

while not sorted:
    sorted = True  # Assume the list is now sorted
    for element in range(0, length):
        if badList[element] > badList[element + 1]:
            sorted = False  # We found two elements in the wrong order
            hold = badList[element + 1]
            badList[element + 1] = badList[element]
            badList[element] = hold

# We went through the whole list. At this point, if there were no elements
# in the wrong order, sorted is still True. Otherwise, it's false, and the
# while loop executes again.
```
There are also minor little issues that would help the code be more efficient or readable.
* In the `for` loop, you use the variable `element`. Technically, `element` is not an element; it's a number representing a list index. Also, it's quite long. In these cases, just use a temporary variable name, like `i` for "index".
```
for i in range(0, length):
```
* The `range` command can also take just one argument (named `stop`). In that case, you get a list of all the integers from 0 to that argument.
```
for i in range(length):
```
* The [Python Style Guide](http://www.python.org/dev/peps/pep-0008/ "Style Guide for Python Code") recommends that variables be named in lowercase with underscores. This is a very minor nitpick for a little script like this; it's more to get you accustomed to what Python code most often resembles.
```
def bubble(bad_list):
```
* To swap the values of two variables, write them as a tuple assignment. The right hand side gets evaluated as a tuple (say, `(badList[i+1], badList[i])` is `(3, 5)`) and then gets assigned to the two variables on the left hand side (`(badList[i], badList[i+1])`).
```
bad_list[i], bad_list[i+1] = bad_list[i+1], bad_list[i]
```
Put it all together, and you get this:
```
my_list = [12, 5, 13, 8, 9, 65]

def bubble(bad_list):
    length = len(bad_list) - 1
    sorted = False
    while not sorted:
        sorted = True
        for i in range(length):
            if bad_list[i] > bad_list[i+1]:
                sorted = False
                bad_list[i], bad_list[i+1] = bad_list[i+1], bad_list[i]

bubble(my_list)
print my_list
```
(I removed your print statement too, by the way.) | The goal of bubble sort is to move the *heavier* items to the bottom in each round, while moving the *lighter* items up. In the inner loop, where you compare the elements, **you don't have to iterate the whole list in each turn**. The *heaviest* is already placed last. The *swapped* variable is an extra check so we can mark that the list is now sorted and avoid continuing with unnecessary calculations.
```
def bubble(badList):
    length = len(badList)
    for i in range(0, length):
        swapped = False
        for element in range(0, length - i - 1):
            if badList[element] > badList[element + 1]:
                hold = badList[element + 1]
                badList[element + 1] = badList[element]
                badList[element] = hold
                swapped = True
        if not swapped: break
    return badList
```
Your version 1, corrected:
```
def bubble(badList):
    length = len(badList) - 1
    unsorted = True
    while unsorted:
        unsorted = False
        for element in range(0, length):
            #unsorted = False
            if badList[element] > badList[element + 1]:
                hold = badList[element + 1]
                badList[element + 1] = badList[element]
                badList[element] = hold
                unsorted = True
                #print badList
            #else:
                #unsorted = True
    return badList
``` | Why is Bubble Sort implementation looping forever? | [
"",
"python",
"algorithm",
"sorting",
"bubble-sort",
""
] |
Is this how you define a function in jQuery?
```
$(document).ready( function () {
var MyBlah = function($blah) { alert($blah); };
});
```
Now to call the function I do:
```
MyBlah('hello');
``` | First of all, your code works and that's a valid way of creating a function in JavaScript (jQuery aside), but because you are declaring a function inside another function (an anonymous one in this case) "MyBlah" will not be accessible from the global scope.
Here's an example:
```
$(document).ready( function () {
var MyBlah = function($blah) { alert($blah); };
MyBlah("Hello this works") // Inside the anonymous function we are cool.
});
MyBlah("Oops") //This throws a JavaScript error (MyBlah is not a function)
```
This is (sometimes) a desirable behavior since we **do not pollute the global namespace**, so if your function does not need to be called from other part of your code, this is the way to go.
Declaring it outside the anonymous function places it in the global namespace, and it's accessible from everywhere.
Lastly, the **$** at the beginning of the variable name is not needed; it's sometimes used as a jQuery convention when the variable is an instance of the jQuery object itself (not necessarily in this case).
Maybe what you need is creating a [jQuery plugin](https://learn.jquery.com/plugins/basic-plugin-creation/), this is very very easy and useful as well since it will allow you to do something like this:
```
$('div#message').myBlah("hello")
```
See also: <http://www.re-cycledair.com/creating-jquery-plugins> | No, you can just write the function as:
```
$(document).ready(function() {
MyBlah("hello");
});
function MyBlah(blah) {
alert(blah);
}
```
This calls the function `MyBlah` on content ready. | Is this how you define a function in jQuery? | [
"",
"javascript",
"jquery",
""
] |
So I have a container (any kind, probably std::map or std::vector) which contains objects of a class with some network thing running in a thread that checks if it is still connected (the thread is defined inside that class and launches when constructed).
Is there any way I can make the object delete itself from the container when it's disconnected, or should I move the thread outside the object and use that class just to store data? | I would have an unload queue.
When a thread notices that the connection is down, it registers the object (and container) with the unload queue, tidies everything up as much as possible, and then the thread terminates.
A separate thread then runs inside the unload queue. Its sole purpose is to monitor the queue. When it sees a new object on the queue, it removes it from the container and then destroys it (syncing with the object's thread as required). | In order for the object to delete itself from the container, it will have to know which container it is in. You will need to maintain a pointer to the container in the object. You will also have to protect the container with a lock to stop multiple threads accessing the container at the same time.
I think I prefer your second solution - some managing object looks after removing dead objects from the collection. If nothing else, this will be quite a bit easier to debug and the locking logic becomes centralised in a single object. | Object delete itself from container | [
"",
"c++",
"containers",
""
] |
I'm trying to create a universal function that I can call from multiple places to truncate long text recursively to fit a predefined pixel width - using jquery.
Here is the code...
```
function constrain(text, original, ideal_width){
    var ideal = parseInt(ideal_width);
    $('span.temp_item').remove();
    var temp_item = ('<span class="temp_item" style="display:none">'+ text +'</span>');
    var item_length = text.length;
    $(temp_item).appendTo('body');
    var item_width = $('span.temp_item').width();
    if (item_width > ideal) {
        var smaller_text = text.substr(0, (item_length-1));
        return constrain(smaller_text, original);
    } else if (item_length != original) {
        return (text + '…');
    } else if (item_length == original) {
        return text;
    }
}
```
If I run the function like this:
```
$('.service_link span:odd').each(function(){
    var item_text = $(this).text();
    var original_length = item_text.length;
    var constrained = constrain(item_text, original_length, '175');
    $(this).html(constrained);
});
```
The text doesn't truncate. I also tried the 175 without the quotes.
If I define var ideal = 175; inside the function, then it works. Why is passing 175 to the function not working? I did a parseInt on it in case it was a string.
Also - this truncate code runs a bit slow on older machines - any tips for speeding it up?
Thanks! | Great stuff here. I used the function by Phil Carter. I just wanted the new string with the &hellip to be truncated at the same width as the rest.
I just quickly added another temp-width lookup and recursive call. Could use some cleanup but it works.
here's the new while:
```
while(item_width > ideal) {
    var smaller_text = text.substr(0, (item_length-1));
    return constrain(smaller_text, original, ideal_width, counter);
}
if (item_length != original) {
    new_text = text + '…';
    $('span.temp_item').remove();
    var temp_item = ('<span class="temp_item" style="display:none">'+ new_text +'</span>');
    $(temp_item).appendTo('body');
    var item_width_new = $('span.temp_item').width();
    if (item_width_new > ideal) {
        var smaller_text = text.substr(0, (item_length-1));
        return constrain(smaller_text, original, ideal_width, counter);
    }
    else {
        return new_text;
    }
} else if (item_length == original) {
    return text;
}
}
``` | What happens when the visitor to your site presses "Ctrl +"? It's my (probably out of date) belief that you're supposed to use "em" sizes for font containers, so they scale. | Truncate width function not working when passing integer | [
"",
"javascript",
"jquery",
""
] |
This is specifically related to the Google App Engine Memcache API, but I'm sure it also applies to other Memcache tools.
The dictionary .get() method allows you to specify a default value, such as dict.get('key', 'defaultval')
This can be useful if it's possible you might want to store None as a value in a dictionary.
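For example, a quick illustration of that default-value behavior:

```python
d = {"key": None}

# The key exists (with value None), so the default is not used:
print(d.get("key", "defaultval"))      # None

# The key is absent, so the default is returned:
print(d.get("missing", "defaultval"))  # defaultval
```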
However, memcache.get() does not let you do this. I've modified my @memoize decorator so it looks like this:
```
def memoize(keyformat, time=1000000):
    """Decorator to memoize functions using memcache."""
    def decorator(fxn):
        def wrapper(*args, **kwargs):
            key = keyformat + str(args[1:]) + str(kwargs)
            from google.appengine.api import memcache
            data = memcache.get(key)
            if Debug(): return fxn(*args, **kwargs)
            if data:
                if data == 'None': data = None
                return data
            data = fxn(*args, **kwargs)
            if data is None: data = 'None'
            memcache.set(key, data, time)
            return data
        return wrapper
    return decorator
```
Now I'm sure there's a good argument that I shouldn't be storing None values in the first place, but let's put that aside for now. Is there a better way I can handle this besides converting None vals to strings and back? | A possible way to do this is to create new class that defines `None` for this purpose, and assign instances of this to the cache (unfortunately you cannot extend `None`). Alternatively, you could use the empty string "", or avoid storing None/null values altogether (absence of the key implies None).
Then check for instances of your 'None' class when you check the result of `mc.get(key)` (`is None`, `== ""`, etc.) | You could do something like what Haskell and Scala do and store an Option dictionary. The dictionary contains two keys: one key to indicate that it is valid and one key that is used to hold the data. Something like this:
```
{valid: true, data: whatyouwanttostore}
```
Then if `get` returns None, you know that the cache was missed; if the result is a dictionary with None as the data, then you know that the data was in the cache but that the stored value was None. | What if I want to store a None value in the memcache? | [
"",
"python",
"google-app-engine",
"memcached",
""
] |
I have a large set of numbers, probably in the multiple gigabytes range. First issue is that I can't store all of these in memory. Second is that any attempt at addition of these will result in an overflow. I was thinking of using more of a rolling average, but it needs to be accurate. Any ideas?
These are all floating point numbers.
This is not read from a database; it is a CSV file collected from multiple sources. It has to be accurate, as it is stored as parts of a second (e.g. 0.293482888929), and a rolling average can be the difference between .2 and .3
It is a set of numbers representing how long users took to respond to certain form actions. For example, when showing a message box, how long did it take them to press OK or Cancel? The data was sent to me stored as seconds.portions of a second; 1.2347 seconds, for example. Converting it to milliseconds, I overflow int, long, etc. rather quickly. Even if I don't convert it, I still overflow rather quickly. I guess the one answer below is correct: maybe I don't have to be 100% accurate, just look within a certain range inside of a specific StdDev and I would be close enough. | You can sample randomly from your set ("[population](http://en.wikipedia.org/wiki/Statistical_population)") to get an average ("[mean](http://en.wikipedia.org/wiki/Arithmetic_mean)"). The accuracy will be determined by how much your samples vary (as determined by "[standard deviation](http://en.wikipedia.org/wiki/Standard_deviation#Estimating_population_SD)" or variance).
The advantage is that you have billions of observations, and you only have to sample a fraction of them to get a decent accuracy or the "[confidence range](http://en.wikipedia.org/wiki/Confidence_interval)" of your choice. If the conditions are right, this cuts down the amount of work you will be doing.
Here's a [numerical library](http://www.extremeoptimization.com/) for C# that includes a random sequence generator. Just make a random sequence of numbers that reference indices in your array of elements (from 1 to *x*, the number of elements in your array). Dereference to get the values, and then calculate your mean and standard deviation.
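The computation itself is language-agnostic; a hedged sketch in Python (the toy data and sample size here are arbitrary assumptions):

```python
import math
import random

def sample_mean_sd(population, k, seed=0):
    # Draw k distinct random indices, dereference, then compute the sample
    # mean and (Bessel-corrected) standard deviation.
    rng = random.Random(seed)
    sample = [population[i] for i in rng.sample(range(len(population)), k)]
    mean = sum(sample) / float(len(sample))
    var = sum((x - mean) ** 2 for x in sample) / (len(sample) - 1)
    return mean, math.sqrt(var)

# Toy population whose true mean is 0.45; a 1000-element sample lands close.
population = [0.1 * (i % 10) for i in range(100000)]
mean, sd = sample_mean_sd(population, 1000)
```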
If you want to test the distribution of your data, consider using the [Chi-Squared Fit](http://www.itl.nist.gov/div898/handbook/eda/section3/eda35f.htm) test or the [K-S](http://www.itl.nist.gov/div898/handbook/eda/section3/eda35g.htm) test, which you'll find in many spreadsheet and statistical packages (e.g., [R](http://www.r-project.org/)). That will help confirm whether this approach is usable or not. | Integers or floats?
If they're integers, you need to accumulate a frequency distribution by reading the numbers and recording how many of each value you see. That can be averaged easily.
For floating point, this is a bit of a problem. Given the overall range of the floats, and the actual distribution, you have to work out a bin-size that preserves the accuracy you want without preserving all of the numbers.
---
**Edit**
First, you need to sample your data to get a mean and a standard deviation. A few thousand points should be good enough.
Then, you need to determine a respectable range. Folks pick things like ±6σ (standard deviations) around the mean. You'll divide this range into as many buckets as you can stand.
In effect, the number of buckets determines the number of significant digits in your average. So, pick 10,000 or 100,000 buckets to get 4 or 5 digits of precision. Since it's a measurement, odds are good that your measurements only have two or three digits.
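A hedged sketch of the bucket idea (Python for brevity; the range, bucket count, and sample values are all arbitrary assumptions, and negative inputs are not handled):

```python
from collections import Counter

LO, HI, N_BUCKETS = 0.0, 10.0, 10000       # assumed range and resolution
WIDTH = (HI - LO) / N_BUCKETS              # bucket size = 0.001 here

counts = Counter()
total = 0
for x in [1.2347, 0.2934, 2.5, 1.2346]:    # stand-in for streaming the file
    b = min(int((x - LO) / WIDTH), N_BUCKETS - 1)
    counts[b] += 1
    total += 1

# Approximate the mean from bucket midpoints; the error per value is at
# most about half a bucket width, so the bucket count sets the precision.
mean = sum((LO + (b + 0.5) * WIDTH) * n for b, n in counts.items()) / total
```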
---
**Edit**
What you'll discover is that the mean of your initial sample is very close to the mean of any other sample. And any sample mean is close to the population mean. You'll note that most (but not all) of your means are within 1 standard deviation of each other.
You should find that your measurement errors and inaccuracies are larger than your standard deviation.
This means that a sample mean is as useful as a population mean. | How do I find the average in a LARGE set of numbers? | [
"",
"c#",
"math",
"memory",
""
] |
I have an application that allows a user to view details on a specific case without a postback. Each time a user requests data from the server, I pull down the following markup.
```
<form name="frmAJAX" method="post" action="Default.aspx?id=123456" id="frmAJAX">
    <div>
        <input type="hidden" name="__VIEWSTATE" id="__VIEWSTATE" />
    </div>
    <div>
        <input type="hidden" name="__EVENTVALIDATION" id="__EVENTVALIDATION" />
    </div>
    <div id="inner">
        <!-- valid info here -->
    </div>
</form>
```
Next I take the above and innerHTML it to a new DOM element like so:
```
success: function(xhtml) {
    var tr = document.createElement('tr');
    var td = document.createElement('td');
    var container = document.createElement('div');

    obj.parentNode.parentNode.parentNode.insertBefore(tr, obj.parentNode.parentNode.nextSibling);
    td.appendChild(container);
    container.innerHTML = xhtml;
    tr.appendChild(td);
```
But after the above, I use some jQuery to remove the nasty ASP.NET junk:
```
$('form:eq(1)').children().each(
    function() {
        if ($('form:eq(1)').find('div').filter(function() { return $(this).attr('id') == ''; }).remove());
    }
);

//Capture the remaining children
var children = $('form:eq(1)').children();

// Remove the form
$('form:eq(1)').remove();

// append the correct child element back to the DOM
parentObj.append(children);
```
My question is this - when using [IESieve](http://home.orange.nl/jsrosman/) I notice no actual leaks, but an ever-growing number of DOM elements (and thus growing memory usage).
What can I improve on the client side to actually clean up this mess? Note: both IE7 and IE8 show these results.
**EDIT**: I did finally get this working and decided to write a short [blog post](http://toranbillups.com/blog/archive/2009/04/21/Cleanup-for-dynamically-generated-DOM-elements-in-IE/) with complete source code. | The tricky part is figuring out where a reference still exists to the offending nodes.
You're doing this the hard way — you're adding all the markup to the page, then removing the stuff you don't want. I'd do it this way instead:
```
var div = document.createElement('div');
// (Don't append it to the document.)
$(div).html(xhtml);

var stuffToKeep = $(div).find("form:eq(1)> *").filter(
    function() {
        return $(this).attr('id') !== '';
    }
);

parentObj.append(stuffToKeep);

// Then null out the original reference to the DIV to be safe.
div = null;
```
This isn't guaranteed to stop the leak, but it's a good start. | ```
function discardElement(element) {
    var garbageBin = document.getElementById('IELeakGarbageBin');
    if (!garbageBin) {
        garbageBin = document.createElement('DIV');
        garbageBin.id = 'IELeakGarbageBin';
        garbageBin.style.display = 'none';
        document.body.appendChild(garbageBin);
    }

    // move the element to the garbage bin
    garbageBin.appendChild(element);
    garbageBin.innerHTML = '';
}
```
[Source](http://social.msdn.microsoft.com/Forums/en-US/iewebdevelopment/thread/c76967f0-dcf8-47d0-8984-8fe1282a94f5/) | How to dispose of DOM elements in JavaScript to avoid memory leaks | [
"",
"javascript",
"jquery",
"dom",
"memory-management",
"memory-leaks",
""
] |
I'm new to PostgreSQL, and I already have my first problem.
I wrote some code to understand how transactions work, following the manual step by step.
To make it short, I've created 2 tables, users and movements: in the first one there are the name, email and credit columns, in the second the columns from, to, import.
So I was trying it this way:
```
BEGIN;
INSERT INTO movements (from, to, import) VALUES ('mary', 'steve', 600);
UPDATE users SET credit = credit - 600 WHERE name = 'mary';
UPDATE users SET credit = credit + 600 WHERE name = 'steve';
--here comes the problem!
IF (SELECT credit FROM users WHERE name = 'mary') < 0 THEN
ROLLBACK;
END IF
COMMIT;
```
I always get the error:
> ERROR: syntax error at or near "IF"
Where am I mistaken?
P.S.: Don't focus on the example functionality, it's just a trial for me to understand the transactions.. and now, the IF clause... | As Johannes already says: you are mixing regular SQL with PL/pgSQL, the stored procedure language. The link that Johannes provides should explain the concept of stored procedures to you.
I take it you're doing this as a script? Executing one statement after another? I'm afraid you can only do what you want to do inside a Stored Procedure, or Function, as you might call it. This is because when you are executing statements in this way, every statement stands on its own with no relation or information regarding the other statements.
Furthermore you can look at the following link for more information on how to use IF ... THEN ... ELSE ... END IF; conditionals inside plpgsql: [link](http://www.postgresql.org/docs/8.3/static/plpgsql-control-structures.html#PLPGSQL-CONDITIONALS).
---
**EDIT:**
I don't know if ROLLBACK is allowed at that point (because each stored procedure is already in its own transaction), but you should be able to figure that out for yourself using the extensive documentation @ <http://www.postgresql.org>. Here's a sample function with your code in it, also demonstrating some other syntax:
```
CREATE OR REPLACE FUNCTION public.test()
  RETURNS integer AS
$$
DECLARE
    tempvar integer;
BEGIN
    tempvar := 1;

    INSERT INTO movements (from, to, import) VALUES ('mary', 'steve', 600);
    UPDATE users SET credit = credit - 600 WHERE name = 'mary';
    UPDATE users SET credit = credit + 600 WHERE name = 'steve';

    --here comes the problem!
    IF (SELECT credit FROM users WHERE name = 'mary') < 0 THEN
        ROLLBACK;
    END IF;

    RETURN tempvar;
END
$$
LANGUAGE 'plpgsql'
VOLATILE
CALLED ON NULL INPUT
SECURITY INVOKER;
```
However, if you are really going down this road, I recommend using a GUI DB manager. It's easier for learning all this. | You seem to use plain `SQL` but the `IF` statement is part of the [`PL/pgSQL`](http://www.postgresql.org/docs/current/static/plpgsql.html) procedural language which is part of PostgreSQL.
"",
"sql",
"postgresql",
"syntax",
"if-statement",
""
] |
I'm building JARs that I want to package without sources, but I would like the Javadoc to come up for developers in Eclipse. | What is your build process? The Maven release process actually generates 3 jars, one containing the compiled classes, one with sources and one with javadocs. You should be able to customize the POM to prevent distribution of the source jar, and anyone using Maven to manage dependencies will automatically get the javadoc if they declare a dependency on your jar (and have javadoc downloading turned on in the eclipse maven plugin).
I'm not sure how well this would work with the Javadoc in the same JAR as the binaries (never seen that before), but in theory it should work. | How do I get Eclipse to show the javadoc from my classes without including sources in the jar | [
"",
"java",
"eclipse",
"javadoc",
""
] |
I have a label that is sometimes empty. How do I set up a conditional statement on the client side to test this?
I have
```
var label = document.getElementById("<%=label1.ClientID %>").innerHTML;
```
to get the text, but I can't seem to figure out an if..else statement for whether it is empty or not. `label.length == 0`, `label == null`, etc. don't seem to work. Any help? | Try this:
```
if(label){
    // The label is defined
}
```
Neither the if nor an else on it may execute if it's undefined, so it's best not to use an else on this (seems weird, but I just did a check with Firefox). | Here's something better:
```
var id = "<%= label1.ClientID %>";
var label = id.length > 0 ? document.getElementById(id).innerHTML : "";
```
(Assuming this is Ruby here...) | javascript - find the length of a label and if it is empty? | [
"",
"javascript",
""
] |
I have been doing some work in Python, but that was all for stand-alone applications. I'm curious to know whether any offshoot of Python supports web development.
Would someone also suggest a good tutorial or a website where I can pick up some of the basics of web development using Python? | Now that everyone has said [Django](http://docs.djangoproject.com/en/dev/intro/tutorial01/), I can add my two cents: I would argue that you might learn more by looking at the different components first, before using Django.
1. Something that takes care
of the HTTP stuff (e.g.
[CherryPy](http://www.cherrypy.org/))
2. A templating language
to create your web pages.
[Mako](http://www.makotemplates.org/)
is very pythonic and [works](http://docs.cherrypy.org/stable/progguide/choosingtemplate.html) with Cherrpy.
3. If you get your data from a
database, an ORM comes in handy.
[SQLAlchemy](http://www.sqlalchemy.org/)
would be an [example](https://bitbucket.org/Lawouach/cherrypy-recipes/src/c8290261eefb/web/database/sql_alchemy/).
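Before reaching for any of these libraries, it can help to see the interface most of them sit on: WSGI. Here is a minimal sketch using only the standard library's `wsgiref` (the names and the port are illustrative, not from any of the projects above):

```python
from wsgiref.simple_server import make_server

# The HTTP piece (component 1 above) at its most basic: a WSGI application,
# the plain-Python callable interface that CherryPy and most Python web
# frameworks build on.
def app(environ, start_response):
    body = b"Hello from Python"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# To actually serve it (this call blocks until interrupted):
#   make_server("", 8000, app).serve_forever()
```

A templating language and an ORM would then be layered on top of (and inside) a callable like this.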
All the links above have good tutorials. For many real-world use-cases, Django will be a better solution than such a stack as it seamlessly integrates this functionality (and more). And if you need a CMS, Django is your best bet short of Zope. Nevertheless, to get a good grasp of what's going on, a stack of loosely coupled programs might be better. Django hides a lot of the details. | **Edited 3 years later**: Don't use mod\_python, use mod\_wsgi. Flask and Werkzeug are good frameworks too. Needing to know what's going on is useful, but it isn't a requirement. That would be stupid.
Don't look up Django until you have a good grasp of what it is doing on your behalf. Write some basic apps using mod\_python and its request object. I just started learning Python for web development using mod\_python and it has been great.
mod\_python also uses a dispatcher in site-packages/mod\_python/publisher.py. Have a gander through this to see how requests can be handled in a simple-ish way.
You may need to add a bit of config to your Apache config file to get mod\_python up and running but the mod\_python site explains it well.
```
<Directory /path/to/python/files>
AddHandler mod_python .py
PythonHandler mod_python.publisher
PythonDebug On
</Directory>
```
And you are away!
use (as a stupidly basic example):
```
def foo(req):
req.write("Hello World")
```
in `/path/to/python/files/bar.py` assuming `/path/to` is your site root.
And then you can do
```
http://www.mysite.com/python/files/bar/foo
```
to see "Hello World". Also, something that **tripped me up** is the dispatcher uses a lame method to work out the content-type, so to force HTML use:
```
req.content_type = 'text/html'
```
**Good Luck**
After you have a good idea of how Python interacts with mod\_python and Apache, then use a framework that does all the boring stuff for you. Up to you though, just my recommendation | Using python to develop web application | [
"",
"python",
""
] |
I need to look up a value in a "table" where a table can be an array or whatever really.
On paper it looks like this (reduced and generalized):
```
Size 500 750 1000 1250 1500 (speed)
--------------------------------------------
6x5 0.1 0.5 0.55 0.58 0.8
6x4 0.01 0.1 0.4 0.5 0.9
8x5 0.5 0.9 1.1 1.5 2.0
10x5 1.2 1.5 2.0 2.7 3.0
12x6 2.6 3.0 4.4 5.1 7.0 (pressure)
```
I need to somehow extract the pressure when I have a variable size and speed.
Right now I have put each row in a separate array. I would like to avoid a bunch of if/else's, but I don't really know a better way. Thanks for the help. | Assuming your size and speed are always specific values and do not fall between the values specified in your example (so no size `780` or `598` for example), the fastest way to perform lookups based on speed and size is to have a `Dictionary<SizeAndSpeed, double>` where `SizeAndSpeed` is a class like this:
```
public class SizeAndSpeed : IEquatable<SizeAndSpeed>
{
public string Size { get; set; }
public int Speed { get; set; }
public bool Equals(SizeAndSpeed other)
{
return other != null && Size == other.Size && Speed == other.Speed;
}
// Dictionary looks keys up via GetHashCode()/Equals(object), so these
// overrides are required for the IEquatable implementation to be used.
public override bool Equals(object obj)
{
return Equals(obj as SizeAndSpeed);
}
public override int GetHashCode()
{
return (Size ?? "").GetHashCode() ^ Speed;
}
}
```
I'm assuming that `Size` can be a `string`, but of course a more complex object could be used as well. | If the size is unique, make it the key of a Dictionary and then you can use it to get the other elements... | Efficient Method to Find Value in "Table" C# | [
"",
"c#",
"arrays",
"performance",
""
] |
I have configured Java to dump garbage collection information into the logs ([verbose GC](http://wiki.zimbra.com/index.php?title=When_to_Turn_On_Verbose_GC)). I am unsure of what the garbage collection entries in the logs mean. A sample of these entries is posted below. I've searched around on [Google](http://www.google.com/search?q=PSYoungGen) and have not found solid explanations.
I have some reasonable guesses, but I'm looking for answers which provide strict definitions of what the numbers in the entries mean, backed up by credible sources. An automatic +1 to all answers which cite Sun documentation. My questions are:
1. What does PSYoungGen refer to? I assume it has something to do with the previous (younger?) generation, but what exactly?
2. What is the difference between the second triplet of numbers and the first?
3. Why is a name (PSYoungGen) specified for the first triplet of numbers but not the second?
4. What does each number (memory size) in the triplet mean? For example, in 109884K->14201K(139904K), is the memory before GC 109884K, which is then reduced to 14201K? How is the third number relevant? Why would we require a second set of numbers?
> 8109.128: [GC [PSYoungGen: 109884K->14201K(139904K)]
> 691015K->595332K(1119040K), 0.0454530
> secs]
>
> 8112.111: [GC [PSYoungGen: 126649K->15528K(142336K)]
> 707780K->605892K(1121472K), 0.0934560
> secs]
>
> 8112.802: [GC [PSYoungGen: 130344K->3732K(118592K)]
> 720708K->607895K(1097728K), 0.0682690
> secs] | Most of it is explained in the [GC Tuning Guide](http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html) (which you would do well to read anyway).
> The command line option `-verbose:gc` causes information about the heap and garbage collection to be printed at each collection. For example, here is output from a large server application:
>
> ```
> [GC 325407K->83000K(776768K), 0.2300771 secs]
> [GC 325816K->83372K(776768K), 0.2454258 secs]
> [Full GC 267628K->83769K(776768K), 1.8479984 secs]
> ```
>
> Here we see two minor collections followed by one major collection. The numbers before and after the arrow (e.g., `325407K->83000K` from the first line) indicate the combined size of live objects before and after garbage collection, respectively. After minor collections the size includes some objects that are garbage (no longer alive) but that cannot be reclaimed. These objects are either contained in the tenured generation, or referenced from the tenured or permanent generations.
>
> The next number in parentheses (e.g., `(776768K)` again from the first line) is the committed size of the heap: the amount of space usable for java objects without requesting more memory from the operating system. Note that this number does not include one of the survivor spaces, since only one can be used at any given time, and also does not include the permanent generation, which holds metadata used by the virtual machine.
>
> The last item on the line (e.g., `0.2300771 secs`) indicates the time taken to perform the collection; in this case approximately a quarter of a second.
>
> The format for the major collection in the third line is similar.
>
> **The format of the output produced by `-verbose:gc` is subject to change in future releases.**
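Those number pairs can be pulled apart mechanically. A small sketch (the regex assumes the `PSYoungGen` minor-collection layout from the log lines quoted in the question; as the guide warns above, the format can change between releases):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class GcLine {
    // before/after/committed sizes for the young generation and the whole heap, in KB
    private static final Pattern MINOR = Pattern.compile(
        "\\[PSYoungGen: (\\d+)K->(\\d+)K\\((\\d+)K\\)\\] (\\d+)K->(\\d+)K\\((\\d+)K\\), ([0-9.]+) secs");

    /** Returns {youngBefore, youngAfter, youngTotal, heapBefore, heapAfter, heapTotal} or null. */
    static long[] parse(String line) {
        Matcher m = MINOR.matcher(line);
        if (!m.find()) return null;
        long[] out = new long[6];
        for (int i = 0; i < 6; i++) out[i] = Long.parseLong(m.group(i + 1));
        return out;
    }

    public static void main(String[] args) {
        long[] f = parse("8109.128: [GC [PSYoungGen: 109884K->14201K(139904K)] "
                + "691015K->595332K(1119040K), 0.0454530 secs]");
        System.out.println("young " + f[0] + "K -> " + f[1] + "K, heap " + f[3] + "K -> " + f[4] + "K");
    }
}
```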
I'm not certain why there's a PSYoungGen in yours; did you change the garbage collector? | 1. PSYoungGen refers to the garbage collector in use for the minor collection. PS stands for Parallel Scavenge.
2. The first set of numbers are the before/after sizes of the young generation and the second set are for the entire heap. ([Diagnosing a Garbage Collection problem](http://web.archive.org/web/20071011045315/http://java.sun.com/docs/hotspot/gc1.4.2/example.html) details the format)
3. The name indicates the generation and collector in question; the second set is for the entire heap.
An example of an associated full GC also shows the collectors used for the old and permanent generations:
```
3.757: [Full GC [PSYoungGen: 2672K->0K(35584K)]
[ParOldGen: 3225K->5735K(43712K)] 5898K->5735K(79296K)
[PSPermGen: 13533K->13516K(27584K)], 0.0860402 secs]
```
Finally, breaking down one line of your example log output:
```
8109.128: [GC [PSYoungGen: 109884K->14201K(139904K)] 691015K->595332K(1119040K), 0.0454530 secs]
```
* **107Mb** used before GC, **14Mb** used after GC, max young generation size **137Mb**
* **675Mb** heap used before GC, **581Mb** heap used after GC, **1Gb** max heap size
* minor GC occurred **8109.128** seconds since the start of the JVM and took **0.04** seconds | Java Garbage Collection Log messages | [
"",
"java",
"logging",
"garbage-collection",
""
] |
I have two .NET applications that talk to each other over a named pipe. Everything is great the first time through, but after the first message is sent, and the server is going to listen again, the `WaitForConnection()` method throws a `System.IO.Exception` with message **Pipe is broken.**
Why am I getting this exception here? This is my first time working with pipes, but a similar pattern has worked for me in the past with sockets.
Code ahoy!
```
using System.IO.Pipes;
static void Main()
{
var pipe = new NamedPipeServerStream("pipename", PipeDirection.In);
while (true)
{
pipe.WaitForConnection();
string str = new StreamReader(pipe).ReadToEnd();
Console.Write("{0}", str);
}
}
```
Client:
```
public void sendDownPipe(string str)
{
using (var pipe = new NamedPipeClientStream(".", "pipename", PipeDirection.Out))
{
using (var stream = new StreamWriter(pipe))
{
stream.Write(str);
}
}
}
```
The first call to sendDownPipe gets the server to print the message I send just fine, but when it loops back up to listen again, it poops. | I'll post my code that seems to work - I was curious since I never did anything with pipes. I didn't find the class you name for the server-side in the relevant namespace, so here's the code based on the **NamedPipeServerStream**. The callback stuff is just because I couldn't be bothered with two projects.
```
NamedPipeServerStream s = new NamedPipeServerStream("p", PipeDirection.In);
Action<NamedPipeServerStream> a = callBack;
a.BeginInvoke(s, ar => { }, null);
...
private void callBack(NamedPipeServerStream pipe)
{
while (true)
{
pipe.WaitForConnection();
StreamReader sr = new StreamReader(pipe);
Console.WriteLine(sr.ReadToEnd());
pipe.Disconnect();
}
}
```
And the client does this:
```
using (var pipe = new NamedPipeClientStream(".", "p", PipeDirection.Out))
using (var stream = new StreamWriter(pipe))
{
pipe.Connect();
stream.Write("Hello");
}
```
I can repeat the above block multiple times with the server running, no problem. | For me the problem occurred when calling pipe.WaitForConnection() from the server after the client disconnected. The solution is to catch the IOException and call pipe.Disconnect(), and then call pipe.WaitForConnection() again:
```
while (true)
{
try
{
_pipeServer.WaitForConnection();
break;
}
catch (IOException)
{
_pipeServer.Disconnect();
continue;
}
}
``` | System.IO.Exception: Pipe is broken | [
"",
"c#",
".net-3.5",
"named-pipes",
""
] |
I'm reading from a binary data file which is written by invoking the following lines in Matlab m-file:
```
disp(sprintf('template = %d', fwrite(fid, template_1d, 'uint8')));
```
AFAIK, uint8 is the same size as the types BYTE, unsigned char, and unsigned short. Hence I have written the following code in a file-reading method in a C++ class instantiated in the mexfunction called by Matlab:
```
template1D = (unsigned short*) malloc(Nimgs*sizeof(unsigned short));
printf("template1D = %d\n", fread(template1D, sizeof(unsigned short), Nimgs, dfile));
```
and the following is how I deallocated this member variable in the class destructor's helper function:
```
free((void*) template1D);
```
In the main mexfunction, when I did not instantiate the class object to persist in memory after mex-function completes by calling mexMakeMemoryPersistent() function, template1D gets cleared properly without segmentation error messages from Matlab. However, if I did instantiate the class to persist in memory as follows:
```
if (!dasani)
{
dasani = new NeedleUSsim;
mexMakeMemoryPersistent((void*) dasani);
mexAtExit(ExitFcn);
}
```
with ExitFcn being:
```
void ExitFcn()
{
delete dasani;
}
```
then when I'm at the line of free((void\*) template1D);, Matlab gives me an error message about the segmentation fault. I have checked the memory sizes and they seem to be consistent. For the malloc/calloc/free functions, I'm using Matlab's mxMalloc/mxCalloc/mxFree functions when I'm executing the C++ project as a Matlab mex function.
Based on this description, what further suggestions would you have for me to solve this problem and ensure this doesn't happen in the future (or at least know how to deal with similar problems like this in the future)?
Thanks in advance.
----------------------------Additions------------------------------------------------------
The following block of code basically shows the jists of my mex file. A mex file is basically an executable that is run in Matlab and compiled from C/C++ code with some Matlab headers.
```
void ExitFcn()
{
delete dasani;
}
void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
needle_info pin;
// check number of i/o if they are correct
if (nrhs != NUMIN)
{
mexErrMsgTxt("Invalid number of input arguments");
}
else if (nlhs != NUMOUT)
{
mexErrMsgTxt("Invalid number of output arguments");
}
// check if the input is noncomplex
if (mxIsComplex(NEEDLE))
{
mexErrMsgTxt("Input must be a noncomplex scalar integer.");
}
// check if the dimensions of the needle information is valid
int needlerows, needlecols;
needlerows = mxGetM(NEEDLE);
needlecols = mxGetN(NEEDLE);
if (needlerows < 1 || needlecols < 6)
{
mexErrMsgTxt("Needle information's dimensions are invalid");
}
float *needlePoint, *yPoint ;
// retrieving current needle information
// order of the variables are always as follows:
// r, theta, l, rho, alpha, beta
needlePoint = (float*) mxGetData(NEEDLE) ;
pin.r = needlePoint[0];
pin.theta = needlePoint[1];
pin.l = needlePoint[2];
pin.rho = needlePoint[3];
pin.alpha = needlePoint[4];
pin.beta = needlePoint[5];
//// read the file inputs
**//if (!dasani)
//{
// dasani = new NeedleUSsim;
// mexMakeMemoryPersistent((void*) dasani);
// mexAtExit(ExitFcn);
//}
dasani = new NeedleUSsim;
delete dasani;**
// sending an useless output for now (get rid of this if not conceptually needed
plhs[0] = mxCreateNumericMatrix(1,1,mxSINGLE_CLASS,mxREAL) ;
yPoint = (float*) mxGetData(plhs[0]) ;
*yPoint = 1;
}
```
This code would run after build/compilation if the user invokes "mexfunction" anywhere from the command line or m-file script. The snippet enclosed by "\*\*" (when I was trying to bold the snippet) is the problem that I'm looking at. From a second look at the snippet, I may be allocating the memory for dasani pointer in a different memory from the Matlab memory (as there is the memory with scope limited to the C++ mex function only, and another memory space with scope visible to the Matlab program). Otherwise, I'm not sure why Matlab is complaining about this problem. | On top of making dasani to be a persistent pointer, I also need to make its member variables with memory allocated by mxMalloc/mxCalloc to be persistent too, for example:
```
if (!dasani)
{
dasani = new NeedleUSsim;
mexMakeMemoryPersistent((void*) dasani->tplL);
mexMakeMemoryPersistent((void*) dasani->tplR);
mexMakeMemoryPersistent((void*) dasani->tplRho_deg);
mexMakeMemoryPersistent((void*) dasani->tplAlpha_deg);
mexMakeMemoryPersistent((void*) dasani->tplBeta_deg);
mexMakeMemoryPersistent((void*) dasani->hashTb);
mexMakeMemoryPersistent((void*) dasani->template1D);
mexAtExit(ExitFcn);
}
```
With the destructor as shown:
```
void NeedleUSsim::Deallocate()
{
free((void*) tplR); free((void*) tplL);
free((void*) tplRho_deg); free((void*) tplAlpha_deg);
free((void*) tplBeta_deg);
free((void*) hashTb);
free((void*) template1D);
}
``` | The MEX API supports C as well as C++. Because C has no try/catch or destructors, there is no way for a C MEX-function to clean-up memory directly in the event of an error. Therefore, MATLAB tracks the results of the memory allocation routines (mxMalloc, mxCalloc, mxRealloc, mxFree, and all the mxCreate\* functions that return mxArrays) on an internal list. If an error occurs during the MEX-function execution (either by calling mexErrMsgIdAndTxt directly, or using something like mexEvalString to call MATLAB code that errors), then MATLAB will automatically free any mx-based allocated memory. But, also, when a MEX-function terminates normally, MATLAB will
also free any mx-based memory allocated by the MEX-function. Before the days of destructors, this was a convenience for MEX authors, though in the modern C++ world it can get really annoying.
Sometimes, as in the case of this question, you don't want MATLAB to auto-free memory. In that case, you have to use mexMakeMemoryPersistent, or mexMakeArrayPersistent for mxArrays.
You should only ever pass a pointer to mexMakeMemoryPersistent if it was originally allocated with mxMalloc, mxCalloc, or mxRealloc. So this code
```
dasani = new NeedleUSsim;
mexMakeMemoryPersistent((void*) dasani);
```
is bad with a capital 'B', unless you have overloaded NeedleUSsim::operator new() to use mxMalloc, which I wouldn't recommend. But if the fields of dasani are allocated with mxMalloc et al., then you *would* want to pass those to mexMakeMemoryPersistent. I'd recommend doing something like that in the NeedleUSsim constructor if at all possible, to keep it near the mxMalloc call. | memory management problem in mexFunction | [
"",
"c++",
"matlab",
""
] |
I am in the middle of developing a Django application, which has quite complicated models (it models a university - courses, modules, lectures, students etc.)
I have separated the project into apps, to make the whole thing more organised (apps are courses, schools, people, modules and timeperiods). I am having a problem whereby a model in one app may depend on a model in another - so I must import it. The second app then in turn depends on a model in the first, so there is a cycle and Python throws up an error.
How do people deal with this? I understand that apps should be relatively "independent", but in a system like this it doesn't make sense, for example, to use ContentTypes to link students to a module.
Does anyone have a similar project that could comment on this case? | If your dependency is with foreign keys referencing models in other applications, you *don't* need to import the other model. You can use a string in your ForeignKey definition:
```
class MyModel(models.Model):
myfield = models.ForeignKey('myotherapp.MyOtherModel')
```
This way there's no need to import MyOtherModel, so no circular reference. Django resolves the string internally, and it all works as expected. | I wrote the following a long time ago. Reading it now, it is not good advice for the OP's question. They should probably just put their models into a single Django app. But the following is some general background on dependencies in code, and it mentions this "put them into one group" solution halfway through:
Ignoring the Django aspect to your question, the general technique for breaking circular dependencies is to break out one of the cross-referenced items into a new module. For example, the cycle:
```
moduleA: class1, class2
| ^
v |
moduleB: class3, class4
```
could become:
```
moduleB2: class4
|
v
moduleA: class1, class2
|
v
moduleB1: class3
```
Or alternatively, you could split up the classes from moduleA:
```
moduleA1: class1
|
v
moduleB: class3, class4
|
v
moduleA2: class2
```
Or both:
```
moduleA1: class1
|
v
moduleB1: class3
moduleB2: class4
|
v
moduleA2 class2
```
Of course, this is no help if class A & B depend on each other:
```
moduleA: class1
| ^
v |
moduleB: class2
```
In that case, maybe they should be in the same module (I suspect this is the solution the OP should use. Put the various interrelated model classes into a single Django app):
```
moduleA: class1 <---> class2
```
or maybe the classes can be broken down in some way. For example, perhaps a first step could be to break out the part of class1 that class2 needs, into a new class3, which both original classes depend upon:
```
moduleA: class1
| |
| v
moduleB: | class2
| |
v v
moduleC: class3(new!)
```
This has already broken the cycle. We might want to go one step further, and break out the part of class 2 that class 1 depends upon:
```
moduleA: class1
| |
| |
moduleB: | | class2
| | | |
| v v |
moduleC: | class3 |
| |
v v
moduleD: class4(new!)
```
Sometimes this kind of breakdown is difficult. Getting the hang of it is certainly a skill that requires some effort to learn. If it doesn't go well, the resulting pieces seem very ad-hoc, specific to your problem, and you are left with a feeling that you have made your algorithm harder to understand by breaking it into multiple pieces. But when this process goes well, the resulting pieces feel like useful new primitives, which one could imagine being used elsewhere (even if that doesn't actually happen in your current project), and using them makes your core algorithm easier to understand.
(Remember in the general case, that the above "dependency" arrows could represent any kind of dependency between the classes, which might be inheritance, or composition, etc. One class method might simply call a static method on another class. And when we talk of "classes", all of this applies equally to other chunks of logic, such as functions which depend on (i.e. call) each other, or database tables that reference each other.) | Django App Dependency Cycle | [
"",
"python",
"django",
"dependencies",
""
] |
I have a project for my Data Structures class, which is a file compressor that works using Binary Trees and other stuff. We are required to "zip" and "unzip" any given file by using the following instructions in the command line:
For compressing: *compressor.exe -zip file.whatever*
For uncompressing: *compressor.exe -unzip file.zip*
We are programming in **C++**. I use the IDE Code::Blocks and compile using GCC in Windows.
My question is: **How do you even implement that??!!** How can you make your .exe receive those parameters on the command line, and then execute them the way you want?
Also, anything special to have in mind if I want that implementation to compile in Linux?
Thanks for your help | I did it, I got it!!
I now have a basic understanding of how to use the **argc** and **argv[ ]** parameters of the **main()** function (I *always* wondered what they were good for...). For example, if I put in the command line:
*compressor.exe -unzip file.zip*
Then:
* argc is initialized to 3 (the number of arguments on the command line)
* argv[0] == "compressor.exe" (name of app.)
* argv[1] == "-unzip"
* argv[2] == "file.zip"
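In code, those bullets turn into a small argument check. A sketch of a helper that `main(int argc, char* argv[])` could forward its parameters to (the `Action` enum and the function name are illustrative, not part of the assignment):

```cpp
#include <string>

enum class Action { Zip, Unzip, Invalid };

// Expects: program name, a "-zip"/"-unzip" flag, and a file name (argc == 3).
Action parse_args(int argc, char* argv[]) {
    if (argc != 3) return Action::Invalid;
    std::string flag = argv[1];          // argv[0] is the program name
    if (flag == "-zip") return Action::Zip;
    if (flag == "-unzip") return Action::Unzip;
    return Action::Invalid;              // unknown flag
}
```

`main` would then switch on the returned `Action` and open `argv[2]`; since `argc`/`argv` are standard C++, this part compiles unchanged under GCC on Linux too.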
Greg (not 'Creg', sorry =P) and Bemrose, thank you guys for your help!! ^^ | You may want to look in your programming text for the signature of the `main` function, your program's entry point. That's where you'll be able to pull in those command line parameters.
I don't want to be more detailed than that because this is apparently a key point of the assignment, and if I ever find myself working with you, I'll expect you to be able to figure this sort of stuff out on your own once you've received an appropriate nudge. :)
Good luck! | Implementing "app.exe -instruction file" notation in C++ | [
"",
"c++",
"command-line-arguments",
""
] |
Edit: I have this line:
```
$comments = "http://www.kiubbo.com/index.php?action=comments¶m=".$articles[$index]->getId();
```
And if I put it here, it works in the title tag, but not in the link or guid tags.
```
fwrite ($fp, "<item>\n");
fwrite ($fp, " <title>$title</title>\n");
fwrite ($fp, " <link>$comments</link>\n");
fwrite ($fp, "<guid>$comments</guid>\n");
fwrite ($fp, "</item>\n");
}
fwrite ($fp, "</channel></rss>\n");
fclose ($fp);
```
For some reason it strips the "&param" part out of the link ($comments).
Thanks everybody. | Could be your parser failing to error correct properly. You haven't, for instance, escaped your ampersands. Ask [Feed Validator](http://feedvalidator.org/check.cgi) what is wrong.
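Concretely, the fix is to entity-escape the `&` before it goes into the XML, since a bare `&` starts an entity reference. A quick illustration (in Python rather than the question's PHP, and with an illustrative URL; `htmlspecialchars()` plays the same role in PHP):

```python
from xml.sax.saxutils import escape

# Inside XML text a bare '&' begins an entity reference, so '&param' gets
# swallowed as a broken entity; writing '&amp;' keeps the link intact.
url = "http://www.example.com/index.php?action=comments&param=42"
print(escape(url))  # -> http://www.example.com/index.php?action=comments&amp;param=42
```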
If you want to write an XML file, use an XML tool, don't munge together strings. | If I've understood you correctly and you want the link to the comments:
```
$comments = "YOUR_WEBSITE_HERE/index.php?action=comments¶m=".$articles[$index]->getId();
```
You should probably also use `<link rel="replies"` for the link; everyone loves semantics!
Hope that helps,
Phil | Building a RSS feed with links embedded | [
"",
"php",
"mysql",
"html",
"xml",
"rss",
""
] |
I have a button on a JFrame; when it is clicked, I want a dialog box to pop up with multiple text areas for user input. I have been looking all around trying to figure out how to do this, but I keep getting more confused. Can anyone help? | If you don't need much in the way of custom behavior, JOptionPane is a good time saver. It takes care of the placement and localization of OK / Cancel options, and is a quick-and-dirty way to show a custom dialog without needing to define your own classes. Most of the time the "message" parameter in JOptionPane is a String, but you can pass in a JComponent or array of JComponents as well.
Example:
```
JTextField firstName = new JTextField();
JTextField lastName = new JTextField();
JPasswordField password = new JPasswordField();
final JComponent[] inputs = new JComponent[] {
new JLabel("First"),
firstName,
new JLabel("Last"),
lastName,
new JLabel("Password"),
password
};
int result = JOptionPane.showConfirmDialog(null, inputs, "My custom dialog", JOptionPane.PLAIN_MESSAGE);
if (result == JOptionPane.OK_OPTION) {
System.out.println("You entered " +
firstName.getText() + ", " +
lastName.getText() + ", " +
password.getText());
} else {
System.out.println("User canceled / closed the dialog, result = " + result);
}
``` | Try this simple class for customizing a dialog to your liking:
```
import java.util.ArrayList;
import java.util.List;
import javax.swing.JComponent;
import javax.swing.JLabel;
import javax.swing.JOptionPane;
import javax.swing.JRootPane;
public class CustomDialog
{
private List<JComponent> components;
private String title;
private int messageType;
private JRootPane rootPane;
private String[] options;
private int optionIndex;
public CustomDialog()
{
components = new ArrayList<>();
setTitle("Custom dialog");
setMessageType(JOptionPane.PLAIN_MESSAGE);
setRootPane(null);
setOptions(new String[] { "OK", "Cancel" });
setOptionSelection(0);
}
public void setTitle(String title)
{
this.title = title;
}
public void setMessageType(int messageType)
{
this.messageType = messageType;
}
public void addComponent(JComponent component)
{
components.add(component);
}
public void addMessageText(String messageText)
{
JLabel label = new JLabel("<html>" + messageText + "</html>");
components.add(label);
}
public void setRootPane(JRootPane rootPane)
{
this.rootPane = rootPane;
}
public void setOptions(String[] options)
{
this.options = options;
}
public void setOptionSelection(int optionIndex)
{
this.optionIndex = optionIndex;
}
public int show()
{
int optionType = JOptionPane.OK_CANCEL_OPTION;
Object optionSelection = null;
if(options.length != 0)
{
optionSelection = options[optionIndex];
}
int selection = JOptionPane.showOptionDialog(rootPane,
components.toArray(), title, optionType, messageType, null,
options, optionSelection);
return selection;
}
public static String getLineBreak()
{
return "<br>";
}
}
``` | Java - How to create a custom dialog box? | [
"",
"java",
"swing",
"jframe",
"jdialog",
"joptionpane",
""
] |
I am in the design phase of writing a new Windows service application that accepts TCP/IP connections for long running connections (i.e., this is not like HTTP where there are many short connections, but rather a client connects and stays connected for hours or days or even weeks).
I'm looking for ideas for the best way to design the network architecture. I'm going to need to start at least one thread for the service. I am considering using the Asynch API (BeginReceive, etc.) since I don't know how many clients I will have connected at any given time (possibly hundreds). I definitely do not want to start a thread for each connection.
Data will primarily flow out to the clients from my server, but there will be some commands sent from the clients on occasion. This is primarily a monitoring application in which my server sends status data periodically to the clients.
What is the best way to make this as scalable as possible? Basic workflow?
To be clear, I'm looking for .NET-based solutions (C# if possible, but any .NET language will work).
I would need a working example of a solution, either as a pointer to something I could download or a short example in-line. And it must be .NET and Windows based (any .NET language is acceptable). | I've written something similar to this in the past. My research years ago showed that writing your own socket implementation was the best bet, using the *asynchronous* sockets. This meant that clients not really doing anything actually required relatively few resources. Anything that does occur is handled by the .NET thread pool.
I wrote it as a class that manages all connections for the servers.
I simply used a list to hold all the client connections, but if you need faster lookups for larger lists, you can write it however you want.
```
private List<xConnection> _sockets;
```
Also you need the socket actually listening for incoming connections.
```
private System.Net.Sockets.Socket _serverSocket;
```
The start method actually starts the server socket and begins listening for any incoming connections.
```
public bool Start()
{
System.Net.IPHostEntry localhost = System.Net.Dns.GetHostEntry(System.Net.Dns.GetHostName());
System.Net.IPEndPoint serverEndPoint;
try
{
serverEndPoint = new System.Net.IPEndPoint(localhost.AddressList[0], _port);
}
catch (System.ArgumentOutOfRangeException e)
{
throw new ArgumentOutOfRangeException("Port number entered would seem to be invalid, should be between 1024 and 65000", e);
}
try
{
_serverSocket = new System.Net.Sockets.Socket(serverEndPoint.Address.AddressFamily, SocketType.Stream, ProtocolType.Tcp);
}
catch (System.Net.Sockets.SocketException e)
{
throw new ApplicationException("Could not create socket, check to make sure not duplicating port", e);
}
try
{
_serverSocket.Bind(serverEndPoint);
_serverSocket.Listen(_backlog);
}
catch (Exception e)
{
throw new ApplicationException("An error occurred while binding socket. Check inner exception", e);
}
try
{
//warning, only call this once, this is a bug in .net 2.0 that breaks if
// you're running multiple asynch accepts, this bug may be fixed, but
// it was a major pain in the rear previously, so make sure there is only one
//BeginAccept running
_serverSocket.BeginAccept(new AsyncCallback(acceptCallback), _serverSocket);
}
catch (Exception e)
{
throw new ApplicationException("An error occurred starting listeners. Check inner exception", e);
}
return true;
}
```
I'd just like to note that the exception handling code looks bad, but the reason for it is that I had exception suppression code in there so that any exceptions would be suppressed and `false` returned if a configuration option was set; I removed it for brevity's sake.
The `_serverSocket.BeginAccept(new AsyncCallback(acceptCallback), _serverSocket)` call above essentially sets our server socket to call the acceptCallback method whenever a user connects. This method runs from the .NET thread pool, which automatically handles creating additional worker threads if you have many blocking operations. This should optimally handle any load on the server.
```
private void acceptCallback(IAsyncResult result)
{
xConnection conn = new xConnection();
try
{
//Finish accepting the connection
System.Net.Sockets.Socket s = (System.Net.Sockets.Socket)result.AsyncState;
conn = new xConnection();
conn.socket = s.EndAccept(result);
conn.buffer = new byte[_bufferSize];
lock (_sockets)
{
_sockets.Add(conn);
}
//Queue receiving of data from the connection
conn.socket.BeginReceive(conn.buffer, 0, conn.buffer.Length, SocketFlags.None, new AsyncCallback(ReceiveCallback), conn);
//Queue the accept of the next incoming connection
_serverSocket.BeginAccept(new AsyncCallback(acceptCallback), _serverSocket);
}
catch (SocketException e)
{
if (conn.socket != null)
{
conn.socket.Close();
lock (_sockets)
{
_sockets.Remove(conn);
}
}
//Queue the next accept, think this should be here, stop attacks based on killing the waiting listeners
_serverSocket.BeginAccept(new AsyncCallback(acceptCallback), _serverSocket);
}
catch (Exception e)
{
if (conn.socket != null)
{
conn.socket.Close();
lock (_sockets)
{
_sockets.Remove(conn);
}
}
//Queue the next accept, think this should be here, stop attacks based on killing the waiting listeners
_serverSocket.BeginAccept(new AsyncCallback(acceptCallback), _serverSocket);
}
}
```
The above code essentially finishes accepting the connection that comes in, queues `BeginReceive`, which is a callback that will run when the client sends data, and then queues the next `acceptCallback`, which will accept the next client connection that comes in.
The `BeginReceive` method call is what tells the socket what to do when it receives data from the client. For `BeginReceive`, you need to give it a byte array, which is where it will copy the data when the client sends data. The `ReceiveCallback` method will get called, which is how we handle receiving data.
```
private void ReceiveCallback(IAsyncResult result)
{
//get our connection from the callback
xConnection conn = (xConnection)result.AsyncState;
//catch any errors, we'd better not have any
try
{
//Grab our buffer and count the number of bytes received
int bytesRead = conn.socket.EndReceive(result);
//make sure we've read something, if we haven't it supposedly means that the client disconnected
if (bytesRead > 0)
{
//put whatever you want to do when you receive data here
//Queue the next receive
conn.socket.BeginReceive(conn.buffer, 0, conn.buffer.Length, SocketFlags.None, new AsyncCallback(ReceiveCallback), conn);
}
else
{
//Callback run but no data, close the connection
//supposedly means a disconnect
//and we still have to close the socket, even though we throw the event later
conn.socket.Close();
lock (_sockets)
{
_sockets.Remove(conn);
}
}
}
catch (SocketException e)
{
//Something went terribly wrong
//which shouldn't have happened
if (conn.socket != null)
{
conn.socket.Close();
lock (_sockets)
{
_sockets.Remove(conn);
}
}
}
}
```
EDIT: In this pattern I forgot to mention that in this area of code:
```
//put whatever you want to do when you receive data here
//Queue the next receive
conn.socket.BeginReceive(conn.buffer, 0, conn.buffer.Length, SocketFlags.None, new AsyncCallback(ReceiveCallback), conn);
```
Generally, in the "whatever you want" code, I would do reassembly of packets into messages, and then create them as jobs on the thread pool. This way the `BeginReceive` of the next block from the client isn't delayed while whatever message processing code is running.
The receive callback finishes reading from the data socket by calling `EndReceive`. This fills the buffer provided in the `BeginReceive` call. Once you do whatever you want where I left the comment, we call the next `BeginReceive` method, which will run the callback again if the client sends any more data.
Now here's the really tricky part: when the client sends data, your receive callback might only be called with part of the message. Reassembly can become very, very complicated. I used my own method and created a sort of proprietary protocol to do this. I left it out, but if you request, I can add it in. This handler was actually the most complicated piece of code I had ever written.
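For illustration only (this is not the proprietary protocol mentioned above), a common approach is to prefix each message with a 4-byte length and buffer incoming bytes until a complete message has arrived. A rough sketch, with all names hypothetical:

```
using System;
using System.Collections.Generic;

//Hypothetical reassembler: assumes every message is sent with a
//4-byte little-endian length prefix. Feed it each chunk from the
//receive callback; it only yields complete messages.
public class MessageFramer
{
    private readonly List<byte> _pending = new List<byte>();

    public IEnumerable<byte[]> Feed(byte[] chunk, int count)
    {
        for (int i = 0; i < count; i++)
            _pending.Add(chunk[i]);
        while (_pending.Count >= 4)
        {
            int length = BitConverter.ToInt32(_pending.ToArray(), 0);
            if (length < 0 || _pending.Count < 4 + length)
                break; //wait for the rest of the message
            byte[] message = _pending.GetRange(4, length).ToArray();
            _pending.RemoveRange(0, 4 + length);
            yield return message;
        }
    }
}
```

You would call something like `foreach (byte[] msg in framer.Feed(conn.buffer, bytesRead)) { ... }` inside the receive callback; note the iterator is lazy, so the messages are only extracted when you enumerate them.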
```
public bool Send(byte[] message, xConnection conn)
{
if (conn != null && conn.socket.Connected)
{
lock (conn.socket)
{
//we use a blocking mode send, no async on the outgoing
//since this is primarily a multithreaded application, shouldn't cause problems to send in blocking mode
conn.socket.Send(message, message.Length, SocketFlags.None);
}
}
else
return false;
return true;
}
```
The above send method actually uses a synchronous `Send` call. For me that was fine due to the message sizes and the multithreaded nature of my application. If you want to send to every client, you simply need to loop through the \_sockets List.
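As a hedged sketch of that loop (not from the original application), a broadcast helper built on the Send method above might look like:

```
public void Broadcast(byte[] message)
{
    //copy the list under the lock so we don't hold it while sending
    List<xConnection> snapshot;
    lock (_sockets)
    {
        snapshot = new List<xConnection>(_sockets);
    }
    foreach (xConnection conn in snapshot)
    {
        Send(message, conn);
    }
}
```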
The xConnection class you see referenced above is basically a simple wrapper for a socket to include the byte buffer, and in my implementation some extras.
```
public class xConnection : xBase
{
public byte[] buffer;
public System.Net.Sockets.Socket socket;
}
```
Also for reference here are the `using`s I include since I always get annoyed when they aren't included.
```
using System.Net.Sockets;
```
I hope that's helpful. It may not be the cleanest code, but it works. There are also some nuances to the code which you should be wary of changing. For one, only have a single `BeginAccept` called at any one time. There used to be a very annoying .NET bug around this, which was years ago so I don't recall the details.
Also, in the `ReceiveCallback` code, we process anything received from the socket before we queue the next receive. This means that for a single socket, we're only actually ever in `ReceiveCallback` once at any point in time, and we don't need to use thread synchronization. However, if you reorder this to call the next receive immediately after pulling the data, which might be a little faster, you will need to make sure you properly synchronize the threads.
Also, I hacked out a lot of my code, but left the essence of what's happening in place. This should be a good start for your design. Leave a comment if you have any more questions around this. | There are many ways of doing network operations in C#. All of them use different mechanisms under the hood, and thus suffer major performance issues under high concurrency. Begin\* operations are one of these that many people often mistake for being the faster/fastest way of doing networking.
To solve these issues, they introduced the *Async set of methods*: From MSDN, *[SocketAsyncEventArgs Class](https://learn.microsoft.com/en-us/dotnet/api/system.net.sockets.socketasynceventargs)* -
> The SocketAsyncEventArgs class is part of a set of enhancements to the System.Net.Sockets.Socket class that provide an alternative asynchronous pattern that can be used by specialized high-performance socket applications. This class was specifically designed for network server applications that require high performance. An application can use the enhanced asynchronous pattern exclusively or only in targeted hot areas (for example, when receiving large amounts of data).
>
> The main feature of these enhancements is the avoidance of the repeated allocation and synchronization of objects during high-volume asynchronous socket I/O. The Begin/End design pattern currently implemented by the System.Net.Sockets.Socket class requires a System.IAsyncResult object be allocated for each asynchronous socket operation.
Under the covers, the \*Async API uses I/O completion ports which is the fastest way of performing networking operations, see *[Windows Sockets 2.0: Write Scalable Winsock Apps Using Completion Ports](https://learn.microsoft.com/en-us/archive/msdn-magazine/2000/october/windows-sockets-2-0-write-scalable-winsock-apps-using-completion-ports)*
And just to help you out, I am including the source code for a telnet server I wrote using the \*Async API. I am only including the relevant portions. Also to note, instead of processing the data inline, I instead opt to push it onto a lock-free (wait-free) queue that is processed on a separate thread. Note that I am not including the corresponding **Pool** class, which is just a simple pool that will create a new object if it is empty, and the **Buffer** class, which is just a self-expanding buffer that is not really needed unless you are receiving an indeterminate amount of data.
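Since the **Pool** class is not included, here is a minimal sketch of what such a pool might look like (my assumption, not the original implementation):

```
using System.Collections.Generic;

//Minimal pool sketch: Pop returns a pooled instance if one is
//available, otherwise creates a new one; Push returns an instance
//to the pool for reuse.
public class Pool<T> where T : new()
{
    private readonly Stack<T> m_Stack = new Stack<T>();
    private readonly object m_Lock = new object();

    public T Pop()
    {
        lock (m_Lock)
        {
            return m_Stack.Count > 0 ? m_Stack.Pop() : new T();
        }
    }

    public void Push(T item)
    {
        lock (m_Lock)
        {
            m_Stack.Push(item);
        }
    }
}
```

This works for `SocketAsyncEventArgs` because it has a public parameterless constructor; a real pool would likely pre-allocate and cap its size.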
```
public class Telnet
{
private readonly Pool<SocketAsyncEventArgs> m_EventArgsPool;
private Socket m_ListenSocket;
/// <summary>
/// This event fires when a connection has been established.
/// </summary>
public event EventHandler<SocketAsyncEventArgs> Connected;
/// <summary>
/// This event fires when a connection has been shutdown.
/// </summary>
public event EventHandler<SocketAsyncEventArgs> Disconnected;
/// <summary>
/// This event fires when data is received on the socket.
/// </summary>
public event EventHandler<SocketAsyncEventArgs> DataReceived;
/// <summary>
/// This event fires when data is finished sending on the socket.
/// </summary>
public event EventHandler<SocketAsyncEventArgs> DataSent;
/// <summary>
/// This event fires when a line has been received.
/// </summary>
public event EventHandler<LineReceivedEventArgs> LineReceived;
/// <summary>
/// Specifies the port to listen on.
/// </summary>
[DefaultValue(23)]
public int ListenPort { get; set; }
/// <summary>
/// Constructor for Telnet class.
/// </summary>
public Telnet()
{
m_EventArgsPool = new Pool<SocketAsyncEventArgs>();
ListenPort = 23;
}
/// <summary>
/// Starts the telnet server listening and accepting data.
/// </summary>
public void Start()
{
IPEndPoint endpoint = new IPEndPoint(0, ListenPort);
m_ListenSocket = new Socket(endpoint.AddressFamily, SocketType.Stream, ProtocolType.Tcp);
m_ListenSocket.Bind(endpoint);
m_ListenSocket.Listen(100);
//
// Post Accept
//
StartAccept(null);
}
/// <summary>
/// Not yet implemented. Should shut down all connections gracefully.
/// </summary>
public void Stop()
{
//throw (new NotImplementedException());
}
//
// ACCEPT
//
/// <summary>
/// Posts a requests for Accepting a connection. If it is being called from the completion of
/// an AcceptAsync call, then the AcceptSocket is cleared since it will create a new one for
/// the new user.
/// </summary>
/// <param name="e">null if posted from startup, otherwise a <b>SocketAsyncEventArgs</b> for reuse.</param>
private void StartAccept(SocketAsyncEventArgs e)
{
if (e == null)
{
e = m_EventArgsPool.Pop();
e.Completed += Accept_Completed;
}
else
{
e.AcceptSocket = null;
}
if (m_ListenSocket.AcceptAsync(e) == false)
{
Accept_Completed(this, e);
}
}
/// <summary>
/// Completion callback routine for the AcceptAsync post. This will verify that the Accept occurred
/// and then set up a Receive chain to begin receiving data.
/// </summary>
/// <param name="sender">object which posted the AcceptAsync</param>
/// <param name="e">Information about the Accept call.</param>
private void Accept_Completed(object sender, SocketAsyncEventArgs e)
{
//
// Socket Options
//
e.AcceptSocket.NoDelay = true;
//
// Create and setup a new connection object for this user
//
Connection connection = new Connection(this, e.AcceptSocket);
//
// Tell the client that we will be echo'ing data sent
//
DisableEcho(connection);
//
// Post the first receive
//
SocketAsyncEventArgs args = m_EventArgsPool.Pop();
args.UserToken = connection;
//
// Connect Event
//
if (Connected != null)
{
Connected(this, args);
}
args.Completed += Receive_Completed;
PostReceive(args);
//
// Post another accept
//
StartAccept(e);
}
//
// RECEIVE
//
/// <summary>
/// Post an asynchronous receive on the socket.
/// </summary>
/// <param name="e">Used to store information about the Receive call.</param>
private void PostReceive(SocketAsyncEventArgs e)
{
Connection connection = e.UserToken as Connection;
if (connection != null)
{
connection.ReceiveBuffer.EnsureCapacity(64);
e.SetBuffer(connection.ReceiveBuffer.DataBuffer, connection.ReceiveBuffer.Count, connection.ReceiveBuffer.Remaining);
if (connection.Socket.ReceiveAsync(e) == false)
{
Receive_Completed(this, e);
}
}
}
/// <summary>
/// Receive completion callback. Should verify the connection, and then notify any event listeners
/// that data has been received. For now it is always expected that the data will be handled by the
/// listeners and thus the buffer is cleared after every call.
/// </summary>
/// <param name="sender">object which posted the ReceiveAsync</param>
/// <param name="e">Information about the Receive call.</param>
private void Receive_Completed(object sender, SocketAsyncEventArgs e)
{
Connection connection = e.UserToken as Connection;
if (e.BytesTransferred == 0 || e.SocketError != SocketError.Success || connection == null)
{
Disconnect(e);
return;
}
connection.ReceiveBuffer.UpdateCount(e.BytesTransferred);
OnDataReceived(e);
HandleCommand(e);
Echo(e);
OnLineReceived(connection);
PostReceive(e);
}
/// <summary>
/// Handles Event of Data being Received.
/// </summary>
/// <param name="e">Information about the received data.</param>
protected void OnDataReceived(SocketAsyncEventArgs e)
{
if (DataReceived != null)
{
DataReceived(this, e);
}
}
/// <summary>
/// Handles Event of a Line being Received.
/// </summary>
/// <param name="connection">User connection.</param>
protected void OnLineReceived(Connection connection)
{
if (LineReceived != null)
{
int index = 0;
int start = 0;
while ((index = connection.ReceiveBuffer.IndexOf('\n', index)) != -1)
{
string s = connection.ReceiveBuffer.GetString(start, index - start - 1);
s = s.Backspace();
LineReceivedEventArgs args = new LineReceivedEventArgs(connection, s);
Delegate[] delegates = LineReceived.GetInvocationList();
foreach (Delegate d in delegates)
{
d.DynamicInvoke(new object[] { this, args });
if (args.Handled == true)
{
break;
}
}
if (args.Handled == false)
{
connection.CommandBuffer.Enqueue(s);
}
start = index;
index++;
}
if (start > 0)
{
connection.ReceiveBuffer.Reset(0, start + 1);
}
}
}
//
// SEND
//
/// <summary>
/// Overloaded. Sends a string over the telnet socket.
/// </summary>
/// <param name="connection">Connection to send data on.</param>
/// <param name="s">Data to send.</param>
/// <returns>true if the data was sent successfully.</returns>
public bool Send(Connection connection, string s)
{
if (String.IsNullOrEmpty(s) == false)
{
return Send(connection, Encoding.Default.GetBytes(s));
}
return false;
}
/// <summary>
/// Overloaded. Sends an array of data to the client.
/// </summary>
/// <param name="connection">Connection to send data on.</param>
/// <param name="data">Data to send.</param>
/// <returns>true if the data was sent successfully.</returns>
public bool Send(Connection connection, byte[] data)
{
return Send(connection, data, 0, data.Length);
}
public bool Send(Connection connection, char c)
{
return Send(connection, new byte[] { (byte)c }, 0, 1);
}
/// <summary>
/// Sends an array of data to the client.
/// </summary>
/// <param name="connection">Connection to send data on.</param>
/// <param name="data">Data to send.</param>
/// <param name="offset">Starting offset of data in the buffer.</param>
/// <param name="length">Amount of data in bytes to send.</param>
/// <returns>true if the data was sent successfully.</returns>
public bool Send(Connection connection, byte[] data, int offset, int length)
{
bool status = true;
if (connection.Socket == null || connection.Socket.Connected == false)
{
return false;
}
SocketAsyncEventArgs args = m_EventArgsPool.Pop();
args.UserToken = connection;
args.Completed += Send_Completed;
args.SetBuffer(data, offset, length);
try
{
if (connection.Socket.SendAsync(args) == false)
{
Send_Completed(this, args);
}
}
catch (ObjectDisposedException)
{
//
// return the SocketAsyncEventArgs back to the pool and return as the
// socket has been shutdown and disposed of
//
m_EventArgsPool.Push(args);
status = false;
}
return status;
}
/// <summary>
/// Sends a command telling the client that the server WILL echo data.
/// </summary>
/// <param name="connection">Connection to disable echo on.</param>
public void DisableEcho(Connection connection)
{
byte[] b = new byte[] { 255, 251, 1 };
Send(connection, b);
}
/// <summary>
/// Completion callback for SendAsync.
/// </summary>
/// <param name="sender">object which initiated the SendAsync</param>
/// <param name="e">Information about the SendAsync call.</param>
private void Send_Completed(object sender, SocketAsyncEventArgs e)
{
e.Completed -= Send_Completed;
m_EventArgsPool.Push(e);
}
/// <summary>
/// Handles a Telnet command.
/// </summary>
/// <param name="e">Information about the data received.</param>
private void HandleCommand(SocketAsyncEventArgs e)
{
Connection c = e.UserToken as Connection;
if (c == null || e.BytesTransferred < 3)
{
return;
}
for (int i = 0; i < e.BytesTransferred; i += 3)
{
if (e.BytesTransferred - i < 3)
{
break;
}
if (e.Buffer[i] == (int)TelnetCommand.IAC)
{
TelnetCommand command = (TelnetCommand)e.Buffer[i + 1];
TelnetOption option = (TelnetOption)e.Buffer[i + 2];
switch (command)
{
case TelnetCommand.DO:
if (option == TelnetOption.Echo)
{
// ECHO
}
break;
case TelnetCommand.WILL:
if (option == TelnetOption.Echo)
{
// ECHO
}
break;
}
c.ReceiveBuffer.Remove(i, 3);
}
}
}
/// <summary>
/// Echoes data back to the client.
/// </summary>
/// <param name="e">Information about the received data to be echoed.</param>
private void Echo(SocketAsyncEventArgs e)
{
Connection connection = e.UserToken as Connection;
if (connection == null)
{
return;
}
//
// backspacing would cause the cursor to proceed beyond the beginning of the input line
// so prevent this
//
string bs = connection.ReceiveBuffer.ToString();
if (bs.CountAfterBackspace() < 0)
{
return;
}
//
// find the starting offset (first non-backspace character)
//
int i = 0;
for (i = 0; i < connection.ReceiveBuffer.Count; i++)
{
if (connection.ReceiveBuffer[i] != '\b')
{
break;
}
}
string s = Encoding.Default.GetString(e.Buffer, Math.Max(e.Offset, i), e.BytesTransferred);
if (connection.Secure)
{
s = s.ReplaceNot("\r\n\b".ToCharArray(), '*');
}
s = s.Replace("\b", "\b \b");
Send(connection, s);
}
//
// DISCONNECT
//
/// <summary>
/// Disconnects a socket.
/// </summary>
/// <remarks>
/// It is expected that this disconnect is always posted by a failed receive call. Calling the public
/// version of this method will cause the next posted receive to fail and this will cleanup properly.
/// It is not advised to call this method directly.
/// </remarks>
/// <param name="e">Information about the socket to be disconnected.</param>
private void Disconnect(SocketAsyncEventArgs e)
{
Connection connection = e.UserToken as Connection;
if (connection == null)
{
throw (new ArgumentNullException("e.UserToken"));
}
try
{
connection.Socket.Shutdown(SocketShutdown.Both);
}
catch
{
}
connection.Socket.Close();
if (Disconnected != null)
{
Disconnected(this, e);
}
e.Completed -= Receive_Completed;
m_EventArgsPool.Push(e);
}
/// <summary>
/// Marks a specific connection for graceful shutdown. The next receive or send to be posted
/// will fail and close the connection.
/// </summary>
/// <param name="connection"></param>
public void Disconnect(Connection connection)
{
try
{
connection.Socket.Shutdown(SocketShutdown.Both);
}
catch (Exception)
{
}
}
/// <summary>
/// Telnet command codes.
/// </summary>
internal enum TelnetCommand
{
SE = 240,
NOP = 241,
DM = 242,
BRK = 243,
IP = 244,
AO = 245,
AYT = 246,
EC = 247,
EL = 248,
GA = 249,
SB = 250,
WILL = 251,
WONT = 252,
DO = 253,
DONT = 254,
IAC = 255
}
/// <summary>
/// Telnet command options.
/// </summary>
internal enum TelnetOption
{
Echo = 1,
SuppressGoAhead = 3,
Status = 5,
TimingMark = 6,
TerminalType = 24,
WindowSize = 31,
TerminalSpeed = 32,
RemoteFlowControl = 33,
LineMode = 34,
EnvironmentVariables = 36
}
}
``` | How to write a scalable TCP/IP based server | [
"",
"c#",
".net",
"networking",
"tcp",
"scalability",
""
] |
I have a large xml file (approx. 10 MB) in the following simple structure:
```
<Errors>
<Error>.......</Error>
<Error>.......</Error>
<Error>.......</Error>
<Error>.......</Error>
<Error>.......</Error>
</Errors>
```
My need is to add a new node <Error> at the end, before the closing </Errors> tag. What is the fastest way to achieve this in .NET? | You need to use the XML inclusion technique.
Your error.xml (doesn't change, just a stub. Used by XML parsers to read):
```
<?xml version="1.0"?>
<!DOCTYPE logfile [
<!ENTITY logrows
SYSTEM "errorrows.txt">
]>
<Errors>
&logrows;
</Errors>
```
Your errorrows.txt file (changes, the xml parser doesn't understand it):
```
<Error>....</Error>
<Error>....</Error>
<Error>....</Error>
```
Then, to add an entry to errorrows.txt:
```
using (StreamWriter sw = File.AppendText("errorrows.txt"))
{
XmlTextWriter xtw = new XmlTextWriter(sw);
xtw.WriteStartElement("Error");
// ... write error messge here
xtw.Close();
}
```
Or you can even use .NET 3.5 XElement, and append the text to the `StreamWriter`:
```
using (StreamWriter sw = File.AppendText("errorrows.txt"))
{
XElement element = new XElement("Error");
// ... write error messge here
sw.WriteLine(element.ToString());
}
```
See also [Microsoft's article Efficient Techniques for Modifying Large XML Files](http://msdn.microsoft.com/en-us/library/aa302289.aspx) | First, I would disqualify System.Xml.XmlDocument because [it is a DOM](http://en.wikipedia.org/wiki/Document_Object_Model) which requires parsing and building the entire tree in memory before it can be appended to. This means your 10 MB of text will be more than 10 MB in memory. This means it is "memory intensive" and "time consuming".
Second, I would disqualify System.Xml.XmlReader because it [requires parsing the entire file](http://en.wikipedia.org/wiki/Simple_API_for_XML) first before you can get to the point of when you can append to it. You would have to copy the XmlReader into an XmlWriter since you can't modify it. This requires duplicating your XML in memory first before you can append to it.
A faster solution than XmlDocument or XmlReader would be string manipulation (which has its own memory issues):
```
string xml = @"<Errors><error />...<error /></Errors>";
int idx = xml.LastIndexOf("</Errors>");
xml = xml.Substring(0, idx) + "<error>new error</error></Errors>";
```
Chop off the end tag, add in the new error, and add the end tag back.
I suppose you could go crazy with this and truncate your file by 9 characters and append to it. Wouldn't have to read in the file and would let the OS optimize page loading (only would have to load in the last block or something).
```
System.IO.FileStream fs = System.IO.File.Open("log.xml", System.IO.FileMode.Open, System.IO.FileAccess.ReadWrite);
fs.Seek(-("</Errors>".Length), System.IO.SeekOrigin.End);
byte[] tail = System.Text.Encoding.UTF8.GetBytes("<error>new error</error></Errors>");
fs.Write(tail, 0, tail.Length);
fs.Close();
```
That will hit a problem if your file is empty or contains only "<Errors></Errors>", both of which can easily be handled by checking the length. | Fastest way to add new node to end of an xml? | [
"",
"c#",
".net",
"xml",
""
] |
We are developing two versions of an application. Not in the sense of a lite vs standard version of the application, where one version will have limited functionality etc. We will actually be displaying different types of information in the application, depending on the version (that's the best way I can describe it without going into too many details).
To differentiate the two versions of the application we've considered using the conditional attribute and the #if directive (if there are any other options or better way than these two, I'm open for suggestions). After some research and debate, we've decided to go with the #if approach, since this will not include the unnecessary code during the compile process (whereas the conditional attribute will just remove the calls to the methods that do not meet the condition, but still include the methods... if I'm not mistaken). I realize the two are not mutually exclusive, so we could always mix and match if need be.
Anyway... What we're now wondering, is if there is a way to **only** include certain windows forms during a compile, based on which version of the application we are compiling. We have split out all of the logic, so the forms are really just forms, with very little code inside them (mostly just calls to form manager classes that handle all of the business logic). The form manager classes will contain some of the #if statements inside of them, so the code can be reused in both versions of the application, whenever possible (instead of making two classes and putting a conditional attribute on the classes... though maybe this is something we should consider).
Is anyone aware of a good way to do this?
TIA
UPDATE:
Just an FYI of what we actually decided to do. We put the different versions of the forms into separate namespaces and then only had to use an #if statement around the namespace using statement at the top of the class that manages all of the forms. Worked out pretty slick and was very little work. | I do this with library projects. I produce another project (.csproj), and then include into that project the existing sources. In VS2008, right click on the new project, click Add Existing Item... and then *instead of clicking Add*, use the select arrow to select "Add as Link".
Rather than duplicating source modules, **Add as Link** will include a reference to the existing source file in the new project. This way you can have N projects, each with a different combination of source modules. I use this in concert with #if statements within the source of common modules to produce different versions of a library.
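In the project file this shows up as a `<Compile>` item whose `Include` path points outside the project directory. A hypothetical fragment (paths and file names are illustrative):

```
<Compile Include="..\MainApp\FormManager.cs">
  <Link>FormManager.cs</Link>
</Compile>
```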
[Add Existing Item http://www.freeimagehosting.net/uploads/th.eff09391e9.png](http://www.freeimagehosting.net/uploads/th.eff09391e9.png)
[full image](http://www.freeimagehosting.net/image.php?eff09391e9.png)
[Add as Link http://www.freeimagehosting.net/uploads/th.f12b764887.png](http://www.freeimagehosting.net/uploads/th.f12b764887.png)
[full image](http://www.freeimagehosting.net/image.php?f12b764887.png) | Another way to do this is using OO inheritance: put functionality that's common to both versions in a superclass, and then create separate subclasses which define the specializations of the superclass for each of your versions.
You can then build your superclass[es] as a shared library, and build each specialized subclass in separate assemblies (which reference the common shared library).
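A rough C# sketch of that layout (the type names are made up for illustration):

```
//Compiled into the shared library:
public abstract class ReportFormBase : System.Windows.Forms.Form
{
    //Each version supplies its own data source.
    protected abstract string[] LoadItems();

    protected void Populate(System.Windows.Forms.ListBox box)
    {
        box.Items.AddRange(LoadItems());
    }
}

//Compiled only into the assembly for one version:
public class StandardReportForm : ReportFormBase
{
    protected override string[] LoadItems()
    {
        return new string[] { "standard item" };
    }
}
```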
Doing this uses no conditional compilation nor conditional build options. | Can you include only certain forms during a compile | [
"",
"c#",
".net",
"conditional-compilation",
""
] |
I have a static `std::vector` in a class. When I use Microsoft's memory leak detection tools:
```
_CrtMemState state;
_CrtMemCheckpoint( & state);
_CrtMemDumpAllObjectsSince( & state );
```
it reports a leak after I insert stuff into the vector. This makes sense to me because new space is allocated when something is inserted into the vector. This space isn't deallocated until the program terminates (since the vector is static). Is this right?
In the destructor of the class that contains the vector, I'm deleting the object that I put into the vector. However, the memory that's allocated when the insertion happened is still hanging around. Is there any way to delete this space? | You can swap the vector with an empty one - this will release the memory.
See also [Q: Shrinking a vector](https://stackoverflow.com/questions/586634/shrinking-a-vector) | To add to what James wrote. He means to do this:
```
std::vector<T>().swap(v);
```
where 'v' is the vector whose memory you want to release. | Memory leak (sort of) with a static std::vector | [
"",
"c++",
"memory",
"stl",
"vector",
"memory-leaks",
""
] |
I need to be able to scroll a RichTextBox to the bottom, even when I am not appending text. I know I can append text, and then use that to set the selection start. However I want to ensure it is at the bottom for visual reasons, so I am not adding any text. | You could try setting the SelectionStart property to the length of the text and then call the ScrollToCaret method.
```
richTextBox.SelectionStart = richTextBox.Text.Length;
richTextBox.ScrollToCaret();
``` | The `RichTextBox` will stay scrolled to the end if it has focus and you use `AppendText` to add the information. If you set `HideSelection` to false it will keep its selection when it loses focus and stay auto-scrolled.
I designed a Log Viewer GUI that used the method below. It used up to a full core keeping up. Getting rid of this code and setting `HideSelection` to false got the CPU usage down to 1-2%.
```
//Don't use this!
richTextBox.AppendText(text);
richTextBox.ScrollToEnd();
``` | How do I scroll a RichTextBox to the bottom? | [
"",
"c#",
"winforms",
"scroll",
"richtextbox",
""
] |
I want to take advantage of the new features in Windows 7 using C# (i.e. JumpList, Window Shake, etc). Are there any websites/tutorials that can show me what I want to learn? | Check out this link. Good info:
<http://windowsteamblog.com/blogs/developers/archive/2009/05/18/windows-7-managed-code-apis.aspx> | [.NET Jump List](https://stackoverflow.com/questions/466726/net-jump-list)
<http://www.vcskicks.com/aero-shake.php> | Taking advantage of Windows 7 RC using C#? | [
"",
"c#",
"windows-7",
"jump-list",
""
] |
I am trying to make a C# WPF form that I can drag around the screen by clicking on it and moving the mouse. The form's characteristics include being completely transparent and containing only one image. This being said, the window style is None and it is not displayed in the taskbar. So essentially all you can see when the app is running is a little image - and ideally I want to be able to drag it around the desktop if I click and hold the left mouse button and move it about.
Does anyone know a simple way I can accomplish this, or have I overlooked a built-in function?
Thanks. | You can use the Window.DragMove method in the mouse down event of the window. | Previous answers hit on the answer, but the full example is this:
```
private void Window_MouseDown(object sender, MouseButtonEventArgs e)
{
if (e.LeftButton == MouseButtonState.Pressed)
{
DragMove();
}
}
``` | Drag a WPF Form around the desktop | [
"",
"c#",
"wpf",
"forms",
"window",
"drag",
""
] |
With Spring MVC, it's easy to express a concept like "A user is submitting the form if they use POST or if they include the 'isSubmit' parameter." You'd just extend `SimpleFormController` and override the `isFormSubmission` method.
However, Spring MVC now uses these neat annotations like `@RequestMapping` to handle requests. `@RequestMapping` has an obvious filter for whether somebody used a GET or a POST, but I don't see any inherent support for all of the useful logic SimpleFormController provided. Is it still available to me with annotations? | So, after a bit of investigation, there are in fact a couple of ways to handle this situation.
The first way is to go ahead and use a `SimpleFormController` with the `@RequestMapping` annotation at the class level. A lesser-known but pretty cool property of @RequestMapping is that it knows perfectly well how to deal with the classes that implement Spring's `Controller` interface. The only downside here is that I'm still using the old MVC interfaces and classes, and they're going to be deprecated in Spring 3.0.
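A rough sketch of that first approach (names are hypothetical; assumes Spring 2.5's annotation-based handler mapping):

```
@Controller
@RequestMapping("/myForm.do")
public class MyFormController extends SimpleFormController
{
    @Override
    protected boolean isFormSubmission(HttpServletRequest request)
    {
        return "POST".equals(request.getMethod())
                || request.getParameter("isSubmit") != null;
    }
}
```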
The second way was pointed out by kgiannakakis above. Simply create a `@RequestMapping` method for every way that the submit can be called, and have them all just call a single submit method, either in a constructor-chaining style or with some private method. Simple and easy to understand. Thanks, kgiannakakis! | Copying from [here](http://static.springframework.org/spring/docs/2.5.x/reference/mvc.html#mvc-annotation):
```
Path mappings can be narrowed through parameter conditions: a sequence of
"myParam=myValue" style expressions, with a request only mapped if each such
parameter is found to have the given value. "myParam" style expressions are
also supported, with such parameters having to be present in the request
(allowed to have any value). Finally, "!myParam" style expressions indicate
that the specified parameter is not supposed to be present in the request.
```
You can only use the RequestMapping options to define the desired functionality. The Annotations Controller doesn't implement any interface to work with. | Spring MVC isFormSubmission() equivalent for annotations? | [
"",
"java",
"spring",
"spring-mvc",
""
] |
I’m trying to work with some json formatted data in C#, but, I’m having some problems determining the proper way to approach the problem. My issue is that the json formatted data will be in an unknown format (I know that sounds odd … please read on). Basically, the json formatted data will be some collection of name/value pairs where the values may or may not be arrays of nested name/value pairs. To make things more fun, the nesting of the name/value pair arrays can continue on ad infinitum.
For example:
I might have some data that looks like…
```
{
"1": {
"1.1": {
"1.1.1": "value1",
"1.1.2": "value2",
"1.1.3": "value3"
},
"1.2": "value4",
"1.3": {
"1.3.1": {
"1.3.1.1": "value5",
"1.3.1.2": "value6"
},
"1.3.1.2": "value7",
"1.3.1.3": "value8"
}
}
}
```
Unfortunately, I don’t know how much nesting is going to occur, and technically I don’t know what name/value pairs will be present in any given message.
Is there any supported mechanism in C# that would enable me to easily parse this into a nested set of hashtables?
I’d like to do something along the lines of (note this code is not 100% syntactically correct and would better be done via recursion … but it gets the idea across).
```
Hashtable ht = [deserialize data method](jsonformattedstring);
foreach (Hashtable nested in ht)
{
If (nested.count > 1)
{
Foreach (hashtable next in nested)
…
}
}
``` | I didn't like the .Net Json parsing...it does some strange things occasionally. I've switched to [Json.NET](http://james.newtonking.com/projects/json-net.aspx), an open source library. It has a nice JObject object that will do what you need. | In .NET, you have the [JsonArray](http://msdn.microsoft.com/en-us/library/system.json.jsonarray(VS.95).aspx), which allows you to load up and parse the JSON data. It creates an array of JsonValue and it is completely nested based on the JSON data it parses.
If you specifically need Hashtable, you could translate the data from JsonArray, though Hashtable is all but deprecated in favor of Dictionary.
Josh Holmes has a pretty good "getting started" post about JSON in .NET:
<http://www.joshholmes.com/blog/2009/01/20/PlayingWithJSON.aspx> | C# - parsing json formatted data into nested hashtables | [
"",
"c#",
"json",
"hashtable",
""
] |
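The recursive walk the asker sketches is simple in any language. Here is a minimal runnable illustration in Python (Python is used purely for illustration; the same shape translates directly to C# over Json.NET's JObject, and the sample string below is a trimmed, hypothetical version of the data above):

```python
import json

sample = '{"1": {"1.1": {"1.1.1": "value1"}, "1.2": "value4"}}'

def collect_leaves(node, out=None):
    """Recursively walk nested name/value pairs, collecting the leaf values."""
    if out is None:
        out = []
    if isinstance(node, dict):   # a nested set of name/value pairs
        for child in node.values():
            collect_leaves(child, out)
    else:                        # a plain value
        out.append(node)
    return out

leaves = collect_leaves(json.loads(sample))
```

Because the recursion only branches on "is this node a map or a value", it handles arbitrary nesting depth without knowing the names in advance, which is exactly the constraint in the question.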
I am building a basic Image editor. In my app, if the user wants to resize the image a new form pops up and asks the user to input an new width and height for the image.
```
public partial class Form1 : Form
{
...
private void resizeToolStripMenuItem_Click(object sender, EventArgs e)
{
resize resizeForm = new resize();
resizeForm.ShowDialog();
}
...
}
```
I am wondering how I can get the values from the `resizeForm` and use them to alter the image on the parent form (the `Form1` instance).
If this question needs clarification please let me know.
Thanks! | I assume there are a number of ways to do this. I'd probably use public properties on the resizeForm and then get those when the resizeForm.ShowDialog() returns.
```
if (resizeForm.ShowDialog() == DialogResult.OK) // or whatever
{
myVal = resizeForm.Val;
...
}
```
or something like that. | Setup properties in your "resize" class for the values you want to retrieve. For example, if you add a width property:
```
public int Width { get; set; }
```
you will be able to get the width from your Form1 class. | Calling a method on another form in C# | [
"",
"c#",
"winforms",
""
] |
What do you think? Is this correct or are there memory leaks?
Source:
```
#include <QList.h>
#include <boost/shared_ptr.hpp>
#include <iostream>
class A {
private:
int m_data;
public:
A(int value=0) { m_data = value; }
~A() { std::cout << "destroying A(" << m_data << ")" << std::endl; }
operator int() const { return m_data; }
};
int _tmain(int argc, _TCHAR* argv[])
{
QList<boost::shared_ptr<A> > list;
list.append(boost::shared_ptr<A>(new A(6)));
std::cout << int(*(list.at(0))) << std::endl;
return 0;
}
```
Output:
```
6
destroying A(6)
``` | It seems correct. Boost's shared\_ptr is a reference counting pointer. Reference counting is able to reclaim memory if there are no circular references between objects. In your case, objects of class A do not reference any other objects. Thus, you can use shared\_ptr without worries. Also, the ownership semantics allow shared\_ptrs to be used in STL (and Qt) containers. | It is difficult to suggest anything without knowing why the list of `shared_ptr`s of `A` objects exist in the first place.
Take a look at the [ownership semantics](http://www.boost.org/doc/libs/1_35_0/doc/html/interprocess/interprocess_smart_ptr.html#interprocess.interprocess_smart_ptr.unique_ptr) of smart pointers. Maybe of help to you.
Some other things that can be improved:
**1.** Use initializer lists in ctor like:
```
class A {
private:
int m_data;
public:
A(int value=0) : m_data (value) {}
// ....
```
**2.** `int _tmain(int argc, _TCHAR* argv[])` is not a Standard signature;
Use
```
int main(int argc, char* argv[])
```
or just:
```
int main()
``` | QList and shared_ptr | [
"",
"c++",
"qt",
"boost",
"qlist",
""
] |
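The reference-counting behaviour the accepted answer describes can be demonstrated outside C++ as well. The following Python sketch uses CPython's own reference counting as a stand-in for `shared_ptr` (illustration only, not Qt/Boost code; determinism of the destruction point is a CPython detail):

```python
import weakref

class A:
    def __init__(self, value=0):
        self.value = value

destroyed = []
a = A(6)
weakref.finalize(a, destroyed.append, True)  # fires when the last reference dies

container = [a]   # a second owner, like copying a shared_ptr into a QList
del a             # one owner remains, so the object survives
alive_after_del = (destroyed == [])

container.clear() # last owner gone -> object destroyed immediately in CPython
```

As with `shared_ptr`, the object lives exactly as long as at least one owner holds it, and no cycles means no leaks.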
I am investigating the use of web frameworks with my Java web-app. My basic requirements are pretty much easy maintainability, testability and no repetition.
I have explored writing my own MVC-type app using some sort of front controller pattern and JSP's for the views. The benefit of this is that I have complete control of all aspects of my web-app and if I design it properly it should not be hard to move it over to a more tested framework in the future if I so choose. However, the con is that I have to reinvent the wheel so to speak.
I hear good things about the currently available web frameworks. Some technologies that I have been looking at are Spring, Wicket, Struts, Guice, Hibernate and Tapestry.
I am a bit wary of Tapestry and Wicket. I don't know too much about them, but they seem to deviate from the servlet->model, jsp->view formula. I am not sure if I am comfortable with that. That said, I hear that Wicket is actually the best fit with Guice and is extremely testable.
Spring seems like it could be a nice fit, but I am very wary of frameworks which attempt to do everything. I would love to use spring-MVC, but can I swap in other components? Can I for example use Guice as my DI engine while using Spring-MVC as my framework?
I have briefly looked at Struts but it seems overly complex for my needs and again seems to be a complete package.
I've never used Hibernate, but it seems to be the standard for ORM and if it's anything like ActiveRecord (which I have only been exposed to a little bit) I am sure it fits my needs.
I also have never really used Guice but people really seem to like it, and I am a fan of DI in general even though I am not sure how it is used in an actual application.
Basically, I am only really confident with writing Servlets / JSP's. I am not opposed to learning alternate technologies but I am looking for advice as to which ones would REALLY benefit me.
If I can make an MVC app using Servlets and JSP's is it worth it to incorporate Spring? Or should I just use Servlets / JSP's and incorporate a DI engine like Guice?
I am pretty sure I would like to use Hibernate for ORM, but I hear it can be pretty complex. All I am really looking for is a way to map my POJO's to a database, so if there is something better/easier to use I am willing to look it up.
I am feeling lost and am looking for a bit of direction from people knowledgable in the area, any opinions on any of these issues would be hugely appreciated. Thanks! | "Spring seems like it could be a nice fit, but I am very wary of frameworks which attempt to do everything. I would love to use spring-MVC, but can I swap in other components? Can I for example use Guice as my DI engine while using Spring-MVC as my framework?"
Agreed Spring provides a lot of stuff, but it's totally modular. You can use DI with or without AOP and so forth. And yes you can use Spring MVC and Guice for DI together.
"I have briefly looked at Struts but it seems overly complex for my needs and again seems to be a complete package."
I have used Struts for quite some time now, but even when I started using it, I found it easy as a breeze. The controller might seem overwhelming at first, but you will have real fun when you get the hang of it. The best way would be taking a look at some real world examples using Struts.
"I've never used Hibernate, but it seems to be the standard for ORM and if it's anything like ActiveRecord (which I have only been exposed to a little bit) I am sure it fits my needs."
Oh, then if you found Struts too tough, Hibernate is huge. It requires a big learning curve. It pays off in the end, but if you know ActiveRecord, I suggest you stick with it until you have a good amount of Hibernate knowledge.
"I am pretty sure I would like to use Hibernate for ORM, but I hear it can be pretty complex."
IMHO, very true...at least for beginners. (Anyone suggesting a change here?)
"If I can make an MVC app using Servlets and JSP's is it worth it to incorporate Spring?"
You mean without Struts or any other framework? How?
Seems like you are trying to take on too much too fast. Try considering one thing at a time. DI itself is a tricky thing to implement in real world. Oh yes conceptually it's great, but what I mean is you need to first get a hang of things one by one. | Very simply, if you are comfortable with JSPs and Servlets, then if you want to save some of the drudgery of web programming, I would look at Stripes or Struts 2.
I am very familiar with Stripes, and only am aware that Struts 2 is similar, so I will focus this entry on Stripes.
As an aside, Struts 1 is worthless. It offers no value (frankly).
Stripes has several features, but I will focus on only a few.
The primary value of Stripes, and if this were it's only feature it would still be very valuable, is its binding framework.
Binding is the process of converting the requests string values in to the actions values. Stripes does this amazingly well. Specifically, Stripes binding does very well on nested and indexed parameters, as well as type conversions. You can easily have a form field named "currentDate" and then have a "Date currentDate" in your Action, and Stripes will "do the right thing".
If you have a form field named "mainMap['bob'].listOfThings[3].customer.birthDate", Stripes will make the map, create the list, create the customer, convert the string to a date, populate the birthDate, put the customer in the 3 slot of the list, and put that list in the 'bob' spot of the map. (I do stuff like this all the time.)
The binding of requests to Action variables is just wonderful.
On top of that you get, if you use their form tags, you get nice behaviors when, for example, they put "Fred" in your date field. You easily get the form back, with Fred in the field, and a nice error message.
Finally, I really like their Resolutions as a result from their Actions. For example, a ForwardResolution to forward to a page, RedirectResolution to redirect to a page, StreamingResolution if you want to pump data down the socket, etc. It's a very elegant feature.
Stripes has all sorts of power and does all sorts of things, but those 3 pieces are what make it best for me, and what I use 99% of the time.
Simply, it really stays out of the way and readily handles the "plumbing" without completely obscuring the HTTP request nature of the system.
For someone who is content with JSP/Servlets, Stripes I think is an excellent step up as it adds good, solid value with very little cost (it's simple to set up) and without having to toss out everything you already know, since it works just great with JSPs and JSTL. Learn the simple mechanism it uses to map Actions to URLs, and how simple it is to map requests to your actions, and you'll be flying in no time.
Works great with Ajax and the like as well. | Is it possible to use a web framework but not be dependant on that framework? | [
"",
"java",
"model-view-controller",
"web-frameworks",
""
] |
I have a C# project A which uses a .net wrapper DLL and a native DLL. I add the .net wrapper DLL to the reference list of project A. Since the wrapper DLL only works with the native DLL when they are in the same folder, the native DLL should be copied to the output directory of project A. I achieve this by adding the native DLL as a content file under project A and set its copy action to copy if newer. This is fine.
If a C# project B has a direct reference to project A, VS will copy all dependent files used by project A to the output directory of project B. This means the wrapper DLL and the native DLL will be copied to project B's output directory as well. This works fine too.
Then I have yet another C# project C, which only directly refers to project B, not project A. It is interesting to see that VS will not copy the native DLL to the output directory of project C, which is not what I intend: when project C uses the functionality of project B and looks for the native DLL to work with the wrapper DLL, it won't find it.
Can someone explain why VS doesn't copy the native DLL to the output directory of project C? What is the mechanism of copying chain-dependent files in VS? Many thanks. | Basically reference-chains don't propagate, and it is up to the topmost assembly (the exe, web-site, etc) to ensure that it has everything it needs, either locally or in (for example with managed dlls) the GAC. You will need to add the files to the exe/web-site as "copy to output". | Why not just add the native dll as a reference in project A? This will ensure that it'll always be included when other libraries use A.
Edit: Nevermind, this only works if the dll is a COM or .NET component. | Why the native DLL is not copied to the output directory | [
"",
"c#",
"visual-studio-2008",
"dll",
""
] |
I want to do a sample program in PHP on Windows XP.
Do I need any special software to get this to work?
What I tried was simply opening Notepad, typing the PHP program, and saving it with a .php extension. Then I opened it in the browser as if it were an HTML file, but it does not work.
Please help
Thanks in advance | However you decide to install a webserver and php (or just php and use it in the console like someone mentioned):
* Check out the [PHP Manual](http://php.net/manual/en/), especially the [Getting Started](http://php.net/manual/en/getting-started.php) section.
**Apache** (Very, very easy)
1. Check out the WampServer [getting started presentation](http://www.wampserver.com/en/presentation.php).
2. Download and install [WampServer](http://www.wampserver.com/en/).
3. And you are ready to go.
**Microsoft** (Not tested this myself)
1. Check out [PHP on Windows](http://www.microsoft.com/web/platform/phponwindows.aspx).
2. Download and install the [Microsoft Web Platform Installer](http://www.microsoft.com/Web/downloads/platform.aspx).
3. Let me know how that work out... (never tried it myself, since WampServer is so easy) | You need to have a web server with PHP installed on your PC to make this work.
I highly recommend installing [wampserver](http://www.wampserver.com/en/) on your computer. It is a Windows installer that will put PHP, MySQL and Apache in your computer and let you manage all the services and such *very* easily. If you have problems getting it to work, you can also try out [XAMPP](http://www.apachefriends.org/en/xampp-windows.html), although I've never used it myself. | How to work with PHP on Windows XP? | [
"",
"php",
""
] |
I have a method that does a bunch of things; amongst them doing a number of inserts and updates.
It's declared thusly:
```
@Transactional(propagation = Propagation.REQUIRED, isolation = Isolation.DEFAULT, readOnly = false)
public int saveAll() {
//do stuff;
}
```
It works exactly as it is supposed to and I have no problems with it. There are situations however when I want to force the rollback in spite of there not being an exception... at the moment, I'm forcing an exception when I encounter the right conditions, but it's ugly and I don't like it.
Can I actively call the rollback somehow?
The exception calls it... I'm thinking maybe I can too. | In Spring Transactions, you use `TransactionStatus.setRollbackOnly()`.
The problem you have here is that you're using `@Transactional` to demarcate your transactions. This has the benefit of being non-invasive, but it also means that if you want to manually interact with the transaction context, you can't.
If you want tight control of your transaction status, you have to use programmatic transactions rather than declarative annotations. This means using Spring's TransactionTemplate, or use its PlatformTransactionManager directly. See section 9.6 of the Spring reference manual.
With `TransactionTemplate`, you provide a callback object which implements `TransactionCallback`, and the code in this callback has access to the `TransactionStatus` objects.
It's not as nice as `@Transactional`, but you get closer control of your tx status. | This works for me:
```
TransactionInterceptor.currentTransactionStatus().setRollbackOnly();
``` | Force a transactional rollback without encountering an exception? | [
"",
"java",
"hibernate",
"spring",
""
] |
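The programmatic pattern the accepted answer describes, a template that runs a callback and consults a status object for rollback-only, can be sketched in miniature. This Python/sqlite3 analogue illustrates the idea only; it is not Spring's actual TransactionTemplate API, and all names below are hypothetical:

```python
import sqlite3

def run_in_transaction(conn, work):
    """Run work(status); commit unless status['rollback_only'] was set."""
    status = {"rollback_only": False}
    try:
        result = work(status)
        if status["rollback_only"]:
            conn.rollback()   # roll back without any exception being raised
        else:
            conn.commit()
        return result
    except Exception:
        conn.rollback()       # exceptions still roll back, as before
        raise

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER)")
conn.commit()

def save_all(status):
    conn.execute("INSERT INTO orders VALUES (1)")
    status["rollback_only"] = True   # the "right conditions" for a rollback

run_in_transaction(conn, save_all)
rows_after_rollback = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
```

The key point matches the Spring answer: the rollback decision is an ordinary flag on a status object, not an exception.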
I feel like this is something I should already know, but I'm just not firing on all engines today...
I have a base class with a single ctor that takes an implementation of an interface as it's only parameter. I'm using a DI framework and have my component registrations all set up and working fine.
When I inherit from this base class, unless I pass in a value to the base constructor, I have to define a parameterless ctor, which bypasses the DI.
So right now I have:
```
public class MyObjectBase
{
IMyRequiredInterface _InterfaceImpl;
public MyObjectBase(IMyRequiredInterface interfaceImpl)
{
_InterfaceImpl = interfaceImpl;
}
...
}
public class AnotherObject : MyObjectBase
{
public AnotherObject()
{
}
...
}
```
So, out of the gate this fails. I get errors when AnotherObject is instantiated indicating that there is no base class ctor that takes 0 parameters. Ok, I get that. But now I have a choice: either modify the descendant class ctor to take a similar parameter and pass that value on to the base ctor, or wire up a ctor chain in the base class that forces me to bypass DI and create a concrete implementation of the required interface and pass it in as part of the parameterless ctor declaration.
The goal is to meet the requirement of the base class without the descendant classes knowing anything about it.
Maybe I'm going about this all wrong, but it's bugging me. Any thoughts on a better way to handle this? I feel like I've got to be missing something simple... | The correct approach is:
```
public class AnotherObject : MyObjectBase {
public AnotherObject(IMyRequiredInterface interfaceImpl) :
base(interfaceImpl) {
}
}
```
You specifically asked for an approach other than this approach. Why?
> The goal is to meet the requirement of the base class without the descendant classes knowing anything about it.
That's generally the wrong thing to do. Why do you want to do it?
### Update:
Based on your later comment, you should probably use (and configure your container to use) property injection instead of constructor injection. That will get you all of your requirements. | Err....the whole point of inheriting from `MyObjectBase` is that, as it were, you get the good and the bad, as far as the behaviour goes. If you can't create a `MyObjectBase` without an object implementing `IMyRequiredInterface`, you can't create a subclass without such an object either.
So what do you do when someone doesn't hand you that. Do you have a default?
It's quite reasonable for a subclass to instantiate something that implements `IMyRequiredInterface`, and pass that to the superclass constructor with a `super(...)` call. Can you do that? (Though, as I recall, you can get a bit hung up on this in Java, having to call `super` before doing anything else....) | Dependency Injection and Class Inheritance | [
"",
"c#",
"dependency-injection",
"castle-windsor",
""
] |
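The "correct approach" above, the subclass accepting the dependency and forwarding it to the base constructor, has the same shape in most languages. A tiny Python sketch with hypothetical names:

```python
class MyObjectBase:
    def __init__(self, interface_impl):
        self._interface_impl = interface_impl   # the base's required dependency

class AnotherObject(MyObjectBase):
    def __init__(self, interface_impl):
        # forward the dependency instead of adding a parameterless ctor
        super().__init__(interface_impl)

class FakeImpl:
    """A stand-in for something implementing IMyRequiredInterface."""
    def do_work(self):
        return "worked"

obj = AnotherObject(FakeImpl())
result = obj._interface_impl.do_work()
```

The DI container then resolves the dependency for the subclass exactly as it would for the base, which is why the chaining constructor is usually preferable to hiding the requirement.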
I am trying to implement a template class that would be able to tell me if a variable is a class,structure or a basic type.
So far I've come with this:
```
template< typename T >
class is_class
{
private:
template< typename X >
static char ( &i_class( void(X::*)() ) )[1];
//
template< typename X >
static char ( &i_class( X ) )[2];
public:
static bool const val = sizeof( i_class< T >(0) ) == 1;
};
```
and usage:
```
is_class< int >::val; // false
is_class< some_class_type >::val; // true
```
The problem is that now I need to write a blank void function in every class that could be used with this code.
Does anyone know a solution to this problem? | `is_class` is a member of the Boost.Type\_Traits library. The implementation is probably similar to what you already have. You would use in in conjunction with `enable_if` to create the function if appropriate:
```
template <class T>
typename enable_if_c<boost::is_class<T>::value>::type
foo() { }
```
Or the equivalent:
```
template <class T>
typename enable_if<boost::is_class<T>>::type
foo() { }
```
The function `foo` is only generated if the type `T` is of class type.
The return value for the function, if it is generated, is the second parameter (omitted) to the `enable_if` template (the default is `void`). | Use boost's [type traits](http://www.boost.org/doc/libs/1_38_0/libs/type_traits/doc/html/index.html). | Is there a way to compile-time assert if a variable is a class, struct or a basic type in c++? | [
"",
"c++",
"templates",
"typechecking",
""
] |
I have a DataGridView where I manually manage the focused cells when navigating with the `Tab` key. For example, when the first cell of the last row is the last cell that can be navigated in the DataGridView, I want that when pressing `Tab` the focus goes to the next focusable control (a button).
SendKeys.Send("{TAB}") will not work - the focus will go to the second cell on the last row | Have you tried the [SelectNextControl](http://msdn.microsoft.com/en-us/library/system.windows.forms.control.selectnextcontrol.aspx) method? | If you set the `StandardTab` property to True then the behavior of the Tab key will change from moving to the next grid cell to moving to the next control on the form. This may be what you want.
If you want to control which grid cell/column/row gets focused then you can handle the `ProcessDialogKey` and `ProcessDataGridViewKey` events in your code. | Focus next control when pressing 'TAB' key | [
"",
"c#",
"datagridview",
""
] |
I'm more accustomed to make, so I'm confused why ant recompiles classes when the source hasn't been changed. I've read that there is a requirement to recompile in some cases where generics are used, but I'm not sure that this will be necessary for my project.
Also, in the javac task, I've set includeDestClasses="true"
Here's some of the targets I'm using
```
<target name="init">
<mkdir dir="${build}"/>
<mkdir dir="${dist}"/>
</target>
<target name="compile" depends="init,util,semantics" description=""/>
<target name="util" depends="" description="">
<javac destdir="${build}" classpath="project.class.path" debug="on" srcdir="${src}/util" includeDestClasses="true" source="1.5">
<classpath refid="project.class.path"/>
</javac>
</target>
``` | Try modifying the opening tag of the `javac` task to include both a `srcdir` attribute and an `includes` attribute:
`<javac destdir="${build}" classpath="project.class.path" debug="on" srcdir="${src}" includes="util/**" includeDestClasses="true" source="1.5">` | Your src & dest directories are not equivalent, so ant is not able to effectively stat the output files to compare them.
This is an FAQ:
<http://ant.apache.org/faq.html#always-recompiles> | Why does ant compile all classes each run? | [
"",
"java",
"ant",
""
] |
I need a textbox in which the user is only permitted to enter integers, but not zero on its own; i.e., they can enter 10, 100, etc., but not 0 alone.
How can I handle this in the KeyDown event?
It also has holes, for example, a user can enter "10" and then delete the "1". Or he could paste in a "0" -- you do allow paste, don't you?
So my solution would be: let him enter any digit he likes, any way he likes, and validate the input only *after* he finished, for example, when the input loses focus. | Why not using a NumericUpDown and make the following settings:
```
upDown.Minimum = 1;
upDown.Maximum = Decimal.MaxValue;
``` | Not allow zero in textbox | [
"",
"c#",
"wpf",
"forms",
"input-filtering",
""
] |
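The validate-after-input advice in the accepted answer reduces to a small check. Here is a hedged sketch of the rule itself, shown in Python purely as illustration; in WinForms you would run the equivalent from a Validating/LostFocus handler rather than KeyDown, and the function name is hypothetical:

```python
def is_valid_nonzero_integer(text):
    """Accept integer input only, and reject a value of exactly zero."""
    try:
        value = int(text)
    except ValueError:
        return False        # not an integer at all (typed or pasted)
    return value != 0

checks = [is_valid_nonzero_integer(s) for s in ("10", "100", "0", "abc", "")]
```

Validating the finished value also closes the holes mentioned above, such as typing "10" and then deleting the "1", or pasting a "0".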
I have a table orders that keeps all orders from all our stores.
I wrote a query to check the order sequence for each store.
It looks like this.
```
select WebStoreID, min(webordernumber), max(webordernumber), count(webordernumber)
from orders
where ordertype = 'WEB'
group by WebStoreID
```
I can check whether all orders are present with this query. The web order number is a number from 1...n.
How can I write a query to find missing orders without joining to a temporary/different table? | You could join the table on itself to detect rows which have no previous row:
```
select cur.*
from orders cur
left join orders prev
on cur.webordernumber = prev.webordernumber + 1
and cur.webstoreid = prev.webstoreid
where cur.webordernumber <> 1
and prev.webordernumber is null
```
This would detect gaps in the 1...n sequence, but it would not detect duplicates. | I would make an auxiliary table of "all integers from 1 to n" (see <http://www.sql-server-helper.com/functions/integer-table.aspx> for some ways to make it with a SQL Server function, but since it's something you will need over and over I'd make it into a real table anyway, and with any SQL engine it's easy to make that, just once) then use a nested query, `SELECT value FROM integers WHERE value NOT IN (SELECT webordernumber FROM orders)` &c. Also see <http://www.sqlmag.com/Article/ArticleID/99797/sql_server_99797.html> for a problem similar to yours, "detecting gaps in a sequence of numbers". | Checking sequence with SQL query | [
"",
"sql",
"sql-server",
"ms-access",
"sequence",
""
] |
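The accepted self-join can be exercised end to end against an in-memory database. A runnable Python/sqlite3 sketch with hypothetical data follows (note the join reports the first order number *after* each gap, not the missing number itself):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (webstoreid INTEGER, webordernumber INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, 1), (1, 2), (1, 4), (1, 5)])   # order number 3 is missing

gaps = conn.execute("""
    SELECT cur.webstoreid, cur.webordernumber
    FROM orders cur
    LEFT JOIN orders prev
      ON cur.webordernumber = prev.webordernumber + 1
     AND cur.webstoreid = prev.webstoreid
    WHERE cur.webordernumber <> 1
      AND prev.webordernumber IS NULL
""").fetchall()
```

Here order 4 is flagged because it has no predecessor, telling you that 3 is the gap; as the answer notes, duplicates are not detected by this query.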
In Python, is there any language (or interpreter) feature to force the Python interpreter to always raise exceptions, even if the offending code is inside a try/except block?
I've just inherited a larger and old codebase written in python, whose purpose is to communicate with some custom designed hardware we also developed.
Many communication errors and timeouts are being masked/missed due to the following (simplified) pattern of code:
```
try:
serialport.write(MSG)
except:
some_logging_function_mostly_not_working_that_might_be_here_or_not()
#or just:
#pass
```
In order to avoid the typical scenario of "just rewrite the whole thing from scratch", I'm currently trying to fix all exception errors/timeouts. I'm doing this by disabling, by hand, all the exception-handling code, one block at a time. | The "all-exceptions" `except:` block is a remarkably bad thing and must simply be found and replaced with sensible except handling.
In this case `grep` is your friend. A good IDE can help make these unpleasant chores manageable.
But there's no "ignore the code as written" option in Python. | No, not really. Your best bet is to change the code to something more like this:
```
try:
serialport.write(MSG)
except:
some_logging_function_mostly_not_working_that_might_be_here_or_not()
raise
```
This will make it re-raise the exact exception. The main thing that you need to understand is that if there were a way to make all exceptions exit the system, you wouldn't be able to use a for loop (iterators raise a StopIteration exception). | Python - Possible to force raise exceptions regardless try/except blocks? | [
"",
"python",
"exception",
""
] |
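The bare `raise` in the accepted answer re-raises the original exception with its traceback intact, so the logging hook still runs and the error still propagates. A self-contained sketch with the serial port faked (function names are hypothetical):

```python
import logging

def send(write_fn, msg):
    try:
        write_fn(msg)
    except Exception:
        logging.exception("serial write failed")  # keep the old logging hook
        raise                                     # ...and still propagate the error

def broken_write(msg):   # stand-in for a timing-out serialport.write
    raise IOError("timeout")

try:
    send(broken_write, b"MSG")
    propagated = None
except IOError as exc:
    propagated = exc
```

This is the practical middle ground while auditing an old codebase: the callers now see the failures, but the original logging behaviour is preserved.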
My experience of using Adobe ColdFusion, even if still somewhat limited, was absolutely joyful and pleasant.
Of all the good things I could say about ColdFusion, one feature completely swept me off my feet. It might be neither very effective nor very useful in production, but anyway: I am talking about the so-called "query of queries" feature, i.e. the `dbtype="query"` attribute of `cfquery`. It allows you to run SQL statements against arbitrary datasets, not just a database connection. You can, for example, join a resultset that you've just retrieved from the database with an in-memory structure (subject, of course, to certain limitations). It provides a quick-and-dirty way to "post-process" the data, which can sometimes be much more readable (and flexible, too!) than, say, iterating through the dataset in a loop.
However, ColdFusion is not a very popular product and I am not going to go over the reasons why it is like that. What I am asking is, is there any support for this technique in other languages (like a library, that does more or less the same)? Python? Perl? Ruby? PHP? Anything? Because, to me it seems, that the potential of this feature is huge, maybe not in production code, but it is an absolute life-saver if you need to test something quickly. Needless to say, the SQL ColdFusion uses for this is somewhat limited, to my knowledge, but still, the idea is still great. | If you don't find anything that handles data as well as ColdFusion then remember it plays very well with other programming languages. You can always do the heavy query processing in CF then just wrap your processing logic in remote CFCs and expose them as web services serving up JSON.
That will let you benefit from what you find great about ColdFusion while trying out some other languages.
If you need to get away from CF, try SQLAlchemy in Python, or, as other posters said, Rails and LINQ are worth playing with.
"",
"sql",
"coldfusion",
"qoq",
""
] |
I have this string: `"\"Blah \'Blah\' Blah\""`. There is another string inside it. How do I convert that into: `Blah 'Blah' Blah`? (you see, unescaping the string.) This is because I get a SQL Where query:
```
WHERE blah="Blah \'Blah\' Blah"
```
When I parse this, I get the string above (still inside quotes and escaped). How would I extract that, un-escaping the string? Or is there some much easier way to do this? Thanks,
Isaac | # DO NOT DO THIS.
Follow the proper steps for parametrization of a query on your Database/Platform, and you won't have to escape anything. You also will protect yourself from injection vulnerabilities. | Put the string in a property file, Java supports XML property files and the quote character does not need to be escaped in XML.
Use loadFromXML(InputStream in) method of the Properties class.
You can then use the MessageFormat class to interpolate values into the String if needed. | Java string inside string to string | [
"",
"java",
"string",
""
] |
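The accepted answer's point, parametrize instead of unescaping, is easy to see with any driver that supports placeholders. A Python/sqlite3 sketch follows (the Java equivalent would be a JDBC PreparedStatement with `?` markers; table and column names are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (blah TEXT)")

value = "Blah 'Blah' Blah"   # quotes and all: no manual escaping anywhere
conn.execute("INSERT INTO t (blah) VALUES (?)", (value,))
row = conn.execute("SELECT blah FROM t WHERE blah = ?", (value,)).fetchone()
```

Because the value travels as a bound parameter rather than as part of the SQL text, there is nothing to escape or unescape, and injection is off the table.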
I have been given the option to either have a Windows laptop or a Mac laptop to do my Java development on. Before committing to one of these, I thought I would find out if there are any issues or benefits I should know about using a Mac laptop over a windows laptop?
One thing I did hear was that the Java JDK releases are not always the latest for Macs and you need to wait a while.
My environment is simple: Eclipse with Java EE 5.0 | For your configuration, there's no problem...
It's true that you sometimes have to wait for releases, but nothing really insurmountable...
I did have some issues with Eclipse, though. It crashes quite often, but I'm doing intensive stuff...
One issue of mine was with the shortcuts. That sounds dumb, but I'm a heavy shortcut user, and switching between Ctrl, the Apple key, and Alt was sometimes hard to remember. | It could be of interest to know that Apple's support for Java 1.6 is only for 64-bit Intel architectures. If you are running a 32-bit CPU, you have to be satisfied with 1.5 | Are there any issues or benefits I should be aware of from switching my Java development from Windows to Mac? | [
"",
"java",
"windows",
"macos",
"comparison",
""
] |
We've been trying to redirect from one action to another, hoping that data would be passed between corresponding `ActionForm` beans. The first action receives a request from the browser, prints a data field, and forwards it to another action, which prints the same field and redirects to a JSP.
The problem is that `ActionTo` is printing an incorrect value - its `commonInt` has a default value of `0`, while we expect `35`.
Here is a representative example:
```
public class ActionFrom extends DispatchableAction{
public ActionForward send(ActionMapping mapping, ActionForm form, HttpServletRequest request,HttpServletResponse response){
FormA formA = (FormA)form;
formA.commonInt = 35;
System.out.println("sent: "+formA.commonInt);
return mapping.findForward("send");
}
}
public class ActionTo extends DispatchableAction{
public ActionForward recv(ActionMapping mapping, ActionForm form, HttpServletRequest request,HttpServletResponse response){
FormB formB = (FormB)form;
System.out.println("recv= "+formB.commonInt);
return mapping.findForward("send");
}
}
```
And actionForms are:
```
public class FormA extends ActionForm {
public int intA;
public int commonInt;
}
public class FormB extends ActionForm{
public int intB;
public int commonInt;
}
```
Mappings:
```
<action path="/from" type="EXPERIMENT.ActionFrom" name="formA" scope="request"
input="something.jsp" parameter="dispatch" unknown="false" validate="false">
<forward name="send" path="/to.do?dispatch=recv" redirect="false"/>
</action>
<action path="/to" type="EXPERIMENT.ActionTo" name="formB" scope="request"
input="something.jsp" parameter="dispatch" unknown="false" validate="false">
<forward name="send" path="/login.do" redirect="false"/>
</action>
```
Is there a way to accomplish this? Or should both forms be the same?
The workaround we tried was to pass things through the request, but it can get large and messy. | The way to accomplish this is to use the same ActionForm for both actions. Is there a specific reason why you need two different ActionForms? If not, try modifying the second action mapping to name="formA" and the action itself to use FormA rather than FormB. | Tom, using your solution and combining it with [ActionRedirect](http://struts.apache.org/1.x/apidocs/org/apache/struts/action/ActionRedirect.html), suggested by Vincent Ramdhanie, I got what I wanted too.
The code is as simple as that, and it allows you to have separate Forms for each Action.
```
ActionRedirect redirect = new ActionRedirect(mapping.findForward("send"));
redirect.addParameter("commonInt", formA.getCommonInt());
return redirect;
```
```
formB.setCommonInt(Integer.parseInt(request.getParameter("commonInt")));
```
This ended up saving my day and spared me the effort of changing it directly in the JSP, which would have been awful. | Java Struts 1: forward from action to action. Passing data through ActionForms | [
"",
"java",
"struts-1",
""
] |
Consider this python program:
```
import sys
lc = 0
for line in open(sys.argv[1]):
lc = lc + 1
print lc, sys.argv[1]
```
Running it on my 6GB text file, it completes in ~2 minutes.
Question: **is it possible to go faster?**
Note that the same time is required by:
```
wc -l myfile.txt
```
so, I suspect the answer to my question is just a plain "no".
Note also that my real program is doing something more interesting than just counting the lines, so please give a generic answer, **not** line-counting-tricks (like keeping a line count metadata in the file)
PS: I tagged "linux" this question, because I'm interested only in linux-specific answers. Feel free to give OS-agnostic, or even other-OS answers, if you have them.
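(For comparison, here is the chunked variant of the loop above that I'd benchmark against it, in Python 3 syntax: it reads fixed-size binary blocks and counts newline bytes, so the per-line iteration overhead disappears.)

```python
import io

def count_lines(fileobj, blocksize=1 << 20):
    """Count newline bytes by reading fixed-size binary blocks."""
    count = 0
    while True:
        block = fileobj.read(blocksize)
        if not block:
            break
        count += block.count(b"\n")
    return count

# In-memory stand-in for open("myfile.txt", "rb"):
print(count_lines(io.BytesIO(b"one\ntwo\nthree\n")))  # 3
```

(The real call would be `with open(sys.argv[1], "rb") as f: print(count_lines(f))`.)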
See also the [follow-up question](https://stackoverflow.com/questions/860893/is-python-automagically-parallelizing-io-and-cpu-or-memory-bound-sections) | You can't get any faster than the maximum disk read speed.
In order to reach the maximum disk speed you can use the following two tips:
1. Read the file in with a big buffer. This can either be coded "manually" or simply by using io.BufferedReader (available in Python 2.6+).
2. Do the newline counting in another thread, in parallel. | **Throw hardware at the problem.**
As gs pointed out, your bottleneck is the hard disk transfer rate. So, no you can't use a better algorithm to improve your time, but you can buy a faster hard drive.
**Edit:** Another good point by gs; you could also use a [RAID](http://en.wikipedia.org/wiki/Redundant_array_of_independent_disks) configuration to improve your speed. This can be done either with [hardware](http://www.pcguide.com/ref/hdd/perf/raid/conf/ctrlHardware-c.html) or software (e.g. [OS X](http://www.frozennorth.org/C2011481421/E20060221212020/index.html), [Linux](http://linux-raid.osdl.org/index.php/Linux_Raid), [Windows Server](http://www.techotopia.com/index.php/Creating_and_Managing_Windows_Server_2008_Striped_(RAID_0)_Volumes), etc).
---
**Governing Equation**
`(Amount to transfer) / (transfer rate) = (time to transfer)`
`(6000 MB) / (60 MB/s) = 100 seconds`
`(6000 MB) / (125 MB/s) = 48 seconds`
---
**Hardware Solutions**
[The ioDrive Duo](http://www.computerworld.com/action/article.do?command=viewArticleBasic&taxonomyName=storage&articleId=9129644&taxonomyId=19&intsrc=kc_top) is supposedly the fastest solution for a corporate setting, and "will be available in April 2009".
Or you could check out the WD Velociraptor hard drive (10,000 rpm).
Also, I hear the Seagate [Cheetah](http://www.seagate.com/docs/pdf/datasheet/disc/ds_cheetah_15k_5.pdf) is a good option (15,000 rpm with sustained 125MB/s transfer rate). | Is it possible to speed-up python IO? | [
"",
"python",
"linux",
"performance",
"text-files",
""
] |
I have an SQL table written in MSSQL:
```
create table [action]
(
action_id bigint identity not null, -- PK
action_action char(1) not null, -- 'C' Call, 'R' Raise, 'F' Fold, 'P' Post
action_size decimal(9,2) not null, -- zero if fold, > zero otherwise
constraint pk_action primary key clustered (action_id),
constraint chk_action_action check (action_action in('C','R','F','P'))
)
```
I want to put a constraint on the `action_size` column such that:
1) If `action_action` is 'F' then `action_size` must be 0.00 (or null if that is more doable)
2) If `action_action` is anything other than 'F' then `action_size` must be greater than zero (ie >= 0.01)
How do I express this? I tried:
```
constraint chk_action_size check (
select action_action
case 'F' action_size = 0.00
else action_size > 0.00
)
```
...to no avail.
I'm writing this in MSSQL 2005 but would like a solution that works with MySQL 5.1.34 also.
BTW if you would care to comment on my `action_action` column, feel free. There will either never be other valid values for `action_action` or, if there are, it will be exceedingly rare and there will only be ~1 other valid value. | ```
create table [action]
(
action_id bigint identity not null,
action_action char(1) not null,
action_size decimal(9,2) not null,
constraint pk_action primary key clustered (action_id),
constraint chk_action_action check (action_action in('C','R','F','P')),
constraint chk_action_size check
(
(action_action = 'F' AND action_size = 0.00) OR
(action_action <> 'F' AND action_size > 0.00)
)
)
``` | ```
ALTER TABLE action ADD CONSTRAINT chk_action_size CHECK (
(action_action = 'F' AND action_size = 0.00)
OR (action_action <> 'F' AND action_size > 0.00)
)
```
If you were using floats instead of decimals, write the zero check as:
```
ABS(action_size) > 0.01
```
Since floats may not be exactly zero, especially after some math. | Changing SQL CONSTRAINT Based On Other Fields | [
"",
"sql",
"mysql",
"t-sql",
""
] |
Well, I'm doing some Java-C integration, and throughout the C library weird type mappings are used (there are more of them ;)):
```
#define CHAR char /* 8 bit signed int */
#define SHORT short /* 16 bit signed int */
#define INT int /* "natural" length signed int */
#define LONG long /* 32 bit signed int */
typedef unsigned char BYTE; /* 8 bit unsigned int */
typedef unsigned char UCHAR; /* 8 bit unsigned int */
typedef unsigned short USHORT; /* 16 bit unsigned int */
typedef unsigned int UINT; /* "natural" length unsigned int*/
```
Is there any legitimate reason not to use them? It's not like `char` is going to be redefined anytime soon.
I can think of:
1. Writing platform/compiler portable code (size of type is underspecified in C/C++)
2. Saving space and time on embedded systems - if you loop over an array shorter than 255 entries on an 8-bit microprocessor, writing:
```
for(uint8_t ii = 0; ii < len; ii++)
```
will give a measurable speedup. | The C standard doesn't specify the size of a number of the integer types; it depends on the compiler, and the processor on which the code will run.
Therefore, for maximum portability, it's best to have a header which uses standard names which indicate how big each type is for that particular target.
MISRA-C and others use uint16\_t, sint32\_t, etc. A shorter form, e.g. u16, s32 is also in use.
Regarding #define v typedef: use typedef, to enforce type checking by the compiler. | That is exactly the reason. C is used across a number of systems and its actually rather disturbing how often type sizes actually do change between platforms/hardware/versions. | Using typedefs (or #defines) on built in types - any sensible reason? | [
"",
"c++",
"c",
"coding-style",
""
] |
I am trying to make a method that will go through a list of generic objects and replace all their properties of type `string` which is either `null` or empty with a replacement.
How is a good way to do this?
I have this kind of... shell... so far:
```
public static void ReplaceEmptyStrings<T>(List<T> list, string replacement)
{
var properties = typeof(T).GetProperties( -- What BindingFlags? -- );
foreach(var p in properties)
{
foreach(var item in list)
{
if(string.IsNullOrEmpty((string) p.GetValue(item, null)))
p.SetValue(item, replacement, null);
}
}
}
```
So, how do I find all the properties of a type that are:
* Of type `string`
* Has public `get`
* Has public `set`
?
---
I made this test class:
```
class TestSubject
{
public string Public;
private string Private;
public string PublicPublic { get; set; }
public string PublicPrivate { get; private set; }
public string PrivatePublic { private get; set; }
private string PrivatePrivate { get; set; }
}
```
The following does not work:
```
var properties = typeof(TestSubject)
.GetProperties(BindingFlags.Instance|BindingFlags.Public)
.Where(ø => ø.CanRead && ø.CanWrite)
.Where(ø => ø.PropertyType == typeof(string));
```
If I print out the Name of the properties I get back from that, I get:
> PublicPublic
> PublicPrivate
> PrivatePublic
In other words, I get two properties too much.
---
***Note**: This could probably be done in a better way... using nested foreach and reflection and all here... but if you have any great alternative ideas, please let me know because I want to learn!* | Your code rewritten. It does not use LINQ or var.
```
public static void ReplaceEmptyStrings<T>(List<T> list, string replacement)
{
PropertyInfo[] properties = typeof(T).GetProperties(BindingFlags.Public | BindingFlags.Instance);
foreach (PropertyInfo p in properties)
{
// Only work with strings
if (p.PropertyType != typeof(string)) { continue; }
        // If not writable then cannot null it; if not readable then cannot check its value
if (!p.CanWrite || !p.CanRead) { continue; }
MethodInfo mget = p.GetGetMethod(false);
MethodInfo mset = p.GetSetMethod(false);
// Get and set methods have to be public
if (mget == null) { continue; }
if (mset == null) { continue; }
foreach (T item in list)
{
if (string.IsNullOrEmpty((string)p.GetValue(item, null)))
{
p.SetValue(item, replacement, null);
}
}
}
}
``` | You will find the properties as such with `BindingFlags.Public | BindingFlags.Instance`. Then you will need to examine each PropertyInfo instance by checking the CanWrite and CanRead properties, in order to find out whether they are are readable and/or writeable.
Update: code example
```
PropertyInfo[] props = yourClassInstance.GetType().GetProperties(BindingFlags.Public | BindingFlags.Instance);
for (int i = 0; i < props.Length; i++)
{
if (props[i].PropertyType == typeof(string) && props[i].CanWrite)
{
// do your update
}
}
```
---
I looked into it more in detail after your update. If you also examine the MethodInfo objects returned by GetGetMethod and GetSetMethod you will hit the target, I think;
```
var properties = typeof(TestSubject).GetProperties(BindingFlags.Instance | BindingFlags.Public)
.Where(ø => ø.CanRead && ø.CanWrite)
.Where(ø => ø.PropertyType == typeof(string))
.Where(ø => ø.GetGetMethod(true).IsPublic)
.Where(ø => ø.GetSetMethod(true).IsPublic);
```
By default these two methods return only public getters and setters (risking a NullReferenceException in a case like this), but passing `true` as above makes them also return private ones. Then you can examine the `IsPublic` (or `IsPrivate`) properties. | How to get all public (both get and set) string properties of a type | [
"",
"c#",
"reflection",
""
] |
C# 2008 SP1
I am using the background worker
If one of the conditions fails I will set e.Cancel to true and assign a string to e.Result. Everything works there.
However, when RunWorkerCompleted fires and I test e.Result, I get an exception: "e.Result threw an exception of type System.InvalidOperationException".
I guess I could set a global variable in DoWork and compare it in RunWorkerCompleted, but this might not be thread-safe.
Can anyone tell me why I am getting this with e.Result, and how I can solve it?
Many thanks,
```
private void bgwPrepareCall_DoWork(object sender, DoWorkEventArgs e)
{
// Check to see if the called number is blocked.
// If the nubmer is blocked display error message.
if (this.IsNoGateway(sipUsername, statusDisplay1.PhoneNumber))
{
// this.SetIdleState(DialerErrorMessage.GetErrorDescription(12));
e.Cancel = true;
e.Result = "NoGateway";
return;
}
if (!this.IsInvalidTelephoneNumber(sipUsername, statusDisplay1.PhoneNumber))
{
// this.SetIdleState(DialerErrorMessage.GetErrorDescription(13));
e.Cancel = true;
e.Result = "InvalidNumber";
return;
}
}
private void bgwPrepareCall_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e)
{
if (e.Error == null)
{
if (e.Cancelled)
{
// Failed
switch (e.Result.ToString())
{
case "NoGateway":
Console.WriteLine("NoGateway: [ Failed ]");
break;
case "InvalidNumber":
Console.WriteLine("InvalidNumber: [ Failed ]");
break;
default:
Console.WriteLine("Unknown");
break;
}
}
if (!e.Cancelled)
{
                // Successful
Console.WriteLine("NoGateway: [ ok ]");
Console.WriteLine("InvalidNumber: [ ok ]");
}
}
else
{
Console.WriteLine("PrepareCall Failed: [ " + e.Error.Message + " ]");
}
}
``` | From [MSDN](http://msdn.microsoft.com/en-us/library/system.componentmodel.backgroundworker.runworkercompleted.aspx):
> If the operation completes
> successfully and its result is
> assigned in the DoWork event handler,
> you can access the result through the
> RunWorkerCompletedEventArgs.Result
> property.
[and](http://msdn.microsoft.com/en-us/library/system.componentmodel.runworkercompletedeventargs.result.aspx):
> Your RunWorkerCompleted event handler
> should always check the Error and
> Cancelled properties before accessing
> the Result property. If an exception
> was raised or if the operation was
> canceled, accessing the Result
> property raises an exception.
So if it *doesn't* complete successfully (i.e. you cancel it) it looks like it won't work. Perhaps consider returning your cancellation-details as the result (as success) for your abort case, an detecting the difference in the completion handler? | The `Result` property is meant to represent the result of a *completed* operation. You've set `Cancel` to true, meaning that the operation was cancelled, therefore there shouldn't be a result.
It sounds like you should encode the "I aborted because something was wrong" into your result, or throw an exception which will be set as the `Error` property in the result instead - `Cancel` is meant to be set if the worker noticed that the call was cancelled externally.
The docs for [`RunWorkerCompletedEventArgs.Result`](http://msdn.microsoft.com/en-us/library/system.componentmodel.runworkercompletedeventargs.result.aspx) state:
> Your RunWorkerCompleted event handler
> should always check the Error and
> Cancelled properties before accessing
> the Result property. If an exception
> was raised or if the operation was
> canceled, accessing the Result
> property raises an exception.
The "Exceptions" part of the documentation also states that it will throw an exception if `Cancelled` is true. | C# Background worker setting e.Result in DoWork and getting value back in WorkCompleted | [
"",
"c#",
"backgroundworker",
""
] |
Given an array of ids `$galleries = array(1,2,5)` I want to have a SQL query that uses the values of the array in its WHERE clause like:
```
SELECT *
FROM galleries
WHERE id = /* values of array $galleries... eg. (1 || 2 || 5) */
```
How can I generate this query string to use with MySQL? | > **BEWARE!** This answer contains a severe [SQL injection](https://stackoverflow.com/questions/60174/how-can-i-prevent-sql-injection-in-php) vulnerability. Do NOT use the code samples as presented here, without making sure that any external input is sanitized.
```
$ids = join("','",$galleries);
$sql = "SELECT * FROM galleries WHERE id IN ('$ids')";
``` | **Using PDO:**[1]
```
$in = join(',', array_fill(0, count($ids), '?'));
$select = <<<SQL
SELECT *
FROM galleries
WHERE id IN ($in);
SQL;
$statement = $pdo->prepare($select);
$statement->execute($ids);
```
**Using MySQLi** [2]
```
$in = join(',', array_fill(0, count($ids), '?'));
$select = <<<SQL
SELECT *
FROM galleries
WHERE id IN ($in);
SQL;
$statement = $mysqli->prepare($select);
$statement->bind_param(str_repeat('i', count($ids)), ...$ids);
$statement->execute();
$result = $statement->get_result();
```
---
Explanation:
### Use the SQL `IN()` operator to check if a value exists in a given list.
In general it looks like this:
```
expr IN (value,...)
```
We can build an expression to place inside the `()` from our array. Note that there must be at least one value inside the parentheses or MySQL will return an error; this equates to making sure that our input array has at least one value. To help protect against SQL injection attacks, first generate a `?` for each input item to create a parameterized query. Here I assume that the array containing your ids is called `$ids`:
```
$in = join(',', array_fill(0, count($ids), '?'));
$select = <<<SQL
SELECT *
FROM galleries
WHERE id IN ($in);
SQL;
```
Given an input array of three items `$select` will look like:
```
SELECT *
FROM galleries
WHERE id IN (?, ?, ?)
```
Again note that there is a `?` for each item in the input array. Then we'll use PDO or MySQLi to prepare and execute the query as noted above.
### Using the `IN()` operator with strings
It is easy to change between strings and integers because of the bound parameters. For PDO there is no change required; for MySQLi change `str_repeat('i',` to `str_repeat('s',` if you need to check strings.
[1]: I've omitted some error checking for brevity. You need to check for the usual errors for each database method (or set your DB driver to throw exceptions).
[2]: Requires PHP 5.6 or higher. Again I've omitted some error checking for brevity. | Passing an array to a query using a WHERE clause | [
"",
"php",
"mysql",
""
] |
I have a simple marker annotation for methods (similar to the first example in Item 35 in *Effective Java* (2nd ed)):
```
/**
* Marker annotation for methods that are called from installer's
* validation scripts etc.
*/
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface InstallerMethod {
}
```
Then, in a given package (say `com.acme.installer`), which has a few subpackages containing some 20 classes, I'd like to find all methods that are annotated with it. (Because I'd like to do some checks regarding all the annotated methods in a unit test.)
What (if any) is the easiest way to do this? Preferably without adding new 3rd party libraries or frameworks.
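(For a single, known class the per-method check itself is plain reflection. A toy, self-contained sketch with a made-up `Demo` class; the part I'm asking about is enumerating all the classes in the package:)

```java
import java.lang.annotation.*;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

public class AnnotationScanDemo {
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    public @interface InstallerMethod {}

    /** Made-up class standing in for one of the ~20 real classes. */
    public static class Demo {
        @InstallerMethod
        public void checkDiskSpace() {}
        public void unrelated() {}
        @InstallerMethod
        public void checkPermissions() {}
    }

    /** Names of all public methods on a class carrying @InstallerMethod. */
    public static List<String> annotatedMethodNames(Class<?> cls) {
        List<String> names = new ArrayList<>();
        for (Method m : cls.getMethods()) {
            if (m.isAnnotationPresent(InstallerMethod.class)) {
                names.add(m.getName());
            }
        }
        return names;
    }

    public static void main(String[] args) {
        System.out.println(annotatedMethodNames(Demo.class));
    }
}
```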
**Edit**: to clarify, obviously `method.isAnnotationPresent(InstallerMethod.class)` will be the way to check if a method has the annotation - but this problem includes finding all the methods. | If you want to implement it yourself, these methods will find all the classes in a given package:
```
/**
* Scans all classes accessible from the context class loader which belong
* to the given package and subpackages.
*
* @param packageName
* The base package
* @return The classes
* @throws ClassNotFoundException
* @throws IOException
*/
private Iterable<Class> getClasses(String packageName) throws ClassNotFoundException, IOException
{
ClassLoader classLoader = Thread.currentThread().getContextClassLoader();
String path = packageName.replace('.', '/');
Enumeration<URL> resources = classLoader.getResources(path);
List<File> dirs = new ArrayList<File>();
while (resources.hasMoreElements())
{
URL resource = resources.nextElement();
        dirs.add(new File(resource.getFile()));
}
List<Class> classes = new ArrayList<Class>();
for (File directory : dirs)
{
classes.addAll(findClasses(directory, packageName));
}
return classes;
}
/**
* Recursive method used to find all classes in a given directory and
* subdirs.
*
* @param directory
* The base directory
* @param packageName
* The package name for classes found inside the base directory
* @return The classes
* @throws ClassNotFoundException
*/
private List<Class> findClasses(File directory, String packageName) throws ClassNotFoundException
{
List<Class> classes = new ArrayList<Class>();
if (!directory.exists())
{
return classes;
}
File[] files = directory.listFiles();
for (File file : files)
{
if (file.isDirectory())
{
classes.addAll(findClasses(file, packageName + "." + file.getName()));
}
else if (file.getName().endsWith(".class"))
{
classes.add(Class.forName(packageName + '.' + file.getName().substring(0, file.getName().length() - 6)));
}
}
return classes;
}
```
Then you can just filter on those classes with the given annotation:
```
for (Method method : testClass.getMethods())
{
if (method.isAnnotationPresent(InstallerMethod.class))
{
// do something
}
}
``` | You should probably take a look at the open source [Reflections library](http://code.google.com/p/reflections/). With it you can easily achieve what you want with few lines of code:
```
Reflections reflections = new Reflections(
new ConfigurationBuilder().setUrls(
ClasspathHelper.forPackage( "com.acme.installer" ) ).setScanners(
new MethodAnnotationsScanner() ) );
Set<Method> methods = reflections.getMethodsAnnotatedWith(InstallerMethod.class);
``` | How to find annotated methods in a given package? | [
"",
"java",
"annotations",
""
] |
I'm currently working on a C++ program in Windows XP that processes large sets of data. Our largest input file causes the program to terminate unexpectedly with no sort of error message. Interestingly, when the program is run from our IDE (Code::Blocks), the file is processed without any such issues.
As the data is being processed, it's placed into a tree structure. After we finish our computations, the data is moved into a C++ STL vector before being sent off to be rendered in OpenGL.
I was hoping to gain some insight into what might be causing this crash. I've already checked out another post which I can't post a link to since I'm a new user. The issue in the post was quite similar to mine and resulted from an out of bounds index to an array. However, I'm quite sure no such out-of-bounds error is occurring.
I'm wondering if, perhaps, the size of the data set is leading to issues when allocating space for the vector. The systems I've been testing the program on should, in theory, have adequate memory to handle the data (2GB of RAM with the data set taking up approx. 1GB). Of course, if memory serves, the STL vectors simply double their allocated space when their capacity is reached.
Thanks, Eric | As it turns out, our hardware is reaching its limit. The program was hitting the system's memory limit and failing miserably. We couldn't even see the error statements being produced until I hooked cerr into a file from the command line (thanks starko). Thanks for all the helpful suggestions! | The fact that the code works within the IDE (presumably running within a debugger?), but not standalone suggests to me that it might be an initialisation issue. | Program crashes when run outside IDE | [
"",
"c++",
"memory",
"crash",
""
] |
I've scripted up a stored procedure as follows. It will parse without errors, but when I try to execute it, it will fail. The error message reads: Msg 208, Level 16, State 6, Procedure aspnet\_updateUser, Line 23
Invalid object name 'dbo.aspnet\_updateUser'.
Here is the stored procedure.
```
USE [PMRS2]
GO
/****** Object: StoredProcedure [dbo].[aspnet_updateUser] Script Date: 05/25/2009 15:29:47 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
-- =============================================
-- Author: <Author,,Name>
-- Create date: <Create Date,,>
-- Description: <Description,,>
-- =============================================
ALTER PROCEDURE [dbo].[aspnet_updateUser]
-- Add the parameters for the stored procedure here
@UserName nvarchar(50),
@Email nvarchar(50),
@FName nvarchar(50),
@LName nvarchar(50),
@ActiveFlag bit,
@GroupId int
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
-- Insert statements for procedure here
UPDATE dbo.aspnet_Users
SET UserName = @UserName, LoweredUserName = LOWER(@UserName), Email = @Email, FName = @FName, LName = @LName, ActiveFlag = @ActiveFlag, GroupId = @GroupId
WHERE LoweredUserName = LOWER(@UserName)
END
``` | Looks like it might not exist yet, swap the Alter to a Create. | To avoid this happening in the future, do what we do, never use alter proc. Instead we check for the existence of the proc and drop it if it exists, then create it with the new code:
```
IF EXISTS (SELECT * FROM sysobjects WHERE type = 'P' AND name = 'myProc')
BEGIN
DROP Procedure myProc
END
GO
CREATE PROCEDURE myProc
(add the rest of the proc here)
``` | Stored procedure parses correctly but will not execute. Invalid object name. Msg 208 | [
"",
"sql",
"database",
"stored-procedures",
""
] |
I am playing about with the Zend Framework at the moment and I have written the authentication code using Zend_Auth. I am trying to find a way to ensure that the user is logged in before they can view anything, but I don't want to do it on a per-controller basis.
I am thinking of a plugin, but all the books I have on it are pretty rubbish in this respect. | A plugin is a good idea.
I answered to a similar question here:
[How do i centralize code from my init functions in all controllers ?](https://stackoverflow.com/questions/866367/how-do-i-centralize-code-from-my-init-functions-in-all-controllers/866517#866517)
Also check the documentation page [Zend Controller Plugins](http://framework.zend.com/manual/en/zend.controller.plugins.html) | ```
Zend_Auth::getInstance()->hasIdentity()
```
You can use this to determine if the user is logged in, and then use the redirector to redirect to the login page if not.
However, the easier way is to use a [Redirector Zend Controller Action Helper](http://www.refreshinglyblue.com/2008/10/28/zend-framework-redirect-the-easy-way/). | How do I force a user to be logged in to view any page? | [
"",
"php",
"zend-framework",
"authentication",
""
] |
I'm using the DataContractSerializer to serialize an objects properties and fields marked with DataMember attributes to xml.
Now a have another use case for the same class, where I need to serialize other properties and other fields.
Are there a way to add "another DataMemberAttribute" that can be used for my other serialization scenario? | No, basically.
If you want to use the existing `DataContractSerializer`, you'll have to maintain a second version of the DTO class and convert the data between them.
Options if you are writing your own serialization code:
* declare your own `[DataMember]`-style attribute(s) and interpret them at runtime in your own serialization code
* use a "buddy class"
* use external metadata (such as a file)
* use code-based configuration (i.e. via a DSL)
In reality, I expect the first will be the simplest choice. | In a similar scenario in the past, we've taken an Object Oriented approach, and created a new class that extends from the main class.
To help you achieve inhertience with the DataContractSerializer, check out [KnownTypeAttribute](http://msdn.microsoft.com/en-us/library/system.runtime.serialization.knowntypeattribute.aspx)
In one of your comments to your question,
> If the same class is implementing multiple interfaces, certain data elements may be relevant to only one of the interfaces.
If that is the case in your scenario, then perhaps your Data Service Contracts should be exposing just the Interfaces, and not the Class?
For example, if you have a class like:
```
[DataContract]
public class DataObject : IRed, IBlue
```
then rather than have your operation contract expose DataObject, you have two operation contracts one for IRed and one for IBlue.
This eliminates the need for custom serialization code. | Having DataContractSerializer serialize the same class in two different ways? | [
"",
"c#",
".net",
"wcf",
""
] |
How would i go about stopping a form from being moved. I have the form border style set as FixedSingle and would like to keep it this way because it looks good in vista :) | Take a look at this [link](http://vaibhavgaikwad.wordpress.com/2006/06/05/creating-a-immovable-windows-form-in-c/). You might be interested in option #3. It will require you to wrap some native code, but should work. There's also a comment at the bottom of the link that shows an easier way to do it. Taken from the comment (can't take credit for it, but I'll save you some searching):
```
protected override void WndProc(ref Message message)
{
const int WM_SYSCOMMAND = 0x0112;
const int SC_MOVE = 0xF010;
switch(message.Msg)
{
case WM_SYSCOMMAND:
int command = message.WParam.ToInt32() & 0xfff0;
if (command == SC_MOVE)
return;
break;
}
base.WndProc(ref message);
}
``` | You can set the `FormBorderStyle` property of the Form to None
```
this.FormBorderStyle = System.Windows.Forms.FormBorderStyle.None;
``` | How do you prevent a windows from being moved? | [
"",
"c#",
".net",
"winforms",
"user-interface",
""
] |
I'm used to scripting languages. PHP, Javascript etc. and I've written a few relatively simple Java and C# apps. This is a question I've repeatedly needed an answer for, and I imagine I'm not the only one.
Let's say I'm in Javascript.
I have function A(), called by the GUI, which retrieves some value.
Function B(), also called by the GUI, requires that value, but function B() is going to be called an arbitrary number of times, an arbitrary length of time after A().
I don't want A() to recalculate the value every time.
An example is logon credentials. A() asks for a username, and B() uses that value to append to a log every time it is called.
For this I would probably just use a global variable.
Now, C#. No global variables! How am I supposed to do this?
**Edit:**
Enjoying the answers, but there are a lot of "try not to use globals" comments. Which I do understand, but I'd like to hear the alternative patterns for this requirement. | Firstly, always ask yourself if you really need to have a global, often you won't. But if you really have to have one...
The best way to do this is to have a static property on some sensibly named class, it effectively becomes your global variable
```
public class Credentials
{
public static string Username {get;set;}
}
//...somewhere in your code
Credentials.Username = "MyUserName";
```
EDIT:
A couple people here have said the blanket statement that Global Variables are bad, and I do agree with this sentiment, and it would appear that the designers of C# also agree as they are simply not available.
We should, however, look at the reasons why globals are bad, and they are mostly regarded as bad because they break the rules of encapsulation. Static data, though, is not necessarily bad: the good thing about static data is that you can encapsulate it. My example above is a very simplistic example of that; in a real-world scenario you would probably include your static data in the same class that does other work with the credentials, maybe a Login class or a User class, or whatever makes sense to your app. | This is not a good practice, but if you really need it, there are a number of ways:
1. Web apps: You can put your variable in some kind of context, like the session or the application scope.
2. Desktop apps: You can create an object and store it as a property of a class that always have an object active.
3. Any kind of app: use a public static property. It is visible to everyone. | How do I persist data without global variables? | [
"",
"c#",
"global-variables",
""
] |
I have a Path (a star figure):
```
<Path x:Name="NiceStar" StrokeThickness="10" Stroke="#ff000000" StrokeMiterLimit="1" Data="F1 M 126.578613,11.297852 L 162.373535,83.825684 L 242.412598,95.456055 L 184.495605,151.911133 L 198.167480,231.626953 L 126.578613,193.990234 L 54.988770,231.626953 L 68.661621,151.911133 L 10.744629,95.456055 L 90.783691,83.825684 L 126.578613,11.297852 Z">
<Path.Fill>
<RadialGradientBrush MappingMode="Absolute" GradientOrigin="390.395508,448.130371" Center="390.395508,448.130371" RadiusX="113.034821" RadiusY="113.034821">
<RadialGradientBrush.Transform>
<MatrixTransform Matrix="1,0,-0,-1,-263.816895,569.592773" />
</RadialGradientBrush.Transform>
<GradientStop Offset="0" Color="#ff00ff00"/>
<GradientStop Offset="1" Color="#ff006736"/>
</RadialGradientBrush>
</Path.Fill>
</Path>
```
Now I want to duplicate this Path several times (just referring to "NiceStar"). Can I do this in pure XAML?
I can use it once, by doing this:
```
<Decorator Child="{StaticResource star}" />
```
However, I cannot duplicate this line. My compiler says:
> Specified element is already the logical child of another element. Disconnect it first. | Create a style.
```
<Style x:Key="NiceStarPath" TargetType="{x:Type Path}">
<Setter Property="StrokeThickness" Value="10"/>
<Setter Property="Stroke" Value="#FF000000"/>
<Setter Property="StrokeMiterLimit" Value="1"/>
<Setter Property="Data" Value="F1 M 126.578613,11.297852 L 162.373535,83.825684 L 242.412598,95.456055 L 184.495605,151.911133 L 198.167480,231.626953 L 126.578613,193.990234 L 54.988770,231.626953 L 68.661621,151.911133 L 10.744629,95.456055 L 90.783691,83.825684 L 126.578613,11.297852 Z"/>
<Setter Property="Fill">
<Setter.Value>
<RadialGradientBrush MappingMode="Absolute" GradientOrigin="390.395508,448.130371" Center="390.395508,448.130371" RadiusX="113.034821" RadiusY="113.034821">
<RadialGradientBrush.Transform>
<MatrixTransform Matrix="1,0,-0,-1,-263.816895,569.592773" />
</RadialGradientBrush.Transform>
<GradientStop Offset="0" Color="#ff00ff00"/>
<GradientStop Offset="1" Color="#ff006736"/>
</RadialGradientBrush>
</Setter.Value>
</Setter>
</Style>
```
...
```
<Path Style="{StaticResource NiceStarPath}"/>
``` | Sure, just define a Style for the path and then you can reuse it as a static resource:
```
<Page
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
<Page.Resources>
<Style x:Key="StarStyle" TargetType="Path">
<Setter>
<Setter.Property>Fill</Setter.Property>
<Setter.Value>
<RadialGradientBrush MappingMode="Absolute"
GradientOrigin="390.395508,448.130371" Center="390.395508,448.130371"
RadiusX="113.034821" RadiusY="113.034821">
<RadialGradientBrush.Transform>
<MatrixTransform Matrix="1,0,-0,-1,-263.816895,569.592773" />
</RadialGradientBrush.Transform>
<GradientStop Offset="0" Color="#ff00ff00"/>
<GradientStop Offset="1" Color="#ff006736"/>
</RadialGradientBrush>
</Setter.Value>
</Setter>
<Setter Property="StrokeThickness" Value="10" />
<Setter Property="Stroke" Value="#ff000000" />
<Setter Property="StrokeMiterLimit" Value="1" />
<Setter Property="Data" Value="F1 M 126.578613,11.297852 L 162.373535,83.825684 L 242.412598,95.456055 L 184.495605,151.911133 L 198.167480,231.626953 L 126.578613,193.990234 L 54.988770,231.626953 L 68.661621,151.911133 L 10.744629,95.456055 L 90.783691,83.825684 L 126.578613,11.297852 Z"/>
</Style>
</Page.Resources>
<StackPanel>
<Path Style="{StaticResource StarStyle}" />
<Path Style="{StaticResource StarStyle}" />
</StackPanel>
</Page>
``` | Reuse path object in XAML | [
"",
"c#",
".net",
"wpf",
"xaml",
""
] |
How do you deal with user input (unicode) that needs to be restricted to a certain set of values, when you want to minimize the risk to applications that you pass the data to further down the line? For example, if I were to store the data in SQL, I would want to remove any chance of a SQL injection. If I were to send it over the wire via HTTP, I would want to make sure it doesn't malform the request, etc.
I guess what I am asking is there any generic method for data sanitization? | Each interface has its own problems when it comes to ways to compromise the system. If you want to play it safe you will need to tailor the validations to suit the problems and/or threats that are relevant in the current context.
If a certain text box in a user interface should be used for numeric input, make sure that the user cannot type (or paste) anything non-numeric into it. If a certain control is used to collect a date from the user, validate that the given value is indeed a valid date (perhaps it should even fall within a certain range; validate that too).
Make sure to url encode anything that is being passed as a query string value in a http request. Use stored procedures and pass the values as parameters to them.
And so on. There is no free lunch, unfortunately. | In case of saving to the database this is very simple. Just use parameters (DbParameter objects) - they will protect you from SQL injection and also will add escape symbols if necessary.
The code can be like this:
```
OleDbConnection cn = new OleDbConnection(strConn);
cn.Open();
strSQL = "INSERT INTO customers (Name) VALUES (@Name)";
OleDbCommand cmd = new OleDbCommand(strSQL, cn);
cmd.Parameters.Add("@Name", "John O'Brian");
cmd.ExecuteNonQuery();
``` | How to handle erroneous data? | [
"",
"c#",
"sanitization",
""
] |
I need to cast a property to its actual type dynamically. How do I/Can I do this using reflection?
To explain the real scenario that I am working on a bit. I am trying to call the "First" extension method on an Entity Framework property. The specific property to be called on the Framework context object is passed as a string to the method (as well as the id of the record to be retrieved). So I need the actual type of the object in order to call the First method.
I can't use the "Where" method on the object as the lambda or delegate method still needs the actual type of the object in order to access the properties.
Also as the object is generated by the Entity Framework I can't cast the type to an interface and operate on that.
This is scenario code:
```
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Reflection;
namespace NmSpc
{
public class ClassA
{
public int IntProperty { get; set; }
}
public class ClassB
{
public ClassA MyProperty { get; set; }
}
public class ClassC
{
static void Main(string[] args)
{
ClassB tester = new ClassB();
PropertyInfo propInfo = typeof(ClassB).GetProperty("MyProperty");
//get a type unsafe reference to ClassB`s property
Object property = propInfo.GetValue(tester, null);
//get the type safe reference to the property
ClassA typeSafeProperty = property as ClassA;
//I need to cast the property to its actual type dynamically. How do I/Can I do this using reflection?
//I will not know that "property" is of ClassA apart from at runtime
}
}
}
``` | I had some time so I tried to solve my problem using VS2010 and I think I was right previously when I though that the dynamic keywork would 'solve' my question. See the code below.
```
using System.Reflection;
namespace TempTest
{
public class ClassA
{
public int IntProperty { get; set; }
}
public class ClassB
{
public ClassB()
{
MyProperty = new ClassA { IntProperty = 4 };
}
public ClassA MyProperty { get; set; }
}
public class Program
{
static void Main(string[] args)
{
ClassB tester = new ClassB();
PropertyInfo propInfo = typeof(ClassB).GetProperty("MyProperty");
//get a type unsafe reference to ClassB`s property
dynamic property = propInfo.GetValue(tester, null);
//casted the property to its actual type dynamically
int result = property.IntProperty;
}
}
}
``` | ```
public object CastPropertyValue(PropertyInfo property, string value) {
if (property == null || String.IsNullOrEmpty(value))
return null;
if (property.PropertyType.IsEnum)
{
Type enumType = property.PropertyType;
if (Enum.IsDefined(enumType, value))
return Enum.Parse(enumType, value);
}
if (property.PropertyType == typeof(bool))
return value == "1" || value == "true" || value == "on" || value == "checked";
else if (property.PropertyType == typeof(Uri))
return new Uri(Convert.ToString(value));
else
return Convert.ChangeType(value, property.PropertyType); }
``` | Cast a property to its actual type dynamically using reflection | [
"",
"c#",
"entity-framework",
"reflection",
""
] |
I have the CakePHP stack in /var/www/site
from one view under controller A I do a jquery ajax call:
```
$("#searchstring").autocomplete("/items/getitemsforautocomplete", { ... more code
```
when the call is triggered I can see from firebug that cakephp wants to call:
```
http://localhost/items/getitemsforautocomplete?q=me
```
Note that 'site' is missing, resulting in a 404.
When I upload this to my site it works the way it should. How should I configure this correctly?? | Doesn't '/' go to the root directory of the site? If your javascript file is in /var/www/site/script, you might want to do:
```
$("#searchstring").autocomplete("../items/getitemsforautocomplete", { ... more code }
``` | Try using FULL\_BASE\_URL in your JS, like:
```
$("#searchstring").autocomplete("<?= FULL_BASE_URL ?>/items/getitemsforautocomplete", {
```
Not the most elegant way, but it solved some of my headaches in the past. | How to I make sure the path in jQuery autocomplete call to CakePHP controller/action resolves correctly? | [
"",
"php",
"jquery",
"linux",
"cakephp",
""
] |
I'm not fully getting all i want from Google analytics so I'm making my own simple tracking system to fill in some of the gaps.
I have a session key that I send to the client as a cookie. This is a GUID.
I also have a surrogate IDENTITY int column.
I will frequently have to access the session row to make updates to it during the life of the client. Finding this session row to make updates is where my concern lies.
I only send the GUID to the client browser:
> a) I don't want my technical 'hacker'
> users being able to gauge what 'user
> id' they are - i.e. know how many
> visitors we have had to the site in total
>
> b) i want to make sure noone messes with data maliciously - nobody can guess a GUID
I know GUID indexes are inefficient, but I'm not sure exactly how inefficient. I'm also not clear how to maximize the efficiency of multiple updates to the same row.
I don't know which of the following I should do :
* Index the GUID column and always use that to find the row
* Do a table scan to find the row based on the GUID (assuming recent sessions are easy to find). Do this in reverse date order (if that's even possible!)
* Avoid a GUID index and keep a hashtable in my application tier of active sessions : `IDictionary<GUID, int>` to allow the 'secret' IDENTITY surrogate key to be found from the 'non secret' GUID key.
There may be several thousand sessions a day.
PS. I am just trying to better understand the SQL aspects of this. I know I can do other clever things like only writing to the table on session expiration etc., but please keep answers SQL/index related.
Some notes:
* If you create the GUID index as non-clustered, the index will be small and probably be cached in memory. By default most databases cluster on primary key.
* A GUID column is larger than an integer. But this is hardly a big issue nowadays. And you need a GUID for the application.
* An index on a GUID is just like an index on a string, for example Last Name. That works efficiently.
* The B-tree of an index on a GUID is harder to balance than an index on an identity column. (But not harder than an index on Last Name.) This effect can be countered by starting with a low fill factor, and reorganizing the index in a weekly job. This is a micro-optimization for a databases that handle a million inserts an hour or more. | Assuming you are using SQL Server 2005 or above, your scenario might benefit from NEWSEQUENTIALID(), the function that gives you ordered GUIDs.
Consider this quote from the article [Performance Comparison - Identity() x NewId() x NewSequentialId](http://www.codeproject.com/KB/database/AgileWareNewGuid.aspx)
*"The NEWSEQUENTIALID system function is an addition to SQL Server 2005. It seeks to bring together, what used to be, conflicting requirements in SQL Server 2000; namely identity-level insert performance, and globally unique values."*
Declare your table as
```
create table MyTable(
id uniqueidentifier default newsequentialid() not null primary key clustered
);
```
However, keep in mind, as Andomar noted that the sequentiality of the GUIDs produced also make them easy to predict. There are ways to make this harder, but non that would make this better than applying the same techniques to sequential integer keys.
Like the other authors I seriously doubt that the overheads of using straight newid() GUIDs would be significant enough for your application to notice. You would be better of focusing on minimizing roundtrips to your DB than on implementing custom caching scenarios such as the dictionary you propose. | Indexing a 'non guessable' key for quick retrieval? | [
"",
"sql",
"performance",
""
] |
I'm writing some console applications to move some data between SQLite databases. The classes for the connection and various prepared statements implenent IDisposable, so I'm instantiating these objects using `using` blocks, like this:
```
using (SQLiteConnection sqlite_conn = new SQLiteConnection(connectionString))
{
sqlite_conn.Open();
using (SQLiteCommand sqlite_cmd = new SQLiteCommand())
{
sqlite_cmd.Connection = sqlite_conn;
sqlite_cmd.CommandText = "INSERT INTO SOMETHING SOMETHING;";
sqlite_cmd.ExecuteNonQuery();
}
sqlite_conn.Close();
}
```
But, I need to be able to create these connections in one method, and then call them in other methods. What is the cleanest and least confusing way for me to store these connections as instance variables? I want to make sure their `.Dispose()` methods are called in an intelligent way, but can't see a good way to make sure all of the action occurs within the context of a single `using` block.
I realize this is a C# newb question so please tailor your answers as such. If you have a suggested solution, I'd love it if you included a code sample to illustrate.
**EDIT:** My use case is a console app. Someone passes in the source and destination connection strings, and the console app performs an operation. Would I actually make my console class Program itself implement IDisposable like this?:
```
class Program : IDisposable
{
private SQLiteConnection sourceConnection;
private SQLiteConnection destinationConnection;
public void Dispose()
{
sourceConnection.Dispose();
destinationConnection.Dispose();
}
}
``` | > I need to be able to create these connections in one method, and then call them in other methods. What is the cleanest and least confusing way for me to store these connections as instance variables?
In this case, the class where you keep these instance variables should itself implement IDisposable, and on disposal should make sure the connections are also disposed.
Also, when you have multiple IDisposables at once you can re-write it this way, to group them and reduce the nesting in the code:
```
using (SQLiteConnection sqlite_conn = new SQLiteConnection(connectionString))
using (SQLiteCommand sqlite_cmd = new SQLiteCommand())
{
sqlite_conn.Open();
sqlite_cmd.Connection = sqlite_conn;
sqlite_cmd.CommandText = "INSERT INTO SOMETHING SOMETHING;";
sqlite_cmd.ExecuteNonQuery();
} // no need to call .Close(): IDisposable normally handles it for you
``` | You can make your class implement IDisposable and ensure you clean up the connection etc within your Dispose method. Then the method calling your class can do using(MyClass c ...). | How do I share IDisposable resources within a class? | [
"",
"c#",
"class-design",
"using-statement",
""
] |
I just hope the following doesn't seem to you like redundant jabber :)
Anyway, there is that:
```
for (p = fmt; *p; p++) {
if (*p != '%') {
putchar(*p);
continue;
}
switch (*++p) {
/* Some cases here */
...
}
}
```
And I wondered why the writer (Kernighan / Ritchie) used the `continue` in the `if` statement.
I thought it was for the mere reason that he deemed it would be more elegant than indenting the whole `switch` under an `else` statement, what do you think? | Probably. The human brain has limited stack space, making it difficult to deal with deeply nested structures. Anything that flattens the information we're expected to parse makes it easier to understand.
Similarly, I normally prefer this:
```
bool foo(int arg)
{
if(!arg) {
/* arg can't be 0 */
return false;
}
/* Do some work */
return true;
}
```
To this:
```
bool foo(int arg)
{
if(!arg) {
/* arg can't be 0 */
return false;
} else {
/* Do some work */
return true;
}
}
```
Or worse, to this:
```
bool foo(int arg)
{
if(arg) {
/* Do some work */
return true;
} else {
/* arg can't be 0 */
return false;
}
}
```
In the last example, the part that does the work might be quite long. By the time the reader gets to the else clause, he may not remember how he got there.
Putting the bail out conditions as close to the beginning helps to assure that people who try to call your functions will have a good idea of what inputs the function expects.
Also, as others pointed out, the continue makes it clear that there's no need to read further into the code inside the loop to determine whether any more processing is done after that point for this case, making the code easier to follow. Again, the fewer things you force the reader to keep track of, the better. | Because with the continue it is clear that the code is done for this loop iteration. If a else would have been used you had also to check if there is no code after the else.
I think it is general a good habit to exit a context as soon as possible because this leads to much clearer code.
---
For example:
```
if(arg1 == NULL)
return;
if(arg2 == NULL)
return;
//Do some stuff
```
vs.
```
if(arg1 != null)
{
if(arg2 != null)
{
//Do some stuff
}
}
``` | Style question about existing piece of code (C/C++) | [
"",
"c++",
"c",
"coding-style",
""
] |
The site I am developing in PHP makes many MySQL database requests per page viewed, albeit many of them small requests against properly designed indexes. I do not know whether it will be worthwhile to develop a cache script for these pages.
1. Is file I/O generally faster than database requests? Does this depend on the server? Is there a way to test how many of each your server can handle?
2. One of the pages checks the database for a filename, then checks the server to see if it exists, then decides what to display. This I would assume would benefit from a cached page view?
Also if there is any other information on this topic that you could forward me to that would be greatly appreciated. | If you're doing read-heavy access (looking up filenames, etc) you might benefit from [memcached](https://www.php.net/memcached). You could store the "hottest" (most recently created, recently used, depending on your app) data in memory, then only query the DB (and possibly files) when the cache misses. Memory access is far, far faster than database or files.
If you need write-heavy access, a database is the way to go. If you're using MySQL, use InnoDB tables, or another engine that supports row-level locking. That will avoid people blocking while someone else writes (or worse, writing anyway).
But ultimately, it depends on the data. | It depends on how the data is structured, how much there is and how often it changes.
If you've got relatively small amounts, of relatively static data with relatively simple relationships - then flat files are the right tool for the job.
Relational databases come into their own when the connections between the data are more complex. For basic 'look up tables' they can be a bit overkill.
But, if the data is constantly changing, then it can be easier to just use a database rather than handle the configuration management by hand - and for large amounts of data, with flat files you've got the additional problem of how do you find the one bit that you need, efficiently. | File access speed vs database access speed | [
"",
"php",
"database",
"file-io",
""
] |
I have a situation where two objects of the same type have parents of different types.
The following pseudo code explains the situation the best:
```
TypeA a1, a2;
TypeB b;
TypeC c;
a1.Parent = b;
a2.Parent = c;
```
To complicate things even further TypeB and TypeC may have primary keys of different types, for instance, the following assertion may be true:
```
Assert(b.Id is string && c.Id is int);
```
My question is what is the best way to define this parent-child relationship in SQL Server?
The only solution I can think of is to define that TypeA table has two columns - ParentId and ParentType, where:
* ParentId is sql\_variant - to be able to hold both numbers and strings
* ParentType is string - to keep the assembly qualified name of the parent type.
However, when I defined a user data type based on sql\_variant, it specified the field size as fixed 8016 bytes, which seems to be way too much.
There have to be a better way. Anyone?
Thanks. | If NEITHER column will EVER be involved in any mathematical operations, make them CHAR() or VARCHAR(), as you will be dealing with a sequence of characters, not numbers. '1' is just as valid in that case as 'A'. | One word: **DON'T**
This is very bad practice - columns have **ONE SINGLE DATATYPE** for a reason. Do not abuse this and make everything into variants.......
Marc | What is the best way to define a column in SQL server that may contain either an integer or a string? | [
"",
"sql",
"sql-server",
"sql-variant",
""
] |
I have a school project in which I have to implement a chat application, whose server will be a java web service.
The problem is that I've always thought of a web service as a way of calling remote functions, and I have no idea how to keep a "session" active on the web service, nor how to keep track of all the people currently in chat, rooms etc. | To the best of my knowledge, a chat server is supposed to know its clients after an initial connection, and send every client message to all clients. This definitely calls for some sort of session maintenance. I think the right way to do this is as follows:
1. Client calls web service 'handshake' and provides some minimal identification details.
2. Server returns an acknowledgment that includes a unique client identifier.
3. Client calls web service 'message' and sends a new message, together with its identifier.
4. Server identifies client by the identifier, distributes message to all clients.
I'm not really sure how the message distribution should work, as web services are essentially a pull-service and not push. Perhaps the client should expose its own web service for the server to call.
Hope this helps,
Yuval =8-) | You could consider implementing a [COMET](http://en.wikipedia.org/wiki/Comet_(programming)) solution. This will effectively give you push communication, thus eliminating latency, a VERY nice feature for a chat application.
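A hypothetical sketch of the session registry behind steps 1-4 (all class and method names here are invented for illustration): the handshake mints a unique identifier, and later message calls use it to resolve the sender before the text is distributed.

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical server-side session registry for the handshake/message flow.
public class ChatSessions {
    private final Map<String, String> clients = new ConcurrentHashMap<>();

    // Steps 1-2: client identifies itself, server returns a unique id.
    public String handshake(String displayName) {
        String id = UUID.randomUUID().toString();
        clients.put(id, displayName);
        return id;                      // the client keeps this for later calls
    }

    // Steps 3-4: resolve the id, build the line to distribute to all clients.
    public String message(String id, String text) {
        String name = clients.get(id);
        if (name == null) throw new IllegalArgumentException("unknown client");
        return name + ": " + text;
    }

    public static void main(String[] args) {
        ChatSessions server = new ChatSessions();
        String id = server.handshake("alice");
        String line = server.message(id, "hello");
        if (!line.equals("alice: hello")) throw new AssertionError(line);
    }
}
```

In a real web service the registry would also need expiry for abandoned sessions, and the distribution step would push (or let clients poll for) the new line.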
If you want to go for the gold, consider implementing more advanced features:
* spell check
* URLs/email addresses converted to links automatically
* separate chat rooms
* moderator functions (terminate chat, kick user)
* event info like "User is typing..."
* statuses (available, busy, away...)
* avatars
* ... | Implementing a chat server as a WebService | [
"",
"java",
"web-services",
"chat",
""
] |
How can I deny access to a file I open with fstream? I want to prevent other access (read and write) to the file while I'm reading/writing to it with fstream.
On Windows, you can use [CreateFile()](http://msdn.microsoft.com/en-us/library/aa363858(VS.85).aspx) or [LockFileEx()](http://msdn.microsoft.com/en-us/library/aa365203.aspx). On Linux, there is [flock()](http://linux.die.net/man/2/flock), [lockf()](http://linux.die.net/man/3/lockf), and [fcntl()](http://linux.die.net/man/2/fcntl) (as the previous commenter said).
If you are using MSVC, you can pass a third parameter to fstream's constructor. See the documentation for [Visual Studio 6](http://msdn.microsoft.com/en-us/library/aa243822(VS.60).aspx) or [newer versions](http://msdn.microsoft.com/en-us/library/8et8s826.aspx). Of course it won't work with other compilers and platforms.
Why do you want to lock others out anyway? There might be a better solution... | *Expanding on the comment by Casebash:*
To open a file in windows so that other processes cannot write to it use
```
file.rdbuf()->open(path, std::ios_base::app, _SH_DENYWR);
```
\_SH\_DENYRW will deny both read and write access | Using std:fstream how to deny access (read and write) to the file | [
"",
"c++",
""
] |
When I do
```
fstream someFile("something.dat", ios::binary|ios::out);
someFile.seekp(someLocation, ios::beg);
someFile.write(someData, 100);
```
It seems to replace the entire file with those 100 bytes instead of replacing only the appropriate 100 bytes, as if I had specified ios::trunc. Is there a portable way to not have it truncate the file?
**Edit:** adding ios::in seems to do the trick, but why is this required, and is that standard behavior?
**Edit #2:** I am not trying to append to the existing file. I need to replace the 100 bytes while leaving the rest unaffected. | You want the append flag, ios::app, if you want to write at the end of the file.
To do it somewhere arbitrarily in the middle of the file, you need to seek to the right place. You CAN do this by opening the file for in and out, but if I were you I'd create a temp file, copy input up to mark, write your new data, copy the rest to EOF, close the files and replace the previous version with the temp file. This is called a "Master File update". | Since the file already exists open it in 'read+write' mode and then do seekp. I think it will work.
```
fstream someFile("something.dat", ios::binary|ios::out|ios::in);
someFile.seekp(someLocation, ios::beg);
someFile.write(someData, 100);
``` | fstream replace portion of a file | [
"",
"c++",
"io",
"fstream",
""
] |
I have a list of objects I wish to sort in C#.Net 3.5; the objects are as follows
id | name | parent\_id
1 | Superman | NULL
2 | Batman | 3
3 | Watchman | 1
4 | Wolverine | 2
I know some of you might find this easy, but I need to sort the list based on the parent\_id, which points to an id in the same table.
So can someone give me a good algorithm to sort this list without looping over and over again? I cannot really phrase it that well, therefore I can't seem to google the right results I wanted.
A collection of IEnumerable or DataTable solution would be great.
Thanks in advance.
EDIT:----------------NEW Example
id | name | parent\_id
1 | TOP CHILD | NULL
2 | Child C | 3
3 | Child B | 4
4 | Child A | 1
----> The Output I want is
id | name | parent\_id
1 | TOP CHILD | NULL
4 | Child A | 1
3 | Child B | 4
2 | Child C | 3
----> If I use OrderBy or Sort, the result I get is
id | name | parent\_id
1 | TOP CHILD | NULL
4 | Child A | 1
2 | Child C | 3
3 | Child B | 4
--> None of the solutions is what I really wanted,
Sorry again for not being clear
Hope this example is clearer | after you edit: I think I get you and the comparer looks like:
```
public Int32 CompareTo(SuperHero right)
{
if (this.ID == right.ID)
return 0;
return this.ParentID.CompareTo(right.ID);
}
```
in response to your comment:
The class looks like:
```
public class SuperHero : IComparable<SuperHero>
{
public Int32 ID { get; set; }
public String Name { get; set; }
public Int32 ParentID { get; set; }
public SuperHero(Int32 id, String name, Int32 pid)
{
this.ID = id;
this.Name = name;
this.ParentID = pid;
}
public Int32 CompareTo(SuperHero right)
{
if (this.ID == right.ID)
return 0;
return this.ParentID.CompareTo(right.ID);
}
public override string ToString()
{
return this.Name;
}
}
```
and to use it:
```
static void Main(string[] args)
{
// create your list
List<SuperHero> heroes = new List<SuperHero>();
// populate it
heroes.Add(new SuperHero(1, "Superman", 0));
heroes.Add(new SuperHero(2, "Batman", 3));
heroes.Add(new SuperHero(3, "Watchman", 1));
heroes.Add(new SuperHero(4, "Wolverine", 2));
foreach (SuperHero hero in heroes)
Console.WriteLine(hero.ToString());
Console.WriteLine();
// sort it
heroes.Sort();
foreach (SuperHero hero in heroes)
Console.WriteLine(hero.ToString());
Console.ReadKey();
}
```
The .NET sort (QuickSort) will use your comparer to sort the list. | You can create a class for the above data with id, name, parent\_id as class members.
Override the CompareTo method of IComparable
```
public class Person : IComparable<Person>
{
public int id, parentID;
public string name;
public Person(int id, string name, int parentID)
{
this.id = id;
this.name = name;
this.parentID = parentID;
}
#region IComparable Members
public int CompareTo(Person obj)
{
return this.parentID.CompareTo(obj.parentID);
//OR
//if (this.parentID > obj.parentID)
//{
// return 1;
//}
//else if (this.parentID < obj.parentID)
//{
// return -1;
//}
//return 0;
}
#endregion
}
```
In the main code:
```
List<Person> peopleArray = new List<Person>();
peopleArray.Add(new Person(1, "Jerry", 1));
peopleArray.Add(new Person(2, "George", 4));
peopleArray.Add(new Person(3, "Elaine", 3));
peopleArray.Add(new Person(4, "Kramer", 2));
peopleArray.Sort();
foreach (Person p in peopleArray)
Console.WriteLine(p.parentID);
```
This will sort the list by parent id
O/P of parent ids:
1
2
3
4 | Algorithm for sorting a list of objects in c# | [
"",
"c#",
".net",
"algorithm",
""
] |
I've been thinking about writing unit tests for my PHP/MySQL projects.
The thing that bugs me is how I can test things like form validation, ajax features, and UI features (such as clicking links). Also, I don't want the tests to be dependent on one aspect of the UI, so that if I moved one link all the tests would break.
Does anyone have any ideas/suggestions on what parts of the code I should unit test, approximately how much time should be spent on unit testing, and any other suggestions/tips? | Unit tests, at least from a web application development standpoint, can't really cover UI design. What unit tests **can** help you with is to test the input/output of all your controller methods and/or any singleton/global functions you have hanging around. So, if you want to get unit test coverage on your form validation, make sure your application is structured in such a way that you can test your validation function independent of the UI.
```
class SomeController extends ... {
... function isAPhoneNumber($string){
if(...) {
return true;
}
else (...) {
return false;
}
}
}
```
This does leave all the code that gets your value TO this method untested. The only effective way I've seen of testing this is to test the UI directly with browser remote controls like [Selenium](http://seleniumhq.org). The [PHPUnit testing framework](http://www.phpunit.de/manual/3.1/en/selenium.html) has hooks for using Selenium's PHP driver, which means you can use the same test runner as your other unit tests.
You're never going to be 100% assured that an innocent change won't hork your Selenium tests, but there are steps you can take during development to minimize this. Selenium has [various different ways](http://seleniumhq.org/documentation/core/reference.html#locators) to "target" an element it will click on or type in. You want to use targeting that is as specific as possible, so try to give all your (X)HTML nodes a unique ID element. This ensures that selenium will click a link no matter where it ends up on a page.
When a unique id is not possible, try to wrap common elements with some HTML element that has a unique identifier, and then use Selenium's XPath targeting abilities to say "click the link that contains the text 'signup' that's in the div with the id of 'foo'", or "click the third link in the div with the id of 'foo'".
If it's not obvious from the items above, **do not** rely on the targeting choices that Selenium IDE (or other recorders) will pick. Their goal is to ensure that running the test on that exact page will give you the same results, which can give you very fragile XPath. Always review the tests you record, and fix up shoddy targeting.
There are no easy answers here. If there were, web applications would be a lot less buggy. | You should thumb through this part of the [documentation of SimpleTest](http://www.simpletest.org/en/web_tester_documentation.html) and look at the methods they have for testing
* page content
* navigation
* cookies
* images
* frames
* form submission
* redirects
* proxies
Even if you don't end up using SimpleTest, it may be worth a look.
To me, the most useful thing is testing form submissions, including each parameter in a GET or POST request for each form that is necessary for the operation of your site.
Whether you want to test a lot of your UI or just small key areas is up to you. | Unit testing in PHP? | [
"",
"php",
"unit-testing",
""
] |
I'm working on a wallpaper application. Wallpapers are changed every few minutes as specified by the user.
The feature I want is to fade in a new image while fading out the old image. Anyone who has a mac may see the behavior I want if they change their wallpaper every X minutes.
My current thoughts on how I would approach this is to take both images and lay one over the other and vary the opacity. Start the old image at 90% and the new image at 10%. I would then decrease the old image by 10% until it is 0%, while increasing the new image by 10% until 90%. I would then set the wallpaper to the new image.
To make it look like a smooth transition I would create the transition wallpapers before starting the process instead of doing it in real-time.
My question is, is there a more effective way to do this?
I can think of some optimizations such as saving the transition images with a lower quality.
Any ideas on approaches that would make this more efficient than I described? | Sounds like an issue of trade-off.
It depends on the emphasis:
* Speed of rendering
* Use of resources
Speed of rendering is going to be an issue of how long the process of the blending images is going to take to render to a screen-drawable image. If the blending process takes too long (as transparency effects may take a long time compared to regular opaque drawing operations) then pre-rendering the transition may be a good approach.
Of course, pre-rendering means that there will be multiple images either in memory or disk storage which will have to be held onto. This will mean that more resources will be required for temporary storage of the transition effect. If resources are scarce, then doing the transition on-the-fly may be more desirable. Additionally, if the images are on the disk, there is going to be a performance hit due to the slower I/O speed of data outside of the main memory.
On the issue of "saving the transition images with a lower quality" -- what do you mean by "lower quality"? Do you mean compressing the image? Or, do you mean using a smaller image? I can see some pros and cons for each method.
* **Compress the image**
+ **Pros:** Per image, the amount of memory consumed will be lower. This would require less disk space, or space on the memory.
+ **Cons:** Decompression of the image is going to take processing. The decompressed image is going to take additional space in the memory before being drawn to the screen. If lossy compression like JPEG is used, compression artifacts may be visible.
* **Use a smaller image**
+ **Pros:** Again, per image, the amount of memory used will be lower.
+ **Cons:** The process of stretching the image to the screen size will take some processing power. Again, additional memory will be needed to produce the stretched image.
Finally, there's one point to consider -- **Is rendering the transition in real-time really not going to be fast enough?**
It may actually turn out that rendering doesn't take too long, and this may all turn out to be [premature optimization](http://en.wikipedia.org/wiki/Optimization_(computer_science)#When_to_optimize).
It might be worth a shot to make a prototype without any optimizations, and see if it would really be necessary to pre-render the transition. [Profile](http://en.wikipedia.org/wiki/Profiling_(software)) each step of the process to see what is taking time.
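If you do build that prototype, the blend step itself is just per-pixel linear interpolation: out = old * (1 - alpha) + new * alpha. A dependency-free sketch of the math (the function name and the tiny flat pixel lists are mine, purely for illustration):

```python
def blend(old_pixels, new_pixels, alpha):
    """Linearly interpolate two equal-length sequences of 0-255 channel values."""
    return [round(o * (1 - alpha) + n * alpha)
            for o, n in zip(old_pixels, new_pixels)]

# Eleven steps (0%..100%) between two tiny "images":
frames = [blend([0, 255], [255, 0], step / 10) for step in range(11)]
```

Precomputing `frames` up front corresponds to the pre-rendering approach; calling `blend` once per tick corresponds to rendering on the fly.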
If the performance of on-the-fly rendering is unsatisfactory, weigh the positives and negatives of each approach of pre-rendering, and pick the one that seems to work best. | Pre-rendering each blended frame of the transition will take up a lot of disk space (and potentially bandwidth). It would be better to simply load the two images and use the graphics card to do the blending in real time. If you have to use something like openGL directly, you will probably be able to just create two rectangles, set the images as the textures, and vary the alpha values. Most systems have simpler 2d apis that would let you do this very easily. (eg. CoreAnimation on OS X, which will automatically vary the transparency over time and make full use of hardware acceleration.) | Whats the most efficient method for transitioning between two images (Like Mac wallpaper change) | [
"",
"c#",
"algorithm",
"image-processing",
""
] |
How to check if the binary representation of an integer is a palindrome? | Since you haven't specified a language in which to do it, here's some C code (not the most efficient implementation, but it should illustrate the point):
```
#include <stdbool.h>

#define WORDSIZE (sizeof(unsigned int) * 8)

/* flip n */
unsigned int flip(unsigned int n)
{
    unsigned int i, newInt = 0;

    for (i = 0; i < WORDSIZE; ++i)
    {
        newInt <<= 1;            /* shift first, so the last bit is not over-shifted */
        newInt += (n & 0x0001);
        n >>= 1;
    }
    return newInt;
}

bool isPalindrome(unsigned int n)
{
    unsigned int flipped = flip(n);

    /* shift to remove trailing zeroes (guard against n == 0) */
    while (flipped && !(flipped & 0x0001))
        flipped >>= 1;

    return n == flipped;
}
```
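If you want to experiment with the approach before committing to C, the same idea (reverse the significant bits and compare, so leading zeroes are ignored) fits in a few lines of Python; the function name is mine:

```python
def is_binary_palindrome(n: int) -> bool:
    """True if n's binary representation, without leading zeroes, is a palindrome."""
    reversed_bits, tmp = 0, n
    while tmp:
        reversed_bits = (reversed_bits << 1) | (tmp & 1)
        tmp >>= 1
    return reversed_bits == n

# 9 is 0b1001 and 17 is 0b10001: palindromes. 18 is 0b10010: not.
```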
**EDIT** fixed for your 10001 thing. | Hopefully correct:
```
_Bool is_palindrome(unsigned n)
{
    unsigned m = 0;

    for (unsigned tmp = n; tmp; tmp >>= 1)
        m = (m << 1) | (tmp & 1);

    return m == n;
}
``` | How to check if the binary representation of an integer is a palindrome? | [
"",
"c++",
"c",
"binary",
"integer",
"palindrome",
""
] |
Consider the following pgSQL statement:
```
SELECT DISTINCT some_field
FROM some_table
WHERE some_field LIKE 'text%'
LIMIT 10;
```
Consider also that some\_table consists of several million records, and that some\_field has a b-tree index.
Why does the query take so long to execute (several minutes)? What I mean is, why doesn't it loop through creating the result set and, once it gets 10 of them, return the result? It looks like the execution time is the same regardless of whether or not you include the 'LIMIT 10'.
Is this correct or am I missing something? Is there anything I can do to get it to return the first 10 results and 'screw' the rest?
UPDATE: If you drop the distinct, the results are returned virtually instantaneously. I do know, however, that many of the some\_table records are fairly unique already, and certainly when I run the query without the distinct declaration, the first 10 results are in fact unique. I also eliminated the where clause (eliminating it as a factor). So, my original question still remains: why isn't it terminating as soon as 10 matches are found? | You have a DISTINCT. This means that to find 10 distinct rows, it's necessary to scan all rows that match the predicate until 10 *different* some\_fields are found.
Depending on your indices, the query optimizer may decide that scanning all rows is the best way to do this.
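The effect is easy to reproduce in miniature with SQLite from Python (not PostgreSQL, but the principle is the same; the table and values here are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE some_table (some_field TEXT)")

# Many rows, but only three distinct values hiding behind them.
conn.executemany(
    "INSERT INTO some_table VALUES (?)",
    [("text%d" % (i % 3),) for i in range(10000)],
)

rows = conn.execute(
    "SELECT DISTINCT some_field FROM some_table "
    "WHERE some_field LIKE 'text%' LIMIT 10"
).fetchall()
# Only 3 rows come back, but the engine had to look at far more than 10
# underlying rows before it could know the result set was complete.
```

Running `EXPLAIN` (or `EXPLAIN ANALYZE` in PostgreSQL) on the real query will show whether a sort or hash step is being used to satisfy the DISTINCT.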
10 distinct rows could represent 10, a million, an infinity of non-distinct rows. | Can you post the results of running EXPLAIN on the query? This will reveal what Postgres is doing to execute the query, and is generally the first step in diagnosing query performance problems.
It may be sorting or constructing a hash table of the entire rowset to eliminate the non-distinct records before returning the first row to the LIMIT operator. It makes sense that the engine should be able to read a fraction of the records, returning one new distinct at a time until the LIMIT clause has satisfied its 10 quota, but there may not be an operator implemented to make that work.
Is the some\_field unique? If not, it would be useless in locating distinct records. If it is, then the DISTINCT clause would be unnecessary, since that index already guarantees that each row is unique on some\_field. | Why do SQL statements take so long when "limited"? | [
"",
"sql",
"postgresql",
""
] |
Why is this code crashing when run as a restricted user, but not when run as an admin of the machine?
```
extern "C" BOOL WINAPI DllMain(HINSTANCE hInstance,
                               DWORD dwReason,
                               LPVOID lpReserved)
{
    hInstance;                  /* referencing the parameter silences the unused-parameter warning */
    m_hInstance = hInstance;
    return _AtlModule.DllMain(dwReason, lpReserved);
}
```
The code is crashing on the return... and I don't know why.
I am getting:
```
The instruction at "0x7c90100b" referenced memory at "0x00000034".
The memory could not be "read".
```
Furthermore, \_AtlModule.DLLMain looks like this:
```
inline BOOL WINAPI CAtlDllModuleT<T>::DllMain(DWORD dwReason, LPVOID lpReserved) throw()
{
#if !defined(_ATL_NATIVE_INITIALIZATION)
    dwReason; lpReserved;
#pragma warning(push)
#pragma warning(disable:4483)
    using namespace __identifier("<AtlImplementationDetails>");
#pragma warning(pop)
    if (dwReason == DLL_PROCESS_ATTACH)
    {
        ATLASSERT(DllModuleInitialized == false);
    }
    return TRUE;
#else
    return _DllMain(dwReason, lpReserved);
#endif
}
```
We are importing the ATL DLL, and tried linking statically as well... no luck.
---
**UPDATE**
Using ProcMon, I get a buffer overflow here:
RegQueryValue
HKU\S-1-5-21-448539723-854245398-1957994488-1005\Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders\Cache
BUFFER OVERFLOW
Length: 144
What does this mean? | When you get an error saying you can't reference a memory at some 0x0000... location, it usually means your code is trying to reference a member variable of some object, but the object pointer points to NULL. In this case, the member variable is 0x34 bytes into the object. Further guessing, given that it only fails when running under a restricted user, I'd say some operation that is supposed to return a pointer to an object fails due to insufficient rights. If the returned pointer is not tested for being null, the code will continue running until someone tries to read one of its member variables, at which point you get the crash.
What I would do is thoroughly debug the code and look for suspicious NULLs. Also, you might want to run your app under [AppVerifier](http://technet.microsoft.com/en-us/library/bb457063.aspx) with the LuaPriv test on. If my guess is correct, some API call failures would be reported, manifested in your code as returned NULLs. AppVerifier should also provide you with the stack trace, so you'll be able to easily find the root of the problem. | Jason,
Where are you declaring m\_hInstance? Is it static at above DllMain? Just trying to get some more details about the code. | Why would I get a GPF in DLLMain when run as a restricted user? | [
"",
"c++",
"com",
"activex",
"atl",
""
] |
I have the following class
```
public class UIControl
{
    public string FName { get; set; }
    public HtmlInputCheckBox SelectCheckBox { get; set; }
    public bool OverrideSelect { get; set; }

    // Want to throw an ApplicationException if the developer uses this
    // constructor and passes chkSelect = null
    public UIControl(string sFieldName, HtmlInputCheckBox chkSelect)
    {
        this.FName = sFieldName;
        this.SelectCheckBox = chkSelect;
    }

    public UIControl(string sFieldName, HtmlInputCheckBox chkSelect, bool overrideSelect)
        : this(sFieldName, chkSelect)
    {
        OverrideSelect = overrideSelect;
    }
}
```
I want to make sure that the developer uses the first constructor only when chkSelect is not null.
I want to do a:
```
throw new ApplicationException("Dev is using the incorrect constructor");
``` | You can use a private constructor thus:
```
public UIControl(string sFieldName, HtmlInputCheckBox chkSelect)
    : this(sFieldName, chkSelect, false, false)
{
}

public UIControl(string sFieldName, HtmlInputCheckBox chkSelect,
                 bool overrideSelect)
    : this(sFieldName, chkSelect, overrideSelect, true)
{
}

private UIControl(string sFieldName, HtmlInputCheckBox chkSelect,
                  bool overrideSelect, bool allowOverride)
{
    if ((!allowOverride) && (chkSelect == null))
        throw new ArgumentException(...);

    this.FName = sFieldName;
    this.SelectCheckBox = chkSelect;
    OverrideSelect = overrideSelect;
}
```
There are lots of variants, but as a general rule, have less specific constructors calling more specific ones. For example, the following would also work in your case:
```
public UIControl(string sFieldName, HtmlInputCheckBox chkSelect)
    : this(sFieldName, chkSelect, false)
{
    if (chkSelect == null) throw ...
}

public UIControl(string sFieldName, HtmlInputCheckBox chkSelect,
                 bool overrideSelect)
{
    this.FName = sFieldName;
    this.SelectCheckBox = chkSelect;
    this.OverrideSelect = overrideSelect;
}
``` | What's hard?
```
public UIControl(string sFieldName, HtmlInputCheckBox chkSelect)
{
    if (chkSelect == null)
    {
        throw new ApplicationException("Dev is using the incorrect constructor");
    }

    this.FName = sFieldName;
    this.SelectCheckBox = chkSelect;
}
``` | Force developer to use a particular constructor | [
"",
"c#",
".net",
""
] |
I'm loading HTML from an AJAX request, and there are some `img` tags in it.
I need to know when the images in the AJAX-loaded content have fully loaded, so I can resize the container.
I don't know how many images are in the result, so I can't simply check the `.complete` value.
Do you know a solution for this? | If you're using jQuery, the `$(window).load()` command can set up a function to be called once all the images are loaded, as follows:
```
$(window).load(
    function() {
        // weave your magic here.
    }
);
```
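Without a library, one way is to count `load` (and `error`) events on the freshly inserted images yourself. A rough sketch; the helper name is my own, and very old browsers may need `attachEvent` instead of `addEventListener`:

```javascript
function whenImagesLoaded(container, callback) {
  var images = container.querySelectorAll("img");
  var remaining = images.length;
  if (remaining === 0) { callback(); return; }

  function done() {
    remaining -= 1;
    if (remaining === 0) { callback(); }
  }

  for (var i = 0; i < images.length; i++) {
    var img = images[i];
    if (img.complete) {
      done();                              // already loaded (e.g. from cache)
    } else {
      img.addEventListener("load", done);
      img.addEventListener("error", done); // count broken images too
    }
  }
}

// After injecting the AJAX response into `container`:
//   whenImagesLoaded(container, resizeContainer);
```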
If you're not using jQuery, I'd just do what I've done when I want a *little* bit of its functionality: download it and examine the source to see how it does it. Then do that :-) | `XMLHTTPrequests` have properties that you can use to determine when the load has completed. If you are using JavaScript directly, you would need to assign a function to the `onreadystatechange` event. This is called each time that the state changes. The states are stored in `ReadyState` and are:
0. Uninitiated
1. Loading
2. Loaded
3. Interactive
4. Complete
In your function, you check that the status is 4 and call the other required functions.
jQuery has events that tell you when certain things have occurred. [View the AJAX Documentation](http://docs.jquery.com/Ajax) | How can I know when AJAX-loaded content has finished loading its images? | [
"",
"javascript",
"ajax",
"image",
"loading",
""
] |
For an IM script I am making,
I would like the computer to beep, like in Gmail, every time there is a new message.
Is there a script in PHP or HTML to make the computer beep?
Thanks! | All you need to do is load a small Flash movie that makes the sound; you can easily toggle it in your page with JavaScript whenever you need it. | Several popular JavaScript libraries have interfaces for playing sounds, built in or available as plugins; for example, see [Scriptaculous](http://wiki.github.com/madrobby/scriptaculous/sound) and [jQuery](http://plugins.jquery.com/project/sound). | Is it possible to make a "beep" sound in PHP like in Gmail | [
"",
"php",
"html",
"audio",
""
] |
I'm creating an application and I have problems with `Cursor`. I have an `SQLiteDatabase` that returns me a `Cursor` when I try to fetch the values with this function:
```
public Cursor fetchOption(long rowId) throws SQLException {
    Cursor mCursor = mDb.query(true, DATABASE_TABLE, new String[] {KEY_ROWID,
            KEY_TITLE, KEY_BODY}, KEY_ROWID + "=" + rowId, null,
            null, null, null, null);
    if (mCursor != null) {
        mCursor.moveToFirst();
    }
    return mCursor;
}
```
I don't know how to obtain the value of the field in the `Cursor`. If I do that like so:
```
String a = mOptionDb.fetchOption(0).getColumnName(0).toString();
String b = mOptionDb.fetchOption(0).getColumnName(1).toString();
String c = mOptionDb.fetchOption(0).getColumnName(2).toString();
```
I only obtain the name of the columns (`_id, title, body`) but not the values. Any suggestions on how to achieve this? | I think you can forget about checking for null.
Instead check if there is data and then access the columns using the cursor:
```
Cursor cursor = fetchOption(0);
if (cursor.moveToFirst()) // data?
    System.out.println(cursor.getString(cursor.getColumnIndex("title")));
cursor.close(); // that's important too, otherwise you're gonna leak cursors
```
It might also make sense to read an Android tutorial. The notepad tutorial seems to fit the bill: <http://developer.android.com/guide/tutorials/notepad/index.html> | You can use the `Cursor`'s `get*` methods to retrieve values from the result:
```
long id = cursor.getLong(cursor.getColumnIndex("_id"));
String title = cursor.getString(cursor.getColumnIndex("title"));
...
```
Better practice is obviously to use constants (often provided by ContentProviders) instead of calls to `getColumnIndex` with hardcoded strings. | Get the field value with a Cursor | [
"",
"java",
"android",
"android-sqlite",
""
] |
I want to programmatically edit python source code. Basically I want to read a `.py` file, generate the [AST](http://en.wikipedia.org/wiki/Abstract_syntax_tree), and then write back the modified python source code (i.e. another `.py` file).
There are ways to parse/compile python source code using standard python modules, such as [`ast`](http://docs.python.org/library/ast.html) or [`compiler`](http://docs.python.org/library/compiler.html). However, I don't think any of them support ways to modify the source code (e.g. delete this function declaration) and then write back the modifying python source code.
UPDATE: The reason I want to do this is I'd like to write a [Mutation testing library](http://en.wikipedia.org/wiki/Mutation_testing) for python, mostly by deleting statements / expressions, rerunning tests and seeing what breaks. | [Pythoscope](https://github.com/mkwiatkowski/pythoscope) does this to the test cases it automatically generates as does the [2to3](http://docs.python.org/library/2to3.html) tool for python 2.6 (it converts python 2.x source into python 3.x source).
Both these tools use the [lib2to3](http://svn.python.org/projects/python/trunk/Lib/lib2to3/) library, which is an implementation of the python parser/compiler machinery that can preserve comments in source when it's round-tripped from source -> AST -> source.
The [rope project](https://github.com/python-rope/rope) may meet your needs if you want to do more refactoring like transforms.
The [ast](http://docs.python.org/library/ast.html) module is your other option, and [there's an older example of how to "unparse" syntax trees back into code](https://svn.python.org/projects/python/trunk/Demo/parser/unparse.py) (using the parser module). But the `ast` module is more useful when doing an AST transform on code that is then transformed into a code object.
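Since Python 3.9 the standard library can do the unparsing step itself via `ast.unparse`, so a minimal round trip needs no third-party packages:

```python
import ast

source = "def foo():\n    print('hello world')\n"
tree = ast.parse(source)

# Swap foo's body for `return 42` (ast.parse happily parses a bare
# return statement when it is only building the AST).
tree.body[0].body = [ast.parse("return 42").body[0]]

new_source = ast.unparse(tree)
```

As with codegen, the source is regenerated from the tree, so original formatting and comments are lost.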
The [redbaron](https://redbaron.readthedocs.org/en/latest/) project also may be a good fit (ht Xavier Combelle) | The builtin ast module doesn't seem to have a method to convert back to source. However, the [codegen](https://pypi.python.org/pypi/codegen/1.0) module here provides a pretty printer for the ast that would enable you to do so.
eg.
```
import ast
import codegen
expr = """
def foo():
    print("hello world")
"""

p = ast.parse(expr)
p.body[0].body = [ast.parse("return 42").body[0]]  # Replace function body with "return 42"
print(codegen.to_source(p))
```
This will print:
```
def foo():
    return 42
```
Note that you may lose the exact formatting and comments, as these are not preserved.
However, you may not need to. If all you require is to execute the replaced AST, you can do so simply by calling compile() on the ast, and execing the resulting code object. | Parse a .py file, read the AST, modify it, then write back the modified source code | [
"",
"python",
"compiler-construction",
"abstract-syntax-tree",
""
] |
I want to make sure a string has only characters in this range
[a-z] && [A-Z] && [0-9] && [-]
so all letters and numbers plus the hyphen.
I tried this...
C# App:
```
char[] filteredChars = { ',', '!', '@', '#', '$', '%', '^', '&', '*', '(', ')', '_', '+', '=', '{', '}', '[', ']', ':', ';', '"', '\'', '?', '/', '.', '<', '>', '\\', '|' };
string s = str.TrimStart(filteredChars);
```
This TrimStart() only seems to work with letters, not other characters like $ % etc.
Did I implement it wrong?
Is there a better way to do it?
I just want to avoid looping through each string character by character, because there will be a lot of strings to process...
Thoughts?
Thanks! | This seems like a perfectly valid reason to use a [regular expression](http://www.regular-expresions.info).
```
bool stringIsValid = Regex.IsMatch(inputString, @"^[a-zA-Z0-9\-]*?$");
```
In response to miguel's comment, you could do this to remove all unwanted characters:
```
string cleanString = Regex.Replace(inputString, @"[^a-zA-Z0-9\-]", "");
```
Note that the caret (`^`) is now placed *inside* the character class, thus negating it (matching any non-allowed character). | Here's a fun way to do it with LINQ - no ugly loops, no complicated RegEx:
```
private string GetGoodString(string input)
{
    var allowedChars =
        Enumerable.Range('0', 10).Concat(
        Enumerable.Range('A', 26)).Concat(
        Enumerable.Range('a', 26)).Concat(
        Enumerable.Range('-', 1));

    var goodChars = input.Where(c => allowedChars.Contains(c));
    return new string(goodChars.ToArray());
}
```
Feed it "Hello, world? 123!" and it will return "Helloworld123". | Filter a String | [
"",
"c#",
"string",
"filtering",
""
] |
I have the following code (calling a method returning IEnumerable):
```
FibonacciWithLinq.Skip(delegate() {
return 5;
}
);
```
The parameter for Skip takes an int (the number of items to skip). I could have a separate function which would determine this number (it may be dependent on some factors). However, if the method is going to be consumed by the Skip() method, then it would be good to write it as an anon method. Problem is, in the above code, I get this error:
The type arguments cannot be inferred from the usage. Try specifying the type arguments explicitly.
But I can't see anything wrong with the above code. What can I do? I'm pretty sure an anon method can be written for the above scenario?
Thanks | There is no automatic conversion between a "value" and "a function with no arguments that returns a value" (let's call it a `Func<T>`), nor is there an automatic conversion in the opposite direction.
To convert a `Func<T>` into a value, `T`, you have to put `()` after it. To convert a value into a `Func<T>`, you can put `() =>` in front of it.
If you want to call some method that accepts a `Func<T>`, you cannot directly pass it a value, so you have to convert that value into a `Func<T>`, and to do that you can put `() =>` in front of it.
If you want to call some method that accepts a value, you cannot directly pass it a `Func<T>`, so you have to convert that `Func<T>` into a value by putting `()` after it.
Some languages automatically convert between these things, but C# doesn't.
In your case, you have a function that accepts a value, and you're trying to pass it a function, even though you already have a value, so you don't need to do anything special apart from give the value to the function.
```
int val = 5; // value
Func<int> f = () => val; // convert value to function
int val2 = f(); // convert back to value
```
The "anonymous methods" syntax is just the ugly old way of doing this. There are two problems with what you're trying to do (aside from the fact that it's unnecessary).
Firstly, you need to give the compiler a type hint by explicitly stating the delegate type with a `new Func<int>(...)` wrapper.
Secondly, you need to add the `()` after it to get the value.
```
FibonacciWithLinq.Skip(new Func<int>
(delegate()
{
return 5;
})());
```
But it can't be stressed enough - this is completely pointless. | If you have something that would determine how many to skip, simply write:
```
FibonacciWithLinq.Skip(HowManyToSkip())
```
Or do you want it parameterized in a different way, such as this:
```
Func<int, IEnumerable<int>> skipper = x => FibonacciWithLinq.Skip(x);
foreach (var i in skipper(5)) Console.WriteLine(i);
```
The code in your question is passing in a method instead of a constant int value, which is what Skip wants. Perhaps you want to skip values returned by the sequence: if so, the Except() extension method is what you want:
```
FibonacciWithLinq.Except(x => (x == 5 || x == 7));
```
Note the lambda syntax is just short for:
```
FibonacciWithLinq.Except(delegate(int x) { return x==5||x==7; });
``` | Writing an anonymous method for a method in IEnumerable | [
"",
"c#",
"linq",
""
] |
In SQL Server 2005+ (I use both), does adding the `UNIQUE` constraint to a column automatically create an index, or should I still `CREATE INDEX`? | See this [MSDN article](http://msdn.microsoft.com/en-us/library/ms177420.aspx):
> The Database Engine automatically
> creates a UNIQUE index to enforce the
> uniqueness requirement of the UNIQUE
> constraint.
If you do create an index, you'll end up with two indexes, as this example demonstrates:
```
create table TestTable (id int)
alter table TestTable add constraint unique_id unique (id)
create unique index ix_TestTable_id on TestTable (id)
select * from sys.indexes where [object_id] = object_id('TestTable')
```
This will display two unique indexes on TestTable; and the HEAP that represents the table itself. | Yes, it does.
In fact, you can even create a `CLUSTERED UNIQUE CONSTRAINT`:
```
ALTER TABLE mytable ADD CONSTRAINT UX_mytable_col1 UNIQUE CLUSTERED (col1)
```
, which will make the table to be clustered on `col1`.
Almost all databases create an index for `UNIQUE CONSTRAINT`, otherwise it would be very hard to maintain it.
`Oracle` doesn't even distinguish between `UNIQUE CONSTRAINT` and `UNIQUE INDEX`: one command is just a synonym for another.
The only difference in `Oracle` is that a `UNIQUE INDEX` should have a user-supplied name, while a `UNIQUE CONSTRAINT` may be created with a system-generated name:
```
ALTER TABLE mytable MODIFY col1 UNIQUE
```
This will create an index called `SYS_CXXXXXX`. | Does making a column unique force an index to be created? | [
"",
"sql",
"sql-server",
""
] |
I'm writing a 7 card poker hand evaluator as one of my pet projects. While trying to optimize its speed (I like the challenge), I was shocked to find that the performance of Dictionary key lookups was quite slow compared to array index lookups.
For example, I ran this sample code that enumerates over all 52 choose 7 = 133,784,560 possible 7 card hands:
```
var intDict = new Dictionary<int, int>();
var intList = new List<int>();
for (int i = 0; i < 100000; i++)
{
    intDict.Add(i, i);
    intList.Add(i);
}
int result;
var sw = new Stopwatch();
sw.Start();
for (int card1 = 0; card1 < 46; card1++)
    for (int card2 = card1 + 1; card2 < 47; card2++)
        for (int card3 = card2 + 1; card3 < 48; card3++)
            for (int card4 = card3 + 1; card4 < 49; card4++)
                for (int card5 = card4 + 1; card5 < 50; card5++)
                    for (int card6 = card5 + 1; card6 < 51; card6++)
                        for (int card7 = card6 + 1; card7 < 52; card7++)
                            result = intDict[32131]; // perform C(52,7) dictionary key lookups
sw.Stop();
Console.WriteLine("time for dictionary lookups: {0} ms", sw.ElapsedMilliseconds);
sw.Reset();
sw.Start();
for (int card1 = 0; card1 < 46; card1++)
    for (int card2 = card1 + 1; card2 < 47; card2++)
        for (int card3 = card2 + 1; card3 < 48; card3++)
            for (int card4 = card3 + 1; card4 < 49; card4++)
                for (int card5 = card4 + 1; card5 < 50; card5++)
                    for (int card6 = card5 + 1; card6 < 51; card6++)
                        for (int card7 = card6 + 1; card7 < 52; card7++)
                            result = intList[32131]; // perform C(52,7) array index lookups
sw.Stop();
Console.WriteLine("time for array index lookups: {0} ms", sw.ElapsedMilliseconds);
```
which outputs:
```
time for dictionary lookups: 2532 ms
time for array index lookups: 313 ms
```
Is this type of behavior expected (performance decrease by a factor of 8)? IIRC, a Dictionary has, on average, O(1) lookups, while an array has worst-case O(1) lookups, so I do expect the array lookups to be faster, but not by this much!
I am currently storing poker hand rankings in a Dictionary. I suppose if this is as fast as the dictionary lookups can be, I have to rethink my approach and use arrays instead, although indexing the rankings will get a little tricky and I'll probably have to ask another question about it. | Don't forget that Big-O notation only says how the complexity grows with respect to the size (etc) - it doesn't give any indication of the constant factors involved. That's why sometimes even a linear *search* for keys is faster than a dictionary lookup, when there are sufficiently few keys. In this case you're not even doing a search with the array though - just a straight indexing operation.
For straight index lookups, arrays are basically ideal - it's just a case of
```
pointer_into_array = base_pointer + offset * size
```
(And then a pointer dereference.)
Performing a dictionary lookup is relatively complicated - very fast compared with (say) a linear lookup by key when there are lots of keys, but much more complicated than a straight array lookup. It has to calculate the hash of the key, then work out which bucket that should be in, possibly deal with duplicate hashes (or duplicate buckets) and then check for equality.
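You can make those extra steps visible by instrumenting a key type so it reports whenever it is hashed or compared. This sketch is Python rather than C#, but every hash table does the equivalent work (the class name is mine):

```python
class NoisyKey:
    """A dict key that counts how often the hash machinery touches it."""
    calls = {"hash": 0, "eq": 0}

    def __init__(self, value):
        self.value = value

    def __hash__(self):
        NoisyKey.calls["hash"] += 1
        return hash(self.value)

    def __eq__(self, other):
        NoisyKey.calls["eq"] += 1
        return self.value == other.value

d = {NoisyKey(i): i for i in range(100)}
NoisyKey.calls = {"hash": 0, "eq": 0}

value = d[NoisyKey(42)]  # one lookup: at least one hash + one equality test
```

A straight `some_list[42]`, by contrast, triggers neither a hash nor an equality test.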
As always, choose the right data structure for the job - and if you really can get away with just indexing into an array (or `List<T>`) then yes, that will be blindingly fast. | > Is this type of behavior expected (performance decrease by a factor of 8)?
Why not? Each array lookup is almost instantaneous/negligible, whereas a dictionary lookup may need at least an extra subroutine call.
The point of their both being O(1) is that even if you have 50 times more items in each collection, the performance decrease is still only a factor of whatever it is (8).
"",
"c#",
".net",
"performance",
"poker",
""
] |
I am creating an HttpWebRequest object from another aspx page to save the response stream to my data store. The URL I am using to create the HttpWebRequest object has a query string to render the correct output. When I browse to the page using any old browser it renders correctly. When I try to retrieve the output stream using HttpWebResponse.GetResponseStream(), it renders my built-in error check.
Why would it render correctly in the browser, but not using the HttpWebRequest and HttpWebResponse objects?
Here is the source code:
Code behind of target page:
```
protected void Page_Load(object sender, EventArgs e)
{
    string output = string.Empty;

    if (Request.QueryString["a"] != null)
    {
        // generate output
        output = "The query string value is " + Request.QueryString["a"].ToString();
    }
    else
    {
        // generate message indicating the query string variable is missing
        output = "The query string value was not found";
    }

    Response.Write(output);
}
```
Code behind of page creating HttpWebRequest object
```
string url = "http://www.mysite.com/mypage.aspx?a=1";
HttpWebRequest request = (HttpWebRequest) WebRequest.Create(url)
//this if statement was missing from original example
if(User.Length > 0)
{
request.Credentials = new NetworkCredentials("myaccount", "mypassword", "mydomain");
request.PreAuthenticate = true;
}
request.UserAgent = Request.UserAgent;
HttpWebResponse response = (HttpWebResponse) request.GetResponse();
Stream resStream = response.GetResponseStream();
Encoding encode = System.Text.Encoding.GetEncoding("utf-8");
StreamReader readStream = new StreamReader(resStream, encode, true, 2000);
char[] read = new char[256]; // buffer declaration was missing from the original sample
string result;
int count = readStream.Read(read, 0, read.Length);
string str = Server.HtmlEncode(" ");
while (count > 0)
{
// Dumps the 256 characters on a string and displays the string to the console.
string strRead = new string(read, 0, count);
str = str.Replace(str, str + Server.HtmlEncode(strRead.ToString()));
count = readStream.Read(read, 0, 256);
}
// return what was found
result = str.ToString();
resStream.Close();
readStream.Close();
```
**Update**
@David McEwing - I am creating the HttpWebRequest with the full page name. The page is still generating the error output. I updated the code sample of the target page to demonstrate exactly what I am doing.
@Chris Lively - I am not redirecting to an error page, I generate a message indicating the query string value was not found. I updated the source code example.
**Update 1:**
I tried using Fiddler to trace the HttpWebRequest and it did not show up in the Web Sessions history window. Am I missing something in my source code to get a complete web request and response?
**Update 2:**
I did not include the following section of code in my example, and it was the culprit. I was setting the `Credentials` property of the `HttpWebRequest` with a service account instead of my AD account, which was causing the issue.
I updated my source code example | I need to replace the following line of code:
```
request.Credentials = new NetworkCredential("myaccount", "mypassword", "mydomain");
```
with:
```
request.Credentials = System.Net.CredentialCache.DefaultNetworkCredentials;
``` | What webserver are you using? I remember that at one point, when doing something with IIS, there was an issue where the redirect between <http://example.com/> and <http://example.com/default.asp> dropped the query string.
Perhaps run Fiddler (or a protocol sniffer) and see if there is something happening that you aren't expecting.
Also check if passing in the full page name works. If it does the above is almost certainly the problem. | Not generating a complete response from a HttpWebResponse object in C# | [
"",
"c#",
"asp.net",
"httpwebrequest",
"query-string",
"httpwebresponse",
""
] |
I have a PHP script which changes file permissions on my server using chmod. I'd like to be able to run the script both via the browser (as nobody) and via a cron job (as username).
Is it correct that only the owner of the file can change the permissions? I.e. if I create a file via a cron job and set the permissions, I can't then change those permissions when running the script from the browser?
Are there any ways around this? Delete (unlink) and re-create the file as whatever user the script is running as? Or is there a way of running a PHP script via a cron job as nobody, or via the browser as username?
The aim is to be able to make images publicly viewable or not by changing the file permissions. | Solution 1: Create a group for both the web user and the cron user, add each user to your new group, and give both users access to read and write the file (chmod g+rw filename). (Safer than the next solution.)
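A rough shell sketch of Solution 1. The group name `webshare` and the user names are placeholders (substitute your web server's account and your cron user), and the one-time setup lines need root:

```shell
# One-time setup (as root) -- placeholder names:
#   groupadd webshare
#   usermod -a -G webshare nobody      # the web server user
#   usermod -a -G webshare username    # the cron user

# Per-file: toggle group access to flip an image between private and shared
f=$(mktemp)                  # stand-in for your image file
# chgrp webshare "$f"        # in production, put the file in the shared group
chmod 640 "$f"               # "private": owner rw, group read-only
chmod g+rw "$f"              # "shared": group members can now read/write
stat -c '%a' "$f"            # prints 660
rm -f "$f"
```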
Solution 2: The simplest way to do this is to make the file readable and writable by everybody; chmod a+rw filename would have this effect.
I would not recommend this for production usage though. | You can do this without putting a username or password in your script.
In your crontab have sudo execute the script as the user that your web server runs as. Following your example, I'll use the nobody user.
0 12 \* \* \* (sudo -u nobody php ./yourscript.php)
Note that the "nobody" user (as well as users like "apache") do not normally have login privileges. This may require you to allow sudo to execute scripts without a tty. You'll know this if you receive an error like: "sudo: sorry, you must have a tty to run sudo"
Allowing this can be done by commenting out the "Defaults requiretty" line using the visudo command. As with any change to sudo, you may want to search for any side-effects this change may come with. | php - changing file permissions | [
"",
"php",
"permissions",
""
] |
**Question 1:**
Is it possible to throw an exception that will not be caught by std::exception?
```
try
{
}
catch(std::exception & e)
{
}
catch(...)
{
//Is this block needed?
}
```
**Question 2:**
Is it better to have:
```
catch(std::exception & e)
```
Or
```
catch(std::exception e)
```
Or
```
catch(const std::exception &e)//<--- this is the method I usually use
``` | Q1: Yes. You can throw any type, not necessarily types that inherit from std::exception.
You can write `throw 1;` to throw an int or `throw "hello";` to throw a `char*`, neither of which inherits from `std::exception`. This is, however, considered bad practice because the user of the class can't anticipate what you might throw. If you don't want to inherit from `std::exception`, what you usually do is create your own exception hierarchy.
Q2: Catching an exception by value (the 2nd option) is bad practice because you force the exception instance to be copied, possibly performing allocations that may cause further exceptions, and slicing any derived exception down to the caught type.
Using the first option suggests you intend to modify `e` in the catch block, which is also something you'd probably like to avoid because exceptions are usually treated as immutable after creation. The only option left is the third. | It sure is, you can throw any type you want, and it doesn't need to be derived from `std::exception`.
Catching a `const` reference is better. The reason is that you can throw a `const` or a non-`const`, and it can be caught by a non-`const` reference, which works like silently casting away `const`. | In C++, is it possible to throw an exception that will not be caught by std::exception? | [
"",
"c++",
"exception",
""
] |
I have a PHP app that creates a CSV file which is forced to download using headers. Here's the relevant part of the code:
```
header('Content-Type: application/csv');
header("Content-length: " . filesize($NewFile));
header('Content-Disposition: attachment; filename="' . $FileName . '"');
echo $content;
exit();
```
What I'd like to do is redirect users to a new page after the file is built and the download prompt is sent. Just adding `header("Location: /newpage")` to the end didn't work, expectedly, so I'm not sure how to rig this up. | I don't think this can be done - although I am not 100% sure.
The common thing (e.g. in popular download sites) is the reverse: first you go to the *after* page and then the download starts.
So redirect your users to the *final* page that (among other things) says:
Your download should start automatically. If not click `[a href="create_csv.php"]here[/a]`.
As for initiating the download (e.g. automatically calling create_csv.php) you have many options:
* HTML: `[meta http-equiv="refresh" content="5;url=http://site/create_csv.php"]`
* Javascript: `location.href = 'http://site/create_csv.php';`
* iframe: `[iframe src="create_csv.php"][/iframe]` | It is very easy to do if it is really needed.
But you will need a bit of work in JavaScript and cookies:
In PHP you should set a cookie:
```
header('Set-Cookie: fileLoading=true');
```
Then, on the page where you trigger the download, you should check with JS (e.g. once per second) whether such a cookie has arrived (the [jQuery cookie](https://github.com/carhartl/jquery-cookie) plugin is used here):
```
setInterval(function(){
if ($.cookie("fileLoading")) {
// clean the cookie for future downloads
$.removeCookie("fileLoading");
//redirect
location.href = "/newpage";
}
},1000);
```
Now if the file starts to be downloaded, JS recognizes it and redirects to the needed page after the cookie is deleted.
Of course, this requires the browser to accept cookies, run JavaScript and so on, but it works. | PHP generate file for download then redirect | [
"",
"php",
"redirect",
"header",
""
] |
I know how to download an html/txt page. For example :
```
//Variables
DWORD dwSize = 0;
DWORD dwDownloaded = 0;
LPSTR pszOutBuffer;
vector <string> vFileContent;
BOOL bResults = FALSE;
HINTERNET hSession = NULL,
hConnect = NULL,
hRequest = NULL;
// Use WinHttpOpen to obtain a session handle.
hSession = WinHttpOpen( L"WinHTTP Example/1.0",
WINHTTP_ACCESS_TYPE_DEFAULT_PROXY,
WINHTTP_NO_PROXY_NAME,
WINHTTP_NO_PROXY_BYPASS, 0);
// Specify an HTTP server.
if (hSession)
hConnect = WinHttpConnect( hSession, L"nytimes.com",
INTERNET_DEFAULT_HTTP_PORT, 0);
// Create an HTTP request handle.
if (hConnect)
hRequest = WinHttpOpenRequest( hConnect, L"GET", L"/ref/multimedia/podcasts.html",
NULL, WINHTTP_NO_REFERER,
NULL,
NULL);
// Send a request.
if (hRequest)
bResults = WinHttpSendRequest( hRequest,
WINHTTP_NO_ADDITIONAL_HEADERS,
0, WINHTTP_NO_REQUEST_DATA, 0,
0, 0);
// End the request.
if (bResults)
bResults = WinHttpReceiveResponse( hRequest, NULL);
// Keep checking for data until there is nothing left.
if (bResults)
do
{
// Check for available data.
dwSize = 0;
if (!WinHttpQueryDataAvailable( hRequest, &dwSize))
printf( "Error %u in WinHttpQueryDataAvailable.\n",
GetLastError());
// Allocate space for the buffer.
pszOutBuffer = new char[dwSize+1];
if (!pszOutBuffer)
{
printf("Out of memory\n");
dwSize=0;
}
else
{
// Read the Data.
ZeroMemory(pszOutBuffer, dwSize+1);
if (!WinHttpReadData( hRequest, (LPVOID)pszOutBuffer,
dwSize, &dwDownloaded))
{
printf( "Error %u in WinHttpReadData.\n",
GetLastError());
}
else
{
printf("%s", pszOutBuffer);
// Data in vFileContent
vFileContent.push_back(pszOutBuffer);
}
// Free the memory allocated to the buffer.
delete [] pszOutBuffer;
}
} while (dwSize>0);
// Report any errors.
if (!bResults)
printf("Error %d has occurred.\n",GetLastError());
// Close any open handles.
if (hRequest) WinHttpCloseHandle(hRequest);
if (hConnect) WinHttpCloseHandle(hConnect);
if (hSession) WinHttpCloseHandle(hSession);
// Write vFileContent to file
ofstream out("test.txt",ios::binary);
for (int i = 0; i < (int) vFileContent.size();i++)
out << vFileContent[i];
out.close();
```
When I try to download a picture, I get only the first lines of the file and no error message. The problem seems related to this parameter (ppwszAcceptTypes) of the WinHttpOpenRequest function:
[link text](http://msdn.microsoft.com/en-us/library/aa384099(VS.85).aspx) | Looks like this thread on MSDN is the same and has the solution
<http://social.msdn.microsoft.com/forums/en-US/vclanguage/thread/45ccd91c-6794-4f9b-8f4f-865c76cc146d> | Merely opening the ofstream in binary mode does not change the way that the << operators work - they will always perform formatted output. You need to use the stream's write() function, which does unformatted output. | How to download a file with WinHTTP in C/C++? | [
"",
"c++",
"c",
"windows",
"http",
"winhttp",
""
] |
I did this:
```
this.combobox.ItemsSource = Common.Component.ModuleManager.Instance.Modules;
```
to bind the combobox to a collection which is located in another project/namespace. But I had to move the `ComboBox` into a DataTemplate.
Now I need to do something like this:
```
<ComboBox ItemsSource="{Binding Common.Component.ModuleManager.Instance.Modules}"/>
```
I don't want to list all of my tries, but none were successful.
Any better Ideas? | You need to map the .NET namespace to an XML namespace at the top of your XAML file:
```
<Window
x:Class="WindowsApplication1.Window1"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:q="clr-namespace:Common.Component">
```
So now "q" is mapped to the "Common.Component" namespace. Now you can use the x:Static markup extension to access the static "Instance" property of your ModuleManager class:
```
<ComboBox
ItemsSource="{Binding Modules,Source={x:Static q:ModuleManager.Instance}}" />
```
See if that works for you.
**Edit**
One more thing: If your "Common.Component" namespace lives in a separate assembly, you need to tell the XAML that:
```
xmlns:q="clr-namespace:Common.Component;assembly=CommonAssemblyFilename"
``` | OK, I found a workaround. There seems to be a problem when the collection is contained in another assembly.
I added a new class to the assembly of the XAML and the Binding.
```
public static class ResourceHelper
{
public static IEnumerable<Common.Component.Module> Modules = Common.Component.ModuleManager.Instance.Modules;
}
```
Then I changed the binding to
```
<ComboBox ItemsSource="{Binding Path=.,Source={x:Static e:ResourceHelper.Modules}}"/>
```
And this works fine.
Thx Matt for your help. | How to bind to a collection contained in an other project/namespace? | [
"",
"c#",
"wpf",
"data-binding",
"xaml",
""
] |
I'm using a TextWriter to write data to a text file, but if the line exceeds 1024 characters a line break is inserted, and this is a problem for me. Any suggestions on how to work around this or increase the character limit?
```
textWriter.WriteLine(strOutput);
```
Many thanks | I wrote a sample app that writes and reads a 1025-character string. The size never changes, although if I open the file with notepad.exe (Windows) I can see the extra character on a second line. This seems like a Notepad limitation. Here is my sample code:
```
static void Main(string[] args)
{
using (TextWriter streamWriter = new StreamWriter("lineLimit.txt")) {
String s=String.Empty;
for(int i=0;i<1025;i++){
s+= i.ToString().Substring(0,1);
}
streamWriter.Write(s);
streamWriter.Close();
}
using (TextReader streamReader = new StreamReader("lineLimit.txt"))
{
String s = streamReader.ReadToEnd();
streamReader.Close();
Console.Out.Write(s.Length);
}
}
``` | Use Write, not WriteLine | C# TextWriter inserting line break every 1024 characters | [
"",
"c#",
"text",
""
] |
I am working on a Greasemonkey script for Gmail where I need to make a copy of the "Inbox" link. Using cloneNode works OK, but I think there's an onclick event that gets attached to it at runtime. So, this is a two-part question:
1. Is there a way to see what events are attached to a node?
2. Is there a way to copy those events as well?
The closest thing I found was jQuery, and I am not ready to go there yet.
Thanks! | 1. Not unless it's set using the `onclick` attribute on the element.
2. Not reliably (you can copy the `onclick` attribute, but whether that will continue to work depends on if it was used and what it does).
You're better off adding your own `click` handler, and then triggering that event on the original... Or simulating the behavior in some other way. | I think we can solve this stuff using this theory:
We have NodeLists in JS, also called live lists. We can watch for changes in their length and attach the desired common events to the new element in the list at (length-1).
What do you think? | javascript cloneNode with events | [
"",
"javascript",
"gmail",
"greasemonkey",
""
] |
In Ajax-based web apps, is it mandatory to provide an alternative "html" interface for those who don't have JavaScript enabled or have slow connections?
For example, Google Mail provides both an Ajax and a plain HTML application, **but** Microsoft SharePoint doesn't.
Do we have to care about them (disabled JavaScript/slow connections) or not? | "Mandatory"? "Have to"? According to whom? There is no law saying that you must (at least in the US), unless you happen to be under a government contract that requires [Section 508](http://www.section508.gov/) compliance ([accessibility](http://www.w3.org/TR/WAI-WEBCONTENT/) to people with disabilities, such as the blind; a JavaScript solution may not work well in a screen reader) or whatever your local equivalent is.
Now, should you? Probably, yes. Do you really want to tell your users who prefer to run with JavaScript disabled that they can't use your service? Or do you want people who are running on mobile phones with slow processors and possibly worse or no JavaScript support not to be able to use your service? Consider also your blind users who may be using your site with a screen reader. Also, it's [easier for search engines](http://google.com/support/webmasters/bin/answer.py?answer=35769) to index your site if all of the content is in static versions of the page, rather than hidden in something loaded via an XMLHTTPRequest. And a well designed static website can also be much more easily adapted into a [REST](http://en.wikipedia.org/wiki/Representational_State_Transfer) type API than a dynamic JavaScript based site.
Of course, there are always some applications that just don't make sense as a static HTML page. If you're implementing a drawing app using an HTML5 `canvas`, there's really no way to make that static. But on the whole, if you can do a static version, and it doesn't add too much to the cost of the project, you probably should. In fact, some advocate that you do a plain HTML static version first, and treat the styling and JavaScript as a [progressive](http://en.wikipedia.org/wiki/Progressive_enhancement) [enhancement](http://www.alistapart.com/articles/understandingprogressiveenhancement) on the static version, rather than the primary focus. | This is really a question you need to answer for yourself. You need to be able to gauge what kind of userbase you will be turning away by not supplying/supporting people without Ajax support. In addition, you may want to consider the SEO and/or Accessibility aspects.
If this were an inhouse/controlled webapp, then I wouldn't worry too much about it. However, for a public website, I tend to program for the lowest common denominator and build up from there. | Alternative "html" interface for Ajax based web apps | [
"",
"javascript",
"ajax",
"browser",
""
] |
I was wondering if anyone knew of a way to do some client-side validation with jQuery and then run the postback event manually for an ASP.NET control?
Here's a sample master page:
```
<script type="text/javascript">
$(document).ready(function() {
$("#<%=lnkbtnSave.ClientID %>").click(function() {
alert("hello");
// Do some validation
// If validation Passes then post back to lnkbtnSave_Click Server side Event
});
});
</script>
<asp:LinkButton ID="lnkbtnSave" runat="server" onclick="lnkbtnSave_Click" ><asp:Image ID="Image3" runat="server" ImageUrl="~/images/save.gif" AlternateText="Save" />Save</asp:LinkButton>
```
Master Page Code Behind
```
public delegate void MasterPageMenuClickHandler(object sender, System.EventArgs e);
public event MasterPageMenuClickHandler MenuButton;
protected void lnkbtnSave_Click(object sender, EventArgs e)
{
// Assign value to public property
_currentButton = "Save";
// Fire event to existing delegates
OnMenuButton(e);
}
protected virtual void OnMenuButton(EventArgs e)
{
if (MenuButton != null)
{
//Invokes the delegates.
MenuButton(this, e);
}
}
```
Content Page Code behind
```
protected void Page_Load(object sender, EventArgs e)
{
Master.MenuButton += new Form.MasterPageMenuClickHandler(Master_MenuButton);
}
void Master_MenuButton(object sender, EventArgs e)
{
switch (Master.CurrentButton)
{
case "Save":
Save();
break;
case "New":
Response.Redirect("ContentPage.aspx");
break;
default:
break;
}
}
```
Also, the control lnkbtnSave is in a master page, so how would I determine which content page I'm on, since each content page will have its own controls to validate?
Thanks for any help | For the first question you should be able to just return true to cause the linkbutton to postback and return false to stop it.
```
$(document).ready(function() {
$("#<%=lnkbtnSave.ClientID %>").click(function() {
alert("hello");
var isValid = false;
// Do some validation
return isValid;
});
});
```
For your second question, you could have each page add their own version of the validation script, and have the function in the masterpage use it to determine validity:
Master Page:
```
$(document).ready(function() {
$("#<%=lnkbtnSave.ClientID %>").click(function() {
return pageIsValid();
});
});
```
Content Page:
```
function pageIsValid() {
var isValid = false;
//validation logic
return isValid;
}
``` | If you look at it in reverse it could be simplified. Instead of doing validation and then initiating the postback, you could validate and prevent the postback if needed.
```
$('#<%= lnkbtnSave.ClientID %>').click(function(e) {
if (!validate()) {
e.preventDefault(); //will stop the page from posting back.
}
});
```
If you really need to initiate the postback (and do it the other way) you could explicitly call the \_\_doPostBack() function that ASP.NET puts on the page for you after doing your validation.
```
// This does the exact same thing as above, but it explicitly calls __doPostBack
// If you use the above code __doPostBack will be called automatically for you.
$('#<%= lnkbtnSave.ClientID %>').click(function(e) {
if (validate()) {
__doPostBack('<%= lnkbtnSave.ClientID %>','');
}
e.preventDefault();
});
```
Let me know if any of this needs clarification. | Validation Before Postback Event on Masterpage Asp.net | [
"",
"asp.net",
"javascript",
"events",
"postback",
""
] |
I have several thread pools and I want my application to handle a cancel operation.
To do this I implemented a shared operation controller object which I poll at various spots in each thread pool worker function that is called.
Is this a good model, or is there a better way to do it?
I just worry about having all of these operationController.checkState() littered throughout the code. | Yes it's a good approach. Herb Sutter has a [nice article](http://www.ddj.com/architect/207100682) comparing it with the alternatives (which are worse). | With any kind of asynchronous cancellation you're going to have to periodically poll some sort of flag. There's a fundamental issue of having to keep things in a consistent state. If you just kill a thread in the middle of whatever it's doing, bad things will happen sooner or later.
Depending on what you are actually doing, you may be able to just ignore the result of the operation instead of cancelling it. You let the operation continue on, but just don't wait for it to complete and never check the result.
If you actually need to stop the operation, then you're going to have to poll at appropriate points, and do whatever cleanup is necessary. | c++ implementing cancel across thread pools | [
"",
"c++",
"multithreading",
""
] |