| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I am looping through a directory and copying all files. Right now I am doing `string.EndsWith` checks for `".jpg"` or `".png"`, etc.
Is there any more elegant way of determining if a file is an image (any image type) without the hacky check like above? | Check the file for a [known header](https://web.archive.org/web/20090302032444/http://www.mikekunz.com/image_file_header.html). (Info from link also mentioned in [this answer](https://stackoverflow.com/a/9446045/1452172))
The first eight bytes of a PNG file always contain the following (decimal) values: 137 80 78 71 13 10 26 10 | Check out [System.IO.Path.GetExtension](http://msdn.microsoft.com/en-us/library/system.io.path.getextension.aspx)
Here is a quick sample.
```
public static readonly List<string> ImageExtensions = new List<string> { ".JPG", ".JPEG", ".JPE", ".BMP", ".GIF", ".PNG" };
private void button_Click(object sender, RoutedEventArgs e)
{
    var folder = Environment.GetFolderPath(Environment.SpecialFolder.Desktop);
    var files = Directory.GetFiles(folder);

    foreach (var f in files)
    {
        if (ImageExtensions.Contains(Path.GetExtension(f).ToUpperInvariant()))
        {
            // process image
        }
    }
}
``` | determine if file is an image | [
"",
"c#",
"image",
"file",
""
] |
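The header-sniffing idea in the accepted answer above is language-agnostic. As a rough illustration, here is a Python sketch; the signature table is abbreviated and illustrative, not a complete registry of image formats:

```python
# Map a few well-known magic numbers to image types (abbreviated list).
SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": "png",   # decimal 137 80 78 71 13 10 26 10
    b"\xff\xd8\xff": "jpeg",
    b"GIF87a": "gif",
    b"GIF89a": "gif",
    b"BM": "bmp",
}

def sniff_image_type(data):
    """Return the image type if the byte prefix matches a known signature."""
    for magic, kind in SIGNATURES.items():
        if data.startswith(magic):
            return kind
    return None
```

Unlike an extension check, this survives misnamed files, at the cost of having to open and read the first few bytes of each one.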
I found <http://www.iseriespython.com/>, which is a version of Python for the iSeries apparently including some system specific data access classes. I am keen to try this out, but will have to get approval at work to do so. My questions are:
Does the port work well, or are there limits to what the interpreter can handle compared with standard Python implementations?
Does the iSeries database access layer work well, creating usable objects from table definitions? | From what I have seen so far, it works pretty well. Note that I'm using iSeries Python 2.3.3. The fact that strings are natively EBCDIC can be a problem; it's definitely one of the reasons many third-party packages won't work as-is, even if they are pure Python. (In some cases they can be tweaked and massaged into working with judicious use of encoding and decoding.) Supposedly 2.5 uses ASCII natively, which would in principle improve compatibility, but I have no way to test this because I'm on a too-old version of OS/400.
Partly because of EBCDIC and partly because OS/400 and the QSYS file system are neither Unix-like nor Windows-like, there are some pieces of the standard library that are not implemented or are imperfectly implemented. How badly this would affect you depends on what you're trying to do.
On the plus side, the iSeries-specific features work quite well. It's very easy to work with physical files as well as stream files. Calling CL or RPG programs from Python is fairly painless. On balance, I find iSeries Python to be highly usable and very worthwhile.
**Update (2012):** A lot of work has gone into iSeries Python since this question was asked. [Version 2.7](http://sourceforge.net/projects/iseriespython/) is now available, meaning it's up-to-date as far as 2.x versions go. A few participants of the forum are reasonably active and provide amazing support. One of them has gotten Django working on the i. As expected, the move to native ASCII strings solves a lot of the EBCDIC problems and greatly increases compatibility with third-party packages. I enthusiastically recommend iSeries Python 2.7 for anyone on V5R3 or later. (I still strongly recommend iSeries Python 2.3.3 for those who are on earlier versions of the operating system.)
**Update (2021):** Unfortunately, iSeriesPython is no longer maintained, and the old website and forum are gone. You can still get the software from its SourceForge repository, and it is still an amazingly useful and worthwhile asset for those who are stuck on old (pre-7.2) versions of the operating system. For those who are on 7.2 or newer, there is a Python for PASE from IBM, which should be considered the preferred way to run Python on the midrange platform. This version of Python is part of a [growing ecosystem of open source software on IBM i](https://ibmi-oss-docs.readthedocs.io/en/latest/yum/README.html). | It sounds like it would work as expected. Support for other libraries might be pretty limited, though.
Timothy Prickett talks about some Python ports for the iSeries in this article:
<http://www.itjungle.com/tfh/tfh041706-story02.html>
Also, some discussion popped up in the Python mailing archives:
<http://mail.python.org/pipermail/python-list/2004-January/245276.html> | Has anyone here tried using the iSeries Python port? | [
"",
"python",
"ibm-midrange",
""
] |
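The EBCDIC string issue described in the accepted answer above is easy to reproduce on any Python build, since the standard library ships a `cp037` (US EBCDIC) codec:

```python
# On iSeries Python 2.3.3 strings were natively EBCDIC; on any other
# Python you can see the mismatch via the cp037 (US EBCDIC) codec.
text = "HELLO"
ebcdic_bytes = text.encode("cp037")

# The byte values are completely different from ASCII...
assert ebcdic_bytes != text.encode("ascii")

# ...but the round-trip through the codec is lossless.
assert ebcdic_bytes.decode("cp037") == text
```

This is why pure-Python third-party packages that assume ASCII byte values can still break on an EBCDIC-native interpreter.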
I have a web application that processes events and audio received from a specialised microphone. The audio is processed by a Java applet that runs in the web page, but other events (microphone connected, microphone disconnected, microphone button pressed) are handled by an ActiveX object.
The ActiveX object traps these events and calls JavaScript code to handle them
```
<!-- Load the ActiveX control -->
<object id="PhilipsSpeechMikeCtrl" width="0" height="0" tabindex="-1"
        classid="CLSID:AAA44754-CC81-4692-91AF-7064E58EB22A"
        standby="Loading Philips SpeechMike component..."
        type="application/x-oleobject">
</object>
<script type="text/javascript">
    // This is Microsoft's JavaScript way of trapping ActiveX object events.
    function PhilipsSpeechMikeCtrl::SPMEventDeviceConnected(deviceID) {
        // Call JavaScript code to handle the microphone connected event
    }

    function PhilipsSpeechMikeCtrl::SPMEventDeviceDisconnected(deviceID) {
        // Call JavaScript code to handle the microphone disconnected event
    }

    function PhilipsSpeechMikeCtrl::SPMEventButton(deviceID, eventId) {
        // Call JavaScript code to handle the microphone button pressed event
    }
</script>
```
Of course a problem with this approach is that it's completely IE dependent. I would prefer to load the ActiveX object within the applet, trap the events there and handle the events either within the applet, or JavaScript code called from the applet. This should then enable me to run the app in any browser that supports applets.
I'm not entirely sure how to go about implementing the solution I've proposed above, any suggestions?
**Update:** I realise the solution I've proposed above would still be IE dependent, that's fine. My immediate goal is to support all browsers on Windows.
It has been suggested that instead of using ActiveX, I could use JNI (or JNA) to access the DLLs underlying the ActiveX object. However, I don't actually want to call the functions in the DLLs, I want the DLLs to call me, i.e. register an event handler.
Thanks,
Don | ActiveX is not supported by any browser other than IE, so there is no way for your application to support all browsers, even on Windows only.
An attempt (a plugin) to port ActiveX to Firefox 1 was made, but it wasn't really useful, so as far as I know there is today no "emulation" solution to your question.
Sorry...
(see [here](http://support.mozilla.com/en-US/kb/ActiveX) for Mozilla comments) | [JACOB](http://danadler.com/jacob/) is supposed to let you call COM from Java. It looks like it supports events too. | load ActiveX object in Applet | [
"",
"java",
"activex",
"applet",
"cross-browser",
""
] |
Trying out PDO for the first time.
```
$dbh = new PDO("mysql:host=$hostname;dbname=animals", $username, $password);
$stmt = $dbh->query("SELECT * FROM animals");
$stmt->setFetchMode(PDO::FETCH_INTO, new animals);

foreach ($stmt as $animals)
{
    echo $animals->name;
}
```
If I skip the `setFetchMode()` method, then I need to call `$animals["name"]`, which I do not want.
But I do not want to call the `setFetchMode()` for each query I do.
Is there a way to set the default fetch mode? Or some other method to make `query()` return objects with one global setting? | Perhaps you could try extending the PDO class to automatically call the function for you... in brief:
```
class myPDO extends PDO
{
    function animalQuery($sql)
    {
        $result = parent::query($sql);
        $result->setFetchMode(PDO::FETCH_INTO, new animals);
        return $result;
    }

    // // useful if you have different classes
    // function vegetableQuery($sql)
    // {
    //     $result = parent::query($sql);
    //     $result->setFetchMode(PDO::FETCH_INTO, new vegetables);
    //     return $result;
    // }
}

$dbh = new myPDO("mysql:host=$hostname;dbname=animals", $username, $password);
$stmt = $dbh->animalQuery("SELECT * FROM animals");

foreach ($stmt as $animals)
{
    echo $animals->name;
}
``` | Since PDO would need to know what object you want to fetch into, you would need to specify it manually.
But if you just want to use the object to retrieve the data rather than an array, and do not care that it's not an animals object, you can use anonymous objects by default by setting the attribute after the connection string, which could be done in a wrapped constructor:
```
$connection = new PDO($connection_string);
//PDO::FETCH_OBJ: returns an anonymous object with property names that correspond to the column names returned in your result set
$connection->setAttribute(PDO::ATTR_DEFAULT_FETCH_MODE, PDO::FETCH_OBJ);
```
Then all queries will return objects. Though it's not exactly what you want its close.
---
You could also inject the data into your animal class:
```
while ($dataObj = ...) {
    $animal = new Animal($dataObj);
}
```
---
If you look at the query function it is possible to change some options by passing extra parameters:
<http://www.php.net/manual/en/pdo.query.php>
I haven't tested it, but it looks like it gets you close to what you want. | How can I simply return objects in PDO? | [
"",
"php",
"object",
"pdo",
""
] |
I have an application where I sometimes need to read from a file that is being written to, and as a result is locked. As I have understood from other [questions](https://stackoverflow.com/questions/50744/wait-until-file-is-unlocked-in-net), I should catch the IOException and retry until I can read.
But my question is: how do I know for certain that the file is locked, and that it is not another IOException that occurred? | When you open a file for reading in .NET, it will at some point try to create a file handle using the [CreateFile](http://msdn.microsoft.com/en-us/library/aa363858(VS.85).aspx) API function, which sets the [error code](http://help.netop.com/support/errorcodes/win32_error_codes.htm) that can be used to see why it failed:
```
const int ERROR_SHARING_VIOLATION = 32;

try
{
    using (var stream = new FileStream("test.dat", FileMode.Open, FileAccess.Read, FileShare.Read))
    {
    }
}
catch (IOException ex)
{
    if (Marshal.GetLastWin32Error() == ERROR_SHARING_VIOLATION)
    {
        Console.WriteLine("The process cannot access the file because it is being used by another process.");
    }
}
``` | There's a useful discussion on [google groups](http://groups.google.com/group/microsoft.public.dotnet.languages.csharp/browse_thread/thread/e1c077cddc823339/b0d9b54636e72ea1) which you really should read. One of the options is close to darin's; however, to guarantee you get the right win32 error, you really should call the win32 OpenFile() API yourself (otherwise, you really don't know which error you are retrieving).
Another is to parse the error message: that will fail if your application is run on another language version.
A third option is to hack inside the exception class with reflection to fish out the actual HRESULT.
None of the alternatives are really that attractive: the IOException hierarchy would benefit from a few more subclasses IMHO. | how to best wait for a filelock to release | [
"",
"c#",
".net",
"ioexception",
"filelock",
""
] |
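The retry-until-unlocked pattern that both answers above assume can be sketched generically. Here it is in Python; the helper name and the use of `OSError` are illustrative placeholders, and in .NET you would filter on the sharing-violation error code as the accepted answer shows:

```python
import time

def read_with_retry(open_fn, attempts=5, delay=0.01):
    """Call open_fn(), retrying on OSError (e.g. a sharing violation).

    Re-raises the last error if every attempt fails.
    """
    last_error = None
    for _ in range(attempts):
        try:
            return open_fn()
        except OSError as exc:
            last_error = exc
            time.sleep(delay)  # back off before the next attempt
    raise last_error
```

Bounding the attempts (rather than looping forever) keeps a permanently locked file from hanging the caller.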
I am trying to display a please-wait dialog for a long running operation. The problem is that, since this is single threaded, even though I tell the WaitScreen to display, it never does. Is there a way I can change the visibility of that screen and make it display immediately? I included the Cursor call as an example. Right after I call this.Cursor, the cursor is updated immediately. This is exactly the behavior I want.
```
private void Button_Click(object sender, RoutedEventArgs e)
{
    this.Cursor = System.Windows.Input.Cursors.Pen;
    WaitScreen.Visibility = Visibility.Visible;

    // Do something long here
    for (Int32 i = 0; i < 100000000; i++)
    {
        String s = i.ToString();
    }

    WaitScreen.Visibility = Visibility.Collapsed;
    this.Cursor = System.Windows.Input.Cursors.Arrow;
}
```
WaitScreen is just a Grid with a Z-index of 99 that I hide and show.
**Update:** I really don't want to use a background worker unless I have to. There are a number of places in the code where this start and stop will occur. | I found a way! Thanks to [this thread](http://social.msdn.microsoft.com/Forums/en-US/wpf/thread/6fce9b7b-4a13-4c8d-8c3e-562667851baa/).
```
public static void ForceUIToUpdate()
{
    DispatcherFrame frame = new DispatcherFrame();
    Dispatcher.CurrentDispatcher.BeginInvoke(DispatcherPriority.Render, new DispatcherOperationCallback(delegate(object parameter)
    {
        frame.Continue = false;
        return null;
    }), null);
    Dispatcher.PushFrame(frame);
}
```
That function needs to be called right before the long running operation. That will then force the UI thread to update. | Doing it single threaded really is going to be a pain, and it'll never work as you'd like. The window will eventually go black in WPF, and the program will change to "Not Responding".
I would recommending using a BackgroundWorker to do your long running task.
It's not that complicated. Something like this would work.
```
private void DoWork(object sender, DoWorkEventArgs e)
{
    // Do the long running process
}

private void WorkerCompleted(object sender, RunWorkerCompletedEventArgs e)
{
    // Hide your wait dialog
}

private void StartWork()
{
    // Show your wait dialog
    BackgroundWorker worker = new BackgroundWorker();
    worker.DoWork += DoWork;
    worker.RunWorkerCompleted += WorkerCompleted;
    worker.RunWorkerAsync();
}
```
You can then look at the ProgressChanged event to display a progress if you like (remember to set WorkerReportsProgress to true). You can also pass a parameter to RunWorkerAsync if your DoWork methods needs an object (available in e.Argument).
This really is the simplest way, rather than trying to do it singled threaded. | Display "Wait" screen in WPF | [
"",
"c#",
".net",
"wpf",
"multithreading",
"xaml",
""
] |
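The BackgroundWorker pattern recommended above (run the long job off the UI thread, then fire a completion callback) can be sketched with Python's `threading` module; this is a generic illustration, not WPF-specific code, and the helper name is hypothetical:

```python
import threading

def run_in_background(work, on_completed):
    """Run work() on a worker thread and hand its result to on_completed."""
    def worker():
        result = work()
        # A real GUI framework would marshal this call back to the UI
        # thread (the role RunWorkerCompleted plays in .NET).
        on_completed(result)

    t = threading.Thread(target=worker)
    t.start()
    return t
```

The key point is the same as in the answer: the UI thread is never blocked, and the "hide the wait dialog" step runs only after the work finishes.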
I have this class where I am using a combination of jQuery and [Prototype](http://en.wikipedia.org/wiki/Prototype_JavaScript_Framework):
```
var MyClass = Class.create({
    initElements: function(sumEl) {
        this.sumEl = sumEl;
        sumEl.keyup(this.updateSumHandler);
    },

    updateSumHandler: function(event) {
        // Throws error here: "this.updateSum is not a function"
        this.updateSum();
    },

    updateSum: function() {
        // does something here
    }
});
```
How can I call `this.updateSum()` after all? | You need to use closures.
```
initElements: function(sumEl) {
    this.sumEl = sumEl;
    var ref = this;
    sumEl.keyup(function() { ref.updateSumHandler(); });
},
``` | Totally untested suggestion:
```
sumEl.keyup(this.updateSumHandler.bind(this));
```
`.bind()` gives back a new function where the first parameter of `bind` is closured for you as the function's `this` context.
It can also closure parameters, check out the [documentation](http://www.prototypejs.org/api/function/bind).
To me, `Function.bind()` is the single best function ever written in JavaScript :) | Overwritten "this" variable problem or how to call a member function? | [
"",
"javascript",
"jquery",
"oop",
"event-handling",
"prototypejs",
""
] |
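For contrast with the JavaScript problem above: Python's bound methods carry their receiver with them, which is exactly what `Function.bind` adds to JavaScript. A small illustration with a hypothetical class:

```python
class Summer:
    def __init__(self):
        self.total = 0

    def add(self, n):
        self.total += n

s = Summer()
callback = s.add   # a bound method: the receiver travels with the function
callback(5)        # no 'this'-style rebinding problem when used as a callback
assert s.total == 5
```

Passing `s.add` to an event system works out of the box, where passing `this.updateSumHandler` in JavaScript loses the receiver.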
Is it bad practice to use the following format when `my_var` can be None?
```
if my_var and 'something' in my_var:
    # do something
```
*The issue is that `'something' in my_var` will throw a TypeError if my\_var is None.*
Or should I use:
```
if my_var:
    if 'something' in my_var:
        # do something
```
or
```
try:
    if 'something' in my_var:
        # do something
except TypeError:
    pass
```
To rephrase the question, which of the above is the best practice in Python (if any)?
Alternatives are welcome! | It's safe to depend on the order of conditionals ([Python reference here](http://docs.python.org/library/stdtypes.html#boolean-operations-and-or-not)), specifically because of the problem you point out - it's very useful to be able to short-circuit evaluation that could cause problems in a string of conditionals.
This sort of code pops up in most languages:
```
IF exists(variable) AND variable.doSomething()
THEN ...
``` | Yes it is safe, it's explicitly and very clearly defined in the language reference:
> The expression `x and y` first evaluates
> `x`; if `x` is `false`, its value is
> returned; otherwise, `y` is evaluated
> and the resulting value is returned.
>
> The expression `x or y` first evaluates
> `x`; if `x` is true, its value is
> returned; otherwise, `y` is evaluated
> and the resulting value is returned. | Is it safe to rely on condition evaluation order in if statements? | [
"",
"python",
"if-statement",
""
] |
I have a file (image) upload script in PHP that I use to upload and resize images... It uses a simple MIME type and size validation so only jpg images are allowed and 1MB max file size.
I recently discovered a problem. When I try to upload a .avi file using the script, the script processes the file as if it had the correct MIME type and size and then just does nothing; it takes me back to the upload form without any error message (instead of showing a "file too big" message).
I mean, if I try to upload a .gif or .txt or something else I get an error, as expected.
If I try to upload any file bigger than 1MB I get an error, as expected.
Only when I try to upload a .avi file larger than 1MB do I get no error at all.
Well, here is the first part of the code:
```
// define a constant for the maximum upload size
define ('MAX_FILE_SIZE', 1024000);

if (array_key_exists('upload', $_POST)) {
    // define constant for upload folder
    define('UPLOAD_DIR', 'C:/Wamp/www/Version-1.4/posters_uploaded/');

    // replace any spaces in original filename with underscores. At the same time, assign to a simpler variable
    $file = str_replace(' ', '_', $_FILES['image']['name']);

    // convert the maximum size to KB
    $max = number_format(MAX_FILE_SIZE/1024, 1).'kb';

    // create an array of permitted MIME types
    $permitted = array('image/jpeg','image/pjpeg');

    // begin by assuming the file is unacceptable
    $sizeOK = false;
    $typeOK = false;

    // check that file is within the permitted size
    if ($_FILES['image']['size'] > 0 && $_FILES['image']['size'] <= MAX_FILE_SIZE) {
        $sizeOK = true;
    }

    // check that file is of a permitted MIME type
    foreach ($permitted as $type) {
        if ($type == $_FILES['image']['type']) {
            $typeOK = true;
            break;
        }
    }

    if ($sizeOK && $typeOK) {
        switch($_FILES['image']['error']) {
            case 0: // ...................
```
I'm just modifying prebuilt PHP code, so I'm no expert...
Any suggestions?
Thanks. | As John Rasch mentioned above, any file above the php.ini `upload_max_filesize` will not be processed at all, so you'll have no chance to test the error. You have to assume it was not uploaded and validate it if it was.
---
Now that I understand your scenario better, I think this is what you can do:
```
// at the top of your script
$upload_success = FALSE;

// when an upload is successfully detected
$upload_success = TRUE;

// if the successful-upload code is never run
$display_error = "File not uploaded; it may be too large. "
               . "Please upload a file smaller than 1MB.";
print $display_error;
```
---
The main point being:
You can't always detect uploaded files that are too big, because they get cut off at a level deeper than where the scripts run. | <https://www.php.net/manual/en/features.file-upload.common-pitfalls.php>
It looks like your `upload_max_filesize` ini-setting is too low. This would cause no error to be displayed when you upload a very large file such as an AVI video.
The reason you're seeing the errors with text files and .jpg images is likely because those files are larger than 1 MB but still below your `upload_max_filesize` setting in *php.ini*.
Try `echo`ing the value of `ini_get("upload_max_filesize")` to see what the value is if you don't have access to the *php.ini* file directly. | PHP upload code problem with permitted MIME file types | [
"",
"php",
"image-processing",
"image-manipulation",
"file-upload",
""
] |
Say you have a javascript object like this:
```
var data = { foo: 'bar', baz: 'quux' };
```
You can access the properties by the property name:
```
var foo = data.foo;
var baz = data["baz"];
```
But is it possible to get these values if you don't know the name of the properties? Does the unordered nature of these properties make it impossible to tell them apart?
In my case I'm thinking specifically of a situation where a function needs to accept a series of name-value pairs, but the names of the properties may change.
My thoughts on how to do this so far is to pass the names of the properties to the function along with the data, but this feels like a hack. I would prefer to do this with introspection if possible. | Old versions of JavaScript (< ES5) require using a `for..in` loop:
```
for (var key in data) {
    if (data.hasOwnProperty(key)) {
        // do something with key
    }
}
```
ES5 introduces [Object.keys](https://developer.mozilla.org/en/JavaScript/Reference/Global_Objects/Object/keys) and [Array#forEach](https://developer.mozilla.org/en/JavaScript/Reference/Global_Objects/Array/forEach) which makes this a little easier:
```
var data = { foo: 'bar', baz: 'quux' };

Object.keys(data); // ['foo', 'baz']
Object.keys(data).map(function(key){ return data[key] }) // ['bar', 'quux']

Object.keys(data).forEach(function (key) {
    // do something with data[key]
});
```
[ES2017](https://github.com/tc39/proposals/blob/cba6aa3c8ace8d87c6fb8a0e9b9262240815ce34/finished-proposals.md) introduces [`Object.values`](http://tc39.github.io/proposal-object-values-entries/#Object.values) and [`Object.entries`](http://tc39.github.io/proposal-object-values-entries/#Object.entries).
```
Object.values(data) // ['bar', 'quux']
Object.entries(data) // [['foo', 'bar'], ['baz', 'quux']]
``` | You can loop through keys like this:
```
for (var key in data) {
    console.log(key);
}
```
This logs "Name" and "Value".
If you have a more complex object type (not just a plain hash-like object, as in the original question), you'll want to only loop through keys that belong to the object itself, as opposed to keys on the object's [prototype](https://developer.mozilla.org/en/Core_JavaScript_1.5_Guide/Details_of_the_Object_Model):
```
for (var key in data) {
    if (data.hasOwnProperty(key)) {
        console.log(key);
    }
}
```
As you noted, keys are not guaranteed to be in any particular order. Note how this differs from the following:
```
for each (var value in data) {
    console.log(value);
}
```
This example loops through values, so it would log `Property Name` and `0`. N.B.: The [`for each`](https://developer.mozilla.org/en-US/docs/JavaScript/Reference/Statements/for_each...in) syntax is mostly only supported in Firefox, but not in other browsers.
If your target browsers support ES5, or your site includes [`es5-shim.js`](https://github.com/es-shims/es5-shim) (recommended), you can also use `Object.keys`:
```
var data = { Name: 'Property Name', Value: '0' };
console.log(Object.keys(data)); // => ["Name", "Value"]
```
and loop with [`Array.prototype.forEach`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/forEach):
```
Object.keys(data).forEach(function (key) {
    console.log(data[key]);
});
// => Logs "Property Name", 0
``` | How do I access properties of a javascript object if I don't know the names? | [
"",
"javascript",
"object",
"properties",
"iteration",
"introspection",
""
] |
How do I enable flashback queries for all of my developers on my Oracle instance?
Flashback queries are executed as of a particular time so that
```
select * from mytable as of timestamp(sysdate-1);
```
will show the contents of the table as of 24 hours ago. | ```
grant execute on dbms_flashback to public;
grant flashback any table to public;
``` | You can use flashback query for your own tables without needing any privileges. If you want other users to use flashback query on your tables you need to grant select and flashback privileges to those users.
If you want to see data as of 24 hours ago you need to have an adequately sized undo tablespace and properly set undo retention.
[Also see this](http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14251/adfns_flashback.htm). | Oracle: how to enable flashback "as of" queries for all developers? | [
"",
"sql",
"oracle",
""
] |
Can someone explain the usage of the dollar sign here..
```
var updateProgressDiv = $get('updateProgressDiv');
```
scroll down to the functions..
<http://mattberseth.com/blog/2007/05/ajaxnet_example_using_an_updat.html> | It's merely part of the function name -- $ is allowed in JavaScript identifiers, so ($get) is the full function name. | This is the ASP.NET AJAX shortcut:
$get Shortcut Method
Provides a shortcut to the getElementById method of the Sys.UI.DomElement class. This member is static and can be invoked without creating an instance of the class.
[See more information about](http://www.asp.net/ajax/documentation/live/ClientReference/Global/GetShortCutMethod.aspx) | dollar sign in javascript methods | [
"",
"javascript",
"syntax",
""
] |
I am looking for a Javascript-based date picker that would allow me to display one week only (instead of the current month).
Ideally it should be one that can be expanded to a full month view if necessary and back again.
Also, (css based?) design customizability would be a plus.
A jQuery solution would be preferred. I've had a brief look at the jQuery datepicker, but it seemed to me that it can only display full months. Please let me know if I am wrong. | The jQuery UI datepicker, as has been noted, can not easily be configured to display only one week. You can however use the `maxDate` and `minDate` configuration options to constrain the user's input. Here is an [example of this functionality](http://jqueryui.com/demos/datepicker/#min-max).
To answer your specific question, I am not aware of any datepicker control which will only display one week. | I'm not sure, but maybe you can customize this <http://jqueryui.com/demos/datepicker/>
I know of none that show just one week :(
Edit: After inspecting it, it appears to be CSS based; I've run through its options but couldn't find anything to show just one week. There is, however, one option that allows you to select only one week, though I don't know what the intended effect is. | Javascript date picker for displaying one week only? | [
"",
"javascript",
"jquery",
"datepicker",
""
] |
When you write an app that will have a vast number of classes, do you use prefixes/suffixes for your class names? Or should I just rely on the namespaces I already define?
In my case, I have classes like:
```
Blur
Contrast
Sharpen
Invert
...
```
I just don't know what I might have tomorrow, and want to be collision free as possible with the frequent used classes in the BCL.
Any ideas? | Namespaces were invented exactly to solve this problem. Use them. | I would say don't use prefixes or suffixes for the names -- that's very close to hungarian notation, which is widely regarded as bad, even by the creators of it at MS. The sole exception to this rule is interfaces, which many regard as OK to prefix with `I`, but I am of the opinion that interfaces shouldn't have a prefix either. You should group classes that are related into namespaces to prevent collisions. | Prefix/suffix for classes | [
"",
"c#",
".net",
"naming-conventions",
""
] |
Is it possible to find out if two expressions are the same?
Like given the following four expressions:
```
Expression<Func<int, bool>> a = x => false;
Expression<Func<int, bool>> b = x => false;
Expression<Func<int, bool>> c = x => true;
Expression<Func<int, bool>> d = x => x == 5;
```
Then, at least *we* can see that:
* `a == b`
* `a != c`
* `a != d`
But can I do anything to find this out in my code?
Took a peek in the msdn library, where it says that
> `Equals`: Determines whether the specified Object is equal to the current `Object`. (Inherited from `Object`.)
which I guess means that at least the Expression class hasn't overridden the equals method to become equatable? So how would you do this? Or am I asking too much here? :p | You can have a look at the type [ExpressionEqualityComparer](https://github.com/lytico/db4o/blob/master/db4o.net/Db4objects.Db4o.Linq/Db4objects.Db4o.Linq/Expressions/ExpressionEqualityComparer.cs) that is used inside [Linq to db4o](http://evain.net/blog/articles/2008/02/06/an-elegant-linq-to-db4o-provider-and-a-few-linq-tricks). It implements the interface IEqualityComparer<T>, so it's usable for generic collections, as well as for standalone usage.
It uses the type [ExpressionComparison](https://github.com/lytico/db4o/blob/master/db4o.net/Db4objects.Db4o.Linq/Db4objects.Db4o.Linq/Expressions/ExpressionComparison.cs) to compare two Expressions for equality, and [HashCodeCalculation](https://github.com/lytico/db4o/blob/master/db4o.net/Db4objects.Db4o.Linq/Db4objects.Db4o.Linq/Expressions/HashCodeCalculation.cs), to compute a hashcode from an Expression.
It all involves visiting the expression tree, so it can be pretty costly if you do it repeatedly, but it can also be quite handy.
The code is available under the GPL or the [dOCL](http://www.db4o.com/about/company/legalpolicies/docl.aspx)
For instance, here's your test:
```
using System;
using System.Linq.Expressions;
using Db4objects.Db4o.Linq.Expressions;

class Test {
    static void Main ()
    {
        Expression<Func<int, bool>> a = x => false;
        Expression<Func<int, bool>> b = x => false;
        Expression<Func<int, bool>> c = x => true;
        Expression<Func<int, bool>> d = x => x == 5;

        Func<Expression, Expression, bool> eq =
            ExpressionEqualityComparer.Instance.Equals;

        Console.WriteLine (eq (a, b));
        Console.WriteLine (eq (a, c));
        Console.WriteLine (eq (a, d));
    }
}
```
And it indeed prints True, False, False. | As a lazy answer, you can check `ToString()` - it should at least indicate where they are clearly different (although it will include the var-name in there, so that would have to be the same).
For checking equivalence accurately... much harder - a lot of work, over a lot of different node types. | How to check if two Expression<Func<T, bool>> are the same | [
"",
"c#",
"expression",
"equality",
""
] |
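A rough analogue of the lazy `ToString()` comparison above exists in Python: comparing normalized parse trees with the standard `ast` module. Like `ToString()`, this is structural rather than semantic equality, and variable names must match; the helper name is illustrative:

```python
import ast

def same_expression(src_a, src_b):
    """Structurally compare two expression strings via their parse trees."""
    return (ast.dump(ast.parse(src_a, mode="eval"))
            == ast.dump(ast.parse(src_b, mode="eval")))
```

Whitespace differences disappear in the tree, but renamed variables do not, mirroring the caveat about variable names in the lazy answer.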
* **Is PHP (as of 5.2) thread-safe on Linux/UNIX?**
* **Would it be possible to use it with Apache Worker-MPM or Event-MPM?**
The facts I gathered so far are inconclusive:
* Default binaries included in most distributions have ZTS disabled, so I'm aware, that I'd have to recompile them.
* In theory Zend Engine (core PHP) with ZTS enabled is thread-safe.
* It's said that some modules might not be thread-safe, but I haven't found any list of modules that are or that are not.
* [PHP FAQ](http://www.php.net/manual/en/faq.installation.php#faq.installation.apache2) states pretty much same as above.
What's your experience?
It's not only about [segmentation faults](http://en.wikipedia.org/wiki/Segmentation_fault) ("access violations" in Windows nomenclature). There is a lot more to [thread safety](http://en.wikipedia.org/wiki/Thread_safe). | I know gettext and [set\_locale](http://php.net/manual/en/function.setlocale.php) are not thread-safe. PHP should not be used with a threaded MPM.
[PHP Isn't Thread-Safe Yet](http://neosmart.net/blog/2008/dont-believe-the-lies-php-isnt-thread-safe-yet/).
[Running PHP not threaded](https://stackoverflow.com/questions/1646249/php-gettext-problems-like-non-thread-safe/6726570#6726570). | See *[Where can I get libraries needed to compile some of the optional PHP extensions?](http://php.net/manual/en/faq.obtaining.php#faq.obtaining.optional)* for a list of thread-safe and nonthread-safe extensions (\* marked are not thread-safe and others are). | Is PHP thread-safe? | [
"",
"php",
"multithreading",
"thread-safety",
""
] |
I want a Python program that, given a directory, returns all directories within that directory that have 775 (`rwxrwxr-x`) permissions.
thanks! | Neither answer recurses, though it's not entirely clear that that's what the OP wants. Here's a recursive approach (untested, but you get the idea):
```
import os
import stat
import sys

MODE = "775"

def mode_matches(mode, path):
    """Return True if 'path' matches 'mode'.

    'mode' should be an integer representing an octal mode (eg
    int("755", 8) -> 493).
    """
    # Extract the permission bits from the file's (or
    # directory's) stat info.
    filemode = stat.S_IMODE(os.stat(path).st_mode)
    return filemode == mode

try:
    top = sys.argv[1]
except IndexError:
    top = '.'

try:
    mode = sys.argv[2]
except IndexError:
    mode = MODE

# Convert the octal string (e.g. "775") to an integer.
mode = int(mode, 8)

for dirpath, dirnames, filenames in os.walk(top):
    for dirname in (os.path.join(dirpath, x) for x in dirnames):
        if mode_matches(mode, dirname):
            print(dirname)
```
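As a quick sanity check of the `stat.S_IMODE` trick used in the script, you can create a throwaway directory, set its bits explicitly, and read them back (POSIX assumed; `os.chmod` is limited on Windows):

```python
import os
import stat
import tempfile

# Create a scratch directory, force its permission bits to 0o775,
# and confirm that S_IMODE reads the same bits back.
scratch = tempfile.mkdtemp()
os.chmod(scratch, 0o775)
bits = stat.S_IMODE(os.stat(scratch).st_mode)
print(oct(bits))  # 0o775
os.rmdir(scratch)
```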
Something similar is described in the stdlib documentation for
[stat](http://docs.python.org/library/stat.html). | Take a look at the [os](http://docs.python.org/library/os.html) module. In particular [os.stat](http://docs.python.org/library/os.html#os.stat) to look at the permission bits.
```
import os

for filename in os.listdir(dirname):
    path = os.path.join(dirname, filename)
    if os.path.isdir(path):
        if (os.stat(path).st_mode & 0o777) == 0o775:
            print(path)
``` | check permissions of directories in python | [
"",
"python",
"chmod",
""
] |
The codebase where I work has an object called Pair where A and B are the types of the first and second values in the Pair. I find this object to be offensive, because it gets used instead of an object with clearly named members. So I find this:
```
List<Pair<Integer, Integer>> productIds = blah();
// snip many lines and method calls
void doSomething(Pair<Integer, Integer> id) {
    Integer productId = id.first();
    Integer quantity = id.second();
}
```
Instead of
```
class ProductsOrdered {
    int productId;
    int quantityOrdered;
    // accessor methods, etc
}

List<ProductsOrdered> productsOrdered = blah();
```
Many other uses of the Pair in the codebase are similarly bad-smelling.
I Googled tuples and they seem to be often misunderstood or used in dubious ways. Is there a convincing argument for or against their use? I can appreciate not wanting to create huge class hierarchies but are there realistic codebases where the class hierarchy would explode if tuples weren't used? | First of all, a tuple is quick and easy: instead of writing a class for every time you want to put 2 things together, there's a template that does it for you.
Second of all, they're generic. For example, in C++ the std::map uses an std::pair of key and value. Thus ANY pair can be used, instead of having to make some kind of wrapper class with accessor methods for every permutation of two types.
Finally, they're useful for returning multiple values. There's really no reason to make a class specifically for a function's multiple return values, and they shouldn't be treated as one object if they're unrelated.
To be fair, the code you pasted is a bad use of a pair. | Tuples are used all the time in Python where they are integrated into the language and very useful (they allow multiple return values for starters).
Sometimes, you really just need to pair things and creating a real, honest to god, class is overkill. On the other hand, using tuples when you should really be using a class is just as bad an idea as the reverse. | Why use tuples instead of objects? | [
"",
"c++",
"tuples",
""
] |
How can I detect a scrollbar presence (using JavaScript) in an HTML iframe?
**I have already tried :**
```
var vHeight = 0;
if (document.all) {
if (document.documentElement) {
vHeight = document.documentElement.clientHeight;
} else {
vHeight = document.body.clientHeight
}
} else {
vHeight = window.innerHeight;
}
if (document.body.offsetHeight > vHeight) {
//when theres a scrollbar
}else{
//when theres not a scrollbar
}
```
**And I also had tried :**
```
this.scrollLeft=1;
if (this.scrollLeft>0) {
//when theres a scrollbar
this.scrollLeft=0;
}else{
//when theres not a scrollbar
return false;
}
```
With no success..
I have searched the *JavaScript objects* in the *DOM Inspector*, but didn't find anything.
Is it possible to detect a scrollbar presence in an iframe in JavaScript?
---
The iframe content comes from the same domain.
No success until now..
[alt text http://www.upvtp.com.br/file.php/1/help\_key.jpg](http://www.upvtp.com.br/file.php/1/help_key.jpg) | Using jQuery you can compare the document height, the scrollTop position and the viewport height, which might get you the answer you require.
Something along the lines of:
```
$(window).scroll(function(){
    if(isMyStuffScrolling()){
        //There is a scroll bar here!
    }
});

function isMyStuffScrolling() {
    var docHeight = $(document).height();
    var scroll = $(window).height() + $(window).scrollTop();
    return (docHeight == scroll);
}
``` | ```
var root= document.compatMode=='BackCompat'? document.body : document.documentElement;
var isVerticalScrollbar= root.scrollHeight>root.clientHeight;
var isHorizontalScrollbar= root.scrollWidth>root.clientWidth;
```
This detects whether there is a *need* for a scrollbar. For the default of iframes this is the same as whether there *is* a scrollbar, but if scrollbars are forced on or off (using the ‘scrolling="yes"/"no"’ attribute in the parent document, or CSS ‘overflow: scroll/hidden’ in the iframe document) then this may differ. | How can I detect a Scrollbar presence ( using Javascript ) in HTML iFrame? | [
"",
"javascript",
"html",
"dom",
"iframe",
"scrollbar",
""
] |
How can I do this in WPF's code-behind?
```
<Grid Background="{DynamicResource {x:Static SystemColors.DesktopBrushKey}}"/>
``` | I just found an ugly solution:
```
grid1.SetResourceReference(
Control.BackgroundProperty,
SystemColors.DesktopBrushKey);
```
I hope someone will post a better one (I'd like to see something like grid1.Background = BackgroundBrush, because the syntax of SetResourceReference is a step backwards from Windows Forms). | This must have been added to a later version of WPF since this was originally posted because your original code works fine for me (I'm using WPF 4.5)
`<Grid Background="{DynamicResource {x:Static SystemColors.DesktopBrushKey}}"/>` | How can I set a WPF control's color to a system color programmatically, so that it updates on color scheme changes? | [
"",
"c#",
"wpf",
""
] |
Will $('\*').index(currentElement) give a unique number?
I am asking because I can't understand the index method well from the [jQuery docs](http://docs.jquery.com/Core/index) | Yes, it will return the index where you can find your element in your jQuery collection - e.g.
```
var allElements = $("*");
var index = allElements.index(someElement);
if(allElements[index] == someElement){
    alert("Found it!");
}
``` | "Yes". (I should qualify this with, as long as the context of the index is always the entire DOM) Otherwise, as the other answer states, the answer is no.
It will give you the index of the element within all DOM elements. If the DOM changes, it will no longer be valid.
The reason for needing an index like this, would have to be *very* unusual, and I would strongly suspect there's a better way to do what you are trying to accomplish. | $('*').index(currentElement) will give a unique number? | [
"",
"javascript",
"jquery",
"html",
"dom",
""
] |
What's the difference between
```
$PDOStatement->fetchColumn();
```
and
```
$PDOStatement->fetch(PDO::FETCH_COLUMN);
```
(if one exists)? Or are they functionally similar but only aesthetically different? | By default `fetchColumn()` will return only `'value'` while the other by default will return `array('column_name'=>'value')`. You'd have to use `setFetchMode()` to change that.
```
$PDOStatement->fetchColumn($colno);
```
would be equivalent to:
```
$PDOStatement->setFetchMode(PDO::FETCH_COLUMN, $colno);
$PDOStatement->fetch();
``` | From the doc [here](https://www.php.net/manual/en/pdostatement.fetch.php) for fetch, there doesn't seem to be a PDO::FETCH\_COLUMN style. If that's true, then the difference is fetch will return a row, while fetchColumn will only return the specified column. | PHP PDO: Differences between the methods of retrieving a single-column result | [
"",
"php",
"pdo",
""
] |
Going to make an application which will be used on a device without a physical keyboard.
Looking for best practices for touch-screen applications.
Which commercial/free on-screen keyboard or on-screen numeric keypads do you use on Windows devices?
Or should we use a library to implement our own input devices?
Currently using the standard windows osk.exe, but I think it is too small for making a good touch-screen experience.
Update: We decided to make our own keyboard, num keypad controls (although the windows7 keypad looks better)
 | Don't have a true answer, just a bit of advice. I've been designing PDA apps in Windows mobile, and I've found the best solution is to try to elminate the keyboard as much as possible. Spend some extra time on design, implement an effective GUI with selection controls instead of textboxes. Regardless of the virtual keyboard used, your users will likely gripe if they are required to type in too much text, as it is isn't intuitive.
I've gone as far as letting users customize responses to fields via Intranet interfaces and saving it in a DB. The user can then select "canned" responses from ddls and modify as necessary. | I am using [Click-N-Type](http://www.lakefolks.org/cnt/). See [this question](https://stackoverflow.com/questions/330348/which-on-screen-keyboard-for-touch-screen-application/330407) for further details and other answers | On-screen keyboard for Windows apps (C++/C#) with touch-screen capability | [
"",
"c#",
"windows",
"touchscreen",
"on-screen-keyboard",
""
] |
What is an easy way to save/load settings? I'd prefer it if each user has their own set of settings. | If this is a desktop application, the best way to store application and/or user specific settings is to use the builtin methods. [Using Settings in C#](http://msdn.microsoft.com/en-us/library/aa730869.aspx)
**EDIT**
[Using Settings in C# Visual Studio 2010](http://msdn.microsoft.com/en-us/library/0zszyc6e) | This would depend a lot on the type of application.
If this is a desktop application you could save information into the registry in the user's area. Or into their User directory on disk. If this is a web or server application, you would need to store it into a database keyed by user, or to a disk file named for each user or something.
Since you mention options, it seems like the client path is more likely. | Best way to save per user options in C# | [
"",
"c#",
"settings",
""
] |
I have a file I need to write to a temp location, what is the best place in Windows? This file needs to not move as I need to read it a couple of times and dispose it when I close the program. | use
```
Path.GetTempPath();
```
or
```
Path.GetTempFileName();
```
As a commenter pointed out, GetTempFileName() is not threadsafe - but you can construct unique file names based on GUIDs. | The others beat me to it regarding
```
System.IO.Path.GetTempPath()
```
But you can also look into the application data folder. That will allow you more control over your file, like the ability to have one shared for all users or one per user.
```
Application.CommonAppDataPath
Application.UserAppDataPath
``` | Correct location to save a temporary file in Windows? | [
"",
"c#",
"windows",
""
] |
I don't know if the title makes sense, but basically I am wondering people's opinions on whether to calculate public members whenever they have to be changed, or as soon as they are accessed?
Say you have a class like `CustomCollection`, that has a property called `Count`. Should `Count` be updated for each `Add`, `Remove`, etc operations, or should it just be calculated at the time it's accessed?
Keeping up-to-date seems intuitive, but then you wonder, how often people call `Add`, `Remove`, etc vs `.Count`.
Also is there a hybrid version where you can cache it when the property is accessed? I think that would require another variable to be updated, right? | Most of the information about the *state* of the class should always be up-to-date as a side effect of manipulating the data. For example the Count property is based on the internal storage of the data (ie. the array length).
Other properties that depend on certain conditions of the state of the class may need to be calculated. For example a `ContainsValidOrder` property might depend on orders in the class. For those properties you have to evaluate the use of the class and decide if the cost of calculating the value as you add and remove items from the collection is cheaper than scanning the entire collection each time the property is accessed.
The .NET guidelines do suggest however that properties do not execute complex code and that repeated access of the property does not have any side-effects or performance implications. So for properties that represent calculated data it might be better to use a method GetXXX. This indicates to the developer using your library that 1. the calculation might take some time and 2. they should hold on to the value for the duration of their task. | You're right when you say you have to consider how often these functions are accessed. If `count` is accessed all the time, it shouldn't be on-demand as that would be slower than necessary. If the other functions are accessed more, then recalculating `count` every time would be a waste as well.
A middle ground would be having something that calculates `count` on demand if a flag is set to false, and then sets the flag to true. Calls to `add`, `remove`, etc would set the flag to false.
Something like this:
```
Class CachedCount
    int count = 0;
    boolean count_is_valid = false;

    int getCount()
        if count_is_valid
            return count;
        else
            count = calculate_count();
            count_is_valid = true;
            return count;

    void Add(item)
        count_is_valid = false;
        ...
    ...
```
Note that this would really only provide a benefit if you access `count` several times in a row without accessing `add`, `remove`, etc in between, and that accesses to `add`, `remove`, etc aren't interleaved with calls to `count`. The benefit of this is lost if the requests are interleaved. The biggest benefit comes from sequences like: `add, add, add, remove, remove, add, count, count, count, count, count, count` rather than `add, count, add, count, remove, count, remove, count, add, count`. | Up-to-date vs On-Demand | [
"",
"c#",
".net",
""
] |
I have a dropdown box in my GUI which shows the contents of an ArrayList in another class.
New objects can be added to the ArrayList elsewhere in the GUI, so I need to know when it is updated, so I can refresh the dropdown menu. From what I can gather, my two options are to extend the ArrayList class to allow me to add my own changeListener to it, or to make the class which contains the ArrayList in question extend observable.
Which would be a more appropriate solution? | The two solutions are essentially implementations of the same root design pattern (the "Observer" pattern as defined by the Gang of Four.) In the former case, you are making the ArrayList itself "observable", in the latter you are making the domain object which uses the array list "observable."
My tendency would be to do the latter: make the domain object observable. This is primarily because you may eventually have other things that could change about the domain object (for which the GUI should be updated.) If it is already observable, you're already set.
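A bare-bones sketch of that approach, making the domain object observable without `java.util.Observable` (the names `ProductCatalog` and `ChangeListener` are invented for illustration):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical callback interface the GUI implements.
interface ChangeListener {
    void changed();
}

// The domain object owns the ArrayList and notifies listeners on mutation.
class ProductCatalog {
    private final List<String> items = new ArrayList<String>();
    private final List<ChangeListener> listeners = new ArrayList<ChangeListener>();

    public void addChangeListener(ChangeListener listener) {
        listeners.add(listener);
    }

    public void add(String item) {
        items.add(item);
        // The dropdown's refresh code runs in these callbacks.
        for (ChangeListener listener : listeners) {
            listener.changed();
        }
    }

    public int size() {
        return items.size();
    }
}

public class Main {
    public static void main(String[] args) {
        ProductCatalog catalog = new ProductCatalog();
        final int[] notified = {0};
        catalog.addChangeListener(new ChangeListener() {
            public void changed() {
                notified[0]++;
            }
        });
        catalog.add("widget");
        catalog.add("gadget");
        System.out.println(notified[0]); // prints 2
    }
}
```

The GUI registers a listener once and re-reads the list inside the callback, so the dropdown refresh logic stays out of the domain object.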
Note that you don't strictly have to extend `java.util.Observable` - you can implement the design pattern without doing that. | The `Observable` implementation in Java is rarely used, and doesn't inter-operate well with Swing. Use an `EventListener` instead.
In particular, is there a reason not to extend [`AbstractListModel`](http://java.sun.com/j2se/1.5.0/docs/api/javax/swing/AbstractListModel.html) or even use [`DefaultListModel`](http://java.sun.com/j2se/1.5.0/docs/api/javax/swing/DefaultListModel.html) directly when managing the contents of the list "elsewhere in the GUI"? Then your combo box could use a `ComboBoxModel` that delegates to the same `ListModel` instance, adding its own implementation to track the selection state.
I have in mind something like this (but I haven't tested it):
```
final class MyComboBoxModel
        extends AbstractListModel
        implements ComboBoxModel
{
    private final ListModel data;
    private volatile Object selection;

    MyComboBoxModel(ListModel data) {
        /*
         * Construct this object with a reference to your list,
         * whose contents are managed somewhere else in the UI.
         */
        this.data = data;
        data.addListDataListener(new ListDataListener() {
            public void contentsChanged(ListDataEvent evt) {
                fireContentsChanged(MyComboBoxModel.this, evt.getIndex0(), evt.getIndex1());
            }
            public void intervalAdded(ListDataEvent evt) {
                fireIntervalAdded(MyComboBoxModel.this, evt.getIndex0(), evt.getIndex1());
            }
            public void intervalRemoved(ListDataEvent evt) {
                fireIntervalRemoved(MyComboBoxModel.this, evt.getIndex0(), evt.getIndex1());
            }
        });
    }

    public void setSelectedItem(Object selection) {
        this.selection = selection;
        fireContentsChanged(this, 0, data.getSize() - 1);
    }

    public Object getSelectedItem() { return selection; }
    public int getSize() { return data.getSize(); }
    public Object getElementAt(int idx) { return data.getElementAt(idx); }
}
``` | Should I use a Listener or Observer? | [
"",
"java",
"swing",
"arraylist",
"listener",
"observer-pattern",
""
] |
I need a regex or function that can remove the ENCODED HTML tags from a database record. I have text in a database that is being stored (from TinyMCE) as encoded HTML.
The code has the 'less than'; and 'greater than'; tags encoded.
I would like to remove all the encoded tags and HTML and just leave the plain text and spaces only. | I'd avoid a reg ex here, as coming up with something that can cover any and all HTML that a user might foist on you is a task that could keep a full-time employee permanently busy.
Instead, a two stop approach that relies on already present PHP functionality is a better choice.
First, let's turn the encoded HTML entities back into greater than and less than signs with [htmlspecialchars\_decode](https://www.php.net/manual/en/function.htmlspecialchars-decode.php).
```
$string = htmlspecialchars_decode($string);
```
This should give us a string of proper html. (If your quotes are still encoded, see the second argument in the linked documentation).
To finish, we'll strip out the HTML tags with the PHP function strip\_tags. This will remove any and all HTML tags from the source.
```
$string = strip_tags($string);
```
Wrapped in a function/method
```
function decodeAndStripHTML($string){
    return strip_tags(htmlspecialchars_decode($string));
}
``` | Sounds like you'll need to translate `<` to `<` and `>` to `>` and then use an HTML parser to extract the text (the latter can't/shouldn't be done with regular expressions). | Can you help with a regular expression or function to remove HTML encoded tags? | [
"",
"php",
"html",
"tags",
""
] |
I'm writing a function that works out whether or not the time is between 9am and 5pm on a working day. It then tells you when the next working day starts (if currently out of business hours) based on whether today's business hours have ended or (if after midnight) are about to begin.
All is going well so far, and to create the most readable code I could’ve used `strtotime()` – but how do you test `strtotime()`?
Is there a way to alter the system time temporarily for testing purposes? | Just use `strtotime()`'s optional second argument, to give it a specific timestamp to use. If you don't supply the second argument, it uses the current time. | strtotime accepts a second argument that sets the time to whatever you want instead of the current time
int strtotime ( string $time [, int $now ] )
$time is the format to mimic
$now is the timestamp to generate the date from
if now is provided it overrides the current time.
e.g. to test 6pm on Monday March 30th no matter what time the program runs
```
$format = '2009-03-30 18:00';
$timestamp = mktime(18,0,0,3,30,2009);
strtotime($format,$timestamp);
``` | How can I offset system time to test my PHP function? | [
"",
"php",
"time",
"strtotime",
""
] |
I know it's kinda subjective but, if you were to put yourself in my shoes **which would you invest the time in learning?**
I want to write a web app which deals securely with relatively modest amounts of peoples private data, a few thousand records of a few Kb each but stuff that needs to be kept safe, addresses, phone numbers etc. I've done several web projects in PHP/MYSQL and have decided, handy though it is I really don't like PHP and don't want to do another large project in it...
As such I figure I'd best learn something new and so I am considering 2 options (although I'll happily entertain others if you have suggestions). I'm having terrible trouble deciding though. They both look quite involved so rather than just jump in and potentially waste days getting up to speed enough on both of them to make an informed choice I thought I'd come here and canvas some opinion.
So the two options I'm considering are...
**One of the PYTHON Web frameworks** - TurboGears seems well regarded?
Advantage: Of all the languages I ever tried Python is by far and away my favorite. There's loads of frameworks to choose from and I have done quite a lot of non web python coding over the last few years.
Disadvantage: There's loads to choose from so it's hard to pick! Need to run single server process? or mod\_python? which I don't like the sound of. What I do like is the notion of process separation and compartmentalization, i.e. if one users account is compromised it gives an attacker no leverage against the rest of the system. I'm not clear to what extent a python solution would handle that.
**Writing it as a SEASIDE app** Which I guess runs on a squeak app server?
Adv: From what I've heard it would permit good compartmentalization of users as each would have their own little private VM independent of all the systems other users which sounds wonderful from a security, scaling and redundancy standpoint.
Dis: I've not done any Smalltalk since Uni 15 years back and I never dug too deep into it then. I don't see much entry level help for Seaside or that many projects using it. I suspect setting a server up to run it is hard for the same reason, i.e. not because it's inherently hard but just because there will be less help online and a presumption you are already rather au fait with Squeak/Smalltalk.
So, what do people think? Would I be able to efficiently get the kind of strong separation and compartmentalization I'm after with a Python framework? Is Seaside as good as I think in terms of insulating users from each other? Might I be better off, security wise, sticking to the languages I'm most familiar with so I don't make any n00b mistakes or will Seaside be worth worth scaling the learning curve and prove more secure, comprehensible and maintainable in the long run? At the end of the day it's not a life or death decision and I can always bail if I start with one and then hate it so pls nobody get all holy language war and start flaming anyone! ;-)
Cheers for any replies this gets,
Roger :) | Disclaimer: I really don't like PHP, Python is nice, but doesn't come close to Smalltalk in my book. But I am a biased Smalltalker. Some answers about Seaside/Squeak:
Q: Which I guess runs on a squeak app server?
Seaside runs in several different Smalltalks (VW, Gemstone, Squeak etc). The term "app server" is not really used in Smalltalk country. :)
Q: From what I've heard it would permit good compartmentalization of users as each would have their own little private VM independent of all the systems other users which sounds wonderful from a security, scaling and redundancy standpoint.
Yes, each user has its own WASession and all UI components the user sees are instances living on the server side in that session. So sharing of state between sessions is something you must do explicitly, typically through a db.
Q: I've not done any Smalltalk since Uni 15 years back and I never dug too deep into it then. I don't see much entry level help for seaside or that many projects using it.
Smalltalk is easy to get going with and there is a whole free online book on Seaside.
Q: I suspect setting a server up to run it is hard for the same reason i.e. not because it's inherently hard but just cause there will be less help online and a presumption you are already rather au fait with Sqeak/Smalltalk.
No, not hard. :) In fact, quite trivial. Tons of help - Seaside ml, IRC on freenode, etc.
Q: Is Seaside as good as I think in terms of insulating users from each other?
I would say so.
Q: Might I be better off, security wise, sticking to the languages I'm most familiar with so I don't make any n00b mistakes or will Seaside be worth worth scaling the learning curve and prove more secure, comprehensible and maintainable in the long run?
The killer argument in favor of Seaside IMHO is the true component model. It really, really makes it wonderful for complex UIs and maintenance. If you are afraid of learning "something different" (but then you wouldn't even consider it in the first place I guess) then I would warn you. But if you are not afraid then you will probably love it.
Also - Squeak (or VW) is a truly awesome development environment - debugging live Seaside sessions, changing code in the debugger and resuming etc etc. It rocks. | Forget about mod\_python, there is [WSGI](http://www.python.org/dev/peps/pep-0333/).
I'd recommend [Django](http://www.djangoproject.com/). It runs on any [WSGI](http://www.python.org/dev/peps/pep-0333/) server, there are a lot to choose from. There is [mod\_wsgi](http://code.google.com/p/modwsgi/) for Apache, [wsgiref](http://docs.python.org/library/wsgiref.html) - reference implementation included in Python and [many more](http://wsgi.org/wsgi/Servers). Also [Google App Engine](http://code.google.com/appengine/) is WSGI, and includes Django.
Django is very popular and it's community is rapidly growing. | Dilemma: Should I learn Seaside or a Python framework? | [
"",
"python",
"frameworks",
"seaside",
""
] |
I am using VS2008 for developing a COM dll which by default uses CRT version 9
but I am using TSF (Text Services Framework) that is not compatible with the new CRT. I think the solution is to use the compatible one, so how can I specify the CRT version? | I wholeheartedly join the recommendation **not** to manually change the CRT version you link against. If however, for some reason (which I cannot imagine) this is the right course of action for you, the way to do so is to change the [manifest](http://msdn.microsoft.com/en-us/library/ms235342.aspx) for your project.
First make sure a manifest is *not* generated on every build (on VS2005: Configuration properties/Linker/Manifest file/Generate manifest), as it would overwrite your manual changes. Also make sure isolation is enabled there.
Next, locate the manifest file - should be at the $(IntDir) (e.g., Debug). You should see a section similar to -
```
<dependency>
<dependentAssembly>
<assemblyIdentity type='win32' name='Microsoft.VC80.DebugCRT' version='8.0.50727.762' processorArchitecture='x86' publicKeyToken='1fc8b3b9a1e18e3b' />
</dependentAssembly>
</dependency>
```
(For debug builds, of course). You need to edit the version and publicKeyToken attributes of the CRT element.
You can inspect the files at your local WINDOWS\WinSxS folder to see the versions available. Check [here](http://msdn.microsoft.com/en-us/library/aa374228(VS.85).aspx) how to extract the publicKeyToken once you find the version you want. (Although I'd first try and look directly into manifests of other projects, linking against your desired CRT version).
If you do go there, expect some rough water. You may have some luck if your application is a console app that does not link against other Side-by-Side components (MFC, OpenMP, etc.). If your application is non-trivial, I'd be surprised if there aren't some intricate version dependencies among the SxS components.
(edit) You'd also need to distribute with your application the specific CRT you're using. Here's [someone](http://blog.kalmbachnet.de/?postid=80) who did that. | The easiest way will be to build your DLL with a VC++ version that uses the CRT that is compatible with TSF.
I don't think that it is a good idea just to link your DLL with a different version of the CRT, unless you also use the same version of the header files. And the easiest way to do that will be to use the right VC++ version...
If you still want to try, you can:
* go to "Configuration settings->Linker->Input->Ignore specific library" and enter the crt you are using (libc.lib, libcmt.lib, etc. see this [code project article](http://www.codeproject.com/KB/cpp/Short_Story_VCPP_CRT.aspx) for details).
* Enter the name of the crt version you want to use in "Configuration settings->Linker->Input->Additional dependencies", and its path in "Configuration settings->Linker->General->Additional library directories".
You can also try to change the default directories in "Tools->Options->Projects and solution->VC++ directories->library files". Maybe changing $(VCInstallDir)lib to the path where you other version of the CRT resides will do the trick | How to Enforce C++ compiler to use specific CRT version? | [
"",
"c++",
"visual-studio-2008",
"visual-c++",
"msvcrt",
""
] |
I've tried to Google this issue, and I can't find anything that I see as relevant. So I must be looking for the wrong thing; none the less, I'd appreciate some advice...
```
Foobar &foobar = *new Foobar(someArg, anotherArg);
```
Is it just me or does that seem smelly?
I understand that the `new` keyword is designed for use with pointers (as such):
```
Foobar *foobar = new Foobar(someArg, anotherArg);
```
But what if you don't require a pointer on that instance, and you would like to use a reference instead? Or, is it the case that you don't need to explicitly initialize it (much like local variables); and if this is the case, what if I want to initialize with parameters?
The following does not work (unless I'm doing something wrong):
```
// the last argument is of type: int
Foobar &foobar(someArg, anotherArg);
```
... gives the compiler error:
> initializer expression list treated as compound expression
> invalid initialization of non-const reference of type ‘Foobar&’
> from a temporary of type ‘int’
And also this doesn't seem to work:
```
Foobar &foobar = Foobar(someArg, anotherArg);
```
... gives the compiler error:
> error: invalid initialization of non-const reference of type
> ‘Foobar&’ from a temporary of type ‘Foobar’
### Update 1:
Bear in mind that I am returning this value, so I don't want to use a local variable; I want to use a value on the heap, not the stack:
```
Foobar &helloWorld()
{
    Foobar &foobar = *new Foobar(someArg, anotherArg);
    foobar.HelloWorld();
    return foobar;
}
```
Should I just be using pointers instead, or is this completely valid? | Why do you think you need to use new and references at all? Why not:
```
Foobar foobar(someArg, anotherArg);
```
For your function - return a value:
```
Foobar helloWorld()
{
    Foobar foobar(someArg, anotherArg);
    foobar.HelloWorld();
    return foobar;
}
```
or a pointer:
```
Foobar * helloWorld()
{
    Foobar * foobar = new Foobar(someArg, anotherArg);
    foobar->HelloWorld();
    return foobar;
}
```
If you do this - the caller is responsible for deleting the allocated object at some point.
Return from a non-member function is one place where references can typically not be used sensibly, as the thing you would like to refer to usually no longer exists. | Yes, that is smelly!
If you 'new' something, assign it to a pointer (or smart-pointer type), as it will need to be deleted again to avoid a memory leak. References aren't conventionally thought of as being things you need to delete again, so if somebody else sees that code (assigning a new'ed object to a reference), it may well confuse them.
You can do...
```
const Foobar &foobar = Foobar(someArg, anotherArg);
```
...if you really want a reference. Note that once foobar goes out of scope, the temporary object it is referencing will die. But, there's not a lot of point in writing that when you can straight-forwardly write:
```
Foobar foobar(someArg, anotherArg);
```
You probably don't actually need a reference... they're generally (but not exclusively) used for the types of method arguments. This is so that you can pass something that looks like an object, but only has the size of a pointer, and which the method can modify. The reference was introduced primarily to allow you to write a copy constructor (I won't explain that here!). | Am I initializing my C++ reference variables correctly? | [
"",
"c++",
"pointers",
"reference",
"initialization",
""
] |
I have a problem (it's my fault, I just can't spot what I'm doing wrong) where "ToString" isn't calling the correct method...
```
public class ClassA
{
    public override string ToString()
{
return "Hello, I'm class A.";
}
}
public class ClassB : ClassA
{
    public override string ToString()
{
return "Hello, I'm class B.";
}
}
ClassB myClassB = new ClassB();
List<ClassA> myClassAList = new List<ClassA>();
myClassAList.Add((ClassA) myClassB);
ClassA tempClassA = myClassAList[0];
Console.WriteLine(tempClassA.ToString());
```
I'm getting the "ToString" from "ClassB" and not "ClassA" what am I doing wrong? | You are overriding ToString in ClassB instead of hiding it from the original which will cause the overridden method to take precedence. What you could do is..
```
public class ClassA
{
public override string ToString()
{
return "Hello, I'm class A.";
}
}
public class ClassB : ClassA
{
public new string ToString()
{
return "Hello, I'm class B.";
}
}
...
List<ClassA> theList = new List<ClassA>
{
(ClassA)new ClassB(),
(ClassA)new ClassB()
};
ClassA a = theList[0];
Console.WriteLine(a.ToString());
// OR...
Console.WriteLine(new ClassA().ToString()); // I'm Class A
Console.WriteLine(new ClassB().ToString()); // I'm Class B
Console.WriteLine(((ClassA)new ClassB()).ToString()); // I'm Class A
``` | You are not doing anything wrong -- this is how polymorphism virtual methods work. When you put ClassB into collection of ClassA references, it is still ClassB object. Invoking .ToString() will always find ClassB.ToString() if the object is indeed of ClassB. | C# ToString inheritance | [
"",
"c#",
"polymorphism",
""
] |
I'm currently porting a large Linux project to Visual Studio. The project depends on a number of third-party libraries (Python, MPI, etc.) as well as a couple of in-house ones. But it can also be built without these libraries, or with only a few of them. So I don't want to create a different configuration for each possible combination, e.g. "Parallel with Python", "Parallel without Python", etc. There are just too many combinations. Is this a situation where I could use MSBuild?
Edit: One possibility I considered is to create a bunch of .vsprops files, but this is essentially the same as creating a bunch of different configurations.
Edit: Maybe CMake is more what I'm looking for? I'd love to hear from any CMake users out there... | There's no good solution to this that I'm aware of. The IDE seems to require a configuration for each set of command line arguments to the tools. So if N different sets of arguments are required -- as it sounds like the case is here -- N different configurations will be required. That's just how the IDE works, it appears.
Unfortunate, but one rarely wins in a fight against Visual Studio, so I personally have always given in and created as many configurations as needed. It's a pain, and it's fiddly, and yes the IDE should ideally provide some better mechanism for managing the combinations -- but it's doable, just about, and it doesn't actually take as long to set up as it feels like at the time.
(As I understand them, .vsprops can take some of the pain away by allowing easy sharing of configuration settings between configurations. So those minuscule text boxes in VS are only used to set up the settings that differ between configurations. This may make them still worth investigating. This isn't something I've used myself yet, though; only discovered it recently.) | One approach could be to conditionally reference your libraries using the *[Condition](http://msdn.microsoft.com/en-us/library/7szfhaft.aspx)* attribute of every assembly's *Reference* element (Python, MPI, etc).
This could separate your libraries from the configuration and platform properties, and allow you to build them by default, or conditionally using MSBuild properties.
So in your csproj:
```
<Reference Include="YourPythonLibrary"
Condition="$(BuildType) == '' Or $(BuildType) == 'TypeA'" />
<Reference Include="YourMpiLibrary"
Condition="$(BuildType) == 'TypeA' Or $(BuildType) == 'TypeB'" />
```
That includes Python by default and MPI only if the correct build type is set. Wouldn't matter what the configuration or platform is set as, and you could adjust the boolean logic to suit each library for each of your build types.
```
MSBuild /p:BuildType=TypeA
MSBuild /p:BuildType=TypeB
```
It would be nice to use some form of bitwise operation on the condition, but I'm not sure whether that is possible in MSBuild.
Note: It doesn't have to be a Reference element; if it's just included as Content, this approach will still work. | Avoiding too many configurations for a Visual Studio project | [
"",
"c++",
"visual-studio",
"configuration",
"msbuild",
""
] |
How can I format a Mailing Address so that I always push all non-null rows to the top? That is, I want to convert an address from the structure below to a mailing address.
Here is the structure:
```
[Line1] [varchar](50) NULL,
[Line2] [varchar](50) NULL,
[Line3] [varchar](50) NULL,
[City] [varchar](50) NULL,
[State] [varchar] (2) NULL,
[PostalCode] [varchar](50) NULL,
```
Here is some sample data:
```
Line1=
Line2=123 Some Address
Line3=
City=Royal Oak
State=MI
ZIP=45673-2312
```
Here is what the result should look like **(4 distinct or separate fields should be returned)**:
```
MailAddress1=123 Some Address
MailAddress2=ROYAL OAK MI 45673-2312
MailAddress3=
MailAddress4=
```
I am using SQL Server 2005.
Someone wrote this logic in our company and it just seemed too complex (Note: this is not the whole SELECT statement):
```
,CASE
WHEN eai.Line1 IS NULL OR eai.Line1 = '' THEN
CASE
WHEN eai.Line2 IS NULL OR eai.Line2 = '' THEN
CASE
WHEN eai.Line3 IS NULL OR eai.Line3 = '' THEN ISNULL(LTRIM(RTRIM(eai.City)),'') + ' ' + ISNULL(LTRIM(RTRIM(eai.RegionCode)),'') + ' ' + ISNULL(LTRIM(RTRIM(eai.PostalCode)),'')
ELSE eai.Line3
END
ELSE eai.Line2
END
ELSE eai.Line1
END
,CASE
WHEN eai.Line1 IS NULL OR eai.Line1 = '' THEN
CASE
WHEN eai.Line3 IS NULL OR eai.Line3 = '' THEN ISNULL(LTRIM(RTRIM(eai.City)),'') + ' ' + ISNULL(LTRIM(RTRIM(eai.RegionCode)),'') + ' ' + ISNULL(LTRIM(RTRIM(eai.PostalCode)),'')
ELSE eai.Line3
END
ELSE
CASE
WHEN eai.Line2 IS NULL OR eai.Line2 = '' THEN
CASE
WHEN eai.Line3 IS NULL OR eai.Line3 = '' THEN ISNULL(LTRIM(RTRIM(eai.City)),'') + ' ' + ISNULL(LTRIM(RTRIM(eai.RegionCode)),'') + ' ' + ISNULL(LTRIM(RTRIM(eai.PostalCode)),'')
ELSE eai.Line3
END
ELSE eai.Line2
END
END
,CASE
WHEN eai.Line1 IS NULL OR eai.Line1 = '' THEN
CASE
WHEN eai.Line2 IS NULL OR eai.Line2 = '' THEN NULL
ELSE
CASE
WHEN eai.Line3 IS NULL OR eai.Line3 = '' THEN NULL
ELSE ISNULL(LTRIM(RTRIM(eai.City)),'') + ' ' + ISNULL(LTRIM(RTRIM(eai.RegionCode)),'') + ' ' + ISNULL(LTRIM(RTRIM(eai.PostalCode)),'')
END
END
ELSE
CASE
WHEN eai.Line2 IS NULL OR eai.Line2 = '' THEN
CASE
WHEN eai.Line3 IS NULL OR eai.Line3 = '' THEN NULL
ELSE ISNULL(LTRIM(RTRIM(eai.City)),'') + ' ' + ISNULL(LTRIM(RTRIM(eai.RegionCode)),'') + ' ' + ISNULL(LTRIM(RTRIM(eai.PostalCode)),'')
END
ELSE
CASE
WHEN eai.Line3 IS NULL OR eai.Line3 = '' THEN ISNULL(LTRIM(RTRIM(eai.City)),'') + ' ' + ISNULL(LTRIM(RTRIM(eai.RegionCode)),'') + ' ' + ISNULL(LTRIM(RTRIM(eai.PostalCode)),'')
ELSE eai.Line3
END
END
END
,CASE WHEN eai.Line2 IS NOT NULL AND eai.Line2 <> '' AND eai.Line3 IS NOT NULL AND eai.Line3 <> '' THEN eai.City + ' ' + eai.RegionCode + ' ' + eai.PostalCode ELSE NULL END
``` | The way to do this is with an UNPIVOT. Here is the solution:
```
With AddrTable as (
Select AddrFld, MailAddr From (
Select Cast(ISNULL([Line1], '') as Varchar(102)) as [A1],
Cast(ISNULL([Line2], '') as Varchar(102)) as [A2],
Cast(ISNULL([Line3], '') as Varchar(102)) as [A3],
Cast(ISNULL(LTRIM(RTRIM(City)),'') + ' ' + ISNULL(LTRIM(RTRIM(RegionCode)),'') + ' ' + ISNULL(LTRIM(RTRIM(PostalCode)),'') as Varchar(102)) as A4
From TableName Where UniqueID=@UniqueID) p
Unpivot (MailAddr For AddrFld in ([A1], [A2], [A3], [A4])) as unpvt)
Select Row_Number() over (Order by (Case Len(MailAddr) When 0 then 1 else 0 end), AddrFld) as RN,
MailAddr From AddrTable
Order By RN
```
Here's the output:
```
Address1
Westby WI 55555
-empty line-
-empty line-
```
Note that I had to use "Varchar(102)" as the field length (unpivot requires that all fields be the same) because your City/Region/Postal can have up to 102 chars in total. Also, note that "@UniqueID" is the identifier for the record whose address you need. This returns four and always four *rows* containing the data you need for your address.
***UPDATE:*** If you need to return this as a set of four *columns* rather than four rows, then just plop it into a view and then query the view with a *Pivot*. I've included the view here for completeness as I had to change the above just a bit when creating the view so the uniqueID field was included and no sort was done (the sort is now done in the query):
```
Create View AddressRows AS
With AddrTable as (
Select UniqueID, AddrFld, MailAddr From (
Select UniqueID,
Cast(ISNULL([Line1], '') as Varchar(102)) as [A1],
Cast(ISNULL([Line2], '') as Varchar(102)) as [A2],
Cast(ISNULL([Line3], '') as Varchar(102)) as [A3],
Cast(ISNULL(LTRIM(RTRIM(City)),'') + ' ' + ISNULL(LTRIM(RTRIM(RegionCode)),'') + ' ' + ISNULL(LTRIM(RTRIM(PostalCode)),'') as Varchar(102)) as A4
From TableName Where UniqueID=@UniqueID) p
Unpivot (MailAddr For AddrFld in ([A1], [A2], [A3], [A4])) as unpvt)
Select UniqueID,
Row_Number() over (Order by (Case Len(MailAddr) When 0 then 1 else 0 end), AddrFld) as RN,
MailAddr From AddrTable
```
And then, when you want to pull your matching "row" out, Pivot it back using this SQL (notice that I am querying again using UniqueID):
```
Select [Addr1], [Addr2], [Addr3], [Addr4] From (
Select Top 4 'Addr' + Cast(Row_Number() over (Order by RN) as Varchar(12)) as AddrCol, -- "Top 4" needed so we can sneak the "Order By" in
MailAddr
From AddressRows Where UniqueID=@UniqueID
) p PIVOT (Max([MailAddr]) for AddrCol in ([Addr1], [Addr2], [Addr3], [Addr4])
) as pvt
```
This returns:
```
Addr1 Addr2 Addr3 Addr4
-------------- ------------------ ------------- ------------------
Address1 Westby WI 54667
``` | Here's a three-minutes-invested solution:
```
DECLARE @address TABLE (
[Line1] [varchar](50) NULL,
[Line2] [varchar](50) NULL,
[Line3] [varchar](50) NULL,
[City] [varchar](50) NULL,
[State] [varchar] (2) NULL,
[PostalCode] [varchar](50) NULL
)
INSERT INTO @address (
[Line1],
[Line2],
[Line3],
[City],
[State],
[PostalCode]
)
VALUES (
NULL,
'123 Some Address',
NULL,
'Royal Oak',
'MI',
'45673-2312'
)
SELECT * FROM @address
SELECT
ISNULL(Line1 + CHAR(13), '')
+ ISNULL(Line2 + CHAR(13), '')
+ ISNULL(Line3 + CHAR(13), '')
+ ISNULL(City + ' ', '')
+ ISNULL([State] + ' ', '')
+ ISNULL(PostalCode, '')
FROM @address
```
Result:
```
123 Some Address
Royal Oak MI 45673-2312
```
Fiddle with the control characters until you get the result you need. | How can I improve this Mailing Address SQL Server SELECT Statement? | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I've been asked to provide Numpy & Scipy as python egg files. Unfortunately Numpy and Scipy do not make official releases of their product in .egg form for a Win32 platform - that means if I want eggs then I have to compile them myself.
At the moment my employer provides Visual Studio.Net 2003, which will compile no version of Numpy later than 1.1.1 - every version released subsequently cannot be compiled with VS2003.
What I'd really like is some other compiler I can use, perhaps for free, but at a push as a free time-limited trial... I can use that to compile the eggs. Is anybody aware of another compiler that I can get and use without paying anything and will definitely compile Numpy on Windows?
**Please only suggest something if you know for a fact that it will compile Numpy - no speculation!**
Thanks
Notes: I work for an organization which is very sensitive about legal matters, so everything I do has to be totally legit. I've got to do everything according to licensed terms, and will be audited.
My environment:
* Windows 32
* Standard C Python 2.4.4 | Try compiling the whole Python stack with [MinGW32.](http://www.mingw.org/) This is a GCC-Win32 development environment that can be used to build Python and a wide variety of software. You will probably have to compile the whole Python distribution with it. [Here](http://uucode.com/texts/python-mingw/python-mingw.html) is a guide to compiling Python with MinGW. Note that you will probably have to provide a python distribution that is compiled with MinGW32 as well.
If recompiling the Python distro is not a goer I believe that Python 2.4 is compiled using VS2003. You are probably stuck with back-porting Scipy and Numpy to VS2003 or paying a consultant to do it. I would dig out the relevant mailing lists or contact the maintainers and get some view of the effort that would be required to do it.
Another alternative would be to upgrade the version of Python to a more recent one but you will probably have to regression test your application and upgrade the version of Visual Studio to 2005 or 2008. | You could try [GCC for Windows](http://gcc.gnu.org/install/specific.html#windows). GCC is the compiler most often used for compiling Numpy/Scipy (or anything else, really) on Linux, so it seems reasonable that it should work on Windows too. (Never tried it myself, though)
And of course it's distributed under the GPL, so there shouldn't be any legal barriers. | Can I compile numpy & scipy as eggs for free on Windows32? | [
"",
"python",
"windows",
"numpy",
"scipy",
""
] |
I imagine most of you know what I am getting at. You start a new job and within the first week or so of scanning through the code you realize you are in yet another C shop that throws in the occasional stream or hapless user defined class here and there. You quickly realize that not only aren't you going to learn anything new but it is just a matter of time before you are asked not use certain things because no one else understands them and won't be able to maintain your work.
How often do you see some new technique on say, StackOverflow, only to realize that if you ever used it at work you would be met with bewilderment or annoyance at best?
In your experience are these places the norm or the exception?
How do (or would) you try to determine a group's level of sophistication and commitment to C++ in the interview setting? For instance I have tried asking questions about the company's use of things like STL, Boost, 3rd party libs, etc., but that only seems to get incrementally closer to the reality of situation you'll find once in it. Thoughts? | It's really all across the board. On one end of the spectrum, I've worked in one place where the code was recently rewritten in C. Recently being 10 years ago. Everyone was highly skeptical of this new-fangled technology.
Slightly farther down the spectrum, you'll find C programmers who happen to have compilers with C++ features enabled. They'll dabble in the "class" keyword. But other than that, they are practically writing in C.
Next down the line I think you'll find those who just groked inheritance and polymorphism. These are the kind of guys and gals that would swoon over an MFC or wxWidgets program, thinking it was the greatest.
You'll go a little farther and find the folks who just read Meyer. They are all about figuring out how to use STL efficiently. They've got a basic notion of templates that equates mostly to the way STL uses them.
Then you've got the Modern C++ Template Metaprogramming geeks. Many of us these days use their work in the form of boost and all its goodness. Few actually dork around with template metaprogramming. That's probably a good thing.
I have seen perfectly good code written at each level.
The cool thing I always liked about C++ is that at each level of added complexity, the cost could be conceptualized in terms of "how this new abstraction might be implemented in C". So I can conceptualize that inheritance and polymorphism is adding the cost of a function pointer lookup. Since I have seen this and written this in C, I know what I'm in for. I also have a sense of how templates work in part because of how the preprocessor works in C. This applies to most features in the C++ and C world.
Since it's easy to map concepts to how they are implemented in C, I can easily visualize how the underlying architecture runs my code. | If you read between the lines, you'll probably find a heck of a lot of information buried in their questions, both in terms of what they want you for and their level of sophistication.
You may also want to return their questions not only with answers, but with harder questions requiring clarification on ambiguities in their questions.
Just don't come off as an arrogant jerk when you do it. ;-) | How to determine the actual level of development in a shop, e.g. C++ vs. C? | [
"",
"c++",
""
] |
This used to work for me and then it failed. I want to return only those items that contain all the filters, not those that match at least one filter, as it is doing now. What is wrong here?
```
private IQueryable<IndexItem> FilteredIndex (IQueryable<IndexItem> index, IEnumerable<string> filters)
{
var filteredIndex= from f in filters.AsQueryable()
where f.Length>0
from i in index
where i.FilterNames.Contains(f)
select i;
return filteredIndex;
}
``` | How about something like:
```
foreach(string s in filters) {
if(s.Length == 0) continue;
string tmp = s; // because of "capture" problem
index = index.Where(i => i.FilterNames.Contains(tmp));
}
return index;
```
This applies a succession of `Where` clauses, covering all the filters - essentially `AND`. | Straightforward: for a given item from the index, check that the item contains every filter, then select all items from the index for which that condition holds.
```
index.Where(item =>
filters.All(filter => item.FilterNames.Contains(filter)))
```
I am not sure if the check for length greater than zero is required, but it is easily integrated.
```
index.Where(item =>
filters.All(filter =>
        (filter.Length == 0) || (item.FilterNames.Contains(filter))))
```
It works with LINQ to Objects and I guess it does what you want, but I am not sure if it works with LINQ to SQL. | Filtering a collection of items from contents of another collection | [
"",
"c#",
"linq",
"sorting",
"collections",
""
] |
I am using a PropertyGrid to show properties from my objects. However, I'm also allowing the user to create their own properties, and set values for these custom properties. Each object that can have these custom properties has a Dictionary collection, where the string is a unique key to identify the property, and Object is the value of a primitive type (string, bool, int etc..)
I've created a custom PropertyDescriptor with get and set methods that check the Dictionary for a matching key, or create/overwrite the value with a matching key respectively.
However, I also want to give the user the ability to clear the property, and thus remove its entry from the dictionary entirely. I'd put the code to to this in the ResetValue override method of my custom PropertyDescriptor, however I don't see any way through the PropertyGrid interface to envoke this? It doesn't seem to be a context menu option or something obvious like that.
So if I have a custom PropertyDescriptor with a custom ResetValue method, how do I actually evoke that method from a PropertyGrid? | I think the easiest way to achieve this is to add a contextmenu to your property grid, with a menu item "Reset", and then handling its click event like this:
```
private void resetToolStripMenuItem_Click(object sender, EventArgs e)
{
PropertyDescriptor pd = propGrid.SelectedGridItem.PropertyDescriptor;
pd.ResetValue(propGrid.SelectedObject);
}
```
I think Visual Studio does something like this. | Annotation:
The PropertyGrid.SelectedObject returns the wrong Value (component) in Childobjects.
Consequently the Method CanResetValue recived a incorrect component.
My Solution:
```
private void OnContextMenuOpening(object sender, CancelEventArgs e)
{
var lGrid = mCurrentControl as PropertyGrid;
if (lGrid != null)
{
var lItem = lGrid.SelectedGridItem;
        // SelectedObject cannot be used for child properties
        // Component is an internal property of the class System.Windows.Forms.PropertyGridInternal.GridEntry
        // ((System.Windows.Forms.PropertyGridInternal.GridEntry)(lItem)).Component
        // Access via reflection
var lComponent = lItem.GetType().GetProperty("Component").GetValue(lItem, null);
if (lComponent != null)
tsmi_Reset.Enabled = lItem.PropertyDescriptor.CanResetValue(lComponent);
else
tsmi_Reset.Enabled = lItem.PropertyDescriptor.CanResetValue(lGrid.SelectedObject);
}
}
// Contextmenu -> Reset
private void OnResetProperty(object sender, EventArgs e)
{
var lGrid = mCurrentControl as PropertyGrid;
if (lGrid != null)
lGrid.ResetSelectedProperty();
}
``` | Resetting properties from a property grid | [
"",
"c#",
"propertygrid",
"propertydescriptor",
""
] |
Is there any way to tell JUnit to run a specific test case multiple times with different data before going on to the next test case? | Take a look at [JUnit 4.4 theories](http://junit.sourceforge.net/doc/ReleaseNotes4.4.html):
```
import org.junit.Test;
import org.junit.experimental.theories.*;
import org.junit.runner.RunWith;
@RunWith(Theories.class)
public class PrimeTest {
@Theory
public void isPrime(int candidate) {
// called with candidate=1, candidate=2, etc etc
}
public static @DataPoints int[] candidates = {1, 2, 3, 4, 5};
}
``` | It sounds like that is a perfect candidate for parametrized tests.
But, basically, parametrized tests allow you to run the same set of tests on different data.
Here are some good blog posts about it:
* [Writing a parameterized JUnit test](http://ourcraft.wordpress.com/2008/08/27/writing-a-parameterized-junit-test/)
* [Unit Testing with JUnit 4.0](http://java-x.blogspot.com/2007/01/unit-testing-with-junit-40.html). | Running the same JUnit test case multiple time with different data | [
"",
"java",
"junit",
""
] |
According to [this post](http://readerszone.com/microsoft/internet-explorer/ie8-beta2-jscript-features.html) it was in the beta, but it's not in the release? | Even better for fallback is this:
```
var alertFallback = true;
if (typeof console === "undefined" || typeof console.log === "undefined") {
console = {};
if (alertFallback) {
console.log = function(msg) {
alert(msg);
};
} else {
console.log = function() {};
}
}
``` | console.log is only available after you have opened the Developer Tools (F12 to toggle it open and closed).
Funny thing is that after you've opened it, you can close it, then still post to it via console.log calls, and those will be seen when you reopen it.
I'm thinking that is a bug of sorts, and may be fixed, but we shall see.
I'll probably just use something like this:
```
function trace(s) {
if ('console' in self && 'log' in console) console.log(s)
// the line below you might want to comment out, so it dies silent
// but nice for seeing when the console is available or not.
else alert(s)
}
```
and even simpler:
```
function trace(s) {
try { console.log(s) } catch (e) { alert(s) }
}
``` | What happened to console.log in IE8? | [
"",
"javascript",
"logging",
"internet-explorer-8",
"console",
""
] |
I am trying to create a delegate (as a test) for:
```
Public Overridable ReadOnly Property PropertyName() As String
```
My intuitive attempt was declaring the delegate like this:
```
Public Delegate Function Test() As String
```
And instantiating like this:
```
Dim t As Test = AddressOf e.PropertyName
```
But this throws the error:
> Method 'Public Overridable ReadOnly Property PropertyName() As
> String' does not have a signature
> compatible with delegate 'Delegate
> Function Test() As String'.
So because I was dealing with a property I tried this:
```
Public Delegate Property Test() As String
```
But this throws a compiler error.
So the question is, how do I make a delegate for a property?
---
See this link:
<http://peisker.net/dotnet/propertydelegates.htm> | Re the problem using AddressOf - if you know the prop-name at compile time, you can (in C#, at least) use an anon-method / lambda:
```
Test t = delegate { return e.PropertyName; }; // C# 2.0
Test t = () => e.PropertyName; // C# 3.0
```
I'm not a VB expert, but reflector claims this is the same as:
```
Dim t As Test = Function() e.PropertyName
```
Does that work?
---
Original answer:
You create delegates for properties with `Delegate.CreateDelegate`; this can be open for any instance of the type, or fixed for a single instance - and can be for the getter or setter; I'll give an example in C#...
```
using System;
using System.Reflection;
class Foo
{
public string Bar { get; set; }
}
class Program
{
static void Main()
{
PropertyInfo prop = typeof(Foo).GetProperty("Bar");
Foo foo = new Foo();
// create an open "getter" delegate
Func<Foo, string> getForAnyFoo = (Func<Foo, string>)
Delegate.CreateDelegate(typeof(Func<Foo, string>), null,
prop.GetGetMethod());
Func<string> getForFixedFoo = (Func<string>)
Delegate.CreateDelegate(typeof(Func<string>), foo,
prop.GetGetMethod());
Action<Foo,string> setForAnyFoo = (Action<Foo,string>)
Delegate.CreateDelegate(typeof(Action<Foo, string>), null,
prop.GetSetMethod());
Action<string> setForFixedFoo = (Action<string>)
Delegate.CreateDelegate(typeof(Action<string>), foo,
prop.GetSetMethod());
setForAnyFoo(foo, "abc");
Console.WriteLine(getForAnyFoo(foo));
setForFixedFoo("def");
Console.WriteLine(getForFixedFoo());
}
}
I just created a helper with pretty good performance:
<http://thibaud60.blogspot.com/2010/10/fast-property-accessor-without-dynamic.html>
It doesn't use the IL/Emit approach and it is very fast!
**Edit by oscilatingcretin 2015/10/23**
The source contains some casing issues and peculiar `=""` that have to be removed. Before link rot sets in, I thought I'd post a cleaned-up version of the source for easy copy pasta, as well as an example of how to use it.
**Revised source**
```
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Reflection;
namespace Tools.Reflection
{
public interface IPropertyAccessor
{
PropertyInfo PropertyInfo { get; }
object GetValue(object source);
void SetValue(object source, object value);
}
public static class PropertyInfoHelper
{
private static ConcurrentDictionary<PropertyInfo, IPropertyAccessor> _cache =
new ConcurrentDictionary<PropertyInfo, IPropertyAccessor>();
public static IPropertyAccessor GetAccessor(PropertyInfo propertyInfo)
{
IPropertyAccessor result = null;
if (!_cache.TryGetValue(propertyInfo, out result))
{
result = CreateAccessor(propertyInfo);
            _cache.TryAdd(propertyInfo, result);
}
return result;
}
public static IPropertyAccessor CreateAccessor(PropertyInfo PropertyInfo)
{
var GenType = typeof(PropertyWrapper<,>)
.MakeGenericType(PropertyInfo.DeclaringType, PropertyInfo.PropertyType);
return (IPropertyAccessor)Activator.CreateInstance(GenType, PropertyInfo);
}
}
internal class PropertyWrapper<TObject, TValue> : IPropertyAccessor where TObject : class
{
private Func<TObject, TValue> Getter;
private Action<TObject, TValue> Setter;
public PropertyWrapper(PropertyInfo PropertyInfo)
{
this.PropertyInfo = PropertyInfo;
MethodInfo GetterInfo = PropertyInfo.GetGetMethod(true);
MethodInfo SetterInfo = PropertyInfo.GetSetMethod(true);
Getter = (Func<TObject, TValue>)Delegate.CreateDelegate
(typeof(Func<TObject, TValue>), GetterInfo);
Setter = (Action<TObject, TValue>)Delegate.CreateDelegate
(typeof(Action<TObject, TValue>), SetterInfo);
}
object IPropertyAccessor.GetValue(object source)
{
return Getter(source as TObject);
}
void IPropertyAccessor.SetValue(object source, object value)
{
Setter(source as TObject, (TValue)value);
}
public PropertyInfo PropertyInfo { get; private set; }
}
}
```
**Use it like this:**
```
public class MyClass
{
public int Id { get; set; }
public string FirstName { get; set; }
public string LastName { get; set; }
public int Age { get; set; }
}
MyClass e = new MyClass();
IPropertyAccessor[] Accessors = e.GetType().GetProperties()
.Select(pi => PropertyInfoHelper.CreateAccessor(pi)).ToArray();
foreach (var Accessor in Accessors)
{
Type pt = Accessor.PropertyInfo.PropertyType;
if (pt == typeof(string))
Accessor.SetValue(e, Guid.NewGuid().ToString("n").Substring(0, 9));
else if (pt == typeof(int))
Accessor.SetValue(e, new Random().Next(0, int.MaxValue));
Console.WriteLine(string.Format("{0}:{1}",
Accessor.PropertyInfo.Name, Accessor.GetValue(e)));
}
``` | How do I create a delegate for a .NET property? | [
"",
"c#",
"vb.net",
"delegates",
""
] |
How can I detect that a client has disconnected from my server?
I have the following code in my `AcceptCallBack` method
```
static Socket handler = null;
public static void AcceptCallback(IAsyncResult ar)
{
//Accept incoming connection
Socket listener = (Socket)ar.AsyncState;
handler = listener.EndAccept(ar);
}
```
I need to find a way to discover as soon as possible that the client has disconnected from the `handler` Socket.
I've tried:
1. `handler.Available;`
2. `handler.Send(new byte[1], 0,
SocketFlags.None);`
3. `handler.Receive(new byte[1], 0,
SocketFlags.None);`
The above approaches work when you are connecting to a server and want to detect when the server disconnects but they do not work *when you are the server and want to detect client disconnection.*
Any help will be appreciated. | Since there are no events available to signal when the socket is disconnected, you will have to poll it at a frequency that is acceptable to you.
Using this extension method, you get a reliable way to detect whether a socket is disconnected.
```
static class SocketExtensions
{
public static bool IsConnected(this Socket socket)
{
try
{
return !(socket.Poll(1, SelectMode.SelectRead) && socket.Available == 0);
}
catch (SocketException) { return false; }
}
}
``` | Someone mentioned keepAlive capability of TCP Socket.
Here it is nicely described:
<http://tldp.org/HOWTO/TCP-Keepalive-HOWTO/overview.html>
I'm using it this way: after the socket is connected, I'm calling this function, which sets keepAlive on. The `keepAliveTime` parameter specifies the timeout, in milliseconds, with no activity until the first keep-alive packet is sent. The `keepAliveInterval` parameter specifies the interval, in milliseconds, between when successive keep-alive packets are sent if no acknowledgement is received.
```
void SetKeepAlive(bool on, uint keepAliveTime, uint keepAliveInterval)
{
int size = Marshal.SizeOf(new uint());
var inOptionValues = new byte[size * 3];
BitConverter.GetBytes((uint)(on ? 1 : 0)).CopyTo(inOptionValues, 0);
BitConverter.GetBytes((uint)keepAliveTime).CopyTo(inOptionValues, size);
BitConverter.GetBytes((uint)keepAliveInterval).CopyTo(inOptionValues, size * 2);
socket.IOControl(IOControlCode.KeepAliveValues, inOptionValues, null);
}
```
I'm also using asynchronous reading:
```
socket.BeginReceive(packet.dataBuffer, 0, 128,
SocketFlags.None, new AsyncCallback(OnDataReceived), packet);
```
And in callback, here is caught timeout `SocketException`, which raises when socket doesn't get ACK signal after keep-alive packet.
```
public void OnDataReceived(IAsyncResult asyn)
{
try
{
SocketPacket theSockId = (SocketPacket)asyn.AsyncState;
int iRx = socket.EndReceive(asyn);
}
catch (SocketException ex)
{
SocketExceptionCaught(ex);
}
}
```
This way, I'm able to safely detect disconnection between TCP client and server. | Instantly detect client disconnection from server socket | [
"",
".net",
"c#",
"sockets",
"tcp",
"connection",
""
] |
I am using this XML file:
```
<root>
<level1 name="A">
<level2 name="A1" />
<level2 name="A2" />
</level1>
<level1 name="B">
<level2 name="B1" />
<level2 name="B2" />
</level1>
<level1 name="C" />
</root>
```
Could someone give me a C# code using LINQ, the simplest way to print this result:
(Note the extra space if it is a `level2` node)
```
A
A1
A2
B
B1
B2
C
```
Currently I have written this code:
```
XDocument xdoc = XDocument.Load("data.xml"));
var lv1s = from lv1 in xdoc.Descendants("level1")
select lv1.Attribute("name").Value;
foreach (var lv1 in lv1s)
{
result.AppendLine(lv1);
var lv2s = from lv2 in xdoc...???
}
``` | Try this.
```
using System.Xml.Linq;
void Main()
{
StringBuilder result = new StringBuilder();
//Load xml
XDocument xdoc = XDocument.Load("data.xml");
//Run query
var lv1s = from lv1 in xdoc.Descendants("level1")
select new {
Header = lv1.Attribute("name").Value,
Children = lv1.Descendants("level2")
};
//Loop through results
foreach (var lv1 in lv1s){
result.AppendLine(lv1.Header);
foreach(var lv2 in lv1.Children)
result.AppendLine(" " + lv2.Attribute("name").Value);
}
Console.WriteLine(result);
}
``` | Or, if you want a more general approach - i.e. for nesting up to "levelN":
```
void Main()
{
XElement rootElement = XElement.Load(@"c:\events\test.xml");
Console.WriteLine(GetOutline(0, rootElement));
}
private string GetOutline(int indentLevel, XElement element)
{
StringBuilder result = new StringBuilder();
if (element.Attribute("name") != null)
{
result = result.AppendLine(new string(' ', indentLevel * 2) + element.Attribute("name").Value);
}
foreach (XElement childElement in element.Elements())
{
result.Append(GetOutline(indentLevel + 1, childElement));
}
return result.ToString();
}
``` | LINQ to read XML | [
"",
"c#",
"xml",
"linq",
"linq-to-xml",
""
] |
I inherited an assembly with MSTest, but these tests were run using nunit-console on the build machine (not sure how it worked). So I decided to sort it out and change them to proper NUnit tests, but now nunit-console (or gui) can't find any tests. They run just fine using ReSharper test runner though. Any idea what could be missing? | Thanks for help, everyone. Upgrading to the latest NUnit framework fixed the problem (all the obvious things you suggested were OK).
Frederik Gheysels, you can try that as well I guess. | Check:
* Is the class public?
* Does it have a public parameterless constructor (e.g. the default one if you don't specify any other constructors)
* Does it have the `[TestFixture]` attribute at the class level?
* Is each test public?
* Does each test have the `[Test]` attribute?
* Is each test parameterless?
I believe *some* versions of NUnit were able to find tests based on their names, e.g. `TestFooBarBaz()` without the `[Test]` attribute, but I don't know what the state of this is now - it could explain the discrepancy though. | NUnit doesn't find tests in assembly | [
"",
"c#",
"nunit",
"mstest",
""
] |
I am writing a directory scanner in .NET.
For each File/Dir I need the following info.
```
class Info {
public bool IsDirectory;
public string Path;
public DateTime ModifiedDate;
public DateTime CreatedDate;
}
```
I have this function:
```
static List<Info> RecursiveMovieFolderScan(string path){
var info = new List<Info>();
var dirInfo = new DirectoryInfo(path);
foreach (var dir in dirInfo.GetDirectories()) {
info.Add(new Info() {
IsDirectory = true,
CreatedDate = dir.CreationTimeUtc,
ModifiedDate = dir.LastWriteTimeUtc,
Path = dir.FullName
});
info.AddRange(RecursiveMovieFolderScan(dir.FullName));
}
foreach (var file in dirInfo.GetFiles()) {
info.Add(new Info()
{
IsDirectory = false,
CreatedDate = file.CreationTimeUtc,
ModifiedDate = file.LastWriteTimeUtc,
Path = file.FullName
});
}
return info;
}
```
Turns out this implementation is quite slow. Is there any way to speed this up? I'm thinking of hand coding this with FindFirstFileW but would like to avoid that if there is a built in way that is faster. | This implementation, which needs a bit of tweaking is 5-10X faster.
```
static List<Info> RecursiveScan2(string directory) {
IntPtr INVALID_HANDLE_VALUE = new IntPtr(-1);
WIN32_FIND_DATAW findData;
IntPtr findHandle = INVALID_HANDLE_VALUE;
var info = new List<Info>();
try {
findHandle = FindFirstFileW(directory + @"\*", out findData);
if (findHandle != INVALID_HANDLE_VALUE) {
do {
if (findData.cFileName == "." || findData.cFileName == "..") continue;
string fullpath = directory + (directory.EndsWith("\\") ? "" : "\\") + findData.cFileName;
bool isDir = false;
if ((findData.dwFileAttributes & FileAttributes.Directory) != 0) {
isDir = true;
info.AddRange(RecursiveScan2(fullpath));
}
info.Add(new Info()
{
CreatedDate = findData.ftCreationTime.ToDateTime(),
ModifiedDate = findData.ftLastWriteTime.ToDateTime(),
IsDirectory = isDir,
Path = fullpath
});
}
while (FindNextFile(findHandle, out findData));
}
} finally {
if (findHandle != INVALID_HANDLE_VALUE) FindClose(findHandle);
}
return info;
}
```
extension method:
```
public static class FILETIMEExtensions {
public static DateTime ToDateTime(this System.Runtime.InteropServices.ComTypes.FILETIME filetime ) {
long highBits = filetime.dwHighDateTime;
highBits = highBits << 32;
return DateTime.FromFileTimeUtc(highBits + (long)filetime.dwLowDateTime);
}
}
```
interop defs are:
```
[DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
public static extern IntPtr FindFirstFileW(string lpFileName, out WIN32_FIND_DATAW lpFindFileData);
[DllImport("kernel32.dll", CharSet = CharSet.Unicode)]
public static extern bool FindNextFile(IntPtr hFindFile, out WIN32_FIND_DATAW lpFindFileData);
[DllImport("kernel32.dll")]
public static extern bool FindClose(IntPtr hFindFile);
[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
public struct WIN32_FIND_DATAW {
public FileAttributes dwFileAttributes;
internal System.Runtime.InteropServices.ComTypes.FILETIME ftCreationTime;
internal System.Runtime.InteropServices.ComTypes.FILETIME ftLastAccessTime;
internal System.Runtime.InteropServices.ComTypes.FILETIME ftLastWriteTime;
public int nFileSizeHigh;
public int nFileSizeLow;
public int dwReserved0;
public int dwReserved1;
[MarshalAs(UnmanagedType.ByValTStr, SizeConst = 260)]
public string cFileName;
[MarshalAs(UnmanagedType.ByValTStr, SizeConst = 14)]
public string cAlternateFileName;
}
There is a long history of the .NET file enumeration methods being slow. The issue is there is not an instantaneous way of enumerating large directory structures. Even the accepted answer here has its issues with GC allocations.
The best I've been able to do is wrapped up in my library and exposed as the [FindFile](http://help.csharptest.net/?CSharpTest.Net.Library~CSharpTest.Net.IO.FindFile_members.html) ([source](http://code.google.com/p/csharptest-net/source/browse/src/Library/IO/FindFile.cs)) class in the [CSharpTest.Net.IO namespace](http://help.csharptest.net/?CSharpTest.Net.Library~CSharpTest.Net.IO_namespace.html). This class can enumerate files and folders without unneeded GC allocations and string marshaling.
The usage is simple enough, and the RaiseOnAccessDenied property will skip the directories and files the user does not have access to:
```
private static long SizeOf(string directory)
{
var fcounter = new CSharpTest.Net.IO.FindFile(directory, "*", true, true, true);
fcounter.RaiseOnAccessDenied = false;
long size = 0, total = 0;
fcounter.FileFound +=
(o, e) =>
{
if (!e.IsDirectory)
{
Interlocked.Increment(ref total);
size += e.Length;
}
};
Stopwatch sw = Stopwatch.StartNew();
fcounter.Find();
Console.WriteLine("Enumerated {0:n0} files totaling {1:n0} bytes in {2:n3} seconds.",
total, size, sw.Elapsed.TotalSeconds);
return size;
}
```
For my local C:\ drive this outputs the following:
> Enumerated 810,046 files totaling 307,707,792,662 bytes in 232.876 seconds.
Your mileage may vary by drive speed, but this is the fastest method I've found of enumerating files in managed code. The event parameter is a mutating class of type [FindFile.FileFoundEventArgs](http://code.google.com/p/csharptest-net/source/browse/src/Library/IO/FindFile.cs), so be sure you do not keep a reference to it, as its values will change for each event raised.
You might also note that the `DateTime`s exposed are only in UTC. The reason is that the conversion to local time is semi-expensive. You might consider using UTC times to improve performance rather than converting these to local time.
"",
"c#",
".net",
"filesystems",
""
] |
I'm trying to write a lookup method for determining a SMS message to send to a user based on a few parameters associated with the user/system. We will have a default message that will be used as a last resort, but there are multiple ways to override the message by various parameters. Here is what I have so far for the lookup query - are there any better ways to do this? Maybe a lookup is not the right approach for this?
Here is the lookup table:
```
MessageLookup
{
ID bigint PK
Key varchar
CampaignTypeID bigint FK,
ServiceProviderID bigint FK nullable, -- optional override parameter
DistributorID bigint FK nullable, -- optional override parameter
CampaignID bigint FK nullable, -- optional override parameter
Message varchar
}
```
Here is an example of what the table would look like:
```
ID Key CTID SPID DistID CampID Message
1 Help 1 NULL NULL NULL 'This is the default message'
2 Help 1 375 NULL NULL 'This is the SP375 message'
3 Help 1 377 NULL NULL 'This is the SP377 message'
4 Help 1 NULL 13 NULL 'This is the Dist13 message'
5 Help 1 375 13 NULL 'This is the SP375/Dist13 message'
6 Help 1 NULL 13 500 'This is the Dist13/Camp500 message'
7 Help 1 375 13 500 'This is the SP375/Dist13/Camp500 msg'
8 Help 1 NULL NULL 500 'This is the Camp500 help message'
```
Here is the query I have:
```
select
--top 1
*
from MessageLookup ml
where ml.[Key] = @Key
and ml.CampaignTypeID = @CampaignTypeID
and
(
ml.ServiceProviderID = @ServiceProviderID or
ml.ServiceProviderID is null
)
and
(
ml.DistributorID = @DistributorID or
ml.DistributorID is null
)
and
(
ml.CampaignID = @CampaignID or
ml.CampaignID is null
)
order by
CampaignID desc, -- highest precedence lookup param
DistributorID desc,
ServiceProviderID desc -- lowest precedence lookup param
``` | I'm not sure what the best way is, but here are some alternatives:
One thought would be to store a pattern with each rule, like so:
```
ID Key CTID Rule Message
1 Help 1 '[%:%:%]' 'This is the default message'
2 Help 1 '[375:%:%]' 'This is the SP375 message'
3 Help 1 '[377:%:%]' 'This is the SP377 message'
4 Help 1 '[%:13:%]' 'This is the Dist13 message'
5 Help 1 '[375:13:%]' 'This is the SP375/Dist13 message'
```
and then use a LIKE test instead of all the ANDs.
Another thought would be to use OUTER JOINS.
Or (playing off the answer that just came in) to DRY things up further by writing:
```
where ml.[Key] = @Key
and ml.CampaignTypeID = @CampaignTypeID
-- T-SQL's IsNull() can't wrap a boolean comparison, so spell each null-check out
and (ml.ServiceProviderID is null or ml.ServiceProviderID = @ServiceProviderID)
and (ml.DistributorID is null or ml.DistributorID = @DistributorID)
and (ml.CampaignID is null or ml.CampaignID = @CampaignID)
``` | I think that's a valid approach, easy to extend, intent is pretty clear, and you can tidy the sql by doing the following
```
select
--top 1
*
from MessageLookup ml
where ml.[Key] = @Key
and ml.CampaignTypeID = @CampaignTypeID
and ml.ServiceProviderID = IsNull(@ServiceProviderID, ml.ServiceProviderID)
and ml.DistributorID = IsNull(@DistributorID, ml.DistributorID)
and ml.CampaignID = IsNull(@CampaignID, ml.CampaignID)
....
``` | Sparsely populated lookup table in SQL | [
"",
"sql",
"lookup",
""
] |
So a using statement automatically calls the dispose method on the object that is being "used", when the using block is exited, right?
But when is this necessary/beneficial?
For example let's say you have this method:
```
public void DoSomething()
{
using (Font font1 = new Font("Arial", 10.0f))
{
// Draw some text here
}
}
```
Is it necessary to have the using statement here, since the object is created in the method? When the method exits, won't the Font object be disposed of anyway?
Or does the Dispose method get run at another time after the method exits?
For example if the method was like this:
```
public void DoSomething()
{
Font font1 = new Font("Arial", 10.0f);
// Draw some text here
}
// Is everything disposed or cleared after the method has finished running?
``` | The 'using' statement is most useful when working with unmanaged objects, like database connections.
In this way, the connection is closed and disposed no matter what happens in the code block.
For more discussion, see this article on CodeProject:
<http://www.codeproject.com/KB/cs/tinguusingstatement.aspx> | Without `using` (or manually calling `Dispose()`), the object will eventually be disposed of, just not at a deterministic time. That is, it can happen straight away, in two days' time, or (in some cases) never.
For things like, say, network connections, you want the connection to close when you're done with it, not "whenever", otherwise it'll idle around hogging up a socket.
Further, for things like mutex locks, you do not want those to be released "whenever", or else deadlock is likely to result. | When are C# "using" statements most useful? | [
"",
"c#",
"using-statement",
""
] |
I want to work with unsigned 8-bit variables in C++. Either `unsigned char` or `uint8_t` do the trick as far as the arithmetic is concerned (which is expected, since AFAIK `uint8_t` is just an alias for `unsigned char`, or at least that's how the debugger presents it).
The problem is that if I print out the variables using ostream in C++ it treats it as char. If I have:
```
unsigned char a = 0;
unsigned char b = 0xff;
cout << "a is " << hex << a <<"; b is " << hex << b << endl;
```
then the output is:
```
a is ^@; b is 377
```
instead of
```
a is 0; b is ff
```
I tried using `uint8_t`, but as I mentioned before, that's typedef'ed to `unsigned char`, so it does the same. How can I print my variables correctly?
**Edit:** I do this in many places throughout my code. Is there any way I can do this *without* casting to `int` each time I want to print? | I would suggest using the following technique:
```
struct HexCharStruct
{
unsigned char c;
HexCharStruct(unsigned char _c) : c(_c) { }
};
inline std::ostream& operator<<(std::ostream& o, const HexCharStruct& hs)
{
return (o << std::hex << (int)hs.c);
}
inline HexCharStruct hex(unsigned char _c)
{
return HexCharStruct(_c);
}
int main()
{
char a = 131;
std::cout << hex(a) << std::endl;
}
```
It's short to write, has the same efficiency as the original solution and it lets you choose to use the "original" character output. And it's type-safe (not using "evil" macros :-)) | Use:
```
cout << "a is " << hex << (int) a <<"; b is " << hex << (int) b << endl;
```
And if you want padding with leading zeros then:
```
#include <iomanip>
...
cout << "a is " << setw(2) << setfill('0') << hex << (int) a ;
```
As we are using C-style casts, why not go the whole hog with terminal C++ badness and use a macro!
```
#define HEX( x ) \
setw(2) << setfill('0') << hex << (int)( x )
```
you can then say
```
cout << "a is " << HEX( a );
```
**Edit:** Having said that, MartinStettner's solution is much nicer! | how do I print an unsigned char as hex in c++ using ostream? | [
"",
"c++",
"formatting",
"cout",
"ostream",
"unsigned-char",
""
] |
Does anyone have an exhaustive list of the names that C#/CLR gives to operators? (Maybe my lack of sleep is kicking in, but I can't seem to find it on Google.) E.g. op\_Addition, op\_Subtraction. Furthermore, is there any chance that these would be different in other cultures?
I am trying to create a class that can add/subtract etc. two objects and I have done all the primitives - I just need to do the 'rest'.
Many thanks. | Here is the full list of [C# overloadable operators](https://learn.microsoft.com/en-us/dotnet/csharp/programming-guide/statements-expressions-operators/overloadable-operators)
You can find a list of the operator Metadata/Generated MSIL names under [Framework Design Guidelines -> Operator Overloads](https://learn.microsoft.com/en-us/dotnet/standard/design-guidelines/operator-overloads).
There is a different [F# operator overload list](https://learn.microsoft.com/en-us/dotnet/fsharp/language-reference/operator-overloading#overloaded-operator-names).
Finally, refer to [ECMA-335 Common Language
Infrastructure (CLI)](https://www.ecma-international.org/publications/files/ECMA-ST/ECMA-335.pdf) I.10.3 Operator overloading, where the operators for C++/CLI are listed. | Based on the *Expression* class:
```
== op_Equality
!= op_Inequality
> op_GreaterThan
< op_LessThan
>= op_GreaterThanOrEqual
<= op_LessThanOrEqual
& op_BitwiseAnd
| op_BitwiseOr
+ op_Addition
- op_Subtraction
/ op_Division
% op_Modulus
* op_Multiply
<< op_LeftShift
>> op_RightShift
^ op_ExclusiveOr
- op_UnaryNegation
+ op_UnaryPlus
! op_LogicalNot
~ op_OnesComplement
op_False
op_True
++ op_Increment
-- op_Decrement
``` | Method Names for Operator Methods in C# | [
"",
"c#",
"operators",
"operator-overloading",
""
] |
I was going to post a question, but figured it out ahead of time and decided to post the question and the answer - or at least my observations.
When using an anonymous delegate as the WaitCallback, where ThreadPool.QueueUserWorkItem is called in a foreach loop, it appears that the same foreach value is passed into each thread.
```
List< Thing > things = MyDb.GetTheThings();
foreach( Thing t in things )
{
localLogger.DebugFormat( "About to queue thing [{0}].", t.Id );
ThreadPool.QueueUserWorkItem(
delegate
{
try
{
WorkWithOneThing( t );
}
finally
{
Cleanup();
localLogger.DebugFormat("Thing [{0}] has been queued and run by the delegate.", t.Id );
}
});
}
```
For a collection of 16 Thing instances in 'things', I observed that each 'Thing' passed to WorkWithOneThing corresponded to the last item in the 'things' list.
I suspect this is because the delegate is accessing the 't' outer variable. Note that I also experimented with passing the Thing as a parameter to the anonymous delegate, but the behavior remained incorrect.
When I re-factored the code to use a named WaitCallback method and passed the Thing 't' to the method, voilà ... the i'th instance of Things was correctly passed into WorkWithOneThing.
A lesson in parallelism I guess. I also imagine that the Parallel.For family addresses this, but that library was not an option for us at this point.
Hope this saves someone else some time.
Howard Hoffman | This is correct, and describes how C# captures outside variables inside closures. It's not directly an issue about parallelism, but rather about anonymous methods and lambda expressions.
[This question](https://stackoverflow.com/questions/512166/cthe-foreach-identifier-and-closures) discusses this language feature and its implications in detail. | Below is a link detailing why that happens. It's written for VB but C# has the same semantics.
<http://blogs.msdn.com/jaredpar/archive/2007/07/26/closures-in-vb-part-5-looping.aspx> | Using Anonymous Delegates with .NET ThreadPool.QueueUserWorkItem | [
"",
"c#",
"closures",
""
] |
Is there any Java compiler flag that one can pass to tell the compiler to disallow the use of raw types? That is, for any generic class, let the compiler force that the parameterized version be used, and throw a compilation error otherwise? | JDK7 (b38) introduces [`-Xlint:rawtypes`](https://bugs.java.com/bugdatabase/view_bug?bug_id=6747671). As mentioned above, `-Xlint:unchecked` warns about unchecked conversions.
Maurizio Cimadamore of the javac team wrote a [weblog entry](http://blogs.oracle.com/mcimadamore/entry/diagnosing_raw_types) about it. | You can configure the use of raw types within Eclipse to be a warning or an error.
It's under Preferences / Java / Compiler / Errors and Warnings / Generic types / Usage of a raw type.
`javac` doesn't have anything like this as far as I'm aware - even with `-Xlint:all` you don't get a warning for something like:
```
ArrayList x = new ArrayList();
``` | Javac flag to disallow raw types? | [
"",
"java",
"generics",
"javac",
""
] |
My TCP client receives an image within a packet. The image is compressed with zlib. The task is to decompress the image and display it on the form.
I'm planning to save the compressed image in the current directory, decompress it and load the decompressed file on the form.
The first problem comes with saving the (compressed) file; the zlib stream can only save it decompressed.
The code below loads the compressed file and saves it after decompression.
```
private void decompressFile(string inFile, string outFile)
{
System.IO.FileStream outFileStream = new System.IO.FileStream(outFile, System.IO.FileMode.Create);
zlib.ZOutputStream outZStream = new zlib.ZOutputStream(outFileStream);
System.IO.FileStream inFileStream = new System.IO.FileStream(inFile, System.IO.FileMode.Open);
try
{
CopyStream(inFileStream, outZStream);
}
finally
{
outZStream.Close();
outFileStream.Close();
inFileStream.Close();
}
}
public static void CopyStream(System.IO.Stream input, System.IO.Stream output)
{
byte[] buffer = new byte[2000];
int len;
while ((len = input.Read(buffer, 0, 2000)) > 0)
{
output.Write(buffer, 0, len);
}
output.Flush();
}
```
How to pass the byte[] array directly to that function?
I'm planning to save it compressed and then call the function with the location of the compressed file, but I know neither how to save a file from a byte[] array nor how to pass the byte[] array as the input file.
Any help will be highly appreciated.
Thanks. | Use the static void System.IO.File.WriteAllBytes(string path, byte[] bytes) method.
```
byte[] buffer = new byte[200];
File.WriteAllBytes(@"c:\data.dmp", buffer);
``` | ```
public static void SaveFile(this Byte[] fileBytes, string fileName)
{
using (FileStream fileStream = new FileStream(fileName, FileMode.Create, FileAccess.ReadWrite))
{
    fileStream.Write(fileBytes, 0, fileBytes.Length);
}
}
``` | Save file from a byte[] in C# NET 3.5 | [
"",
"c#",
".net-3.5",
"compression",
"zlib",
""
] |
I am using PHP, AS3 and MySQL.
I have a website. A Flash (AS3) website. The Flash website stores the members' information in a MySQL database through PHP. In the "members" table, I have "id" as the primary key and "username" as a unique field.
Now my situation is:
When Flash wants to display a member's profile. My questions:
1. Should Flash pass the member "ID" or "username" to php to process the mysql query?
2. Is there any different passing the "id" or "username"?
3. Which one is more secure?
4. Which one you recommend?
I would like to optimize my website in terms of security and performance. | 1) Neither is inarguably the thing it *should* do.
2) The ID is probably shorter and marginally faster to look up. The ID gives away slightly more information about your system; if you know that a site uses serial IDs at all, and you know what one of them is, that's pretty much as good as knowing all of them, whereas knowing one username does not tell you the usernames of any other users. On the other hand, the username is more revelatory of the user's psychology and may constitute a password hint.
3) Both have extremely marginal downfalls, as described in item 2.
4) I'd use the ID. | The primary key is always the safest method for identifying database rows. For instance, you may later change your mind and allow duplicate usernames.
Depending on how your ActionScript is communicating with PHP, it will likely also require sending fewer bytes if you send an integer ID in your request rather than a username. | Should I use "id" or "unique username"? | [
"",
"php",
"mysql",
"database",
"flash",
"surrogate-key",
""
] |
Is there a way to find out which tables are used by a stored procedure by running an SQL query?
Best regards, and thanks for the help.
P.S.: I'm using SQL Server 2005. | This article on TechRepublic
[Finding dependencies in SQL Server 2005](https://web.archive.org/web/1/http://blogs.techrepublic%2ecom%2ecom/datacenter/?p=277)
describes a way to do that:
> This tutorial will show how you can
> write a procedure that will look up
> all of the objects that are dependent
> upon other objects.
Here is the code to create the system stored procedure for finding object dependencies:
```
USE master
GO
CREATE PROCEDURE sp_FindDependencies
(
@ObjectName SYSNAME,
@ObjectType VARCHAR(5) = NULL
)
AS
BEGIN
DECLARE @ObjectID AS BIGINT
SELECT TOP(1) @ObjectID = object_id
FROM sys.objects
WHERE name = @ObjectName
AND type = ISNULL(@ObjectType, type)
SET NOCOUNT ON ;
WITH DependentObjectCTE (DependentObjectID, DependentObjectName, ReferencedObjectName, ReferencedObjectID)
AS
(
SELECT DISTINCT
sd.object_id,
OBJECT_NAME(sd.object_id),
ReferencedObject = OBJECT_NAME(sd.referenced_major_id),
ReferencedObjectID = sd.referenced_major_id
FROM
sys.sql_dependencies sd
JOIN sys.objects so ON sd.referenced_major_id = so.object_id
WHERE
sd.referenced_major_id = @ObjectID
UNION ALL
SELECT
sd.object_id,
OBJECT_NAME(sd.object_id),
OBJECT_NAME(referenced_major_id),
object_id
FROM
sys.sql_dependencies sd
JOIN DependentObjectCTE do ON sd.referenced_major_id = do.DependentObjectID
WHERE
sd.referenced_major_id <> sd.object_id
)
SELECT DISTINCT
DependentObjectName
FROM
DependentObjectCTE c
END
```
> This procedure uses a Common Table
> Expression (CTE) with recursion to
> walk down the dependency chain to get
> to all of the objects that are
> dependent on the object passed into
> the procedure. The main source of data
> comes from the system view
> sys.sql\_dependencies, which contains
> dependency information for all of your
> objects in the database. | Try [sp\_depends](http://msdn.microsoft.com/en-us/library/ms189487.aspx), although you should probably recompile the stored procedure to update the statistics in the database. | Stored procedures and the tables used by them | [
"",
"sql",
"sql-server",
"sql-server-2005",
""
] |
I don't understand the following part of the Python docs:
<http://docs.python.org/reference/expressions.html#slicings>
Is this referring to list slicing (`x=[1,2,3,4]; x[0:2]`)? Particularly the parts referring to ellipsis...
```
slice_item ::= expression | proper_slice | ellipsis
```
> The conversion of a slice item that is an expression is that expression. The conversion of an ellipsis slice item is the built-in Ellipsis object. | Defining simple test class that just prints what is being passed:
```
>>> class TestGetitem(object):
... def __getitem__(self, item):
... print type(item), item
...
>>> t = TestGetitem()
```
Expression example:
```
>>> t[1]
<type 'int'> 1
>>> t[3-2]
<type 'int'> 1
>>> t['test']
<type 'str'> test
>>> t[t]
<class '__main__.TestGetitem'> <__main__.TestGetitem object at 0xb7e9bc4c>
```
Slice example:
```
>>> t[1:2]
<type 'slice'> slice(1, 2, None)
>>> t[1:'this':t]
<type 'slice'> slice(1, 'this', <__main__.TestGetitem object at 0xb7e9bc4c>)
```
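As a side note, the `slice` object received this way can be resolved against a concrete sequence length with its `indices` method, which normalizes missing and negative bounds into a real `(start, stop, step)` triple:

```python
s = slice(1, None, 2)

# slice.indices(length) clamps start/stop/step against a real length,
# returning the concrete (start, stop, step) triple
start, stop, step = s.indices(10)
print(start, stop, step)                  # 1 10 2
print(list(range(start, stop, step)))     # [1, 3, 5, 7, 9]
```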
Ellipsis example:
```
>>> t[...]
<type 'ellipsis'> Ellipsis
```
Tuple with ellipsis and slice:
```
>>> t[...,1:]
<type 'tuple'> (Ellipsis, slice(1, None, None))
``` | [Ellipsis](https://docs.python.org/dev/library/constants.html#Ellipsis) is used mainly by the [numeric python](https://numpy.org/) extension, which adds a multidimensional array type. Since there are more than one dimensions, [slicing](https://numpy.org/) becomes more complex than just a start and stop index; it is useful to be able to slice in multiple dimensions as well. eg, given a 4x4 array, the top left area would be defined by the slice "[:2,:2]"
```
>>> a
array([[ 1, 2, 3, 4],
[ 5, 6, 7, 8],
[ 9, 10, 11, 12],
[13, 14, 15, 16]])
>>> a[:2,:2] # top left
array([[1, 2],
[5, 6]])
```
Ellipsis is used here to indicate a placeholder for the rest of the array dimensions not specified. Think of it as indicating the full slice [:] for dimensions not specified, so
for a 3d array, `a[...,0]` is the same as `a[:,:,0]` and for 4d, `a[:,:,:,0]`.
Note that the actual Ellipsis literal (...) is not usable outside the slice syntax in python2, though there is a builtin Ellipsis object. This is what is meant by "The conversion of an ellipsis slice item is the built-in Ellipsis object." ie. "`a[...]`" is effectively sugar for "`a[Ellipsis]`". In python3, `...` denotes Ellipsis anywhere, so you can write:
```
>>> ...
Ellipsis
```
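If you do want a class of your own to honour an ellipsis the way numpy does, `__getitem__` just has to check for the built-in `Ellipsis` object. A toy sketch (the `Grid` class here is made up purely for illustration):

```python
class Grid:
    """Toy 2-D container that expands ``...`` to full slices."""
    def __init__(self, rows):
        self.rows = rows

    def __getitem__(self, key):
        if key is Ellipsis:                       # g[...] -> everything
            key = (slice(None), slice(None))
        row_key, col_key = key
        rows = self.rows[row_key]
        if isinstance(row_key, slice):
            return [r[col_key] for r in rows]
        return rows[col_key]

g = Grid([[1, 2], [3, 4]])
print(g[...])        # [[1, 2], [3, 4]]
print(g[0, 1])       # 2
print(g[:, 0])       # [1, 3]
```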
If you're not using numpy, you can pretty much ignore all mention of Ellipsis. None of the builtin types use it, so really all you have to care about is that lists get passed a single slice object, that contains "`start`","`stop`" and "`step`" members. ie:
```
l[start:stop:step] # proper_slice syntax from the docs you quote.
```
is equivalent to calling:
```
l.__getitem__(slice(start, stop, step))
``` | "Slicing" in Python Expressions documentation | [
"",
"python",
"syntax",
""
] |
Edit: Since it appears that there's either no solution, or I'm doing something so non-standard that nobody knows - I'll revise my question to also ask: What is the best way to accomplish logging when a python app is making a lot of system calls?
My app has two modes. In interactive mode, I want all output to go to the screen as well as to a log file, including output from any system calls. In daemon mode, all output goes to the log. Daemon mode works great using `os.dup2()`. I can't find a way to "tee" all output to a log in interactive mode, without modifying each and every system call.
---
In other words, I want the functionality of the command line 'tee' for any output generated by a python app, **including system call output**.
To clarify:
To redirect all output I do something like this, and it works great:
```
# open our log file
so = se = open("%s.log" % self.name, 'w', 0)
# re-open stdout without buffering
sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)
# redirect stdout and stderr to the log file opened above
os.dup2(so.fileno(), sys.stdout.fileno())
os.dup2(se.fileno(), sys.stderr.fileno())
```
The nice thing about this is that it requires no special print calls from the rest of the code. The code also runs some shell commands, so it's nice not having to deal with each of their output individually as well.
Simply, I want to do the same, except *duplicating* instead of redirecting.
At first, I thought that simply reversing the `dup2`s should work. Why doesn't it? Here's my test:
```
import os, sys
### my broken solution:
so = se = open("a.log", 'w', 0)
sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)
os.dup2(sys.stdout.fileno(), so.fileno())
os.dup2(sys.stderr.fileno(), se.fileno())
###
print("foo bar")
os.spawnve("P_WAIT", "/bin/ls", ["/bin/ls"], {})
os.execve("/bin/ls", ["/bin/ls"], os.environ)
```
The file "a.log" should be identical to what was displayed on the screen. | Since you're comfortable spawning external processes from your code, you could use `tee` itself. I don't know of any Unix system calls that do exactly what `tee` does.
```
# Note this version was written circa Python 2.6, see below for
# an updated 3.3+-compatible version.
import subprocess, os, sys
# Unbuffer output (this ensures the output is in the correct order)
sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)
tee = subprocess.Popen(["tee", "log.txt"], stdin=subprocess.PIPE)
os.dup2(tee.stdin.fileno(), sys.stdout.fileno())
os.dup2(tee.stdin.fileno(), sys.stderr.fileno())
print "\nstdout"
print >>sys.stderr, "stderr"
os.spawnve("P_WAIT", "/bin/ls", ["/bin/ls"], {})
os.execve("/bin/ls", ["/bin/ls"], os.environ)
```
You could also emulate `tee` using the [multiprocessing](http://docs.python.org/dev/library/multiprocessing.html) package (or use [processing](http://pypi.python.org/pypi/processing) if you're using Python 2.5 or earlier).
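As for why the reversed calls in your test don't work: `os.dup2(fd, fd2)` makes its *second* argument a duplicate of the first, so `os.dup2(sys.stdout.fileno(), so.fileno())` points the log file's descriptor at the terminal, which is the opposite of the intended direction. A quick sketch of the semantics:

```python
import os
import tempfile

a = tempfile.TemporaryFile()
b = tempfile.TemporaryFile()

# os.dup2(src, dst): *dst* becomes a duplicate of *src*
os.dup2(a.fileno(), b.fileno())

# writing through b's descriptor now lands in file a
os.write(b.fileno(), b"via b")

os.lseek(a.fileno(), 0, os.SEEK_SET)
print(os.read(a.fileno(), 5))   # b'via b'
```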
**Update**
Here is a Python 3.3+-compatible version:
```
import subprocess, os, sys
tee = subprocess.Popen(["tee", "log.txt"], stdin=subprocess.PIPE)
# Cause tee's stdin to get a copy of our stdin/stdout (as well as that
# of any child processes we spawn)
os.dup2(tee.stdin.fileno(), sys.stdout.fileno())
os.dup2(tee.stdin.fileno(), sys.stderr.fileno())
# The flush flag is needed to guarantee these lines are written before
# the two spawned /bin/ls processes emit any output
print("\nstdout", flush=True)
print("stderr", file=sys.stderr, flush=True)
# These child processes inherit our redirected stdin/stdout, so their
# output also flows through tee
os.spawnve(os.P_WAIT, "/bin/ls", ["/bin/ls"], {})
os.execve("/bin/ls", ["/bin/ls"], os.environ)
``` | I had this same issue before and found this snippet very useful:
```
import sys

class Tee(object):
def __init__(self, name, mode):
self.file = open(name, mode)
self.stdout = sys.stdout
sys.stdout = self
def __del__(self):
sys.stdout = self.stdout
self.file.close()
def write(self, data):
self.file.write(data)
self.stdout.write(data)
def flush(self):
self.file.flush()
```
from: <http://mail.python.org/pipermail/python-list/2007-May/438106.html> | How to duplicate sys.stdout to a log file? | [
"",
"python",
"tee",
""
] |
I often show messages about user actions to logged in users in my Django app views using:
```
request.user.message_set.create(message="message to user")
```
How could I do the same for anonymous (not logged in) users? There is no request.user for anonymous users, but the Django documentation says that using the "session" middleware you can do the same thing as the above code. The Django documentation that links to the session middleware claims it is possible, but I couldn't find how to do it from the session documentation. | See <http://code.google.com/p/django-session-messages/> until the patch that enables session based messages lands in Django tree (as I saw recently, it's marked for 1.2, so no hope for quick addition...).
Another project with similar functionality is Django Flash (<http://djangoflash.destaquenet.com/>). | This is what I do, using context processors:
`project/application/context.py` (check for messages and add them to the context):
```
def messages(request):
messages = {}
if 'message' in request.session:
message_type = request.session.get('message_type', 'error')
messages = {'message': request.session['message'],
'message_type': message_type}
del request.session['message']
if 'message_type' in request.session:
del request.session['message_type']
return messages
```
`project/settings.py` (add the context to the `TEMPLATE_CONTEXT_PROCESSORS`):
```
TEMPLATE_CONTEXT_PROCESSORS = (
"django.core.context_processors.request",
"django.core.context_processors.debug",
"django.core.context_processors.media",
"django.core.context_processors.auth",
"project.application.context.messages",
)
```
With the above the function `messages` will be called on every request and whatever it returns will be added to the template's context. With this in place, if I want to give a user a message, I can do this:
```
def my_view(request):
if someCondition:
request.session['message'] = 'Some Error Message'
```
Finally, in a template you can just check if there are errors to display:
```
{% if message %}
<div id="system_message" class="{{ message_type }}">
{{ message }}
</div>
{% endif %}
```
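As an aside, if more than one message per user is ever needed, the same session trick generalizes to a queue. A sketch (the helper names are mine, not Django API; the list is reassigned so the session backend notices the change and saves it):

```python
def add_message(request, text, message_type="notice"):
    # Append to a per-session queue; reassigning the key ensures the
    # session is marked as modified and gets saved.
    messages = request.session.get("messages", [])
    messages.append({"text": text, "type": message_type})
    request.session["messages"] = messages

def pop_messages(request):
    # Return all queued messages and clear them, mirroring the
    # read-then-delete behaviour of the context processor.
    return request.session.pop("messages", [])
```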
The message type is just used to style depending on what it is ("error","notice","success") and the way that this is setup you can only add 1 message at a time for a user, but that is all I really ever need so it works for me. It could be easily changed to allow for multiple messages and such. | How to send a session message to an anonymous user in a Django site? | [
"",
"python",
"django",
"session",
"django-views",
""
] |
The project I am working on requires some executions to be done at a certain time. I am not sure what would be the best way to deal with this situation. The method must be able to survive server restarts/maintenance, and the method calls must be made programmatically.
I am considering going down this path:
I could have a table in the database (or even a message queue) called TaskTable which could have TaskID(PK), TaskName(varchar), TaskStatus(enum success, failed, scheduled) and TimeOfExecution. But I need a Windows service that periodically polls the database for any unexecuted tasks. The problem I am facing is: what do I use as the TaskName to save into the database? The class name? Class and method name ToString? And how can I convert the string back and programmatically invoke the method calls (I don’t want to have a giant switch statement)? A typical task would look like the one below. So I need to be able to get the name of the task ("SendIncompleteNotification") and its class name, save them into the database, and on retrieval invoke the task programmatically.
```
public static Task<string> SendIncompleteNotification
{
get
{
return new Task<string>
(
a => Console.WriteLine("Sample Task")
, "This is a sample task which does nothing."
);
}
}
```
The problem now is that I am having trouble saving the method/property name programmatically.
```
var type = ApplicationTask.SendIncompleteNotification.GetType();
//type.Name shows "Task`1" rather than SendIncompleteNotification
```
Is there any better ways of dealing with this situation? Thanks!
Updated:
Sorry, my head was spinning. I now realize what I did wrong was to have another method/property return my Task. What I should have done was to have a new class inherit from my Task. There I can easily get the class name, save the string into the db, and later retrieve it back and invoke it. | Is the database a requirement?
If not, what about a Windows Scheduled Task (they have a tendency to "just work") which calls into a general console app. The arguments to the console app could be:
* A DLL containing the task to execute
* The name of a class implementing an interface you define
* Other arguments
This way you can put all of your tasks into one assembly, or multiple. Alternatively you could create an attribute, apply that attribute to your tasks to give them a "friendly name", and use reflection over the assembly to find classes with the matching attribute.
Edit: example:
```
interface ITask
{
void Execute(ExcecutionContext context);
}
[TaskName("Send Emails")]
class SendEmailsTask : ITask
{
public void Execute(ExcecutionContext context)
{
// Send emails. ExecutionContext might contain a dictionary of
// key/value pairs for additional arguments needed for your task.
}
}
class TaskExecuter
{
public void ExecuteTask(string name)
{
// "name" comes from the database entry
var types = Assembly.GetExecutingAssembly().GetTypes();
        foreach (var type in types)
        {
            // Check type.GetCustomAttributes for the TaskNameAttribute, then check the name.
            // (Assumes TaskNameAttribute exposes the friendly name via a Name property.)
            var attribute = (TaskNameAttribute)Attribute.GetCustomAttribute(type, typeof(TaskNameAttribute));
            if (attribute != null && attribute.Name == name)
            {
                var task = (ITask)Activator.CreateInstance(type);
                task.Execute(new ExcecutionContext());
                return;
            }
        }
}
}
```
Edit 2: This is in answer to your code sample.
```
class YourClass
{
public static Task<string> SendIncompleteNotification
{
get {
return new Task<string>(
s => Console.WriteLine("Executing task... arguments: {0}", s),
"My task");
}
}
}
interface ITask
{
void Execute(object o);
}
class Task<T> : ITask
{
    public Task(Action<T> action, string name)
    {
        Action = action;
        Name = name;
    }
public string Name { get; set; }
public Action<T> Action { get; set; }
void ITask.Execute(object o)
{
Action((T)o);
}
}
class Program
{
static void Main(string[] args)
{
// Assume that this is what is stored in the database
var typeName = typeof (YourClass).FullName;
var propertyName = "SendIncompleteNotification";
var arguments = "some arguments";
// Execute the task
var type = Type.GetType(typeName);
var property = type.GetProperty(propertyName);
var task = (ITask)property.GetValue(null, null);
task.Execute(arguments);
Console.ReadKey();
}
}
``` | You might want to look into Windows Workflow. They are designed for long running processes, can be persisted to a database and woken up on event or timer, as far as I know. | c# reliable delayed/scheduled execution best practice | [
"",
"c#",
"scheduled-tasks",
"delayed-execution",
""
] |
My project includes multiple plugins, and every plugin includes a plugin.properties file with nearly 20 translations.
The MANIFEST.MF file defines the name of the properties files where the external plugin strings are stored.
```
Bundle-Localization: plugin
```
I define the name of the plugin like
```
%plugin.name
```
Eclipse will search the "%plugin.name" in the plugin.properties file at runtime.
Which class reads out the MANIFEST.MF Bundle-Localization entry, and at which point is the string with the leading '%' prefix looked up in the "plugin.properties" file?
I want to find and patch this class so that I can first look into some other directories/files for the "%plugin.name" identifier. With this new mechanism I can add fragments to my product and overwrite single lines in a "plugin.properties" file without changing the original plugin.
With this mechanism I could create a build process for multiple customers just by adding different fragments, the fragments containing the customer names and the special strings they want to change.
I want to do it that way because the fragment mechanism only adds files to the original plugin. When the "plugin.properties" file exists in the plugin, the fragment "plugin.properties" files are ignored.
**UPDATE 1:**
The method
```
class ManifestLocalization{
...
protected ResourceBundle getResourceBundle(String localeString) {
}
...
}
```
returns the ResourceBundle of the properties file for the given locale string.
If somebody knows how I can first look into the fragment to get the resource path, please post it.
**UPDATE 2:**
The method in class ManifestLocalization
```
private URL findInResolved(String filePath, AbstractBundle bundleHost) {
URL result = findInBundle(filePath, bundleHost);
if (result != null)
return result;
return findInFragments(filePath, bundleHost);
}
```
Searches for the properties file and caches it. The translations can then be read from the cached file. The problem is that the complete file is cached, not single translations.
A solution would be to first read the fragment file, then read the bundle file. When both files exist, merge them into one file and write the new properties file to disk. The URL of the new properties file is returned, so that the new properties file can be cached. | Although I got the information wrong ... I had exactly the same problem. The plugin is not activated twice and I cannot get to the fragment's Bundle-Localization key.
I want all my language translations in the plugin.properties (I know this is frowned upon but it is much easier to manage a single file).
I (half)solved the problem by using
```
public void populate(Bundle bundle) {
String localisation = (String) bundle.getHeaders().get("Bundle-Localization");
Locale locale = Locale.getDefault();
populate(bundle.getEntry(getFileName(localisation)));
populate(bundle.getEntry(getFileName(localisation, locale.getLanguage())));
populate(bundle.getEntry(getFileName(localisation, locale.getLanguage(), locale.getCountry())));
populate(bundle.getResource(getFileName("fragment")));
populate(bundle.getResource(getFileName("fragment", locale.getLanguage())));
populate(bundle.getResource(getFileName("fragment", locale.getLanguage(), locale.getCountry())));
}
```
and simply name my fragment localisation file 'fragment.properties'.
This is not particularly elegant, but it works.
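Stripped of the OSGi plumbing, what the cascade of `populate` calls above implements is a "later files override earlier ones" merge of `.properties` data. The core idea, as a language-neutral sketch (shown in Python purely for illustration, since the merge logic is what matters):

```python
def merge_properties(*sources):
    """Merge key=value translation data; later sources override earlier
    ones (plugin defaults first, then fragment/customer overrides)."""
    merged = {}
    for text in sources:
        for line in text.splitlines():
            line = line.strip()
            if not line or line.startswith(("#", "!")):
                continue  # skip blanks and .properties comment lines
            key, _, value = line.partition("=")
            merged[key.strip()] = value.strip()
    return merged
```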
By the way, to get files from the fragment you need getResource; it seems that fragment files are on the classpath, or are only searched when using getResource.
If someone has a better approach, please correct me.
All the best,
Mark. | ```
/**
* The Hacked NLS (National Language Support) system.
* <p>
* Singleton.
*
* @author mima
*/
public final class HackedNLS {
private static final HackedNLS instance = new HackedNLS();
private final Map<String, String> translations;
private final Set<String> knownMissing;
/**
* Create the NLS singleton.
*/
private HackedNLS() {
translations = new HashMap<String, String>();
knownMissing = new HashSet<String>();
}
/**
* Populates the NLS key/value pairs for the current locale.
* <p>
* Plugin localization files may have any name as long as it is declared in the Manifest under
* the Bundle-Localization key.
* <p>
* Fragments <b>MUST</b> define their localization using the base name 'fragment'.
* This is due to the fact that I have no access to the Bundle-Localization key for the
* fragment.
* This may change.
*
* @param bundle The bundle to use for population.
*/
public void populate(Bundle bundle) {
String baseName = (String) bundle.getHeaders().get("Bundle-Localization");
populate(getLocalizedEntry(baseName, bundle));
populate(getLocalizedEntry("fragment", bundle));
}
private URL getLocalizedEntry(String baseName, Bundle bundle) {
Locale locale = Locale.getDefault();
URL entry = bundle.getEntry(getFileName(baseName, locale.getLanguage(), locale.getCountry()));
if (entry == null) {
entry = bundle.getResource(getFileName(baseName, locale.getLanguage(), locale.getCountry()));
}
if (entry == null) {
entry = bundle.getEntry(getFileName(baseName, locale.getLanguage()));
}
if (entry == null) {
entry = bundle.getResource(getFileName(baseName, locale.getLanguage()));
}
if (entry == null) {
entry = bundle.getEntry(getFileName(baseName));
}
if (entry == null) {
entry = bundle.getResource(getFileName(baseName));
}
return entry;
}
private String getFileName(String baseName, String...arguments) {
String name = baseName;
for (int index = 0; index < arguments.length; index++) {
name += "_" + arguments[index];
}
return name + ".properties";
}
private void populate(URL resourceUrl) {
if (resourceUrl != null) {
Properties props = new Properties();
InputStream stream = null;
try {
stream = resourceUrl.openStream();
props.load(stream);
} catch (IOException e) {
warn("Could not open the resource file " + resourceUrl, e);
            } finally {
                if (stream != null) {
                    try {
                        stream.close();
                    } catch (IOException e) {
                        warn("Could not close stream for resource file " + resourceUrl, e);
                    }
                }
            }
for (Object key : props.keySet()) {
translations.put((String) key, (String) props.get(key));
}
}
}
/**
* @param key The key to translate.
* @param arguments Array of arguments to format into the translated text. May be empty.
* @return The formatted translated string.
*/
public String getTranslated(String key, Object...arguments) {
String translation = translations.get(key);
if (translation != null) {
if (arguments != null) {
translation = MessageFormat.format(translation, arguments);
}
} else {
translation = "!! " + key;
if (!knownMissing.contains(key)) {
warn("Could not find any translation text for " + key, null);
knownMissing.add(key);
}
}
return translation;
}
private void warn(String string, Throwable cause) {
Status status;
if (cause == null) {
status = new Status(
IStatus.ERROR,
MiddlewareActivator.PLUGIN_ID,
string);
} else {
status = new Status(
IStatus.ERROR,
MiddlewareActivator.PLUGIN_ID,
string,
cause);
}
MiddlewareActivator.getDefault().getLog().log(status);
}
/**
* @return The NLS instance.
*/
public static HackedNLS getInstance() {
return instance;
}
/**
* @param key The key to translate.
* @param arguments Array of arguments to format into the translated text. May be empty.
* @return The formatted translated string.
*/
public static String getText(String key, Object...arguments) {
return getInstance().getTranslated(key, arguments);
}
}
``` | plugin.properties mechanism in eclipse RCP | [
"",
"java",
"plugins",
"rcp",
"fragment",
""
] |
In my Java command-line arguments, any characters after a space get ignored. For example,
```
java test.AskGetCampaignByName "Dummy books"
```
I get the first argument (args[0]) as "Dummy" only. Single quotes also do not help.
Is there a workaround/fix for this? Could it be because of my terminal settings?
My $TERM is xterm, and $LANG is "en_IN". | The arguments are handled by the shell (I assume you are using Bash under Linux?), so any terminal settings should not affect this.
Since you already have quoted the argument, it ought to work. The only possible explanation I can think of is if your `java` command is a wrapper script and messes up the escaping of the arguments when passing on to the real program. This is easy to do, or perhaps a bit hard to do correctly.
A correct wrapper script should pass all its arguments on as `${1+"$@"}`, and any other version is most likely a bug with regards to being able to handle embedded spaces properly. This is not uncommon to do properly, however also any occurrences of `$2` or similar are troublesome and must be written as `"$2"` (or possibly `${2+"$2"}`) in order to handle embedded spaces properly, and this is sinned against a lot.
The reason for the not-so-intuitive syntax `${1+"$@"}` is that the original `$*` joined all arguments as `"$1 $2 $3 ..."` which did not work for embedded spaces. Then `"$@"` was introduced that (correctly) expanded to `"$1" "$2" "$3" ...` for all parameters and if no parameters are given it should expand to nothing. Unfortunately some Unix vendor messed up and made `"$@"` expand to `""` even in case of no arguments, and to work around this the clever (but not so readable) hack of writing `${1+"$@"}` was invented, making `"$@"` only expand if parameter `$1` is set (i.e. avoiding expansion in case of no arguments).
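A quick way to check how a POSIX shell tokenizes a given command line is Python's `shlex` module, which follows the same quoting rules (used here only as an illustration of what a correctly behaving shell/wrapper should hand to the JVM):

```python
import shlex

# shlex.split applies POSIX shell quoting, so it shows the argv entries
# the quoted command line produces: "Dummy books" stays a single argument.
argv = shlex.split('java test.AskGetCampaignByName "Dummy books"')
print(argv)  # ['java', 'test.AskGetCampaignByName', 'Dummy books']
```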
If my wrapper assumption is wrong you could try to debug with [strace](https://en.wikipedia.org/wiki/Strace):
```
strace -o outfile -f -ff -F java test.AskGetCampaignByName "Dummy books"
```
and find out what arguments are passed to `execve`. Example from running "`strace /bin/echo '1 2' 3`":
```
execve("/bin/echo", ["/bin/echo", "1 2", "3"], [/* 93 vars */]) = 0
brk(0) = 0x2400000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f420075b000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f420075a000
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
open("/usr/lib64/alliance/lib/tls/x86_64/libc.so.6", O_RDONLY) = -1 ENOENT (No such file or directory)
stat("/usr/lib64/alliance/lib/tls/x86_64", 0x7fff08757cd0) = -1 ENOENT (No such file or directory)
open("/usr/lib64/alliance/lib/tls/libc.so.6", O_RDONLY) = -1 ENOENT (No such file or directory)
...
``` | In case your program needs more than positional arguments (= when the command line usage is important), you should consider options and switches. Apache Commons has a great [library](http://commons.apache.org/cli/) for this. | Space in Java command-line arguments | [
"",
"java",
""
] |
I'm developing an application that needs to write to the registry. It works fine on XP, but when I run it on Vista, from Visual Studio, I get a security exception in:
Registry.LocalMachine.OpenSubKey("SOFTWARE", true);
I'm trying to write a new key into that branch of the registry.
What's the right way to do this, firstly so that I can run my application from VS on Vista, and secondly so that my users don't run into problems running on Vista.
Thanks... | On both XP and Vista you need Admin rights to write a new key under LocalMachine.
You'll be finding this works on XP and fails on Vista due to different account defaults.
The quick and dirty solution is to ensure your application runs with Admin rights in both cases, though on Vista this tends to be frowned upon.
The better solution would be to redesign things slightly - can the new sub key be written by your installer (which runs with Admin rights), or could you store your information somewhere else? | Standard users, and admins running with UAC on Vista, do not have permission to write to the local machine registry key. This would fail on XP too if you ran as a standard user.
Your options are:
* Use Registry.CurrentUser instead, if the setting is per-user.
* Run your app as administrator
* Loosen the ACL on the key so anyone can write - which is definitely not recommended, since any malware on the box can toast the key. | SecurityException when using Registry.LocalMachine.OpenSubKey | [
"",
"c#",
"security",
"windows-vista",
"registry",
""
] |
I wrote a little Linq like DSL on top of [Google Collections](http://code.google.com/p/google-collections/)
```
public class IterableQuery {
public static <T> Where<T> from(Iterable<T> originalCollection) {
return new Where<T>( Iterables.transform(originalCollection, IterableQuery.<T>SAME()));
}
private static <T> Function<T, T> SAME() {
return new Function<T, T>(){
public T apply(T arg0) {
return arg0;
}
};
}
public static class SelectOrderBy<T>{
private final Iterable<T> iterable;
    public SelectOrderBy(Iterable<T> iterable) {
        this.iterable = iterable;
}
    public SelectOrderBy<T> orderBy( Comparator<T> sort ){
        Ordering.forComparator(sort).sort((List<? extends T>) iterable);
return new SelectOrderBy<T>( iterable);
}
public <F> Iterable<F> select( Function<? super T,? extends F> function){
return Iterables.transform(iterable, function);
}
public Iterable<T> selectEveryThing( ){
return iterable;
}
}
public static class Where<T>{
private final Iterable<T> iterable;
public Where(Iterable<T> iterable) {
this.iterable = iterable;
}
public SelectOrderBy<T> where(Predicate<T> predicate) {
return new SelectOrderBy<T>( Iterables.filter(iterable, predicate));
}
}
}
```
so I could query collections in a more concise, readable way
```
Iterable<? extends NewOrder > currentlyAssigned =
IterableQuery.
from(orders).
where(placedInLast10Days).
orderBy(lastName).
select(orderToNewOrder);
```
I am concerned whether this approach will cause an explosion of mini objects and some garbage collection issues (or any other issues)? | I believe Google Collections uses deferred execution for most of its Iterators. Deferred execution would minimize the number of intermediate objects created, as it would eliminate most of the intermediate/temporary lists that could be created for each call (where, orderBy, etc.).
Basically, each element returned by currentlyAssigned.iterator() isn't calculated until you call iterator.next(). Until then, your currentlyAssigned iterable is just a set of operations, nothing more.
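The same deferred-execution behaviour is easy to demonstrate in any language with lazy sequences; for illustration only, this Python generator pipeline (Python standing in for the lazy Java iterators) performs no work until an element is pulled:

```python
produced = []

def numbers():
    for n in range(5):
        produced.append(n)   # record when an element is actually computed
        yield n

# Building the pipeline (filter + transform) does no iteration at all...
pipeline = (n * 10 for n in numbers() if n % 2 == 0)
assert produced == []

# ...elements are computed one at a time, only when pulled.
first = next(pipeline)
assert first == 0 and produced == [0]
```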
Your only concern on the explosion of mini-objects is if those objects last longer than the duration of a single element operation... peak memory usage could get quite big in that case and you could potentially run out of memory on very large lists, or if you were converting objects (i.e. calling ToUpper() on all strings or something). This would only be the case if the result of where() was another list, then orderBy() created another list, and so on.
As far as the GC handling many short-lived objects, there's no problem. The modern Java garbage collector is heavily optimized to handle that exact behavior. | I think it depends on how transform behaves: if it's like a lazy filter, i.e. you don't attach a reference to every result, then it's more than OK object-count wise. Garbage-collection wise, you are not keeping any hidden references; once you lose the root reference the whole graph becomes unreachable and gets collected. Way to go man, this is really neat. | Would this cause Garbage Collection issues | [
"",
"java",
"garbage-collection",
"guava",
""
] |
Say I have two overloaded versions of a C# method:
```
void Method( TypeA a ) { }
void Method( TypeB b ) { }
```
I call the method with:
```
Method( null );
```
Which overload of the method is called? What can I do to ensure that a particular overload is called? | It depends on `TypeA` and `TypeB`.
* If exactly one of them is applicable (e.g. there is no conversion from `null` to `TypeB` because it's a value type but `TypeA` is a reference type) then the call will be made to the applicable one.
* Otherwise it depends on the relationship between `TypeA` and `TypeB`.
+ If there is an implicit conversion from `TypeA` to `TypeB` but no implicit conversion from `TypeB` to `TypeA` then the overload using `TypeA` will be used.
+ If there is an implicit conversion from `TypeB` to `TypeA` but no implicit conversion from `TypeA` to `TypeB` then the overload using `TypeB` will be used.
+ Otherwise, the call is ambiguous and will fail to compile.
See section 7.4.3.4 of the C# 3.0 spec for the detailed rules.
Here's an example of it not being ambiguous. Here `TypeB` derives from `TypeA`, which means there's an implicit conversion from `TypeB` to `TypeA`, but not vice versa. Thus the overload using `TypeB` is used:
```
using System;
class TypeA {}
class TypeB : TypeA {}
class Program
{
static void Foo(TypeA x)
{
Console.WriteLine("Foo(TypeA)");
}
static void Foo(TypeB x)
{
Console.WriteLine("Foo(TypeB)");
}
static void Main()
{
Foo(null); // Prints Foo(TypeB)
}
}
```
In general, even in the face of an otherwise-ambiguous call, to ensure that a particular overload is used, just cast:
```
Foo((TypeA) null);
```
or
```
Foo((TypeB) null);
```
Note that if this involves inheritance in the declaring classes (i.e. one class is overloading a method declared by its base class) you're into a whole other problem, and you need to cast the target of the method rather than the argument. | Jon Skeet has given a comprehensive answer, but from a design point of view you shouldn't depend on corner-cases of the compiler specification. If nothing else, if you have to look up what it does before you write it, the next person to try to read it won't know what it does either. It's not maintainable.
Overloads are there for convenience, and two different overloads with the same name should do the same thing. If the two methods do different things, rename one or both of them.
It's more usual for an overloaded method to have variants with varying numbers of parameters, and for the overload with less parameters to supply sensible defaults.
e.g. `string ToString(string format, System.IFormatProvider provider)` has the most parameters,
`string ToString(System.IFormatProvider provider)` supplies a default format, and
`string ToString()` supplies a default format and provider. | C#: Passing null to overloaded method - which method is called? | [
"",
"c#",
"null",
"overloading",
""
] |
I'm writing an upload function, and have problems catching "System.Web.HttpException: Maximum request length exceeded" with files larger than the specified max size in `httpRuntime` in web.config (max size set to 5120).
The problem is that the exception is thrown before the upload button's click-event, and the exception happens before my code is run. So how do I catch and handle the exception?
**EDIT:** The exception is thrown instantly, so I'm pretty sure it's not a timeout issue due to slow connections. | There is no easy way to catch such an exception, unfortunately. What I do is either override the OnError method at the page level or Application_Error in global.asax, then check if it was a Max Request failure and, if so, transfer to an error page.
```
protected override void OnError(EventArgs e) .....
private void Application_Error(object sender, EventArgs e)
{
if (GlobalHelper.IsMaxRequestExceededException(this.Server.GetLastError()))
{
this.Server.ClearError();
this.Server.Transfer("~/error/UploadTooLarge.aspx");
}
}
```
It's a hack but the code below works for me
```
const int TimedOutExceptionCode = -2147467259;
public static bool IsMaxRequestExceededException(Exception e)
{
// unhandled errors = caught at global.ascx level
// http exception = caught at page level
Exception main;
var unhandled = e as HttpUnhandledException;
if (unhandled != null && unhandled.ErrorCode == TimedOutExceptionCode)
{
main = unhandled.InnerException;
}
else
{
main = e;
}
var http = main as HttpException;
if (http != null && http.ErrorCode == TimedOutExceptionCode)
{
// hack: no real method of identifying if the error is max request exceeded as
// it is treated as a timeout exception
if (http.StackTrace.Contains("GetEntireRawContent"))
{
// MAX REQUEST HAS BEEN EXCEEDED
return true;
}
}
return false;
}
``` | As GateKiller said you need to change the maxRequestLength. You may also need to change the executionTimeout in case the upload speed is too slow. Note that you don't want either of these settings to be too big otherwise you'll be open to DOS attacks.
The default for the executionTimeout is 360 seconds or 6 minutes.
You can change the maxRequestLength and executionTimeout with the [httpRuntime Element](https://learn.microsoft.com/en-us/previous-versions/dotnet/netframework-4.0/e1f13641%28v=vs.100%29).
```
<?xml version="1.0" encoding="utf-8"?>
<configuration>
<system.web>
<httpRuntime maxRequestLength="102400" executionTimeout="1200" />
</system.web>
</configuration>
```
EDIT:
If you want to handle the exception regardless then as has been stated already you'll need to handle it in Global.asax. Here's a link to a [code example](https://web.archive.org/web/20201031041307/http://geekswithblogs.net/sglima/archive/2011/09/20/how-to-handle-maximum-request-length-exceeded-exception.aspx). | Catching "Maximum request length exceeded" | [
"",
"c#",
"asp.net",
"httpexception",
""
] |
Lets say I have this extention method:
```
public static bool HasFive<T>(this IEnumerable<T> subjects)
{
if(subjects == null)
throw new ArgumentNullException("subjects");
return subjects.Count() == 5;
}
```
Do you think this null check and exception throwing is really necessary? I mean, when I use the `Count` method, an `ArgumentNullException` will be thrown anyways, right?
I can maybe think of one reason why I should, but would just like to hear others view on this. And yes, my reason for asking is partly laziness (want to write as little as possible), but also because I kind of think a bunch of null checking and exception throwing kind of clutters up the methods which often end up being twice as long as they really needed to be. Someone should know better than to send null into a method :p
Anyways, what do you guys think?
---
**Note:** `Count()` is an extension method and *will* throw an `ArgumentNullException`, not a `NullReferenceException`. See [`Enumerable.Count<TSource> Method (IEnumerable<TSource>)`](http://msdn.microsoft.com/en-us/library/bb338038.aspx). Try it yourself if you don't believe me =)
---
**Note2:** After the answers given here I have been persuaded to start checking more for null values. I am still lazy though, so I have started to use the `Enforce` class in [Lokad Shared Libraries](http://abdullin.com/shared-libraries/). Can recommend taking a look at it. Instead of my example I can do this instead:
```
public static bool HasFive<T>(this IEnumerable<T> subjects)
{
Enforce.Argument(() => subjects);
return subjects.Count() == 5;
}
``` | Yes, it will throw an `ArgumentNullException`. I can think of two reasons for putting the extra checking in:
* If you later go back and change the method to do something before calling `subjects.Count()` and forget to put the check in at that point, you could end up with a side effect before the exception is thrown, which isn't nice.
* Currently, the stack trace will show `subjects.Count()` at the top, and probably with a message with the `source` parameter name. This could be confusing to the caller of `HasFive` who can see a `subjects` parameter name.
EDIT: Just to save me having to write it yet again elsewhere:
The call to `subjects.Count()` will throw an `ArgumentNullException`, *not* a `NullReferenceException`. `Count()` is another extension method here, and assuming the implementation in `System.Linq.Enumerable` is being used, that's documented (correctly) to throw an `ArgumentNullException`. Try it if you don't believe me.
EDIT: Making this easier...
If you do a lot of checks like this you may want to make it simpler to do so. I like the following extension method:
```
internal static void ThrowIfNull<T>(this T argument, string name)
where T : class
{
if (argument == null)
{
throw new ArgumentNullException(name);
}
}
```
The example method in the question can then become:
```
public static bool HasFive<T>(this IEnumerable<T> subjects)
{
subjects.ThrowIfNull("subjects");
return subjects.Count() == 5;
}
```
Another alternative would be to write a version which checked the value *and returned it* like this:
```
internal static T NullGuard<T>(this T argument, string name)
where T : class
{
if (argument == null)
{
throw new ArgumentNullException(name);
}
return argument;
}
```
You can then call it fluently:
```
public static bool HasFive<T>(this IEnumerable<T> subjects)
{
return subjects.NullGuard("subjects").Count() == 5;
}
```
This is also helpful for copying parameters in constructors etc:
```
public Person(string name, int age)
{
this.name = name.NullGuard("name");
this.age = age;
}
```
(You might want an overload without the argument name for places where it's not important.) | I think @Jon Skeet is absolutely spot on, however I'd like to add the following thoughts:-
* Providing a meaningful error message is useful for debugging, logging and exception reporting. An exception thrown by the BCL is less likely to describe the specific circumstances of the exception WRT your codebase. Perhaps this is less of an issue with null checks which (most of the time) necessarily can't give you much domain-specific information - 'I was passed a null unexpectedly, no idea why' is pretty much the best you can do most of the time, however sometimes you can provide more information and obviously this is more likely to be relevant when dealing with other exception types.
* The null check clearly demonstrates to other developers (and to you, if/when you come back to the code a year later) that *it's possible someone might pass a null, and it would be problematic if they did so*. In that sense it's a form of documentation.
* Expanding on Jon's excellent point - you might do something before the null gets picked up - I think it is vitally important to engage in [defensive programming](http://en.wikipedia.org/wiki/Defensive_programming). Checking for a null before running other code is a form of defensive programming, as you are taking into account that things might not work the way you expected (or that changes might be made in the future that you didn't expect) and ensuring that no matter what happens (assuming your null check isn't removed) such problems cannot arise.
* It's a form of runtime [assert](http://en.wikipedia.org/wiki/Assert) that your parameter is not null. You can proceed on the assumption that it isn't.
* The above assumption can result in slimmer code, you write the rest of your code knowing the parameter is not null, cutting down on extraneous subsequent null checks. | C#: Should I bother checking for null in this situation? | [
"",
"c#",
""
] |
I have an object which I need to validate. This object has properties which cannot be null, strings which cannot exceed a given size, and date strings which I need to check for format correctness.

How would you perform this validation? I don't want to go over the whole object; I'm looking for something more dynamic.

EDIT:

This object will be used as web service input, so any setter validation will be useless. I also don't know how the system will behave with the added dependency of an annotation-based system.

I'm looking for a custom solution (yes, I know, reinventing the wheel).
As you've no doubt seen already, there are several ways of going about validating the fields in an object. But you know what needs to be done: check each field to see whether its value is valid. There is no magic here: you just check the fields.
If you're looking to keep your validation code in one place, one thing you might do is create an xml schema containing your validation logic, generate your class from that, and validate the objects you get over the wire using the schema as a reference. This makes it easy to maintain your validation code and your class: you simply update the schema, and re-generate the class. (I wouldn't even keep the class in source control. Just generate it before you compile your code. This guarantees that your class is always in sync with your schema.) I'm sure there are libraries for this in Java (JAXB, maybe?), but I haven't worked with Java for years. It's a *very* common thing to do in many languages. | Without the exact scenario, I can recommend you use the object's setter methods (you *did* make the fields private, right?) to validate every change to the values. the String setters can call the length-validating code, the Date setters can call the format-validating code, etc.
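As a concrete starting point for the "just check the fields" route, here is a minimal sketch (the `Order` class, its fields and its limits are all hypothetical) covering the three kinds of checks in question: nulls, maximum string length, and date-format correctness:

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.List;

class Order {
    String customerName; // required, at most 50 characters
    String orderDate;    // required, must parse as yyyy-MM-dd

    /** Returns an empty list when the object is valid, otherwise one message per problem. */
    List<String> validate() {
        List<String> errors = new ArrayList<String>();
        if (customerName == null) {
            errors.add("customerName must not be null");
        } else if (customerName.length() > 50) {
            errors.add("customerName must not exceed 50 characters");
        }
        if (orderDate == null) {
            errors.add("orderDate must not be null");
        } else {
            SimpleDateFormat format = new SimpleDateFormat("yyyy-MM-dd");
            format.setLenient(false); // reject "valid-looking" dates such as 2009-02-30
            try {
                format.parse(orderDate);
            } catch (ParseException e) {
                errors.add("orderDate is not a valid yyyy-MM-dd date");
            }
        }
        return errors;
    }
}
```

Collecting messages rather than throwing on the first failure fits web service input well, since you can report every problem in a single response.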
Hope this helps,
Yuval =8-) | Object Validation | [
"",
"java",
""
] |
Quick question here: why not ALWAYS use ArrayLists in Java? They apparently have the same access speed as arrays, in addition to extra useful functionality. I understand the limitation in that they cannot hold primitives, but this is easily mitigated by use of wrappers.
I rarely worry about the performance difference between an array and an `ArrayList`, however. If a `List` will provide better, cleaner, more maintainable code, then I will always use a `List` (or `Collection` or `Set`, etc, as appropriate, but your question was about `ArrayList`) unless there is some compelling reason not to. Performance is **rarely** that compelling reason.
Using `Collection`s almost always results in better code, in part because arrays don't play nice with generics, as Johannes Weiß already pointed out in a comment, but also because of so many other reasons:
* Collections have a very rich API and a large variety of implementations that can (in most cases) be trivially swapped in and out for each other
* A Collection can be trivially converted to an array, if *occasional* use of an array version is useful
* Many Collections grow more gracefully than an array grows, which can be a performance concern
* Collections work very well with generics, arrays fairly badly
* As TofuBeer pointed out, array covariance is strange and can act in unexpected ways that objects never do. Collections handle covariance in expected ways.
* arrays need to be manually sized to their task, and if an array is not full you need to keep track of that yourself. If an array needs to be resized, you have to do that yourself.
All of this together, I rarely use arrays and only a little more often use an `ArrayList`. However, I do use `List`s very often (or just `Collection` or `Set`). My most frequent use of arrays is when the item being stored is a primitive and will be inserted, accessed and used as a primitive. If boxing and unboxing ever become so fast that they are a trivial consideration, I may revisit this decision, but it is more convenient to work with something, to store it, in the form in which it is always referenced. (That is, 'int' instead of 'Integer'.)
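To make the boxing cost concrete, here is a small sketch (the class and the numbers are purely illustrative): summing an `int[]` touches only primitives, while the same loop over an `ArrayList<Integer>` boxed every element on insertion and unboxes every element on access:

```java
import java.util.ArrayList;
import java.util.List;

class SumDemo {
    static long sumArray(int[] values) {
        long total = 0;
        for (int v : values) {     // v is already a primitive
            total += v;
        }
        return total;
    }

    static long sumList(List<Integer> values) {
        long total = 0;
        for (int v : values) {     // each Integer element is unboxed here
            total += v;
        }
        return total;
    }

    public static void main(String[] args) {
        int[] array = new int[1000];
        List<Integer> list = new ArrayList<Integer>();
        for (int i = 0; i < 1000; i++) {
            array[i] = i;
            list.add(i);           // autoboxing: int -> Integer
        }
        // Same result either way; the list version just pays for 1000 boxes.
        System.out.println(sumArray(array) == sumList(list));
    }
}
```

Same answer from both, but the list version allocates an `Integer` object per element, which is exactly the overhead the paragraph above is weighing against convenience.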
For example, rather than this:
```
ArrayList insuranceClaims = new ArrayList();
```
do this:
```
List insuranceClaims = new ArrayList();
```
or even:
```
Collection insuranceClaims = new ArrayList();
```
If the rest of your code only knows it by the interface it implements (`List` or `Collection`) then swapping it out for another implementation becomes much easier down the road if you find you need a different one. I saw this happen just a month ago when I needed to swap out a regular `HashMap` for an implementation that would return the items to me in the same order I put them in when it came time to iterate over all of them. Fortunately just such a thing was available in the Jakarta Commons Collections and I just swapped out A for B with only a one line code change because both implemented Map. | Why not always use ArrayLists in Java, instead of plain ol' arrays? | [
"",
"java",
""
] |
I'm trying to make my own [WYSIWYG](https://en.wikipedia.org/wiki/WYSIWYG) editor. Is there a way to get the text which has the user selected in a [textarea](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/textarea)?
For example, if the user selects some word and then clicks button, how do I find out which text was selected?
I'm using jQuery. | Try the jquery-fieldselection plugin.
You can download it from [here](http://github.com/localhost/jquery-fieldselection/tree/master). There is an example too. | [`window.getSelection().toString()`](https://developer.mozilla.org/en-US/docs/Web/API/Window/getSelection) worked for me with Chrome, but [not Firefox](https://bugzilla.mozilla.org/show_bug.cgi?id=85686).
For obtaining the selected content in a `<textarea>` with Firefox:
```
function getSel() // JavaScript
{
// Obtain the object reference for the <textarea>
var txtarea = document.getElementById("mytextarea");
// Obtain the index of the first selected character
var start = txtarea.selectionStart;
// Obtain the index of the last selected character
var finish = txtarea.selectionEnd;
// Obtain the selected text
var sel = txtarea.value.substring(start, finish);
// Do something with the selected content
}
```
You could also use [activeElement](https://developer.mozilla.org/en-US/docs/Web/API/DocumentOrShadowRoot/activeElement) instead of [getElementById](https://developer.mozilla.org/en-US/docs/Web/API/Document/getElementById).
Reference:
* *[textarea](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/textarea)*
* *[document](https://developer.mozilla.org/en-US/docs/Web/API/document?redirectlocale=en-US&redirectslug=DOM%2Fdocument#Introduction)* | How can I get the selected text in a textarea? | [
"",
"javascript",
"jquery",
""
] |
Is there an object that will event at a given DateTime or DateTimeOffset? I am using a Timer to do this operation now but it requires a bit of calculation on my part, was just thinking there might already be something in place. | I don't see how you have to do any calculation:
```
public void StartTimer(DateTime target) {
double msec = (target - DateTime.Now).TotalMilliseconds;
if (msec <= 0 || msec > int.MaxValue) throw new ArgumentOutOfRangeException();
timer1.Interval = (int)msec;
timer1.Enabled = true;
}
``` | I like:
```
System.Timers.Timer _WaitForScheduledTime;
_WaitForScheduledTime = new System.Timers.Timer();
_WaitForScheduledTime.Elapsed += new ElapsedEventHandler(WaitForScheduledTime_OnElapsed);
_WaitForScheduledTime.Interval = _ListOfJobs.IntervalUntilFirstJobIsToRun().TotalMilliseconds;
_WaitForScheduledTime.Start();
```
...
```
private void WaitForScheduledTime_OnElapsed(object source, ElapsedEventArgs e)
{
log.Debug("Ready to run at least one job");
// restart the timer
_WaitForScheduledTime.Interval = _ListOfJobs.IntervalUntilFirstJobIsToRun().TotalMilliseconds;
_WaitForScheduledTime.Start();
}
``` | Class in the BCL that will event at a given DateTime? | [
"",
"c#",
".net",
"datetime",
"timer",
""
] |
Does the ANSI standard *mandate* the logical operators to be short-circuited, in either C or C++?
I'm confused for I recall the K&R book saying your code shouldn't depend on these operations being short circuited, for they may not. Could someone please point out where in the standard it's said logic ops are always short-circuited? I'm mostly interested on C++, an answer also for C would be great.
I also remember reading (can't remember where) that evaluation order isn't strictly defined, so your code shouldn't depend or assume functions within an expression would be executed in a specific order: by the end of a statement all referenced functions will have been called, but the compiler has freedom in selecting the most efficient order.
Does the standard indicate the evaluation order of this expression?
```
if( functionA() && functionB() && functionC() ) {
cout << "Hello world";
}
``` | Yes, short-circuiting and evaluation order are required for operators `||` and `&&` in both C and C++ standards.
C++ standard says (there should be an equivalent clause in the C standard):
> **1.9.18**
>
> *In the evaluation of the following expressions*
>
> ```
> a && b
> a || b
> a ? b : c
> a , b
> ```
>
> *using the built-in meaning of the operators in these expressions, **there is a sequence point after the evaluation of the first expression** (12).*
In C++ there is an extra trap: short-circuiting does **NOT** apply to types that overload operators `||` and `&&`.
> *Footnote 12: The operators indicated in this paragraph are the built-in operators, as described in clause 5. When one of these operators is overloaded (clause 13) in a valid context, thus designating a user-defined operator function, the expression designates a function invocation, and the operands form an argument list, **without an implied sequence point between them.***
It is usually not recommended to overload these operators in C++ unless you have a very specific requirement. You can do it, but it may break expected behaviour in other people's code, especially if these operators are used indirectly via instantiating templates with the type overloading these operators. | Short circuit evaluation, and order of evaluation, is a mandated semantic standard in both C and C++.
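For the built-in operators, the guarantee is easy to observe with a side-effecting operand. A minimal sketch in C (the `touch` helper just counts how often it is evaluated):

```c
#include <assert.h>

static int calls = 0;

/* Returns its argument and counts how many times it was evaluated. */
static int touch(int result)
{
    calls++;
    return result;
}

/* Exercises && and ||; returns how many times touch() actually ran. */
static int demo(void)
{
    calls = 0;

    assert((0 && touch(1)) == 0); /* left is false: touch() is skipped */
    assert((1 || touch(1)) == 1); /* left is true:  touch() is skipped */
    assert((1 && touch(1)) == 1); /* left is true:  touch() must run   */

    return calls;
}
```

Out of the three right-hand operands, only the last is ever evaluated, so `demo()` returns 1.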
If it wasn't, code like this would not be a common idiom
```
char* pChar = 0;
// some actions which may or may not set pChar to something
if ((pChar != 0) && (*pChar != '\0')) {
// do something useful
}
```
Section **6.5.13 Logical AND operator** of the C99 specification [(PDF link)](http://www.open-std.org/JTC1/SC22/WG14/www/docs/n1256.pdf) says
> (4). Unlike the bitwise binary & operator, the && operator guarantees
> left-to-right evaluation; there is a
> sequence point after the evaluation of
> the first operand. If the first
> operand compares equal to 0, the
> second operand is not evaluated.
Similarly, section **6.5.14 Logical OR operator** says
> (4) Unlike the bitwise | operator, the ||
> operator guarantees left-to-right
> evaluation; there is a sequence point
> after the evaluation of the first
> operand. If the first operand compares
> unequal to 0, the second operand is
> not evaluated.
Similar wording can be found in the C++ standards, [check section 5.14 in this draft copy](http://www.csci.csusb.edu/dick/c++std/cd2/index.html). As checkers notes in another answer, if you override && or ||, then both operands must be evaluated as it becomes a regular function call. | Is short-circuiting logical operators mandated? And evaluation order? | [
"",
"c++",
"c",
"logical-operators",
"short-circuiting",
"operator-precedence",
""
] |
Is there a way to perform updates on a PIVOTed table in SQL Server 2008 where the changes propagate back to the source table, assuming there is no aggregation? | This will only really work if the pivoted columns form a unique identifier. So let's take Buggy's example; here is the original table:
```
TaskID Date Hours
```
and we want to pivot it into a table that looks like this:
```
TaskID 11/15/1980 11/16/1980 11/17/1980 ... etc.
```
In order to create the pivot, you would do something like this:
```
-- Start as an empty string: concatenating onto NULL would yield NULL,
-- and the CASE test never matches while @FieldList is NULL.
DECLARE @FieldList NVARCHAR(MAX) = N''
SELECT
@FieldList =
CASE WHEN @FieldList <> '' THEN
@FieldList + ', [' + [Date] + ']'
ELSE
'[' + [Date] + ']'
END
FROM
-- DISTINCT, because duplicate dates would produce duplicate pivot columns
(SELECT DISTINCT [Date] FROM Tasks) AS D
DECLARE @PivotSQL NVARCHAR(MAX)
SET @PivotSQL =
'
SELECT
TaskID
, ' + @FieldList + '
INTO
##Pivoted
FROM
(
SELECT * FROM Tasks
) AS T
PIVOT
(
MAX(Hours) FOR [Date] IN (' + @FieldList + ')
) AS PVT
'
EXEC(@PivotSQL)
```
So then you have your pivoted table in `##Pivoted`. Now you perform an update to one of the hours fields:
```
UPDATE
##Pivoted
SET
[11/16/1980 00:00:00] = 10
WHERE
TaskID = 1234
```
Now `##Pivoted` has an updated version of the hours for a task that took place on 11/16/1980 and we want to save that back to the original table, so we use an `UNPIVOT`:
```
DECLARE @UnPivotSQL NVarChar(MAX)
SET @UnPivotSQL =
'
SELECT
TaskID
, [Date]
, [Hours]
INTO
##UnPivoted
FROM
##Pivoted
UNPIVOT
(
[Hours] FOR [Date] IN (' + @FieldList + ')
) AS UP
'
EXEC(@UnPivotSQL)
-- Update through the alias: repeating the table name here would be ambiguous
UPDATE
T
SET
[Hours] = UP.[Hours]
FROM
Tasks T
INNER JOIN
##UnPivoted UP
ON
T.TaskID = UP.TaskID
AND T.[Date] = UP.[Date] -- match on the date too, or every row for the task gets updated
```
You'll notice that I modified Buggy's example to remove aggregation by day-of-week. That's because there's no going back and updating if you perform any sort of aggregation. If I update the SUNHours field, how do I know which Sunday's hours I'm updating? This will only work if there is no aggregation. I hope this helps! | `PIVOT`s always require an aggregate function in the pivot clause.
Thus there is always aggregation.
So, no, it cannot be updatable.
**You CAN put an `INSTEAD OF TRIGGER` on a view based on the statement and thus you can make any view updatable.**
Example [here](http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=120679) | Updates on PIVOTs in SQL Server 2008 | [
"",
"sql",
"sql-server",
"sql-server-2008",
"pivot",
"pivot-table",
""
] |
I want to be able to add a range and get updated for the entire bulk.
I also want to be able to cancel the action before it's done (i.e. collection changing besides the 'changed').
---
Related Q
[Which .Net collection for adding multiple objects at once and getting notified?](https://stackoverflow.com/questions/57020/which-net-collection-for-adding-multiple-objects-at-once-and-getting-notified) | First of all, please **vote and comment** on the [API request](https://github.com/dotnet/corefx/issues/10752#issuecomment-359197102) on the .NET repo.
Here's my optimized version of the `ObservableRangeCollection` (based on James Montemagno's [one](https://github.com/jamesmontemagno/mvvm-helpers/blob/release-1.3.0/MvvmHelpers/ObservableRangeCollection.cs)).
It performs very well: it reuses existing elements when possible and avoids unnecessary events, or batches them into one, where it can.
The `ReplaceRange` method replaces/removes/adds the required elements at the appropriate indices and batches the possible events.
Tested on a Xamarin.Forms UI with great results for very frequent updates to a large collection (5-7 updates per second).
Note:
Since **WPF** is not accustomed to working with range operations, it will throw a `NotSupportedException` when the `ObservableRangeCollection` below is used in WPF UI-related work, such as binding it to a `ListBox` etc. (you can still use the `ObservableRangeCollection<T>` if it is not bound to UI).
However you can use the [`WpfObservableRangeCollection<T>`](https://gist.github.com/weitzhandler/65ac9113e31d12e697cb58cd92601091#file-wpfobservablerangecollection-cs) workaround.
The real solution would be creating a `CollectionView` that knows how to deal with range operations, but I still didn't have the time to implement this.
**[RAW Code](https://gist.github.com/weitzhandler/65ac9113e31d12e697cb58cd92601091)** - open as Raw, then do `Ctrl`+`A` to select all, then `Ctrl`+`C` to copy.
```
// Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
// See the LICENSE file in the project root for more information.
using System.Collections.Generic;
using System.Collections.Specialized;
using System.ComponentModel;
using System.Diagnostics;
namespace System.Collections.ObjectModel
{
/// <summary>
/// Implementation of a dynamic data collection based on generic Collection<T>,
/// implementing INotifyCollectionChanged to notify listeners
/// when items get added, removed or the whole list is refreshed.
/// </summary>
public class ObservableRangeCollection<T> : ObservableCollection<T>
{
//------------------------------------------------------
//
// Private Fields
//
//------------------------------------------------------
#region Private Fields
[NonSerialized]
private DeferredEventsCollection _deferredEvents;
#endregion Private Fields
//------------------------------------------------------
//
// Constructors
//
//------------------------------------------------------
#region Constructors
/// <summary>
/// Initializes a new instance of ObservableCollection that is empty and has default initial capacity.
/// </summary>
public ObservableRangeCollection() { }
/// <summary>
/// Initializes a new instance of the ObservableCollection class that contains
/// elements copied from the specified collection and has sufficient capacity
/// to accommodate the number of elements copied.
/// </summary>
/// <param name="collection">The collection whose elements are copied to the new list.</param>
/// <remarks>
/// The elements are copied onto the ObservableCollection in the
/// same order they are read by the enumerator of the collection.
/// </remarks>
/// <exception cref="ArgumentNullException"> collection is a null reference </exception>
public ObservableRangeCollection(IEnumerable<T> collection) : base(collection) { }
/// <summary>
/// Initializes a new instance of the ObservableCollection class
/// that contains elements copied from the specified list
/// </summary>
/// <param name="list">The list whose elements are copied to the new list.</param>
/// <remarks>
/// The elements are copied onto the ObservableCollection in the
/// same order they are read by the enumerator of the list.
/// </remarks>
/// <exception cref="ArgumentNullException"> list is a null reference </exception>
public ObservableRangeCollection(List<T> list) : base(list) { }
#endregion Constructors
//------------------------------------------------------
//
// Public Methods
//
//------------------------------------------------------
#region Public Methods
/// <summary>
/// Adds the elements of the specified collection to the end of the <see cref="ObservableCollection{T}"/>.
/// </summary>
/// <param name="collection">
/// The collection whose elements should be added to the end of the <see cref="ObservableCollection{T}"/>.
/// The collection itself cannot be null, but it can contain elements that are null, if type T is a reference type.
/// </param>
/// <exception cref="ArgumentNullException"><paramref name="collection"/> is null.</exception>
public void AddRange(IEnumerable<T> collection)
{
InsertRange(Count, collection);
}
/// <summary>
/// Inserts the elements of a collection into the <see cref="ObservableCollection{T}"/> at the specified index.
/// </summary>
/// <param name="index">The zero-based index at which the new elements should be inserted.</param>
/// <param name="collection">The collection whose elements should be inserted into the List<T>.
/// The collection itself cannot be null, but it can contain elements that are null, if type T is a reference type.</param>
/// <exception cref="ArgumentNullException"><paramref name="collection"/> is null.</exception>
/// <exception cref="ArgumentOutOfRangeException"><paramref name="index"/> is not in the collection range.</exception>
public void InsertRange(int index, IEnumerable<T> collection)
{
if (collection == null)
throw new ArgumentNullException(nameof(collection));
if (index < 0)
throw new ArgumentOutOfRangeException(nameof(index));
if (index > Count)
throw new ArgumentOutOfRangeException(nameof(index));
if (collection is ICollection<T> countable)
{
if (countable.Count == 0)
{
return;
}
}
else if (!ContainsAny(collection))
{
return;
}
CheckReentrancy();
//expand the following couple of lines when adding more constructors.
var target = (List<T>)Items;
target.InsertRange(index, collection);
OnEssentialPropertiesChanged();
if (!(collection is IList list))
list = new List<T>(collection);
OnCollectionChanged(new NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Add, list, index));
}
/// <summary>
/// Removes the first occurence of each item in the specified collection from the <see cref="ObservableCollection{T}"/>.
/// </summary>
/// <param name="collection">The items to remove.</param>
/// <exception cref="ArgumentNullException"><paramref name="collection"/> is null.</exception>
public void RemoveRange(IEnumerable<T> collection)
{
if (collection == null)
throw new ArgumentNullException(nameof(collection));
if (Count == 0)
{
return;
}
else if (collection is ICollection<T> countable)
{
if (countable.Count == 0)
return;
else if (countable.Count == 1)
using (IEnumerator<T> enumerator = countable.GetEnumerator())
{
enumerator.MoveNext();
Remove(enumerator.Current);
return;
}
}
else if (!(ContainsAny(collection)))
{
return;
}
CheckReentrancy();
var clusters = new Dictionary<int, List<T>>();
var lastIndex = -1;
List<T> lastCluster = null;
foreach (T item in collection)
{
var index = IndexOf(item);
if (index < 0)
{
continue;
}
Items.RemoveAt(index);
if (lastIndex == index && lastCluster != null)
{
lastCluster.Add(item);
}
else
{
clusters[lastIndex = index] = lastCluster = new List<T> { item };
}
}
OnEssentialPropertiesChanged();
if (Count == 0)
OnCollectionReset();
else
foreach (KeyValuePair<int, List<T>> cluster in clusters)
OnCollectionChanged(new NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Remove, cluster.Value, cluster.Key));
}
/// <summary>
/// Iterates over the collection and removes all items that satisfy the specified match.
/// </summary>
/// <remarks>The complexity is O(n).</remarks>
/// <param name="match"></param>
/// <returns>Returns the number of elements that where </returns>
/// <exception cref="ArgumentNullException"><paramref name="match"/> is null.</exception>
public int RemoveAll(Predicate<T> match)
{
return RemoveAll(0, Count, match);
}
/// <summary>
/// Iterates over the specified range within the collection and removes all items that satisfy the specified match.
/// </summary>
/// <remarks>The complexity is O(n).</remarks>
/// <param name="index">The index of where to start performing the search.</param>
/// <param name="count">The number of items to iterate on.</param>
/// <param name="match"></param>
/// <returns>Returns the number of elements that where </returns>
/// <exception cref="ArgumentOutOfRangeException"><paramref name="index"/> is out of range.</exception>
/// <exception cref="ArgumentOutOfRangeException"><paramref name="count"/> is out of range.</exception>
/// <exception cref="ArgumentNullException"><paramref name="match"/> is null.</exception>
public int RemoveAll(int index, int count, Predicate<T> match)
{
if (index < 0)
throw new ArgumentOutOfRangeException(nameof(index));
if (count < 0)
throw new ArgumentOutOfRangeException(nameof(count));
if (index + count > Count)
throw new ArgumentOutOfRangeException(nameof(index));
if (match == null)
throw new ArgumentNullException(nameof(match));
if (Count == 0)
return 0;
List<T> cluster = null;
var clusterIndex = -1;
var removedCount = 0;
using (BlockReentrancy())
using (DeferEvents())
{
for (var i = 0; i < count; i++, index++)
{
T item = Items[index];
if (match(item))
{
Items.RemoveAt(index);
removedCount++;
if (clusterIndex == index)
{
Debug.Assert(cluster != null);
cluster.Add(item);
}
else
{
cluster = new List<T> { item };
clusterIndex = index;
}
index--;
}
else if (clusterIndex > -1)
{
OnCollectionChanged(new NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Remove, cluster, clusterIndex));
clusterIndex = -1;
cluster = null;
}
}
if (clusterIndex > -1)
OnCollectionChanged(new NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Remove, cluster, clusterIndex));
}
if (removedCount > 0)
OnEssentialPropertiesChanged();
return removedCount;
}
/// <summary>
/// Removes a range of elements from the <see cref="ObservableCollection{T}"/>>.
/// </summary>
/// <param name="index">The zero-based starting index of the range of elements to remove.</param>
/// <param name="count">The number of elements to remove.</param>
/// <exception cref="ArgumentOutOfRangeException">The specified range is exceeding the collection.</exception>
public void RemoveRange(int index, int count)
{
if (index < 0)
throw new ArgumentOutOfRangeException(nameof(index));
if (count < 0)
throw new ArgumentOutOfRangeException(nameof(count));
if (index + count > Count)
throw new ArgumentOutOfRangeException(nameof(index));
if (count == 0)
return;
if (count == 1)
{
RemoveItem(index);
return;
}
//Items will always be List<T>, see constructors
var items = (List<T>)Items;
List<T> removedItems = items.GetRange(index, count);
CheckReentrancy();
items.RemoveRange(index, count);
OnEssentialPropertiesChanged();
if (Count == 0)
OnCollectionReset();
else
OnCollectionChanged(new NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Remove, removedItems, index));
}
/// <summary>
/// Clears the current collection and replaces it with the specified collection,
/// using the default <see cref="EqualityComparer{T}"/>.
/// </summary>
/// <param name="collection">The items to fill the collection with, after clearing it.</param>
/// <exception cref="ArgumentNullException"><paramref name="collection"/> is null.</exception>
public void ReplaceRange(IEnumerable<T> collection)
{
ReplaceRange(0, Count, collection, EqualityComparer<T>.Default);
}
/// <summary>
/// Clears the current collection and replaces it with the specified collection,
/// using the specified comparer to skip equal items.
/// </summary>
/// <param name="collection">The items to fill the collection with, after clearing it.</param>
/// <param name="comparer">An <see cref="IEqualityComparer{T}"/> to be used
/// to check whether an item in the same location already existed before,
/// which in case it would not be added to the collection, and no event will be raised for it.</param>
/// <exception cref="ArgumentNullException"><paramref name="collection"/> is null.</exception>
/// <exception cref="ArgumentNullException"><paramref name="comparer"/> is null.</exception>
public void ReplaceRange(IEnumerable<T> collection, IEqualityComparer<T> comparer)
{
ReplaceRange(0, Count, collection, comparer);
}
/// <summary>
/// Removes the specified range and inserts the specified collection,
/// ignoring equal items (using <see cref="EqualityComparer{T}.Default"/>).
/// </summary>
/// <param name="index">The index of where to start the replacement.</param>
/// <param name="count">The number of items to be replaced.</param>
/// <param name="collection">The collection to insert in that location.</param>
/// <exception cref="ArgumentOutOfRangeException"><paramref name="index"/> is out of range.</exception>
/// <exception cref="ArgumentOutOfRangeException"><paramref name="count"/> is out of range.</exception>
/// <exception cref="ArgumentNullException"><paramref name="collection"/> is null.</exception>
public void ReplaceRange(int index, int count, IEnumerable<T> collection)
{
ReplaceRange(index, count, collection, EqualityComparer<T>.Default);
}
/// <summary>
/// Removes the specified range and inserts the specified collection in its position, leaving equal items in equal positions intact.
/// </summary>
/// <param name="index">The index of where to start the replacement.</param>
/// <param name="count">The number of items to be replaced.</param>
/// <param name="collection">The collection to insert in that location.</param>
/// <param name="comparer">The comparer to use when checking for equal items.</param>
/// <exception cref="ArgumentOutOfRangeException"><paramref name="index"/> is out of range.</exception>
/// <exception cref="ArgumentOutOfRangeException"><paramref name="count"/> is out of range.</exception>
/// <exception cref="ArgumentNullException"><paramref name="collection"/> is null.</exception>
/// <exception cref="ArgumentNullException"><paramref name="comparer"/> is null.</exception>
public void ReplaceRange(int index, int count, IEnumerable<T> collection, IEqualityComparer<T> comparer)
{
if (index < 0)
throw new ArgumentOutOfRangeException(nameof(index));
if (count < 0)
throw new ArgumentOutOfRangeException(nameof(count));
if (index + count > Count)
throw new ArgumentOutOfRangeException(nameof(index));
if (collection == null)
throw new ArgumentNullException(nameof(collection));
if (comparer == null)
throw new ArgumentNullException(nameof(comparer));
if (collection is ICollection<T> countable)
{
if (countable.Count == 0)
{
RemoveRange(index, count);
return;
}
}
else if (!ContainsAny(collection))
{
RemoveRange(index, count);
return;
}
if (index + count == 0)
{
InsertRange(0, collection);
return;
}
if (!(collection is IList<T> list))
list = new List<T>(collection);
using (BlockReentrancy())
using (DeferEvents())
{
var rangeCount = index + count;
var addedCount = list.Count;
var changesMade = false;
List<T>
newCluster = null,
oldCluster = null;
int i = index;
for (; i < rangeCount && i - index < addedCount; i++)
{
//parallel position
T old = this[i], @new = list[i - index];
if (comparer.Equals(old, @new))
{
OnRangeReplaced(i, newCluster, oldCluster);
continue;
}
else
{
Items[i] = @new;
if (newCluster == null)
{
Debug.Assert(oldCluster == null);
newCluster = new List<T> { @new };
oldCluster = new List<T> { old };
}
else
{
newCluster.Add(@new);
oldCluster.Add(old);
}
changesMade = true;
}
}
OnRangeReplaced(i, newCluster, oldCluster);
//exceeding position
if (count != addedCount)
{
var items = (List<T>)Items;
if (count > addedCount)
{
var removedCount = rangeCount - addedCount;
T[] removed = new T[removedCount];
items.CopyTo(i, removed, 0, removed.Length);
items.RemoveRange(i, removedCount);
OnCollectionChanged(new NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Remove, removed, i));
}
else
{
var k = i - index;
T[] added = new T[addedCount - k];
for (int j = k; j < addedCount; j++)
{
T @new = list[j];
added[j - k] = @new;
}
items.InsertRange(i, added);
OnCollectionChanged(new NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Add, added, i));
}
OnEssentialPropertiesChanged();
}
else if (changesMade)
{
OnIndexerPropertyChanged();
}
}
}
#endregion Public Methods
//------------------------------------------------------
//
// Protected Methods
//
//------------------------------------------------------
#region Protected Methods
/// <summary>
/// Called by base class <see cref="Collection{T}"/> when the list is being cleared;
/// raises a CollectionChanged event to any listeners.
/// </summary>
protected override void ClearItems()
{
if (Count == 0)
return;
CheckReentrancy();
base.ClearItems();
OnEssentialPropertiesChanged();
OnCollectionReset();
}
/// <summary>
/// Called by base class <see cref="Collection{T}"/> when an item is set in list;
/// raises a CollectionChanged event to any listeners.
/// </summary>
protected override void SetItem(int index, T item)
{
if (Equals(this[index], item))
return;
CheckReentrancy();
T originalItem = this[index];
base.SetItem(index, item);
OnIndexerPropertyChanged();
OnCollectionChanged(NotifyCollectionChangedAction.Replace, originalItem, item, index);
}
/// <summary>
/// Raise CollectionChanged event to any listeners.
/// Properties/methods modifying this ObservableCollection will raise
/// a collection changed event through this virtual method.
/// </summary>
/// <remarks>
/// When overriding this method, either call its base implementation
/// or call <see cref="BlockReentrancy"/> to guard against reentrant collection changes.
/// </remarks>
protected override void OnCollectionChanged(NotifyCollectionChangedEventArgs e)
{
if (_deferredEvents != null)
{
_deferredEvents.Add(e);
return;
}
base.OnCollectionChanged(e);
}
protected virtual IDisposable DeferEvents() => new DeferredEventsCollection(this);
#endregion Protected Methods
//------------------------------------------------------
//
// Private Methods
//
//------------------------------------------------------
#region Private Methods
/// <summary>
/// Helper function to determine if a collection contains any elements.
/// </summary>
/// <param name="collection">The collection to evaluate.</param>
/// <returns><c>true</c> if <paramref name="collection"/> has at least one element; otherwise, <c>false</c>.</returns>
private static bool ContainsAny(IEnumerable<T> collection)
{
using (IEnumerator<T> enumerator = collection.GetEnumerator())
return enumerator.MoveNext();
}
/// <summary>
/// Helper to raise Count property and the Indexer property.
/// </summary>
private void OnEssentialPropertiesChanged()
{
OnPropertyChanged(EventArgsCache.CountPropertyChanged);
OnIndexerPropertyChanged();
}
/// <summary>
/// Helper to raise a PropertyChanged event for the Indexer property.
/// </summary>
private void OnIndexerPropertyChanged() =>
OnPropertyChanged(EventArgsCache.IndexerPropertyChanged);
/// <summary>
/// Helper to raise CollectionChanged event to any listeners
/// </summary>
private void OnCollectionChanged(NotifyCollectionChangedAction action, object oldItem, object newItem, int index) =>
OnCollectionChanged(new NotifyCollectionChangedEventArgs(action, newItem, oldItem, index));
/// <summary>
/// Helper to raise CollectionChanged event with action == Reset to any listeners
/// </summary>
private void OnCollectionReset() =>
OnCollectionChanged(EventArgsCache.ResetCollectionChanged);
/// <summary>
/// Helper to raise event for clustered action and clear cluster.
/// </summary>
/// <param name="followingItemIndex">The index of the item following the replacement block.</param>
/// <param name="newCluster"></param>
/// <param name="oldCluster"></param>
//TODO should have really been a local method inside ReplaceRange(int index, int count, IEnumerable<T> collection, IEqualityComparer<T> comparer),
//move when supported language version updated.
private void OnRangeReplaced(int followingItemIndex, ICollection<T> newCluster, ICollection<T> oldCluster)
{
if (oldCluster == null || oldCluster.Count == 0)
{
Debug.Assert(newCluster == null || newCluster.Count == 0);
return;
}
OnCollectionChanged(
new NotifyCollectionChangedEventArgs(
NotifyCollectionChangedAction.Replace,
new List<T>(newCluster),
new List<T>(oldCluster),
followingItemIndex - oldCluster.Count));
oldCluster.Clear();
newCluster.Clear();
}
#endregion Private Methods
//------------------------------------------------------
//
// Private Types
//
//------------------------------------------------------
#region Private Types
private sealed class DeferredEventsCollection : List<NotifyCollectionChangedEventArgs>, IDisposable
{
private readonly ObservableRangeCollection<T> _collection;
public DeferredEventsCollection(ObservableRangeCollection<T> collection)
{
Debug.Assert(collection != null);
Debug.Assert(collection._deferredEvents == null);
_collection = collection;
_collection._deferredEvents = this;
}
public void Dispose()
{
_collection._deferredEvents = null;
foreach (var args in this)
_collection.OnCollectionChanged(args);
}
}
#endregion Private Types
}
/// <remarks>
/// To be kept outside <see cref="ObservableCollection{T}"/>, since otherwise, a new instance will be created for each generic type used.
/// </remarks>
internal static class EventArgsCache
{
internal static readonly PropertyChangedEventArgs CountPropertyChanged = new PropertyChangedEventArgs("Count");
internal static readonly PropertyChangedEventArgs IndexerPropertyChanged = new PropertyChangedEventArgs("Item[]");
internal static readonly NotifyCollectionChangedEventArgs ResetCollectionChanged = new NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Reset);
}
}
``` | Please refer to the [updated and optimized C# 7 version](https://stackoverflow.com/a/45364074/75500). I didn't want to remove the VB.NET version so I just posted it in a separate answer.
## [Go to updated version](https://stackoverflow.com/a/45364074/75500)
It seems it's not supported, so I implemented it myself. FYI, I hope it's helpful:
I updated the VB version: it now raises an event before changing the collection so you can cancel the change (useful with `DataGrid`, `ListView` and many other controls, e.g. to show an "Are you sure?" confirmation to the user). **The updated VB version is at the bottom of this message.**
Please accept my apology that the screen is too narrow to contain my code; I don't like it either.
**VB.NET:**
```
Imports System.Collections.Specialized
Namespace System.Collections.ObjectModel
''' <summary>
''' Represents a dynamic data collection that provides notifications when items get added, removed, or when the whole list is refreshed.
''' </summary>
''' <typeparam name="T"></typeparam>
Public Class ObservableRangeCollection(Of T) : Inherits System.Collections.ObjectModel.ObservableCollection(Of T)
''' <summary>
''' Adds the elements of the specified collection to the end of the ObservableCollection(Of T).
''' </summary>
Public Sub AddRange(ByVal collection As IEnumerable(Of T))
For Each i In collection
Items.Add(i)
Next
OnCollectionChanged(New NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Reset))
End Sub
''' <summary>
''' Removes the first occurrence of each item in the specified collection from ObservableCollection(Of T).
''' </summary>
Public Sub RemoveRange(ByVal collection As IEnumerable(Of T))
For Each i In collection
Items.Remove(i)
Next
OnCollectionChanged(New NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Reset))
End Sub
''' <summary>
''' Clears the current collection and replaces it with the specified item.
''' </summary>
Public Sub Replace(ByVal item As T)
ReplaceRange(New T() {item})
End Sub
''' <summary>
''' Clears the current collection and replaces it with the specified collection.
''' </summary>
Public Sub ReplaceRange(ByVal collection As IEnumerable(Of T))
Dim old = Items.ToList
Items.Clear()
For Each i In collection
Items.Add(i)
Next
OnCollectionChanged(New NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Reset))
End Sub
''' <summary>
''' Initializes a new instance of the System.Collections.ObjectModel.ObservableCollection(Of T) class.
''' </summary>
''' <remarks></remarks>
Public Sub New()
MyBase.New()
End Sub
''' <summary>
''' Initializes a new instance of the System.Collections.ObjectModel.ObservableCollection(Of T) class that contains elements copied from the specified collection.
''' </summary>
''' <param name="collection">collection: The collection from which the elements are copied.</param>
''' <exception cref="System.ArgumentNullException">The collection parameter cannot be null.</exception>
Public Sub New(ByVal collection As IEnumerable(Of T))
MyBase.New(collection)
End Sub
End Class
End Namespace
```
**C#:**
```
using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Collections.Specialized;
using System.Linq;
/// <summary>
/// Represents a dynamic data collection that provides notifications when items get added, removed, or when the whole list is refreshed.
/// </summary>
/// <typeparam name="T"></typeparam>
public class ObservableRangeCollection<T> : ObservableCollection<T>
{
/// <summary>
/// Adds the elements of the specified collection to the end of the ObservableCollection(Of T).
/// </summary>
public void AddRange(IEnumerable<T> collection)
{
if (collection == null) throw new ArgumentNullException("collection");
foreach (var i in collection) Items.Add(i);
OnCollectionChanged(new NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Reset));
}
/// <summary>
/// Removes the first occurrence of each item in the specified collection from ObservableCollection(Of T).
/// </summary>
public void RemoveRange(IEnumerable<T> collection)
{
if (collection == null) throw new ArgumentNullException("collection");
foreach (var i in collection) Items.Remove(i);
OnCollectionChanged(new NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Reset));
}
/// <summary>
/// Clears the current collection and replaces it with the specified item.
/// </summary>
public void Replace(T item)
{
ReplaceRange(new T[] { item });
}
/// <summary>
/// Clears the current collection and replaces it with the specified collection.
/// </summary>
public void ReplaceRange(IEnumerable<T> collection)
{
if (collection == null) throw new ArgumentNullException("collection");
Items.Clear();
foreach (var i in collection) Items.Add(i);
OnCollectionChanged(new NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Reset));
}
/// <summary>
/// Initializes a new instance of the System.Collections.ObjectModel.ObservableCollection(Of T) class.
/// </summary>
public ObservableRangeCollection()
: base() { }
/// <summary>
/// Initializes a new instance of the System.Collections.ObjectModel.ObservableCollection(Of T) class that contains elements copied from the specified collection.
/// </summary>
/// <param name="collection">collection: The collection from which the elements are copied.</param>
/// <exception cref="System.ArgumentNullException">The collection parameter cannot be null.</exception>
public ObservableRangeCollection(IEnumerable<T> collection)
: base(collection) { }
}
```
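A quick usage sketch for the class above — bound consumers get a single notification per batch call instead of one event per item:

```
var people = new ObservableRangeCollection<string>();

// one Reset notification, not three Add notifications
people.AddRange(new[] { "Ann", "Bob", "Carl" });

// swap the whole contents in one shot
people.ReplaceRange(new[] { "Dana" });
```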
## Update - Observable range collection with collection changing notification
```
Imports System.Collections.Specialized
Imports System.ComponentModel
Imports System.Collections.ObjectModel
Public Class ObservableRangeCollection(Of T) : Inherits ObservableCollection(Of T) : Implements INotifyCollectionChanging(Of T)
''' <summary>
''' Initializes a new instance of the System.Collections.ObjectModel.ObservableCollection(Of T) class.
''' </summary>
''' <remarks></remarks>
Public Sub New()
MyBase.New()
End Sub
''' <summary>
''' Initializes a new instance of the System.Collections.ObjectModel.ObservableCollection(Of T) class that contains elements copied from the specified collection.
''' </summary>
''' <param name="collection">collection: The collection from which the elements are copied.</param>
''' <exception cref="System.ArgumentNullException">The collection parameter cannot be null.</exception>
Public Sub New(ByVal collection As IEnumerable(Of T))
MyBase.New(collection)
End Sub
''' <summary>
''' Adds the elements of the specified collection to the end of the ObservableCollection(Of T).
''' </summary>
Public Sub AddRange(ByVal collection As IEnumerable(Of T))
Dim ce As New NotifyCollectionChangingEventArgs(Of T)(NotifyCollectionChangedAction.Add, collection)
OnCollectionChanging(ce)
If ce.Cancel Then Exit Sub
'the added block starts at the previous Count, not Count - 1
Dim index = Items.Count
For Each i In collection
Items.Add(i)
Next
OnCollectionChanged(New NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Add, New List(Of T)(collection), index))
End Sub
''' <summary>
''' Inserts the collection at specified index.
''' </summary>
Public Sub InsertRange(ByVal index As Integer, ByVal Collection As IEnumerable(Of T))
Dim ce As New NotifyCollectionChangingEventArgs(Of T)(NotifyCollectionChangedAction.Add, Collection)
OnCollectionChanging(ce)
If ce.Cancel Then Exit Sub
For Each i In Collection
Items.Insert(index, i)
index += 1 'advance so the inserted items keep their original order
Next
OnCollectionChanged(New NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Reset))
End Sub
''' <summary>
''' Removes the first occurrence of each item in the specified collection from ObservableCollection(Of T).
''' </summary>
Public Sub RemoveRange(ByVal collection As IEnumerable(Of T))
Dim ce As New NotifyCollectionChangingEventArgs(Of T)(NotifyCollectionChangedAction.Remove, collection)
OnCollectionChanging(ce)
If ce.Cancel Then Exit Sub
For Each i In collection
Items.Remove(i)
Next
OnCollectionChanged(New NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Reset))
End Sub
''' <summary>
''' Clears the current collection and replaces it with the specified item.
''' </summary>
Public Sub Replace(ByVal item As T)
ReplaceRange(New T() {item})
End Sub
''' <summary>
''' Clears the current collection and replaces it with the specified collection.
''' </summary>
Public Sub ReplaceRange(ByVal collection As IEnumerable(Of T))
Dim ce As New NotifyCollectionChangingEventArgs(Of T)(NotifyCollectionChangedAction.Replace, Items)
OnCollectionChanging(ce)
If ce.Cancel Then Exit Sub
Items.Clear()
For Each i In collection
Items.Add(i)
Next
OnCollectionChanged(New NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Reset))
End Sub
Protected Overrides Sub ClearItems()
Dim e As New NotifyCollectionChangingEventArgs(Of T)(NotifyCollectionChangedAction.Reset, Items)
OnCollectionChanging(e)
If e.Cancel Then Exit Sub
MyBase.ClearItems()
End Sub
Protected Overrides Sub InsertItem(ByVal index As Integer, ByVal item As T)
Dim ce As New NotifyCollectionChangingEventArgs(Of T)(NotifyCollectionChangedAction.Add, item)
OnCollectionChanging(ce)
If ce.Cancel Then Exit Sub
MyBase.InsertItem(index, item)
End Sub
Protected Overrides Sub MoveItem(ByVal oldIndex As Integer, ByVal newIndex As Integer)
Dim ce As New NotifyCollectionChangingEventArgs(Of T)()
OnCollectionChanging(ce)
If ce.Cancel Then Exit Sub
MyBase.MoveItem(oldIndex, newIndex)
End Sub
Protected Overrides Sub RemoveItem(ByVal index As Integer)
Dim ce As New NotifyCollectionChangingEventArgs(Of T)(NotifyCollectionChangedAction.Remove, Items(index))
OnCollectionChanging(ce)
If ce.Cancel Then Exit Sub
MyBase.RemoveItem(index)
End Sub
Protected Overrides Sub SetItem(ByVal index As Integer, ByVal item As T)
Dim ce As New NotifyCollectionChangingEventArgs(Of T)(NotifyCollectionChangedAction.Replace, Items(index))
OnCollectionChanging(ce)
If ce.Cancel Then Exit Sub
MyBase.SetItem(index, item)
End Sub
Protected Overrides Sub OnCollectionChanged(ByVal e As Specialized.NotifyCollectionChangedEventArgs)
If e.NewItems IsNot Nothing Then
For Each i As T In e.NewItems
If TypeOf i Is INotifyPropertyChanged Then AddHandler DirectCast(i, INotifyPropertyChanged).PropertyChanged, AddressOf Item_PropertyChanged
Next
End If
MyBase.OnCollectionChanged(e)
End Sub
Private Sub Item_PropertyChanged(ByVal sender As Object, ByVal e As ComponentModel.PropertyChangedEventArgs)
Dim item = DirectCast(sender, T)
'Reset cannot carry items/indices; raise a Replace for the changed item instead
OnCollectionChanged(New NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Replace, item, item, IndexOf(item)))
End Sub
Public Event CollectionChanging(ByVal sender As Object, ByVal e As NotifyCollectionChangingEventArgs(Of T)) Implements INotifyCollectionChanging(Of T).CollectionChanging
Protected Overridable Sub OnCollectionChanging(ByVal e As NotifyCollectionChangingEventArgs(Of T))
RaiseEvent CollectionChanging(Me, e)
End Sub
End Class
Public Interface INotifyCollectionChanging(Of T)
Event CollectionChanging(ByVal sender As Object, ByVal e As NotifyCollectionChangingEventArgs(Of T))
End Interface
Public Class NotifyCollectionChangingEventArgs(Of T) : Inherits CancelEventArgs
Public Sub New()
m_Action = NotifyCollectionChangedAction.Move
m_Items = New T() {}
End Sub
Public Sub New(ByVal action As NotifyCollectionChangedAction, ByVal item As T)
m_Action = action
m_Items = New T() {item}
End Sub
Public Sub New(ByVal action As NotifyCollectionChangedAction, ByVal items As IEnumerable(Of T))
m_Action = action
m_Items = items
End Sub
Private m_Action As NotifyCollectionChangedAction
Public ReadOnly Property Action() As NotifyCollectionChangedAction
Get
Return m_Action
End Get
End Property
Private m_Items As IEnumerable(Of T)
Public ReadOnly Property Items() As IEnumerable(Of T)
Get
Return m_Items
End Get
End Property
End Class
``` | ObservableCollection Doesn't support AddRange method, so I get notified for each item added, besides what about INotifyCollectionChanging? | [
"",
"c#",
"vb.net",
"observablecollection",
"addrange",
"inotifycollectionchanged",
""
] |
Are there any further benefits to the "new", shorter way to handle properties, other than shorter code - like performance benefits, less memory usage etc.?
Writing
```
public string StrVariable { get; set; }
```
Rather than
```
private string strVariable;
public string StrVariable
{
set
{
strVariable = value;
}
get
{
return strVariable;
}
}
```
And are there any drawbacks - perhaps some would argue the code is *less* readable, less explicit? | [Here's a link you may find useful.](https://stackoverflow.com/questions/9304/c-3-0-auto-properties-useful-or-not) | One big drawback **in a specific scenario** - you lose control of the field name. This might sound insignificant, but it [is a big problem](http://marcgravell.blogspot.com/2009/03/obfuscation-serialization-and.html) if you are using binary serialization (via BinaryFormatter).
Other things:
* they can't be `readonly` at the field level
* for `struct`s you need to call `: this()` in custom constructors
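The `struct` point can be illustrated like this (a minimal sketch; in older C# versions the compiler insists every backing field is assigned before the auto-properties can be touched, and chaining `: this()` zero-initializes them first):

```
public struct Point
{
    public int X { get; private set; }
    public int Y { get; private set; }

    // without ": this()" the compiler complains that the
    // auto-properties' backing fields are unassigned
    public Point(int x, int y) : this()
    {
        X = x;
        Y = y;
    }
}
```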
They do, however, do a fantastic job in 99% of cases - they express the code neatly, while leaving it possible to add extra implementation details later (by switching to explicit fields) without breaking calling code (except for the points above). | Are there any more benefits, or perhaps drawbacks of the "new" way to define properties? | [
"",
"c#",
"coding-style",
""
] |
Part of me thinks that this shouldn't be possible (even if it is), but I'll ask anyway.
Given the following class hierarchy (`Grandparent` and `Parent` are from a 3rd party and thus, not under my control), how would I override `myMethod()` in `Child` such that it bypasses the overridden implementation in `Parent` and invokes the one in `Grandparent`?
```
class Grandparent {
public void myMethod() {
// do stuff
}
}
class Parent extends Grandparent {
@Override public void myMethod() {
super.myMethod();
// do something else
}
}
class Child extends Parent {
@Override public void myMethod() {
// ??? I want to *only* do what Grandparent did here
}
}
```
Pretend that the `Parent` class provides a lot of other helpful behavior and is a crucial element of `Child`'s hierarchy (in other words, I'm not looking for "make `Child` a subclass of `Grandparent`"). | The idea behind **inheritance** is that each class defines its methods the way it needs to, so you don't need to inspect any code.
It seems like you're subclassing here just to re-use code, and that's not the idea of subclassing.
Maybe you should have a helper member to do some of the tasks you need, instead of subclassing, and have both "Child" and "Parent" classes extend "Grandparent".
The main question you need to ask yourself is: "Is Child really a descendant of Parent, Grandparent or neither?" In other words, **for every instance of Child, can I say it's a Parent**?
If the answer is no, then you're subclassing wrongly: inheritance is supposed to mean something, not just code re-use (i.e. **Ford *IS ALSO* a Car**, not just "Ford" *uses* "Car" methods). | Assuming that I couldn't touch the code in Parent or Grandparent and assuming that I'm not, as Seb suggested (and as Steve apparently agreed) simply misusing inheritance entirely:
I'd create a local instance of a `Grandparent` object (or a local class extending `Grandparent`, if it's abstract) and access its interpretation of myMethod() directly. Of course, depending on how much state information myMethod() is supposed to read and/or manipulate, the amount of work involved could be anything from "easy" to "excruciating".
It's an ugly solution, and, depending on how much state information is accessed, could be brittle as hell. But if `Grandparent` is reliably stable and/or myMethod() is fairly self-contained, it could work. The devil is in the details, as always.
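That delegation idea can be sketched like this (`myMethod()` returns a `String` here just to make the effect visible; it assumes `Grandparent` is instantiable and `myMethod()` doesn't depend on `Child`'s own state):

```
class Grandparent {
    public String myMethod() { return "grandparent"; }
}

class Parent extends Grandparent {
    @Override public String myMethod() { return super.myMethod() + "+parent"; }
}

class Child extends Parent {
    // a standalone Grandparent; calls to it bypass Parent's override entirely
    private final Grandparent delegate = new Grandparent();

    @Override public String myMethod() {
        return delegate.myMethod();
    }
}

class Main {
    public static void main(String[] args) {
        System.out.println(new Child().myMethod()); // prints "grandparent"
    }
}
```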
I definitely agree with Seb that this is re-use, not inheritance. But, hey. Re-use is often a Good Thing. | How to override method to invoke superclass' superclass method? | [
"",
"java",
"oop",
"class-design",
""
] |
We have some basic C# logic that iterates over a directory and returns the folders and files within. When run against a network share (\\server\share\folder) that is inaccessible or invalid, the code seems to 'hang' for about 30 seconds before returning back from the call.
I'd like to end up with a method that will attempt to get folders and files from the given path, but **without the timeout** period. In other words, to reduce or eliminate the timeout altogether.
I've tried something as simple as validating the existence of the directory ahead of time thinking that an 'unavailable' network drive would quickly return false, but that did not work as expected.
```
System.IO.Directory.Exists(path) //hangs
System.IO.DirectoryInfo di = new System.IO.DirectoryInfo(path); //hangs
```
Any suggestions on what may help me achieve an efficient (and hopefully managed) solution? | Place it on its own thread, if it doesn't come back in a certain amount of time, move on. | You can use this code:
```
// 'uri' is assumed to hold the UNC path being checked
var task = new Task<bool>(() => { var fi = new FileInfo(uri.LocalPath); return fi.Exists; });
task.Start();
// give the probe 100 ms; treat a timeout as "not available"
return task.Wait(100) && task.Result;
``` | How To: Prevent Timeout When Inspecting Unavailable Network Share - C# | [
"",
"c#",
"windows",
"networking",
"filesystems",
""
] |
I've dealt with instances where I would throw/rethrow an exception knowing that the code surrounding it would catch the specific exception. But is there any time you would want to throw an exception, knowing that it wouldn't be caught?
Or at least, NOT catch an exception?
Exceptions immediately halt the application unless they're handled, right? So I guess I'm asking if you would ever want to purposely let your application die? | If your application is primarily going to be used by other clients and is not standalone, it generally makes sense to throw exceptions if a condition arises that you don't know how to (or don't want to) handle, and there's no sensible way for you to recover from it. Clients should be able to decide how they want to handle any exceptions that you might throw.
On the other hand, if your application *is* the endpoint, throwing an exception essentially becomes a notification mechanism to alert people that something has gone terribly wrong. In such cases, you need to consider a few things:
* **How important is the continued running of the application? Is this error really unrecoverable?** Throwing an exception and terminating your program is not something you want to be doing on the space shuttle.
* **Are you using exceptions as a proxy for real logging?** There's almost never a reason to do this; consider a real logging mechanism instead. Catch the exception and have the logger work out what happened.
* **What are you trying to convey by throwing the exception yourself?** Ask yourself what the value in throwing a new exception is, and consider carefully whether there isn't a better way to do what you want.
* **Not catching an exception may leave resources in a bad state.** If you don't gracefully exit, things are generally not cleaned up for you. Make sure you understand what you're doing if you need to do this -- and if you're not going to catch it, at least consider a `try-finally` block so you can do some tidying up. | There's a very good rule that I came across a while ago:
**Throw an exception when a method can't do what its name says it does.**
The idea is that an exception indicates that something has gone wrong. When you are implementing a method, it is not your responsibility to be aware of whether it will be used correctly or not. Whether the code using your method catches the exception or not is not your responsibility, but the responsibility of the person using your method.
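For example (a hypothetical method — its name promises a parsed port, so failing to deliver one is exactly the "can't do what its name says" case):

```
public static int ParsePort(string text)
{
    if (!int.TryParse(text, out int port) || port < 0 || port > 65535)
        throw new ArgumentException($"'{text}' is not a valid port.", nameof(text));
    return port;
}
```

Whether a caller catches that exception, logs it, or lets it bring the process down is the caller's decision, not `ParsePort`'s.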
Another rule to follow is:
**Don't catch an exception unless you know what you want to do with it.**
Obviously, you should include cleanup code in a try...finally block, but you should never just catch an exception just for the sake of catching it. And you should never swallow exceptions silently. While there are occasions when you may want to catch all exceptions (e.g. by doing catch (Exception ex) in C#), these are fairly uncommon and generally have a very specific technical reason. For example, when you are using threads in .NET 2.0 or later, if an exception escapes from your thread, it will cause the entire application domain to unload. In these cases, however, at the very minimum you should log the exception details as an error and provide an explanation in the comments. | Would you ever NOT catch an exception, or throw an exception that won't be caught? | [
"",
"c#",
"exception",
"try-catch",
"unhandled-exception",
""
] |
I keep seeing this referred to on DotNetKicks etc... Yet cannot find out exactly what it is (In English) or what it does? Could you explain what it is, or why I would use it? | [Moq](https://github.com/Moq/moq4) is a mocking framework for C#/.NET. It is used in unit testing to isolate your class under test from its dependencies and ensure that the proper methods on the dependent objects are being called. For more information on mocking you may want to look at the Wikipedia article on [Mock Objects](http://en.wikipedia.org/wiki/Mock_object).
Other mocking frameworks (for .NET) include [JustMock](https://www.telerik.com/products/mocking.aspx), [TypeMock](http://www.typemock.com/), [RhinoMocks](http://ayende.com/projects/rhino-mocks.aspx), [nMock](http://nmock.org/), etc. | In simple English, Moq is a library that, when you include it in your project, gives you the power to do unit testing in an easy manner.
Why? Because one function may call another, then another, and so on. But in reality, all that is needed is the return value from the first call so that the next line can proceed.
Moq helps you skip the actual call to that method and instead return what that function would have returned, and then verify, after all the lines of code have executed, whether what you desired is what you got. Too much English, so here is an example:
```
string SomeMethod(IHelper help)
{
// 'help' lives in another layer; it is passed in so a test can hand over a mock
string first = help.firstcall();
string second = secondcall(first);
return second;
}
```
Now, here `first` is needed for `secondcall()`, but you cannot actually call `help.firstcall()` as it lives in some other layer. So mocking is done, faking that the method was called:
```
[TestMethod]
public void SomeMethod_TestSecond()
{
var mockedIHelper = new Mock<IHelper>();
mockedIHelper.Setup(x => x.firstcall()).Returns("Whatever I want");
}
```
Here, think of `Setup` as faking the method call; we are just interested in `Returns`. | What is use of Moq? | [
"",
"c#",
"moq",
""
] |
This is a little subjective I think; I'm not sure if the opinion will be unanimous (I've seen a lot of code snippets where references are returned).
According to a comment toward [this question I just asked, regarding initializing references](https://stackoverflow.com/questions/752479/am-i-initializing-my-c-reference-variables-correctly), returning a reference can be evil because, [as I understand] it makes it easier to miss deleting it, which can lead to memory leaks.
This worries me, as I have followed examples (unless I'm imagining things) and done this in a fair few places... Have I misunderstood? Is it evil? If so, just how evil?
I feel that because of my mixed bag of pointers and references, combined with the fact that I'm new to C++, and total confusion over what to use when, my applications must be memory leak hell...
Also, I understand that using smart/shared pointers is generally accepted as the best way to avoid memory leaks. | In general, returning a reference is perfectly normal and happens all the time.
If you mean:
```
int& getInt() {
int i;
return i; // DON'T DO THIS.
}
```
That is all sorts of evil. The stack-allocated `i` will go away and you are referring to nothing. This is also evil:
```
int& getInt() {
int* i = new int;
return *i; // DON'T DO THIS.
}
```
Because now the client has to eventually do the strange:
```
int& myInt = getInt(); // note the &, we cannot lose this reference!
delete &myInt; // must delete...totally weird and evil
int oops = getInt();
delete &oops; // undefined behavior, we're wrongly deleting a copy, not the original
```
Note that rvalue references are still just references, so all the evil applications remain the same.
If you want to allocate something that lives beyond the scope of the function, use a smart pointer (or in general, a container):
```
std::unique_ptr<int> getInt() {
return std::make_unique<int>(0);
}
```
And now the client stores a smart pointer:
```
std::unique_ptr<int> x = getInt();
```
References are also okay for accessing things where you know the lifetime is being kept open on a higher-level, e.g.:
```
struct immutableint {
immutableint(int i) : i_(i) {}
const int& get() const { return i_; }
private:
int i_;
};
```
Here we know it's okay to return a reference to `i_` because whatever is calling us manages the lifetime of the class instance, so `i_` will live at least that long.
And of course, there's nothing wrong with just:
```
int getInt() {
return 0;
}
```
If the lifetime should be left up to the caller, and you're just computing the value.
Summary: it's okay to return a reference if the lifetime of the object won't end after the call. | No.
What is evil is making a reference to a dynamically allocated object and losing the original pointer. When you `new` an object you assume an obligation to have a guaranteed `delete`.
But have a look at, eg, `operator<<`: that *must* return a reference, or
```
cout << "foo" << "bar" << "bletch" << endl ;
```
won't work. | Is the practice of returning a C++ reference variable evil? | [
"",
"c++",
"reference",
"c++-faq",
""
] |
I would like to know how this query would be written if no joins were used. I've been trying to figure it out for cases where joins aren't viable or can't be used (aren't available).
```
SELECT
*
FROM
(
table1
INNER JOIN
table2
ON
table1.id = table2.id
)
INNER JOIN
table3
ON
(
table1.id2 = table3.id2
)
AND
(
table1.id3 = table3.id3
)
WHERE
table1.id = 1
```
The reason I can't use joins is because the application uses HQL as opposed to standard SQL and HQL makes joins incredibly difficult to do. | It is impossible to extract data from two different tables without joining them in one way or another. You are using a JOIN statement BUT you can achieve the same thing by placing it in the where clause for example:
```
SELECT * FROM table1, table2 WHERE table1.id = table2.id AND ...
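-- A fuller sketch of the same rewrite applied to the three-table
-- query above: comma joins, with every join condition in WHERE.
SELECT *
FROM table1, table2, table3
WHERE table1.id = table2.id
  AND table1.id2 = table3.id2
  AND table1.id3 = table3.id3
  AND table1.id = 1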
``` | You can do the selects separated and do the joins within the application. | How could I rewrite a query without using joins | [
"",
"sql",
"inner-join",
""
] |
Is one of these faster?
```
inline int ProcessByValue(int i)
{
// process i somehow
}
inline int ProcessByReference(const int& i)
{
// process i somehow
}
```
I know that integral types should be passed by value. However, I am concerned that the compiler might inline ProcessByValue to contain a copy. Is there a rule for this? | The parameter should be typed according to what makes sense for the function.
If the function takes a primitive type, pass by value would make sense. Some people I know would complain if it were passed by const ref (as it's 'unnecessary'), but I don't think I'd complain. If the function takes a user defined type and doesn't modify the parameter, then pass by const ref would make sense.
If it's a user defined type and the parameter is modified, then the semantics of the function would dictate how it should be passed. | It doesn't make a difference. In both cases, the code will be inlined to the same thing. Needlessly copying the int (in pass-by-value) will be eliminated by the compiler, and needlessly creating a reference to the int, and following that layer of indirection when accessing the int, will also be eliminated.
Your question seems to be based on some false assumptions:
* That the inline keyword will actually get your function inlined. (It might, but that's certainly not guaranteed)
* That the choice of reference vs value depends on the function being inline. (The exact same performance considerations would apply to a non-inlined function)
* That it makes a difference, and that you can outsmart the compiler with trivial changes like this (The compiler will apply the same optimizations in either case)
* And that the optimization would actually make a measurable difference in performance. (even if it didn't, the difference would be so small as to be negligible.)
> I know that integral types should be
> passed by value. However, I am
> concerned that the compiler might
> inline ProcessByValue to contain a
> copy. Is there a rule for this?
Yes, it will create a copy. Just like passing by reference would create a reference. And then, at least for simple types like ints, the compiler would eliminate both again.
Inlining a function is not allowed to change the behavior of a function. If you create the function to take a value argument, it will behave as if it was given a value argument, whether or not it's inlined. If you define the function to take a reference, it will behave as if passed a reference, whether or not it's inlined. So do what leads to correct behavior. | should I take arguments to inline functions by reference or value? | [
"",
"c++",
"inline",
""
] |
I have a Windows service that runs as mydomain\userA. I want to be able to run arbitrary .exes from the service. Normally, I use Process.Start() and it works fine, but in some cases I want to run the executable as a different user (mydomain\userB).
If I change the ProcessStartInfo I use to start the process to include credentials, I start getting errors - either an error dialog box that says "The application failed to initialize properly (0xc0000142). Click on OK to terminate the application.", or an "Access is denied" Win32Exception. If I run the process-starting code from the command-line instead of running it in the service, the process starts using the correct credentials (I've verified this by setting the ProcessStartInfo to run whoami.exe and capturing the command-line output).
I've also tried impersonation using WindowsIdentity.Impersonate(), but this hasn't worked - as I understand it, impersonation only affects the current thread, and starting a new process inherits the process' security descriptor, not the current thread's.
I'm running this in an isolated test domain, so both userA and userB are domain admins, and both have the Log On as a Service right domain-wide. | When you launch a new process using ProcessStartInfo the process is started in the same window station and desktop as the launching process. If you are using different credentials then the user will, in general, not have sufficient rights to run in that desktop. The failure to initialize errors are caused when user32.dll attempts to initialize in the new process and can't.
To get around this you must first retrieve the security descriptors associated with the window station and desktop and add the appropriate permissions to the DACL for your user, then launch your process under the new credentials.
EDIT: A detailed description on how to do this and sample code was a little long for here so I put together an [article](http://asprosys.blogspot.com/2009/03/perils-and-pitfalls-of-launching.html) with code.
```
//The following security adjustments are necessary to give the new
//process sufficient permission to run in the service's window station
//and desktop. This uses classes from the AsproLock library also from
//Asprosys.
IntPtr hWinSta = GetProcessWindowStation();
WindowStationSecurity ws = new WindowStationSecurity(hWinSta,
System.Security.AccessControl.AccessControlSections.Access);
ws.AddAccessRule(new WindowStationAccessRule("LaunchProcessUser",
WindowStationRights.AllAccess, System.Security.AccessControl.AccessControlType.Allow));
ws.AcceptChanges();
IntPtr hDesk = GetThreadDesktop(GetCurrentThreadId());
DesktopSecurity ds = new DesktopSecurity(hDesk,
System.Security.AccessControl.AccessControlSections.Access);
ds.AddAccessRule(new DesktopAccessRule("LaunchProcessUser",
DesktopRights.AllAccess, System.Security.AccessControl.AccessControlType.Allow));
ds.AcceptChanges();
EventLog.WriteEntry("Launching application.", EventLogEntryType.Information);
using (Process process = Process.Start(psi))
{
}
``` | *Based on the [answer by @StephenMartin](https://stackoverflow.com/q/677874/850848#678449).*
A new process launched using the `Process` class runs in the same window station and desktop as the launching process. If you are running the new process using different credentials, then the new process won't have permissions to access the window station and desktop, which results in errors like 0xC0000142.
The following is a "compact" standalone piece of code to grant a user access to the current window station and desktop. It does not require the AsproLock library.
Call the `GrantAccessToWindowStationAndDesktop` method with the username you use to run the `Process` (`Process.StartInfo.UserName`), before calling `Process.Start`.
```
public static void GrantAccessToWindowStationAndDesktop(string username)
{
IntPtr handle;
const int WindowStationAllAccess = 0x000f037f;
handle = GetProcessWindowStation();
GrantAccess(username, handle, WindowStationAllAccess);
const int DesktopRightsAllAccess = 0x000f01ff;
handle = GetThreadDesktop(GetCurrentThreadId());
GrantAccess(username, handle, DesktopRightsAllAccess);
}
private static void GrantAccess(string username, IntPtr handle, int accessMask)
{
SafeHandle safeHandle = new NoopSafeHandle(handle);
GenericSecurity security =
new GenericSecurity(
false, ResourceType.WindowObject, safeHandle,
AccessControlSections.Access);
security.AddAccessRule(
new GenericAccessRule(
new NTAccount(username), accessMask, AccessControlType.Allow));
security.Persist(safeHandle, AccessControlSections.Access);
}
[DllImport("user32.dll", SetLastError = true)]
private static extern IntPtr GetProcessWindowStation();
[DllImport("user32.dll", SetLastError = true)]
private static extern IntPtr GetThreadDesktop(int dwThreadId);
[DllImport("kernel32.dll", SetLastError = true)]
private static extern int GetCurrentThreadId();
// All the code to manipulate a security object is available in .NET framework,
// but its API tries to be type-safe and handle-safe, enforcing a special
// implementation (to an otherwise generic WinAPI) for each handle type.
// This is to make sure only a correct set of permissions can be set
// for corresponding object types and mainly that handles do not leak.
// Hence the AccessRule and the NativeObjectSecurity classes are abstract.
// This is the simplest possible implementation that yet allows us to make use
// of the existing .NET implementation, sparing necessity to
// P/Invoke the underlying WinAPI.
private class GenericAccessRule : AccessRule
{
public GenericAccessRule(
IdentityReference identity, int accessMask, AccessControlType type) :
base(identity, accessMask, false, InheritanceFlags.None,
PropagationFlags.None, type)
{
}
}
private class GenericSecurity : NativeObjectSecurity
{
public GenericSecurity(
bool isContainer, ResourceType resType, SafeHandle objectHandle,
AccessControlSections sectionsRequested)
: base(isContainer, resType, objectHandle, sectionsRequested)
{
}
new public void Persist(
SafeHandle handle, AccessControlSections includeSections)
{
base.Persist(handle, includeSections);
}
new public void AddAccessRule(AccessRule rule)
{
base.AddAccessRule(rule);
}
#region NativeObjectSecurity Abstract Method Overrides
public override Type AccessRightType
{
get { throw new NotImplementedException(); }
}
public override AccessRule AccessRuleFactory(
System.Security.Principal.IdentityReference identityReference,
int accessMask, bool isInherited, InheritanceFlags inheritanceFlags,
PropagationFlags propagationFlags, AccessControlType type)
{
throw new NotImplementedException();
}
public override Type AccessRuleType
{
get { return typeof(AccessRule); }
}
public override AuditRule AuditRuleFactory(
System.Security.Principal.IdentityReference identityReference,
int accessMask, bool isInherited, InheritanceFlags inheritanceFlags,
PropagationFlags propagationFlags, AuditFlags flags)
{
throw new NotImplementedException();
}
public override Type AuditRuleType
{
get { return typeof(AuditRule); }
}
#endregion
}
// Handles returned by GetProcessWindowStation and GetThreadDesktop
// should not be closed
private class NoopSafeHandle : SafeHandle
{
public NoopSafeHandle(IntPtr handle) :
base(handle, false)
{
}
public override bool IsInvalid
{
get { return false; }
}
protected override bool ReleaseHandle()
{
return true;
}
}
``` | Starting a process with credentials from a Windows Service | [
"",
"c#",
".net",
"windows-services",
""
] |
I downloaded zthreads (found here: <http://zthread.sourceforge.net/>) and tried to compile but I get this error from make:
```
MutexImpl.h:156: error: there are no arguments to 'ownerAcquired' that depend on a template parameter, so a declaration of 'ownerAcquired' must be available
MutexImpl.h:156: error: (if you use '-fpermissive', G++ will accept your code, but allowing the use of an undeclared name is deprecated)
```
and then after that for every function in that source file I get this kind of error:
```
MutexImpl.h:167: error: there are no arguments to 'function' that depend on a template parameter, so a declaration of 'function' must be available
```
So I'm guessing it's a makefile error but I'm not sure how to tell make to tell g++ to compile the files with -fpermissive. Does anyone know how to put that into the makefile (if that is the problem)? | CXXFLAGS += -fpermissive | Standard gmake convention is to use the `CXXFLAGS` variable to pass options to the C++ compiler. You can take advantage of that fact as well as a feature called "command-line overrides" to get your extra flag tacked onto the flags passed to g++ by invoking gmake this way:
```
make CXXFLAGS+=-fpermissive
```
I downloaded the source myself to verify that this works and found that it does, although there are still a bunch of other warnings emitted. You may wish to log a bug for these issues if you intend to continue using the library.
Hope this helps,
Eric Melski | compiling zthreads | [
"",
"c++",
"makefile",
""
] |
I am trying to sort a List like this:
```
public void Example()
{
string target = "hello";
List<string> myStings = new List<string>();
myStings.Add("babab");
myStings.Add("Helll");
myStings.Add("atest");
myStings.OrderBy(each => Distance(each, target));
}
public int Distance(string stringA, string stringB)
{
// Whatever
}
```
The problem is that the list doesn't get ordered and the Distance method doesn't get fired (I put a breakpoint in there but it doesn't get hit).
Any help appreciated! | This is because of **[Linq's Deferred Execution](http://blogs.msdn.com/charlie/archive/2007/12/09/deferred-execution.aspx)**
> In LINQ, execution of a query is
> usually deferred until the moment when
> you actually request the data.
So, to see your method working, apply the `ToList()` method to your `IOrderedEnumerable` so that you will **actually be requesting the data**, and thus the execution takes place.
```
myStings = myStings.OrderBy(each => Distance(each, target)).ToList();
``` | myStings = myStings.OrderBy(each => Distance(each, target)).ToList(); | List ordering - Extension method not firing! | [
"",
"c#",
"linq",
"extension-methods",
""
] |
I seem to be forgetting my C++ ...
I'm trying to declare some functions in C in separate sources, and including the appropriate `.h` when necessary. It compiles OK; but the problem is during linking, where the linker complains about functions already being defined.
I even tried defining the functions as `extern`, in a (vain) attempt to just declare the functions and let the implementation come true only on the `.c`.
This is an abridged description of my code:
**common.h**
```
#include <stdio.h>
```
**module1.h**
```
#include "common.h"
#ifndef MODULE1_H_
#define MODULE1_H_
int init(int option);
int open(char* db, char* username, char* password);
int get(int handler, int date[2], int time[2], int* data, int& rowsize, int& numrows);
int put(int handler, int* data, int& datasize, int& numrows);
int close(int handler);
int finalize();
#endif /* MODULE1_H_ */
```
**module2.h**
```
#include "common.h"
#ifndef MODULE2_H_
#define MODULE2_H_
int get1(int handler, int date, int time, int *data, int& datasize, int& rowsize);
int put1(int handler, int* data, int datasize);
#endif /*MODULE2_H_*/
```
**module1.cpp**
```
#include "module1.h"
int init(int option) { ... }
int finalize() { ... }
int get(int handler, int date[2], int time[2], int* data, int& rowsize, int& numrows) {
....
}
...
```
**module2.cpp**
```
#include "module1.h"
#include "module2.h"
int get1(int handler, int date, int time, int* data, int rowsize) {
int daterange[2]={date,date};
int timerange[2]={time,time};
int rsize, numrows, result;
result=get(handler, daterange,timerange, data, rsize, numrows);
rowsize=rsize;
if(numrows!=1) printf("Uh oh...\n");
return result;
}
...
```
**Compilation & linkage:**
```
g++ -o module1.o -c module1.cpp
g++ -o module2.o -c module2.cpp
g++ -fPIC -shared -o library.so module1.o module2.o
```
As I said, it compiles OK. The problem is during linkage, where the linker "sees" that there are two implemented functions from `module1.h`: one from the inclusion of `module1.h` in `module1.cpp`; and the other from the inclusion of `module1.h` together with `module2.h` in `module2.cpp`.
I know the functions are supposed to be declared, but I'm obviously doing a faux-pas.
Could someone please point out where it is? Thank you in advance. | Thank you all for your replies and comments. I figured out the problem (it turned out to be a very stupid thing) and am close to solving it (hopefully).
It turns out it comes from another include file ([cfortran.h](http://www-zeus.desy.de/~burow/cfortran/)) which implements a layer for using C function calls in Fortran (and vice-versa). It's very useful and I've been using it with success up until now, but I was "blinded" by the errors; that include's documentation states that care should be taken when using it in C++ context (not in C) but this is the first instance where that warning actually produces effects.
Thank you once more for your help. | Your function names need to be changed. There are already functions with some of those names you mentioned (Example: [open](http://www.manpagez.com/man/2/open/)) and they are probably being included by something you are linking to. | C++ multiple declaration of function error when linking | [
"",
"c++",
"function",
"declaration",
""
] |
In maven 2.x, how would one set a plugin's property on the command line instead of in the <configuration> of that plugin in the pom or in settings.xml?
For example, if I was using `mvn dependency:copy-dependencies`([seen here](http://maven.apache.org/plugins/maven-dependency-plugin/copy-dependencies-mojo.html)) how can I set the useRepositoryLayout property without touching either the pom or my settings.xml?
Thanks! | Answer was right in front of me in the copy-dependencies mojo docs (I even linked to it). The documentation for the property includes the Expression you can refer to it by.
> useRepositoryLayout: Place each
> artifact in the same directory layout
> as a default repository. example:
> /outputDirectory/junit/junit/3.8.1/junit-3.8.1.jar
>
> ```
> * Type: boolean
> * Since: 2.0-alpha-2
> * Required: No
> * Expression: ${mdep.useRepositoryLayout}
> * Default: false
> ```
To set this property from command line you need to run
```
mvn -Dmdep.useRepositoryLayout=true <goals go here>
``` | Define the properties as arbitrary properties ... not the standard maven props such as version. In my case I defined a new property build.version:
```
<properties><build.version>unknown</build.version></properties>
```
I use the property:
```
<warName>${build.version}</warName>
```
I define the property:
```
mvn -P prod -Dbuild.version=app_name-branch_name-build_number package
``` | Set plugin's property on the command line in maven 2 | [
"",
"java",
"maven-2",
"command-line",
""
] |
I have a string representing a stand-alone (and valid XHTML 1.0 Strict) HTML document, something like
```
var html = "<?xml ... <!DOCTYPE ... <html><head><style>...</style></head>
<body><table>...</table></body></html>";
```
The body of this HTML document contains a table whose CSS style is described in the head of the HTML document.
I also have a DOM-tree of another HTML document.
How can I include into this DOM-tree the DOM-tree of the table with the correct style (as described in the HTML-string)?
I am especially interested in a jQuery-based solution.
**EDIT**: To be more concrete, an example of an HTML-string that I'm talking about
is embedded into [this XML-document](http://attempto.ifi.uzh.ch/ws/ape/apews.perl?text=Every+man+is+a+human.&cdrshtml=on). | I may be waaaay missing the point, but why not load the string into an IFRAME to render - this solves all the problems of having two separate DOM trees, and two separate sets of CSS rules, to me.
This code will do it:
```
$(function() {
var $frame = $('<iframe style="width:200px; height:100px; border: 0px;">');
$('body').html( $frame );
setTimeout( function() {
var doc = $frame[0].contentWindow.document;
var $body = $('body',doc);
$body.html(your_html);
}, 1 );
});
```
(which I lifted from here: [putting html inside an iframe (using javascript)](https://stackoverflow.com/questions/620881))
Then if you are concerned about the size of the IFRAME, you can set it with:
```
$frame[0].style.height = $frame[0].contentWindow.document.body.offsetHeight + 'px';
$frame[0].style.width = $frame[0].contentWindow.document.body.offsetWidth + 'px';
``` | There's no sense in having two full DOM trees on the same page, so you'll want to extract out what you want and only use that.
Convert the string to a jQuery object and parse out what you need like this:
```
var html = "<html><head><style>...</style></head>
<body><table>...</table></body></html>";
// Not sure if you are a trying to merge to DOMs together or just
// trying to place what you want on the page so, I'm going to go
// with the former as it may be more tricky.
var other_html = "<html><head><style>...</style></head>
<body><!-- some stuff --></body></html>";
var jq_html = $(html);
var jq_other_html = $(other_html);
var style = jq_html.find('head').find('style');
var table_and_stuff = jq_html.find('body').html();
jq_other_html.find('head').append(style).end().append(table_and_stuff);
```
That should probably do it. The CSS should automatically be recognized by the browser once it's inserted into the page.
**NOTE:**
For the new CSS style sheet to only add new styles and not override your current one(s), you must *prepend* the sheet to the head tag and not *append* it. So, the last line would need to be like this instead:
```
jq_other_html.find('head').prepend(style).end().append(table_and_stuff);
``` | Including the rendering of an HTML document into another HTML document | [
"",
"javascript",
"jquery",
"html",
"css",
""
] |
There are about 2000 lines of this, so doing it manually would probably take more work than figuring out a way to do this programmatically. It only needs to work once so I'm not concerned with performance or anything.
```
<tr><td>Canada (CA)</td><td>Alberta (AB)</td></tr>
<tr><td>Canada (CA)</td><td>British Columbia (BC)</td></tr>
<tr><td>Canada (CA)</td><td>Manitoba (MB)</td></tr>
```
Basically it's formatted like this, and I need to divide it into 4 parts: Country Name, Country Abbreviation, Division Name and Division Abbreviation.
In keeping with my complete lack of efficiency I was planning just to do a string.Replace on the HTML tags after I broke them up and then just finding the index of the opening brackets and grabbing the space delimited strings that are remaining. Then I realized I have no way of keeping track of which is the country and which is the division, as well as figuring out how to group them by country.
So is there a better way to do this? Or better yet, an easier way to populate a database with Country and Provinces/States? I looked around SO and the only readily available databases I can find don't provide the full name of the countries or the provinces/states, or use IPs instead of geographic names. | 1. Paste it into a spreadsheet. Some spreadsheets will parse the HTML table for you.
2. Save it as a .CSV file and process it that way. **Or**. Add a column to the spreadsheet that says something like the following:
    ="INSERT INTO COUNTRY(CODE,NAME) VALUES ('" & A1 & "','" & B1 & "');"
Then you have a column of INSERT statements that you can cut, paste and execute.
---
**Edit**
Be sure to include the `<table>` tag when pasting into a spreadsheet.
```
<table><tr><th>country</th><th>name></th></tr>
<tr><td>Canada (CA)</td><td>Alberta (AB)</td></tr>
<tr><td>Canada (CA)</td><td>British Columbia (BC)</td></tr>
<tr><td>Canada (CA)</td><td>Manitoba (MB)</td></tr>
</table>
```
Processing a CSV file requires almost no parsing. It's got quotes and commas. Much easier to live with than XML/HTML. | ```
/<tr><td>([^(]+?)\s*\(([^)]+)\)<\/td><td>([^(]+?)\s*\(([^)]+)\)<\/td><\/tr>/
```
Then you should have 4 captures with the 4 pieces of data from any PCRE engine :)
Alternatively, something like <http://jacksleight.com/assets/blog/really-shiny/scripts/table-extractor.txt> provides more completeness. | I need to parse an HTML formatted country list into SQL inserts. Is there an easier way to do this? | [
"",
"c#",
"database",
"parsing",
""
] |
When implementing something that implements IDictionary, what should I unit test?
It seems to be overkill to test the entire interface, but then what do I know? I have only been unit testing for a few days... | Every public member of your IDictionary should be tested. You should also set up some tests to ensure that your IDictionary behaves the same as some other concrete implementation of an IDictionary. In fact, you could structure most of your tests like that:
```
void Test_IDictionary_Add(IDictionary a, IDictionary b)
{
    object key = "Key1", badKey = 87;   // object works with the non-generic IDictionary
    object value = 9;

    a.Add(key, value);
    b.Add(key, value);
    Assert.That(a.Count, Is.EqualTo(b.Count));
    // The non-generic IDictionary exposes Contains (by key), Count and the indexer.
    Assert.That(a.Contains(key), Is.EqualTo(b.Contains(key)));
    Assert.That(a[key], Is.EqualTo(b[key]));
    Assert.That(a.Contains(badKey), Is.EqualTo(b.Contains(badKey)));
// ... and so on and so forth
}
[Test]
void MyDictionary_Add()
{
Test_IDictionary_Add(new MyDictionary(), new Hashtable());
}
``` | You should test everything you implement. If you have code that is called through the public interface, there should be a unit test that supports that. If you write the code, it is worth testing.
The exception might be simple properties ( Property {get; set;} ) | How to Unit Test an Implementation of IDictionary | [
"",
"c#",
"unit-testing",
""
] |
I'd like to save my files using the primary key of the entry.
Here is my code:
```
def get_nzb_filename(instance, filename):
if not instance.pk:
instance.save() # Does not work.
name_slug = re.sub('[^a-zA-Z0-9]', '-', instance.name).strip('-').lower()
name_slug = re.sub('[-]+', '-', name_slug)
return u'files/%s_%s.nzb' % (instance.pk, name_slug)
class File(models.Model):
nzb = models.FileField(upload_to=get_nzb_filename)
name = models.CharField(max_length=256)
```
I know the first time an object is saved the primary key isn't available, so I'm willing to take the extra hit to save the object just to get the primary key, and then continue on.
The above code doesn't work. It throws the following error:
```
maximum recursion depth exceeded while calling a Python object
```
I'm assuming this is an infinite loop. Calling the `save` method would call the `get_nzb_filename` method, which would again call the `save` method, and so on.
I'm using the latest version of the Django trunk.
How can I get the primary key so I can use it to save my uploaded files?
---
**Update @muhuk:**
I like your solution. Can you help me implement it? I've updated my code to the following and the error is `'File' object has no attribute 'create'`. Perhaps I'm using what you've written out of context?
```
def create_with_pk(self):
instance = self.create()
instance.save()
return instance
def get_nzb_filename(instance, filename):
if not instance.pk:
create_with_pk(instance)
name_slug = re.sub('[^a-zA-Z0-9]', '-', instance.name).strip('-').lower()
name_slug = re.sub('[-]+', '-', name_slug)
return u'files/%s_%s.nzb' % (instance.pk, name_slug)
class File(models.Model):
nzb = models.FileField(upload_to=get_nzb_filename, blank=True, null=True)
name = models.CharField(max_length=256)
```
Instead of enforcing the required field in my model I'll do it in my Form class. No problem. | It seems you'll need to pre-generate your `File` models with empty file fields first. Then pick up one and save it with the given file object.
You can have a custom manager method like this:
```
def create_with_pk(self):
instance = self.create()
instance.save() # probably this line is unneeded
return instance
```
But this will be troublesome if either of your fields is required. Because you are initially creating a null object, you can't enforce required fields on the model level.
### EDIT
`create_with_pk` is supposed to be a [custom manager method](http://docs.djangoproject.com/en/dev/topics/db/managers/#adding-extra-manager-methods), in your code it is just a regular method. Hence `self` is meaningless. It is all properly [documented](http://docs.djangoproject.com/en/dev/topics/db/managers/) with examples. | You can do this by setting `upload_to` to a temporary location and by creating a custom save method.
The save method should call super first, to generate the primary key (this will save the file to the temporary location). Then you can rename the file using the primary key and move it to its proper location. Call super one more time to save the changes and you are good to go! This worked well for me when I came across this exact issue.
For example:
```
class File( models.Model ):
nzb = models.FileField( upload_to='temp' )
def save( self, *args, **kwargs ):
# Call save first, to create a primary key
super( File, self ).save( *args, **kwargs )
nzb = self.nzb
if nzb:
# Create new filename, using primary key and file extension
oldfile = self.nzb.name
dot = oldfile.rfind( '.' )
newfile = str( self.pk ) + oldfile[dot:]
# Create new file and remove old one
if newfile != oldfile:
self.nzb.storage.delete( newfile )
self.nzb.storage.save( newfile, nzb )
self.nzb.name = newfile
self.nzb.close()
self.nzb.storage.delete( oldfile )
# Save again to keep changes
super( File, self ).save( *args, **kwargs )
``` | Django: Access primary key in models.filefield(upload_to) location | [
"",
"python",
"django",
"file-io",
""
] |
The following code does not produce an exception but instead passes the value 4 to tst. Can anyone explain the reason behind this?
```
public enum testing
{
a = 1,
b = 2,
c = 3
}
testing tst = (testing)(4);
``` | In C#, unlike Java, enums are not checked. You can have any value of the underlying type. This is why it's pretty important to check your input.
```
if(!Enum.IsDefined(typeof(MyEnum), value))
throw new ArgumentOutOfRangeException();
``` | Since your enum is based on int, it can accept any value that int can. And since you are telling the compiler explicitly (by casting) that it is ok to cast 4 into your enum it does so. | Casting an out-of-range number to an enum in C# does not produce an exception | [
"",
"c#",
"exception",
"enums",
"casting",
""
] |
How can I access events added with `attachEvent()` / `addEventListener()` in JavaScript?
Use case: debug events using FireBug's console. | There is no way to access them.
Depending on what you're trying to achieve, a better way to debug the events might be to output the [event properties](http://www.quirksmode.org/js/events_properties.html) you're interested in from the event handler function... | If you always add and remove handlers with a custom method, you can maintain a log of them in the same method. It adds some overhead to do so.
For example, here is a piece that concerns IE-
```
//Run=window.Run || {Shadow:{},nextid:0};
else if(window.attachEvent){
Run.handler= function(who, what, fun){
if(who.attachEvent){
who.attachEvent('on'+what, fun);
var hoo=who.id || who.tagName+(++Run.nextid);
if(!Run.Shadow[hoo])Run.Shadow[hoo]={};
if(!Run.Shadow[hoo][what])Run.Shadow[hoo][what]=[];
Run.Shadow[hoo][what].push(fun);
}
}
}
``` | Access events added with attachEvent() / addEventListener() in JavaScript | [
"",
"javascript",
"events",
""
] |
I've been a Windows user since forever, and now I need Linux to create an application using Mono. Which Linux distribution is best for me? I will use it in a virtual machine. | Mono is primarily written on and tested on openSUSE. The packages we release are for openSUSE. In fact, we release a VMWare Image of openSUSE with the new version of Mono all set up and ready to go:
<http://www.go-mono.com/mono-downloads/download.html>
Having said that, we have a great community of people who work to ensure Mono runs well and is packaged on all the major distributions, such as Fedora and Ubuntu. | No distribution is generally better than any other. Download the live CDs of the distributions from the net, run them in your VM and use the one you like best. | Which Linux distribution is best for developing a Mono application in a virtual machine? | [
"",
"c#",
"linux",
"mono",
""
] |
Is there a way to handle error reporting in a centralized manner in PHP? I'd like to be notified of all errors raised by my application by email.
What's the best way to achieve this for a whole application?
Thanks | As Kalium mentioned, you'll want to be using [`set_error_handler`](http://ie.php.net/manual/en/function.set-error-handler.php), but that only gets you part of the way. You also want to trap uncaught exceptions. For that, use [`set_exception_handler`](http://ie.php.net/manual/en/function.set-exception-handler.php).
What I do myself is have my code trap errors, and convert those errors to appropriate exceptions that I can catch elsewhere. For anything that isn't caught, I use the default exception handler to log and report them, and to display an appropriate error if necessary. | You can use [set\_error\_handler](https://www.php.net/manual/en/function.set-error-handler.php) to handle errors at runtime any way you like. | Centralized error reporting in PHP | [
"",
"php",
"error-reporting",
""
] |
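The centralise-then-notify pattern from the accepted answer maps onto other languages too. A rough Python analogue, using `sys.excepthook` in place of PHP's `set_exception_handler`; the `notify` function is a stand-in for the e-mail step, not a real API:

```python
import sys
import traceback

def notify(report):
    # Stand-in for the e-mail step; a real application would send
    # mail here instead of printing.
    print("would e-mail:", report.strip().splitlines()[-1])

def central_handler(exc_type, exc, tb):
    # One funnel for every uncaught exception, mirroring the role of
    # set_exception_handler in the answer above.
    notify("".join(traceback.format_exception(exc_type, exc, tb)))

sys.excepthook = central_handler

# Demonstrate by invoking the hook directly (letting the exception
# escape would end the script):
try:
    1 / 0
except ZeroDivisionError:
    central_handler(*sys.exc_info())
```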
Let's say you want the last element of a Python list: what is the difference between
```
myList[-1:][0]
```
and
```
myList[len(myList)-1]
```
I thought there was no difference but then I tried this
```
>>> list = [0]
>>> list[-1:][0]
0
>>> list[-1:][0] += 1
>>> list
[0]
>>> list[len(list)-1] += 1
>>> list
[1]
```
I was a little surprised... | If you use slicing (`[-1:]`), the returned list is a shallow copy, not a reference, so `[-1:][0]` modifies the new list. `[len(list)-1]` is a reference to the last object. | `list[-1:]` creates a new list. To get the same behaviour as `list[len(list)-1]` it would have to return a view of some kind of `list`, but as I said, it creates a new temporary list. You then proceed to edit the temporary list.
Anyway, you know you can use `list[-1]` for the same thing, right? | What's the difference between list[-1:][0] and list[len(list)-1]? | [
"",
"python",
"list",
"slice",
""
] |
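The behaviour in the question above is easy to verify: slicing builds a new list, so the `+=` lands on a throwaway copy, while indexing targets the original. A quick check, using `lst` instead of shadowing the built-in `list` as the question's session does:

```python
lst = [0]

# Slicing always builds a new list object.
assert lst[-1:] == [0] and lst[-1:] is not lst

lst[-1:][0] += 1        # += lands on a throwaway copy
print(lst)              # [0]

lst[len(lst) - 1] += 1  # indexing targets the original list
print(lst)              # [1]

lst[-1] += 1            # the idiomatic spelling does too
print(lst)              # [2]
```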
I'll try to explain my case as well as I can.
I'm making a website where you can find topics by browsing their tags. Nothing strange there. I'm having a tricky time with some of the queries, though. They might be easy for you; my mind is pretty messed up from doing a lot of work :P.
I have the tables "topics" and "tags". They are joined using the table tags\_topics, which contains topic\_id and tag\_id. When the user wants to find a topic they might first select one tag to filter by, and then add another one to the filter. Then I make a query for fetching all topics that have both of the selected tags. They might also have other tags, but they MUST have those tags chosen to filter by. The number of tags to filter by differs, but we always have a list of user-selected tags to filter by.
This was mostly answered in [Filtering from join-table](https://stackoverflow.com/questions/648308/filtering-from-join-table) and i went for the multiple joins-solution.
Now I need to fetch the tags that the user can filter by. So if we already have a defined filter of 2 tags, I need to fetch all tags but those in the filter that are associated with topics that include all the tags in the filter. This might sound weird, so I'll give a practical example :P
Let's say we have three topics: tennis, gym and golf.
* tennis has tags: sport, ball, court and racket
* gym has tags: sport, training and muscles
* golf has tags: sport, ball, stick and outside
1. User selects tag sport, so we show all three topics (tennis, gym and golf), and we show ball, court, racket, training, muscles, stick and outside as other possible filters.
2. User now adds ball to the filter. Filter is now sport and ball, so we show the topics tennis and golf, with court, racket, stick and outside as additional possible filters.
3. User now adds court to the filter, so we show tennis, with racket as the only additional possible filter.
I hope I'm making some sense. By the way, I'm using MySQL. | ```
SELECT DISTINCT `tags`.`tag`
FROM `tags`
LEFT JOIN `tags_topics` ON `tags`.`id` = `tags_topics`.`tag_id`
LEFT JOIN `topics` ON `tags_topics`.`topic_id` = `topics`.`id`
LEFT JOIN `tags_topics` AS `tt1` ON `tt1`.`topic_id` = `topics`.`id`
LEFT JOIN `tags` AS `t1` ON `t1`.`id` = `tt1`.`tag_id`
LEFT JOIN `tags_topics` AS `tt2` ON `tt2`.`topic_id` = `topics`.`id`
LEFT JOIN `tags` AS `t2` ON `t2`.`id` = `tt2`.`tag_id`
LEFT JOIN `tags_topics` AS `tt3` ON `tt3`.`topic_id` = `topics`.`id`
LEFT JOIN `tags` AS `t3` ON `t3`.`id` = `tt3`.`tag_id`
WHERE `t1`.`tag` = 'tag1'
AND `t2`.`tag` = 'tag2'
AND `t3`.`tag` = 'tag3'
AND `tags`.`tag` NOT IN ('tag1', 'tag2', 'tag3')
``` | ```
SELECT topic_id
FROM topic_tag
WHERE tag_id = 1
OR tag_id = 2
OR tag_id = 3
GROUP BY topic_id
HAVING COUNT(topic_id) = 3;
```
The above query will get all topic\_ids that have all three tag\_ids of 1, 2 and 3. Then use this as a subquery:
```
SELECT tag_name
FROM tag
INNER JOIN topic_tag
ON tag.tag_id = topic_tag.tag_id
WHERE topic_id IN
( SELECT topic_id
FROM topic_tag
WHERE tag_id = 1
OR tag_id = 2
OR tag_id = 3
GROUP BY topic_id
HAVING COUNT(topic_id) = 3
)
AND tag.tag_id NOT IN (1, 2, 3) -- exclude the tags already in the filter
```
I think this is what you are looking for. | Fetch fields from a table that has the same relation to another table | [
"",
"sql",
"mysql",
"join",
"filter",
"tags",
""
] |
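The `GROUP BY ... HAVING COUNT` approach from the second answer can be exercised end-to-end against the question's tennis/gym/golf data with an in-memory SQLite database. Schema names follow the question; the `remaining_tags` helper is illustrative, and the query uses `COUNT(DISTINCT ...)` to stay safe if a tag is attached twice:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tags (id INTEGER PRIMARY KEY, tag TEXT);
    CREATE TABLE topics (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE tags_topics (topic_id INTEGER, tag_id INTEGER);
""")

tags = ["sport", "ball", "court", "racket", "training",
        "muscles", "stick", "outside"]
conn.executemany("INSERT INTO tags (id, tag) VALUES (?, ?)",
                 list(enumerate(tags, start=1)))

topics = {
    1: ("tennis", ["sport", "ball", "court", "racket"]),
    2: ("gym", ["sport", "training", "muscles"]),
    3: ("golf", ["sport", "ball", "stick", "outside"]),
}
for topic_id, (title, topic_tags) in topics.items():
    conn.execute("INSERT INTO topics (id, title) VALUES (?, ?)",
                 (topic_id, title))
    for t in topic_tags:
        conn.execute(
            "INSERT INTO tags_topics (topic_id, tag_id) VALUES (?, ?)",
            (topic_id, tags.index(t) + 1),
        )

def remaining_tags(selected):
    """Tags (beyond `selected`) on topics carrying ALL selected tags."""
    ph = ",".join("?" * len(selected))
    sql = f"""
        SELECT DISTINCT t.tag
        FROM tags t
        JOIN tags_topics tt ON tt.tag_id = t.id
        WHERE tt.topic_id IN (
            SELECT topic_id FROM tags_topics
            JOIN tags ON tags.id = tags_topics.tag_id
            WHERE tags.tag IN ({ph})
            GROUP BY topic_id
            HAVING COUNT(DISTINCT tags.tag) = ?
        )
        AND t.tag NOT IN ({ph})
    """
    params = selected + [len(selected)] + selected
    return sorted(row[0] for row in conn.execute(sql, params))

print(remaining_tags(["sport", "ball"]))
# ['court', 'outside', 'racket', 'stick']
```

Running it with the filter `["sport", "ball"]` reproduces step 2 of the question's walkthrough: tennis and golf match, leaving court, racket, stick and outside as additional filters.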
When I do `sizeof(int)` in my C#.NET project I get a return value of 4. I set the project type to x64, so why does it say 4 instead of 8? Is this because I'm running managed code? | There are [various 64-bit data models](http://en.wikipedia.org/wiki/64-bit); Microsoft uses [LP64](https://en.wikipedia.org/wiki/64-bit_computing#64-bit_data_models) for .NET: both *long*s and pointers are 64-bits (although C-style pointers can only be used in C# in `unsafe` contexts or as a `IntPtr` value which cannot be used for pointer-arithmetic). Contrast this with ILP64 where *int*s are also 64-bits.
Thus, on all platforms, `int` is 32-bits and `long` is 64-bits; you can see this in the names of the underlying types [`System.Int32`](https://learn.microsoft.com/en-us/dotnet/api/system.int32) and [`System.Int64`](https://learn.microsoft.com/en-us/dotnet/api/system.int64). | The keyword `int` aliases `System.Int32` which still requires 4 bytes, even on a 64-bit machine. | sizeof(int) on x64? | [
"",
"c#",
"64-bit",
"clr",
"sizeof",
""
] |
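The fixed-width point in the accepted answer can be spot-checked from Python, whose `struct` module separates standard (fixed) sizes from the platform's native data model; only the pointer width below actually depends on whether the process is 32- or 64-bit:

```python
import struct

# Standard-size mode ('<' prefix): 4 and 8 bytes on every platform,
# like .NET's System.Int32 / System.Int64.
print(struct.calcsize("<i"))   # 4
print(struct.calcsize("<q"))   # 8

# Native mode: the platform's data model decides pointer width,
# which is what really changes between x86 and x64 builds.
print(struct.calcsize("P") * 8, "bit pointers")
```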
When running JUnit tests, I always seem to run into this error:
> eclipse outOfMemoryError: heap space
I have monitored Eclipse with JConsole and heap memory peaks at about 150MB. I have set heap memory to 1GB.
I am using the following arguments when starting Eclipse:
```
-vm "C:\Program Files\Java\jre1.5.0_08\bin\javaw.exe" -vmargs -Xmx1024M
-XX:MaxPermSize=128M -Dcom.sun.management.jmxremote.port=8999
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false
```
Does anyone know what may be causing this issue? It happens only when running JUnit tests. | JUnit tests are run in a different VM than the Eclipse IDE, so it is that VM that runs out of memory, not the Eclipse one.
You can change the settings of the test VM in the run configuration of the test:
go to the run configuration and, under the Arguments tab, set the VM arguments. | Further to @Thijs Wouters' response, to fix this issue in Eclipse I did the following:
* Added a new Run configuration under JUnit (Run>Run configuration>JUnit>New)
* Within the Arguments tab, set VM arguments to "-Xms64m -Xmx256m", or higher if need be | Why does heap space run out only when running JUnit tests? | [
"",
"java",
"eclipse",
"junit",
"out-of-memory",
""
] |
I am using the asp.net MVC framework. On my page I have a dropdown box, and when an option is selected I want to go to another page. But I can't find how/where to set the autopostback property to true. This is the code I'm using:
Aspx:
```
<%= Html.DropDownList("qchap", new SelectList( (IEnumerable)ViewData["qchap"], "Id", "Title" )) %>
```
Controller:
```
public ActionResult Index(int id)
{
Chapter c = new Chapter();
ViewData["qchap"] = c.GetAllChaptersByManual(id);
return View();
}
```
What do I have to do to use the autopostback functionality? | You can use the onchange client event:
```
<%= Html.DropDownList("qchap",
new SelectList( (IEnumerable)ViewData["qchap"], "Id", "Title" ),
new { onchange = "this.form.submit();" }) %>
``` | It seems the DropDownList helper method doesn't support this.
Maybe using it within a form with a custom HTML attribute to submit the form would do it. | C# How to set the autopostback property when using asp.net mvc? | [
"",
"c#",
"asp.net-mvc",
"combobox",
"autopostback",
""
] |
How can I make sure the form won't submit if one of the validations is false?
```
$('#form').submit(function(){
validateForm1();
validateForm(document.forms['dpart2']);
validateForm(document.forms['dpart3']);
});
``` | If the function returns false, the form won't be submitted.
```
$('#form').submit(function(){
return validateForm1()
&& validateForm(document.forms['dpart2'])
&& validateForm(document.forms['dpart3']);
});
``` | ```
$('#form').submit(function(){
return (validateForm1() &&
validateForm(document.forms['dpart2']) &&
validateForm(document.forms['dpart3']))
});
```
Basically, you return false in the event handler function. | How to not submit a form if validation is false | [
"",
"javascript",
"jquery",
"submit",
""
] |