| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I am planning to start an application which converts the speech to text in Linux.
Are there any existing interfaces so that I can extend them?
Or is there any such existing application in Linux?
Any inputs on this?
EDIT: The application that I am planning to write should be able to convert every word that we speak to text, not just Yes/No. | Well, this is quite an undertaking and without saying what technology you want to use, here are some links:
* [Speech Recognition](http://en.wikipedia.org/wiki/Speech_recognition) on Wikipedia
* [Java Speech API](http://java.sun.com/products/java-media/speech/)
* [W3C Speech Recognition Grammar Specification](http://www.w3.org/TR/speech-grammar/)
* [Sphinx](http://cmusphinx.sourceforge.net/sphinx4/) - An open source recognition engine written in Java
Good luck. With more detail, we may be able to provide better answers. For example, there's a big difference between "yes/no" call center-style recognition vs. even partial natural language understanding. | Dave's suggestions are a great start. Sphinx is very nifty.
I just want to add that you should be as probabilistic as possible. As a one-time linguist and even earlier one-time phonology buff, I can confidently say don't get caught up with linguistic models. Let's not forget the oft misattributed "every time I fire a linguist my accuracy goes up". It's really about the model and its capabilities to account for noise and variation rather than anything a liberal arts major from MIT has to say.
A good book to pick up would be Jurafsky and Martin's "Speech and Language Processing". It has some very useful applications of computational models for the task. Harvey Sussman's work on linear correlates in the F2 slopes for a variety of vowels (starting with barn owls and working its way towards humans) seems like it would be a nice thing to implement in a model one of these days. | Speech to text conversion in Linux | [
"c++",
"linux",
"interface",
"speech-recognition"
] |
There are lots of IDEs for PHP development, but I'm curious about all the OTHER brilliant little tools and apps and websites that people use every day to make PHP development a little easier. | I have a whole list of tools that I can't live without:
* **Eclipse with PHP Development Tools (or Zend Studio for Eclipse)** - Both of these are great IDEs with awesome PHP editing features. They also both give CSS and JavaScript (JS) editing capabilities now. With the existing Eclipse extensions, you can get a ton of functionality out of the box.
* **Notepad++** - Great little text editor with syntax highlighting and tabs. I use this for quick edits and editing huge text files that slow down eclipse.
* **Filezilla** - If you're on Windows, then I suggest Filezilla as the FTP application of choice. It works great, is relatively fast, and is free.
* **PuTTY** - You'll need to SSH into remote servers sometimes. Use this for your terminal commands. I use it every single day.
* **Browsers** - You're going to need to test so you should have every browser you reasonably can installed on your system. Firefox, Opera, Internet Explorer, Safari, Chrome, etc.
* **Firebug** - The ultimate front end debugging tool. I cannot count the number of times Firebug has helped me decipher problems with my code or a legacy system that I'm working on.
* **FirePHP** - Another great tool. This one hooks into Firebug and allows you to utilize a server side PHP library to generate custom headers that will be output as log messages. You can log arrays, strings, ints, doubles, etc.
* **SVN** - This is an obvious one. If you're not using source control in some way, then you're just not developing safely. On Windows, I use TortoiseSVN except when I'm in my IDE, then I use the built-in SVN support.
* **Redmine** - Awesome project management tool (in RoR) that makes my life infinitely easier. It is very customizable and will help you have a good workflow throughout a project.
* **Harvest** - Hosted time tracking with widgets for most major operating systems. If you're developing, you better be tracking your time so you know where it is going.
* **Adobe CS** - If you're working with designers at all, then they're going to be sending you PSD files that you need to code. If you don't have the Adobe Creative Suite (some version) then you're going to be at a serious disadvantage.
* **jQuery** - I hate writing JavaScript without a library anymore. jQuery is awesome and it makes you feel like you can do anything at any time. Its collection of plugins is similarly amazing and I recommend this JS framework over any other.
That's pretty much all I can think of off the top of my head, but it is pretty much everything I use every day. | I'm surprised no one has mentioned XDebug so far.
Turn on XDebug in your development and enjoy profiling, stack traces and if your IDE supports it, breakpoints.
A most worthy tool. | Which tools should no PHP/Web developer live without? | [
"php",
"development-environment"
] |
In PHP, you can make two variables point to the same data.
```
$a = 'foo';
$b = 'bar';
$a =& $b;
echo $a; // Outputs: bar
echo $b; // Outputs: bar
```
What we are trying to do in Ruby is set `@app_session` to be equal to `session[@current_app[:uid]]`. So we only have to deal with `@app_session` in our app, and everything is automatically saved to the session.
Is there a way to do this in Ruby? After 15 minutes of reading, googling, and asking around here at the office, we're still lost... lol | All variables in Ruby are references to an object.
```
a = b
```
means that a and b point to the same object. That's why when you want to copy a given object, you need to write
```
a = b.dup
```
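The difference matters once the object is mutated; a quick illustrative sketch:

```ruby
a = "hello"
alias_ref = a   # plain assignment: both names refer to the same object
copy = a.dup    # dup allocates a new object with the same contents

a << " world"   # mutate the object in place

puts alias_ref  # => "hello world" (same object as a)
puts copy       # => "hello" (unaffected by the mutation)
```

Note that `dup` is a shallow copy, so objects referenced *inside* the copied object are still shared.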
Writing
```
@app_session = session[@current_app[:uid]]
```
points to the same object so you should be fine.
EDIT: you can verify this this way:
```
irb(main):001:0> a = 1
=> 1
irb(main):002:0> b = a
=> 1
irb(main):004:0> p a.object_id
3
=> nil
irb(main):005:0> p b.object_id
3
``` | Variables and constants are pointers to memory locations. Ruby does this by default, so you don't have to "emulate" the PHP behaviour by hand. Evidence:
```
a = "hi"
b = a
a.upcase!
puts a
# => "HI"
puts b
# => "HI"
```
As for your `@app_session` question, I'd do something like this:
```
class ApplicationController < ActionController::Base
def app_session
session[@current_app[:uid]]
end
end
```
This lets you call the method 'app\_session', so that you can easily change the implementation later on. (Say, if you find out that you need to use a constant instead of an instance variable, or whatever.) | Does Ruby support var references like PHP? | [
"php",
"ruby",
"reference"
] |
How to decrease the number of possible cache misses when designing a C++ program?
Does inlining functions help every time? Or is it good only when the program is CPU-bound (i.e. the program is computation-oriented, not I/O-oriented)? | Here are some things that I like to consider when working on this kind of code.
* Consider whether you want "structures of arrays" or "arrays of structures". Which you want to use will depend on each part of the data.
* Try to keep structures to multiples of 32 bytes so they pack cache lines evenly.
* Partition your data into hot and cold elements. If you have an array of objects of class o, and you use o.x, o.y, o.z together frequently but only occasionally need to access o.i, o.j, o.k, then consider putting o.x, o.y, and o.z together and moving the i, j, and k parts to a parallel auxiliary data structure.
* If you have multi-dimensional arrays of data then with the usual row-order layouts, access will be very fast when scanning along the preferred dimension and very slow along the others. Mapping it along a [space-filling](http://en.wikipedia.org/wiki/Hilbert_curve) [curve](http://en.wikipedia.org/wiki/Z-order_(curve)) instead will help to balance access speeds when traversing in any dimension. (Blocking techniques are similar -- they're just Z-order with a larger radix.)
* If you must incur a cache miss, then try to do as much as possible with that data in order to amortize the cost.
* Are you doing anything multi-threaded? Watch out for slowdowns from cache consistency protocols. Pad flags and small counters so that they'll be on separate cache lines.
* SSE on Intel provides some prefetch intrinsics if you know what you'll be accessing far enough ahead of time. | For data bound operations
1. use arrays & vectors over lists, maps & sets
2. process by rows over columns | decreasing cache misses through good design | [
"c++",
"caching",
"inline"
] |
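The hot/cold splitting advice above can be made concrete with a small sketch; all type and field names here are invented for illustration. The idea is to keep the frequently-accessed fields in one tightly packed array and move the rarely-used ones to a parallel array so they never occupy space in the hot cache lines:

```cpp
#include <cstddef>
#include <vector>

// Hot data: touched every iteration; kept small so more elements
// fit in each cache line.
struct Hot {
    float x, y, z;
};

// Cold data: touched only occasionally; lives in a parallel array
// so it never pollutes the cache lines holding Hot.
struct Cold {
    int i, j, k;
};

struct Particles {
    std::vector<Hot>  hot;
    std::vector<Cold> cold;   // cold[n] corresponds to hot[n]
};

// A hot loop now streams through 12-byte elements instead of 24-byte
// ones, roughly doubling the useful data delivered per cache line.
float sum_x(const Particles& p) {
    float s = 0.0f;
    for (std::size_t n = 0; n < p.hot.size(); ++n)
        s += p.hot[n].x;
    return s;
}
```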
I'd like to know if it's possible to determine what **inline styling** has been attributed to an HTML element. I do not need to retrieve the value, but rather just detect if it's been set inline or not.
For instance, if the HTML were:
```
<div id="foo" style="width: 100px; height: 100px; background: green;"></div>
```
How can I determine that `width`, `height`, and `background` have been explicitly declared, inline?
As far as I can tell, the solution can work two ways. I can ask it if a specific attribute is set and it will tell me true or false, or it can tell me all attributes that have been set. Like so:
```
var obj = document.getElementById('foo');
obj.hasInlineStyle('width'); //returns true;
obj.hasInlineStyle('border'); //returns false;
//or
obj.getInlineStyles(); //returns array with values:
// 'width' 'height' and 'background'
```
I'm not interested in css attributes that are inherited via declarations in a stylesheet, only inline styles. One last thing, I have no control over the HTML source.
Thanks | **Updated to work with IE**
You could try something like this
```
function hasInlineStyle(obj, style) {
var attrs = obj.getAttribute('style');
if(attrs === null) return false;
if(typeof attrs == 'object') attrs = attrs.cssText;
var styles = attrs.split(';');
for(var x = 0; x < styles.length; x++) {
var attr = styles[x].split(':')[0].replace(/^\s+|\s+$/g,"").toLowerCase();
if(attr == style) {
return true;
}
}
return false;
}
```
So if you have an element like this:
```
<span id='foo' style='color: #000; line-height:20px;'></span>
```
That also has a stylesheet like this:
```
#foo { background-color: #fff; }
```
The function would return...
```
var foo = document.getElementById('foo');
alert(hasInlineStyle(foo,'color')); // true
alert(hasInlineStyle(foo,'background-color')); // false
``` | The style property of an HTML Element returns a style object which lists all the properties. Any that have a value (other than null or empty string) have been set on the inline style attribute. | How to read inline styling of an element? | [
"javascript",
"css",
"inline",
"styling"
] |
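The string-splitting at the heart of the accepted answer above can be exercised outside the browser by factoring it into a pure function that takes the raw `style` attribute text (`parseInlineStyles` is an invented name for this sketch):

```javascript
// Parse a raw style-attribute string into the list of property names
// declared inline. Pure function: no DOM access required.
function parseInlineStyles(attrText) {
    if (!attrText) return [];
    return attrText
        .split(';')
        .map(function (decl) { return decl.split(':')[0].trim().toLowerCase(); })
        .filter(function (name) { return name.length > 0; });
}

var names = parseInlineStyles('width: 100px; height: 100px; background: green;');
// names is ['width', 'height', 'background']
```

In a browser, the same function can be fed `element.getAttribute('style')` (or the `cssText` fallback for older IE), mirroring the accepted answer.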
The snippet says it all :-)
```
UTF8Encoding enc = new UTF8Encoding(true/*include Byte Order Mark*/);
byte[] data = enc.GetBytes("a");
// data has length 1.
// I expected the BOM to be included. What's up?
``` | You wouldn't want it to be used for *every* call to GetBytes, otherwise you'd have no way of (say) writing a file a line at a time.
By exposing it with [GetPreamble](http://msdn.microsoft.com/en-us/library/system.text.encoding.getpreamble.aspx), callers can insert the preamble just at the appropriate point (i.e. at the start of their data). I agree that the documentation could be a lot clearer though. | Thank you both. The following works, and LINQ makes the combination simple :-)
```
UTF8Encoding enc = new UTF8Encoding(true);
byte[] data = enc.GetBytes("a");
byte[] combo = enc.GetPreamble().Concat(data).ToArray();
``` | Why isn't the Byte Order Mark emitted from UTF8Encoding.GetBytes? | [
"c#",
".net",
"unicode",
"encoding",
"utf-8"
] |
Does anybody know of a fully thread-safe `shared_ptr` implementation? E.g. boost implementation of `shared_ptr` is thread-safe for the targets (refcounting) and also safe for simultaneous `shared_ptr` instance reads, but not writes or for read/write.
(see [Boost docs](http://www.boost.org/doc/libs/1_37_0/libs/smart_ptr/shared_ptr.htm#ThreadSafety), examples 3, 4 and 5).
Is there a shared\_ptr implementation that is fully thread-safe for `shared_ptr` instances?
Strange that boost docs say that:
> shared\_ptr objects offer the same level of thread safety as built-in types.
But if you compare an ordinary pointer (built-in type) to `smart_ptr`, then simultaneous write of an ordinary pointer is thread-safe, but simultaneous write to a `smart_ptr` isn't.
EDIT: I mean a lock-free implementation on x86 architecture.
EDIT2: An example use case for such a smart pointer would be where there are a number of worker threads which update a global shared\_ptr with their current work item and a monitor thread that takes random samples of the work items. The shared-ptr would own the work item until another work item pointer is assigned to it (thereby destroying the previous work item). The monitor would get ownership of the work item (thereby preventing the work item from being destroyed) by assigning it to its own shared-ptr. It can be done with XCHG and manual deletion, but would be nice if a shared-ptr could do it.
Another example is where the global shared-ptr holds a "processor", and is assigned by some thread, and used by some other thread. When the "user" thread sees that the processor shared-ptr is NULL, it uses some alternative logic to do the processing. If it's not NULL, it prevents the processor from being destroyed by assigning it to its own shared-ptr.
Thread 1:
global\_ptr = A;
Thread 2:
global\_ptr = B;
Thread 3:
local\_ptr = global\_ptr;
If we break this down into its constituent operations:
Thread 1:
```
A.refcnt++;
tmp_ptr = exchange(global_ptr, A);
if (!--tmp_ptr.refcnt) delete tmp_ptr;
```
Thread 2:
```
B.refcnt++;
tmp_ptr = exchange(global_ptr, B);
if (!--tmp_ptr.refcnt) delete tmp_ptr;
```
Thread 3:
```
local_ptr = global_ptr;
local_ptr.refcnt++;
```
Clearly, if thread 3 reads the pointer after A's swap, then B goes and deletes it before the reference count can be incremented, bad things will happen.
To handle this, we need a dummy value to be used while thread 3 is doing the refcnt update:
(note: compare\_exchange(variable, expected, new) atomically replaces the value in variable with new if it's currently equal to expected, and returns true if it did so successfully)
Thread 1:
```
A.refcnt++;
tmp_ptr = global_ptr;
while (tmp_ptr == BAD_PTR || !compare_exchange(global_ptr, tmp_ptr, A))
tmp_ptr = global_ptr;
if (!--tmp_ptr.refcnt) delete tmp_ptr;
```
Thread 2:
```
B.refcnt++;
while (tmp_ptr == BAD_PTR || !compare_exchange(global_ptr, tmp_ptr, B))
tmp_ptr = global_ptr;
if (!--tmp_ptr.refcnt) delete tmp_ptr;
```
Thread 3:
```
tmp_ptr = global_ptr;
while (tmp_ptr == BAD_PTR || !compare_exchange(global_ptr, tmp_ptr, BAD_PTR))
tmp_ptr = global_ptr;
local_ptr = tmp_ptr;
local_ptr.refcnt++;
global_ptr = tmp_ptr;
```
You've now had to add a loop, with atomics in it in the middle of your /read/ operation. This is not a good thing - it can be extremely expensive on some CPUs. What's more, you're busy-waiting as well. You can start to get clever with futexes and whatnot - but by that point you've reinvented the lock.
This cost, which has to be borne by every operation, and is very similar in nature to what a lock would give you anyway, is why you generally don't see such thread-safe shared\_ptr implementations. If you need such a thing, I would recommend wrapping a mutex and shared\_ptr into a convenience class to automate locking. | Simultaneous write to a built-in pointer is certainly not thread safe. Consider the implications of writing to the same value with respect to memory barriers if you really want to drive yourself crazy (for instance, you could have two threads thinking the same pointer had different values).
RE: Comment - the reason built-ins aren't double deleting is because they aren't deleting at all (and the implementation of boost::shared\_ptr I use wouldn't double delete, since it uses a special atomic increment and decrement, so it would only single delete, but then the result could have the pointer from one and the ref count of the other. Or pretty much any combination of the two. It would be bad.). The statement in the boost docs is correct as it is, you get the same guarantees as you do with a built-in.
RE: EDIT2 - The first situation you are describing are very different between using built-ins and shared\_ptrs. In one (XCHG and manual delete) there's no reference count; you are assuming you are the one and only owner when you do this. If using shared pointers, you are saying other threads might have ownership, which makes things far more complex. I believe it is possible with a compare-and-swap, but this would be very non-portable.
C++0x is coming out with an atomics library, which should make it much easier to write generic multi-threaded code. You'll probably have to wait till that comes out to see good cross-platform reference implementations of thread-safe smart pointers. | Fully thread-safe shared_ptr implementation | [
"c++",
"boost",
"thread-safety",
"shared-ptr"
] |
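The accepted answer's closing recommendation, wrapping a mutex and a `shared_ptr` into a convenience class, can be sketched as follows using C++11 facilities (the class and member names are invented for illustration):

```cpp
#include <memory>
#include <mutex>

// A shared_ptr "slot" whose load/store operations are serialized by a
// mutex, making concurrent reads and writes of the pointer itself safe.
// (The refcounting inside shared_ptr is already thread-safe on its own.)
template <typename T>
class LockedSharedPtr {
public:
    std::shared_ptr<T> load() const {
        std::lock_guard<std::mutex> lock(mutex_);
        return ptr_;            // copying bumps the refcount under the lock
    }
    void store(std::shared_ptr<T> p) {
        std::lock_guard<std::mutex> lock(mutex_);
        ptr_.swap(p);           // old value ends up in p and is released
    }                           // after the lock, when p is destroyed
private:
    mutable std::mutex mutex_;
    std::shared_ptr<T> ptr_;
};
```

Note that swapping into the parameter means the old object's destructor runs outside the critical section, keeping the lock hold time short. (C++20 later standardized `std::atomic<std::shared_ptr<T>>` for exactly this use case.)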
I get this message when compiling C++ on gcc 4.3
```
error: ‘NULL’ was not declared in this scope
```
It appears and disappears and I don't know why. Why?
Thanks. | `NULL` is not a keyword. It's an identifier defined in some standard headers. You can include
```
#include <cstddef>
```
To have it in scope, along with some other basics like `std::size_t`.
```
void f(int x);
void f(void * ptr);
f(0); // Passes int 0.
f(nullptr); // Passes void * 0.
``` | error: ‘NULL’ was not declared in this scope | [
"c++",
"gcc",
"pointers",
"null"
] |
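The overload-resolution point in the second answer can be checked with a complete program (C++11 or later); the function names here are just for demonstration:

```cpp
#include <cstddef>  // defines NULL, std::size_t and other basics
#include <string>

// Two overloads: one integral, one pointer.
std::string f(int)   { return "int"; }
std::string f(void*) { return "pointer"; }

std::string call_with_zero()    { return f(0); }        // 0 is an int literal
std::string call_with_nullptr() { return f(nullptr); }  // nullptr only converts to pointers
```

Passing `NULL` here may be rejected as ambiguous or silently pick the `int` overload, depending on how the implementation defines `NULL`, which is exactly the ambiguity `nullptr` was introduced to remove.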
I'm making a call:
```
myResult = MakeMyCall(inputParams, out messages);
```
but I don't actually care about the messages. If it was an input parameter I didn't care about I'd just pass in a null. If it was the return I didn't care about I'd just leave it off.
Is there a way to do something similar with an out, or do I need to declare a variable that I will then ignore? | Starting with C# 7.0, you can declare `out` variables inline at the call site, and discard the ones you don't care about with `_`.
```
public void PrintCoordinates(Point p)
{
p.GetCoordinates(out int x, out int y);
WriteLine($"({x}, {y})");
}
public void PrintXCoordinate(Point p)
{
p.GetCoordinates(out int x, out _); // I only care about x
WriteLine($"{x}");
}
```
Source: <https://blogs.msdn.microsoft.com/dotnet/2017/03/09/new-features-in-c-7-0/> | You have to declare a variable which you will then ignore. This is most commonly the case with the TryParse (or TryWhatever) pattern, when it is used to test the validity of user input (e.g. can it be parsed as a number?) without caring about the actual parsed value.
You used the word "dispose" in the question, which I suspect was just unfortunate - but if the out parameter is of a type which implements IDisposable, you should certainly call Dispose unless the method documentation explicitly states that receiving the value doesn't confer ownership. I can't remember ever seeing a method with a disposable `out` parameter though, so I'm hoping this was just an unlucky choice of words. | How to explicitly discard an out argument? | [
"c#",
"out"
] |
I have a model Foo which have a ForeignKey to the User model.
Later, I need to grab all the User's id and put then on a list
```
foos = Foo.objects.filter(...)
l = [ f.user.id for f in foos ]
```
But when I do that, django grabs the whole User instance from the DB instead of giving me just the numeric user's id, which exist in each Foo row.
How can I get all the ids without querying each user or using a select\_related?
Thanks | Use [queryset's values() function](http://docs.djangoproject.com/en/dev/ref/models/querysets/#values-fields), which will return a list of dictionaries containing name/value pairs for each attribute passed as parameters:
```
>>> Foo.objects.all().values('user__id')
[{'user__id': 1}, {'user__id': 2}, {'user__id': 3}]
```
The ORM will then be able to optimize the SQL query to only return the required fields, instead of doing a "SELECT \*". | Whenever you define a ForeignKey in Django, it automatically adds a FIELD\_id field to your model.
For instance, if `Foo` has a FK to `User` with an attribute named "`user`", then you also have an attribute named **user\_id** which contains the id of the related user.
```
l = [ f.user_id for f in foos ]
```
Calling `.values()` also improves performance if you select your attributes wisely | Get foreign key without requesting the whole object | [
"python",
"django",
"django-models"
] |
Here's a simple method with a `foreach` loop:
```
IEnumerable<XElement> FieldsToXElements(object instance)
{
var fieldElements = new List<XElement>();
foreach (var field in instance.GetType().GetFields())
{
fieldElements.Add(new XElement(field.Name, field.GetValue(instance)));
}
return fieldElements;
}
```
Kind of ugly. If there were some operator in LINQ that mean "do something" (e.g. perform an `Action` for each selected in the LINQ statement), it would look nicer, more terse:
```
IEnumerable<XElement> FieldsToXElements(object instance)
{
var fieldElements = new List<XElement>();
from field in instance.GetType().GetFields()
let name = field.Name
let value = field.GetValue(instance)
do fieldElements.Add(new XElement(name, value));
return fieldElements;
}
```
I realize that's subjective, and just my opinion. For a foreach loop that has one line that simply invokes a method, a "`do`" operator makes sense, in my opinion. **But I'm wondering if anyone at MS has thought the same thing. Is such LINQ operator planned in any upcoming releases (e.g. alongside the C# 4.0 debut)?**
Here's another example, with a predicate, where the fictitious `do` operator would really make the code look cleaner. This:
```
IEnumerable<XElement> FieldsToXElements
(object instance, Func<FieldInfo, bool> predicate)
{
var fieldElements = new List<XElement>();
foreach (var field in instance.GetType().GetFields().Where(predicate))
{
fieldElements.Add(new XElement(field.Name, field.GetValue(instance)));
}
return fieldElements;
}
```
vs. this:
```
IEnumerable<XElement> FieldsToXElements
(object instance, Func<FieldInfo, bool> predicate)
{
var fieldElements = new List<XElement>();
from field in instance.GetType().GetFields()
let name = field.Name
let value = field.GetValue(instance)
where predicate(field)
do fieldElements.Add(new XElement(name, value));
return fieldElements;
}
``` | No, I don't expect any direct **language** suport (i.e. inside the query syntax) any time soon.
It sounds like you mean the mythical `ForEach` extension method; trivial to add, but Eric Lippert has commented numerous times about the cross between side-effect-free functional code and `Action<T>` with side-effects. In particular, C# 3.0 / .NET 3.5 expression trees are lousy at side-effects (making full lambda support tricky). The runtime side [is much better in .NET 4.0](http://marcgravell.blogspot.com/2008/11/future-expressions.html), but it is unclear at the moment how much of this will make it into the language (lambda compiler) in C# 4.0.
All you need (for delegate version) is:
```
public static void ForEach<T>(this IEnumerable<T> source, Action<T> action)
{ // note: omitted arg/null checks
foreach(T item in source) { action(item); }
}
```
Then from any query you can simply use `.ForEach(x => /* do something */)`. | For your specific example (populating a `List<XElement>`), I'd do it this way.
```
IEnumerable<XElement> FieldsToXElements(object instance)
{
List<XElement> fieldElements =
(
from field in instance.GetType().GetFields()
let name = field.Name
let value = field.GetValue(instance)
select new XElement(name, value)
).ToList(); //Another Option is List<T>.AddRange()
return fieldElements;
}
```
Also: don't forget that `List<T>` already provides a `.ForEach()` instance method, so to use it against any `IEnumerable<T>`, this is all the code you need.
```
myEnumerable.ToList().ForEach( x => myAction(x) );
``` | Any plans for "do"/Action LINQ operator? | [
"c#",
"linq",
"c#-4.0"
] |
I generate all my columns in a subclassed `DataGridView` programmatically. However, Visual Studio 2008 keeps reading my constructor (which populates a `DataTable` with empty content and binds it to the `DataGridView`) and generates code for the columns in the `InitializeComponent` method - in the process setting `AutoGenerateColumns` to `false`.
This causes errors in design-time compilation which are only solved by manually going into the design code and deleting all references to these autogenerated columns.
How can I stop it doing this?
**I have tried:**
* Making the control 'Frozen'
* Setting the `DataGridView` instantiated object `protected` (suggested in a previous post which referred to [this site](http://weblogs.asp.net/rweigelt/archive/2003/09/24/28984.aspx)) | It sounds like you are adding controls in the constructor. Perhaps add the columns slightly later - perhaps something like overriding `OnParentChanged`; you'll then be able to check `DesignMode` so you only add the columns during execution (not during design). | I've seen this behavior before for ComboBox's with the Items property and it's really frustrating. Here's how I've gotten around it with ComboBox. You should be able to apply this to the DataGridView.
I created a "new" property called Items and set it to not be browsable and to be explicitly hidden from serialization. Under the hood it just accesses the real Items property.
```
[Browsable(false)]
[DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden)]
public new ObjectCollection Items
{
get { return ((ComboBox)this).Items; }
}
[Browsable(false)]
[DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden)]
public new object DataSource
{
get { return ((ComboBox)this).DataSource; }
}
``` | How do you prevent the Visual Studio designer auto-generating columns in a DataGridView? | [
"c#",
".net",
"winforms",
"visual-studio-2008",
"datagridview"
] |
If I am using a tree structure of nodes similar to the code below, do I have to worry about the circular reference?
I have read that PHP uses a memory allocation mechanism which can make life very hard for the garbage collector when there are circular references involved.
What I want to know is:
* If my tree consists of only a few nodes, say 25, is this a problem?
* Will the memory be freed at the end of the script or am I slowly creating a problem for the server?
* Under what circumstances will this problem have an effect during script execution?
* Will manually destroying the references solve the problem and should I always do it?
```
class Node {
private $parent;
private $children;
function addChild( Node $child ) {
$this->children[] = $child;
$child->setParent( $this );
}
function setParent( $parent ) {
$this->parent = $parent;
}
}
//eg
$node0 = new Node;
$node1 = new Node;
// nodes 1 and 2 have a circular reference to each other
$node0->addChild( $node1 );
``` | Point by point:
> * If my tree consists of only a few nodes, say 25, is this a problem?
Not unless your nodes are real monsters.
> * Will the memory be freed at the end of the script or am I slowly creating a problem for the server?
When the interpreter shuts down all the memory is released.
> * Under what circumstances will this problem have an effect during script execution?
I doubt you will have anything to worry about unless you have very low memory limits or very large dynamic data structures. If you have 25 nodes that aren't being created/freed frequently you won't have an issue.
> * Will manually destroying the references solve the problem and should I always do it?
It will help. When loading a large data set into our database with Propel we ran into a lot of problem with memory consumption that we tracked to circular references not being freed. Our solution was to call a method that cleared all references. | Maybe so, but since it throws out all objects at the end of every request (unless you're caching), I don't think many PHP programmers worry about this.
If you're writing command line scripts in PHP, then maybe you've got a case for worrying about it, but you'd have to be writing some pretty convoluted PHP code before it becomes something worth worrying about. And if that's the case you have bigger problems.
Good luck. | What is the scale of PHP's circular reference problem and should I worry about it? | [
"php",
"garbage-collection",
"circular-reference"
] |
I am using this line to obtain and save an image from a URL.
```
file_put_contents("./images/".$pk.".jpg", file_get_contents($PIC_URL));
```
I am unsure what the best way to deal with an error is. At the moment it is failing because there is no permission, which will be remedied shortly, but I would like it to be able to deal with situations where PIC\_URL is empty or not an image. Should I deal with the error at this level (probably it is better for permission-related things) or should I check higher up if PIC\_URL is empty, or both?
Which is the best approach? | I'm not talented enough to claim this is the best method, but I would just test along the way:
```
$imageDir = "/path/to/images/dir/";
$imagePath = "$imageDir$pk.jpg";
if (!is_dir($imageDir) or !is_writable($imageDir)) {
// Error if directory doesn't exist or isn't writable.
} elseif (is_file($imagePath) and !is_writable($imagePath)) {
// Error if the file exists and isn't writable.
}
$image = file_get_contents($PIC_URL); // note: don't urlencode the whole URL, that would mangle the scheme
if (empty($image)) {
// Error if the image is empty/not accessible.
exit;
}
file_put_contents($imagePath, $image);
``` | I would use is\_writable(), passing it the folder name if the image doesn't already exist, or the filename if it does, before you attempt to write the image.
<https://www.php.net/manual/en/function.is-writable.php> | error clause for file_put_contents | [
"php",
"error-handling"
] |
For a homework assignment I was given a Card class that has enumerated types for the Rank and Suit. I am required to compare two poker hands (each hand is an `ArrayList` of 5 cards) and decide the winner.
The `isStraight()` function is really bothering me, because I have to start over the count after the Ace. For example,
QUEEN, KING, ACE, TWO, THREE
Is still considered a straight. What is the best way to code this functionality?
Here is the Rank/Suit enumerated type code, if that helps.
```
public enum Rank
{
TWO(2), THREE(3), FOUR(4), FIVE(5), SIX(6), SEVEN(7), EIGHT(8), NINE(9),
TEN(10), JACK(11), QUEEN(12), KING(13), ACE(14);
private final int points;
private Rank(int points)
{
this.points = points;
}
public int points()
{
return this.points;
}
}
public enum Suit
{
DIAMONDS, CLUBS, HEARTS, SPADES;
}
``` | You do realize that by the rules of any poker game I've ever played or heard of a straight cannot wrap right? Ace can be low [A,2,3,4,5] or high [10,J,Q,K,A] but it can't wrap. According to those rules (not yours) I've implemented something similar before. Basically you sort the array and walk it, making sure the current card is one higher than the previous. In the first iteration, if it is an ace, then you explicitly check for [A,2,3,4,5]. If it is you return true and if it isn't you continue with the normal straight logic. This should set you in the right direction. | You could write a fancy algorithm to return true despite the number of possible cards but if you realize there are only 10 valid combinations on a sorted hand, you could just look for these:
```
2-6, 3-7, 4-8, 5-9, 6-T, 7-J, 8-Q, 9-K, T-A, (2-5,A)
``` | Function to determine whether a poker hand is a straight? | [
"",
"java",
"performance",
"poker",
""
] |
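The sort-and-walk approach the chosen answer describes is easy to make concrete. A minimal sketch, in Python for brevity since the logic is language-neutral (the rank values match the question's `Rank` enum, TWO=2 through ACE=14; translating it to Java against that enum is mechanical). Note that, per the chosen answer's standard rules, the Q-K-A-2-3 "wrap" from the question is *not* a straight here:

```python
def is_straight(points):
    """points: the 5 rank values of a hand, TWO=2 ... ACE=14."""
    ranks = sorted(points)
    # Explicit special case for the ace-low straight A,2,3,4,5
    # (the ace sorts high as 14, so it can't be caught by the walk below).
    if ranks == [2, 3, 4, 5, 14]:
        return True
    # Otherwise every card must be exactly one higher than the previous.
    return all(b - a == 1 for a, b in zip(ranks, ranks[1:]))

print(is_straight([10, 11, 12, 13, 14]))  # True  (ten-to-ace)
print(is_straight([14, 2, 3, 4, 5]))      # True  (ace-low)
print(is_straight([12, 13, 14, 2, 3]))    # False (no wrapping)
```

The pairwise `zip(ranks, ranks[1:])` walk also rejects pairs and other duplicates automatically, since a repeated rank gives a difference of zero.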
I'm on the lookout for a C or C++ library / SDK that will allow me to either write a 3D PDF directly, or convert a DXF or DWG into a 3D PDF. So far I have come up with the [PDF3d library](http://www.pdf3d.co.uk/), which fits the bill but is reasonably costly and has an expensive per-user runtime license. I don't mind a reasonable SDK cost, but the per-seat cost kills it for me.
Anyone aware of any alternatives? | I use PDF XChange a lot for 2D CAD plots to PDF and it works well. I don't know whether it does 3D (I don't think so); I could find no mention of it at first glance on their site.
Your other option is a 3D DWF. Also consider that DWG TrueView is a free viewer and printer that handles 3D DWGs natively. You used to be able to automate TrueView through COM but I'm not sure if that is still so. Here is a blog post on some automation of Trueview - <http://through-the-interface.typepad.com/through_the_interface/2007/10/au-handouts-t-1.html> | I know that acrobat itself provides their free SDK 8.1 which supports 3d features as given in <http://www.adobe.com/devnet/acrobat/pdfs/Acrobat_SDK_whats_new.pdf>
But I am not sure how much work is required to build a plugin for the conversions.
Hope this might work for you. | Any recommendations for a PDF 3D SDK with C++ interface | [
"",
"c++",
"pdf",
"sdk",
"3d",
""
] |
I have a scenario where I have multiple threads adding to a queue and multiple threads reading from the same queue. If the queue reaches a specific size **all threads** that are filling the queue will be blocked on add until an item is removed from the queue.
The solution below is what I am using right now and my question is: How can this be improved? Is there an object that already enables this behavior in the BCL that I should be using?
```
internal class BlockingCollection<T> : CollectionBase, IEnumerable
{
//todo: might be worth changing this into a proper QUEUE
private AutoResetEvent _FullEvent = new AutoResetEvent(false);
internal T this[int i]
{
get { return (T) List[i]; }
}
private int _MaxSize;
internal int MaxSize
{
get { return _MaxSize; }
set
{
_MaxSize = value;
checkSize();
}
}
internal BlockingCollection(int maxSize)
{
MaxSize = maxSize;
}
internal void Add(T item)
{
Trace.WriteLine(string.Format("BlockingCollection add waiting: {0}", Thread.CurrentThread.ManagedThreadId));
_FullEvent.WaitOne();
List.Add(item);
Trace.WriteLine(string.Format("BlockingCollection item added: {0}", Thread.CurrentThread.ManagedThreadId));
checkSize();
}
internal void Remove(T item)
{
lock (List)
{
List.Remove(item);
}
Trace.WriteLine(string.Format("BlockingCollection item removed: {0}", Thread.CurrentThread.ManagedThreadId));
}
protected override void OnRemoveComplete(int index, object value)
{
checkSize();
base.OnRemoveComplete(index, value);
}
internal new IEnumerator GetEnumerator()
{
return List.GetEnumerator();
}
private void checkSize()
{
if (Count < MaxSize)
{
Trace.WriteLine(string.Format("BlockingCollection FullEvent set: {0}", Thread.CurrentThread.ManagedThreadId));
_FullEvent.Set();
}
else
{
Trace.WriteLine(string.Format("BlockingCollection FullEvent reset: {0}", Thread.CurrentThread.ManagedThreadId));
_FullEvent.Reset();
}
}
}
``` | That looks very unsafe (very little synchronization); how about something like:
```
class SizeQueue<T>
{
private readonly Queue<T> queue = new Queue<T>();
private readonly int maxSize;
public SizeQueue(int maxSize) { this.maxSize = maxSize; }
public void Enqueue(T item)
{
lock (queue)
{
while (queue.Count >= maxSize)
{
Monitor.Wait(queue);
}
queue.Enqueue(item);
if (queue.Count == 1)
{
// wake up any blocked dequeue
Monitor.PulseAll(queue);
}
}
}
public T Dequeue()
{
lock (queue)
{
while (queue.Count == 0)
{
Monitor.Wait(queue);
}
T item = queue.Dequeue();
if (queue.Count == maxSize - 1)
{
// wake up any blocked enqueue
Monitor.PulseAll(queue);
}
return item;
}
}
}
```
(edit)
In reality, you'd want a way to close the queue so that readers start exiting cleanly - perhaps something like a bool flag - if set, an empty queue just returns (rather than blocking):
```
bool closing;
public void Close()
{
lock(queue)
{
closing = true;
Monitor.PulseAll(queue);
}
}
public bool TryDequeue(out T value)
{
lock (queue)
{
while (queue.Count == 0)
{
if (closing)
{
value = default(T);
return false;
}
Monitor.Wait(queue);
}
value = queue.Dequeue();
if (queue.Count == maxSize - 1)
{
// wake up any blocked enqueue
Monitor.PulseAll(queue);
}
return true;
}
}
``` | Use .net 4 BlockingCollection, to enqueue use Add(), to dequeue use Take(). It internally uses non-blocking ConcurrentQueue. More info here [Fast and Best Producer/consumer queue technique BlockingCollection vs concurrent Queue](https://stackoverflow.com/questions/5001003/fast-and-best-producer-consumer-queue-technique-blockingcollection-vs-concurrent) | Creating a blocking Queue<T> in .NET? | [
"",
"c#",
".net",
"multithreading",
"collections",
"queue",
""
] |
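A side note for anyone wanting this behaviour without writing the synchronisation by hand: the bounded producer/consumer queue that Marc's answer builds with `Monitor.Wait`/`PulseAll` (and that .NET 4 later shipped as `BlockingCollection<T>`) exists in Python's standard library as `queue.Queue(maxsize=...)`, whose `put()` blocks when the queue is full and `get()` blocks when it is empty. A minimal demonstration:

```python
import queue
import threading

q = queue.Queue(maxsize=2)  # producers block once 2 items are queued

def producer():
    for i in range(5):
        q.put(i)        # blocks while the queue is full
    q.put(None)         # sentinel: tells the consumer to stop

consumed = []

def consumer():
    while True:
        item = q.get()  # blocks while the queue is empty
        if item is None:
            break
        consumed.append(item)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()

print(consumed)  # [0, 1, 2, 3, 4]
```

The sentinel `None` plays the same role as Marc's `Close()`/`TryDequeue` addition: it gives readers a clean way to exit instead of blocking forever on an empty queue.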
I have a Windows application that accepts args, which I use to set up the window behaviour.
The problem is that I need to pass text in some of these arguments, but my application sees it as multiple args, so this:
```
"http://www.google.com/" contact 450 300 false "Contact Info" true "Stay Visible" true
```
has actually **11** arguments instead of the **9** that I am expecting.
What is the trick to get **"contact info"** and **"stay visible"** to be passed as only one argument? | Are you running it directly from the command line? If so, I'd expect that to work just fine. (I assume you're using the parameters from the Main method, by the way?)
For instance, here's a small test app:
```
using System;
class Test
{
static void Main(string[] args)
{
foreach (string arg in args)
{
Console.WriteLine(arg);
}
}
}
```
Execution:
```
>test.exe first "second arg" third
first
second arg
third
```
This is a console app, but there's no difference between that and WinForms in terms of what gets passed to the Main method. | [MSDN says](http://msdn.microsoft.com/en-us/library/96s74eb0(VS.80).aspx), that it should work the way you mentioned.
```
class CommandLine
{
static void Main(string[] args)
{
// The Length property provides the number of array elements
System.Console.WriteLine("parameter count = {0}", args.Length);
for (int i = 0; i < args.Length; i++)
{
System.Console.WriteLine("Arg[{0}] = [{1}]", i, args[i]);
}
}
}
``` | passing args (arguments) into a window form application | [
"",
"c#",
"windows",
"arguments",
""
] |
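Jon's point about quoting can be checked without compiling anything: Python's `shlex.split` tokenises a string using shell-style quoting rules (close enough to the CLR's behaviour for this example) and shows the quoted phrases arriving as single arguments, 9 in total rather than 11:

```python
import shlex

cmd_line = ('"http://www.google.com/" contact 450 300 false '
            '"Contact Info" true "Stay Visible" true')
args = shlex.split(cmd_line)

print(len(args))  # 9 -- not 11
print(args[5])    # Contact Info
print(args[7])    # Stay Visible
```

If you still see 11 arguments, the quotes are being stripped or mangled before they reach your process, e.g. by a shortcut, batch file, or another program building the command line.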
I have a set of records in my MS SQL table, with Date as the primary key. But the dates are only working days, not consecutive days. E.g.:
1/3/2000 12:00:00 AM 5209.540000000 5384.660000000 5209.540000000 5375.110000000
1/4/2000 12:00:00 AM 5533.980000000 5533.980000000 5376.430000000 5491.010000000
1/5/2000 12:00:00 AM 5265.090000000 5464.350000000 5184.480000000 5357.000000000
1/6/2000 12:00:00 AM 5424.210000000 5489.860000000 5391.330000000 5421.530000000
1/7/2000 12:00:00 AM 5358.280000000 5463.250000000 5330.580000000 5414.480000000
1/10/2000 12:00:00 AM 5617.590000000 5668.280000000 5459.970000000 5518.390000000
1/11/2000 12:00:00 AM 5513.040000000 5537.690000000 5221.280000000 5296.300000000
1/12/2000 12:00:00 AM 5267.850000000 5494.300000000 5267.850000000 5491.200000000
In this I am trying to introduce a new column into the table; its value should be the value of the 3rd column minus the value of the 3rd column of the previous working day. Please help me in writing such a query. I am finding it difficult as the dates are not present for the weekends.
```
CREATE TABLE MyTable
(
MyDate datetime NOT NULL PRIMARY KEY,
Col2 decimal(14,4) NOT NULL,
Col3 decimal(14,4) NOT NULL,
Col4 decimal(14,4) NOT NULL,
Col5 decimal(14,4) NOT NULL
)
GO
INSERT INTO MyTable
SELECT '1/3/2000 12:00:00 AM', 5209.540000000, 5384.660000000, 5209.540000000, 5375.110000000
UNION ALL
SELECT '1/4/2000 12:00:00 AM', 5533.980000000, 5533.980000000, 5376.430000000, 5491.010000000
UNION ALL
SELECT '1/5/2000 12:00:00 AM', 5265.090000000, 5464.350000000, 5184.480000000, 5357.000000000
UNION ALL
SELECT '1/6/2000 12:00:00 AM', 5424.210000000, 5489.860000000, 5391.330000000, 5421.530000000
UNION ALL
SELECT '1/7/2000 12:00:00 AM', 5358.280000000, 5463.250000000, 5330.580000000, 5414.480000000
UNION ALL
SELECT '1/10/2000 12:00:00 AM', 5617.590000000, 5668.280000000, 5459.970000000, 5518.390000000
UNION ALL
SELECT '1/11/2000 12:00:00 AM', 5513.040000000, 5537.690000000, 5221.280000000, 5296.300000000
UNION ALL
SELECT '1/12/2000 12:00:00 AM', 5267.850000000, 5494.300000000, 5267.850000000, 5491.200000000
GO
CREATE VIEW MyView
AS
SELECT T1.*,
CalculatedColumn = Col3 -
(SELECT Col3 FROM MyTable Q2
WHERE Q2.MyDate = (SELECT MAX(Q1.MyDate)
FROM MyTable Q1
WHERE Q1.MyDate < T1.MyDate)
)
FROM MyTable T1
GO
SELECT * FROM MyView
GO
```
**Results**
```
MyDate Col2 Col3 Col4 Col5 CalculatedColumn
----------------------- --------- --------- --------- --------- ----------------
2000-01-03 00:00:00.000 5209.5400 5384.6600 5209.5400 5375.1100 NULL
2000-01-04 00:00:00.000 5533.9800 5533.9800 5376.4300 5491.0100 149.3200
2000-01-05 00:00:00.000 5265.0900 5464.3500 5184.4800 5357.0000 -69.6300
2000-01-06 00:00:00.000 5424.2100 5489.8600 5391.3300 5421.5300 25.5100
2000-01-07 00:00:00.000 5358.2800 5463.2500 5330.5800 5414.4800 -26.6100
2000-01-10 00:00:00.000 5617.5900 5668.2800 5459.9700 5518.3900 205.0300
2000-01-11 00:00:00.000 5513.0400 5537.6900 5221.2800 5296.3000 -130.5900
2000-01-12 00:00:00.000 5267.8500 5494.3000 5267.8500 5491.2000 -43.3900
``` | You need to break this down into 2 parts. First is updating your existing data, and second is ensuring all new data has the correct value added.
For the first part, consider using a CURSOR. It will probably take a while to run, but at least you only run it once. Use a CURSOR like a FOR loop; iterate through each row in your data, ignoring the first row (since you haven't specified how to calculate the value for the new column when there is no previous date). Most likely you should sort by date ascending, just in case.
As you iterate through, use variables to store the values from this row. As you loop, copy those variables into previous row versions before you get the new row. For example, you have a variable called 'Col3' and another one called 'lastCol3'. Before you loop to the next row (ie the cursor moves to the next row) you copy the value of col3 into lastCol3 and then you get the new value for col3. Now you have your current and previous value on a per row basis, and can call 'update' to update the new column.
For new data going forward, you need to ensure the new value is provided, or if you want SQL Server to do it, use a stored procedure which selects the most recent row, col3, and uses the value to calculate the new value before inserting into the table. | How to get the next record in SQL table | [
"",
"sql",
"sql-server",
"sql-server-2005",
""
] |
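To sanity-check the correlated-subquery logic ("Col3 minus the Col3 of the row with the greatest earlier date"), here is a small Python sketch over the sample data that reproduces the CalculatedColumn values from the chosen answer's result set. Note that the weekend gaps are irrelevant: the "previous working day" is simply the previous row in date order.

```python
# (date, Col3) pairs taken from the question's sample data
rows = [
    ("2000-01-03", 5384.66), ("2000-01-04", 5533.98),
    ("2000-01-05", 5464.35), ("2000-01-06", 5489.86),
    ("2000-01-07", 5463.25), ("2000-01-10", 5668.28),
    ("2000-01-11", 5537.69), ("2000-01-12", 5494.30),
]

rows.sort()  # ISO-format dates sort correctly as strings

# First row has no predecessor, hence None (NULL in the SQL result).
diffs = [None] + [round(b[1] - a[1], 2) for a, b in zip(rows, rows[1:])]

for (date, _), diff in zip(rows, diffs):
    print(date, diff)
```

(On SQL Server 2012 and later the same calculation is a one-liner with the `LAG()` window function, but that is not available on SQL Server 2005.)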
I am creating a C# Windows Mobile application in which I need to programmatically invoke the click event on a Button.
I have looked at the Button class and do not see a way to do this. | You might consider changing your design: at least move the logic from button1\_Click handler somewhere else so that you'll be able to invoke it from wherever you want. | You can do something like this:
```
private void button1_Click(object sender, EventArgs e)
{
OnButtonClick();
}
private void OnButtonClick()
{
}
```
Then you can call your `OnButtonClick()` wherever you need to. | How do I programmatically invoke an event? | [
"",
"c#",
"windows-mobile",
""
] |
I'm familiar with ORM as a concept, and I've even used nHibernate several years ago for a .NET project; however, I haven't kept up with the topic of ORM in Java and haven't had a chance to use any of these tools.
But, now I may have the chance to begin to use some ORM tools for one of our applications, in an attempt to move away from a series of legacy web services.
I'm having a hard time telling the difference between the JPA spec, what you get with the Hibernate library itself, and what JDO has to offer.
So, I understand that this question is a bit open-ended, but I was hoping to get some opinions on:
* What are the pros and cons of each?
* Which would you suggest for a new project?
* Are there certain conditions when it would make sense to use one framework vs the other? | Some notes:
* JDO and JPA are both specifications, not implementations.
* The idea is you can swap JPA implementations, if you restrict your code to use standard JPA only. (Ditto for JDO.)
* Hibernate can be used as one such implementation of JPA.
* However, Hibernate provides a native API, with features above and beyond that of JPA.
IMO, I would recommend Hibernate.
---
There have been some comments / questions about what you should do if you *need* to use Hibernate-specific features. There are many ways to look at this, but my advice would be:
* If you are not worried by the prospect of vendor tie-in, then make your choice between Hibernate, and other JPA and JDO implementations *including* the various vendor specific extensions in your decision making.
* If you are worried by the prospect of vendor tie-in, and you can't use JPA without resorting to vendor specific extensions, then don't use JPA. (Ditto for JDO).
In reality, you will probably need to trade-off *how much* you are worried by vendor tie-in versus *how much* you need those vendor specific extensions.
And there are other factors too, like how well you / your staff know the respective technologies, how much the products will cost in licensing, and whose story you believe about what is going to happen in the future for JDO and JPA. | Make sure you evaluate the DataNucleus implementation of JDO. We started out with Hibernate because it appeared to be so popular but pretty soon realized that it's not a 100% transparent persistence solution. There are too many caveats and the documentation is full of 'if you have this situation then you must write your code like this' that took away the fun of freely modeling and coding however we want. JDO has *never* caused me to adjust my code or my model to get it to 'work properly'. I can just design and code simple POJOs as if I was going to use them 'in memory' only, yet I can persist them transparently.
The other advantage of JDO/DataNucleus over hibernate is that it doesn't have all the run time reflection overhead and is more memory efficient because it uses build time byte code enhancement (maybe add 1 sec to your build time for a large project) rather than hibernate's run time reflection powered proxy pattern.
Another thing you might find annoying with Hibernate is that a reference you have to what you think is the object... it's often a 'proxy' for the object. Without the benefit of byte code enhancement the proxy pattern is required to allow on demand loading (i.e. avoid pulling in your entire object graph when you pull in a top level object). Be prepared to override equals and hashcode because the object you think you're referencing is often just a proxy for that object.
Here's an example of frustrations you'll get with Hibernate that you won't get with JDO:
<http://blog.andrewbeacock.com/2008/08/how-to-implement-hibernate-safe-equals.html>
<http://burtbeckwith.com/blog/?p=53>
If you like coding to 'workarounds' then, sure, Hibernate is for you. If you appreciate clean, pure, object oriented, model driven development where you spend all your time on modeling, design and coding and none of it on ugly workarounds then spend a few hours evaluating [JDO/DataNucleus](http://www.datanucleus.org). The hours invested will be repaid a thousand fold.
## Update Feb 2017
For quite some time now DataNucleus has implemented the JPA persistence standard in addition to the JDO persistence standard, so porting existing JPA projects from Hibernate to DataNucleus should be very straightforward, and you can get all of the above-mentioned benefits of DataNucleus with very little code change, if any.
So in terms of the question, the choice of a particular standard, JPA (RDBMS only) vs JDO (RDBMS + No SQL + ODBMSes + others), DataNucleus supports both, Hibernate is restricted to JPA only.
## Performance of Hibernate DB updates
Another issue to consider when choosing an ORM is the efficiency of its dirty checking mechanism - that becomes very important when it needs to construct the SQL to update the objects that have changed in the current transaction - especially when there are a lot of objects.
There is a detailed technical description of Hibernate's dirty checking mechanism in this SO answer:
[JPA with HIBERNATE insert very slow](https://stackoverflow.com/questions/19983065/jpa-with-hibernate-insert-very-slow/42212442#42212442) | Hibernate vs JPA vs JDO - pros and cons of each? | [
"",
"java",
"hibernate",
"orm",
"jpa",
"jdo",
""
] |
Quick (and hopefully easy) question: I need to trigger a download of a PDF file that's generated by a PHP file. I can do this:
```
<a href="download.php">Download</a>
```
but should I be doing this another way? Javascript maybe? The above works but the window shows "Loading..." until the download starts. I'd like to provide some feedback to the user that something is happening.
Ideas?
**Note:** I already have the code that sends the file from the server. It works perfectly. This question is simply about how best to call that script from the client.
Some sites have downloads that automatically start. How do they do that?
The problem with a direct URL is that if the PHP script errors it'll replace the content of the existing page, which is not what I want.
Yes javascript, something like:
```
<a href="download.php" onclick="this.innerHTML='Downloading..'; downloadPdf(this); return false;">Download</a>
```
If you need to actually know when the download has started, you probably need to use a hidden iframe and then hook the "onload" event on it. For example:
```
// javascript
function downloadPdf(el) {
var iframe = document.createElement("iframe");
iframe.src = "download.php";
iframe.onload = function() {
// iframe has finished loading, download has started
el.innerHTML = "Download";
}
iframe.style.display = "none";
document.body.appendChild(iframe);
}
``` | The solution you have for download is fine. You may want to consider some visual feedback to the user, perhaps by using javascript to show a "Downloading, please wait message" on the current page when the link is clicked via an onclick handler. Or simply indicate that the download may take some time to start next to the link. Since IE will unload the page, stopping any GIF animations, I prefer text indications for file downloads. | Correct way to trigger a file download (in PHP)? | [
"",
"javascript",
"html",
"download",
""
] |
I have a list with 15 numbers. How can I produce all 32,768 combinations of those numbers (i.e., any number of elements, in the original order)?
I thought of looping through the decimal integers 1–32768 and using the binary representation of each number as a filter to pick out the appropriate list elements. Is there a better way to do it?
---
For combinations **of a specific length**, see [Get all (n-choose-k) combinations of length n](https://stackoverflow.com/questions/27974126). Please use that question to close duplicates instead where appropriate.
When closing questions about combinatorics as duplicates, it is very important to make sure of what OP **actually wants, not** the words that were used to describe the problem. It is extremely common for people who want, for example, a Cartesian product (see [How to get the cartesian product of multiple lists](https://stackoverflow.com/questions/533905)) to ask about "combinations". | Have a look at [itertools.combinations](http://docs.python.org/library/itertools.html#itertools.combinations):
> ```
> itertools.combinations(iterable, r)
> ```
>
> Return r length subsequences of elements from
> the input iterable.
>
> Combinations are emitted in lexicographic sort order. So, if the
> input iterable is sorted, the
> combination tuples will be produced in
> sorted order.
Since 2.6, batteries are included! | [This answer](https://stackoverflow.com/questions/464864/python-code-to-pick-out-all-possible-combinations-from-a-list/464882#464882) missed one aspect: the OP asked for ALL combinations... not just combinations of length "r".
So you'd either have to loop through all lengths "L":
```
import itertools
stuff = [1, 2, 3]
for L in range(len(stuff) + 1):
for subset in itertools.combinations(stuff, L):
print(subset)
```
Or -- if you want to get snazzy (or bend the brain of whoever reads your code after you) -- you can generate the chain of "combinations()" generators, and iterate through that:
```
from itertools import chain, combinations
def all_subsets(ss):
return chain(*map(lambda x: combinations(ss, x), range(0, len(ss)+1)))
for subset in all_subsets(stuff):
print(subset)
``` | Get all possible (2^N) combinations of a list’s elements, of any length | [
"",
"python",
"combinations",
""
] |
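Putting the two answers together into something runnable: loop `r` from 0 to `len(s)` over `itertools.combinations`, which is exactly the `powerset` recipe from the itertools documentation. For the 15-number list in the question this yields the full 2^15 = 32,768 subsets, each in the original order:

```python
from itertools import chain, combinations

def powerset(iterable):
    "powerset([1,2,3]) -> () (1,) (2,) (3,) (1,2) (1,3) (2,3) (1,2,3)"
    s = list(iterable)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

subsets = list(powerset([1, 2, 3]))
print(subsets)
# [(), (1,), (2,), (3,), (1, 2), (1, 3), (2, 3), (1, 2, 3)]

print(len(list(powerset(range(15)))))  # 32768
```

Because `powerset` returns a lazy iterator, you can also walk the 32,768 subsets one at a time without materialising them all in memory.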
How good is C# type inference? I read somewhere that it's only for local variables? Does it work for class level attributes? For method signatures? Method return types? etc. | There are a few main kinds of type inference in C#:
* Implicitly typed local variables:
+ Only for local variables
+ Only when the value is assigned as part of the declaration
+ Value cannot be null
+ Value cannot be a lambda expression, anonymous method or method group (without a cast)
+ The compile-time type of the value is used for the type of the variable
+ Any further uses of the variable are only checked against the type determined by the initial declaration+assignment; they don't contribute to the inference itself.
* Generic method type argument inference, i.e. you don't specify the type arguments in a call to a generic method, the compiler figures them out based on the arguments.
+ Would be really handy to have this for generic types as well as generic methods
+ Really handy anyway - LINQ would be hard or impossible to use without it
+ Anonymous types would be fairly useless without it
+ Really complicated rules, even the spec is wrong in a few places
* Lambda expression parameter type inference
+ Compiler tries to work out the types of the parameters for lambda expressions based on the context in which it's used
+ Usually works pretty well, in my experience
* Array type inference, e.g. `new[] { "Hi", "there" }` instead of `new string[] { "Hi", "there" }`
+ Various small restrictions, nothing major
I've probably forgotten some other features which might be called "type inference". I suspect you're mostly interested in the first, but the others might be relevant to you too :) | It can only be used for local variables, but it can detect the type in many different forms.
```
var myVar = SomeMethodThatReturnsInt(); // `myVar` will be deduced as an `int`.
var myIntList = new List<int>(); // `myIntList` will be deduced as a List<int>.
var myOwnVar = new { Name = "John", Age = 100 }; // `myOwnVar` will be deduced as an AnonymousType.
```
> Notice: When you work with anonymous types, you **must** declare the variable with `var`.
Another example of Type Inference with Lambda expressions:
```
var myList = new List<int>();
// <add values to list>
int x = myList.Find(i => i == 5); // `i` will be deduced as an `int`.
``` | How good is the C# type inference? | [
"",
"c#",
".net",
"type-inference",
""
] |
I'm currently working on a Java project that is emitting the following warning when I compile:
```
/src/com/myco/apps/AppDBCore.java:439: warning: unmappable character for encoding UTF8
[javac] String copyright = "� 2003-2008 My Company. All rights reserved.";
```
I'm not sure how SO will render the character before the date, but it should be a copyright symbol, and is displayed in the warning as a question mark in a diamond.
It's worth noting that the character appears in the output artifact correctly, but the warnings are a nuisance and the file containing this class may one day be touched by a text editor that saves the encoding incorrectly...
How can I inject this character into the "copyright" string so that the compiler is happy, and the symbol is preserved in the file without potential re-encoding issues? | Use the "\uxxxx" escape format.
According to [Wikipedia](http://en.wikipedia.org/wiki/Copyright_symbol), the copyright symbol is unicode U+00A9 so your line should read:
```
String copyright = "\u00a9 2003-2008 My Company. All rights reserved.";
``` | Try with:
javac -encoding ISO-8859-1 file_name.java | "unmappable character for encoding" warning in Java | [
"",
"java",
"encoding",
"utf-8",
"ascii",
""
] |
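The `\uXXXX` escape is not Java-specific; Python string literals use the same notation, so the code point is easy to verify interactively:

```python
import unicodedata

s = "\u00a9 2003-2008 My Company. All rights reserved."
print(s[0])                    # the copyright symbol
print(unicodedata.name(s[0]))  # COPYRIGHT SIGN
print(len("\u00a9"))           # 1 -- a single character, not six
```

The escape exists only in the source text; the compiled string contains the real U+00A9 character, which is why the output artifact is unaffected.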
<http://docs.djangoproject.com/en/dev/ref/templates/builtins/#regroup>
I can think of a few ways of doing it with loops but I'd particularly like to know if there is a neat one-liner. | Combine [`itertools.groupby`](http://docs.python.org/library/itertools.html#itertools.groupby) with [`operator.itemgetter`](http://docs.python.org/library/operator.html#operator.itemgetter) to get a pretty nice solution:
```
from operator import itemgetter
from itertools import groupby
key = itemgetter('gender')
iter = groupby(sorted(people, key=key), key=key)
for gender, people in iter:
print '===', gender, '==='
for person in people:
print person
``` | If the source of data (`people` in this case) is already sorted by the key, you can bypass the `sorted` call:
```
grouped = groupby(people, key=lambda x: x['gender'])
for gender, people in grouped:
print '===', gender, '==='
for person in people:
print person
```
Note: If the data source is a plain dictionary, there are no guarantees of order; in that case you must call `sorted`. Here I'm supposing that the data comes from a `collections.OrderedDict` or some other kind of ordered data structure.
"",
"python",
"django",
"django-templates",
""
] |
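The "sort first" caveat in both answers is worth seeing concretely: `itertools.groupby` only merges *adjacent* equal keys, so unsorted input yields fragmented groups. A minimal example (with made-up sample records):

```python
from itertools import groupby
from operator import itemgetter

people = [
    {"name": "Ann", "gender": "F"},
    {"name": "Bob", "gender": "M"},
    {"name": "Cat", "gender": "F"},
]
key = itemgetter("gender")

# Without sorting, the two "F" records land in separate groups:
unsorted_groups = [g for g, _ in groupby(people, key=key)]
print(unsorted_groups)  # ['F', 'M', 'F']

# Sorting by the same key first gives one group per gender:
sorted_groups = [g for g, _ in groupby(sorted(people, key=key), key=key)]
print(sorted_groups)  # ['F', 'M']
```

Django's `regroup` tag documents the same requirement: the list must already be ordered by the attribute you regroup on.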
In my classes I often write a quick `operator!=` by returning `!(*this == rhs)`, e.g.:
```
class Foo
{
private:
int n_;
std::string str_;
public:
...
bool operator==(const Foo& rhs) const
{
return n_ == rhs.n_ && str_ == rhs.str_;
}
bool operator!=(const Foo& rhs) const
{
return !(*this == rhs);
}
};
```
I can't see any obvious problems with doing this but thought I'd ask if anyone knows of any. | I believe that's the preferred method of implementing `operator!=` so that you don't repeat yourself, and you have a guaranteed correct relationship with `operator==`. | Defining `operator!=` as `!operator==` is just fine
For getting these trivial equivalent operators easily defined, I always use [Boost.Operators](http://www.boost.org/doc/libs/1_37_0/libs/utility/operators.htm).
The case with only `operator==` and `operator!=` (i.e. using [equality\_comparable<>](http://www.boost.org/doc/libs/1_37_0/libs/utility/operators.htm#equality_comparable2)) doesn't gain very much.
But when you need less and greater than too, or some combination of `operator+`, `operator*` etc. this becomes very convenient.
An example for your case would read
```
class Foo : private boost::equality_comparable< Foo >
{
private:
int n_;
std::string str_;
public:
...
bool operator==(const Foo& rhs) const
{
return n_ == rhs.n_ && str_ == rhs.str_;
}
};
``` | Quick and dirty operator!= | [
"",
"c++",
"operator-overloading",
""
] |
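As a point of comparison, some languages bake this convention in: Python 3 derives `__ne__` as the negation of a user-defined `__eq__` automatically, which is the same idea as letting Boost.Operators generate the trivial operator for you. A quick check:

```python
class Foo:
    def __init__(self, n, s):
        self.n, self.s = n, s

    def __eq__(self, other):
        return self.n == other.n and self.s == other.s
    # No __ne__ needed: Python 3 derives it as "not __eq__" by default.

a, b, c = Foo(1, "x"), Foo(1, "x"), Foo(2, "y")
print(a == b, a != b)  # True False
print(a == c, a != c)  # False True
```

Either way, the relationship between the two operators is guaranteed consistent because only one of them contains the actual comparison logic.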
Ignoring programming style and design, is it "safe" to call delete on a variable allocated on the stack?
For example:
```
int nAmount;
delete &nAmount;
```
or
```
class sample
{
public:
sample();
~sample() { delete &nAmount;}
int nAmount;
}
``` | [No](https://msdn.microsoft.com/en-us/library/h6227113(v=vs.140).aspx), it is not safe to call `delete` on a stack-allocated variable. You should only call `delete` on things created by `new`.
* For each `malloc` or `calloc`, there should be exactly one `free`.
* For each `new` there should be exactly one `delete`.
* For each `new[]` there should be exactly one `delete[]`.
* For each stack allocation, there should be no explicit freeing or deletion. The destructor is called automatically, where applicable.
In general, you cannot mix and match any of these, e.g. no `free`-ing or `delete[]`-ing a `new` object. Doing so results in undefined behavior. | Well, let's try it:
```
jeremy@jeremy-desktop:~$ echo 'main() { int a; delete &a; }' > test.cpp
jeremy@jeremy-desktop:~$ g++ -o test test.cpp
jeremy@jeremy-desktop:~$ ./test
Segmentation fault
```
So apparently it is not safe at all. | Calling delete on variable allocated on the stack | [
"",
"c++",
"stack",
"heap-memory",
"delete-operator",
""
] |
I have a very simple greasemonkey script that I want to call an already existing javascript function on the page. I've read the documentation and nothing seems to work
```
window.setTimeout(function() {
alert('test') // This alert works, but nothing after it does
myFunction() // undefined
window.myFunction() // undefined
document.myFunction() // undefined
}, 1000);
``` | Try using:
unsafeWindow.myFunction();
More details and info - <http://wiki.greasespot.net/UnsafeWindow> | One way to call a function in the original page is like this:
```
location.href = "javascript:void(myFunction());";
```
It is a bit ugly. There is also the [unsafeWindow](http://wiki.greasespot.net/UnsafeWindow) provided by GreaseMonkey, but the authors advise against using it.
```
unsafeWindow.myFunction();
```
Looks neater but make sure you understand the ramifications. From the manual:
> unsafeWindow bypasses Greasemonkey's
> XPCNativeWrapper-based security model,
> which exists to make sure that
> malicious web pages cannot alter
> objects in such a way as to make
> greasemonkey scripts (which execute
> with more privileges than ordinary
> Javascript running in a web page) do
> things that their authors or users did
> not intend. User scripts should
> therefore avoid calling or in any
> other way depending on any properties
> on unsafeWindow - especially if they
> are executed for arbitrary web pages,
> such as those with @include \*, where
> the page authors may have subverted
> the environment in this way.
In other words, your script elevates the privileges available to the original page script if you use unsafeWindow. | How can I get greasemonkey to call a function on a page after it loads | [
"",
"javascript",
"greasemonkey",
""
] |
I found that the size of msi file is 69.0 MB although the size my main application is 1.5 MB, so where all this size goes for?! | Like the previous poster I would recommend using [Orca](https://learn.microsoft.com/en-us/windows/win32/msi/orca-exe) to inspect the package. It is included in the Windows Installer SDK.
You have to know what is required for your application to run. Mostly you need the VC++ runtime, data access components, 3rd-party DLLs. If your application is managed you can add ~20 MB for the .NET 2.0 runtime. The .NET 3.5 redistributable is about ~200 MB. The client profile is about ~70 MB, which maybe answers your question.
Consider using web downloaders, which would download and install .NET runtime on demand. | ***Prerequisites & Runtimes***: My guess is that you have included a large number of *merge modules*, *runtime packages* or *prerequisites* such as the `.NET framework` in your MSI. A prerequisite like `Crystal Reports` is a common way to bloat your package.
***MSI File Fragmentation***: Another possibility is that you have **saved the MSI many times**. MSI files are COM-structured storage files, and they don't feature any "defragmentation" of the internal streams. To determine if this is the problem, just open the MSI in Orca and do a Save As... If the file size is reduced, the problem (or part of it) was caused by file fragmentation.
***Old-Style Office Files***: Note that the "Save As..." behavior is the same as in Office documents. When doing Save As on a Word document, for example, the file size may be drastically reduced. The cause of this is the "defragmentation" that occurs when the file content is streamed to a new file with no fragmentation. Newer versions of Office uses a Zip-format with XML and resources in there instead of COM structured storage files. | Why msi file size is bigger than main application | [
"",
"c#",
"windows-installer",
""
] |
How many records should there be before I consider indexing my sql tables? | There's no good reason to forego obvious indexes (FKs, etc.) when you're creating the table. It will never noticeably affect performance to have unnecessary indexes on tiny tables, and it's good to take a first cut when your mind is into schema design. Also, some indexes serve to prevent duplicates, which can be useful regardless of table size.
I guess the proper answer to your question is that the number of records in the table should have nothing to do with when to create indexes. | I would create the index entries when I create my table. If you decide to create indices after the table has grown to 100, 1000, or 100000 entries it can take a lot of time and perhaps make your database unavailable while you are doing it.
Think about the table first, create the indices you think you'll need, and then move on.
In some cases you will discover that you should have indexed a column; if that's the case, fix it when you discover it.
Creating an index on a searched field is not a pre-optimization; it's just what should be done. | When should you consider indexing your sql tables? | [
"",
"sql",
"indexing",
""
] |
In one of our ASP.NET applications in C#, we take a certain data collection (SubSonic collection) and export it to Excel. We also want to import Excel files in a specific format. I'm looking for a library I can use for this purpose.
Requirements:
* Excel 2007 files (Does Excel 2003 support over 64k rows? I need more than that.)
* Does not require Excel on the server
* Takes a typed collection and, if it can, tries to put numeric fields as numeric in Excel.
* Works well with large files (100k to 10M) - fast enough.
* Doesn't crash when exporting GUIDs!
* Does not cost a crapload of money (no enterprise library like aspose). Free is always great, but can be a commercial library.
What library do you recommend? Have you used it for large quantities of data? Are there other solutions?
Right now, I am using a simple tool that generates HTML that is loaded by Excel later on, but I am losing some capabilities, plus Excel complains when we load it. I don't need to generate charts or anything like that, just export raw data.
I am thinking of flat CSV files, but Excel is a customer requirement. I could work with CSV directly if I had a tool to convert to and from Excel. Given Excel 2007 is an XML-based (and zipped) file format, I am guessing this kind of library should be easy to find. However, what matters most to me are your comments and opinions.
---
EDIT: Ironically, in my opinion and following the answer with the most votes, the best Excel import&export library is no export at all. This is not the case for all scenarios, but it is for mine. XLS files support only 64k rows. XLSX supports up to 1M. The free libraries that I've tried feature bad performance (one second to load one row when you have 200k rows). I haven't tried the paid ones, as I feel they are overpriced for the value they deliver when all you need is a fast XLSX<->CSV conversion routine. | I'm going to throw my hand in for flat csv files, if only because you've got the greatest control over the code. Just make sure that you read in the rows and process them one at a time (reading the document to the end and splitting will eat up all of your memory - same with writing, stream it out).
Yes, the user will have to Save As CSV in Excel before you can process it, but perhaps this limitation can be overcome by training and providing clear instructions on the page?
Finally, when you export to the customer, if you set the mime type to text/csv, Excel is usually mapped to that type so it appears to the user to be 'an Excel file'. | I discovered the [Open XML SDK](http://msdn.microsoft.com/en-us/library/bb448854(office.14).aspx) since my original answer. It provides strongly typed classes for spreadsheet objects, among other things, and seems to be fairly easy to work with.
I am going to use it for reports in one of my projects. Alas, version 2.0 is not supposed to get released until late 2009 or 2010. | Import and Export Excel - What is the best library? | [
"",
"c#",
"asp.net",
"excel",
"export-to-excel",
"import-from-excel",
""
] |
*Nearly identical to [Query a Table's Foreign Key relationships](https://stackoverflow.com/questions/85978/query-a-tables-foreign-key-relationships), but for SQL Server 2000*
For a given table 'foo', I need a query to generate a set of tables that have foreign keys that point to foo. | ```
SELECT o2.name
FROM sysobjects o
INNER JOIN sysforeignkeys fk on o.id = fk.rkeyid
INNER JOIN sysobjects o2 on fk.fkeyid = o2.id
WHERE o.name = 'foo'
``` | Try this T-SQL:
```
select col_name(fkeyid, fkey) as column_name,
       object_name(rkeyid) as referenced_table_name,
       col_name(rkeyid, rkey) as referenced_column_name
from sysforeignkeys
where object_name(fkeyid) = 'tableNameHere'
order by constid
```
I've rewritten the query slightly to give you all of the other tables that rely on a particular table:
```
select object_name(fkeyid),
col_name(fkeyid, fkey) as column_name,
col_name(rkeyid, rkey) as referenced_column_name
from sysforeignkeys
where object_name(rkeyid) = 'tableNameHere'
order by constid
``` | SQL Server 2000 - Query a Table’s Foreign Key relationships | [
"",
"sql",
"sql-server",
"database",
"sql-server-2000",
"foreign-keys",
""
] |
I have a `date` and `time` column in my mysql table called `start_date` and `start_time` respectively. I want my users the ability to set reminders for themselves to be sent X hours before `start_time`, min. 1 hour and max 24 hours.
I'll be running a CRON script to send these reminders. Right now I do:
```
SELECT * FROM auctions WHERE start_date=CURDATE() AND status='0'
```
To get all the auctions that will be starting today and haven't yet started. My question is, how can I figure out if the time now is `X` hours before `start_time` so I can send them a reminder if it is.
Any suggestions at all? | Something like this:
```
SELECT col1, col2, col3
FROM records
WHERE (records.startDate BETWEEN NOW() AND ADDDATE(NOW(), INTERVAL 9 HOUR))
AND (records.status = '0');
``` | Is there some reason why you can't just use a simple timestamp field instead of one for date and one for time? That way you could find all the ones that start in the next 5 hours (say), by doing
```
select * from auctions where start_ts between now() and now() + interval '5 hours';
```
Note: the interval syntax varies slightly between databases, and that's the one for postgresql, so you might have to change it slightly for mysql. | MySQL date/time calculation | [
"",
"php",
"mysql",
"datetime",
""
] |
I'm thinking of something like Jython/Django? Does this exist? Or does Jython allow you to do everything Python can in Java, including Django (I'm not sure how Jython differs from Python)? | <http://wiki.python.org/jython/DjangoOnJython> | "I'm not sure how Jython differs from Python"
<http://www.jython.org/Project/>
> Jython is an implementation of the
> high-level, dynamic, object-oriented
> language Python seamlessly integrated
> with the Java platform.
<http://docs.python.org/reference/introduction.html#alternate-implementations>
> Python implemented in Java. This
> implementation can be used as a
> scripting language for Java
> applications, or can be used to create
> applications using the Java class
> libraries. It is also often used to
> create tests for Java libraries. More
> information can be found at the Jython
> website. | Is there a Python equivalent of Groovy/Grails for Java | [
"",
"python",
"django",
"grails",
"groovy",
"jython",
""
] |
I have been wondering what the best practices or ramifications are in setting up the PK in a M2M table in SQL Server. For instance:
I have 2 tables
* Users
* Roles
I am making a new table
* UserRole
Which has 2 fields
RoleId & UserID
now should I
1. create a UserRoleID as the PK and make UserID and RoleID the FKs
2. make the PK UserID AND RoleID and set them as FKs
3. something else
I would like to know the performance issues with each of the options and what the recommended best practices are. | Standard procedure for these cases is to have two indexes. The unique PK is a composite of the two fields, with the field of greater cardinality first (i.e. UserID); the second index is on just the secondary field (i.e. RoleID).
Then cluster on whichever is likely to be involved in more multirecord result sets (i.e. if querying for multiple roles per user, or multiple users per role). | Declare the PK as (UserID, RoleID). (Note: the order is important)
Declare UserID as an FK with the reference to the Users table.
Declare RoleID as an FK with the reference to the Roles table.
With any luck, your DBMS will give you a composite index on (UserID, RoleID) in that order.
With any luck this will speed up joins between users and roles. A good DBMS will give you a merge join for a join with no restrictions other than the join condition. A three way join should run pretty fast as well, assuming the number of roles is small.
When you join UserRoles and Roles, without joining in Users, you may find it's disappointingly slow. How often do you do that, and how important is speed in this case? If it is important, you can create an index on just RoleID. | Best Practice for PK in SQL Server | [
"",
"sql",
"sql-server",
"sql-server-2005",
"database-design",
"primary-key",
""
] |
I am curious why C# allows me to ignore delegate parameters in some cases but not others.
For instance this is permitted:
```
Action<int> action = delegate { Console.WriteLine("delegate"); };
```
but this is not:
```
Action<int> action = () => Console.WriteLine("lambda");
```
Is there a way to initialize a delegate and ignore the parameters using a lambda? I know that I can add a single parameter to the lambda and fix the previous line but this is more of an academic question pertaining to the compiler and why or how this works. | I believe that your first sample actually creates an anonymous function that is able to take on many different signatures whose body is the single statement `Console.WriteLine...`. Because it can match different signatures, it does not cause a problem. In the second sample, the lambda syntax itself defines a function that takes no parameters with the same body. Obviously the latter is not consistent with the defined Action so you get the error.
[C# Anonymous Method Reference](http://msdn.microsoft.com/en-us/library/0yw3tz5k.aspx)
> There is one case in which an
> anonymous method provides
> functionality not found in lambda
> expressions. Anonymous methods enable
> you to omit the parameter list, and
> this means that an anonymous method
> can be converted to delegates with a
> variety of signatures. This is not
> possible with lambda expressions. | To elaborate on tvanfosson's answer; this behavior is described in the C# 3.0 language specification (§7.14):
> The behavior of lambda-expressions and
> anonymous-method-expressions is the
> same except for the following points:
>
> • anonymous-method-expressions permit
> the parameter list to be omitted
> entirely, yielding convertibility to
> delegate types of any list of value
> parameters.
>
> • lambda-expressions permit parameter
> types to be omitted and inferred
> whereas anonymous-method-expressions
> require parameter types to be
> explicitly stated.
>
> • The body of a lambda-expression can
> be an expression or a statement block
> whereas the body of an
> anonymous-method-expression must be a
> statement block.
>
> • Since only lambda-expressions can
> have an expression body, no
> anonymous-method-expression can be
> successfully converted to an
> expression tree type (§4.6).
I think:
```
Action<int> action = () => Console.WriteLine("lambda");
```
is the equivalent of:
```
Action<int> action = delegate() { Console.WriteLine("delegate"); };
```
which wouldn't compile either. As Daniel Plaisted says, `()` is explicitly saying there aren't any parameters.
If there were an equivalent of delegate{} it might be:
```
Action<int> action = => Console.WriteLine("lambda")
```
Which isn't very pretty, and I suspect isn't in the spirit of lambda expressions. | Can I ignore delegate parameters with lambda syntax? | [
"",
"c#",
"delegates",
"lambda",
""
] |
I'm working on a project using C++, Boost, and Qt. I understand how to compress single files and bytestreams using, for example, the qCompress() function in Qt.
How do I zip a directory of multiple files, including subdirectories? I am looking for a cross-platform (Mac, Win, Linux) solution; I'd prefer not to fire off a bunch of new processes.
Is there a standard way to combine bytestreams from multiple files into a zipped archive, or maybe there is a convenience function or method that would be available in the Boost iostream library?
Many thanks for the assistance.
**Update**: The QuaZip library looks really great. There is an example in the download package (in the "tests" dir) that shows very clearly how to zip up a directory of files.
**Update 2**: After completing this task on my Linux build environment, I discovered that QuaZip doesn't work at all with the Visual Studio compiler. It may be possible to tackle all those compiler errors, but a word of caution to anyone looking down this path. | I have found the following two libraries:
* [ZipIOS++](http://zipios.sf.net). Seems to be "pure" C++. They don't list Windows explicitly as a supported platform. So I think you should try your luck yourself.
* [QuaZIP](http://quazip.sourceforge.net/index.html). Based on Qt4. Actually looks nice. They list Windows explicitly (Using mingw). Apparently, it is a C++ wrapper for [this] library.
Ah, and of course, I have ripped those sites from [this](http://lists.trolltech.com/qt-interest/2006-07/thread00070-0.html) Qt mailing list question about zipping/unzipping of directories :) | Just for the record...
Today, I needed to do something very similar in Visual C++. (Though I wanted to maintain the possibility of compiling the project on other platforms, I preferred not to adopt Qt just for this purpose.)
Ended up using the [Minizip](http://www.winimage.com/zLibDll/minizip.html) library. It is written in plain C, but devising a simple C++ wrapper around it was a breeze, and the end result works great, at least for my purposes. | How do I zip a directory of files using C++? | [
"",
"c++",
"qt",
"boost",
"compression",
"zlib",
""
] |
I have a script that seemed to work before but no longer does.
It is displaying this message after the script runs:
```
Array (
[0] => Unrouteable address [1] => -All RCPT commands were rejected with this error:\\
503-Unrouteable address 503 valid RCPT command must precede DATA
)
Array (
[0] => Unrouteable address [1] => -All RCPT commands were rejected with this error:\\
503-Unrouteable address 503 valid RCPT command must precede DATA
)
Array (
[0] => Unrouteable address [1] => -All RCPT commands were rejected with this error:\\
503-Unrouteable address 503 valid RCPT command must precede DATA
)
```
The actual function:
```
function emailUser($table, $subject, $message) {
$query = "SELECT * FROM $table";
$result=mysql_query($query);
while($row = mysql_fetch_array($result)) {
$i = 0;
while($i <= 0) {
$to = $row['email'];
$to_all .= '<li>'.$row['email'].'</li>';
$mail = new htmlMimeMail();
$mail->setHTML($message);
$mail->setSubject($subject);
$mail->setSMTPParams('mail.site.net', 25, 'site.net');
$mail->setReturnPath("email@site.net");
$mail->setFrom("email@site.net");
$mail_result = $mail->send(array($to), 'smtp');
if (!$mail_result) {
print_r($mail->errors);
//failure
} else {
//success
}
$i++;
}
}
print '<h3>Mail successfully sent to:</h3>';
print '<ul>'.$to_all.'</ul>';
}
```
Is there a better script to use? Maybe the email server has changed?
Any help is appreciated. | Try reordering to the following (setHTML is at the end):
```
$mail->setSubject($subject);
$mail->setSMTPParams('mail.site.net', 25, 'site.net');
$mail->setReturnPath("email@site.net");
$mail->setFrom("email@site.net");
$mail->setHTML($message);
``` | Are you sure that $row['email'] is the correct column?
The error seems to indicate to me that the recipient list that the function is receiving is suspect. | Sending mail from PHP script results in error | [
"",
"php",
"email",
""
] |
I am working on a web app using C# and asp.net I have been receiving an out of memory exception. What the app does is read a bunch of records(products) from a data source, could be hundreds/thousands, processes those records through settings in a wizard and then updates a different data source with the processes product information. Although there are multiple DB classes, right now all the logic is in one big class. The only reason for this, is all the information has to do with one thing, a product. Would it help the memory if I divided my app into different classes?? I don't think it would because if I divided the business logic into two classes, both of the classes would remain alive the entire time sending messages to each other, and so I don't know how this would help. I guess my other solution would be to find out what's sucking up all the memory. Is there a good tool you could recommend??
Thanks | Are you using datareaders to stream through your data? (to avoid loading too much into memory)
My gut is telling me this is a trivial issue to fix: don't pump DataTables with 1 million records; work through tables one row at a time, or in small batches. Release and dispose of objects when you are done with them. (Example: don't have `static List<Customer> allCustomers = AllCustomers()`)
Have a development rule that ensures no one reads tables into memory if there are more than X rows involved.
If you need a tool to debug this, look at [.net memory profiler](http://memprofiler.com/) or [windbg with the sos extension](http://msdn.microsoft.com/en-us/library/bb190764.aspx); both will allow you to sniff through your managed heaps.
Another note is, if you care about maintainability and would like to reduce your defect count, get rid of the SuperDuperDoEverything class and model information correctly in a way that is better aligned with your domain. The SuperDuperDoEverything class is a bomb waiting to explode. | Also note that you may not actually be running out of memory. What happens is that .NET goes to look for contiguous blocks of memory, and if it doesn't find any, it throws an OOM - even if you have plenty of total memory to cover the request.
Someone referenced both Perfmon and WinDBG. You could also set up adplus to capture a memory dump on crash - I believe the syntax is `adplus -crash -iis`. Once you have the memory dump, you can do something like:
```
.symfix C:\symbols
.reload
.loadby sos mscorwks
!dumpheap -stat
```
And that will give you an idea for what your high-memory objects are.
And of course, check out [Tess Fernandez's](http://blogs.msdn.com/tess) excellent blog, for example this article on [Memory Leaks with XML Serializers](http://blogs.msdn.com/tess/archive/2006/02/15/net-memory-leak-xmlserializing-your-way-to-a-memory-leak.aspx) and how to troubleshoot them.
If you are able to repro this in your dev environment, and you have VS Team Edition for Developers, there are memory profilers built right in. Just launch a new performance session, and run your app. It will spit out a nice report of what's hanging around.
Finally, make sure your objects don't define a destructor. This isn't C++, and there's nothing deterministic about it, other than it guarantees your object will survive a round of Garbage Collection since it has to be placed in the finalizer queue, and then cleaned up the next round. | Out of Memory Exception | [
"",
"c#",
"asp.net",
"memory-management",
""
] |
How can I convert a string of bytes into an int in python?
Say like this: `'y\xcc\xa6\xbb'`
I came up with a clever/stupid way of doing it:
```
sum(ord(c) << (i * 8) for i, c in enumerate('y\xcc\xa6\xbb'[::-1]))
```
I know there has to be something builtin or in the standard library that does this more simply...
This is different from [converting a string of hex digits](https://stackoverflow.com/questions/209513/convert-hex-string-to-int-in-python) for which you can use int(xxx, 16), but instead I want to convert a string of actual byte values.
UPDATE:
I kind of like James' answer a little better because it doesn't require importing another module, but Greg's method is faster:
```
>>> from timeit import Timer
>>> Timer('struct.unpack("<L", "y\xcc\xa6\xbb")[0]', 'import struct').timeit()
0.36242198944091797
>>> Timer("int('y\xcc\xa6\xbb'.encode('hex'), 16)").timeit()
1.1432669162750244
```
My hacky method:
```
>>> Timer("sum(ord(c) << (i * 8) for i, c in enumerate('y\xcc\xa6\xbb'[::-1]))").timeit()
2.8819329738616943
```
FURTHER UPDATE:
Someone asked in comments what's the problem with importing another module. Well, importing a module isn't necessarily cheap, take a look:
```
>>> Timer("""import struct\nstruct.unpack(">L", "y\xcc\xa6\xbb")[0]""").timeit()
0.98822188377380371
```
Including the cost of importing the module negates almost all of the advantage that this method has. I believe that this will only include the expense of importing it once for the entire benchmark run; look what happens when I force it to reload every time:
```
>>> Timer("""reload(struct)\nstruct.unpack(">L", "y\xcc\xa6\xbb")[0]""", 'import struct').timeit()
68.474128007888794
```
Needless to say, if you're doing a lot of executions of this method per one import then this becomes proportionally less of an issue. It's also probably I/O cost rather than CPU, so it may depend on the capacity and load characteristics of the particular machine. | You can also use the [struct](https://docs.python.org/3.6/library/struct.html) module to do this:
```
>>> struct.unpack("<L", "y\xcc\xa6\xbb")[0]
3148270713L
``` | In Python 3.2 and later, use
```
>>> int.from_bytes(b'y\xcc\xa6\xbb', byteorder='big')
2043455163
```
or
```
>>> int.from_bytes(b'y\xcc\xa6\xbb', byteorder='little')
3148270713
```
according to the [endianness](https://en.wikipedia.org/wiki/Endianness) of your byte-string.
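The two byte orders line up with the `struct` results quoted earlier in this question (a quick Python 3 sanity-check sketch; note `struct.unpack` needs a format matching the exact byte count, while `int.from_bytes` handles any length):

```python
import struct

data = b'y\xcc\xa6\xbb'

# "<L" / ">L" = 4-byte unsigned int, little-/big-endian respectively.
assert struct.unpack('<L', data)[0] == int.from_bytes(data, byteorder='little') == 3148270713
assert struct.unpack('>L', data)[0] == int.from_bytes(data, byteorder='big') == 2043455163
print('ok')
```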
This also works for bytestring-integers of arbitrary length, and for two's-complement signed integers by specifying `signed=True`. See the [docs for `from_bytes`](http://docs.python.org/dev/library/stdtypes.html#int.from_bytes). | How to convert a string of bytes into an int? | [
"",
"python",
"arrays",
"string",
""
] |
I am raising exceptions in two different places in my Python code:
```
holeCards = input("Select a hand to play: ")
try:
if len(holeCards) != 4:
raise ValueError(holeCards + ' does not represent a valid hand.')
```
AND **(edited to correct raising code)**
```
def __init__(self, card):
[...]
if self.cardFace == -1 or self.cardSuit == -1:
raise ValueError(card, 'is not a known card.')
```
For some reason, the first outputs a concatenated string like I expected:
```
ERROR: Amsterdam does not represent a valid hand.
```
But, the second outputs some weird hybrid of set and string:
```
ERROR: ('Kr', 'is not a known card.')
```
Why is the "+" operator behaving differently in these two cases?
**Edit:** The call to init looks like this:
```
card1 = PokerCard(cardsStr[0:2])
card2 = PokerCard(cardsStr[2:4])
``` | "card" probably represents a tuple containing the string "Kr." When you use the + operator on a tuple, you create a new tuple with the extra item added.
edit: nope, I'm wrong. Adding a string to a tuple:
```
>>> ("Kr",) + "foo"
```
generates an error:
```
TypeError: can only concatenate tuple (not "str") to tuple
```
It would probably be helpful to determine the type of "card." Do you know what type it is? If not, try putting in a print statement like:
```
if len(card) != 2:
print type(card)
raise ValueError(card + ' is not a known card.')
``` | Um, am I missing something or are you comparing the output of
```
raise ValueError(card, 'is not a known card.')
```
with
```
raise ValueError(card + ' is not a known card.')
```
???
The second uses "+", but the first uses ",", which does and should give the output you show!
(nb. the question was edited from a version with "+" in both cases. Perhaps this question should be deleted???) | Why does concatenation work differently in these two samples? | [
"",
"python",
"concatenation",
""
] |
The JDBC 3.0 spec talks about Connection (and Prepared Statement) pooling.
We have several standalone Java programs (i.e. we are not using an application server) that have been using DBCP to provide connection pooling. Should we continue to use DBCP, or can we take advantage of the JDBC-provided pooling and get rid of DBCP?
We are using MySQL (Connector/J) and will eventually be adding SQL Server support (jTDS); it's unlikely that we'll support any other databases.
EDIT: See comment below about my attempt to eliminate the connection pooling library. It appears that DBCP is still relevant (note that some commenters recommended C3P0 over DBCP). | Based on the encouragement of other posters, I attempted to eliminate DBCP and use the MySQL JDBC driver directly (Connector/J 5.0.4). I was unable to do so.
It appears that while the driver does provide a foundation for pooling, it does not provide the most important thing: an actual pool (the source code came in handy for this). It is left up to the application server to provide this part.
I took another look at the JDBC 3.0 documentation (I have a printed copy of something labeled "Chapter 11 Connection Pooling", not sure exactly where it came from) and I can see that the MySQL driver is following the JDBC doc.
When I look at DBCP, this decision starts to make sense. Good pool management provides many options. For example, when do you purge unused connection? which connections do you purge? is there a hard or soft limit on the max number of connections in the pool? should you test a connection for "liveness" before giving it to a caller? etc.
Summary: if you're doing a standalone Java application, you need to use a connection pooling library. Connection pooling libraries are still relevant. | DBCP has serious flaws. I don't think it's appropriate for a production application, especially when so many drivers support pooling in their `DataSource` natively.
The straw that broke the camel's back, in my case, was when I found that the entire pool was locked the whole time a new connection attempt is made to the database. So, if something happens to your database that results in slow connections or timeouts, other threads are blocked when they try to return a connection to the pool—even though they are done using a database.
Pools are meant to improve performance, not degrade it. DBCP is naive, complicated, and outdated. | Is DBCP (Apache Commons Database Connection Pooling) still relevant? | [
"",
"java",
"database",
"jdbc",
"pooling",
"apache-commons-dbcp",
""
] |
I've got this nasty problem where sending multiple, large messages in quick succession from a Java (NIO) server (running Linux) to a client will lead to truncated packets. The messages have to be large and sent very rapidly for the problem to occur. Here's basically what my code is doing (not actual code, but more-or-less what's happening):
```
//-- setup stuff: --
Charset charset = Charset.forName("UTF-8");
CharsetEncoder encoder = charset.newEncoder();
String msg = "A very long message (let's say 20KB)...";
//-- inside loop to handle incoming connections: --
ServerSocketChannel ssc = (ServerSocketChannel)key.channel();
SocketChannel sc = ssc.accept();
sc.configureBlocking(false);
sc.socket().setTcpNoDelay(true);
sc.socket().setSendBufferSize(1024*1024);
//-- later, actual sending of messages: --
for (int n=0; n<20; n++){
ByteBuffer bb = encoder.encode(CharBuffer.wrap(msg+'\0'));
sc.write(bb);
bb.rewind();
}
```
So, if the packets are long enough and sent as quickly as possible (i.e. in a loop like this with no delay), then on the other end it often comes out something like this:
```
[COMPLETE PACKET 1]
[COMPLETE PACKET 2]
[COMPLETE PACKET 3]
[START OF PACKET 4][SOME OR ALL OF PACKET 5]
```
There is data loss, and the packets start to run together, such that the start of packet 5 (in this example) arrives in the same message as the start of packet 4. It's not just truncating; it's running the messages together.
I imagine that this is related to the TCP buffer or "window size", or that the server here is just providing data faster than the OS, or network adapter, or something, can handle it. But how do I check for, and prevent it from happening? If I reduce the length of message per use of sc.write(), but then increase the repetitions, I'll still run into the same problem. It seems to simply be an issue with the amount of data in a short amount of time. I don't see that sc.write() is throwing any exceptions either (I know that in my example above I'm not checking, but have in my tests).
I'd be happy if I could programmatically check if it is not ready for more data yet, and put in a delay, and wait until it is ready. I'm also not sure if "sc.socket().setSendBufferSize(1024*1024);" has any effect, or if I'd need to adjust this on the Linux side of things. Is there a way to really "flush" out a SocketChannel? As a lame workaround, I could try to explicitly force a complete send of anything that is buffered any time I'm trying to send a message of over 10KB, for example (which is not that often in my application). But I don't know of any way to force a send of the buffer (or wait until it has sent). Thanks for any help! | There are many reasons why sc.write() would not send some or all of the data. You have to check the return value and/or the number of bytes remaining in the buffer.
```
for (int n=0; n<20; n++){
ByteBuffer bb = encoder.encode(CharBuffer.wrap(msg+'\0'));
if(sc.write(bb) > 0 && bb.remaining() == 0) {
// all data sent
} else {
// could not send all data.
}
bb.rewind();
}
``` | You are not checking the return value of:
```
sc.write(bb);
```
This returns the number of bytes written, which might be less than the data available in your buffer. Because of how NIO works, you can probably just call remaining() on your ByteBuffer to see if there are any bytes left. | Java NIO: Sending large messages quickly leads to truncated packets and data loss | [
"",
"java",
"networking",
"network-programming",
"nio",
""
] |
Has anyone tried ASP.NET MVC using IronPython? Having done a lot of Python development recently, it would be nice to continue with the language as I go into a potential ASP.NET MVC project.
I'm especially interested in exploiting the dynamic aspects of Python with .NET features such as LINQ and want to know if this will be possible. The other route that may be viable for certain dynamic programming would be C# 4.0 with its `dynamic` keyword.
Thoughts, experiences? | Yes, [there is an MVC example from the DLR team](http://www.codeplex.com/aspnet/Release/ProjectReleases.aspx?ReleaseId=17613).
You might also be interested in [Spark](http://sparkviewengine.com/documentation/ironpython). | Using IronPython in ASP.NET MVC: <http://www.codevoyeur.com/Articles/Tags/ironpython.aspx>
This page contains the following articles:
* A Simple IronPython ControllerFactory for ASP.NET MVC
* A Simple IronPython ActionFilter for ASP.NET MVC
* A Simple IronPython Route Mapper for ASP.NET MVC
* An Unobtrusive IronPython ViewEngine for ASP.NET MVC | IronPython on ASP.NET MVC | [
"",
"python",
"asp.net-mvc",
"linq",
"dynamic",
"ironpython",
""
] |
I was just wondering what network libraries there are out there for Python for building a TCP/IP server. I know that Twisted might jump to mind but the documentation seems scarce, sloppy, and scattered to me.
Also, would using Twisted even have a benefit over rolling my own server with select.select()? | I must agree that the documentation is a bit terse but the tutorial gets you up and running quickly.
<http://twistedmatrix.com/projects/core/documentation/howto/tutorial/index.html>
The event-based programming paradigm of Twisted and its deferreds might be a bit weird at the start (it was for me), but it is worth the learning curve.
You'll get up and running doing much more complex stuff more quickly than if you were to write your own framework, and it also means one less thing to bug-hunt, as Twisted is very much production-proven.
I don't really know of another framework that can offer as much as Twisted can, so my vote would definitely go for Twisted even if the docs aren't for the faint of heart.
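For comparison, the standard library's `SocketServer` module (`socketserver` in Python 3) will stand up a TCP server in a few lines. A minimal echo-server sketch, in Python 3 syntax (the handler and function names are illustrative):

```
import socket
import socketserver
import threading

class EchoHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Echo each line back to the client until it disconnects.
        for line in self.rfile:
            self.wfile.write(line)

def start_echo_server(port=0):
    """Start the server on a background thread; port 0 picks a free port."""
    server = socketserver.TCPServer(("127.0.0.1", port), EchoHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    server = start_echo_server()
    host, port = server.server_address
    with socket.create_connection((host, port)) as conn:
        conn.sendall(b"hello\n")
        print(conn.makefile().readline().strip())
    server.shutdown()
```

Twisted's reactor runs essentially this accept-and-dispatch loop for you, with scheduling, protocol abstractions, and deferreds layered on top.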
I agree with Greg that SocketServer is a nice middle ground but depending on the target audience of your application and the design of it you might have some nice stuff to look forward to in Twisted (the PerspectiveBroker which is very useful comes to mind - <http://twistedmatrix.com/projects/core/documentation/howto/pb-intro.html>) | The standard library includes SocketServer and related modules which might be sufficient for your needs. This is a good middle ground between a complex framework like Twisted, and rolling your own select() loop. | Good Python networking libraries for building a TCP server? | [
"",
"python",
"networking",
"twisted",
""
] |
I could be totally wrong here, but as I understand it, C++ doesn't really have a native "pointer to member function" type. I know you can do tricks with Boost and mem\_fun etc. But why did the designers of C++ decide not to have a 64-bit pointer containing a pointer to the function and a pointer to the object, for example?
What I mean specifically is a pointer to a member function of a *particular* object of *unknown type*. I.E. something you can use for a **callback**. This would be a type which contains *two* values. The first value being a pointer to the *function*, and the second value being a pointer to the *specific instance* of the object.
What I do not mean is a pointer to a general member function of a class. E.G.
```
int (Fred::*)(char,float)
```
It would have been so useful and made my life easier.
Hugo | @RocketMagnet - This is in response to your [other question](https://stackoverflow.com/questions/462491/again-why-doesnt-c-have-a-pointer-to-member-function-type), the one which was labeled a duplicate. I'm answering *that* question, not this one.
In general, C++ pointer to member functions can't portably be cast across the class hierarchy. That said you can often get away with it. For instance:
```
#include <iostream>
using std::cout;
class A { public: int x; };
class B { public: int y; };
class C : public B, public A { public: void foo(){ cout << "a.x == " << x << "\n";}};
int main() {
typedef void (A::*pmf_t)();
C c; c.x = 42; c.y = -1;
pmf_t mf = static_cast<pmf_t>(&C::foo);
(c.*mf)();
}
```
Compile this code, and the compiler **rightly** complains:
```
$ cl /EHsc /Zi /nologo pmf.cpp
pmf.cpp
pmf.cpp(15) : warning C4407: cast between different pointer to member representations, compiler may generate incorrect code
$
```
So, to answer "why doesn't C++ have a pointer-to-member-function-on-void-class?": this imaginary base-class-of-everything has no members, so there's no value you could safely assign to such a pointer! "void (C::*)()" and "void (void::*)()" are mutually incompatible types.
Now, I bet you're thinking "wait, I've cast member-function-pointers just fine before!" Yes, you may have, using reinterpret\_cast and single inheritance. This is in the same category as other reinterpret casts:
```
#include <iostream>
using std::cout;
class A { public: int x; };
class B { public: int y; };
class C : public B, public A { public: void foo(){ cout << "a.x == " << x << "\n";}};
class D { public: int z; };
int main() {
C c; c.x = 42; c.y = -1;
// this will print -1
D& d = reinterpret_cast<D&>(c);
cout << "d.z == " << d.z << "\n";
}
```
So even if `void (void::*)()` did exist, there would be nothing you could safely/portably assign to it.
Traditionally, you use functions of signature `void (*)(void*)` anywhere you'd think of using `void (void::*)()`, because while member-function-pointers don't cast well up and down the inheritance hierarchy, void pointers do cast well. Instead:
```
#include <iostream>
using std::cout;
class A { public: int x; };
class B { public: int y; };
class C : public B, public A { public: void foo(){ cout << "a.x == " << x << "\n";}};
void do_foo(void* ptrToC){
C* c = static_cast<C*>(ptrToC);
c->foo();
}
int main() {
typedef void (*pf_t)(void*);
C c; c.x = 42; c.y = -1;
pf_t f = do_foo;
f(&c);
}
```
So to your question: why doesn't C++ support this sort of casting? Pointer-to-member-function types already have to deal with virtual vs non-virtual base classes, and virtual vs non-virtual member functions, all in the same type, inflating them to 4\*sizeof(void\*) on some platforms. I think it's because it would further complicate the implementation of pointer-to-member-functions, and raw function pointers already solve this problem so well.
Like others have commented, C++ gives library writers enough tools to get this done, and then 'normal' programmers like you and me should use those libraries instead of sweating these details.
EDIT: marked community wiki. Please only edit to include relevant references to the C++ standard, and add in italic. (esp. add references to standard where my understanding was wrong! ^\_^ ) | As others pointed out, C++ does have a member function pointer type.
The term you were looking for is "bound function". The reason C++ doesn't provide syntax sugar for function binding is because of its philosophy of only providing the most basic tools, with which you can then build all you want. This helps keep the language "small" (or at least, less mind bogglingly huge).
Similarly, C++ doesn't have a lock{} primitive like C#'s but it has RAII which is used by boost's scoped\_lock.
There is of course the school of thought that says you should add syntax sugar for everything that might be of use. For better or worse, C++ does not belong to that school. | Why doesn't C++ have a pointer to member function type? | [
"",
"c++",
"function",
"pointers",
"member",
""
] |
When databinding my xaml to some data I often use the "get" part of a property to do some logic. Like giving to sum of totals of a list or a check if something is positive.
For example:
```
public List<SomeClass> ListOfSomeClass{get;set;}
public double SumOfSomeClass
{
get
{
return ListOfSomeClass.Sum(s => s.Totals);
}
}
public bool SumPositive
{
get
{
if(SumOfSomeClass >= 0)
return true;
else
return false;
}
}
```
This way I can bind to SumPositive and SumOfSomeClass. Is this considered good practice? Even if it gets more complex than this? Or would it be better call a method and return the outcome? What about calls to another class or even a database? | Property getters are expected to be fast and idempotent (i.e. no destructive actions should be performed there). Though it's perfectly fine to iterate over an in-memory collection of objects, I wouldn't recomment doing any kind of heavy lifting in either *get* or *set* parts. And speaking of iterating, I'd still cache the result to save a few milliseconds. | Yes, unless it is an operation that might have performance implications. In that case you should use a method instead (as it is more intuitive to the end user that a method might be slow whereas a property will be quick) | Logic in get part of property. Good practice? | [
"",
"c#",
"wpf",
"data-binding",
"properties",
""
] |
Given a list of strings, I want to sort it alphabetically and remove duplicates. I know I can do this:
```
from sets import Set
[...]
myHash = Set(myList)
```
but I don't know how to retrieve the list members from the hash in alphabetical order.
I'm not married to the hash, so any way to accomplish this will work. Also, performance is not an issue, so I'd prefer a solution that is expressed in code clearly to a fast but more opaque one. | A list can be sorted and deduplicated using built-in functions:
```
myList = sorted(set(myList))
```
* [`set`](http://docs.python.org/library/functions.html#func-set) is a built-in function for Python >= 2.3
* [`sorted`](http://docs.python.org/library/functions.html#sorted) is a built-in function for Python >= 2.4 | If your input is already sorted, then there may be a simpler way to do it:
```
from operator import itemgetter
from itertools import groupby
unique_list = list(map(itemgetter(0), groupby(yourList)))
``` | How to sort and remove duplicates from Python list? | [
"",
"python",
"list",
"sorting",
"unique",
""
] |
I would like to provide a WHERE condition on an inner query by specifying innertable.id = outertable.id. However, MySQL (5.0.45) reports "Unknown column 'outertable.id' in 'where clause'". Is this type of query possible?
The inner query is pivoting rows to columns using a GROUP BY. This could be entirely be performed in the outer query, but would possibly incur additional overhead due to the extra joins.
Alternatively, I can leave off the WHERE condition in the inner query and instead specify an ON outertable.id = innerquery.id, but it would then fetch the entire inner query rowset to join against the outer, which is inefficient.
The actual SQL appears below:
```
select t.ticketid, u.userid, t.fullname, u.loginapi_userid, t.email, tp.subject, tp.contents, a.PhoneNumber, a.Location, a.Extension, a.BusinessUnit, a.Department
from swtickets t
inner join swticketposts tp on t.ticketid = tp.ticketid
inner join swusers u on t.userid = u.userid
left join
(
select
cfv.typeid,
min(case cfv.customfieldid when 1 then cfv.fieldvalue end) as 'PhoneNumber',
min(case cfv.customfieldid when 3 then cfv.fieldvalue end) as 'Location',
min(case cfv.customfieldid when 5 then cfv.fieldvalue end) as 'Extension',
min(case cfv.customfieldid when 8 then cfv.fieldvalue end) as 'BusinessUnit',
min(case cfv.customfieldid when 9 then cfv.fieldvalue end) as 'Department'
from swcustomfieldvalues cfv
where cfv.typeid = t.ticketid
group by cfv.typeid
) as a on 1 = 1
where t.ticketid = 2458;
``` | The answer to your question is no, it is not possible to reference correlation names as you are doing. The derived table is produced by your inner query before the outer query starts evaluating joins. So the correlation names like `t`, `tp`, and `u` are not available to the inner query.
To solve this, I'd recommend using the same constant integer value in the inner query, and then join the derived table in the outer query using a real condition instead of `1=1`.
```
SELECT t.ticketid, u.userid, t.fullname, u.loginapi_userid, t.email,
tp.subject, tp.contents, a.PhoneNumber, a.Location, a.Extension,
a.BusinessUnit, a.Department
FROM swtickets t
INNER JOIN swticketposts tp ON (t.ticketid = tp.ticketid)
INNER JOIN swusers u ON (t.userid = u.userid)
LEFT OUTER JOIN (
SELECT cfv.typeid,
MIN(CASE cfv.customfieldid WHEN 1 THEN cfv.fieldvalue END) AS 'PhoneNumber',
MIN(CASE cfv.customfieldid WHEN 3 THEN cfv.fieldvalue END) AS 'Location',
MIN(CASE cfv.customfieldid WHEN 5 THEN cfv.fieldvalue END) AS 'Extension',
MIN(CASE cfv.customfieldid WHEN 8 THEN cfv.fieldvalue END) AS 'BusinessUnit',
MIN(CASE cfv.customfieldid WHEN 9 THEN cfv.fieldvalue END) AS 'Department'
FROM swcustomfieldvalues cfv
WHERE cfv.typeid = 2458
GROUP BY cfv.typeid
) AS a ON (a.typeid = t.ticketid)
WHERE t.ticketid = 2458;
``` | You're using the Entity-Attribute-Value design, and there's ultimately no way to make this scalable if you try to generate conventional result sets. Don't try to do this in one query.
Instead, query your normalized tables first:
```
SELECT t.ticketid, u.userid, t.fullname, u.loginapi_userid, t.email,
tp.subject, tp.contents
FROM swtickets t
INNER JOIN swticketposts tp ON (t.ticketid = tp.ticketid)
INNER JOIN swusers u ON (t.userid = u.userid)
WHERE t.ticketid = 2458;
```
Then query your custom fields, with the result on multiple rows of the result set:
```
SELECT cfv.customfieldid, cfv.fieldvalue
FROM swcustomfieldvalues cfv
WHERE cfv.typeid = 2458;
```
You'll get multiple rows in the result set, one row for each custom field:
```
+---------------+--------------+
| customfieldid | fieldvalue |
+---------------+--------------+
| 1 | 415-555-1234 |
| 3 | Third office |
| 5 | 123 |
| 8 | Support |
| 9 | Engineering |
+---------------+--------------+
```
You then need to write application code to map the result-set fields to the application object fields, in a loop.
Using an Entity-Attribute-Value table in this way is more scalable both in terms of performance and code maintenance. | MySQL correlated subquery in JOIN syntax | [
"",
"sql",
"mysql",
"join",
"subquery",
"entity-attribute-value",
""
] |
I was told that in c# attributes are not allowed on the auto-implemented properties. Is that true? if so why?
EDIT: I got this information from a popular book on LINQ and could not believe it!
EDIT: Refer to page 34 of LINQ Unleashed by Paul Kimmel, where he says "*Attributes are not allowed on auto-implemented properties, so roll your own if you need an attribute*" | The easiest way to prove that's wrong is to just test it:
```
using System;
using System.ComponentModel;
using System.Reflection;
class Test
{
[Description("Auto-implemented property")]
public static string Foo { get; set; }
static void Main(string[] args)
{
var property = typeof(Test).GetProperty("Foo");
var attributes = property.GetCustomAttributes
(typeof(DescriptionAttribute), false);
foreach (DescriptionAttribute description in attributes)
{
Console.WriteLine(description.Description);
}
}
}
```
I suggest you email the author so he can publish it as an erratum. If he meant that you can't apply an attribute to the *field*, this will give him a chance to explain more carefully. | You can apply attributes to automatic properties without a problem.
Quote from [MSDN](http://msdn.microsoft.com/en-us/library/bb384054.aspx):
> Attributes are permitted on
> auto-implemented properties but
> obviously not on the backing fields
> since those are not accessible from
> your source code. If you must use an
> attribute on the backing field of a
> property, just create a regular
> property. | Do auto-implemented properties support attributes? | [
"",
"c#",
"attributes",
"automatic-properties",
""
] |
I am trying to convert some [VBA](http://en.wikipedia.org/wiki/Visual_Basic_for_Applications) code to C#. I am new to C#. Currently I am trying to open an Excel file from a folder and if it does not exist then create it. I am trying something like the following. How can I make it work?
```
Excel.Application objexcel;
Excel.Workbook wbexcel;
bool wbexists;
Excel.Worksheet objsht;
Excel.Range objrange;
objexcel = new Excel.Application();
if (Directory("C:\\csharp\\error report1.xls") = "")
{
wbexcel.NewSheet();
}
else
{
wbexcel.Open("C:\\csharp\\error report1.xls");
objsht = ("sheet1");
}
objsht.Activate();
``` | You need to have installed Microsoft Visual Studio Tools for Office (VSTO).
VSTO can be selected in the Visual Studio installer under Workloads > Web & Cloud > Office/SharePoint Development.
After that create a generic .NET project and add a reference to `Microsoft.Office.Interop.Excel` via 'Add Reference... > Assemblies' dialog.
```
Application excel = new Application();
Workbook wb = excel.Workbooks.Open(path);
```
`Missing.Value` is a special reflection struct used as a placeholder for optional parameters you don't want to supply (older interop signatures require it)
---
In newer versions, the assembly reference required is called `Microsoft Excel 16.0 Object Library`. If you do not have the latest version installed you might have `Microsoft Excel 15.0 Object Library`, or an older version, but it is the same process to include.
[](https://i.stack.imgur.com/J6wmd.png) | ```
FileInfo fi = new FileInfo("C:\\test\\report.xlsx");
if(fi.Exists)
{
System.Diagnostics.Process.Start(@"C:\test\report.xlsx");
}
else
{
//file doesn't exist
}
``` | How to open an Excel file in C#? | [
"",
"c#",
".net",
"excel",
"vsto",
""
] |
Can anyone direct me to a smallish C# application that would be symbolic of the "right way" to design a program? I'm looking for a relatively simple (potentially trivial) program from which to analyze and learn.
The application should have a relatively trivial problem to solve and should solve it in a rather straight-forward way while showing off best practices/good object oriented design.
I've been studying C# rather a lot of late, and while I'm becoming confident in my understanding of parts of the .Net framework and the C# syntax, I'm having difficulties with the general concept of design and how a project fits together.
Thanks for any sources you can provide! | There are plenty of projects on this site:
<http://www.codeplex.com/> | First, take a look at the previous question on this topic. It's at <https://stackoverflow.com/questions/143088/open-source-c-projects-that-have-very-high-code-quality-to-learn-from>.
To that list I would add:
* ASP.NET MVC Storefront (MVC reference)
* SubSonic
* Rawr (good Windows Forms app)
All of these are on Codeplex. | Where can I find the source for a small, well-designed C# application (for learning purposes)? | [
"",
"c#",
""
] |
I'm working on creating a call back function for an ASP.NET cache item removal event.
The documentation says I should call a method on an object or class I know will exist (will be in scope), such as a static method, but it said I need to ensure the static method is thread safe.
Part 1: What are some examples of things I could do to make it un-thread safe?
Part 2: Does this mean that if I have
```
static int addOne(int someNumber){
int foo = someNumber;
return foo +1;
}
```
and I call Class.addOne(5); and Class.addOne(6); simultaneously, might I get 6 or 7 returned depending on which invocation sets foo first? (i.e. a race condition) | That *addOne* function is indeed thread safe because it doesn't access any data that could be accessed by another thread. Local variables cannot be shared among threads because each thread gets its own stack. You do have to make sure, however, that the function parameters are value types and not reference types.
```
static void MyFunction(int x) { ... } // thread safe. The int is copied onto the local stack.
static void MyFunction(Object o) { ... } // Not thread safe. Since o is a reference type, it might be shared among multiple threads.
``` | No, addOne is thread-safe here - it only uses local variables. Here's an example which *wouldn't* be thread-safe:
```
class BadCounter
{
private static int counter;
public static int Increment()
{
int temp = counter;
temp++;
counter = temp;
return counter;
}
}
```
Here, two threads could both call Increment at the same time, and end up only incrementing once.
(Using `return ++counter;` would be just as bad, by the way - the above is a more explicit version of the same thing. I expanded it so it would be more obviously wrong.)
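For contrast, a thread-safe version of the same counter. One common fix (a sketch) replaces the read-modify-write sequence with an atomic `Interlocked.Increment`:

```
using System.Threading;

class SafeCounter
{
    private static int counter;

    public static int Increment()
    {
        // Atomic read-modify-write: no other thread can interleave
        // between the read and the write.
        return Interlocked.Increment(ref counter);
    }
}
```

A `lock` around the original three statements would work too; `Interlocked` just avoids taking a full monitor for a single integer update.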
The details of what is and isn't thread-safe can be quite tricky, but in general if you're not mutating any state (other than what was passed into you, anyway - bit of a grey area there) then it's *usually* okay. | How do I know if this C# method is thread safe? | [
"",
"c#",
"concurrency",
"thread-safety",
"static-methods",
""
] |
I am attempting to troubleshoot an intermittent failure that appears to be related to removing an object from a HashMap and then putting that same object back using a new key. My HashMap is created as follows:
```
transactions = new HashMap<Short, TransactionBase>();
```
The code that does the re-assignment is as follows:
```
transactions.remove(transaction.tran_no);
transaction.tran_no = generate_transaction_id();
transactions.put(transaction.tran_no, transaction);
```
The intermittent behaviour that I am seeing is that code that executes immediately after this, and that depends upon the transaction object being locatable, does not appear to find the transaction object using the new transaction id. However, at some future point, the transaction can be located. So, grasping at straws: is there any sort of asynchronous effect to put() or remove() that can cause this sort of behaviour?
I should mention that, to the best of my knowledge, the container is being accessed by only one thread. I have already read in the documentation that class HashMap is not "synchronised". | There are no "asynchronous" effects in the HashMap class. As soon as you put something in there, it's there. You should double- and triple-check to make sure that there are no threading issues.
The only other thing I can think of is that you're making a copy of the HashMap somewhere. The copy obviously won't be affected by you adding stuff into the original, or vice versa. | There is a slight difference between remove/get and put (although my guess is that you have a threading issue).
The parameter for `remove`/`get` is of type `Object`; for `put` it is of type `K`. The reason for that has been stated many times before. This means it has problems with boxing. I'm not even going to guess what the rules are. If a value gets boxed as a `Byte` in one place and a `Short` in another, then those two objects cannot be equal.
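That boxing pitfall is easy to demonstrate with `Short` keys like the asker's (a sketch; the map contents are illustrative):

```
import java.util.HashMap;
import java.util.Map;

public class BoxingPitfall {
    public static void main(String[] args) {
        Map<Short, String> transactions = new HashMap<Short, String>();
        short tranNo = 5;
        transactions.put(tranNo, "tx"); // key autoboxed to Short(5)

        // The int literal 5 autoboxes to Integer(5), which is never
        // equal to Short(5) -- so nothing is removed here.
        transactions.remove(5);
        System.out.println(transactions.size()); // still 1

        transactions.remove(tranNo);             // Short(5): removed
        System.out.println(transactions.size()); // 0
    }
}
```

If the asker's lookups ever mix short variables with int expressions (say, a transaction id computed as an int), this is exactly the kind of intermittent "not found" symptom that results.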
There is a similar issue with `List.remove(int)` and `List.remove(Object)`. | strange HashMap.put() behaviour | [
"",
"java",
"hashmap",
""
] |
Using Python, how can information such as CPU usage, memory usage (free, used, etc), process count, etc be returned in a generic manner so that the same code can be run on Linux, Windows, BSD, etc?
Alternatively, how could this information be returned on all the above systems with the code specific to that OS being run only if that OS is indeed the operating environment? | Regarding cross-platform: your best bet is probably to write platform-specific code, and then import it conditionally. e.g.
```
import sys
if sys.platform == 'win32':
import win32_sysinfo as sysinfo
elif sys.platform == 'darwin':
import mac_sysinfo as sysinfo
elif 'linux' in sys.platform:
import linux_sysinfo as sysinfo
#etc
print 'Memory available:', sysinfo.memory_available()
```
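One of those per-platform modules might, for instance, parse `/proc/meminfo`. A sketch in Python 3 syntax (the `memory_available` name just matches the hypothetical modules above, and `MemAvailable` only exists on Linux kernels 3.14+):

```
def parse_meminfo(text):
    """Parse /proc/meminfo-style 'Key:   12345 kB' lines into a dict of ints."""
    info = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        fields = rest.split()
        if fields:
            info[key.strip()] = int(fields[0])
    return info

def memory_available():
    with open("/proc/meminfo") as f:
        info = parse_meminfo(f.read())
    # Fall back to MemFree on kernels older than 3.14.
    return info.get("MemAvailable", info.get("MemFree", 0))
```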
For specific resources, as Anthony points out you can access `/proc` under linux. For Windows, you could have a poke around at the [Microsoft Script Repository](http://www.microsoft.com/technet/scriptcenter/scripts/python/default.mspx?mfr=true). I'm not sure where to get that kind of information on Macs, but I can think of a great website where you could ask :-) | In a Linux environment you could read from the /proc file system.
```
~$ cat /proc/meminfo
MemTotal: 2076816 kB
MemFree: 130284 kB
Buffers: 192664 kB
Cached: 1482760 kB
SwapCached: 0 kB
Active: 206584 kB
Inactive: 1528608 kB
HighTotal: 1179484 kB
HighFree: 120768 kB
LowTotal: 897332 kB
LowFree: 9516 kB
SwapTotal: 2650684 kB
SwapFree: 2650632 kB
Dirty: 64 kB
Writeback: 12 kB
AnonPages: 59668 kB
Mapped: 22008 kB
Slab: 200744 kB
PageTables: 1220 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
CommitLimit: 3689092 kB
Committed_AS: 263892 kB
VmallocTotal: 114680 kB
VmallocUsed: 3604 kB
VmallocChunk: 110752 kB
``` | How can I return system information in Python? | [
"",
"python",
"operating-system",
""
] |
My question is pretty vague :o) - But here is an example :
When I was writing C code, I was able to log counter's value when something failed :
```
<...>
for ( int i = 0 ; i < n ; i++ )
if ( SUCCESS != myCall())
Log( "Failure, i = %d", i );
<...>
```
Now, using exceptions, I get this :
```
try
{
<...>
for ( int i = 0 ; i < n ; i++ )
myCall();
<...>
}
catch ( Exception exception )
{
Log( "Failure ! Maybe in myCall() ? Don't know. i's value ? No clue." );
}
```
Of course, one may declare "i" outside of the try/catch statement (and this is what I'm doing). But I don't like it - I like to declare variables where they are used, not before.
But maybe I am missing something here. Do you have any elegant solution?
Thanks in advance!
Sylvain.
ADDED: myCall() is an obscure API call - I have no clue what it can throw. Also, I can of course add a try/catch block around each call, but then wouldn't I be better off using return codes? Wouldn't I then add a lot of noise around the important lines of code? | how about:
```
for(int i = 0; i < n; i++)
{
try
{
myCall();
}
catch(Exception e)
{
Log(String.Format("Problem with {0}", i));
}
}
``` | I think you've got it wrong, and it's not surprising as many other people do too.
Exceptions are not to be used for program flow. Read that again, it's important.
Exceptions are for the "whoo, that wasn't supposed to happen" errors that you hope never to see at runtime. Obviously you will see them the moment your first user uses it, which is why you have to consider the cases where they might happen, but you should still not try to put code in to catch, handle and continue as if nothing had happened.
For errors like that, you want error codes. If you use exceptions as if they were 'super error codes' then you end up writing code like you mentioned - wrapping every method call in a try/catch block! You might as well return an enum instead, it's a *lot* faster and significantly easier to read an error return code than litter everything with 7 lines of code instead of 1. (it's also more likely to be correct code too - see erikkallen's reply)
Now, in the real world, it is often the case that methods throw exceptions where you'd rather they didn't (EndOfFile for example), in which case you have to use the "try/catch wrapper" anti-pattern, but if you get to design your methods, don't use exceptions for day-to-day error handling - use them for exceptional circumstances only. (yes, I know it's difficult to get this kind of design right, but so is much of design work) | Exceptions vs return codes : do we lose something (while gaining something else)? | [
"",
"c#",
"exception",
"return-code",
""
] |
I have a small problem with a script.
I want to have a default action on `:hover` for clients with Javascript disabled, but for those with Javascript enabled I want another action (actually... same action, but I want to add a small transition effect).
So... How can I do this? I am using jQuery. | Apply two classes to the relevant element: one contains the hover behaviour, and one contains all the other styling.
You can then use the jQuery
```
$(element).removeClass('hover');
```
method to remove the class with the hover behaviour and then apply whatever you want using
```
$(element).bind('mouseover', function () { doSomething(); });
$(element).bind('mouseout', function () { doSomething(); });
``` | How about putting the `:hover` fall-back in a stylesheet that is only loaded if javascript is disabled?
```
<noscript>
<link href="noscript.css" rel="stylesheet" type="text/css" />
</noscript>
``` | How do I remove :hover? | [
"",
"javascript",
"jquery",
""
] |
How do I clear the value from a cell and make it NULL? | If you've opened a table and you want to clear an existing value to NULL, click on the value, and press `Ctrl`+`0`. | I think Zack [properly answered](https://stackoverflow.com/a/444658/1165522) the question but just to cover all the bases:
```
Update myTable set MyColumn = NULL
```
This would set the entire column to null as the Question Title asks.
To set a specific row on a specific column to null use:
```
Update myTable set MyColumn = NULL where Field = Condition.
```
This would set a specific cell to null as the inner question asks. | How do I set a column value to NULL in SQL Server Management Studio? | [
"",
"sql",
"sql-server",
"ssms",
""
] |
I am trying to build an SQL Statement for the following search scenario:
I am trying to return all of the columns of an individual record in Table A based on the value of the status column in Table B. Each record in table A can have multiple rows in table B, making it a one-to-many relationship. The status column is nullable with a data type of integer.
Here are the possible values for status in table B:
* NULL = Pending,
* 1 = Approved,
* 2 = Denied,
* 6 = Forced Approval,
* 7 = Forced Denial
The end user can search on the following scenarios:
* Approved - All table B records must have a value of 1 or 6 for status.
* Denied - One table B record must have a value of 2 or 5. Any other records can have 1,6, or null.
* Pending - All table B records can have a value of 1,6 or null. One record must be null because it is not considered completed.
**UPDATE**
I consulted with one of our DBAs and he developed the following solution:
Approved:
```
SELECT a.* FROM TableA a INNER JOIN TableB b ON b.id = a.id
WHERE
(b.status in (1,6) and b.status IS NOT NULL) AND
b.id NOT IN (SELECT id from TableB WHERE status IS NULL)
AND b.id NOT IN (SELECT id from TableB WHERE status in (2,7))
```
Denied:
```
SELECT a.* FROM TableA a INNER JOIN TableB b ON b.id = a.id
WHERE
(b.status in (2,7))
```
Pending:
```
SELECT a.* FROM TableA a INNER JOIN TableB b ON b.id = a.id
WHERE
(b.status IN (1,6) OR b.status IS NULL)
AND b.id NOT IN (SELECT b.id FROM TableA a INNER JOIN TableB b ON b.id = a.id WHERE (b.status IN (1,6) AND b.status IS NOT NULL) AND b.id NOT IN (SELECT id from TableB WHERE status IS NULL))
AND b.id NOT IN (SELECT id FROM TableB WHERE status IN (2,7))
```
**UPDATE 2:**
@Mitch Wheat - How would I refactor the following solution using the EXISTS/NOT EXISTS T-SQL keywords? | As an example for 'Approved':
```
select
*
from
A
where
(select count(*) from B where B.parent_id = A.id and B.status in (1,6)) > 0
and (select count(*) from B where B.parent_id = A.id and B.status not in (1,6)) = 0
```
Refactored to use *exists* and *not exists*:
```
select
*
from
A
where
exists (select * from B where B.parent_id = A.id and B.status in (1,6))
and not exists (select * from B where B.parent_id = A.id and B.status not in (1,6))
```
If you have passed in a criteria, you can package it all up in one query like this, if it is more convenient:
```
select
*
from
A
where
(@Criteria = 'Approved'
and (select count(*) from B where B.parent_id = A.id and B.status in (1,6)) > 0
and (select count(*) from B where B.parent_id = A.id and B.status not in (1,6)) = 0
)
or (@Criteria = 'Denied'
and (select count(*) from B where B.parent_id = A.id and B.status in (2,7)) > 0
)
or (@Criteria = 'Pending'
and (select count(*) from B where B.parent_id = A.id and B.status not in (2,7)) = 0
and (select count(*) from B where B.parent_id = A.id and B.status is null) > 0
)
```
Note, I changed the Denied example to be values of 2 and 7, rather than 2 and 5, based on your sample data.
Edit: You could also use *exists* and *not exists*, as Joe suggests.
Edit: The method using max(case ...), often also seen as sum(case ...) for counting values, does perform better in some cases (whether the performance increase is noticeable depends mostly on your volume of data - sometimes it can be a big difference). I personally find the subqueries more readable, so I start with them, and if better performance is needed, I would benchmark both methods, and if max(case ...) works better, I would switch. | From what I read, Chris Teixeira and Hova use basically the same logic, but:
* Hova parses the tables only once
* Chris Teixeira parses the tables multiple times
=> Hova's technique is preferred (in my opinion)
However, Hova did it very slightly wrong...
The logic should be:
```
- If Any 2 or 7 records => DENIED
- ElseIf Any NULL records => PENDING
- Else => ACCEPTED
```
This gives the following code...
```
SELECT
[main].id,
CASE WHEN MAX(CASE WHEN [status].value IN (2,7) THEN 1 ELSE 0 END) = 1 THEN 'DENIED'
WHEN MAX(CASE WHEN [status].value IS NULL THEN 1 ELSE 0 END) = 1 THEN 'PENDING'
ELSE 'ACCEPTED' END
FROM
[main]
INNER JOIN
[status]
ON [main].id = [status].main_id
GROUP BY
[main].id
```
Also, the use of MAX rather than SUM (which Hova used) means that the query engine only has to find one match, not several. It also makes it easier for an optimiser to use appropriate indexes on the [status] table. (In this case, the index would be (main_id, value), in that order, on the [status] table.)
Dems.
EDIT:
Something similar could be as follows. I have no SQL instance to test on here, so I can't tell you if it's faster, but I imagine it could be...
```
SELECT
[main].id,
MIN(CASE WHEN [status].value IN (2,7) THEN -1 -- Denied
WHEN [status].value IS NULL THEN 0 -- Pending
ELSE 1 END) -- Accepted
FROM
[main]
INNER JOIN
[status]
ON [main].id = [status].main_id
GROUP BY
[main].id
```
EDIT:
Another option is just to join on a mapping table instead of using the CASE statements.
```
DECLARE @map TABLE (
status_value INT,
status_result INT
)
INSERT INTO @map VALUES (1, 1)
INSERT INTO @map VALUES (2, -1)
INSERT INTO @map VALUES (6, 1)
INSERT INTO @map VALUES (7, -1)
INSERT INTO @map VALUES (NULL, 0)
SELECT
[main].id,
MIN([map].status_result)
FROM
[main]
INNER JOIN
[status]
ON [main].id = [status].main_id
INNER JOIN
@map AS [map]
ON [status].value = [map].status_value
OR ([status].value IS NULL AND [map].status_value IS NULL)
-- # This has been faster than using ISNULLs in my experience...
GROUP BY
[main].id
``` | Trying to build an SQL statement for complex search scenario | [
"",
"sql",
"sql-server-2005",
"t-sql",
"search",
""
] |
This may be a ridiculous question for you C# pros but here I go. I'm a Flash developer getting started in Silverlight and I'm trying to figure out how to create a "codebase" (a reusable set of classes) for animation. I'd like to store it in a single location and reuse it across a bunch of different projects. Normally in Flash I would add a "project path" reference and then start using the code. My question is, how do I add a folder to Visual Studio so that I can "use" those classes in my project? I tried "Add > Existing Item" but that copied the files into my project directory. | The easiest way would be to create a new `ClassLibrary` project and build it. This will output a `.dll` file in a folder you can specify in the project settings menus, which you reference from every project that needs it.
Also, you can copy this `.dll` into the `/bin/` folder of your project - this will do the same thing for this specific project, but when you start the next one you can change some details in the codebase library without breaking the first project. | The solution described by Tomas (adding a reference to a dll binary) is the correct solution to this problem; better than referencing the source code and compiling it into each project.
But just for extra information, if you ever *do* need to add a source code file to your Visual Studio project without having it make a copy of the file you can use the following steps:
1. Right click on your project in Solution Explorer and select Add -> Existing Item.
2. Navigate to the location of the source code file and select it.
3. On the "Add" button in the dialog window there is a drop down arrow. Click this and select "Add as Link".
This will allow you to use this source code file in your project without having VS make a copy of the file. | Add codebase as reference instead of copy Visual Studio | [
"",
"c#",
"visual-studio",
"silverlight",
""
] |
I'm doing a probability calculation. I have a query to calculate the total number of times an event occurs. From these events, I want to get the number of times a sub-event occurs. The query to get the total events is 25 lines long and I don't want to just copy + paste it twice.
I want to do two things to this query: calculate the number of rows in it, and calculate the number of rows in the result of a query on this query. Right now, the only way I can think of doing that is this (replace @total@ with the complicated query to get all rows, and @conditions@ with the less-complicated conditions that rows, from @total@, must have to match the sub-event):
```
SELECT (SELECT COUNT(*) FROM (@total@) AS t1 WHERE @conditions@) AS suboccurs,
COUNT(*) AS totaloccurs FROM (@total@) as t2
```
As you notice, @total@ is repeated twice. Is there any way around this? Is there a better way to do what I'm trying to do?
To re-emphasize: @conditions@ does depend on what @total@ returns (it does stuff like `t1.foo = bar`).
Some final notes: @total@ by itself takes ~250ms. This more complicated query takes ~300ms, so postgres is likely doing some optimization, itself. Still, the query looks terribly ugly with @total@ literally pasted in twice. | ```
SELECT COUNT(*) AS totaloccurs,
       COUNT(CASE WHEN @conditions@ THEN 1 END) AS suboccurs
FROM (@total@) AS t1
``` | If your SQL dialect supports subquery factoring, then rewriting it using the WITH clause is an option. It allows subqueries to be used more than once. WITH will create them as either an inline view or a temporary table in Oracle.
Here is a contrived example.
```
WITH
x AS
(
SELECT this
FROM THERE
WHERE something is true
),
y AS
(
SELECT this-other-thing
FROM somewhereelse
WHERE something else is true
),
z AS
(
select count(*) k
FROM X
)
SELECT z.k, y.*, x.*
FROM x,y, z
WHERE X.abc = Y.abc
``` | SQL - Use results of a query as basis for two other queries in one statement | [
"",
"sql",
"optimization",
"count",
"nested",
""
] |
I have a java string, which has a variable length.
I need to put the piece `"<br>"` into the string, say each 10 characters.
For example this is my string:
```
`this is my string which I need to modify...I love stackoverlow:)`
```
How can I obtain this string?:
```
`this is my<br> string wh<br>ich I nee<br>d to modif<br>y...I love<br> stackover<br>flow:)`
```
Thanks | Something like:
```
public static String insertPeriodically(
String text, String insert, int period)
{
StringBuilder builder = new StringBuilder(
text.length() + insert.length() * (text.length()/period)+1);
int index = 0;
String prefix = "";
while (index < text.length())
{
// Don't put the insert in the very first iteration.
// This is easier than appending it *after* each substring
builder.append(prefix);
prefix = insert;
builder.append(text.substring(index,
Math.min(index + period, text.length())));
index += period;
}
return builder.toString();
}
``` | Try:
```
String s = // long string
s.replaceAll("(.{10})", "$1<br>");
```
**EDIT:** The above works... most of the time. I've been playing around with it and came across a problem: since it constructs a default Pattern internally, it halts on newlines. To get around this you have to write it differently.
```
public static String insert(String text, String insert, int period) {
Pattern p = Pattern.compile("(.{" + period + "})", Pattern.DOTALL);
Matcher m = p.matcher(text);
return m.replaceAll("$1" + insert);
}
```
and the astute reader will pick up on another problem: you have to escape regex special characters (like "$1") in the replacement text or you'll get unpredictable results.
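One hedged way around that pitfall (the wrapper class name here is mine) is to pass the replacement text through the standard `Matcher.quoteReplacement`, which escapes `$` and `\` so the inserted text is always treated literally:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SafeInsert {
    // Insert "insert" after every "period" characters; DOTALL keeps it
    // working across newlines, quoteReplacement neutralises $ and \.
    public static String insert(String text, String insert, int period) {
        Pattern p = Pattern.compile("(.{" + period + "})", Pattern.DOTALL);
        return p.matcher(text).replaceAll("$1" + Matcher.quoteReplacement(insert));
    }

    public static void main(String[] args) {
        System.out.println(insert("this is my string which I need", "<br>", 10));
        System.out.println(insert("0123456789", "$5", 5)); // a "$" insert no longer breaks
    }
}
```

With the raw `"$1" + insert` concatenation, an insert string containing `$` or `\` would throw or mangle the output; quoted, it behaves.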
I also got curious and benchmarked this version against Jon's above. This one is slower by an order of magnitude (1000 replacements on a 60k file took 4.5 seconds with this, 400ms with his). Of the 4.5 seconds, only about 0.7 seconds was actually constructing the Pattern. Most of it was on the matching/replacement so it doesn't even lend itself to that kind of optimization.
I normally prefer the less wordy solutions to things. After all, more code = more potential bugs. But in this case I must concede that Jon's version--which is really the naive implementation (I mean that in a **good way**)--is significantly better. | Putting char into a java string for each N characters | [
"",
"java",
"string",
"char",
""
] |
I am writing an application in C# that needs to run as a service but also have user interaction. I understand that services have no UI, etc, so I've divided up my program into a windows form application and a service that can communicate with each other.
The problem I'm having is that I need the service to make sure the windows form application is always running and restart it if it is not. I'm able to detect if it is running, and restart it with the following code on Windows 2000/XP:
```
System.Diagnostics.Process.Start("ExePath");
```
but on Vista, it runs the new process as a Local/System process which is invisible to the user. Does anyone know a way around this? Is there some way to detect which user is currently logged on and run the new process as that user? I don't need to account for fast-user switching at this point. Something - anything - basic would suffice.
I would be grateful for any help or tips you have on the subject.
I need to clarify that I am setting the "Allow service to interact with desktop" option when the service is installed. This is what allows it to work on 2000/XP. However, Vista still has the aforementioned problem. | The general idea for this sort of thing is, if the user needs to interact with a service, they should launch a separate application. If you want to help them out, you can configure that separate application to start with Windows by placing a shortcut in the Startup folder. You can also build crash recovery into your application so it can automatically restart.
You shouldn't really rely on monitoring the forms application, what if no one is logged in? What if multiple people are logged in? It just gets messy doing things this way.
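To make the listener idea concrete, here is a toy sketch in Java (the port, the protocol lines, and all names are invented; the real C# service would use e.g. named pipes or sockets, but the handshake has the same shape):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// Toy model: the "service" accepts one listener registration and pushes an
// event to it over a loopback socket.
public class ListenerSketch {
    public static String runOnce() throws Exception {
        try (ServerSocket service = new ServerSocket(0)) { // 0 = any free port
            Thread serviceThread = new Thread(() -> {
                try (Socket client = service.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(client.getInputStream()));
                     PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                    if ("REGISTER".equals(in.readLine())) {
                        out.println("EVENT: work available"); // broadcast to listener
                    }
                } catch (IOException ignored) {
                }
            });
            serviceThread.start();

            // The UI application's side: connect, register, wait for events.
            try (Socket s = new Socket("127.0.0.1", service.getLocalPort());
                 PrintWriter out = new PrintWriter(s.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(s.getInputStream()))) {
                out.println("REGISTER");
                String event = in.readLine();
                serviceThread.join();
                return event;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runOnce());
    }
}
```

The point is the direction of the dependency: the UI process finds the service and registers, rather than the service chasing UI processes.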
Having the service just sit there and broadcast to listeners is the way to go. When the forms application starts it can notify the service it wants to listen to events. | See the question: [How can a Windows Service execute a GUI application?](https://stackoverflow.com/questions/267838/how-can-a-windows-service-execute-a-gui-application). It addresses the same question from C/C++ (short answer: CreateProcessAsUser), but the answer's still valid (with some P/Invoke) for C#. | C# Run Windows Form Application from Service (and in Vista) | [
"",
"c#",
"winforms",
"windows-services",
"windows-vista",
""
] |
```
function Obj1(param)
{
this.test1 = param || 1;
}
```
```
function Obj2(param, par)
{
this.test2 = param;
this.ob = Obj1;
this.ob(par);
}
```
now why if I do:
```
alert(new Obj2(44,55).test1);
```
the output is 55? How can an Obj2 instance 'view' test1 if I haven't linked both via the prototype chain?
Thanks | > How can an Obj2 instance 'view' test1 if I haven't linked both via the prototype chain?
Methods are not bound in JavaScript. When you write one of:
```
o.method(x);
o['method'](x);
```
the value of ‘o’ is assigned to ‘this’ inside the method. But! If you detach the function from its object and call it directly:
```
m= o.method;
m(x);
```
the reference to ‘o’ is lost, and the method is called as if it were a plain old function, with the global object as ‘this’. Similarly, if you move the function to another object and call it there:
```
o2.method= o.method;
o2.method(x);
```
then ‘this’ will be ‘o2’, and not ‘o’. This is extremely strange behaviour for a language with first-class functions, and highly counter-intuitive, but that's JavaScript for you.
If you want to be able to use bound methods, you'll need to create your own, generally using a closure. See ECMAScript 3.1's proposed "Function.bind" method or the similar implementation in many frameworks.
So anyway:
```
this.ob = Obj1;
this.ob(par);
```
This is taking Obj1 as a function and turning into a method on ‘this’, which is an Obj2 instance. So when Obj1 is called, its own ‘this’ is also an Obj2, and that's the ‘this’ it writes its param to. The same could be written more simply and clearly as:
```
Obj1.call(this, par);
```
Are you doing this deliberately? It can be used as a kind of inheritance, to call another class's constructor on your own class, and this method is taught in some JS object-orientation tutorials. However it really isn't a very good way of doing it because you end up with multiple copies of the same property; using prototypes would save you this, and make a property update on a superclass filter through to the subclasses as expected. | Well it's not clear what you *want* but the reason this happens is because here:
```
this.ob = Obj1;
```
you add the Obj1 method to the object instance of Obj2, and when you use it here:
```
this.ob(par);
```
the context of "this" inside the method Obj1 is the *Obj2 instance*. So that instance now has a test1 member.
Nothing to do with inheritance really, but it's a bit like a mix-in. Remember in JS functions are first class objects. | javascript calling function inside another mystery | [
"",
"javascript",
""
] |
I have status tables in my database, and "localised" tables that contain language-specific versions of those statuses. The point of the main status table is to define the status ID values, and other meta-data about the status. The "localised" table is to display the text representation in a UI, according to the users' preferred language. Here is an example schema:
```
create table [Language]
(
ID smallint primary key,
ISOName varchar(12)
)
create table EmployeeStatus
(
ID smallint primary key,
Code varchar(50)
)
create table EmployeeStatusLocalised
(
EmployeeStatusID smallint,
LanguageID smallint,
Description varchar(50),
constraint PK_EmployeeStatusLocalised primary key
(EmployeeStatusID, LanguageID),
constraint FK_EmployeeStatusLocalised_EmployeeStatus foreign key
(EmployeeStatusID) references EmployeeStatus (ID),
constraint FK_EmployeeStatusLocalised_Language foreign key
(LanguageID) references [Language] (ID)
)
create table Employee
(
ID int identity(1,1) primary key,
EmployeeName varchar(50) not null,
EmployeeStatusID smallint not null,
constraint FK_Employee_EmployeeStatus foreign key
(EmployeeStatusID) references EmployeeStatus (ID)
)
```
This is how I'd typically access that data:
```
select e.EmployeeName, esl.Description as EmployeeStatus
from Employee e
inner join EmployeeStatusLocalised esl on
e.EmployeeStatusID = esl.EmployeeStatusID and esl.LanguageID = 1
```
I'm not really confident that my LINQ to SQL code is doing things in the most efficient way, though. Here's an example:
```
using (var context = new MyDbDataContext())
{
var item = (from record in context.Employees
select record).Take(1).SingleOrDefault();
Console.WriteLine("{0}: {1}", item.EmployeeName,
item.EmployeeStatus.EmployeeStatusLocaliseds.
Where(esl => esl.LanguageID == 1).Single().Description);
}
``` | Personally, I'd probably leave the EmployeeStatus codes in the DB and move all localization logic into the client. If this is a web app (ASP.NET or ASP.NET MVC) then you'd use the EmployeeStatus code as a key into a resource file, and then use UICulture="Auto" and Culture="Auto" to tell ASP.NET to pick up the right resources based upon the "Accept-Language" HTTP Header.
You'd provide default (culture-insensitive) resources embedded in your app, and allow satellite assemblies to override the defaults where needed.
The problem, for me, with adding localization into the DB is that you firstly end up with much more complicated queries, you have to keep pumping the locale into each of these queries, and you can't cache the outputs of the queries so widely. Secondly, you have a mixture of tables that hold entities and tables that hold localization. Finally, a DBA is required to do the localization.
Ideally you want someone who understands how to translate text to do the localization, and for them to use some tool that they're comfortable with. There are plenty of .resx tools out there, and apps that allow language experts to "do their thing".
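To make the split concrete — the database keeps only the status code, the presentation layer owns the translations — here is a deliberately simplified sketch in Java (a real app would pull these strings from .resx/.properties resources; all names here are invented):

```java
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

public class StatusText {
    // code -> (language -> text); in a real app these strings would live in
    // .resx / .properties resources, not a hard-coded map.
    private static final Map<String, Map<String, String>> TEXTS = new HashMap<>();
    static {
        TEXTS.put("ACTIVE", Map.of("en", "Active", "fr", "Actif"));
        TEXTS.put("ON_LEAVE", Map.of("en", "On leave", "fr", "En congé"));
    }

    // The database stores only the code; the UI resolves it per locale,
    // falling back to English, then to the raw code itself.
    public static String describe(String code, Locale locale) {
        Map<String, String> byLang = TEXTS.getOrDefault(code, Map.of());
        return byLang.getOrDefault(locale.getLanguage(),
                byLang.getOrDefault("en", code));
    }

    public static void main(String[] args) {
        System.out.println(describe("ACTIVE", Locale.FRENCH));
    }
}
```

Note how no locale ever reaches the data-access query; it only matters at render time, which also makes the query results cacheable across languages.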
If you're stuck with DB tables for localization because "that's how it is" then perhaps you should query the lookups separately from the real data, and join the two at the UI. This would at least give you an "upgrade path" to .RESX in the future.
You should check out Guy Smith-Ferrier's book on i18n if you're interested in this area:
<http://www.amazon.co.uk/NET-Internationalization-Developers-Guide-Building/dp/0321341384/ref=sr_1_1?ie=UTF8&s=books&qid=1239106912&sr=8-1> | One option could be to maintain a cache of the localised data, using something like Caching Application Block or ASP.NET caching, then just refer to that cache in the view.
This would limit the number of database calls, because LINQ might not need to load the status records in order to get the localised description.
"",
"c#",
".net",
"linq",
"linq-to-sql",
"internationalization",
""
] |
I'll try to explain this better on another question. This is the query that I think should work but, of course, MySQL doesn't support this specific subselect query:
```
select *
from articles a
where a.article_id in
(select f.article_id
from articles f
where f.category_id = a.category_id
order by f.is_sticky, f.published_at
limit 3) /* LIMIT isn't allowed inside an IN subquery */
```
What I'm trying to achieve is this: In an articles table I have several articles for several categories. I need to obtain at most three articles per category (any number of categories).
Here's the data:
```
CREATE TABLE articles (
article_id int(10) unsigned NOT NULL AUTO_INCREMENT,
category_id int(10) unsigned NOT NULL,
title varchar(100) NOT NULL,
is_sticky boolean NOT NULL DEFAULT 0,
published_at datetime NOT NULL,
PRIMARY KEY (article_id)
);
INSERT INTO articles VALUES
(1, 1, 'foo', 0, '2009-02-06'),
(2, 1, 'bar', 0, '2009-02-07'),
(3, 1, 'baz', 0, '2009-02-08'),
(4, 1, 'qox', 1, '2009-02-09'),
(5, 2, 'foo', 0, '2009-02-06'),
(6, 2, 'bar', 0, '2009-02-07'),
(7, 2, 'baz', 0, '2009-02-08'),
(8, 2, 'qox', 1, '2009-02-09');
```
What I'm trying to retrieve is the following (article_id, category_id, title, is_sticky, published_at):
```
4, 1, qox, 1, 2009-02-09
1, 1, foo, 0, 2009-02-06
2, 1, bar, 0, 2009-02-07
8, 2, qox, 1, 2009-02-09
5, 2, foo, 0, 2009-02-06
6, 2, bar, 0, 2009-02-07
```
Notice how 'qox' jumped to first place in its category because it is sticky.
Can you figure out a way to avoid the LIMIT inside the subquery?
Thanks | This is a simplification of your solution
```
select *
from articles a
where a.article_id =
(select f.article_id
from articles f
where f.category_id = a.category_id
order by f.is_sticky, f.published_at
limit 1) or a.article_id =
(select f.article_id
from articles f
where f.category_id = a.category_id
order by f.is_sticky, f.published_at
limit 1, 1) or
a.article_id =
(select f.article_id
from articles f
where f.category_id = a.category_id
order by f.is_sticky, f.published_at
limit 2, 1)
``` | Have a look at this code snippet called [Within-group quotas (Top N per group)](http://www.artfulsoftware.com/infotree/queries.php?&bw=1271#104).
Depending on the size of your set, there are two proposed solutions: one playing with a count, and the other using a temporary table for bigger tables.
So basically, if you have a big table, until MySQL implements a LIMIT in subqueries or something similar, you'll have to manually (well, or using a dynamic query in a loop) aggregate all your categories with one of the proposed solutions here.
---
**// A solution using a temporary table and a stored procedure:**
Run that once:
```
DELIMITER //
CREATE PROCEDURE top_articles()
BEGIN
DECLARE done INT DEFAULT 0;
DECLARE catid INT;
DECLARE cur1 CURSOR FOR SELECT DISTINCT(category_id) FROM articles;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
OPEN cur1;
# This temporary table will hold all top N article_id for each category
CREATE TEMPORARY TABLE top_articles (
article_id int(10) unsigned NOT NULL
);
# Loop through each category
  REPEAT
    FETCH cur1 INTO catid;
    # Guard so the last category is not inserted twice after the final fetch fails
    IF NOT done THEN
      INSERT INTO top_articles
      SELECT article_id FROM articles
      WHERE category_id = catid
      ORDER BY is_sticky DESC, published_at
      LIMIT 3;
    END IF;
  UNTIL done END REPEAT;
  CLOSE cur1;
# Get all fields in correct order based on our temporary table
SELECT * FROM articles WHERE article_id
IN (SELECT article_id FROM top_articles)
ORDER BY category_id, is_sticky DESC, published_at;
# Remove our temporary table
DROP TEMPORARY TABLE top_articles;
END;
//
DELIMITER ;
```
And then, to try it out:
```
CALL top_articles();
```
You should see the results you were waiting for. And it should work for any number of articles per category, and any number of categories easily. This is what I'm getting:
```
+------------+-------------+-------+-----------+---------------------+
| article_id | category_id | title | is_sticky | published_at |
+------------+-------------+-------+-----------+---------------------+
| 5 | 1 | qox | 1 | 2009-02-09 00:00:00 |
| 1 | 1 | foo | 0 | 2009-02-06 00:00:00 |
| 2 | 1 | foo | 0 | 2009-02-06 00:00:00 |
| 9 | 2 | qox | 1 | 2009-02-09 00:00:00 |
| 6 | 2 | foo | 0 | 2009-02-06 00:00:00 |
| 7 | 2 | bar | 0 | 2009-02-07 00:00:00 |
+------------+-------------+-------+-----------+---------------------+
```
Although I don't know how it would translate performance-wise. It probably can be optimized and cleaned out a little bit. | Rewriting a MySQL query | [
"",
"sql",
"mysql",
"subquery",
"limit",
""
] |
I have a reporting module which creates PDF reports from ListViews.
Now, I have a ListView in Virtual mode and therefore I cannot loop over the Items collection.
How do I loop over all elements in the list view from the reporting module?
I can get the `VirtualListSize` property, so I know how many elements there are in the list.
Could I somehow raise the `RetrieveVirtualItem` event explicitly?
The Reporting module has no knowledge about the underlying list in the ListView. | In a virtual ListView, you cannot iterate the `Items`, but you can still access them by their index:
```
for (int i = 0; i < theVirtualListView.VirtualListSize; i++) {
this.DoSomething(theVirtualListView.Items[i]);
}
``` | So, a listview in virtual mode is just a visualization of your underlying list, correct?
Perhaps the report should be getting the data from the underlying list instead of the virtual ListView.
"",
"c#",
".net",
"winforms",
"listview",
""
] |
I'm setting up my development environment for writing web applications using Java, and I'm in need of some help.
*What I have done:*
**Installation of Java SE 6 from Sun**
I installed (sudo apt-get install …) the following packages (one at a time, as some of them require user interaction) to get Java SE 6 from Sun: sun-java6-bin sun-java6-doc sun-java6-fonts sun-java6-javadb sun-java6-jdk sun-java6-jre sun-java6-plugin sun-java6-source.
I also set JAVA\_HOME by adding JAVA\_HOME="/usr/lib/jvm/java-6-sun/" to /etc/environment.
**Began installing and configuring Tomcat 5.5**
I installed (sudo apt-get install …) the following packages: tomcat5.5 tomcat5.5-admin
**What I have at the moment:**
I'm able to connect to <http://localhost:8180/admin> and get the admin web application which I installed by installing the tomcat5.5-admin package. The Tomcat server is running as a deamon.
**My problems and questions:**
1. The environment variable $CATALINA\_HOME doesn't seem to be defined. How should I define it? (The same way as my $JAVA\_HOME maybe? But if so, which path should I assign it?)
2. To deploy a WAR, can I just copy the WAR file to $CATALINA\_HOME/webapps? Or do I need to change the file permissions on the WAR? (Here I assume $CATALINA\_HOME is set correctly.)
3. What's the user and password I need for the admin web application I installed? How do I change them?
4. Are there any best practices for Tomcat 5.5 on Ubuntu which I should think of? | 1. The path where your Tomcat is installed. I don't know, I am on openSUSE, and usually install tomcat in my preferred directory, most likely under my /home/adeel/. You can check it using *whereis tomcat* command on your console, or may be try *whereis catalina*. It might give you the path of your tomcat. This time I am running tomcat under my Netbeans, just for a try, and didn't set the environment variable.
2. You may just copy the *war* file under the *CATALINA\_HOME/webapps/* directory. Or you can even place it exploded. I don't think you would need to change the privileges of your *war*.
3. You can set it in *CATALINA\_HOME/conf/tomcat-users.xml*. Below is an example:
```
<?xml version='1.0' encoding='utf-8'?>
<tomcat-users>
  <role rolename="manager"/>
  <role rolename="tomcat"/>
  <user username="tomcat" password="tomcat" roles="tomcat,manager"/>
</tomcat-users>
```
4. Not really. The only suggestion I have at the moment is to set it as a user variable, not a system variable.
NOTE: In case you are using Tomcat under NetBeans, a number of Tomcat configuration options are provided within the IDE; for example, you can change the port, username/password, VM options, etc. Beware that NetBeans keeps its own place and file for web-app deployment in Tomcat, and doesn't use *CATALINA\_HOME/webapps/*. | You have to add it manually. You can set it in either of these files:
```
~/.bashrc <---- for session-wide variables
/etc/environment <---- for system-wide variables
```
You should set CATALINA\_HOME in here by adding the following line:
```
CATALINA_HOME=PATH_TO_WHERE_YOU_ARE_RUNNING_TOMCAT
```
Afterwards, if you edited .bashrc, reload the file, by typing in:
```
source .bashrc
```
If it was /etc/environment, I believe you can do the same thing. If that doesn't work, try logging off then in. If that doesn't work, just restart. | Install of Tomcat 5.5 on Ubuntu (using apt) leaves CATALINA_HOME unset | [
"",
"java",
"linux",
"tomcat",
"ubuntu",
""
] |
I have a Default.aspx that redirects to a Page1.aspx. My ddl is loaded and ordered from a database. How do I make the ddl selection show up on the page load depending on what the user selects on the Default page? | You will need to pass some kind of data from Default.aspx to Page1.aspx. I would suggest using a query string. So for example on Default.aspx you would redirect to:
`Response.Redirect("~/Page1.aspx?selectValue=5");`
And then on Page1.aspx in Page\_Load you would do something like
`myDropDownList.SelectedValue = Request.QueryString["selectValue"];`
Obviously you'll need to do some more checking on Page1.aspx to make sure Request.QueryString["selectValue"] exists, etc. but you get the idea.
edit-this is C#, so added the semicolon ;) | The best you can do is dropdownelement.focus(); | How do I force drop down list selections on page load? | [
"",
"c#",
".net",
"asp.net",
""
] |
Somewhat related to my question about integers instead of decimals; my vendor provides a lot of "boolean" fields in the char(1) format (i.e. Y/N). Some of these fields legitimately cannot be a boolean because they can have several values, but the majority can be treated as a boolean. In my previous question the advice was to do what's "right" instead of what the vendor provides; should I still apply this logic and create my database schema using a bit field for these columns, or keep it as a char(1) and do the conversion in my code?
Also on this subject, how should I deal with the tri-state fields as far as code goes? Logically the field is a boolean (in the sense that I'm only interested in the Y/N value and the third value is really either yes or no), but the values can be more than just true/false (e.g. there is a `UpsShippable` field that can be `Y`, `N` or `G`); this field has multiple states so how would I best encapsulate it? Something like an enum (or static constants, since Enums can't be used with the char datatype)? In the multi-value cases the data is more like a type indicator than a flag.
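To make that concrete, this is roughly the kind of wrapper I have in mind — sketched in Java for illustration, since enums there can carry a char; the `Y`/`N`/`G` values are from my `UpsShippable` example and the mapping is my assumption:

```java
public enum UpsShippable {
    YES('Y'), NO('N'), G('G'); // 'G' is the third state; its exact meaning is assumed

    private final char code;

    UpsShippable(char code) { this.code = code; }

    public char code() { return code; }

    // Both N and G behave as "not shippable via UPS" in the UI logic.
    public boolean shipsViaUps() { return this == YES; }

    public static UpsShippable fromCode(char c) {
        for (UpsShippable v : values()) {
            if (v.code == c) return v;
        }
        throw new IllegalArgumentException("Unknown UpsShippable code: " + c);
    }

    public static void main(String[] args) {
        System.out.println(fromCode('G') + " ships via UPS? " + fromCode('G').shipsViaUps());
    }
}
```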
To sum up (I get a little long-winded): 1) When dealing with `char(1)` values in data, would you keep them as chars or convert to bit (or whatever your database's boolean type is) and why, and 2) How would you tackle a tri-state char field in your code, assuming you would leave it as `char(1)` in the data schema?
**EDIT:** To clarify, none of these fields are used for "real" logic, it's basically just an indicator. For example, if the item isn't shippable via UPS (i.e. the value is N/G) then on the customer-facing page it says the item cannot be shipped via UPS, and on the back-end the logic won't make a call to UPS's web service to calculate shipping. The other Y/N fields are simply there as extra detail about an item and have no logic, although they need to be changeable (for example, have a checkbox indicating if it's recycled on a data entry form on the back end); I might display an image or filter items by them (e.g. you can search for all recycled products, and I'll check to make sure their recycled indicator is true) but nothing else, at least not at this point in time. | I would suggest in the bistate fields, you always make the boolean conversions.
The tri-state field is something different... There are many solutions, none definitively "right". You could create tables for the option sets and link by index... but you lose human readability for very little gain except being "right". | I most often deal with Oracle, which doesn't really have bit types, so it's never really been an issue.
That being said, one character code fields are common and fine. Whatever you do though, don't give them a misleading name that suggests a boolean type when the field has 3 states. That's just going to confuse people.
Bad name: SHIP\_FLAG ('flag' is ambiguous but many will interpret it as Y/N or T/F)
Bad name: HAS\_BOOKED (again, implies boolean)
Bad name: IS\_SENT (ditto)
Good name: SHIP\_CODE (this can mean anything you want it to mean)
Also, one character fields allow you to extend the meaning later. Bit fields don't (really). | How do you deal with char(1) in place of a boolean and tri-state fields? | [
"",
"sql",
"sql-server",
"database-design",
"types",
""
] |
In a Windows Forms window, multiple events can trigger an asynchronous method. This method downloads a file and caches it. My problem is that I want that method to be executed only once. In other words, I want to prevent the file from being downloaded multiple times.
**If the method downloading the file is triggered twice, I want the second call to wait for the file (or wait for the first method to be done).**
Does someone have an idea on how to achieve that?
**UPDATE**: I am simply trying to prevent unnecessary downloads. In my case, when a client puts his mouse over an item in a ListBox for more than a couple of milliseconds, we start to download. We make the assumption that the user will click and request the file. What can potentially happen is that the user keeps his mouse over the item for one second and then clicks. In this case two downloads start. I am looking for the best way to handle such a scenario.
**UPDATE 2:** There is a possibility that the user will move his mouse over multiple items. As a consequence, multiple downloads will occur. I've not really thought of this scenario, but right now if we face it we don't abandon the download. The file will be downloaded (files are usually around 50-100 KB) and then cached. | Maintain the state of what's happening in a form variable and have your async method check that state before it does anything. Make sure you synchronize access to it, though! Mutexes and semaphores are good for this kind of thing.
If you can download different files simultaneously, you'll need to keep track of what's being downloaded in a list for reference.
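The "first caller starts the download, later callers wait for the same result" idea can be sketched like this — shown in Java for illustration (the fetch itself is faked and all names are mine), but the same memoizing-future shape carries over to .NET:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Future;
import java.util.concurrent.FutureTask;
import java.util.concurrent.atomic.AtomicInteger;

public class DownloadCache {
    private final ConcurrentHashMap<String, Future<byte[]>> inFlight =
            new ConcurrentHashMap<>();
    public final AtomicInteger fetchCount = new AtomicInteger(); // proves it runs once

    public byte[] download(String url) throws Exception {
        FutureTask<byte[]> candidate = new FutureTask<>(() -> fetch(url));
        Future<byte[]> winner = inFlight.putIfAbsent(url, candidate);
        if (winner == null) {   // we registered first: do the real work
            candidate.run();
            winner = candidate;
        }
        return winner.get();    // everyone else blocks here until it's done
    }

    // Stand-in for the real HTTP download.
    private byte[] fetch(String url) {
        fetchCount.incrementAndGet();
        return ("contents of " + url).getBytes();
    }

    public static void main(String[] args) throws Exception {
        DownloadCache c = new DownloadCache();
        c.download("http://example.com/a");
        c.download("http://example.com/a");
        System.out.println("fetches: " + c.fetchCount.get());
    }
}
```

Because the map stores a future rather than the finished bytes, a second trigger that arrives mid-download doesn't start a duplicate — it just waits on the first one's result.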
If only one file can be downloaded at a time, and you don't want to queue things up, you could just unhook the event while something is being downloaded, too, and rehook it when the download is complete. | Here is a dummy implementation that supports multiple file downloads:
```
Dictionary<string, object> downloadLocks = new Dictionary<string, object>();
void DownloadFile(string localFile, string url)
{
object fileLock;
lock (downloadLocks)
{
if (!downloadLocks.TryGetValue(url, out fileLock))
{
fileLock = new object();
downloadLocks[url] = fileLock;
}
}
lock (fileLock)
{
// check if file is already downloaded
// if not then download file
}
}
``` | Multithreading and concurrency with C# | [
"",
"c#",
"multithreading",
"concurrency",
"asynchronous",
""
] |
I want to set up a personal workspace on my home machine. I mainly intend to use it for Java development on home projects. Which tools do you recommend me to use? (I prefer free tools, since this is just for home use.)
I hope you could recommend some tools for me, and I'd be grateful if you could give me a brief comparison of some tools in each category. For example, I'm thinking about source control and issue tracking tools, but any other categories are welcome.
I also hope to get some information about the ability to move between suggested tools (e.g. migrate between CVS and SVN), and even between operating systems if applicable. (I'm currently using Windows Vista, but I may want to switch to Linux sometime soon.) Maybe even information about how easy it would be to move the project to an open-source repository (e.g. SF.net).
Thank you.
(P.S. Any comments on how to enhance/clarify my question are more than welcome.) | The question is too generic, but let me attempt to give it a try:
Netbeans or Eclipse IDE. If you do GUI development, I suggest Netbeans; otherwise go for Eclipse.
Git or Mercurial as a VCS. If you're dealing with a large code base or if you dig the UNIX philosophy go for Git. If you don't want to handle cross-OS differences, go for Hg.
Any of these IDE gives you integration with the source control system, so you're just a google search away from it.
As for opensource, if you use Git, just push to github. If you use Hg, push it to bitbucket and you're done.
For issue tracking I suggest hosted Redmine (a RoR app) that integrates with CVS, SVN, Git, Mercurial, and a few others.
If you choose Eclipse, you can integrate it with Mylyn. | A few other thing besides Eclipse and Subversion.
[IcoFX - freeware icon editor](http://icofx.ro/), a copy of [Effective Java](http://java.sun.com/docs/books/effective/), [Cygwin](http://www.cygwin.com/), [VMWare Player](http://www.vmware.com/products/player/) (or [VMware Workstation](http://www.vmware.com/products/ws/)), [CruiseControl](http://cruisecontrol.sourceforge.net/), and [SciTE - free source code editor](http://www.scintilla.org/SciTE.html). | Setting up a personal (Java) workspace: What do I need? | [
"",
"java",
"workspace",
""
] |
In C++, arrays cannot be passed simply as parameters. Meaning if I create a function like so:
```
void doSomething(char charArray[])
{
// if I want the array size
int size = sizeof(charArray);
// NO GOOD, will always get 4 (as in 4 bytes in the pointer)
}
```
I have no way of knowing how big the array is, since I have only a pointer to the array.
How can I, without changing the method signature, get the size of the array and iterate over its data?
---
**EDIT:** just an addition regarding the solution. If the char array, specifically, was initialized like so:
```
char charArray[] = "i am a string";
```
then the `\0` is already appended to the end of the array. In this case the answer (marked as accepted) works out of the box, so to speak. | Without changing the signature? Append a sentinel element. For char arrays specifically, it could be the null-terminating `'\0'` which is used for standard C strings.
```
void doSomething(char charArray[])
{
char* p = charArray;
for (; *p != '\0'; ++p)
{
// if '\0' happens to be valid data for your app,
// then you can (maybe) use some other value as
// sentinel
}
int arraySize = p - charArray;
// now we know the array size, so we can do some thing
}
```
Of course, then your array itself cannot contain the sentinel element as content.
For other kinds of (i.e., non-char) arrays, it could be any value which is not legal data. If no such value exists, then this method does not work.
Moreover, this requires co-operation on the caller side. You really have to make sure that the caller reserves an array of `arraySize + 1` elements, and *always* sets the sentinel element.
However, if you really cannot change the signature, your options are rather limited. | Use templates. This technically doesn't fit your criteria, because it changes the signature, but calling code does not need to be modified.
```
void doSomething(char charArray[], size_t size)
{
// do stuff here
}
template<size_t N>
inline void doSomething(char (&charArray)[N])
{
doSomething(charArray, N);
}
```
This technique is used by Microsoft's [Secure CRT functions](http://msdn.microsoft.com/en-us/library/td1esda9%28VS.80%29.aspx) and by STLSoft's [array\_proxy](http://www.stlsoft.org/doc-1.9/classstlsoft_1_1array__proxy.html) class template. | Passing an array as a function parameter in C++ | [
"",
"c++",
"arrays",
"pointers",
"parameters",
"arguments",
""
] |
We've encountered the following "issue".
I like to use the following writing:
```
SELECT Id, Name
FROM Table1
JOIN Table2 ON Table1.FK1 = Table2.FK1
```
but a colleague of mine uses:
```
SELECT Id, Name
FROM Table1 JOIN Table2
ON Table1.FK1 = Table2.FK1
```
Is there a best practice for this?
I think it's more convenient if all used tables are aligned.
```
SELECT Id, Name
FROM Table1
JOIN Table2
ON Table1.FK1 = Table2.FK1
``` | I prefer the first:
```
SELECT Id, Name
FROM Table1
JOIN Table2 ON Table1.FK1 = Table2.FK1
``` | SQL Select Convention | [
"",
"sql",
""
] |
I am getting JDBC error when I attempt a commit through hibernate to SQL Server
> Cannot insert explicit value for identity column in table 'Report' when IDENTITY\_INSERT is set to OFF
I am using mappings generated by netbeans that contain,
```
<class name="orm.generated.Report" table="Report" schema="dbo" catalog="DatabaseName">
<id name="id" type="int">
<column name="ID" />
<generator class="assigned" />
</id>
```
Which looks to me like it should be doing the identity insert properly.
Any idea on how to fix this?
EDIT:
Some links to documentation, for posterity,
<http://www.hibernate.org/hib_docs/v3/reference/en-US/html/mapping.html#mapping-declaration-id-generator>
<http://www.roseindia.net/hibernate/hibernateidgeneratorelement.shtml> | You cannot insert into an identity column in SQL Server unless "IDENTITY\_INSERT" is set to "ON". Since your generator class is "assigned", Hibernate is assuming that you are setting an explicit value for "id" in Java before saving the object and that Hibernate can directly insert the value into the database. You need to either:
1. Pick a different generator class, such as "native"
2. Set IDENTITY\_INSERT to "ON" | Try to change the type of the generator class from 'assigned' to 'identity'; it worked for me. | Hibernate problems, jdbc IDENTITY_INSERT is set to OFF | [
"",
"sql",
"sql-server",
"hibernate",
"jdbc",
""
] |
I'm placing content on my page through an ajax (post) request like so:
```
$("input#ViewMore").click(function() {
var data = { before: oldestDate, threadId: 1 };
$.post("/Message/More", data,function(html) {
$('tbody#posts').prepend(html);
return false;
},
"html");
return false;
});
```
with the html coming back looking something like:
```
<div id="comment">Message output <a href="#" id="quote">Quote</a></div>
```
This is all working fine and dandy, everything appears as it should, no problems.
The problem occurs when I have an event hooked into the "quote" anchor that has been added through the ajax call. Specifically, a jQuery event on that anchor does not fire. Why?
For instance:
```
$("#quote").click(function() { ... });
```
Does nothing. Acts like there is no event on it. I know it is working on other anchors on the page that were not added through a ajax request, so there is not a code error there, plus if I refresh the page it will then fire correctly. Is there some reason that this is happening, do I need someway to reinitialize that event on the anchor tag somehow? Any ideas?
Working with jQuery 1.3.1 (didn't work with 1.2.6 either) so I believe it is my implementation not code itself. | You can use [Events/live](http://docs.jquery.com/Events/live) of jQuery 1.3, *live* will bind a handler to an event for all current - and future - matched elements. | When the new content is added to the page with Ajax you have to re-register all the events to those new elements. | How do I keep events elements added through a ajax call in jQuery "active" | [
"",
"javascript",
"jquery",
"event-handling",
"jquery-events",
""
] |
Pretty straightforward, but I just want to know which is faster.
I think simply multiplying a number by `-1` is much faster than calling a predefined method, provided that you are sure that value is negative.
But if that's the case then what is the `abs()` function for? Is it simply for making sure that the value returned would always be positive regardless of value's sign? | *Updated August, 2012*:
I did some profiling with these implementations:
```
/* Test 1: */ b = Math.abs(a);
/* Test 2: */ b = abs(a); //local copy: abs = Math.abs;
/* Test 3: */ b = a < 0 ? a * -1 : a;
/* Test 4: */ b = a < 0 ? -a : a;
```
I got the following result on Windows 7. Values are normalized after the fastest result per browser to make it easier to compare which method is faster:
```
1:Math 2:abs 3:*-1 4:- 1.0= Version
Chrome 1.0 1.0 1.0 1.0 111ms 21.0.1180.75 m
Firefox 1.0 1.0 1.2 1.2 127ms 14.0.1
IE 1.4 1.0 1.1 1.0 185ms 9.0.8112
Opera 1.9 1.6 1.1 1.0 246ms 12.00
Safari 1.6 1.6 1.1 1.0 308ms 5.1.7
```
**Conclusion:**
When I did this test 3 years ago, -a was fastest, but now Math.abs(x) is faster in Firefox! In Chrome `abs(a)` and `-a` got the same time and it was only 3 ms difference to the slowest method when I tested it with 10 000 000 numbers.
**My Recommendation**: Use **Math.abs(a)**. If you are in a tight loop and by profiling has found it to be too slow, you can use a local reference to the abs function:
```
var abs=Math.abs; //A local reference to the global Math.abs function
for (i=0;i<1234567890;++i) if ( abs( v[i] ) > 10) ++x;
``` | I would suggest picking the method that more clearly shows your intention, rather than worrying about the performance. In this case, the performance gain of multiplying by -1 is probably minimal.
When you use `Math.abs()`, it is very clear that you want a positive value. When you use `* -1` it is not clear, and requires more investigation to determine if the input value is always negative. | Which is faster: Math.abs(value) or value * -1 ? | [
"",
"javascript",
"performance",
""
] |
There are a lot of articles around the web concerning Python performance. The first thing you read is concatenating strings should not be done using '+'; avoid s1 + s2 + s3, and instead use *str.join*
I tried the following: concatenating two strings as part of a directory path, using three approaches:
1. '+' which I should not do
2. *str.join*
3. *os.path.join*
Here is my code:
```
import os, time
s1 = '/part/one/of/dir'
s2 = 'part/two/of/dir'
N = 10000
t = time.clock()
for i in xrange(N):
s = s1 + os.sep + s2
print time.clock() - t
t = time.clock()
for i in xrange(N):
s = os.sep.join((s1, s2))
print time.clock() - t
t = time.clock()
for i in xrange(N):
s = os.path.join(s1, s2)
print time.clock() - t
```
Here the results (Python 2.5 on [Windows XP](https://en.wikipedia.org/wiki/Windows_XP)):
```
0.0182201927899
0.0262544541275
0.120238186697
```
Shouldn't it be exactly the other way around? | It is true you should not use '+'. Your example is quite special. Try the same code with:
```
s1 = '*' * 100000
s2 = '+' * 100000
```
Then the second version (str.join) is much faster. | Most of the performance issues with string concatenation are ones of asymptotic performance, so the differences become most significant when you are concatenating many long strings.
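The asymptotic gap is easy to see when many pieces are involved — repeated `+=` can copy the accumulated string on every step, while `join` makes a single pass. (Timings below are printed rather than asserted, since CPython can sometimes optimize `+=` in place:)

```python
import timeit

def concat_plus(parts):
    s = ""
    for p in parts:        # worst case copies the whole prefix each time: O(n^2)
        s += p
    return s

def concat_join(parts):
    return "".join(parts)  # computes the total size once, then one copy pass: O(n)

parts = ["x" * 100] * 1000
assert concat_plus(parts) == concat_join(parts)

t_plus = timeit.timeit(lambda: concat_plus(parts), number=50)
t_join = timeit.timeit(lambda: concat_join(parts), number=50)
print("+= took %.4fs, join took %.4fs" % (t_plus, t_join))
```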
In your sample, you are performing the same concatenation many times. You aren't building up any long string, and it may be that the Python interpreter is optimizing your loops. This would explain why the time increases when you move to *str.join* and *path.join* - they are more complex functions that are not as easily reduced. (*os.path.join* does a lot of checking on the strings to see if they need to be rewritten in any way before they are concatenated. This sacrifices some performance for the sake of portability.)
By the way, since file paths are not usually very long, you almost certainly want to use *os.path.join* for the sake of the portability. If the performance of the concatenation is a problem, you're doing something very odd with your filesystem. | Python string join performance | [
"",
"python",
"performance",
"string",
""
] |
I support a legacy Java application that uses flat files (plain text) for persistence. Due to the nature of the application, the size of these files can reach 100s of MB per day, and often the limiting factor in application performance is file IO. Currently, the application uses a plain ol' java.io.FileOutputStream to write data to disk.
Recently, we've had several developers assert that using memory-mapped files, implemented in native code (C/C++) and accessed via JNI, would provide greater performance. However, FileOutputStream already uses native methods for its core methods (i.e. write(byte[])), so it appears to be a tenuous assumption without hard data or at least anecdotal evidence.
I have several questions on this:
1. Is this assertion really true? Will memory mapped files **always** provide faster IO compared to Java's FileOutputStream?
2. Does the class MappedByteBuffer accessed from a FileChannel provide the same functionality as a native memory mapped file library accessed via JNI? What is MappedByteBuffer lacking that might lead you to use a JNI solution?
3. What are the risks of using memory-mapped files for disk IO in a production application? That is, applications that have continuous uptime with minimal reboots (once a month, max). Real-life anecdotes from production applications (Java or otherwise) preferred.
Question #3 is important - I could answer this question myself *partially* by writing a "toy" application that perf tests IO using the various options described above, but by posting to SO I'm hoping for real-world anecdotes / data to chew on.
[EDIT] Clarification - each day of operation, the application creates multiple files that range in size from 100MB to 1 gig. In total, the application might be writing out multiple gigs of data per day. | You might be able to speed things up a bit by examining how your data is being buffered during writes. This tends to be application specific as you would need an idea of the expected data writing patterns. If data consistency is important, there will be tradeoffs here.
If you are just writing out new data to disk from your application, memory mapped I/O probably won't help much. I don't see any reason you would want to invest time in some custom coded native solution. It just seems like too much complexity for your application, from what you have provided so far.
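For anyone wanting to feel out memory-mapped writes before committing to a JNI solution, the idea is cheap to prototype. Sketched here in Python's `mmap` module purely for illustration — the question itself concerns Java, where the equivalent is `FileChannel.map`:

```python
import mmap
import os
import tempfile

size = 1 << 16
fd, path = tempfile.mkstemp()
os.ftruncate(fd, size)               # an mmap cannot grow a file, so pre-size it
with os.fdopen(fd, "r+b") as f:
    with mmap.mmap(f.fileno(), size) as m:
        m[0:5] = b"hello"            # the write lands in the page cache
        m.flush()                    # force dirty pages out to the file

with open(path, "rb") as f:
    data = f.read(5)
os.remove(path)
assert data == b"hello"
```

Note the pre-sizing step: needing to know (or grow) the file size up front is one of the practical costs of mapped IO for an append-heavy workload like the one described.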
If you are sure you really need better I/O performance - or just O performance in your case, I would look into a hardware solution such as a tuned disk array. Throwing more hardware at the problem is often times more cost effective from a business point of view than spending time optimizing software. It is also usually quicker to implement and more reliable.
In general, there are a lot of pitfalls in over optimization of software. You will introduce new types of problems to your application. You might run into memory issues/ GC thrashing which would lead to more maintenance/tuning. The worst part is that many of these issues will be hard to test before going into production.
If it were my app, I would probably stick with the FileOutputStream with some possibly tuned buffering. After that I'd use the time honored solution of throwing more hardware at it. | Memory mapped I/O will not make your disks run faster(!). For linear access it seems a bit pointless.
A NIO mapped buffer is the real thing (usual caveat about any reasonable implementation).
As with other NIO direct allocated buffers, the buffers are not normal memory and wont get GCed as efficiently. If you create many of them you may find that you run out of memory/address space without running out of Java heap. This is obviously a worry with long running processes. | Performance / stability of a Memory Mapped file - Native or MappedByteBuffer - vs. plain ol' FileOutputStream | [
"",
"java",
"performance",
"file-io",
"java-native-interface",
"production",
""
] |
In code, I sometimes see people specify constants in hex format like this:
```
const int has_nukes = 0x0001;
const int has_bio_weapons = 0x0002;
const int has_chem_weapons = 0x0004;
// ...
int arsenal = has_nukes | has_bio_weapons | has_chem_weapons; // all of them
if(arsenal & has_bio_weapons){ // note: & (bit test), not &= (assignment)
    std::cout << "BIO!!";
}
```
But it doesn't make sense to me to use the hex format here. Is there a way to do it directly in binary? Something like this:
```
const int has_nukes = 0b00000000000000000000000000000001;
const int has_bio_weapons = 0b00000000000000000000000000000010;
const int has_chem_weapons = 0b00000000000000000000000000000100;
// ...
```
I know the C/C++ compilers won't compile this, but there must be a workaround? Is it possible in other languages like Java? | I'd use a bit shift operator:
```
const int has_nukes = 1<<0;
const int has_bio_weapons = 1<<1;
const int has_chem_weapons = 1<<2;
// ...
int dangerous_mask = has_nukes | has_bio_weapons | has_chem_weapons;
bool is_dangerous = (country->flags & dangerous_mask) == dangerous_mask;
```
It is even better than flood of 0's. | In C++14 you will be able to use binary literals with the following syntax:
```
0b010101010 /* more zeros and ones */
```
This feature is already implemented in the latest `clang` and `gcc`. You can try it if you run those compilers with `-std=c++1y` option. | Binary literals? | [
"",
"c++",
""
] |
I created the following code to calculate the duration between two timestamps which can come in two different formats:
```
public class dummyTime {
public static void main(String[] args) {
try {
convertDuration("2008-01-01 01:00 pm - 01:56 pm");
convertDuration("2008-01-01 8:30 pm - 2008-01-02 09:30 am");
} catch (Exception e) {
e.printStackTrace();
}
}
private static String convertDuration(String time) throws Exception {
String ts[] = time.split(" - ");
SimpleDateFormat formatNew = new SimpleDateFormat("HH:mm");
Date beg, end;
String duration = null;
beg = getDateTime(ts[0]);
end = getDateTime(ts[1], beg);
duration = formatNew.format(end.getTime() - beg.getTime());
System.out.println(duration + " /// " + time + " /// " + beg + " /// "
+ end);
return duration;
}
private static Date getDateTime(String dateTime) throws ParseException {
DateFormat formatOldDateTime = new SimpleDateFormat(
"yyyy-MM-dd hh:mm aa");
DateFormat formatOldTimeOnly = new SimpleDateFormat("hh:mm aa");
Date date = null;
try {
date = formatOldDateTime.parse(dateTime);
} catch (ParseException e) {
date = formatOldTimeOnly.parse(dateTime);
}
return date;
}
private static Date getDateTime(String dateTime, Date orig)
throws ParseException {
Date end = getDateTime(dateTime);
if (end.getYear() == 70) {
end.setYear(orig.getYear());
end.setMonth(orig.getMonth());
end.setDate(orig.getDate());
}
return end;
}
}
```
The output it generates is:
```
01:56 /// 2008-01-01 01:00 pm - 01:56 pm /// Tue Jan 01 13:00:00 CET 2008 /// Tue Jan 01 13:56:00 CET 2008
14:00 /// 2008-01-01 8:30 pm - 2008-01-02 09:30 am /// Tue Jan 01 20:30:00 CET 2008 /// Wed Jan 02 09:30:00 CET 2008
```
My questions are:
1. Why are the results always wrong (always +1h)?
2. What is a better way to identify timestamps without a day? `== 70` doesn't look good, and the getDay & setDay functions are deprecated too.
Many many thanks, this issue has been driving me crazy for several hours. | 1. At my computer this is off by 2 hours, because I'm at GMT+2, and you're probably at GMT+1. Note that `formatNew.format(end.getTime() - beg.getTime());` receives date, i.e. treats your 56 minutes as 1970-01-01-00:56:00 GMT+1. To fix this quickly, call `formatNew.setTimeZone( TimeZone.getTimeZone( "GMT" ) );`
2. For the 2nd item, you can check if format-yyyy-MM-dd failed (you catch a parse error), and this is how you know that there's no year. | You are formatting time of day, not number of hours and minutes. As you are in the CET timezone [Central European Time] in winter, that is one hour different from UTC ("GMT").
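The pitfall both answers point at — treating a millisecond difference as if it were a wall-clock time and then formatting it — disappears when a real duration type is used. A quick check of the question's two inputs, shown in Python for illustration (the Java equivalent would be computing hours/minutes from the difference directly, or using Joda-Time's Duration):

```python
from datetime import datetime

FMT = "%Y-%m-%d %I:%M %p"

beg = datetime.strptime("2008-01-01 01:00 pm", FMT)
end = datetime.strptime("2008-01-01 01:56 pm", FMT)
duration = end - beg          # a timedelta: a true duration, no time zone involved
assert str(duration) == "0:56:00"

beg = datetime.strptime("2008-01-01 08:30 pm", FMT)
end = datetime.strptime("2008-01-02 09:30 am", FMT)
assert str(end - beg) == "13:00:00"
```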
You probably want to be using `Calendar` instead of `Date`. Or [Joda-Time](http://joda-time.sourceforge.net/). | Java: calculating duration | [
"",
"java",
"datetime",
"duration",
""
] |
What is some good software that is written with WPF?
I keep hearing about what it can do. I would like to see it in action. | Some samples
* <http://bigpicture.vertigo.com/obama/>
* <http://www.photosuru.com/>
* <http://seattlepi.nwsource.com/newsreader/>
Also take a look at [scott hanselman](http://www.hanselman.com/blog/CategoryView.aspx?category=WPF) blog filtered by WPF tag, he writes very good articles and he also has [podcasts](http://www.hanselminutes.com/) | [This site](http://www.actiprosoftware.com/support/ResourceGuides/WPF/ViewCategory.aspx?ResourceGuideCategoryID=26) lists quite a lot of WPF applications (many from [codeproject.com](http://www.codeproject.com), but also from various other sources). | Where can I see some good examples of what WPF is capable of? | [
"",
"c#",
".net",
"wpf",
""
] |
How do I verify that method was NOT called in [Moq](http://code.google.com/p/moq/)?
Does it have something like AssertWasNotCalled?
UPDATE: Starting from Version 3.0, a new syntax can be used:
```
mock.Verify(foo => foo.Execute("ping"), Times.Never());
``` | **UPDATE**: Since version 3, check the update to the question above or Dann's answer below.
Either, make your mock strict so it will fail if you call a method for which you don't have an expect
```
new Mock<IMoq>(MockBehavior.Strict)
```
Or, if you want your mock to be loose, use the .Throws( Exception )
```
var m = new Mock<IMoq>(MockBehavior.Loose);
m.Expect(a => a.moo()).Throws(new Exception("Shouldn't be called."));
``` | Run a verify after the test with the `Times.Never()` option.
```
_mock.Object.DoSomething();
_mock.Verify(service => service.ShouldntBeCalled(), Times.Never());
``` | How to verify that method was NOT called in Moq? | [
"",
"c#",
".net",
"moq",
""
] |
I need to declare the query variable outside the switch statement; that way I would only have one variable to handle the different results of the LINQ query. Please see the code below. The problem is that I cannot declare an inferred (var) variable without initializing it.
```
var query; // won't compile: implicitly typed locals must be initialized
switch (filter)
{
    case 1:
        query = from c in Customers
                where c.Type == "ABC"
                select c;
        break;
    case 2:
        query = from c in Customers
                where c.Type == "CDE"
                select c;
        break;
}
foreach(var custrow in query)
{
//Do Logic
}
}
``` | You probably want `IEnumerable<Customer>` or `IQueryable<Customer>` instead of var. | No because they have to be in initialized at the same time as they are declared, the complier doesn't know what type to assign to the variable.
I think you would want something like.
```
IQueryable<Customers> query;
```
that is assuming that the query returns a IQueryable of Customers.
Overall, a design something like this may be better:
```
IQueryable<Customers> query;
switch (filter)
{
case 1:
query = Customers.Where(c => c.Type == "ABC");
break;
case 2:
query = Customers.Where(c => c.Type == "CDE");
break;
}
foreach(var custrow in query)
{
//Do Logic
}
``` | Can I declare a Global Inferred variable in C#? | [
"",
"c#",
"linq",
"types",
"type-inference",
""
] |
Is there a working real world project using Clojure that take advantage of multi-core system that I can take a look?
I want to know if Clojure is still in a toy-language phase or if it's really "happening soon".
The designer, Rich Hickey, uses the language himself for his work so I wouldn't be surprised if you find it very production ready. The API at this point isn't changing much since things are headed toward a 1.0 release, so you don't really have to worry too much about hte rug being pulled out under your feet. Probably the least stable part right now, if you're going to do your Lisping in Emacs, is SLIME and swank-clojure. It's pretty imperative that you use latest versions of both directly from their repositories (I use the GitHub mirrors).
Get it while it's hot. | On the Clojure group today there's a [thread](http://groups.google.com/group/clojure/browse_thread/thread/85649a134cf22655?tvc=2&pli=1) about a fellow doing just that. It looks like a lot of middleware glue for a vet hospital, but unfortunately there's no data up on it (it says this will change in the next few months). | Is there a working real world project using Clojure that take advantage of multi-core system that I can take a look? | [
"",
"java",
"clojure",
""
] |
I'm running PHP 5.2.8 on Apache 2.2.11 on Windows XP SP 3. In php.ini,
```
extension_dir = "C:\Program Files\PHP\ext"
extension=php_mysql.dll
```
In error.log:
```
PHP Warning: PHP Startup: Unable to load dynamic library 'C:\\Program Files\\PHP\\ext\\php_mysql.dll' - The specified module could not be found.\r\n in Unknown on line 0
```
php\_mysql.dll is definately located in the extension\_dir, and is the version that PHP 5.2.8 is bundled with. What's up? | Try adding a **trailing slash** to your `extension_dir` path.
This is what I am using in my WAMP setup:
```
extension_dir = "D:\php\ext\"
; ...snip...
extension=php_gd2.dll
``` | You need to make the MySQL client library available to PHP. This is done by copying the libmysql.dll file from the PHP package root directory into a directory on your system within the Windows PATH.
The quickest way to do this is to copy libmysql.dll into your C:\Windows\System directory but as noted in the manual, this is not recommended and is just a quick fix to see if it is actually the problem here.
<https://www.php.net/manual/en/mysql.installation.php> says:
> MySQL is no longer enabled by default,
> so the php\_mysql.dll DLL must be
> enabled inside of php.ini. Also, PHP
> needs access to the MySQL client
> library. A file named libmysql.dll is
> included in the Windows PHP
> distribution and in order for PHP to
> talk to MySQL this file needs to be
> available to the Windows systems PATH.
> See the FAQ titled "How do I add my
> PHP directory to the PATH on Windows"
> for information on how to do this.
> Although copying libmysql.dll to the
> Windows system directory also works
> (because the system directory is by
> default in the system's PATH), it's
> not recommended.
The best option is to [add the PHP directory into your Windows PATH, which is explained in this FAQ](https://www.php.net/manual/en/faq.installation.php#faq.installation.addtopath). | Error loading PHP modules | [
"",
"php",
"windows",
"apache",
""
] |
I have a class with implements 2 interfaces and inherits 1 class. So, generally it looks like this:
```
class T : public A, public IB, public IC {
};
```
There is one point in the code where I have an `IB *`, but could really use an `A *`. I was hoping that a dynamic cast would like this:
```
IB *b_ptr = new T; // it's really more complicated, but serves the example
A *a_ptr = dynamic_cast<A *>(b_ptr);
```
unfortunately, this doesn't work. Is there a proper way to do this? Or should I implement a work around? I've thought about having both `IB` and `IC` inherit virtually from `A`, but IIRC last time I tried that there were some complications that made it undesirable.
Any thoughts?
**EDIT**: oh yea, this is part of a plugin API, so unfortunately I don't have direct access to the `T` type where I need the `A *`. My example has them next to each other, but as mentioned, it's more complicated. Basically I have 2 shared libraries. `T` and `T1` (where I have an `IB *`) are both classes which implement a plugin API and are internal to the shared libraries.
To clarify: Here's a more specific example of my typical plugins (they are in separate libraries):
plugin A:
```
class PluginA : public QObject, public PluginInterface, public OtherInterface {
};
```
plugin B:
```
class PluginB : public QObject, public PluginInterface {
// in here, I have a PluginInterface *, but really could use a QObject *
// unfortunately, PluginB has absolutely no knowledge of the "PluginA" type
// it just so happens that my PluginInterface * pointer points to an object of type
// PluginA.
};
```
**EDIT**: I have a guess that the issue is that pluginA and pluginB are in different shared libraries. Perhaps the rtti doesn't cross module boundaries. I think this might be the case because people's examples seem to work fine in my tests. Specifically, pluginB has no "typeinfo for PluginA" if I do an "nm" on it. This may be the core of the issue. If this is the case, I'll simply have to work around it by either virtual inheritance or a virtual `cast_to_qobject()` function in one of my interfaces. | I finally figured it out, [Daniel Paull](https://stackoverflow.com/questions/441983/dynamic-cast-with-interfaces#442025) was correct in that a "sideways `dynamic_cast`" should be allowed. My problem was that my code involves shared libraries. The typeinfo from PluginA was not available in PluginB. My solution was to effectively add `RTLD_NOW` and `RTLD_GLOBAL` to my load process
technically it was
```
loader.setLoadHints(QLibrary::ResolveAllSymbolsHint | QLibrary::ExportExternalSymbolsHint);
```
because I'm using Qt's plugin system but same difference. These flags force all symbols from loaded libraries to be resolved immediately and be visible to other libraries. This made the typeinfo available to everyone that needed it. The `dynamic_cast` worked as expected once these flags were in place. | Does each class have at least one virtual method? If not, there's your problem. Adding a virtual destructor to each class should overcome the problem.
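This symbol-visibility problem isn't Qt-specific; any dlopen-based plugin loader has the same knob. As an aside (not from the original thread), CPython exposes exactly these flags for its own extension-module loading, which makes the fix easy to demonstrate on a Unix system:

```python
import os
import sys

# Equivalent of RTLD_NOW | RTLD_GLOBAL: symbols (and hence typeinfo/vtables)
# of every subsequently loaded extension module become visible to later loads.
flags = os.RTLD_NOW | os.RTLD_GLOBAL
old = sys.getdlopenflags()
sys.setdlopenflags(flags)
assert sys.getdlopenflags() == flags
sys.setdlopenflags(old)    # restore the default for the rest of the process
```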
The following happily worked for me:
```
class IC
{
public:
virtual ~IC() {}
};
class IB
{
public:
virtual ~IB() {}
};
class A
{
public:
virtual ~A() {}
void foo() { /* stick a breakpoint here to confirm that this is called */ }
};
class T : public A, public IB, public IC
{
public:
virtual ~T() {}
};
int main(void)
{
IB *b_ptr = new T;
A *a_ptr = dynamic_cast<A *>(b_ptr);
a_ptr->foo();
return 0;
}
```
EDIT:
After all the new info, and the unusual behavior (your code should just work!), does the following help? I've introduced an interface called IObject and use virtual inheritance to ensure that there is only one copy of this base class. Can you now cast to IObject and then to A?
```
class IObject
{
public:
virtual ~IObject() {}
};
class IC : virtual public IObject
{
public:
virtual ~IC() {}
};
class IB : virtual public IObject
{
public:
virtual ~IB() {}
};
class A : virtual public IObject
{
public:
virtual ~A() {}
void foo() { /* stick a breakpoint here to confirm that this is called */ }
};
class T : virtual public A, virtual public IB, virtual public IC
{
public:
virtual ~T() {}
};
int main()
{
IB *b_ptr = new T;
A *a_ptr = dynamic_cast<A *>( dynamic_cast<IObject *>(b_ptr) );
a_ptr->foo();
return 0;
}
```
I'm not suggesting that it's the right solution, but it might offer some info as to what's going on... | dynamic cast with interfaces | [
"",
"c++",
"casting",
"dynamic-cast",
""
] |
I am having an odd problem with my SQL Server express 2005 DB. It is running on a 2003 server. With VS2005 I can access the db without any problem. But when I try and run the application I get:
```
Cannot open user default database. Login failed. Login failed for user 'NT AUTHORITY\NETWORK SERVICE'......... .ldf may be incorrect.
```
I have ensured that NETWORK SERVICE has full access to the website folder and all its sub folders.
I have noticed that when I look at the properties of App\_Data it has a gray tick on the read-only checkbox. The .MDF file is not read only, but I cannot remove this gray tick. I have tried just clicking it till it's gone and applying changes, and I have tried ATTRIB *.* -r, but nothing seems to change it.
I am very confused.
Jon | The difference isn't related to ACLs on the folder but to that fact that the security context is mapped to a user in the DB. Your DB will have a mapping for your user account probably to dbo.
If the Network Service account is going to use the DB you need to create a user in the db to map to Machine account. | Are you actually using an instance of SQL Server Express, or just using a database file in the App\_Data folder?
If you are using an instance of SQL Server and you have [SQL Server Management Studio Express](http://go.microsoft.com/fwlink/?LinkId=65110) installed, you would do something like this to give the "Network Service" account access to the database (thinking about it, you should be able to point Management Studio at the mdf in the App\_Data folder as well, and do the same thing):
1. Open SQL Management Studio, connect to your database and open it in the Object Explorer.
2. At the Server level (MachineName\SQL Express) expand the "Security" section - check the user's not already in the "Logins" section below that.
3. If not, Right-click on "Logins", "New Login..."
4. Ensure that the login type is set to "Windows authentication", you can then either:
1. Hit "Search...", then "Advanced..." then "Find now", Double click on "NT Service", "Ok" the next dialog.
2. Just type "NT Authority\Network Service" into the "Login name:" box.
5. Then, in the "User Mapping" page (Selected from the left-hand pane), put a Tick next to the database that you want this user to access.
6. If you have database roles (for example if this database has the default ASP.NET membership/roles/profile tables etc installed), make sure you give the user the correct role - for example "aspnet\_Membership\_BasicAccess".
I hope that helps. | C# asp.net cannot open user default database .ldf may be incorrect | [
"",
"c#",
"asp.net",
"sql-server-express",
""
] |
Is it possible to find the greatest of two integers without any comparison? I found some solutions:
```
if (!(a / b)) // if a is less than b then division result will be zero.
{
    cout << " b is greater than a";
}
else if (!(a - b)) // we know a is greater than or equal to b now. check whether they are equal.
{
    cout << "a and b are equal";
}
else
    cout << "a is greater than b";
```
But if(c) or if(!c) is a comparison to zero. In addition, it doesn't work for negative numbers. In fact I need a solution that avoids any if statement. Instead I should use switch statements and arithmetic operators. Thanks. | Subtract them and check the sign using nasty bit twiddling hacks
<http://graphics.stanford.edu/~seander/bithacks.html>
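As a quick sketch of the subtract-and-check-sign idea (Python here for brevity; the 32-bit mask width is an assumption, and it misbehaves if `a - b` overflows 32 bits):

```python
def branchless_max32(a, b):
    # sign bit of (a - b) in 32-bit two's complement: 1 when a < b
    sign = ((a - b) & 0xFFFFFFFF) >> 31
    # pick b when the sign bit is set, a otherwise -- no comparison, no branch
    return b * sign + a * (1 - sign)

print(branchless_max32(3, 7))   # 7
print(branchless_max32(-2, 5))  # 5
```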
Don't do this in production code if the other programmers know where you live. | Here's a fun bit-twiddling version that doesn't have any conditional branches.
```
intptr_t g = (intptr_t)"greater"; // intptr_t (from <cstdint>) keeps the pointer intact on 64-bit
intptr_t l = (intptr_t)"less";
intptr_t e = (intptr_t)"equal";
int a = 7;
int b = 10;
const char *result = (const char*)((((a - b) >> 31) & l) | (((b - a) >> 31) & g) | ((~((a - b) | (b - a))) >> 31) & e);
cout << result;
``` | Comparing two integers without any comparison | [
"",
"c++",
"if-statement",
"comparison",
""
] |
I'm building a threaded comment system for a website of mine and I ran into a problem...
I have a list PULLED FROM A DATABASE that has a ID field and a Parent ID Field. The parent ID field can be null, but the ID field will NEVER be null.
Since this will be a threaded comment system, I organize the list so that an item's ID puts it at the top level, but if a parent ID exists, the item is inserted under that parent. This nesting can go on indefinitely: each second-level item also has an ID, and I want to insert any item whose parent ID matches that ID under it.
For example:
---1. Blah
--------2. Blah Blah -> ParentID=1
-----------3. Blah Blah -> parentID=2
-------------- 4. Blah Blah ->parentID=3
----------- 3.Blah Blah -> parentID=2
--------2. Blah Blah -> parentID=1
I think you get the point.
So here is what I have so far...
```
List<comment> finalList = new List<comment>();
for (int i = 0; i < getComments.Count(); i++)
{
    string item = getComments[i].parentComment;
    getComments[i].threadID = 1;
    finalList.Add(getComments[i]);
    for (int ii = 0; ii < getComments.Count(); ii++)
    {
        if (getComments[ii].commentID == item)
        {
            getComments[ii].threadID = 2;
            finalList.Add(getComments[i]);
        }
    }
}
```
It seems to sort it half way, but not truly... The ThreadID is of course how far it gets planted to the right. | Thanks all for your help guys. I do appreciate it.
I did though, find something by a guy that wrote absolutely everything for it.
<http://www.scip.be/index.php?Page=ArticlesNET23>
<http://www.scip.be/index.php?Page=ArticlesNET09>
<http://www.scip.be/index.php?Page=ArticlesNET18> | Given that you're using the Count() extension method instead of the Count property (which is a slight inefficiency in itself; using foreach would be better to start with though) you're presumably using .NET 3.5.
I don't think I fully understand your scheme - for instance, what is there to say that the comment with threadID=4 in your diagram goes under the first threadID=3 element instead of the second?
Without knowing much in the way of details of what you're after, in general I'd consider a commenting data structure with:
* CommentID: the ID of this entity
* RootID: the ID of the root element for the thread (so you can fetch all comments for a thread easily)
* ParentID: the CommentID of the parent for this comment, or null if it's the root element
* Timestamp: Or something else which would allow the child comments within one parent to be sorted appropriately.
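A hedged Python sketch of how the indentation level could be computed from parent links like those (the dict-of-parent-ids shape is an assumption for illustration):

```python
def comment_depths(parents):
    # parents maps comment_id -> parent_id (None for a thread root)
    depths = {}

    def depth(cid):
        if cid not in depths:
            parent = parents[cid]
            depths[cid] = 0 if parent is None else depth(parent) + 1
        return depths[cid]

    for cid in parents:
        depth(cid)
    return depths

# mirrors the example tree in the question: 1 -> 2 -> 3 -> 4, plus siblings
print(comment_depths({1: None, 2: 1, 3: 2, 4: 3, 5: 2, 6: 1}))
```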
Given that, it would be fairly easy to work out the indentation level, if that's what you're concerned about. If that sounds useful, I can go into more details - if not, please clarify the question. | How to build a threaded Comment System in C#? Help | [
"",
"c#",
"list",
"nested-sets",
"threaded-comments",
""
] |
What is the best way to populate a Django model with data from an external source?
E.g. I have a model Run, and the runs data is in an XML file, which changes weekly.
Should I create a view and call that view URL from a curl cronjob (with the advantage that that data can be read anytime, not only when the cronjob runs), or create a python script and install that script as a cron (with DJANGO \_SETTINGS \_MODULE variable setup before executing the script)? | There is excellent way to do some maintenance-like jobs in project environment- write a [custom manage.py command](http://docs.djangoproject.com/en/dev/howto/custom-management-commands/#howto-custom-management-commands). It takes all environment configuration and other stuff allows you to concentrate on concrete task.
And of course call it directly by cron. | You don't need to create a view, you should just trigger a python script with the appropriate [Django environment settings configured](https://stackoverflow.com/questions/383073/django-how-can-i-use-my-model-classes-to-interact-with-my-database-from-outside/383089#383089). Then call your models directly the way you would if you were using a view, process your data, add it to your model, then .save() the model to the database. | How to externally populate a Django model? | [
"",
"python",
"django",
"django-models",
""
] |
I'm working on an attendance entry form for a band. My idea is to have a section of the form to enter event information for a performance or rehearsal. Here's the model for the event table:
```
class Event(models.Model):
    event_id = models.AutoField(primary_key=True)
    date = models.DateField()
    event_type = models.ForeignKey(EventType)
    description = models.TextField()
```
Then I'd like to have an inline FormSet that links the band members to the event and records whether they were present, absent, or excused:
```
class Attendance(models.Model):
    attendance_id = models.AutoField(primary_key=True)
    event_id = models.ForeignKey(Event)
    member_id = models.ForeignKey(Member)
    attendance_type = models.ForeignKey(AttendanceType)
    comment = models.TextField(blank=True)
```
Now, what I'd like to do is to pre-populate this inline FormSet with entries for all the current members and default them to being present (around 60 members). Unfortunately, Django [doesn't allow initial values in this case.](http://groups.google.com/group/django-developers/browse_thread/thread/73af9e58bd7626a8)
Any suggestions? | So, you're not going to like the answer, partly because I'm not yet done writing the code and partly because it's a lot of work.
What you need to do, as I discovered when I ran into this myself, is:
1. Spend a lot of time reading through the formset and model-formset code to get a feel for how it all works (not helped by the fact that some of the functionality lives on the formset classes, and some of it lives in factory functions which spit them out). You will need this knowledge in the later steps.
2. Write your own formset class which subclasses from `BaseInlineFormSet` and accepts `initial`. The really tricky bit here is that you *must* override `__init__()`, and you *must* make sure that it calls up to `BaseFormSet.__init__()` rather than using the direct parent or grandparent `__init__()` (since those are `BaseInlineFormSet` and `BaseModelFormSet`, respectively, and neither of them can handle initial data).
3. Write your own subclass of the appropriate admin inline class (in my case it was `TabularInline`) and override its `get_formset` method to return the result of `inlineformset_factory()` using your custom formset class.
4. On the actual `ModelAdmin` subclass for the model with the inline, override `add_view` and `change_view`, and replicate most of the code, but with one big change: build the initial data your formset will need, and pass it to your custom formset (which will be returned by your `ModelAdmin`'s `get_formsets()` method).
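The kwargs plumbing in steps 2-4 boils down to a small pattern: pop `initial` before delegating upward, and curry it onto the class before the admin instantiates it. A plain-Python sketch of just that mechanism (the class names are stand-ins, not the real Django ones):

```python
from functools import partial

class BaseFormSet:
    # stand-in for a parent __init__ that cannot accept an 'initial' kwarg
    def __init__(self, data=None):
        self.data = data

class InitialInlineFormSet(BaseFormSet):
    def __init__(self, *args, **kwargs):
        # pop 'initial' so the parent never sees it, then delegate explicitly
        self.initial = kwargs.pop('initial', [])
        BaseFormSet.__init__(self, *args, **kwargs)

# the ModelAdmin would curry the per-request initial data onto the class
formset_class = partial(InitialInlineFormSet, initial=[{'member': 1, 'present': True}])
fs = formset_class(data=None)
print(fs.initial)
```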
I've had a few productive chats with Brian and Joseph about improving this for future Django releases; at the moment, the way the model formsets work just make this more trouble than it's usually worth, but with a bit of API cleanup I think it could be made extremely easy. | I spent a fair amount of time trying to come up with a solution that I could re-use across sites. James' post contained the key piece of wisdom of extending `BaseInlineFormSet` but strategically invoking calls against `BaseFormSet`.
The solution below is broken into two pieces: an `AdminInline` and a `BaseInlineFormSet`.
1. The `InlineAdmin` dynamically generates an initial value based on the exposed request object.
2. It uses currying to expose the initial values to a custom `BaseInlineFormSet` through keyword arguments passed to the constructor.
3. The `BaseInlineFormSet` constructor pops the initial values off the list of keyword arguments and constructs normally.
4. The last piece is overriding the form construction process by changing the maximum total number of forms and using the `BaseFormSet._construct_form` and `BaseFormSet._construct_forms` methods.
Here are some concrete snippets using the OP's classes. I've tested this against Django 1.2.3. I highly recommend keeping the [formset](http://docs.djangoproject.com/en/1.2/topics/forms/formsets/) and [admin](http://docs.djangoproject.com/en/1.2/ref/contrib/admin/#inlinemodeladmin-objects) documentation handy while developing.
**admin.py**
```
from django.utils.functional import curry
from django.contrib import admin
from example_app.forms import *
from example_app.models import *

class AttendanceInline(admin.TabularInline):
    model = Attendance
    formset = AttendanceFormSet
    extra = 5

    def get_formset(self, request, obj=None, **kwargs):
        """
        Pre-populating formset using GET params
        """
        initial = []
        if request.method == "GET":
            #
            # Populate initial based on request
            #
            initial.append({
                'foo': 'bar',
            })
        formset = super(AttendanceInline, self).get_formset(request, obj, **kwargs)
        formset.__init__ = curry(formset.__init__, initial=initial)
        return formset
```
**forms.py**
```
from django.forms import formsets
from django.forms.models import BaseInlineFormSet

class BaseAttendanceFormSet(BaseInlineFormSet):
    def __init__(self, *args, **kwargs):
        """
        Grabs the curried initial values and stores them into a 'private'
        variable. Note: the use of self.__initial is important; using
        self.initial or self._initial will be erased by a parent class.
        """
        self.__initial = kwargs.pop('initial', [])
        super(BaseAttendanceFormSet, self).__init__(*args, **kwargs)

    def total_form_count(self):
        return len(self.__initial) + self.extra

    def _construct_forms(self):
        return formsets.BaseFormSet._construct_forms(self)

    def _construct_form(self, i, **kwargs):
        if self.__initial:
            try:
                kwargs['initial'] = self.__initial[i]
            except IndexError:
                pass
        return formsets.BaseFormSet._construct_form(self, i, **kwargs)

AttendanceFormSet = formsets.formset_factory(AttendanceForm, formset=BaseAttendanceFormSet)
``` | Pre-populate an inline FormSet? | [
"",
"python",
"django",
"django-forms",
""
] |
I'm trying to create a simple toggling sidebar using jQuery, where it expands and contracts when a button is pressed. The button also changes to another type of button when pressed. The sidebar will expand, but for some reason, it will not move back to its original position.
You can see a copy of the javascript and html at <http://www.jqueryhelp.com/viewtopic.php?p=4241#4241>
Here is the working code, thanks Bendeway! :D
```
$(".btn-slide").live('click', function(e){
    e.preventDefault();
    $("#sidebar").animate({opacity: "show", left: 250}, "slow");
    $(this).toggleClass("btn-slide").toggleClass("active");
});

$(".active").live('click', function(e){
    e.preventDefault();
    $("#sidebar").animate({opacity: "hide", left: 100}, "slow");
    $(this).toggleClass("btn-slide").toggleClass("active");
});
``` | Try using `left` with a negative number instead of `right`. In addition, I would recommend using `preventDefault` instead of returning false.
```
$(".active").click(function(e){
    e.preventDefault();
    $("#sidebar").animate({opacity: "hide", left: -250}, "slow");
    $(this).toggleClass("btn-slide");
});
```
## Update
Another piece I just noticed: you're attaching a click event to the .active button when the document is ready, but no .active button exists at that point; it only appears after you change the class. There are a couple of options here.
The first is to use the new [live](http://docs.jquery.com/Events/live) feature of jQuery 1.3:
```
$(".btn-slide").live('click', function(e){
    e.preventDefault();
    $("#sidebar").animate({opacity: "show", left: 250}, "slow");
    $(this).toggleClass("btn-slide").toggleClass("active");
});

$(".active").live('click', function(e){
    e.preventDefault();
    $("#sidebar").animate({opacity: "hide", left: -250}, "slow");
    $(this).toggleClass("btn-slide").toggleClass("active");
});
```
The other option would be to set the click event on a different selector (e.g. on the id, maybe).
```
<span>News <img src="img/overlay.png" id="sliderButton" class="btn-slide" alt="" /></span>
```
then use this to handle the click
```
$("#sliderButton").click(function(e){
    e.preventDefault();
    // .is() returns a boolean, so use .filter() to get a chainable set
    $(this).filter('.btn-slide').each(function() {
        $("#sidebar").animate({opacity: "show", left: 250}, "slow");
    });
    $(this).filter('.active').each(function() {
        $("#sidebar").animate({opacity: "hide", left: -250}, "slow");
    });
    $(this).toggleClass("active").toggleClass('btn-slide');
});
```
or even more concise
```
$("#sliderButton").click(function(e){
    e.preventDefault();
    var animationSettings = {opacity: "show", left: 250};
    if ($(this).hasClass('active'))
    {
        animationSettings = {opacity: "hide", left: -250};
    }
    $("#sidebar").animate(animationSettings, "slow");
    $(this).toggleClass("active").toggleClass('btn-slide');
});
```
The final option that I can think of would be to set the click events after you change them, but I wouldn't do that so I'm not going to supply a sample.
Lastly, I would put an alert into your active callback and make sure that your active button event is actually firing. | The way your logic is written, I think you need to do a 'toggleClass' on both classes inside your click handlers, which will add one and remove the other. For example, when your "active" item is clicked you toggle (add) the btn-slide class, but this will leave the "active" class in place too.
Of course instead of using "toggleClass" you could also use "addClass" and "removeClass" to be more explicit.
I recommend using a tool like Firebug to watch what's happening inside your DOM.
```
$(document).ready(function(){
    $(".btn-slide").click(function(){
        $("#sidebar").animate({opacity: "show", left: "250"}, "slow");
        $(this).toggleClass("active"); // add class
        $(this).toggleClass("btn-slide"); // remove class
        return false;
    });
});

$(document).ready(function(){
    $(".active").click(function(){
        $("#sidebar").animate({opacity: "hide", right: "250"}, "slow");
        $(this).toggleClass("active"); // remove class
        $(this).toggleClass("btn-slide"); // add class
        return false;
    });
});
``` | I've got a problem with a toggling sidebar using jQuery | [
"",
"javascript",
"jquery",
"toggle",
"sidebar",
""
] |
I have an m2m field, let's say it has the name 'relations', and I want to allow the user to send as many relations as he wants. I add a new input to the HTML with JavaScript using the same name, like so:
```
<input type='text' name='relations' value='a' />
<input type='text' name='relations' value='b' />
```
In cleaned\_data I receive only the value of the second input ('b'). How can I receive both? | I don't know how to do that with Forms, but if you want to grab the values in the raw way, here's how I'd do it:
```
relations = request.POST.getlist('relations')
``` | You don't need to grab all the raw values, you can just get the specific data by using element name like this:
```
relations = request.POST.getlist('relations')
```
That will return a list of values in the `relations` input. | How to send multiple input field values with same name? | [
"",
"python",
"django",
"django-forms",
""
] |
I am trying to optimize a small, highly used function which uses the high bits in an unsigned short int to indicate the values of an array to sum together. At first I was using the obvious approach shown below. Please note that loop unrolling is not explicitly shown as it should be done by the compiler.
```
int total = 0;
for (unsigned short mask = 0x0001, j = 0; mask != 0; mask <<= 1, j++) {
    if (i & mask) {
        total += value[j];
    }
}
```
However, later I thought it might be better to remove the branching to help CPU pipelining and came up with the following.
```
int total = 0;
for (unsigned short mask = 0x0001, j = 0; mask != 0; mask <<= 1, j++) {
    total += ((i & mask) != 0) * value[j];
}
```
Note that since (i & mask) does not result in a boolean answer, the comparison with 0 forces the result to be either 1 or 0. Although this second approach eliminates the if-statement from this section of the code, the second solution needs to run a multiplication of 0 or 1 on every iteration in addition to the rest of the equation.
Which code will run faster? | You could make it branchless without a multiply. It looks like for each bit set you are using that bit position as an index into an array.
First, you can easily extract bits set with:
```
unsigned short set_mask = i & -i;
i &= i - 1;
```
Then, you can get the bit index by counting the bits set in `(set_mask - 1)`. There's a constant time formula for this.
Some platforms also have an intrinsic to get the bit index of a bit set which is probably faster. x86 has `bsr`, PPC has `cntlz`.
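Putting those pieces together (isolate the lowest set bit, clear it, index by its position), here is a hedged Python sketch of the loop; `bit_length()` stands in for `bsr`/`cntlz`:

```python
def masked_sum(i, value):
    # visits only the set bits of i instead of all 16 mask positions
    total = 0
    while i:
        low = i & -i                          # isolate lowest set bit
        total += value[low.bit_length() - 1]  # bit index of that bit
        i &= i - 1                            # clear lowest set bit
    return total

print(masked_sum(0b1011, [1, 2, 3, 4]))  # value[0] + value[1] + value[3] = 7
```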
So the answer is the branchless multiplyless version is probably fastest :) | > Which code will run faster?
Test it to find out.
Also, look at the assembly-language version of the code which the compiler emits, because there you might see things in it that surprise you, and which hint at further optimizations (for example, using `short` as you are doing can require more instructions than using the machine's natural integer size).
"",
"c++",
"c",
"optimization",
"if-statement",
"branch",
""
] |
I've implemented my own copy of the model-view-presenter pattern (in the vein of the Web Client Software Factory) so I can leverage my own DI framework instead of being tied to WCSF's ObjectBuilder, which I had numerous problems with. I've come up with a few ways to do it, but none of them particularly makes me happy. I wanted to know if anyone else had some other ideas.
## Solution #1a
Uses an HttpModule to intercept context.PreRequestHandlerExecute and call ObjectFactory.BuildUp(HttpContext.Current.Handler).
```
public partial class _Default : Page, IEmployeeView
{
    private EmployeePresenter _presenter;

    private EmployeePresenter Presenter
    {
        set
        {
            _presenter = value;
            _presenter.View = this;
        }
    }
}
```
## Solution #1b
Call BuildUp in Page_Load instead of using an HttpModule.
```
public partial class _Default : Page, IEmployeeView
{
    private EmployeePresenter _presenter;

    private EmployeePresenter Presenter
    {
        set
        {
            _presenter = value;
            _presenter.View = this;
        }
    }

    protected void Page_Load(object sender, EventArgs e)
    {
        ObjectFactory.BuildUp(this);
    }
}
```
## Solution #1c
Access the presenter through a property, allowing the getter to call BuildUp if needed.
```
public partial class _Default : Page, IEmployeeView
{
    private EmployeePresenter _presenter;

    public EmployeePresenter Presenter
    {
        get
        {
            if (_presenter == null)
            {
                ObjectFactory.BuildUp(this);
            }
            return _presenter;
        }
        set
        {
            _presenter = value;
            _presenter.View = this;
        }
    }
}
```
## Solution #2
```
public partial class _Default : Page, IEmployeeView
{
    private EmployeePresenter _presenter;

    private EmployeePresenter Presenter
    {
        get
        {
            if (_presenter == null)
            {
                _presenter = ObjectFactory.GetInstance<EmployeePresenter>();
                _presenter.View = this;
            }
            return _presenter;
        }
    }
}
```
## Solution #2b
```
public partial class _Default : Page, IEmployeeView
{
    private EmployeePresenter _presenter;

    private EmployeePresenter Presenter
    {
        get
        {
            if (_presenter == null)
            {
                Presenter = ObjectFactory.GetInstance<EmployeePresenter>();
            }
            return _presenter;
        }
        set
        {
            _presenter = value;
            _presenter.View = this;
        }
    }
}
```
**Edit**: Added solution 1c, 2b | I'd use solution #1b, and create a [layer supertype](http://martinfowler.com/eaaCatalog/layerSupertype.html) for all the pages, in order to DRY the presenter initialization a bit more, like this:
Page code:
```
public partial class _Default : AbstractPage, IEmployeeView
{
    private EmployeePresenter presenter;

    private EmployeePresenter Presenter
    {
        set
        {
            presenter = value;
            presenter.View = this;
        }
    }

    protected override void Do_Load(object sender, EventArgs args)
    {
        //do "on load" stuff
    }
}
```
Abstract Page code:
```
public abstract class AbstractPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        ObjectFactory.BuildUp(this);
        this.Do_Load(sender, e);
        //template method, to enable subclasses to mimic "Page_load" event
    }

    //Default Implementation (do nothing)
    protected virtual void Do_Load(object sender, EventArgs e) { }
}
```
With this solution you have the presenter initialization (created by the ObjectFactory) in one class only; if you need to modify it later, you can do it easily.
**Edit:**
**Should Do\_Load be *abstract* or *virtual* ?**
[Template Method](http://en.wikipedia.org/wiki/Template_method) originally states that the method should be Abstract, in order to force subclasses to implement it, adhering to the superclass contract. (see wikipedia example of "Monopoly" < "Game").
On the other hand in this particular case, we don't want to force the user class to redefine our method, but give it the chance to do so. If you declare it abstract, many classes will be obliged to redefine the method just to leave it empty (this is clearly a code smell). So we provide a sensible default (do nothing) and make the method virtual. | I have been using a base page class with:
```
protected override void OnInit(EventArgs e)
{
    StructureMap.ObjectFactory.BuildUp(this);
    base.OnInit(e);
}
```
The base class approach works on user controls as well; that alone kept me from using the module (I didn't want to have two ways to set it up).
For the page it is:
```
public partial class Employee : View, IEmployeeView
{
    public ViewPresenter Presenter { get; set; }

    private void Page_Load(object sender, EventArgs e) { }
}
```
I inject the view through the constructor. To avoid the circular reference issue on the structuremap config, just use this helper method:
```
static T GetView<T>()
{
    return (T) HttpContext.Current.Handler;
}
```
On the structuremap config use a convention for both the presenter and the view injection. | Presenter injection in Model-View-Presenter pattern with StructureMap | [
"",
"c#",
"dependency-injection",
"structuremap",
"mvp",
""
] |
I want to avoid hardcoding the port number as in the following:
```
httpd = make_server('', 8000, simple_app)
```
The reason I'm creating the server this way is that I want to use it as a 'kernel' for an Adobe AIR app so it will communicate using PyAMF. Since I'm running this on the client side it is very possible that any port I define is already in use. If there is a better way to do this and I am asking the wrong question please let me know. | The problem is that you need a known port for the application to use. But if you give a port number of 0, I believe the OS will provide you with the first available unused port. | > The problem is that you need a known port for the application to use. But if you give a port number of 0, I believe the OS will provide you with the first available unused port.
You are correct, sir. Here's how that works:
```
>>> import socket
>>> s = socket.socket()
>>> s.bind(("", 0))
>>> s.getsockname()
('0.0.0.0', 54485)
```
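The same trick should carry over to `make_server` directly: bind to port 0 and read the assigned port back from the server object (a sketch; error handling omitted):

```python
from wsgiref.simple_server import make_server

def simple_app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello']

# port 0 lets the OS pick a free port; the real port is on the server object
httpd = make_server('', 0, simple_app)
port = httpd.server_address[1]
print(port)
httpd.server_close()
```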
I now have a socket bound to port 54485. | How do I create an HTTP server in Python using the first available port? | [
"",
"python",
"httpserver",
""
] |
How do I benchmark the performance of my web applications?
Is there a way to find out the bottlenecks in a web application?
EDIT: I am not asking about any front end tweaks like images, css etc. What I want to know is how to profile the back end of the application so that I will know which methods/queries to modify to increase the performance. | Regarding bottlenecks on the application server, you can use a [profiling tool](http://en.wikipedia.org/wiki/Performance_analysis) to see how much time is spent in each part of your code, how much memory is used, etc. For PHP, [webgrind](http://code.google.com/p/webgrind/) seems to be a popular, GUI-based way of profiling. Something like [dotTrace](http://www.jetbrains.com/profiler/index.html) would do the same thing for an ASP.NET app. Note that when it comes to databases, profiling tools like this will only show you which database queries are slow--not *why* they are slow. For that, you'd need to look into database-specific profiling...
Another aspect of web app bottlenecks is how much time it actually takes a browser to download everything (CSS and JavaScript imports, images, etc.) and render the page. There are several companies like [Keynote](http://www.keynote.com/) who have bots that will hit your site from all around the world, analyze the performance, and give you recommendations about changes you can make to get the output of your app *to the browser and rendered as quickly as possible* (e.g., "use gzip compression and put your JavaScript at the end of the page instead of the head", etc.). You can also do this yourself on a much smaller scale, of course. For example, Firefox plug-ins like [Jiffy](https://addons.mozilla.org/en-US/firefox/addon/7613) and [YSlow](https://addons.mozilla.org/en-US/firefox/addon/5369) will do the job. | [Tracing](http://msdn.microsoft.com/en-us/library/y13fw6we(VS.71).aspx) is a great start | What are the ways to find bottlenecks in a web application? | [
"",
"php",
"asp.net",
"performance",
"web-applications",
""
] |
I have Visio 2007 and I really like it. However, it doesn't seem to have UML model/datatypes for Java. Is there some template I can download for Java? Or should I just forget about Visio altogether and get an Eclipse plugin?
Thanks! | The latter is a better option, IMHO. Further, I don't think UML models should be specific to Java; I am not aware of any Java-specific UML variants.
I found this while searching over the net. [UML Models: MS Visio 2007](http://office.microsoft.com/en-us/visio/HP012088141033.aspx). | I had the same question, but then I messed around with it until I figured it out. Now, bear with me; this is the first time I have ever posted a response to an answer. Since I figured it out I thought I should transfer my brain’s data into your brain (eww.. we’re interfacing).
Now to business... You can actually make up your own package and data types. What I did was I went to the `Model Explorer` tab on the left. Then, I right-clicked on `IDL Data Types`, and then I clicked `Duplicate` (since I didn’t want to make my own from scratch and it pretty much has what I want). It will then make a new package called "Package 1". Right-click on it and select `Rename`; I renamed mine to “Java Data Types” since that is what I am making, but you can name it “Underpants” if you want. Then, I opened my new package and right-clicked each data type, one at a time, and selected `Properties`. Then it will open up the `UML Property Editor`. From there, click on `Name`, then remove the word “idl” from inside the brackets, so “boolean(idl)” now says ”boolean()”.
Now, one other thing: There is no `int` data type, so you need to make one. It is pretty easy; just right-click on your newly created folder, click `New`, then click `Data Type`. It should open the `UML Datatypes Properties` window. Now, click inside the `Name` tab, type in “int()”, then click `OK`. And you’re all done.
See, that wasn’t so bad. Heck if I can do it, you definitely should be able to do it. I hope this was understandable and helpful. If not, well I guess I have failed you and I should now fall on my sword to save my honor. | Is there a way to draw UML in Visio for Java? | [
"",
"java",
"uml",
"visio",
""
] |