| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
If I'm deploying to servers with WebSphere 6.1 (Java 1.5), should I use IBM's JDK on my build box? Or will Sun's JDK compile to the same binary?
If I should use IBM's, where can I get the Windows x64 version?
|
I would try to keep development as close to production as possible. IBM's and Sun's JDKs certainly both satisfy the SDK certification, but they are by no means identical: their instrumentation and memory management are at least slightly different. If nothing else, the bugs in the two JDKs will be different, and your code may only trip over them in one scenario vs. another. It'll also probably only happen at 4 am when the moon is full, especially when you have company over.
I can't tell you where to get IBM's JDK, but if you've got a WebSphere license at your company, you should have a contact at IBM who can get you a link to that JDK.
Good luck, and always try to minimize differences where possible.
|
It should not make any difference. It will probably not be exactly the same binary, but it will be 100% compatible. I assume you're using external libraries anyway, like log4j or maybe Hibernate, and those are not built using the IBM JDK.
There are differences in the JREs, however. For example, I remember that when I listed methods or fields of a class using reflection, the IBM JRE used to give them to me in a different order than the Sun one.
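That ordering difference is real: the Javadoc for `Class.getDeclaredMethods()` explicitly makes no guarantee about the order of the returned methods, so code must not rely on it. A quick sketch (the choice of `String` is arbitrary) to see what order your JRE happens to return:

```
import java.lang.reflect.Method;

public class MethodOrder {
    // Print declared methods in whatever order this JRE returns them;
    // the order is explicitly unspecified, so it may differ between vendors.
    public static void main(String[] args) {
        for (Method m : String.class.getDeclaredMethods()) {
            System.out.println(m.getName());
        }
    }
}
```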
|
Does it matter which vendor's JDK you build with?
|
[
"",
"java",
"build-process",
""
] |
I am using the [`jquery-ui-dialog`](https://jqueryui.com/dialog/) plugin
I am looking for a way to refresh the page in certain circumstances when the dialog is closed.
Is there a way to capture a close event from the dialog?
I know I can run code when the close button is clicked but that doesn't cover the user closing with escape or the x in the top right corner.
|
I have found it!
You can catch the close event using the following code:
```
$('div#popup_content').on('dialogclose', function(event) {
alert('closed');
});
```
Obviously I can replace the alert with whatever I need to do.
**Edit:** As of jQuery 1.7, `bind()` has been superseded by `on()`.
|
I believe you can also do it while creating the dialog (copied from a project I did):
```
dialog = $('#dialog').dialog({
modal: true,
autoOpen: false,
width: 700,
height: 500,
minWidth: 700,
minHeight: 500,
position: ["center", 200],
close: CloseFunction,
overlay: {
opacity: 0.5,
background: "black"
}
});
```
Note `close: CloseFunction`
|
jquery-ui-dialog - How to hook into dialog close event
|
[
"",
"javascript",
"jquery",
"jquery-ui",
"modal-dialog",
"jquery-ui-dialog",
""
] |
I'm just starting to get into Django, and of course as of last night one of the two new Python versions went final (2.6 obviously ;)) so I'm wondering if 2.6 plus Django is ready for actual use or do the Django team need more time to finish with tweaks/cleanup?
All the google searches I did were inconclusive, I saw bits about some initial test runs on beta 2 but nothing more recent seemed to show up.
Edit: <http://groups.google.com/group/django-developers/browse_thread/thread/a48f81d916f24a04>
They've confirmed there that 1.0 with 2.6 works fine as far as they know.
|
The impression I get is that 2.6 should work fine with Django 1.0. As found here: <http://simonwillison.net/2008/Oct/2/whatus/>
|
Note that there is currently no python-mysql adapter for Python 2.6. If you need MySQL, stick with 2.5 for now.
|
Django and Python 2.6
|
[
"",
"python",
"django",
""
] |
Does anyone know if it is possible to have a partial class definition in C++?
Something like:
file1.h:
```
class Test {
public:
int test1();
};
```
file2.h:
```
class Test {
public:
int test2();
};
```
To me it seems quite useful for defining multi-platform classes that share platform-independent functions, because inheritance is a cost to pay that is not useful for multi-platform classes.
I mean, you will never have two platform specializations instantiated at runtime, only at compile time. Inheritance could be useful to fulfill your public interface needs, but after that it won't add anything useful at runtime, just costs.
Also, you will have to use an ugly #ifdef to instantiate the class, because you can't make an instance of an abstract class:
```
class genericTest {
public:
int genericMethod();
};
```
Then let's say for win32:
```
class win32Test: public genericTest {
public:
int win32Method();
};
```
And maybe:
```
class macTest: public genericTest {
public:
int macMethod();
};
```
Let's say that both win32Method() and macMethod() call genericMethod(); you will then have to use the class like this:
```
#ifdef _WIN32
genericTest *test = new win32Test();
#elif MAC
genericTest *test = new macTest();
#endif
test->genericMethod();
```
Thinking about it for a while: the inheritance was only useful for giving both classes a genericMethod() that depends on the platform-specific one, but you pay the cost of calling two constructors because of that. You also have ugly #ifdefs scattered around the code.
That's why I was looking for partial classes: I could define the platform-dependent partial part at compile time. Of course, in this silly example I still need an ugly #ifdef inside genericMethod(), but there are other ways to avoid that.
|
This is not possible in C++, it will give you an error about redefining already-defined classes. If you'd like to share behavior, consider inheritance.
|
Try inheritance
Specifically
```
class AllPlatforms {
public:
int common();
};
```
and then
```
class PlatformA : public AllPlatforms {
public:
int specific();
};
```
|
Partial class definition on C++?
|
[
"",
"c++",
""
] |
Here's what I'd like to do:
I want to create a library project that contains my Resource files (ie, UI Labels and whatnot). I'd like to then use the resource library both in my UI and in my Tests. (Ie, basically have a common place for my resources that I reference from multiple projects.)
Unfortunately, because the StronglyTypedResourceBuilder (the .Net class which generates the code for Resources) makes resource files internal by default, I can't reference my strongly typed resources in the library from another project (ie, my UI or tests), without jumping through hoops (ie, something similar to what is described [here](http://www.codeproject.com/KB/dotnet/Localization.aspx), or writing a public wrapper class/function).
Unfortunately, both those solutions remove my ability to keep the references strongly-typed.
Has anyone found a straight-forward way to create strongly typed .Net resources that can be referenced from multiple projects?
I'd prefer to avoid having to use a build event in order to accomplish this (ie, to do something like replace all instances of 'internal' with 'public'), but that's basically my fall-back plan if I can't find an answer.
|
Not sure which version of Visual Studio you are using, so I will put steps for either one:
VS 2008 - When you open the resx file in design view, there is an option at the top, beside Add Resource and Remove Resource, called Access Modifier; it is a drop-down where you can change the generated code from internal to public.
VS 2005 - You don't have the option to generate the code like in VS 2008; it was a feature added later because of this headache. There are workarounds, though. You could use a third-party generator like this [tool](http://altinoren.com/CommentView,guid,5b69e6d6-86b0-4717-889a-94db78ff04b2.aspx), or you could use the [InternalsVisibleTo](http://msdn.microsoft.com/en-us/library/system.runtime.compilerservices.internalsvisibletoattribute.aspx) attribute in your AssemblyInfo.cs to grant the projects that need it access to the internal classes of your resource library.
|
Visual Studio 2008 allows you to select whether the generated resource class should be internal or public. There is also the [ResXFileCodeGeneratorEx](http://www.codeproject.com/KB/dotnet/ResXFileCodeGeneratorEx.aspx), which should do that for Visual Studio 2005.
|
Is there a way to make Strongly Typed Resource files public (as opposed to internal)?
|
[
"",
"c#",
"asp.net",
".net-2.0",
"resources",
"strong-typing",
""
] |
Is there a destructor for Java? I don't seem to be able to find any documentation on this. If there isn't, how can I achieve the same effect?
To make my question more specific: I am writing an application that deals with data, and the specification says that there should be a 'reset' button that brings the application back to its original, just-launched state. However, all data have to be 'live' unless the application is closed or the reset button is pressed.
Being usually a C/C++ programmer, I thought this would be trivial to implement. (And hence I planned to implement it last.) I structured my program such that all the 'reset-able' objects would be in the same class so that I can just destroy all 'live' objects when a reset button is pressed.
I was thinking if all I did was just to dereference the data and wait for the garbage collector to collect them, wouldn't there be a memory leak if my user repeatedly entered data and pressed the reset button? I was also thinking since Java is quite mature as a language, there should be a way to prevent this from happening or gracefully tackle this.
|
Because Java is a garbage collected language you cannot predict when (or even if) an object will be destroyed. Hence there is no direct equivalent of a destructor.
There is an inherited method called `finalize`, but this is called entirely at the discretion of the garbage collector. So for classes that need to explicitly tidy up, the convention is to define a *close* method and use finalize only for sanity checking (i.e. if *close* has not been called do it now and log an error).
There was [a question that spawned in-depth discussion of finalize](https://stackoverflow.com/questions/158174/why-would-you-ever-implement-finalize) recently, so that should provide more depth if required...
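A minimal sketch of that convention, using a hypothetical `Resource` class (the name, the `isClosed` helper, and the logging are illustrative):

```
public class Resource {
    private boolean closed = false;

    public boolean isClosed() { return closed; }

    // Explicit tidy-up method; callers are expected to invoke this themselves.
    public void close() {
        closed = true;
        // ... release underlying resources here ...
    }

    // Safety net only: runs (if at all) at the garbage collector's discretion.
    @Override
    protected void finalize() throws Throwable {
        try {
            if (!closed) {
                System.err.println("Resource was not closed; closing now");
                close();
            }
        } finally {
            super.finalize();
        }
    }
}
```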
|
Have a look at the [try-with-resources](http://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html) statement. For example:
```
try (BufferedReader br = new BufferedReader(new FileReader(path))) {
System.out.println(br.readLine());
} catch (Exception e) {
...
} finally {
...
}
```
Here the resource that is no longer needed is freed in the `BufferedReader.close()` method. You can create your own class that implements `AutoCloseable` and use it in a similar fashion.
This statement is more limited than `finalize` in terms of code structuring, but at the same time it makes the code simpler to understand and maintain. Also, there is no guarantee that a `finalize` method is called at all during the lifetime of the application.
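A sketch of such a class (the name `TempResource` and the `demo` helper are illustrative), showing that `close()` has already run by the time the try block exits:

```
public class TempResource implements AutoCloseable {
    private boolean open = true;

    public boolean isOpen() { return open; }

    // Called automatically at the end of a try-with-resources block,
    // even if the body throws.
    @Override
    public void close() {
        open = false;
    }

    public static boolean demo() {
        TempResource captured;
        try (TempResource r = new TempResource()) {
            captured = r;
            // ... use the resource ...
        }
        return captured.isOpen(); // false: close() has already run
    }
}
```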
|
Is there a destructor for Java?
|
[
"",
"java",
"garbage-collection",
"destructor",
"finalize",
""
] |
When logging in C#, how can I learn the name of the method that called the current method? I know all about `System.Reflection.MethodBase.GetCurrentMethod()`, but I want to go one step beneath this in the stack trace. I've considered parsing the stack trace, but I am hoping to find a cleaner more explicit way, something like `Assembly.GetCallingAssembly()` but for methods.
|
Try this:
```
using System.Diagnostics;
// Get call stack
StackTrace stackTrace = new StackTrace();
// Get calling method name
Console.WriteLine(stackTrace.GetFrame(1).GetMethod().Name);
```
one-liner:
```
(new System.Diagnostics.StackTrace()).GetFrame(1).GetMethod().Name
```
|
In C# 5, you can get that information using [caller info](https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/attributes/caller-information):
```
//using System.Runtime.CompilerServices;
public void SendError(string Message, [CallerMemberName] string callerName = "")
{
    Console.WriteLine(callerName + " called me.");
}
```
You can also get the `[CallerFilePath]` and `[CallerLineNumber]`.
|
How can I find the method that called the current method?
|
[
"",
"c#",
".net",
"logging",
"stack-trace",
"system.diagnostics",
""
] |
I wish to implement my software on a shareware basis, so that the user is given a maximum trial period of (say) 30 days in which to try out the software. On purchase, I intend the user to be given a randomly-generated key which, when entered, enables the software again.
I've never been down this route before, so any advice or feedback or pointers to 'standard' ways of how this is done would be much appreciated.
I do not anticipate users cheating by changing the system date or anything like that, though this is probably worth considering. Apologies if this topic has appeared before.
|
With regard to a randomly-generated key: how will you verify whether a key is legit or bogus if it is actually random? Have a look at the article "[Implementing a Partial Serial Number Verification System](http://www.brandonstaggs.com/2007/07/26/implementing-a-partial-serial-number-verification-system-in-delphi/)", as it is quite good and easy to implement in any language.
With regard to time trials, a basic solution would be to compare your main executable file's creation time to the current system time and act on the difference. This assumes your installer sets the file's creation time to the time of install, as opposed to preserving the time you compiled it! :)
|
If your software is really useful, you'll certainly find cracked copies on P2P before you see your first order. This will happen no matter how sophisticated the license enforcement code you implement is.
That said, just store the first-run date somewhere (maybe the registry, if on Windows) and after 30 days refuse to start, or just open a reminder window.
Don't worry about cheaters, they'll find a way around your restrictions no matter what. Worry about your honest customers and try hard not to make **their** life harder.
Eric Sink has written more about this [here (section 4)](http://www.ericsink.com/bos/Transparency.html).
|
Incorporating shareware restrictions in C++ software
|
[
"",
"c++",
"time",
"restriction",
"shareware",
""
] |
There are numerous times when an interface accepts similarly typed arguments that have separate domain-logic meanings:
```
public static class Database
{
public static bool HasAccess(string userId, string documentId) { return true; }
}
```
Now it's quite easy for someone to key in documentId instead of userId and vice versa. One could prevent that by abstracting the data type of the arguments:
```
public class UserId
{
public string Value { get; internal set; }
public static implicit operator string(UserId id) { return id.Value; }
}
public class DocumentId
{
public string Value { get; internal set; }
public static implicit operator string(DocumentId id) { return id.Value; }
}
public static class Database
{
public static bool HasAccess(UserId userId, DocumentId documentId) { return true; }
}
```
This way you get a nice compiler warning if you type in arguments out of order:
```
UserId e = new UserId() { Value = "a" };
DocumentId d = new DocumentId() { Value = "b" };
Database.HasAccess(d, e);
```
You also get the ability to change the abstracted type without affecting the rest of the system, but that's not very likely. Would abstracting the type be a good idea to get more type safety?
The question relates to C# solutions, but short descriptions in other languages are welcome.
EDIT: Removed the implicit cast from string and pointing finger at the C# tag.
|
Interesting, but I suspect that in many cases (particularly serialization / RPC APIs) this will only add confusion/overhead. Also - a minor implementation detail, but given this approach I'd make the wrappers fully immutable, not just "internal set" immutable.
TBH - I'd probably rather use unit tests for most of this... sometimes simple is beautiful. The other problem is that since you have implicit operators, it won't stop you doing the much more likely:
```
string user = "fred";
SomeMethodThatWantsADocument(user);
```
That should compile; the implicit operator undoes all your good work...
|
I think you answered your own question - better data integrity and validation make for a better system.
|
Is abstracting data type (sometimes) a good idea?
|
[
"",
"c#",
"design-patterns",
""
] |
As a function argument I get a `vector<double>& vec` (an output vector, hence non-const) with unknown length and values. I want to initialise this vector to a specific length n with all zeroes.
This will work
```
vec.clear();
vec.resize( n, 0.0 );
```
And this will work as well:
```
vec.resize( n );
vec.assign( n, 0.0 );
```
Is the second more efficient (because no memory deallocation/allocation is involved)? Is there a more efficient way to do this?
|
```
std::vector<double>(n).swap(vec);
```
After this, vec is guaranteed to have size and capacity n, with all values 0.0.
Perhaps the more idiomatic way since C++11 is
```
vec.assign(n, 0.);
vec.shrink_to_fit();
```
with the second line optional. In the case where `vec` starts off with more than `n` elements, whether to call `shrink_to_fit` is a trade-off between holding onto more memory than is required vs performing a re-allocation.
|
```
std::vector<double>(n).swap(vec);
```
This has the advantage of actually compacting your vector too. (In your first example, `clear()` does not guarantee to compact your vector.)
|
(re)initialise a vector to a certain length with initial values
|
[
"",
"c++",
"stl",
"initialization",
""
] |
I'm currently building a small web application that includes a fair amount of JavaScript. When I was prototyping the initial idea, I just hacked together a few functions to demonstrate how the application would eventually behave intending to go forward re-writing the JavaScript in an object-oriented nature.
Now that I'm getting into the implementation phase, I'm finding that creating object-oriented JavaScript for the sake of being object-oriented seems overkill - the project isn't likely to require any major modifications in the future that would warrant an object-oriented design. Instead, I'm finding that a set of concise, cohesive functions is working well.
So, with that said, and attempting to adhere to the KISS principle: when a set of functions provides a suitable solution to a problem, are there any other reasons worth considering for converting my code to an object-oriented design?
|
No. Although I personally find OOP more tasty, it is a means to an end, not an end in itself. There are many cases where procedural programming makes more sense than OOP, and converting for the sake of converting could be, as you said, overkill.
|
No, let it be and move forward - that is more productive, in my view.
|
Is it worth converting my functional JavaScript code to an object-oriented design?
|
[
"",
"javascript",
"software-design",
"web-applications",
""
] |
Consider the following method signatures:
```
public void fooMethod(Foo[] foos) { /*...*/ }
```
and
```
public void fooMethod(Foo... foos) { /*...*/ }
```
*Explanation: The former takes an array of Foo objects as an argument - `fooMethod(new Foo[]{...})` - while the latter takes an arbitrary number of arguments of type Foo and presents them as an array of Foos within the method - `fooMethod(fooObject1, fooObject2, ...)`.*
Java throws a fit if both are defined, claiming that they are duplicate methods. I did some detective work and found that the first declaration really requires an explicit array of Foo objects, and that's the only way to call that method. The second one actually accepts both an arbitrary number of Foo arguments AND an array of Foo objects.
So, the question is, since the latter method seems more flexible, are there any reasons to use the first example, or have I missed anything vital?
|
These methods are effectively the same.
This feature is called varargs, and it is a compiler feature: behind the scenes it translates to the former version.
There is a pitfall if you define a method that accepts `Object...` and you pass a single argument of type `Object[]`!
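That pitfall can be demonstrated in a short sketch (the names are illustrative): an `Object[]` passed to an `Object...` parameter is spread as the entire argument list, not wrapped as a single element, unless you cast it to `Object` explicitly:

```
public class VarargsPitfall {
    public static int count(Object... args) {
        return args.length;
    }

    public static void main(String[] args) {
        Object[] arr = {"a", "b", "c"};
        System.out.println(count(arr));           // 3: the array *is* the argument list
        System.out.println(count((Object) arr));  // 1: the cast wraps it as one element
        System.out.println(count("a", "b", "c")); // 3
    }
}
```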
|
I'd like to add to Shimi's explanation that another restriction of the varargs syntax is that the vararg must be the last declared parameter. So you can't do this:
```
void myMethod(String... values, int num);
```
This means that any given method can only have a single vararg parameter. In cases where you want to pass multiple arrays, you can use varargs for only one of them.
In practice, varargs are at their best when you are treating the args as an arbitrary number of distinct values, rather than as an array. Java 5 maps them to an array simply because that was the most convenient thing to do.
A good example is String.format(). Here, the varargs are matched against the format placeholders in the first argument.
|
Object arrays in method signatures
|
[
"",
"java",
"arrays",
"methods",
""
] |
I got this error when I checked out my database from source control. It might sound weird to check a SQL Server database into source control, but this is what I have done because this is just a personal project.
Anyone knows how to fix this?
|
Here's my finding.
As mentioned by other posters, you really don't want to check database files into and out of the source control.
But if you absolutely need to, and you have checked in the database files and are encountering the same error that I encountered, here is a workaround:
First, detach the database; then delete the .ldf file and reattach the database again.
This is how I solved my problem.
|
You really don't want to be checking database files into and out of source control - in SQL Server you have to detach the files for this to even work and you run all kinds of risks.
If you absolutely have to do this, you should version backups.
I recommend versioning a script which creates the entire database (tables, sprocs, views, etc.)
You can try creating a database from that data file using [CREATE DATABASE](http://msdn.microsoft.com/en-us/library/ms176061.aspx) with the `ATTACH_REBUILD_LOG` option, but I'm not confident it's going to work, since the files probably weren't detached properly.
|
One or more files do not match the primary file of the database (error 5173)
|
[
"",
"c#",
"sql-server",
""
] |
In C++, static library A is linked into dynamic libraries B and C. If a class, Foo, is used in A which is defined in B, will C link if it doesn't use Foo?
I thought the answer was yes, but I am now running into a problem with xlc_r7 where library C says Foo is an undefined symbol - which it is, as far as C is concerned. My problem with that is that library C isn't using the class referencing it. This links in Win32 (VC6) and OpenVMS.
Is this a linker discrepancy or a PBCAK?
**New info:**
1. B depends on C, but not vice versa.
2. I'm not using /OPT:REF to link on Windows and it links without issue.
|
When you statically link, two modules become one. So when you compile C and link A into it, it's as if you had copied all the source code of A into the source code of C and then compiled the combined source. So C.dll includes A, which has a dependency on B via Foo. You'll need to link C against B's link library in order to satisfy that dependency.
Note that according to your info, this will create a circular dependency between B and C.
|
Sounds like it's probably the linker (ld/unix): most versions of ld that I've used link the libraries in from left to right, and if a reference in the first one is required by a later one, the usual trick is to append the first library (or any required library) to the end of the command.
It's a try it and see....
|
Static and dynamic library linking
|
[
"",
"c++",
"linker",
"aix",
""
] |
I'm developing a little project plan, and I came to a point where I need to decide which local database system to use.
The input data is going to be stored on a webserver (hosting - MySQL DB). The idea is to build a process that downloads all necessary data (for example at midnight) and processes it. However, there are going to be many inputs and stages of processing, so I need to use some kind of local database to store the intermediate products of the application.
What local database system would you recommend to work with C# (.NET) application?
edit: The final product (information) should be easily exportable back to the hosted MySQL DB.
As Will mentioned in his answer - yes, I'm for a performance AND comfort of use.
|
I want to say Microsoft SQL Server 2005 Express, as it (almost) comes as the obvious choice when developing in .NET.
But it all depends on what previous DB skills you have. If you already know MySQL - and, as you already said, the data should be exported back to MySQL - why not use MySQL all the way?
|
For quick and dirty I'd go with [SQL Server Compact Edition](http://www.microsoft.com/sql/editions/compact/default.mspx). It's an in-process implementation of SQL Server, so it doesn't require you to install any other applications.
Back in the day, you'd use an Access database for this kind of thing. But Access databases kinda blow.
It wouldn't take much to upload your finished data back to the production server. If you're looking for a solution that automates that process, you'll probably need to look at hosting an instance of MySQL locally and using whatever replication services it provides.
|
Which database would you recommend to use with C# (.NET) application?
|
[
"",
"c#",
".net",
"database",
"local",
""
] |
Let's say I have the following `class X` where I want to return access to an internal member:
```
class Z
{
// details
};
class X
{
std::vector<Z> vecZ;
public:
Z& Z(size_t index)
{
// massive amounts of code for validating index
Z& ret = vecZ[index];
// even more code for determining that the Z instance
// at index is *exactly* the right sort of Z (a process
// which involves calculating leap years in which
// religious holidays fall on Tuesdays for
// the next thousand years or so)
return ret;
}
const Z& Z(size_t index) const
{
// identical to non-const X::Z(), except printed in
// a lighter shade of gray since
// we're running low on toner by this point
}
};
```
The two member functions `X::Z()` and `X::Z() const` have identical code inside the braces. This is duplicate code **and can cause maintenance problems for long functions with complex logic**.
Is there a way to avoid this code duplication?
|
Yes, it is possible to avoid the code duplication. You need to put the logic in the const member function and have the non-const member function call it, casting the return value back to a non-const reference (or pointer, if the function returns a pointer):
```
class X
{
std::vector<Z> vecZ;
public:
const Z& z(size_t index) const
{
// same really-really-really long access
// and checking code as in OP
// ...
return vecZ[index];
}
Z& z(size_t index)
{
// One line. One ugly, ugly line - but just one line!
return const_cast<Z&>( static_cast<const X&>(*this).z(index) );
}
#if 0 // A slightly less-ugly version
Z& Z(size_t index)
{
// Two lines -- one cast. This is slightly less ugly but takes an extra line.
const X& constMe = *this;
return const_cast<Z&>( constMe.z(index) );
}
#endif
};
```
**NOTE:** It is important that you do **NOT** put the logic in the non-const function and have the const function call the non-const function - it may result in undefined behavior. The reason is that a constant class instance gets cast to a non-constant instance. The non-const member function may accidentally modify the class, which the C++ standard states results in undefined behavior.
|
For a detailed explanation, please see the heading "Avoid Duplication in `const` and Non-`const` Member Function," on p. 23, in Item 3 "Use `const` whenever possible," in [*Effective C++*, 3d ed](https://www.aristeia.com/books.html) by Scott Meyers, ISBN-13: 9780321334879.

Here's Meyers' solution (simplified):
```
struct C {
const char & get() const {
return c;
}
char & get() {
return const_cast<char &>(static_cast<const C &>(*this).get());
}
char c;
};
```
The two casts and function call may be ugly, but it's correct in a non-`const` method as that implies the object was not `const` to begin with. (Meyers has a thorough discussion of this.)
|
How do I remove code duplication between similar const and non-const member functions?
|
[
"",
"c++",
"code-duplication",
"c++-faq",
"function-qualifier",
"explicit-object-parameter",
""
] |
Assuming I have an ArrayList
```
ArrayList<MyClass> myList;
```
And I want to call toArray, is there a performance reason to use
```
MyClass[] arr = myList.toArray(new MyClass[myList.size()]);
```
over
```
MyClass[] arr = myList.toArray(new MyClass[0]);
```
?
I prefer the second style, since it's less verbose, and I assumed that the compiler will make sure the empty array doesn't really get created, but I've been wondering if that's true.
Of course, in 99% of the cases it doesn't make a difference one way or the other, but I'd like to keep a consistent style between my normal code and my optimized inner loops...
|
Counterintuitively, the fastest version, on Hotspot 8, is:
```
MyClass[] arr = myList.toArray(new MyClass[0]);
```
I have run a micro-benchmark using JMH; the results and code are below, showing that the version with an empty array consistently outperforms the version with a presized array. Note that if you can reuse an existing array of the correct size, the result may be different.
Benchmark results (score in microseconds, smaller = better):
```
Benchmark                    (n)  Mode  Samples    Score    Error  Units
c.a.p.SO29378922.preSize       1  avgt       30    0.025 ±  0.001  us/op
c.a.p.SO29378922.preSize     100  avgt       30    0.155 ±  0.004  us/op
c.a.p.SO29378922.preSize    1000  avgt       30    1.512 ±  0.031  us/op
c.a.p.SO29378922.preSize    5000  avgt       30    6.884 ±  0.130  us/op
c.a.p.SO29378922.preSize   10000  avgt       30   13.147 ±  0.199  us/op
c.a.p.SO29378922.preSize  100000  avgt       30  159.977 ±  5.292  us/op
c.a.p.SO29378922.resize        1  avgt       30    0.019 ±  0.000  us/op
c.a.p.SO29378922.resize      100  avgt       30    0.133 ±  0.003  us/op
c.a.p.SO29378922.resize     1000  avgt       30    1.075 ±  0.022  us/op
c.a.p.SO29378922.resize     5000  avgt       30    5.318 ±  0.121  us/op
c.a.p.SO29378922.resize    10000  avgt       30   10.652 ±  0.227  us/op
c.a.p.SO29378922.resize   100000  avgt       30  139.692 ±  8.957  us/op
```
---
For reference, the code:
```
@State(Scope.Thread)
@BenchmarkMode(Mode.AverageTime)
public class SO29378922 {
@Param({"1", "100", "1000", "5000", "10000", "100000"}) int n;
private final List<Integer> list = new ArrayList<>();
@Setup public void populateList() {
for (int i = 0; i < n; i++) list.add(0);
}
@Benchmark public Integer[] preSize() {
return list.toArray(new Integer[n]);
}
@Benchmark public Integer[] resize() {
return list.toArray(new Integer[0]);
}
}
```
---
You can find similar results, full analysis, and discussion in the blog post [*Arrays of Wisdom of the Ancients*](https://shipilev.net/blog/2016/arrays-wisdom-ancients/). To summarize: the JVM and JIT compiler contains several optimizations that enable it to cheaply create and initialize a new correctly sized array, and those optimizations can not be used if you create the array yourself.
|
As of [ArrayList in Java 5](http://java.sun.com/j2se/1.5.0/docs/api/java/util/ArrayList.html#toArray(T[])), the array will already be filled if it has the right size (or is bigger). Consequently
```
MyClass[] arr = myList.toArray(new MyClass[myList.size()]);
```
will create one array object, fill it and return it to "arr". On the other hand
```
MyClass[] arr = myList.toArray(new MyClass[0]);
```
will create two arrays. The second one is an array of MyClass with length 0, so there is an object creation for an object that will be thrown away immediately. As far as the source code suggests, the compiler / JIT cannot optimize this so that it is not created. Additionally, using the zero-length array results in casts within the `toArray()` method.
See the source of ArrayList.toArray():
```
public <T> T[] toArray(T[] a) {
if (a.length < size)
// Make a new array of a's runtime type, but my contents:
return (T[]) Arrays.copyOf(elementData, size, a.getClass());
System.arraycopy(elementData, 0, a, 0, size);
if (a.length > size)
a[size] = null;
return a;
}
```
Use the first method so that only one object is created and avoid (implicit but nevertheless expensive) castings.
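Whichever variant you prefer, the two calls are interchangeable in terms of the result they produce; a small self-contained sketch showing both idioms side by side (class and variable names are mine, not from either answer):

```java
import java.util.ArrayList;
import java.util.List;

public class ToArrayDemo {
    public static void main(String[] args) {
        List<String> list = new ArrayList<>();
        list.add("a");
        list.add("b");

        // Pre-sized: toArray fills the array we pass in and returns it.
        String[] preSized = list.toArray(new String[list.size()]);

        // Zero-length: toArray allocates a correctly sized array internally.
        String[] resized = list.toArray(new String[0]);

        System.out.println(preSized.length);
        System.out.println(resized.length);
        System.out.println(preSized[0] + preSized[1]);
    }
}
```

Both arrays hold the same elements; the disagreement between the answers is only about which allocation strategy is cheaper.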
|
.toArray(new MyClass[0]) or .toArray(new MyClass[myList.size()])?
|
[
"",
"java",
"performance",
"coding-style",
""
] |
I will choose Java as an example since most people know it, though any other OO language would work as well.
Java, like many other languages, has interface inheritance and implementation inheritance. E.g. a Java class can inherit from another one and every method that has an implementation there (assuming the parent is not abstract) is inherited, too. That means the interface is inherited and the implementation for this method as well. I can overwrite it, but I don't have to. If I don't overwrite it, I have inherited the implementation.
However, my class can also "inherit" (not in Java terms) just an interface, without implementation. Actually interfaces are really named that way in Java, they provide interface inheritance, but without inheriting any implementation, since all methods of an interface have no implementation.
Now there was this [article, saying it's better to inherit interfaces than implementations](http://www.javaworld.com/javaworld/jw-08-2003/jw-0801-toolbox.html), you may like to read it (at least the first half of the first page), it's pretty interesting. It avoids issues like the [fragile base class problem](http://en.wikipedia.org/wiki/Fragile_base_class). So far this makes all a lot of sense and many other things said in the article make a lot of sense to me.
What bugs me about this, is that implementation inheritance means **code reuse**, one of the most important properties of OO languages. Now if Java had no classes (like James Gosling, the godfather of Java has wished according to this article), it solves all problems of implementation inheritance, but how would you make code reuse possible then?
E.g. if I have a class Car and Car has a method move(), which makes the Car move. Now I can sub-class Car for different type of cars, that are all cars, but are all specialized versions of Car. Some may move in a different way, these need to overwrite move() anyway, but most would simply keep the inherited move, as they move alike just like the abstract parent Car. Now assume for a second that there are only interfaces in Java, only interfaces may inherit from each other, a class may implement interfaces, but all classes are always final, so no class can inherit from any other class.
How would you avoid, when you have a Car interface and a hundred Car classes, having to implement an identical move() method for each of them? What concepts for code reuse other than implementation inheritance exist in the OO world?
Some languages have Mixins. Are Mixins the answer to my question? I read about them, but I cannot really imagine how Mixins would work in a Java world and if they can really solve the problem here.
Another idea was that there is a class that only implements the Car interface, let's call it AbstractCar, and implements the move() method. Now other cars implement the Car interface as well, internally they create an instance of AbstractCar and they implement their own move() method by calling move() on their internal abstract Car. But wouldn't this be wasting resources for nothing (a method calling just another method - okay, JIT could inline the code, but still) and using extra memory for keeping internal objects, you wouldn't even need with implementation inheritance? (after all every object needs more memory than just the sum of the encapsulated data) Also isn't it awkward for a programmer to write dummy methods like
```
public void move() {
abstractCarObject.move();
}
```
?
Anyone can imagine a better idea how to avoid implementation inheritance and still be able to re-use code in an easy fashion?
|
Short answer: Yes, it is possible. But you have to do it on purpose and not by chance (using final, abstract, and designing with inheritance in mind, etc.).
Long answer:
Well, inheritance is not actually for "code reuse"; it is for class "specialization". I think this is a misinterpretation.
For instance, it is a very bad idea to create a Stack from a Vector just because they are alike, or Properties from Hashtable just because they store values. See [Effective].
The "code reuse" was more a "business view" of the OO characteristics, meaning that your objects were easily distributable among nodes, were portable, and didn't have the problems of the previous generation of programming languages. This has been proved half right. We now have libraries that can be easily distributed; for instance in Java the jar files can be used in any project, saving thousands of hours of development. OO still has some problems with portability and things like that; that is the reason WebServices are now so popular (as before it was CORBA), but that's another thread.
This is one aspect of "code reuse". The other is, effectively, the one that has to do with programming. But in this case the point is not just to "save" lines of code and create fragile monsters, but to design with inheritance in mind. This is item 17 in the book previously mentioned: **Item 17: Design and document for inheritance or else prohibit it.** See [Effective]
Of course you may have a Car class and tons of subclasses. And yes, the approach you mention about Car interface, AbstractCar and CarImplementation is a correct way to go.
You define the "contract" the Car should adhere to, and say: these are the methods I would expect to have when talking about cars. The abstract car has the base functionality that every car shares, but leaves abstract (and documents) the methods the subclasses are responsible for handling. In Java you do this by marking the method as abstract.
When you proceed this way, there is not a problem with the "fragile" class (or at least the designer is conscious of the threat) and the subclasses complete only those parts the designer allows them to.
Inheritance is more to "specialize" the classes, in the same fashion a Truck is a specialized version of Car, and MonsterTruck a specialized version of Truck.
It does not make sense to create a "ComputerMouse" subclass from Car just because it has a wheel (scroll wheel) like a car, it moves, and has a wheel underneath, merely to save lines of code. It belongs to a different domain, and it will be used for other purposes.
The way to prevent "implementation" inheritance has been in the programming language since the beginning: use the final keyword on the class declaration, and that way you prohibit subclasses.
Subclassing is not evil if it's done on purpose. If it's done carelessly it may become a nightmare. I would say that you should start as private and "final" as possible, and if needed make things more public and extendable. This is also widely explained in the presentation "How to Design a Good API and Why it Matters". See [Good API]
Keep reading articles, and with time and practice (and a lot of patience) this will become clearer. Although sometimes you just need to do the work and copy/paste some code :P. That is OK, as long as you try to do it well first.
Here are the references, both from Joshua Bloch (formerly working at Sun on the core of Java, now working for Google).
---
[Effective]
Effective Java. Definitely the best java book a non beginner should learn, understand and practice. A must have.
[Effective Java](http://java.sun.com/docs/books/effective)
---
[Good API] A presentation that talks about API design, reusability and related topics.
It is a little lengthy, but it is worth every minute.
[How To Design A Good API and Why it Matters](http://www.youtube.com/watch?v=aAb7hSCtvGw)
Regards.
---
Update: Take a look at minute 42 of the video link I sent you. It talks about this topic:
"When you have two classes in a public API and you think to make one a subclass of another, like Foo is a subclass of Bar, ask yourself: is every Foo a Bar?..."
And in the previous minute it talks about "code reuse" while discussing TimerTask.
|
The problem with most examples against inheritance is that the person is using inheritance incorrectly; they are not a failure of inheritance to correctly abstract.
In the article you posted a link to, the author shows the "brokenness" of inheritance using Stack and ArrayList. The example is flawed because **a Stack is not an ArrayList** and therefore inheritance should not be used. The example is as flawed as String extending Character, or PointXY extending Number.
Before you extend a class, you should always perform the "is-a" test. Since you can't say every Stack is an ArrayList without being wrong in some way, you should not inherit.
The contract for Stack is different than the contract for ArrayList (or List), and Stack should not be inheriting methods that it does not care about (like get(int i) and add()). In fact Stack should be an interface with methods such as:
```
interface Stack<T> {
public void push(T object);
public T pop();
public void clear();
public int size();
}
```
A class like ArrayListStack might implement the Stack interface, and in that case use composition (having an internal ArrayList) and not inheritance.
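As a rough sketch of that composition approach (the ArrayListStack below is my own illustration, not code from the article):

```java
import java.util.ArrayList;
import java.util.List;

interface Stack<T> {
    void push(T object);
    T pop();
    void clear();
    int size();
}

// Composition instead of inheritance: the stack *has* a list,
// it is not a list, so get(int) and add(int, T) are never exposed.
final class ArrayListStack<T> implements Stack<T> {
    private final List<T> items = new ArrayList<>();

    public void push(T object) { items.add(object); }

    public T pop() {
        if (items.isEmpty()) throw new IllegalStateException("empty stack");
        return items.remove(items.size() - 1);
    }

    public void clear() { items.clear(); }

    public int size() { return items.size(); }
}

public class StackDemo {
    public static void main(String[] args) {
        Stack<Integer> s = new ArrayListStack<>();
        s.push(1);
        s.push(2);
        System.out.println(s.pop());
        System.out.println(s.size());
    }
}
```

The internal list is an implementation detail that can later be swapped (for a linked list, say) without breaking any caller.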
Inheritance is not bad, bad inheritance is bad.
|
Is OOP & completely avoiding implementation inheritance possible?
|
[
"",
"java",
"oop",
""
] |
I'm developing a website (in Django) that uses OpenID to authenticate users. As I'm currently only running on my local machine I can't authenticate using one of the OpenID providers on the web. So I figure I need to run a local OpenID server that simply lets me type in a username and then passes that back to my main app.
Does such an OpenID dev server exist? Is this the best way to go about it?
|
The libraries at [OpenID Enabled](http://openidenabled.com/) ship with examples that are sufficient to run a local test provider. Look in the examples/djopenid/ directory of the python-openid source distribution. Running that will give you an instance of [this test provider](http://openidenabled.com/python-openid/trunk/examples/server/).
|
I have no problems testing with [myopenid.com](http://myopenid.com). I thought there would be a problem testing on my local machine but it just worked. (I'm using ASP.NET with DotNetOpenId library).
The 'realm' and return url must contain the port number like '<http://localhost:93359>'.
I assume it works OK because the provider does a client side redirect.
|
How do you develop against OpenID locally
|
[
"",
"python",
"django",
"openid",
""
] |
I have a table with rowID, longitude, latitude, businessName, url, caption. This might look like:
```
rowID | long | lat | businessName | url | caption
1 20 -20 Pizza Hut yum.com null
```
How do I delete all of the duplicates, but only keep the one that has a URL (first priority), or keep the one that has a caption if the other doesn't have a URL (second priority) and delete the rest?
|
Here's my looping technique. This will probably get voted down for not being mainstream - and I'm cool with that.
```
DECLARE @LoopVar int
DECLARE
@long int,
@lat int,
@businessname varchar(30),
@winner int
SET @LoopVar = (SELECT MIN(rowID) FROM Locations)
WHILE @LoopVar is not null
BEGIN
--initialize the variables.
SELECT
@long = null,
@lat = null,
@businessname = null,
@winner = null
-- load data from the known good row.
SELECT
@long = long,
@lat = lat,
@businessname = businessname
FROM Locations
WHERE rowID = @LoopVar
--find the winning row with that data
SELECT top 1 @Winner = rowID
FROM Locations
WHERE @long = long
AND @lat = lat
AND @businessname = businessname
ORDER BY
CASE WHEN URL is not null THEN 1 ELSE 2 END,
CASE WHEN Caption is not null THEN 1 ELSE 2 END,
RowId
--delete any losers.
DELETE FROM Locations
WHERE @long = long
AND @lat = lat
AND @businessname = businessname
AND @winner != rowID
-- prep the next loop value.
SET @LoopVar = (SELECT MIN(rowID) FROM Locations WHERE @LoopVar < rowID)
END
```
|
This solution is brought to you by "stuff I've learned on Stack Overflow" in the last week:
```
DELETE restaurant
WHERE rowID in
(SELECT rowID
FROM restaurant
EXCEPT
SELECT rowID
FROM (
SELECT rowID, Rank() over (Partition BY BusinessName, lat, long ORDER BY url DESC, caption DESC ) AS Rank
FROM restaurant
) rs WHERE Rank = 1)
```
Warning: I have not tested this on a real database
|
Remove Duplicates with Caveats
|
[
"",
"sql",
"sql-server",
"duplicate-data",
""
] |
I'd like to create a subset of Users that don't have a login... basically as a way to add a photographer field to photos without having a full blown account associated with that person (since in many cases, they'll never actually log in to the site). A caveat is that I'd also like to be able to enable an account for them later.
So, I think the question becomes what's the best way to set up a "People" table that ties to the User table without actually extending the User table with UserProfile.
|
A [user profile](http://docs.djangoproject.com/en/dev/topics/auth/#storing-additional-information-about-users) (as returned by `django.contrib.auth.models.User.get_profile`) doesn't extend the User table - the model you specify as the profile model with the `AUTH_PROFILE_MODULE` setting is just a model which has a `ForeignKey` to `User`. `get_profile` and the setting are really just a convenience API for accessing an instance of a specific model which has a `ForeignKey` to a specific `User` instance.
As such, one option is to create a profile model in which the `ForeignKey` to `User` can be `null` and associate your `Photo` model with this profile model instead of the `User` model. This would allow you to create a profile for a non-existent user and attach a registered User to the profile at a later date.
|
Users that can't login? Just give them a totally random password.
```
import random
user.set_password( str(random.random()) )
```
They'll never be able to log on.
|
Django UserProfile... without a password
|
[
"",
"python",
"database",
"django",
"django-authentication",
"django-users",
""
] |
I sometimes notice programs that crash on my computer with the error: "pure virtual function call".
How do these programs even compile when an object cannot be created of an abstract class?
|
They can result if you try to make a virtual function call from a constructor or destructor. Since virtual dispatch in a constructor or destructor cannot reach the derived class (the derived class object hasn't been constructed yet or has already been destroyed), the call resolves to the base class version, which in the case of a pure virtual function doesn't exist.
```
class Base
{
public:
Base() { reallyDoIt(); }
void reallyDoIt() { doIt(); } // DON'T DO THIS
virtual void doIt() = 0;
};
class Derived : public Base
{
void doIt() {}
};
int main(void)
{
Derived d; // This will cause "pure virtual function call" error
}
```
See also Raymond Chen's two articles on the subject: [part one](https://devblogs.microsoft.com/oldnewthing/20040428-00/?p=39613) and [part two](https://devblogs.microsoft.com/oldnewthing/20131011-00/?p=2953)
|
As well as the standard case of calling a virtual function from the constructor or destructor of an object with pure virtual functions, you can also get a pure virtual function call (on MSVC at least) if you call a virtual function after the object has been destroyed. Obviously this is a pretty bad thing to try and do, but if you're working with abstract classes as interfaces and you mess up then it's something that you might see. It's possibly more likely if you're using reference-counted interfaces and you have a ref count bug, or if you have an object use/object destruction race condition in a multi-threaded program... The thing about these kinds of purecall is that it's often harder to fathom out what's going on, as a check for the 'usual suspects' of virtual calls in ctor and dtor will come up clean.
To help with debugging these kinds of problems you can, in various versions of MSVC, replace the runtime library's purecall handler. You do this by providing your own function with this signature:
```
int __cdecl _purecall(void)
```
and linking it before you link the runtime library. This gives YOU control of what happens when a purecall is detected. Once you have control you can do something more useful than the standard handler. I have a handler that can provide a stack trace of where the purecall happened; see here: <http://www.lenholgate.com/blog/2006/01/purecall.html> for more details.
(Note you can also call `_set_purecall_handler()` to install your handler in some versions of MSVC).
|
Where do "pure virtual function call" crashes come from?
|
[
"",
"c++",
"polymorphism",
"virtual-functions",
"pure-virtual",
""
] |
I'm doing some experiments with Microsoft Dynamics CRM. You interact with it through web services and I have added a Web Reference to my project. The web service interface is very rich, and the generated "Reference.cs" is some 90k loc.
I'm using the web reference in a console application. I often change something, recompile and run. Compilation is fast, but newing up the web service reference is very slow, taking some 15-20 seconds:
`CrmService service = new CrmService();`
Profiling reveals that all time is spent in the SoapHttpClientProtocol constructor.
The culprit is apparently the fact that the XML serialization code (not included in the 90k loc mentioned above) is generated at run time, before being JIT'ed. This happens during the constructor call. The wait is rather frustrating when playing around and trying things out.
I've tried various combinations of sgen.exe, ngen and XGenPlus (which takes several hours and generates 500MB of additional code) but to no avail. I've considered implementing a Windows service that have few CrmService instances ready to dish out when needed but that seems excessive.
Any ideas?
|
The following is ripped from [this](http://communities.vmware.com/thread/47063) thread on the VMWare forums:
Hi folks,
We've found that sgen.exe does work. It's just that there are a couple of additional steps beyond pre-generating the serializer DLLs that we missed in this thread. Here are the detailed instructions.
## PROBLEM
When using the VIM 2.0 SDK from .NET, it takes a long time to instantiate the VimService class. (The VimService class is the proxy class generated by running 'wsdl.exe vim.wsdl vimService.wsdl'.)
In other words, the following line of code:
```
_service = new VimService();
```
Could take about 50 seconds to execute.
## CAUSE
Apparently, the .NET `XmlSerializer` uses the `System.Xml.Serialization.*` attributes annotating the proxy classes to generate serialization code in run time. When the proxy classes are many and large, as is the code in VimService.cs, the generation of the serialization code can take a long time.
## SOLUTION
This is a known problem with how the Microsoft .NET serializer works.
Here are some references that MSDN provides about solving this problem:
<http://msdn2.microsoft.com/en-us/library/bk3w6240.aspx>
<http://msdn2.microsoft.com/en-us/library/system.xml.serialization.xmlserializerassemblyattribute.aspx>
Unfortunately, none of the above references describe the complete solution to the problem. Instead they focus on how to pre-generate the XML serialization code.
The complete fix involves the following steps:
1. Create an assembly (a DLL) with the pre-generated XML serializer code
2. Remove all references to `System.Xml.Serialization.*` attributes from the proxy code (i.e. from the VimService.cs file)
3. Annotate the main proxy class with the XmlSerializerAssemblyAttribute to point it to where the XML serializer assembly is.
Skipping step 2 leads to only 20% improvement in the instantiation time for the `VimService` class. Skipping either step 1 or 3 leads to incorrect code. With all three steps 98% improvement is achieved.
Here are step-by-step instructions:
Before you begin, make sure you are using .NET version 2.0 tools. This solution will not work with version 1.1 of .NET because the sgen tool and the `XmlSerializerAssemblyAttribute` are only available in version 2.0 of .NET
1. Generate the VimService.cs file from the WSDL, using wsdl.exe:
`wsdl.exe vim.wsdl vimService.wsdl`
This will output the VimService.cs file in the current directory
2. Compile VimService.cs into a library
`csc /t:library /out:VimService.dll VimService.cs`
3. Use the sgen tool to pre-generate and compile the XML serializers:
`sgen /p VimService.dll`
This will output the VimService.XmlSerializers.dll in the current directory
4. Go back to the VimService.cs file and remove all `System.Xml.Serialization.*` attributes. Because the code is large, the best way to achieve that is by using some regular expression substitution tool. Be careful as you do this, because not all attributes appear on a line by themselves. Some are inlined as part of a method declaration.
If you find this step difficult, here is a simplified way of doing it:
Assuming you are writing C#, do a global replace on the following string:
`[System.Xml.Serialization.XmlIncludeAttribute`
and replace it with:
`// [System.Xml.Serialization.XmlIncludeAttribute`
This will get rid of the `Xml.Serialization` attributes that are the biggest culprits for the slowdown by commenting them out. If you are using some other .NET language, just modify the replaced string to be prefix-commented according to the syntax of that language. This simplified approach will get you most of the speedup that you can get. Removing the rest of the Xml.Serialization attributes only achieves an extra 0.2 sec speedup.
5. Add the following attribute to the VimService class in VimService.cs:
`[System.Xml.Serialization.XmlSerializerAssemblyAttribute(AssemblyName = "VimService.XmlSerializers")]`
You should end up with something like this:
```
// ... Some code here ...
[System.Xml.Serialization.XmlSerializerAssemblyAttribute(AssemblyName = "VimService.XmlSerializers")]
public partial class VimService : System.Web.Services.Protocols.SoapHttpClientProtocol {
    // ... More code here
```
6. Regenerate the VimService.dll library:
`csc /t:library /out:VimService.dll VimService.cs`
7. Now, from your application, you can add a reference to the VimService.dll library.
8. Run your application and verify that the VimService object instantiation time is reduced.
## ADDITIONAL NOTES
The sgen tool is a bit of a black box and its behavior varies depending on what you have in your Machine.config file. For example, by default it is supposed to output optimized non-debug code, but that is not always the case. To get some visibility into the tool, use the /k flag in step 3, which will cause it to keep all its temporary generated files, including the source files and command line option files it generated.
Even after the above fix the time it takes to instantiate the VimService class for the first time is not instantaneous (1.5 sec). Based on empirical observation, it appears that the majority of the remaining time is due to processing the `SoapDocumentMethodAttribute` attributes. At this point it is unclear how this time can be reduced. The pre-generated XmlSerializer assembly does not account for the SOAP-related attributes, so these attributes need to remain in the code. The good news is that only the first instantiation of the VimService class for that app takes long. So if the extra 1.5 seconds are a problem, one could try to do a dummy instantiation of this class at the beginning of the application as a means to improve user experience of login time.
|
You might wish to look into the [`Sgen.exe`](http://msdn.microsoft.com/en-us/library/bk3w6240(VS.80).aspx) tool that comes with .NET. There's also a handy little thing in Visual Studio's C# project properties "Build" page, at the very bottom, called "Build serialization assembly" that automatically runs `Sgen` for you.
|
Slow SoapHttpClientProtocol constructor
|
[
"",
"c#",
".net",
"performance",
"xml-serialization",
"soaphttpclientprotocol",
""
] |
I'm doing some maintenance coding on a webapp and I am getting a javascript error of the form: "[elementname] has no properties"
Part of the code is being generated on the fly with an AJAX call that changes innerHTML for part of the page, after this is finished I need to copy a piece of data from a hidden input field to a visible input field.
So we have the destination field: `<input id="dest" name="dest" value="0">`
And the source field: `<input id="source" name="source" value="1">`
Now when the ajax runs it overwrites the innerHTML of the div that source is in, so the source field now reads: `<input id="source" name="source" value="2">`
Ok after the javascript line that copies the ajax data to innerHTML the next line is:
`document.getElementById('dest').value = document.getElementById('source').value;`
I get the following error: `Error: document.getElementById("source") has no properties`
(I also tried `document.formname.source` and `document.formname.dest` and same problem)
What am I missing?
Note1: The page is fully loaded and the element exists. The ajax call only happens after a user action and replaces the html section that the element is in.
Note2: As for not using innerHTML, this is how the codebase was given to me, and in order to remove it I would need to rewrite all the ajax calls, which is not in the scope of the current maintenance cycle.
Note3: the innerHTML is updated with the new data, a whole table with data and formatting is being copied, I am trying to add a boolean to the end of this big chunk, instead of creating a whole new ajax call for one boolean. It looks like that is what I will have to do... as my hack on the end then copy method is not working.
Extra pair of eyes FTW.
Yeah I had a couple guys take a look here at work and they found my simple typing mistake... I swear I had those right to begin with, but hey we live and learn...
Thanks for the help guys.
|
"[elementname] has no properties" is javascript error speak for "the element you tried to reference doesn't exist or is nil"
This means you've got one or more of a few possible problems:
1. Your page hasn't rendered yet and you're trying to reference it before it exists
2. You've got a spelling error
3. You've named your id the same as a reserved word (submit on a submit button for instance)
4. What you think you're referencing you're really not (a passed variable that isn't what you think you're passing)
|
Make sure your code runs AFTER the page fully loads. If your code runs before the element you are looking for is rendered, this type of error will occur.
|
Javascript error: [elementname] has no properties
|
[
"",
"javascript",
"ajax",
""
] |
What is the benefit of using singleton instead of global for database connections in PHP? I feel using singleton instead of global makes the code unnecessarily complex.
## Code with Global
```
$conn = new PDO(...);
function getSomething()
{
global $conn;
.
.
.
}
```
## Code with Singleton
```
class DB_Instance
{
private static $db;
public static function getDBO()
{
if (!self::$db)
self::$db = new PDO(...);
return self::$db;
}
}
function getSomething()
{
$conn = DB_Instance::getDBO();
.
.
.
}
```
If there's a better way of initializing database connection other than global or singleton, please mention it and describe the advantages it have over global or singleton.
|
I know this is old, but Dr8k's answer was *almost* there.
When you are considering writing a piece of code, assume it's going to change. That doesn't mean that you're assuming the kinds of changes it will have hoisted upon it at some point in the future, but rather that some form of change will be made.
Make it a goal to mitigate the pain of making changes in the future: a global is dangerous because it's hard to manage from a single spot. What if I want to make that database connection context-aware in the future? What if I want it to close and reopen itself every 5th time it is used? What if I decide that in the interest of scaling my app I want to use a pool of 10 connections? Or a configurable number of connections?
A **singleton factory** gives you that flexibility. I set it up with very little extra complexity and gain more than just access to the same connection; I gain the ability to change how that connection is passed to me later on in a simple manner.
Note that I say *singleton factory* as opposed to simply *singleton*. There's precious little difference between a singleton and a global, true. And because of that, there's no reason to have a singleton connection: why would you spend the time setting that up when you could create a regular global instead?
What a factory gets you is a way to get connections, and a separate spot to decide what connection (or connections) you're going to get.
## Example
```
class ConnectionFactory
{
private static $factory;
private $db;
public static function getFactory()
{
if (!self::$factory)
self::$factory = new ConnectionFactory(...);
return self::$factory;
}
public function getConnection() {
if (!$this->db)
$this->db = new PDO(...);
return $this->db;
}
}
function getSomething()
{
$conn = ConnectionFactory::getFactory()->getConnection();
.
.
.
}
```
Then, in 6 months when your app is super famous and getting dugg and slashdotted and you decide you need more than a single connection, all you have to do is implement some pooling in the getConnection() method. Or if you decide that you want a wrapper that implements SQL logging, you can pass a PDO subclass. Or if you decide you want a new connection on every invocation, you can do that. It's flexible, instead of rigid.
16 lines of code, including braces, which will save you hours and hours and hours of refactoring to something eerily similar down the line.
Note that I don't consider this "Feature Creep" because I'm not doing any feature implementation in the first go round. It's border line "Future Creep", but at some point, the idea that "coding for tomorrow today" is *always* a bad thing doesn't jive for me.
|
I'm not sure I can answer your specific question, but wanted to suggest that global / singleton connection objects may not be the best idea if this is for a web-based system. DBMSs are generally designed to manage large numbers of unique connections in an efficient manner. If you are using a global connection object, then you are doing a couple of things:
1. Forcing your pages to do all database connections sequentially, killing any attempts at asynchronous page loads.
2. Potentially holding open locks on database elements longer than necessary, slowing down overall database performance.
3. Maxing out the total number of simultaneous connections your database can support and blocking new users from accessing the resources.
I am sure there are other potential consequences as well. Remember, this method will attempt to sustain a database connection for every user accessing the site. If you only have one or two users, not a problem. If this is a public website and you want traffic then scalability will become an issue.
**[EDIT]**
In larger scaled situations, creating new connections every time you hit the database can be bad. However, the answer is not to create a global connection and reuse it for everything. The answer is connection pooling.
With connection pooling, a number of distinct connections are maintained. When a connection is required by the application, the first available connection from the pool is retrieved and then returned to the pool once its job is done. If a connection is requested and none are available, one of two things will happen: a) if the maximum number of allowed connections has not been reached, a new connection is opened, or b) the application is forced to wait for a connection to become available.
**Note:** In .Net languages, connection pooling is handled by the ADO.Net objects by default (the connection string sets all the required information).
Thanks to Crad for commenting on this.
|
Global or Singleton for database connection?
|
[
"",
"php",
"design-patterns",
"singleton",
""
] |
What is the best way to replace all '&lt' with `&lt;` in a given database column? Basically perform `s/&lt[^;]/&lt;/gi`
Notes:
* must work in [MS SQL Server](http://en.wikipedia.org/wiki/Microsoft_SQL_Server#SQL_Server_2005) 2000
* Must be repeatable (and not end up with `&lt;;;;;;;;;`)
|
Some hacking is required, but we can do this with **LIKE**, **PATINDEX**, **LEFT** and **RIGHT**, and good old string concatenation.
```
create table test
(
id int identity(1, 1) not null,
val varchar(25) not null
)
insert into test values ('&lt; <- ok, &lt <- nok')
while 1 = 1
begin
update test
set val = left(val, patindex('%&lt[^;]%', val) - 1) +
          '&lt;' +
          right(val, len(val) - patindex('%&lt[^;]%', val) - 2)
from test
where val like '%&lt[^;]%'
IF @@ROWCOUNT = 0 BREAK
end
select * from test
```
Better yet, this is SQL Server version agnostic and should work just fine.
|
I think this can be done much cleaner if you use different STUFF :)
```
create table test
(
id int identity(1, 1) not null,
val varchar(25) not null
)
insert into test values ('&lt; <- ok, &lt <- nok')
WHILE 1 = 1
BEGIN
UPDATE test SET
val = STUFF( val , PATINDEX('%&lt[^;]%', val) + 3 , 0 , ';' )
FROM test
WHERE val LIKE '%&lt[^;]%'
IF @@ROWCOUNT = 0 BREAK
END
select * from test
```
|
Perform regex (replace) in an SQL query
|
[
"",
"sql",
"sql-server",
"regex",
"sql-server-2000",
""
] |
I'm making a simple jquery command:
`element.html("&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;");`
using the attributes/html method: <http://docs.jquery.com/Attributes/html>
It works on my local app engine server, but it doesn't work once I push to the Google server. The element empties but doesn't fill with spaces.
So instead of `"&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;"` *(6 non-breaking spaces)* it's just `""`.
Once again, this is running on App Engine, but I don't think that should matter...
|
You could try generating the space during run-time, so it won't be trimmed or whatever happens during transport:
```
element.html(String.fromCharCode(32));
```
|
This might not be a direct answer to your problem, but why are you even wanting to put in a heap of spaces? You can probably achieve the same result by just changing the `padding-left` or `text-indent` of that element.
```
element.css("textIndent", "3em");
```
Using a heap of `&nbsp;`s is a very dodgy way to do indentation.
|
.html() jQuery method bizzare bug - resolves to empty space locally, but not on production
|
[
"",
"javascript",
"jquery",
"html",
"google-app-engine",
""
] |
For technical reasons, I can't use ClickOnce to auto-update my .NET application and its assemblies. What is the best way to handle auto-updating in .NET?
|
I think the [Updater Application Block](http://msdn.microsoft.com/en-us/library/ms978574.aspx) was something of a precursor to ClickOnce. Might be worth investigating. Looking at its source code might be enough to spark some ideas.
|
We have a product that's commercial/open source: [wyBuild & wyUpdate](http://wyday.com/wybuild/). It has patching ability and is dead simple to use.
Edit: I'm getting voted down into the negative numbers, but my post wasn't just blatant selling. Our updater, wyUpdate, is open source, written in C# and is licensed under the BSD license.
I thought it might help anyone trying to build an updater from scratch using the .NET framework.
But, vote me down if you must.
|
What's the best way for a .NET winforms application to update itself without using ClickOnce?
|
[
"",
"c#",
".net",
"winforms",
"compact-framework",
""
] |
I have an application that reads a table from a database.
I issue an SQL query to get a result set, based on a unique string value I glean from the results, I use a case/switch statement to generate certain objects (they inherit TreeNode BTW). These created objects get shunted into a Dictionary object to be used later.
Whilst generating these objects I use some of the values from the result set to populate values in the object via the setters.
I query the Dictionary to return a particular object type and use it to populate a treeview. However it is not possible to populate 2 objects of the same type in a treeview from the Dictionary object (you get a runtime error - which escapes me at the moment, something to do with referencing the same object). So what I have to do is use a MemberwiseClone and implement ICloneable to get around this.
Am I doing this right? Is there a better way - because I think this is causing my program to be real slow at this point. At the very least I think its a bit clunky - any advice from people who know more than me - greatly appreciated.
|
To add to @[Brad](https://stackoverflow.com/questions/56224/creating-objects-driven-by-the-database-to-populate-a-treeview-very-slow-c#56264), only populate the tree as needed. That means hooking into the expand event of the tree nodes. This is similar to how Windows Explorer functions when dealing with network shares.
There should be 1 `TreeNode` object per actual tree node in the tree - don't try to reuse the things. You may either associate them with your data using the Tag property (this is the recommended method), or you can subclass the `TreeNode` itself (this is the Java method, but used less in .NET).
(The use of cloning methods is usually a hint that you're either (a) doing something wrong, or (b) need to factor your domain model to separate mutable objects from immutable.)
|
Is there a reason you are using the external dictionary? I would populate the tree directly as the data is queried.
If you do require the dictionary, you could set the .Tag property of the tree node to point to the data in your dictionary.
|
Creating objects driven by the database to populate a Treeview - very slow
|
[
"",
"c#",
"dictionary",
"object",
"clone",
"icloneable",
""
] |
In C#, does anybody know why the following will compile:
```
int i = 1;
++i;
i++;
```
but this will not compile?
```
int i = 1;
++i++;
```
(Compiler error: The operand of an increment or decrement operator must be a variable, property or indexer.)
|
You are applying one of the operators to the result of the other. The result of an increment/decrement is a value, and you cannot apply increment/decrement to a value; it has to be a variable that can be set.
|
For the same reason you can't say
```
5++;
```
or
```
f(i)++;
```
A function returns a *value*, not a variable. The increment operators also return values, but cannot be applied to values.
|
Pre and post increment/decrement operators in C#
|
[
"",
"c#",
""
] |
Switch statement fallthrough is one of my personal major reasons for loving `switch` vs. `if/else if` constructs. An example is in order here:
```
static string NumberToWords(int number)
{
string[] numbers = new string[]
{ "", "one", "two", "three", "four", "five",
"six", "seven", "eight", "nine" };
string[] tens = new string[]
{ "", "", "twenty", "thirty", "forty", "fifty",
"sixty", "seventy", "eighty", "ninety" };
string[] teens = new string[]
{ "ten", "eleven", "twelve", "thirteen", "fourteen", "fifteen",
"sixteen", "seventeen", "eighteen", "nineteen" };
string ans = "";
switch (number.ToString().Length)
{
case 3:
ans += string.Format("{0} hundred and ", numbers[number / 100]);
case 2:
int t = (number / 10) % 10;
if (t == 1)
{
ans += teens[number % 10];
break;
}
else if (t > 1)
ans += string.Format("{0}-", tens[t]);
case 1:
int o = number % 10;
ans += numbers[o];
break;
default:
throw new ArgumentException("number");
}
return ans;
}
```
The smart people are cringing because the `string[]`s should be declared outside the function: well, they are, this is just an example.
The compiler fails with the following error:
```
Control cannot fall through from one case label ('case 3:') to another
Control cannot fall through from one case label ('case 2:') to another
```
Why? And is there any way to get this sort of behaviour without having three `if`s?
|
(Copy/paste of an [answer I provided elsewhere](https://stackoverflow.com/questions/9033/hidden-features-of-c/90432#90432))
Falling through `switch`-`case`s can be achieved by having no code in a `case` (see `case 0`), or using the special `goto case` (see `case 1`) or `goto default` (see `case 2`) forms:
```
switch (/*...*/) {
case 0: // shares the exact same code as case 1
case 1:
// do something
goto case 2;
case 2:
// do something else
goto default;
default:
// do something entirely different
break;
}
```
|
The "why" is to avoid accidental fall-through, for which I'm grateful. This is a not uncommon source of bugs in C and Java.
The workaround is to use goto, e.g.
```
switch (number.ToString().Length)
{
case 3:
ans += string.Format("{0} hundred and ", numbers[number / 100]);
goto case 2;
case 2:
// Etc
}
```
The general design of switch/case is a little bit unfortunate in my view. It stuck too close to C - there are some useful changes which could be made in terms of scoping etc. Arguably a smarter switch which could do pattern matching etc would be helpful, but that's really changing from switch to "check a sequence of conditions" - at which point a different name would perhaps be called for.
|
Switch statement fallthrough in C#?
|
[
"",
"c#",
"switch-statement",
""
] |
I have a page that has an iframe
From one of the pages within the iframe I want to look back and make a panel on the default page invisible because it is overshadowing a popup
I tried using Parent.FindControl but it does not seem to be working. I am positive I have the right id in the FindControl call, because I used Firebug to inspect the panel and I copied the id from there.
Does anyone know what I am missing?
|
I didn't completely follow your problem, but I'll take my best shot.
It sounds like you have an ASP.NET page, that has an iframe in it that refers to another ASP.NET page, and in that page that was requested by the iframe you want to modify the visibility of the item contained in the page that contains the iframe.
If my understanding of your problem is correct, then you have some somewhat nasty problems here.
1. What's actually happening at the browser level is the first page gets loaded, and that page has an iframe in it that is making a second request to the server.
2. This second request can't FindControl your control, because it isn't in the same page, and isn't alive during that request.
So you have some alternatives here:
1. Get rid of the iframe and use a panel. This will put them both in the same request, and able to find each other.
2. (Additionally) When you do this you are going to want to use Page.FindControl() not Parent.FindControl() as the FindControl method just searches through the Control's child control collection, and I presume your control will be somewhere else on the page.
3. On the client side in the iframe you could use some javascript code to access the outer page's DOM, and set the visibility of it there.
|
Parent document:
```
<body>
<input type="text" id="accessme" value="Not Accessed" />
...
</body>
```
Document in iframe:
```
<head>
...
<script type="text/javascript">
function setValueOfAccessme()
{
window.parent.document.getElementById("accessme").value = "Accessed";
}
</script>
</head>
<body onload="setValueOfAccessme();">
</body>
```
The document inside the iframe accesses the `document` object of the parent `window` on load, and uses the `getElementById()` function to set the value of the input inside the body of the parent document.
|
Parent.FindControl() not working?
|
[
"",
"asp.net",
"javascript",
"findcontrol",
""
] |
Is there a way to tell SQL Server 2008 Express to log every query (including each and every SELECT Query!) into a file?
It's a Development machine, so the negative side effects of logging Select-Queries are not an issue.
Before someone suggests using the SQL Profiler: this is not available in Express (does anyone know if it's available in the Web Edition?) and I'm looking for a way to log queries even when I am away.
|
SQL Server Profiler:
* File → New Trace
* The "General" Tab is displayed.
* Here you can choose "Save to file:" so it's logged to a file.
* View the "Event Selection" Tab
* Select the items you want to log.
* TSQL → SQL:BatchStarting will get you sql selects
* Stored Procedures → RPC:Completed will get you Stored Procedures.
More information from Microsoft: SQL Server 2008 Books Online - [Using SQL Server Profiler](http://msdn.microsoft.com/en-us/library/ms187929.aspx)
*Update - SQL Express Edition:*
A comment was made that MS SQL Server Profiler is not available for the express edition.
There does appear to be a free alternative: [Profiler for Microsoft SQL Server 2005 Express Edition](http://sqlprofiler.googlepages.com/)
|
There is one more way to get information about queries that has been executed on MS SQL Server Express [described here](http://blog.sqlauthority.com/2008/01/03/sql-server-2005-last-ran-query-recently-ran-query/).
Briefly, it runs a smart query against the system tables and gets info (text, time executed) about queries (or cached query plans if needed). Thus you can get info about executed queries without Profiler in MSSQL 2008 Express edition.
```
SELECT deqs.last_execution_time AS [Time], dest.TEXT AS [Query]
FROM sys.dm_exec_query_stats AS deqs
CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest
ORDER BY deqs.last_execution_time DESC
```
|
Logging ALL Queries on a SQL Server 2008 Express Database?
|
[
"",
"sql",
"sql-server",
"database",
"logging",
"ssms",
""
] |
I'm currently trying ElementTree and it looks fine, it escapes HTML entities and so on and so forth. Am I missing something truly wonderful I haven't heard of?
This is similar to what I'm actually doing:
```
import xml.etree.ElementTree as ET
root = ET.Element('html')
head = ET.SubElement(root,'head')
script = ET.SubElement(head,'script')
script.set('type','text/javascript')
script.text = "var a = 'I love &aacute; letters'"
body = ET.SubElement(root,'body')
h1 = ET.SubElement(body,'h1')
h1.text = "And I like the fact that 3 > 1"
tree = ET.ElementTree(root)
tree.write('foo.xhtml')
# more foo.xhtml
<html><head><script type="text/javascript">var a = 'I love &amp;aacute;
letters'</script></head><body><h1>And I like the fact that 3 &gt; 1</h1>
</body></html>
```
|
I assume that you're actually creating an XML DOM tree, because you want to validate that what goes into this file is valid XML, since otherwise you'd just write a static string to a file. If validating your output is indeed your goal, then I'd suggest
```
from xml.dom.minidom import parseString
doc = parseString("""<html>
<head>
<script type="text/javascript">
var a = 'I love &amp;aacute; letters'
</script>
</head>
<body>
<h1>And I like the fact that 3 > 1</h1>
</body>
</html>""")
with open("foo.xhtml", "w") as f:
f.write( doc.toxml() )
```
This lets you just write the XML you want to output, validate that it's correct (since parseString will raise an exception if it's invalid) and have your code look much nicer.
Presumably you're not just writing the same static XML every time and want some substitution. In this case I'd have lines like
```
var a = '%(message)s'
```
and then use the % operator to do the substitution, like
```
</html>""" % {"message": "I love &amp;aacute; letters"})
```
|
Another way is using the [E Factory](http://codespeak.net/lxml/tutorial.html#the-e-factory) builder from lxml (available in [Elementtree](http://effbot.org/zone/element-builder.htm) too)
```
>>> from lxml import etree
>>> from lxml.builder import E
>>> def CLASS(*args): # class is a reserved word in Python
... return {"class":' '.join(args)}
>>> html = page = (
... E.html( # create an Element called "html"
... E.head(
... E.title("This is a sample document")
... ),
... E.body(
... E.h1("Hello!", CLASS("title")),
... E.p("This is a paragraph with ", E.b("bold"), " text in it!"),
... E.p("This is another paragraph, with a", "\n ",
... E.a("link", href="http://www.python.org"), "."),
... E.p("Here are some reserved characters: <spam&egg>."),
... etree.XML("<p>And finally an embedded XHTML fragment.</p>"),
... )
... )
... )
>>> print(etree.tostring(page, pretty_print=True))
<html>
<head>
<title>This is a sample document</title>
</head>
<body>
<h1 class="title">Hello!</h1>
<p>This is a paragraph with <b>bold</b> text in it!</p>
<p>This is another paragraph, with a
<a href="http://www.python.org">link</a>.</p>
<p>Here are some reserved characters: &lt;spam&amp;egg&gt;.</p>
<p>And finally an embedded XHTML fragment.</p>
</body>
</html>
```
|
XML writing tools for Python
|
[
"",
"python",
"xml",
"xhtml",
""
] |
I've got a class named `BackgroundWorker` that has a thread constantly running. To turn this thread off, an instance variable named `stop` needs to be `true`.
To make sure the thread is freed when the class is done being used, I've added `IDisposable` and a finalizer that invokes `Dispose()`. Assuming that `stop = true` does indeed cause this thread to exit, is this snippet correct? It's fine to invoke `Dispose` from a finalizer, right?
Finalizers should always call `Dispose` if the object implements `IDisposable`, right?
```
/// <summary>
/// Force the background thread to exit.
/// </summary>
public void Dispose()
{
lock (this.locker)
{
this.stop = true;
}
}
~BackgroundWorker()
{
this.Dispose();
}
```
|
Your code is fine, although locking in a finalizer is somewhat "scary" and I would avoid it - if you get a deadlock... I am not 100% certain what would happen but it would not be good. However, if you are safe this should not be a problem. Mostly. The internals of garbage collection are painful and I hope you never have to see them ;)
As Marc Gravell points out, a volatile bool would allow you to get rid of the lock, which would mitigate this issue. Implement this change if you can.
nedruod's code puts the assignment inside the if (disposing) check, which is completely wrong - the thread is an unmanaged resource and must be stopped even if not explicitly disposing. Your code is fine, I am just pointing out that you should not take the advice given in that code snippet.
Yes, you almost always should call Dispose() from the finalizer if implementing the IDisposable pattern. The full IDisposable pattern is a bit bigger than what you have but you do not always need it - it merely provides two extra possibilities:
1. detecting whether Dispose() was called or the finalizer is executing (you are not allowed to touch any managed resources in the finalizer, outside of the object being finalized);
2. enabling subclasses to override the Dispose() method.
|
First off, a **severe warning**. Don't use a finalizer like you are. You are setting yourself up for some very bad effects if you take locks within a finalizer. Short story is don't do it. Now to the original question.
```
public void Dispose()
{
Dispose(true);
GC.SuppressFinalize(this);
}
/// <summary>
/// Force the background thread to exit.
/// </summary>
protected virtual void Dispose(bool disposing)
{
if (disposing)
{
lock (this.locker)
{
this.stop = true;
}
}
}
~BackgroundWorker()
{
Dispose(false);
}
```
The only reason to have a finalizer at all is to allow sub-classes to extend and release **unmanaged resources**. If you don't have subclasses then seal your class and drop the finalizer completely.
|
Finalizers and Dispose
|
[
"",
"c#",
"dispose",
"idisposable",
"finalizer",
"disposable",
""
] |
How can I create an Excel spreadsheet with C# without requiring Excel to be installed on the machine that's running the code?
|
You can use a library called ExcelLibrary. It's a free, open source library posted on Google Code:
[ExcelLibrary](https://code.google.com/archive/p/excellibrary/)
This looks to be a port of the PHP ExcelWriter that you mentioned above. It will not write to the new .xlsx format yet, but they are working on adding that functionality in.
It's very simple, small and easy to use. Plus it has a DataSetHelper that lets you use DataSets and DataTables to easily work with Excel data.
ExcelLibrary seems to still only work for the older Excel format (.xls files), but may be adding support in the future for newer 2007/2010 formats.
You can also use [EPPlus](https://github.com/JanKallman/EPPlus), which works only for Excel 2007/2010 format files (.xlsx files). There's also [NPOI](https://github.com/tonyqus/npoi) which works with both.
There are a few known bugs with each library as noted in the comments. In all, EPPlus seems to be the best choice as time goes on. It seems to be more actively updated and documented as well.
Also, as noted by @Артём Царионов below, EPPlus has support for Pivot Tables and ExcelLibrary may have some support ([Pivot table issue in ExcelLibrary](https://code.google.com/archive/p/excellibrary/issues/98))
Here are a couple links for quick reference:
[ExcelLibrary](https://code.google.com/archive/p/excellibrary/) - [GNU Lesser GPL](https://www.gnu.org/licenses/lgpl.html)
[EPPlus](https://github.com/JanKallman/EPPlus) - [GNU (LGPL) - No longer maintained](https://github.com/JanKallman/EPPlus#license)
[EPPlus 5](https://www.epplussoftware.com/) - [Polyform Noncommercial - Starting May 2020](https://www.epplussoftware.com/en/LicenseOverview)
[NPOI](https://github.com/tonyqus/npoi) - [Apache License](https://github.com/tonyqus/npoi/blob/master/LICENSE)
**Here some example code for ExcelLibrary:**
Here is an example taking data from a database and creating a workbook from it. Note that the ExcelLibrary code is the single line at the bottom:
```
//Create the data set and table
DataSet ds = new DataSet("New_DataSet");
DataTable dt = new DataTable("New_DataTable");
//Set the locale for each
ds.Locale = System.Threading.Thread.CurrentThread.CurrentCulture;
dt.Locale = System.Threading.Thread.CurrentThread.CurrentCulture;
//Open a DB connection (in this example with OleDB)
OleDbConnection con = new OleDbConnection(dbConnectionString);
con.Open();
//Create a query and fill the data table with the data from the DB
string sql = "SELECT Whatever FROM MyDBTable;";
OleDbCommand cmd = new OleDbCommand(sql, con);
OleDbDataAdapter adptr = new OleDbDataAdapter();
adptr.SelectCommand = cmd;
adptr.Fill(dt);
con.Close();
//Add the table to the data set
ds.Tables.Add(dt);
//Here's the easy part. Create the Excel worksheet from the data set
ExcelLibrary.DataSetHelper.CreateWorkbook("MyExcelFile.xls", ds);
```
Creating the Excel file is as easy as that. You can also manually create Excel files, but the above functionality is what really impressed me.
|
If you are happy with the xlsx format, try my library, EPPlus. It started with the source from ExcelPackage, but since became a total rewrite.
It supports ranges, cell styling, charts, shapes, pictures, named ranges, AutoFilter, and a lot of other stuff.
You have two options:
* [EPPlus 4](https://github.com/JanKallman/EPPlus), licensed under LGPL (original branch, developed until 2020)
* [EPPlus 5](https://github.com/EPPlusSoftware/EPPlus), licensed under Polyform Noncommercial 1.0.0 (since 2020).
From the EPPlus 5 readme.md:
> With the new license EPPlus is still free to use in some cases, but will require a commercial license to be used in a commercial business.
EPPlus website: <https://www.epplussoftware.com/>
|
How do I create an Excel (.XLS and .XLSX) file in C# without installing Microsoft Office?
|
[
"",
"c#",
".net",
"excel",
"file-io",
""
] |
Is it at the state where it is actually useful and can do more than rename classes?
|
CDT (C/C++ Development Tools - eclipse project) 5.0 has a bunch of new refactorings
```
* Declare Method
* Extract Baseclass
* Extract Constant
* Extract Method
* Extract Subclass
* Hide Method
* Implement Method
* Move Field / Method
* Replace Number
* Separate Class
* Generate Getters and Setters
```
There is a CDT refactoring [wiki](http://r2.ifs.hsr.ch/cdtrefactoring)
|
There have been numerous efforts to provide refactoring tools for C++; most of them failed pretty early, because the creation of such tools requires the full ability to process C++ source code, i.e. you need a working, full C++ compiler in the first place to implement even the most basic forms of automated source-to-source transformations.
Fortunately, with the introduction of [plugins into gcc](http://gcc.gnu.org/wiki/plugins), it it's finally becoming foreseeable that related efforts may actually be able to leverage an existing C++ compiler for this purpose, instead of having to resort to their own implementations of a C++ compiler.
For a more in depth discussion, you may want to check out [this](http://www.crystalclearsoftware.com/cgi-bin/boost_wiki/wiki.pl?action=browse&id=BoostSpiritCXXParser&revision=48).
For the time being, the most promising candidate to provide widely automated C++ refactoring support, is certainly the Mozilla [pork](https://wiki.mozilla.org/pork) project, along with its related companion project [Dehydra](https://developer.mozilla.org/En/Dehydra).
|
What is the state of C++ refactor support in Eclipse?
|
[
"",
"c++",
"eclipse",
"refactoring",
""
] |
At the moment I am working on a project admin application in C# 3.5 on ASP.NET. In order to reduce hits to the database, I'm caching a lot of information using static variables. For example, a list of users is kept in memory in a static class. The class reads in all the information from the database on startup, and will update the database whenever changes are made, but it never needs to read from the database.
The class pings other webservers (if they exist) with updated information at the same time as a write to the database. The pinging mechanism is a Windows service to which the cache object registers using a random available port. It is used for other things as well.
The amount of data isn't all that great. At the moment I'm using it just to cache the users (password hashes, permissions, name, email etc.) It just saves a pile of calls being made to the database.
I was wondering if there are any pitfalls to this method and/or if there are better ways to cache the data?
|
A pitfall: A static field is scoped per app domain, and increased load will make the server generate more app domains in the pool. This is not necessarily a problem if you only read from the statics, but you will get duplicate data in memory, and you will get a hit every time an app domain is created or recycled.
Better to use the Cache object - it's intended for things like this.
Edit: Turns out I was wrong about AppDomains (as pointed out in comments) - more instances of the *Application* will be generated under load, but they will all run in the same AppDomain. (But you should still use the Cache object!)
|
As long as you can expect that the cache will never grow to a size greater than the amount of available memory, it's fine. Also, be sure that there will only be one instance of this application per database, or the caches in the different instances of the app could "fall out of sync."
Where I work, we have a homegrown O/RM, and we do something similar to what you're doing with certain tables which are not expected to grow or change much. So, what you're doing is not unprecedented, and in fact in our system, is tried and true.
|
Is it OK to use static variables to cache information in ASP.net?
|
[
"",
"c#",
"asp.net",
"caching",
"static-variables",
""
] |
I want to write a program which plays an audio file that reads out a text.
I want to highlight the current syllable that the audio file plays in green, and the rest of the current word in red.
What kind of data structure should I use to store the audio file and the information that tells the program when to switch to the next word/syllable?
|
This is a slightly left-field suggestion, but have you looked at Karaoke software? It may not be seen as "serious" enough, but it sounds very similar to what you're doing. For example, [Aegisub](http://www.aegisub.net/) is a subtitling program that lets you create subtitles in the SSA/ASS format. It has karaoke tools for hilighting the chosen word or part.
It's most commonly used for subtitling anime, but it also works for audio provided you have a suitable player. These are sadly quite rare on the Mac.
The format looks similar to the one proposed by Yuval A:
```
{\K132}Unmei {\K34}no {\K54}tobira
{\K60}{\K132}yukkuri {\K36}to {\K142}hirakareta
```
The lengths are durations rather than absolute offsets. This makes it easier to shift the start of the line without recalculating all the offsets. The double entry indicates a pause.
Is there a good reason this needs to be part of your Java program, or is an off the shelf solution possible?
|
How about a simple data structure that describes which batch of letters makes up the next syllable, and the timestamp for switching to that syllable?
Just a quick example:
[0:00] This [0:02] is [0:05] an [0:07] ex- [0:08] am- [0:10] ple
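That idea maps naturally onto a list of (start time, text) cues kept in chronological order. A minimal Java sketch (the class and method names are illustrative, and timestamps are in milliseconds):

```java
import java.util.ArrayList;
import java.util.List;

public class SyllableTrack {
    // One cue per syllable: when it starts (ms into the audio) and its text.
    static class Cue {
        final long startMillis;
        final String text;
        Cue(long startMillis, String text) {
            this.startMillis = startMillis;
            this.text = text;
        }
    }

    private final List<Cue> cues = new ArrayList<>();

    // Cues must be added in chronological order.
    public void add(long startMillis, String text) {
        cues.add(new Cue(startMillis, text));
    }

    // Returns the syllable active at the given playback position,
    // or null before the first cue.
    public String activeAt(long positionMillis) {
        String current = null;
        for (Cue cue : cues) {
            if (cue.startMillis <= positionMillis) {
                current = cue.text;
            } else {
                break;
            }
        }
        return current;
    }

    public static void main(String[] args) {
        SyllableTrack track = new SyllableTrack();
        track.add(0, "This");
        track.add(2000, "is");
        track.add(5000, "an");
        track.add(7000, "ex-");
        System.out.println(track.activeAt(6000)); // prints "an"
    }
}
```

Your player's position callback would call `activeAt` to decide which syllable to paint green, and the word boundaries could be stored the same way in a second list of cues.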
|
Which data structure for linking text with audio in Java
|
[
"",
"java",
"audio",
""
] |
I'm trying to figure out the best way to determine whether I'm in the last iteration of a loop over a map in order to do something like the following:
```
for (iter = someMap.begin(); iter != someMap.end(); ++iter) {
bool last_iteration;
// do something for all iterations
if (!last_iteration) {
// do something for all but the last iteration
}
}
```
There seem to be several ways of doing this: random access iterators, the `distance` function, etc. What's the canonical method?
Edit: no random access iterators for maps!
|
Canonical? I can't claim that, but I'd suggest
```
final_iter = someMap.end();
--final_iter;
if (iter != final_iter) ...
```
**Edited** to correct as suggested by **[KTC](https://stackoverflow.com/questions/151046/c-last-loop-iteration-stl-map-iterator#151104).** (Thanks! Sometimes you go too quick and mess up on the simplest things...)
|
Since C++11, you can also use std::next()
```
for (auto iter = someMap.begin(); iter != someMap.end(); ++iter) {
// do something for all iterations
if (std::next(iter) != someMap.end()) {
// do something for all but the last iteration
}
}
```
Although the question was asked a while ago, I thought it would be worth sharing.
|
How can I detect the last iteration in a loop over std::map?
|
[
"",
"c++",
"stl",
"iterator",
"maps",
""
] |
I want to split an arithmetic expression into tokens, to convert it into RPN.
Java has the StringTokenizer, which can optionally keep the delimiters. That way, I could use the operators as delimiters. Unfortunately, I need to do this in PHP, which has strtok, but that throws away the delimiters, so I need to brew something myself.
This sounds like a classic textbook example for Compiler Design 101, but I'm afraid I'm lacking some formal education here. Is there a standard algorithm you can point me to?
My other options are to read up on [Lexical Analysis](http://en.wikipedia.org/wiki/Lexical_analysis) or to roll up something quick and dirty with the available string functions.
|
As often, I would just use a regular expression to do this:
```
$expr = '(5*(7 + 2 * -9.3) - 8 )/ 11';
$tokens = preg_split('/([*\/^+-]+)\s*|([\d.]+)\s*/', $expr, -1,
PREG_SPLIT_DELIM_CAPTURE | PREG_SPLIT_NO_EMPTY);
$tts = print_r($tokens, true);
echo "<pre>x=$tts</pre>";
```
It needs a little more work to accept numbers with exponent (like -9.2e-8).
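For instance, one way to bolt on exponent support is to give the number class an optional `[eE][+-]?\d+` tail. This sketch is in Python so it is easy to run (the function name is made up for this example); the same character classes carry over to PHP's `preg_match_all`:

```python
import re

# Same token classes as the answer's pattern, plus an optional exponent
# on numbers, so "-9.2e-8" tokenizes as '-' followed by '9.2e-8'.
TOKEN = re.compile(r'(\d+(?:\.\d*)?(?:[eE][+-]?\d+)?)|([*/^+-]+)|([()])')

def tokenize(expr):
    tokens = []
    pos = 0
    expr = expr.strip()
    while pos < len(expr):
        if expr[pos].isspace():
            pos += 1
            continue
        m = TOKEN.match(expr, pos)
        if not m:
            raise ValueError("unexpected character %r at %d" % (expr[pos], pos))
        # Exactly one group matches; lastindex tells us which.
        tokens.append(m.group(m.lastindex))
        pos = m.end()
    return tokens

print(tokenize('(5*(7 + 2 * -9.3) - 8 )/ 11'))
```

As in the original pattern, a leading minus sign comes out as an operator token rather than part of the number; deciding whether a `-` is unary is easier to do in the RPN conversion step, where the previous token is known.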
|
This might help.
[Practical Uses of Tokenizer](http://c7y.phparch.com/c/entry/1/art,practical_uses_tokenizer)
|
Standard algorithm to tokenize a string, keep delimiters (in PHP)
|
[
"",
"php",
"algorithm",
"parsing",
""
] |
What characters are valid in a Java class name? What other rules govern Java class names (for instance, Java class names cannot begin with a number)?
|
You can have almost any character, including most Unicode characters! The exact definition is in [the Java Language Specification under section 3.8: Identifiers](http://docs.oracle.com/javase/specs/jls/se7/html/jls-3.html#jls-3.8).
> An *identifier* is an unlimited-length sequence of *Java letters* and *Java digits*, the first of which must be a *Java letter*. ...
>
> Letters and digits may be drawn from the entire Unicode character set, ... This allows programmers to use identifiers in their programs that are written in their native languages.
>
> An identifier cannot have the same spelling (Unicode character sequence) as a keyword (§3.9), boolean literal (§3.10.3), or the null literal (§3.10.7), or a compile-time error occurs.
However, see [this question](https://stackoverflow.com/questions/61615/should-you-use-international-identifiers-in-javac) for whether or not you should do that.
|
> Every programming language has its own set of rules and conventions for the kinds of names that you're allowed to use, and the Java programming language is no different. The rules and conventions for naming your variables can be summarized as follows:
>
> * Variable names are case-sensitive. A variable's name can be any legal identifier - an unlimited-length sequence of Unicode letters and digits, beginning with a letter, the dollar sign "$", or the underscore character "\_". The convention, however, is to always begin your variable names with a letter, not "$" or "\_". Additionally, the dollar sign character, by convention, is never used at all. You may find some situations where auto-generated names will contain the dollar sign, but your variable names should always avoid using it. A similar convention exists for the underscore character; while it's technically legal to begin your variable's name with "\_", this practice is discouraged. White space is not permitted.
> * Subsequent characters may be letters, digits, dollar signs, or underscore characters. Conventions (and common sense) apply to this rule as well. When choosing a name for your variables, use full words instead of cryptic abbreviations. Doing so will make your code easier to read and understand. In many cases it will also make your code self-documenting; fields named cadence, speed, and gear, for example, are much more intuitive than abbreviated versions, such as s, c, and g. Also keep in mind that the name you choose must not be a keyword or reserved word.
> * If the name you choose consists of only one word, spell that word in all lowercase letters. If it consists of more than one word, capitalize the first letter of each subsequent word. The names gearRatio and currentGear are prime examples of this convention. If your variable stores a constant value, such as `static final int NUM_GEARS = 6`, the convention changes slightly, capitalizing every letter and separating subsequent words with the underscore character. By convention, the underscore character is never used elsewhere.
From [the official Java Tutorial](https://docs.oracle.com/javase/tutorial/java/nutsandbolts/variables.html).
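As a rough illustration of the identifier rules above, here is an *approximate* checker sketched in Python with `unicodedata`. "Java letter" is simplified here to Unicode letters plus `$` and `_` (Java's real `Character.isJavaIdentifierStart`/`isJavaIdentifierPart` also admit currency symbols, connector punctuation, and a few other categories), and the keyword list is truncated to a few illustrative entries:

```python
import unicodedata

KEYWORDS = {'class', 'int', 'new', 'null', 'true', 'false'}  # illustrative subset only

def is_java_letter(ch):
    # Simplification: any Unicode letter category (Lu, Ll, Lt, Lm, Lo), plus $ and _
    return ch in '$_' or unicodedata.category(ch).startswith('L')

def is_java_letter_or_digit(ch):
    return is_java_letter(ch) or unicodedata.category(ch) == 'Nd'

def looks_like_valid_identifier(name):
    if not name or name in KEYWORDS:
        return False
    return is_java_letter(name[0]) and all(is_java_letter_or_digit(c) for c in name[1:])

print(looks_like_valid_identifier('gearRatio'))   # True
print(looks_like_valid_identifier('2fast'))       # False: cannot start with a digit
print(looks_like_valid_identifier('über'))        # True: Unicode letters are allowed
```

This is a sketch of the rules, not a replacement for the JLS definition.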
|
Valid characters in a Java class name
|
[
"",
"java",
"class",
"unicode",
"naming-conventions",
"invalid-characters",
""
] |
py2exe does not work with the standard email module
Hello. I am trying to use py2exe for converting a script into an exe. The build process shows this:
```
The following modules appear to be missing
['email.Encoders', 'email.Generator', 'email.Iterators', 'email.MIMEBase', 'email.MIMEMultipart', 'email.MIMEText', 'email.Utils', 'email.base64MIME']
```
The executable does not work. The referenced modules are not included. I researched this on the Internet and I found out that py2exe has a problem with the Lazy import used in the standard lib email module. Unfortunately I have not succeeded in finding a workaround for this problem. Can anyone help?
Thank you,
P.S.
Imports in the script look like this:
```
import string,time,sys,os,smtplib
from email.MIMEMultipart import MIMEMultipart
from email.MIMEBase import MIMEBase
from email.MIMEText import MIMEText
from email import Encoders
```
|
If you don't have to work with py2exe, bbfreeze works better, and I've tried it with the email module. <http://pypi.python.org/pypi/bbfreeze/0.95.4>
|
Have a look at this question [how-to-package-twisted-program-with-py2exe](https://stackoverflow.com/questions/169897/how-to-package-twisted-program-with-py2exe) it seems to be the same problem.
The answer given there is to explicitly include the modules on the command line to py2exe.
|
Python - Py2exe can't build .exe using the 'email' module
|
[
"",
"python",
"winapi",
"py2exe",
""
] |
I have built an application in C# that I would like to be optimized for multiple cores. I have some threads, should I do more?
**Updated for more detail**
* C# 2.0
* Run on Windows Vista and Windows Server 2003
**Updated again**
* This code is running as a service
* I do not want to post the complete code... my goal here is to draw on your experience and learn how to start. As I said, I already use threads. What more can I do?
|
In general, writing a highly optimized multi-threaded process is a lot harder than just throwing some threads into the mix.
I recommend starting with the following steps:
1. Split up your workloads into discrete parallel executable units
2. Measure and characterize workload types - Network intensive, I/O intensive, CPU intensive etc - these become the basis for your worker pooling strategies. e.g. you can have pretty large pools of workers for network intensive applications, but it doesn't make sense having more workers than hardware-threads for CPU intensive tasks.
3. Think about queuing/array or ThreadWorkerPool to manage pools of threads. Former more finegrain controlled than latter.
4. Learn to prefer async I/O patterns over sync patterns if you can - frees more CPU time to perform other tasks.
5. Work to eliminate, or at least reduce, serialization around contended resources such as disk.
6. Minimize I/O, and acquire and hold the minimum level of locks for the minimum period possible. (Reader/writer locks are your friend.)
7. Comb through the code to ensure that resources are locked in a consistent sequence, to minimize deadly embraces (deadlocks).
8. Test like crazy - race conditions and bugs in multithreaded applications are hellish to troubleshoot - often you only see the forensic aftermath of the massacre.
Bear in mind that it is entirely possible that a multi-threaded version could perform worse than a single-threaded version of the same app. There is no substitute for good engineering measurement.
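The pool-sizing point above (wide pools for I/O-bound work, roughly one worker per hardware thread for CPU-bound work) might be sketched like this in Python; the workload function and the `* 8` multiplier are illustrative stand-ins, not a rule:

```python
import os
from concurrent.futures import ThreadPoolExecutor

def io_bound_task(n):
    # Stand-in for a network or disk call; a real task would block on I/O.
    return n * 2

cpu_workers = os.cpu_count() or 1   # CPU-bound: ~one worker per hardware thread
io_workers = cpu_workers * 8        # I/O-bound: can go much wider, workers mostly wait

with ThreadPoolExecutor(max_workers=io_workers) as pool:
    results = list(pool.map(io_bound_task, range(10)))
print(results)
```

The same shape applies in C# with `ThreadPool`/`Task`: the measurement step tells you which pool a given workload belongs in.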
|
You might want to take a look at the parallel extensions for .NET
<http://msdn.com/concurrency>
|
How to make my code run on multiple cores?
|
[
"",
"c#",
"multithreading",
"multicore",
""
] |
In .NET, under which circumstances should I use `GC.SuppressFinalize()`?
What advantage(s) does using this method give me?
|
`SuppressFinalize` should only be called by a class that has a finalizer. It's informing the Garbage Collector (GC) that `this` object was cleaned up fully.
The recommended `IDisposable` pattern when you have a finalizer is:
```
public class MyClass : IDisposable
{
private bool disposed = false;
protected virtual void Dispose(bool disposing)
{
if (!disposed)
{
if (disposing)
{
// called via myClass.Dispose().
// OK to use any private object references
}
// Release unmanaged resources.
// Set large fields to null.
disposed = true;
}
}
public void Dispose() // Implement IDisposable
{
Dispose(true);
GC.SuppressFinalize(this);
}
~MyClass() // the finalizer
{
Dispose(false);
}
}
```
Normally, the CLR keeps tabs on objects with a finalizer when they are created (making them more expensive to create). `SuppressFinalize` tells the GC that the object was cleaned up properly and doesn't need to go onto the finalizer queue. It looks like a C++ destructor, but doesn't act anything like one.
The `SuppressFinalize` optimization is not trivial, as your objects can live a long time waiting on the finalizer queue. Don't be tempted to call `SuppressFinalize` on other objects mind you. That's a serious defect waiting to happen.
Design guidelines inform us that a finalizer isn't necessary if your object implements `IDisposable`, but if you have a finalizer you should implement `IDisposable` to allow deterministic cleanup of your class.
Most of the time you should be able to get away with `IDisposable` to clean up resources. You should only need a finalizer when your object holds onto unmanaged resources and you need to guarantee those resources are cleaned up.
Note: Sometimes coders will add a finalizer to debug builds of their own `IDisposable` classes in order to test that code has disposed their `IDisposable` object properly.
```
public void Dispose() // Implement IDisposable
{
Dispose(true);
#if DEBUG
GC.SuppressFinalize(this);
#endif
}
#if DEBUG
~MyClass() // the finalizer
{
Dispose(false);
}
#endif
```
|
`SuppressFinalize` tells the system that whatever work would have been done in the finalizer has already been done, so the finalizer doesn't need to be called. From the .NET docs:
> Objects that implement the IDisposable
> interface can call this method from
> the IDisposable.Dispose method to
> prevent the garbage collector from
> calling Object.Finalize on an
> object that does not require it.
In general, most any `Dispose()` method should be able to call `GC.SuppressFinalize()`, because it should clean up everything that would be cleaned up in the finalizer.
`SuppressFinalize` just provides an optimization that allows the system to not bother queuing the object to the finalizer thread. A properly written `Dispose()`/finalizer should work properly with or without a call to `GC.SuppressFinalize()`.
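A loose analogue of the pattern (deterministic cleanup that cancels the safety-net finalizer) can be sketched in Python with `weakref.finalize` - this is an analogy for illustration, not how the .NET runtime works internally. `detach()` plays roughly the role of `GC.SuppressFinalize`:

```python
import weakref

class Resource:
    def __init__(self):
        self._closed = False
        # The finalizer is the safety net, like a .NET finalizer:
        # it only fires if close() was never called.
        self._finalizer = weakref.finalize(self, print, 'finalizer ran: resource leaked')

    def close(self):  # plays the role of Dispose()
        if not self._closed:
            self._closed = True
            # Cleanup already done: cancel the safety net,
            # roughly what GC.SuppressFinalize(this) achieves.
            self._finalizer.detach()

r = Resource()
r.close()
del r   # nothing is printed: cleanup was deterministic, finalizer was cancelled
```

As in .NET, the object works correctly either way; detaching just skips the redundant finalization pass.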
|
When should I use GC.SuppressFinalize()?
|
[
"",
"c#",
".net",
"garbage-collection",
"idisposable",
"suppressfinalize",
""
] |
I need to upload some data to a server using HTTP `PUT` method in Python. From my brief reading of the `urllib2` docs, it only does HTTP `POST`.
Is there any way to do an HTTP `PUT` in Python?
|
I've used a variety of python HTTP libs in the past, and I've settled on [requests](https://requests.readthedocs.io/) as my favourite. Existing libs had pretty useable interfaces, but code can end up being a few lines too long for simple operations. A basic PUT in requests looks like:
```
>>> payload = {'username': 'bob', 'email': 'bob@bob.com'}
>>> r = requests.put("http://somedomain.org/endpoint", data=payload)
```
You can then check the response status code with:
```
r.status_code
```
or the response with:
```
r.content
```
Requests has a lot of syntactic sugar and shortcuts that'll make your life easier.
|
```
import urllib2
opener = urllib2.build_opener(urllib2.HTTPHandler)
request = urllib2.Request('http://example.org', data='your_put_data')
request.add_header('Content-Type', 'your/contenttype')
request.get_method = lambda: 'PUT'
url = opener.open(request)
```
|
Is there any way to do HTTP PUT request in Python?
|
[
"",
"python",
"http",
"put",
""
] |
I was thinking about obfuscating a commercial .Net application. But is it really worth the effort to select, buy and use such a tool? Are the obfuscated binaries really safe from reverse engineering?
|
You may not have to buy a tool - Visual Studio.NET comes with a community version of Dotfuscator. Other free obfuscation tools [are listed here](http://twit88.com/blog/2007/09/15/free-net-obfuscation-tools/), and they may meet your needs.
It's possible that the obfuscated binaries aren't safe from reverse engineering, just like it's possible that your bike lock might be breakable/pickable. However, it's often the case that a small inconvenience is enough to deter would be code/bicycle thieves.
Also, if ever it comes time to assert your rights to a piece of code in court, having been seen to make an effort to protect it (by obfuscating it) may give you extra points. :-)
You do have to consider the downsides, though - it can be more difficult to use reflection with obfuscated code, and if you're using something like log4net to generate parts of log lines based on the name of the class involved, these messages can become much more difficult to interpret.
|
Remember that obfuscation is only a barrier to the casual examiner of your code. If someone is serious about figuring out what you wrote, you will have a very hard time stopping them.
If you have secrets in your code (like passwords), you're doing it wrong.
If you worried someone might produce your own software with your ideas, you'll have more luck in the marketplace by providing new versions that your customers want, with technical support, and by being a partner to them. Good business wins.
|
Should you obfuscate a commercial .Net application?
|
[
"",
"c#",
".net",
"obfuscation",
""
] |
I'm stuck on a PHP 4 server, and I would like to start moving an old legacy project to modern Design Patterns, including Dependency Injection. Are there any dependency injection frameworks that will work with PHP 4?
|
I found [this](http://drip.tigris.org/) (drip), but it looks like it hasn't been updated in a few years.
|
Most dependency injection frameworks use reflection to determine dependencies. Since PHP4 doesn't have type hints, you can't really do this. Experiments have been made with using config files - sometimes embedded in comments in the code (often called annotations). While this works, I find it a bit clunky. In my opinion, you're better off using PHP's dynamic nature to your advantage than trying to apply statically typed solutions to it. You can get a long way with hand-crafted factories. See for example [this post on how](http://www.sitepoint.com/blogs/2008/02/04/dealing-with-dependencies/).
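The hand-crafted-factory approach needs no reflection at all, so it works in any language; a minimal sketch of the idea (shown here in Python with made-up class names, but it maps directly onto PHP 4 classes):

```python
class Mailer:
    def send(self, to, body):
        return 'sent to %s' % to

class SignupService:
    # Dependency passed in through the constructor, not created inside:
    # this is the whole of "dependency injection" in its simplest form.
    def __init__(self, mailer):
        self.mailer = mailer

    def register(self, email):
        return self.mailer.send(email, 'welcome')

class Factory:
    # The factory is the one place that knows how to wire things together,
    # so tests can bypass it and inject a fake Mailer directly.
    def signup_service(self):
        return SignupService(Mailer())

svc = Factory().signup_service()
print(svc.register('bob@example.com'))   # sent to bob@example.com
```

No framework, no annotations: just constructors that take their collaborators as arguments and one factory that assembles them.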
|
Is there a Dependency Injection framework for PHP4?
|
[
"",
"php",
"dependency-injection",
"design-patterns",
"php4",
""
] |
I have inherited a C# class 'Button' (which I can't change) which clashes with the BCL class 'Windows.Forms.Button'. Normally, I'd be very happy to go:
```
MyPackage.MyClass.Button;
```
But there are a large number of references to this class, which would be a pain to re-type.
Is there any way to get the compiler (linker?) to default to using the customised version of Button over the BCL version?
|
Add this to the top of the file:
```
using MyButton = MyPackage.MyClass.Button;
```
Now you can reference your custom button using a distinct name. You may need to do something similar for the stock button if you use that anywhere in the same file.
|
if you want to use it by default, replace
```
using Windows.Forms;
```
with
```
using MyPackage.MyClass;
```
If you do that, you'll need to fully qualify all the buttons from Windows.Forms.
Or, if you want to, you can alias the namespace
```
using My = MyPackage.MyClass;
//... then
My.Button b = ...
```
Or alias the button
```
using MyButton = MyPackage.MyClass.Button;
```
|
C# Default scope resolution
|
[
"",
"c#",
"linker",
"alias",
"scope-resolution",
"name-clash",
""
] |
I have two tables that are joined together.
A has many B
Normally you would do:
```
select * from a,b where b.a_id = a.id
```
To get all of the records from a that has a record in b.
How do I get just the records in a that does not have anything in b?
|
```
select * from a where id not in (select a_id from b)
```
Or like some other people on this thread says:
```
select a.* from a
left outer join b on a.id = b.a_id
where b.a_id is null
```
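Both forms above compute what is sometimes called an anti-join; a quick in-memory sketch of the semantics, using made-up sample rows:

```python
a = [{'id': 1}, {'id': 2}, {'id': 3}]
b = [{'a_id': 1}, {'a_id': 3}]

# "select * from a where id not in (select a_id from b)"
referenced = {row['a_id'] for row in b}
unmatched = [row for row in a if row['id'] not in referenced]
print(unmatched)   # [{'id': 2}]
```

One caveat worth knowing for the real SQL: `NOT IN` returns no rows at all if `b.a_id` can contain NULLs, so the outer-join form (or `NOT EXISTS`) is often the safer choice.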
|
```
select * from a
left outer join b on a.id = b.a_id
where b.a_id is null
```
|
How do I find records that are not joined?
|
[
"",
"sql",
"select",
"join",
"anti-join",
""
] |
The following test fails:
```
#!/usr/bin/env python
def f(*args):
"""
>>> t = 1, -1
>>> f(*map(lambda i: lambda: i, t))
[1, -1]
>>> f(*(lambda: i for i in t)) # -> [-1, -1]
[1, -1]
>>> f(*[lambda: i for i in t]) # -> [-1, -1]
[1, -1]
"""
alist = [a() for a in args]
print(alist)
if __name__ == '__main__':
import doctest; doctest.testmod()
```
In other words:
```
>>> t = 1, -1
>>> args = []
>>> for i in t:
... args.append(lambda: i)
...
>>> map(lambda a: a(), args)
[-1, -1]
>>> args = []
>>> for i in t:
... args.append((lambda i: lambda: i)(i))
...
>>> map(lambda a: a(), args)
[1, -1]
>>> args = []
>>> for i in t:
... args.append(lambda i=i: i)
...
>>> map(lambda a: a(), args)
[1, -1]
```
|
They are different, because the value of `i` in both the generator expression and the list comp are evaluated lazily, i.e. when the anonymous functions are invoked in `f`.
By that time, `i` is bound to the last value of `t`, which is -1.
So basically, this is what the list comprehension does (likewise for the genexp):
```
x = []
i = 1 # 1. from t
x.append(lambda: i)
i = -1 # 2. from t
x.append(lambda: i)
```
Now the lambdas carry around a closure that references `i`, but `i` is bound to -1 in both cases, because that is the last value it was assigned to.
If you want to make sure that the lambda receives the current value of `i`, do
```
f(*[lambda u=i: u for i in t])
```
This way, you force the evaluation of `i` at the time the closure is created.
**Edit**: There is one difference between generator expressions and list comprehensions: the latter leak the loop variable into the surrounding scope.
|
The lambda captures variables, not values, hence the code
```
lambda : i
```
will always return the value i is **currently** bound to in the closure. By the time it gets called, this value has been set to -1.
To get what you want, you'll need to capture the actual binding at the time the lambda is created, by:
```
>>> f(*(lambda i=i: i for i in t))
[1, -1]
>>> f(*[lambda i=i: i for i in t])
[1, -1]
```
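The difference is easy to see outside of `f` as well: each lambda without the default argument closes over the loop variable and sees its final value, while the default-argument form snapshots the value at creation time:

```python
t = (1, -1)

late = [lambda: i for i in t]        # all close over the same variable i
bound = [lambda i=i: i for i in t]   # default argument is evaluated per iteration

print([g() for g in late])    # [-1, -1]
print([g() for g in bound])   # [1, -1]
```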
|
Why results of map() and list comprehension are different?
|
[
"",
"python",
"closures",
"list-comprehension",
"late-binding",
"generator-expression",
""
] |
When running one of our software, a tester was faced with the data execution prevention dialog of Windows.
We tried to reproduce this situation on a developer computer for debugging purposes, with no success.
Does anyone know how to find what may cause the DEP protection to kill the application?
Is there any existing tools available for this?
|
The DEP dialog will typically only show when you try to execute code from a region that you're not marking as executable. This might be caused by 'thunks' in a library you're using, e.g. ATL windowing. This problem is fixed in ATL 8.0.
A stack-trashing bug - for example, a buffer overrun - can also cause this problem, by setting the return address to a location that isn't executable. This might not cause an access violation but instead weird behaviour, if DEP is turned off for the process or not available on the hardware.
It might also happen if you throw a C++ exception or raise an SEH exception, and your structured exception handlers have been trashed by a buffer overrun.
|
Potentially I would think any time you try to write to memory that isn't allocated this would be a possible result. Could be anything along the lines of deleting an object then using it, or writing a string into a buffer which is too small to hold it.
|
Track Data Execution Prevention (DEP)
|
[
"",
"c++",
"dep",
""
] |
Eclipse is a really great editor, which I prefer to use, but the GUI design tools for Eclipse are lacking. On the other hand, NetBeans works really well for GUI design.
Are there any tips, tricks or pitfalls for using NetBeans for GUI design and Eclipse for everything else on the same project?
**EDIT:** I tried Maven, and it does not seem to work (too complex for my needs).
|
Create your GUI with NetBeans. Copy an Eclipse `.project` file (like the one below) into the project folder and change `MyProjectName`. Open Eclipse and import the project into your workspace; the same project can then also be opened with NetBeans. Now you are able to use NetBeans to create and change the GUI while editing the code with Eclipse.
```
<?xml version="1.0" encoding="UTF-8"?>
<projectDescription>
<name>MyProject</name>
<comment></comment>
<projects>
</projects>
<buildSpec>
<buildCommand>
<name>org.eclipse.jdt.core.javabuilder</name>
<arguments>
</arguments>
</buildCommand>
</buildSpec>
<natures>
<nature>org.eclipse.jdt.core.javanature</nature>
</natures>
</projectDescription>
```
|
MyEclipse offers an integration of the Netbeans GUI editor (Matisse) with Eclipse.
See <http://www.myeclipseide.com/module-htmlpages-display-pid-5.html>
|
Using both Eclipse and NetBeans on the same project
|
[
"",
"java",
"eclipse",
"netbeans",
""
] |
I'm looking for a decent lexical scanner generator for C#/.NET -- something that supports Unicode character categories, and generates somewhat readable & efficient code. Anyone know of one?
---
EDIT: I ***need*** support for **Unicode categories**, not just Unicode characters. There are currently 1421 characters in just the `Lu` (Letter, Uppercase) category alone, and I need to match many different categories very specifically, and would rather not hand-write the character sets necessary for it.
Also, actual code is a ***must*** -- this rules out things that generate a binary file that is then used with a driver (i.e. GOLD)
---
EDIT: ANTLR does not support Unicode categories yet. There is an [open issue](http://www.antlr.org/wiki/display/ANTLR3/define+unicode+aliases+for+character+categories) for it, though, so it might fit my needs someday.
|
GPLEX seems to support your requirements.
|
The two solutions that come to mind are [ANTLR](http://www.antlr.org) and [Gold](http://www.devincook.com/goldparser/). ANTLR has a GUI based grammar designer, and an excellent sample project in C# [can be found here](http://www.codeproject.com/KB/recipes/sota_expression_evaluator.aspx).
|
C#/.NET Lexer Generators
|
[
"",
"c#",
".net",
"parsing",
"generator",
"lexical-analysis",
""
] |
I know it's possible, and I've seen simple examples in the documentation, but are they being used in the wild?
I use attributes at the class and method level all the time, but have never used them on method parameters. What are some real-world examples, and the reasons for the usage?
I'm not interested in seeing a textbook example, mind you. There are plenty of those out there. I want to see an actual reason why it solved a particular problem for you.
EDIT: Let's place aside the discussion about whether or not to use attributes in the first place. I understand some people don't like them because they "dirty" their code. That's for a different discussion!
|
You can for example create a ValidatorAttribute for every parameter, then before calling the method, you can reflect the parameter attributes and do parameter validation. Then call the method if all ok.
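The parameter-validator idea maps onto decorators in other languages; here is a hedged Python sketch of "reflect the parameter attributes, validate, then call" (the names `validate` and `not_empty` are hypothetical, invented for this illustration):

```python
import functools
import inspect

def not_empty(value):
    if not value:
        raise ValueError('argument must not be empty')

def validate(**validators):
    # Decorator playing the role of per-parameter attributes:
    # each keyword names a parameter and supplies its validator.
    def wrap(func):
        sig = inspect.signature(func)
        @functools.wraps(func)
        def inner(*args, **kwargs):
            bound = sig.bind(*args, **kwargs)
            for name, check in validators.items():
                check(bound.arguments[name])   # validate before calling
            return func(*args, **kwargs)
        return inner
    return wrap

@validate(name=not_empty)
def greet(name):
    return 'hello ' + name

print(greet('world'))   # hello world
```

In .NET the same shape is usually achieved by reading the parameter attributes via reflection in an interceptor or AOP layer (e.g. PostSharp) rather than in the method itself.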
|
(I've left this answer here in case others find it a useful intro to PostSharp, but it doesn't actually answer the question properly! I misread the question as asking about method attributes instead of class attributes. Doh. From what I remember, the generated SOAP classes use parameter attributes. LINQ to SQL uses return attributes and may use parameter attributes too, when it comes to stored procs.)
I'm used them with PostSharp, although admittedly only in a quick demo so far (I haven't used PostSharp in production).
See [my blog post](http://msmvps.com/blogs/jon_skeet/archive/2008/03/27/postsharp-and-iterator-blocks-a-beautiful-combination.aspx) for more details.
Oh, and of course NUnit tests specify [Test] all over the place :)
Jon
|
Is there a practical example of how they have used attributes on method parameters in .NET?
|
[
"",
"c#",
".net",
""
] |
Where can I find a list of all types of bsd style socket errors?
|
In the documentation? For instance, for connect(), see:
```
% man connect
...
ECONNREFUSED
No-one listening on the remote address.
EISCONN
The socket is already connected.
ENETUNREACH
Network is unreachable.
```
|
You can also find a list of error codes (and a general description of their meaning) on the Open Group's pages for each function (like [connect](http://www.opengroup.org/onlinepubs/009695399/functions/connect.html), for example).
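The same symbolic names are also available programmatically; for instance, Python's `errno` and `os` modules can map each constant to its name and system message:

```python
import errno
import os

# Symbolic socket error constants, their names, and the system's message text
for code in (errno.ECONNREFUSED, errno.EISCONN, errno.ENETUNREACH):
    print(code, errno.errorcode[code], os.strerror(code))
```

The message strings come from the platform's C library, so the exact wording varies between systems.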
|
good resource for socket errors?
|
[
"",
"c++",
"c",
"sockets",
""
] |
By default, in C++, a single-argument constructor can be used as an implicit conversion operator. This can be suppressed by marking the constructor as explicit.
I'd prefer to make "explicit" be the default, so that the compiler cannot silently use these constructors for conversion.
Is there a way to do this in standard C++? Failing that, is there a pragma (or similar) that'll work in Microsoft C++ to do this? What about g++ (we don't use it, but it might be useful information)?
|
Nope, you have to do it all by hand. It's a pain, but you certainly should get in the habit of making single argument constructors explicit. I can't imagine the pain you would have if you did find a solution and then had to port the code to another platform. You should usually shy away from compiler extensions like this because it will make the code less portable.
|
If there was a pragma or command line option that made constructors explicit by default, how would you declare one that is not explicit? There would have to be another compiler-specific token or pragma to make it possible to declare an implicit conversion constructor.
|
Forcing single-argument constructors to be explicit in C++?
|
[
"",
"c++",
""
] |
I'm using [Enunciate](http://enunciate.codehaus.org) to build a prototype REST api and need to include a jar containing custom code as a library.
My Ant Script looks like this:
```
<!--include all jars-->
<path id="en.classpath">
<fileset dir="${lib}">
<include name="**/*.jar" />
</fileset>
</path>
<!--define the task-->
<taskdef name="enunciate" classname="org.codehaus.enunciate.main.EnunciateTask">
<classpath refid="en.classpath" />
</taskdef>
<mkdir dir="${dist}" />
<enunciate dir="${src}" configFile="${basedir}/enunciate.xml">
<include name="**/*.java" />
<classpath refid="en.classpath"/>
<export artifactId="spring.war.file" destination="${dist}/${war.name}" />
</enunciate>
```
The problem is that my custom jar is being excluded from the WAR file. It is necessary to compile the enunciate annotated classes so the jar is obviously on the classpath at compile time but enunciate is failing to include it in the distribution. I have also noticed that several of the jars needed by enunciate are not being included in the WAR file.
Why are they being excluded and how do I fix it?
|
As it turns out one of the jars we're attempting to include has a dependency listed in it's Manifest file of a jar that Enunciate depends on (freemarker). Enunciate automatically excludes freemarker and at first glance it seems as though it automatically excludes anything that depends on freemarker as well. If we remove freemarker from the list of dependent jars in our code's manifest file it works just fine.
However; I've spoken with the main developer of Enunciate (Ryan Heaten) and he assures me this isn't what's happening. Including his response below:
> Really?!
>
> Wow. Interesting. I can't explain
> it; Enunciate doesn't look at what's
> in the Manifest in order to determine
> what to include in the war, so I'm
> kind of stumped here. It could also
> be some weird Ant behavior (not
> including that jar in the
> "en.classpath" reference for some
> reason).
>
> ~Ryan
|
I never used enunciate, but as a quick hack you can add the jars to the war:
```
<jar jarfile="${dist}/${war.name}" update="true">
<fileset dir="${lib}">
<include name="**/*.jar" />
</fileset>
</jar>
```
Note: you probably want to add the jars to the **WEB-INF/lib** directory, instead of the root directory.
I'm guessing that enunciate does the mininum to interfere with your own build process, since you know best what to put within your jar file.
|
Why do Jars get Excluded from Enunciate's Deployment?
|
[
"",
"java",
"ant",
"war",
"enunciate",
""
] |
I have an array of a few million numbers.
```
double* const data = new double[3600000];
```
I need to iterate through the array and find the range (the largest value in the array minus the smallest value). However, there is a catch. I only want to find the range where the smallest and largest values are within 1,000 samples of each other.
So I need to find the maximum of: range(data + 0, data + 1000), range(data + 1, data + 1001), range(data + 2, data + 1002), ...., range(data + 3599000, data + 3600000).
I hope that makes sense. Basically I could do it like above, but I'm looking for a more efficient algorithm if one exists. I think the above algorithm is O(n), but I feel that it's possible to optimize. An idea I'm playing with is to keep track of the most recent maximum and minimum and how far back they are, then only backtrack when necessary.
I'll be coding this in C++, but a nice algorithm in pseudo code would be just fine. Also, if this number I'm trying to find has a name, I'd love to know what it is.
Thanks.
|
The algorithm you describe is really O(N), but I think the constant is too high. Another solution which looks reasonable is to use an O(N\*log(N)) algorithm the following way:
```
* create sorted container (std::multiset) of first 1000 numbers
* in loop (j=1, j<(3600000-1000); ++j)
- calculate range
 - remove from the set the number which is now irrelevant (i.e. at index *j - 1* of the array)
 - add to the set the new relevant number (i.e. at index *j + 1000 - 1* of the array)
```
I believe it should be faster, because the constant is much lower.
|
This type of question belongs to a branch of algorithms called streaming algorithms. It is the study of problems which require not only an O(n) solution but also need to work in a single pass over the data. The data is fed to the algorithm as a stream; the algorithm can't save all of it, and once seen it is lost forever. The algorithm still needs to produce some answer about the data, such as the minimum or the median.
Specifically you are looking for a maximum (or more commonly in literature - minimum) in a window over a stream.
[Here's a presentation](http://www.cs.tau.ac.il/~matias/courses/Seminar_Spring03/Estimating%20Rarity%20and%20Similarity%20over%20Data%20stream%20Windows.ppt) on an [article](http://www-cs-students.stanford.edu/~datar/papers/esa02-streams.ps) that mentions this problem as a sub problem of what they are trying to get at. it might give you some ideas.
I think the outline of the solution is something like this - maintain a window over the stream where at each step one element is inserted into the window and one is removed from the other side (a sliding window). The items you actually keep in memory aren't all 1000 items in the window, but a selected set of representatives which are good candidates for being the minimum (or maximum).
Read the article. It's a bit complex, but after 2-3 reads you can get the hang of it.
|
Algorithm for finding the maximum difference in an array of numbers
|
[
"",
"c++",
"algorithm",
"statistics",
""
] |
Is it possible to find out who called a stored procedure?
For example, say I get an error in `proc3`. From within that proc I want to know if it was called by `proc1` or `proc2`.
|
I would use an extra input parameter, to specify the source, if this is important for your logic.
This will also make it easier to port your database to another platform, since you don't depend on some obscure platform dependent function.
|
There is no nice automatic way to do this (alas). So it really depends on how much you are prepared to (re)write your procs in order to be able to do this.
If you have a logging mechanism, you might be able to read the log and work out who called you.
For example, if you implement logging by inserting to a table, for example:
```
CREATE TABLE Log
(timestamp datetime,
spid int,
procname varchar(255),
message varchar(255) )
... text of proc ...
INSERT INTO Log
SELECT getdate(), @@spid, @currentproc, 'doing something'
-- you have to define @currentproc in each proc
-- get name of caller
SELECT @caller = procname
FROM Log
WHERE spid = @@spid
AND timestamp = (SELECT max(timestamp)
FROM Log
WHERE timestamp < getdate()
AND procname != @currentproc )
```
This wouldn't work for recursive calls, but perhaps someone can fix that?
|
Find out the calling stored procedure in SQL Server
|
[
"",
"sql",
"sql-server",
"database",
"stored-procedures",
"sql-server-2005",
""
] |
I can understand the use for one level of namespaces. But 3 levels of namespaces. Looks insane. Is there any practical use for that? Or is it just a misconception?
|
Hierarchical namespaces do have a use in that they allow progressively more refined definitions. Certainly a single provider may produce two classes with the same name. Often the first level is occupied by the company name, the second specifies the product, the third (and possibly more) my provide the domain.
There are also other uses of namespace segregation. One popular situation is placing the base classes for a factory pattern in its own namespace and then derived factories in their own namespaces by provider. E.g. `System.Data`, `System.Data.SqlClient` and `System.Data.OleDbClient`.
|
Obviously it's a matter of opinion. But it really boils down to organization. For example, I have a project which has a plugin api that has functions/objects which look something like this:
```
plugins::v1::function
```
When 2.0 is rolled out they will be put into the v2 sub-namespace. I plan to only deprecate but never remove v1 members which should nicely support backwards compatibility in the future. This is just one example of "sane" usage. I imagine some people will differ, but like I said, it's a matter of opinion.
|
When do we have any practical use for hierarchical namespaces in C++?
|
[
"",
"c++",
"namespaces",
""
] |
If my code throws an exception, sometimes - not every time - JSF presents a blank page. I'm using Facelets for layout.
A similar error was reported in this [Sun forum post](http://forums.sun.com/thread.jspa?messageID=10237827), but without answers.
Anyone else with the same problem, or have a solution?
;)
Due to some requests, here follow more details:
web.xml
```
<error-page>
<exception-type>com.company.ApplicationResourceException</exception-type>
<location>/error.faces</location>
</error-page>
```
And the stack related to jsf is printed after the real exception:
```
####<Sep 23, 2008 5:42:55 PM GMT-03:00> <Error> <HTTP> <comp141> <AdminServer> <[ACTIVE] ExecuteThread: '3' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1222202575662> <BEA-101107> <[weblogic.servlet.internal.WebAppServletContext@6d46b9 - appName: 'ControlPanelEAR', name: 'ControlPanelWeb', context-path: '/Web'] Problem occurred while serving the error page.
javax.servlet.ServletException: viewId:/error.xhtml - View /error.xhtml could not be restored.
at javax.faces.webapp.FacesServlet.service(FacesServlet.java:249)
at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:226)
at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:124)
at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:283)
at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:175)
at weblogic.servlet.internal.RequestDispatcherImpl.invokeServlet(RequestDispatcherImpl.java:525)
at weblogic.servlet.internal.RequestDispatcherImpl.forward(RequestDispatcherImpl.java:261)
at weblogic.servlet.internal.ForwardAction.run(ForwardAction.java:22)
at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
at weblogic.security.service.SecurityManager.runAs(Unknown Source)
at weblogic.servlet.internal.ErrorManager.handleException(ErrorManager.java:144)
at weblogic.servlet.internal.WebAppServletContext.handleThrowableFromInvocation(WebAppServletContext.java:2201)
at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2053)
at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1366)
at weblogic.work.ExecuteThread.execute(ExecuteThread.java:200)
at weblogic.work.ExecuteThread.run(ExecuteThread.java:172)
javax.faces.application.ViewExpiredException: viewId:/error.xhtml - View /error.xhtml could not be restored.
at com.sun.faces.lifecycle.RestoreViewPhase.execute(RestoreViewPhase.java:180)
at com.sun.faces.lifecycle.LifecycleImpl.phase(LifecycleImpl.java:248)
at com.sun.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:117)
at javax.faces.webapp.FacesServlet.service(FacesServlet.java:244)
at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:226)
at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:124)
```
I'm using JSF version `Mojarra 1.2_09`, `richfaces 3.2.1.GA` and `facelets 1.1.13`.
Hoping for some help :(
|
I think this largely depends on your JSF implementation. I've heard that some will render blank screens.
The one we were using would throw error 500's with a stack trace. Other times our buttons wouldn't work without any error for the user. This was all during our development phase.
But the best advice I can give you is to catch the exceptions and log them in an error log so you have the stack trace for debugging later. For messages that we couldn't do anything about like a backend failing we would just add a fatal message to the FacesContext that gets displayed on the screen and log the stack trace.
|
I fixed a similar problem in my `error.jsp` page today. This won't be exactly the same as yours, but it might point someone in the right direction if they're having a similar problem. My problem seemed to be coming from two different sources.
First, the `message` exception property wasn't being set in some of the servlets that were throwing exceptions caught by the error page. The servlets were catching and rethrowing exceptions using the [`ServletException(Throwable rootCause)`](http://download.oracle.com/javaee/5/api/javax/servlet/ServletException.html#ServletException%28java.lang.Throwable%29) constructor.
Second, in the error page itself, the original author had used scriptlet code to parse the message using `String.split(message, ";");` Since the message was `null` this failed. I was getting a `NullPointerException` in my error log, along with the message "Problem occurred while serving the error page."
These two things combined to give me a blank page at the URL of the servlet that was throwing the original exception. I fixed my problem by providing my own error message when I rethrow exceptions in my servlets using the [`ServletException(String message, Throwable rootCause)`](http://download.oracle.com/javaee/5/api/javax/servlet/ServletException.html#ServletException%28java.lang.String,%20java.lang.Throwable%29) constructor, so the error message will no longer be `null`. I also rewrote the `error.jsp` page using EL instead of scriptlet code, but that wasn't strictly necessary.
|
Blank Page in JSF
|
[
"",
"java",
"jsf",
"facelets",
""
] |
In java, I could do this with the 'final' keyword. I don't see 'final' in C#. Is there a substitute?
|
You're looking for the `sealed` keyword. It does exactly what the `final` keyword in Java does. Attempts to inherit will result in a compilation error.
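A minimal sketch (the class names are illustrative):

```
public sealed class SealedClass
{
    public int Value { get; set; }
}

// This will not compile:
// public class Derived : SealedClass { }  // error CS0509: cannot derive from sealed type
```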
|
Also be aware that "I don't think anybody will ever need to inherit from this" is not a good reason to use "sealed". Unless you've got a specific need to ensure that a particular implementation is used, leave the class unsealed.
|
How can I keep a class from being inherited in C#?
|
[
"",
"c#",
".net",
"inheritance",
""
] |
I have got a C function in a static library, let's call it A, with the following interface:
```
int A(unsigned int a, unsigned long long b, unsigned int *y, unsigned char *z);
```
This function will change the value of y and z (this is for sure). I use it from within a dynamic C++ library, using extern "C".
Now, here is what stuns me:
* y is properly set, z is not changed. What I exactly mean is that if both are initialized with a (pointed) value of 666, the value pointed by y will have changed after the call but not the value pointed by z (still 666).
* when called from a C binary, this function works seamlessly (value
pointed by z is modified).
* if I create a dummy C library with a function having the same prototype, and I use it from within my dynamic C++ library, it works very well. If I re-use the same variables to call A(..), I get the same result as before, z is not changed.
I think that the above points show that it is not a stupid mistake with the declaration of my variables.
I am clearly stuck, and I can't change the C library. Do you have any clue on what can be the problem ?
I was thinking about a problem on the C/C++ interface, for instance the way a char* is interpreted.
Edit : I finally found out what was the problem. See below my answer.
|
First of all, I am very grateful to everyone for your help.
Thanks to the numerous ideas and clues you gave me, I have been able to finally sort out this problem. Your advices helped me to question what I took for granted.
Short answer to my problem: my C++ library used an old version of the C library. This old version was missing the 4th argument. As a consequence, the 4th argument was obviously never changed.
I am a bit ashamed now that I have realised this was the problem. However, I was misled by the fact that my code was compiling fine. This was due to the fact that the C++ library compiled against the correct version of the C lib, but at runtime it used the old version statically linked with another library that I was using.
```
C++ Lib (M) ---> dyn C++ lib (N) ---> C lib (P) v.1.0
|
------> C lib (P) v.1.1
```
(N) is a dynamic library which is statically linked with (P) version 1.0.
The compiler accepted the call from (M) to the function with 4 arguments because I linked against (P) version 1.1, but at runtime it used the old version of (P).
Feel free to edit this answer or the question or to ask me to do so.
|
It looks like a difference between the way your C library and your C++ compiler deal with *long long*. My guess is that the C library is probably pre-C89 and actually treating the 64-bit *long long* as a 32-bit *long*. Your C++ library handles it correctly and places 64 bits on the call stack, hence corrupting y and z. Maybe try calling the function as `int A(unsigned int a, unsigned long b, unsigned int *y, unsigned char *z)` and see what you get.
Just a thought.
|
Why is my parameter passed by reference not modified within the function?
|
[
"",
"c++",
"c",
"parameters",
"linker",
"pass-by-reference",
""
] |
You can use ftplib for full FTP support in Python. However the preferred way of getting a directory listing is:
```
# File: ftplib-example-1.py
import ftplib
ftp = ftplib.FTP("www.python.org")
ftp.login("anonymous", "ftplib-example-1")
data = []
ftp.dir(data.append)
ftp.quit()
for line in data:
print "-", line
```
Which yields:
```
$ python ftplib-example-1.py
- total 34
- drwxrwxr-x 11 root 4127 512 Sep 14 14:18 .
- drwxrwxr-x 11 root 4127 512 Sep 14 14:18 ..
- drwxrwxr-x 2 root 4127 512 Sep 13 15:18 RCS
- lrwxrwxrwx 1 root bin 11 Jun 29 14:34 README -> welcome.msg
- drwxr-xr-x 3 root wheel 512 May 19 1998 bin
- drwxr-sr-x 3 root 1400 512 Jun 9 1997 dev
- drwxrwxr-- 2 root 4127 512 Feb 8 1998 dup
- drwxr-xr-x 3 root wheel 512 May 19 1998 etc
...
```
I guess the idea is to parse the results to get the directory listing. However this listing is directly dependent on the FTP server's way of formatting the list. It would be very messy to write code for this having to anticipate all the different ways FTP servers might format this list.
Is there a portable way to get an array filled with the directory listing?
(The array should only have the folder names.)
|
Try using [`ftp.nlst(dir)`](https://docs.python.org/3/library/ftplib.html#ftplib.FTP.nlst).
However, note that if the folder is empty, it might throw an error:
```
files = []
try:
files = ftp.nlst()
except ftplib.error_perm as resp:
if str(resp) == "550 No files found":
print "No files in this directory"
else:
raise
for f in files:
print f
```
|
The reliable/standardized way to parse FTP directory listing is by using MLSD command, which by now should be supported by all recent/decent FTP servers.
```
import ftplib
f = ftplib.FTP()
f.connect("localhost")
f.login()
ls = []
f.retrlines('MLSD', ls.append)
for entry in ls:
print entry
```
The code above will print:
```
modify=20110723201710;perm=el;size=4096;type=dir;unique=807g4e5a5; tests
modify=20111206092323;perm=el;size=4096;type=dir;unique=807g1008e0; .xchat2
modify=20111022125631;perm=el;size=4096;type=dir;unique=807g10001a; .gconfd
modify=20110808185618;perm=el;size=4096;type=dir;unique=807g160f9a; .skychart
...
```
Starting from python 3.3, ftplib will provide a specific method to do this:
* <http://bugs.python.org/issue11072>
* <http://hg.python.org/cpython/file/67053b135ed9/Lib/ftplib.py#l535>
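If you are stuck on an older Python, the facts in each MLSD line are easy to split apart yourself; a rough sketch (the helper name is mine, not part of ftplib):

```python
def parse_mlsd_line(line):
    """Split one MLSD response line into (facts_dict, filename)."""
    # facts are ';'-separated and terminated by '; ' before the name
    facts_part, _, name = line.partition("; ")
    facts = {}
    for fact in facts_part.split(";"):
        if "=" in fact:
            key, _, value = fact.partition("=")
            facts[key.lower()] = value
    return facts, name

facts, name = parse_mlsd_line(
    "modify=20110723201710;perm=el;size=4096;type=dir;unique=807g4e5a5; tests")
# name == "tests", facts["type"] == "dir", facts["size"] == "4096"
```

Filtering on `facts["type"] == "dir"` then gives you just the folder names the question asked for.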
|
Using Python's ftplib to get a directory listing, portably
|
[
"",
"python",
"ftp",
"portability",
""
] |
Java SE 6 (64 bit only) is now on OS X and that is a good thing. As I understand it since Eclipse is still Carbon and thus 32 bit, it cannot be used for 1.6 on Leopard, only 1.5.
Does anyone know if NetBeans 6.x can be used with Java SE 6 on Leopard utilizing its JVM?
|
Yes, you should be able to.
A number of blogs have reported running NetBeans on 1.6, as well as the problems they had with earlier versions of NB. The NB issue tracker also has a number of fixed bugs that affected 1.6 on Mac OS.
If you have trouble getting it to run, you might also try [the Netbeans forum](http://www.nabble.com/Netbeans.org-f2602.html).
|
Eclipse works with Java 1.6, kinda. Eclipse runs using the 1.5 VM, but it can compile code for 1.6 using the 1.6 Java compiler. I have used NetBeans for 1.6 development and it seems alright.
|
Does anyone know if NetBeans 6.x can be used with Java SE 6 on Leopard?
|
[
"",
"java",
"eclipse",
"netbeans",
"osx-leopard",
""
] |
This PHP code...
```
207 if (getenv(HTTP_X_FORWARDED_FOR)) {
208 $ip = getenv('HTTP_X_FORWARD_FOR');
209 $host = gethostbyaddr($ip);
210 } else {
211 $ip = getenv('REMOTE_ADDR');
212 $host = gethostbyaddr($ip);
213 }
```
Throws this warning...
> **Warning:** gethostbyaddr()
> [function.gethostbyaddr]: Address is
> not in a.b.c.d form in **C:\inetpub...\filename.php** on line **212**
It seems that *$ip* is blank.
|
on php.net it says the following:
> The function `getenv` does not work if your Server API is ASAPI (IIS).
> So, try to don't use `getenv('REMOTE_ADDR')`, but `$_SERVER["REMOTE_ADDR"]`.
Did you maybe try to do it with `$_SERVER`?
|
Why don't you use
```
$_SERVER['REMOTE_ADDR']
```
and
```
$_SERVER['HTTP_X_FORWARDED_FOR']
```
|
Why is getenv('REMOTE_ADDR') giving me a blank IP address?
|
[
"",
"php",
"ip-address",
""
] |
Given a string `"filename.conf"`, how to I verify the extension part?
I need a cross platform solution.
|
You have to make sure you take care of file names with more than one dot.
Example: `c:\.directoryname\file.name.with.too.many.dots.ext` would not be handled correctly by `strchr` or `find`.
My favorite would be the [boost filesystem library](http://www.boost.org/doc/libs/release/libs/filesystem/index.html), which has an `extension(path)` function.
|
Is this too simple of a solution?
```
#include <iostream>
#include <string>
int main()
{
std::string fn = "filename.conf";
if(fn.substr(fn.find_last_of(".") + 1) == "conf") {
std::cout << "Yes..." << std::endl;
} else {
std::cout << "No..." << std::endl;
}
}
```
|
How to get file extension from string in C++
|
[
"",
"c++",
"string",
"filenames",
"file-extension",
""
] |
I'm developing a web service whose methods will be called from a "dynamic banner" that will show a sort of queue of messages read from a sql server table.
The banner will have a heavy pressure in the home pages of high traffic sites; every time the banner will be loaded, it will call my web service, in order to obtain the new queue of messages.
Now: I don't want that all this traffic drives queries to the database every time the banner is loaded, so I'm thinking to use the asp.net cache (i.e. HttpRuntime.Cache[cacheKey]) to limit database accesses; I will try to have a cache refresh every minute or so.
Obviously I'll try to keep the messages as small as possible, to limit traffic.
But maybe there are other ways to deal with such a scenario; for example I could write the last version of the queue on the file system, and have the web service access that file; or something mixing the two approaches...
The solution is c# web service, asp.net 3.5, sql server 2000.
Any hint? Other approaches?
Thanks
Andrea
|
It depends on a lot of things:
* If there is little change in the data (think backend with "publish" button or daily batches), then I would definitely use static files (updated via push from the backend). We used this solution on a couple of large sites and worked really well.
* If the data is small enough, memory caching (i.e. Http Cache) is viable, but beware of locking issues and also beware that Http Cache **will not** work that well under heavy memory load, because items can be expired early if the framework needs memory. I have been bitten by it before! With the above caveats, Http Cache works quite well.
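As a rough sketch of the cache-aside pattern with the locking caveat above (the method, type, and key names are illustrative, not from the question):

```
private static readonly object CacheLock = new object();

public static List<Message> GetMessages()
{
    var messages = (List<Message>)HttpRuntime.Cache["banner-messages"];
    if (messages == null)
    {
        lock (CacheLock)
        {
            // re-check after acquiring the lock, in case another thread filled it
            messages = (List<Message>)HttpRuntime.Cache["banner-messages"];
            if (messages == null)
            {
                messages = LoadMessagesFromDatabase(); // the expensive call
                HttpRuntime.Cache.Insert(
                    "banner-messages", messages, null,
                    DateTime.UtcNow.AddMinutes(1),     // absolute expiry
                    System.Web.Caching.Cache.NoSlidingExpiration);
            }
        }
    }
    return messages;
}
```

The double-check inside the lock stops a stampede of threads all hitting SQL Server at once when the entry expires.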
|
I think caching is a reasonable approach and you can take it a step further and add a SQL Dependency to it.
[ASP.NET Caching: SQL Cache Dependency With SQL Server 2000](http://www.c-sharpcorner.com/UploadFile/mosessaur/sqlcachedependency01292006135138PM/sqlcachedependency.aspx?ArticleID=3caa7d32-dce0-44dc-8769-77f8448e76bc)
|
Hints for a high-traffic web service, c# asp.net sql2000
|
[
"",
"c#",
"web-services",
"high-load",
""
] |
What is the difference between `#include` and `#import` in C++?
|
`#import` is a Microsoft-specific thing, apparently for COM or .NET stuff only.
`#include` is a standard C/C++ preprocessor statement, used for including header (or occasionally other source code) files in your source code file.
|
**Import in VC++:** `#import` is for type libraries or .tlbs (COM stuff).
The content of the type library is converted into C++ classes, mostly describing the COM interfaces for you automatically, and then it is included into your file.
The `#import` directive was introduced by Microsoft as an extension to the C++ language. You can read about it at [this MSDN article](http://msdn.microsoft.com/en-us/library/8etzzkb6.aspx).
The `#import` directive is also used with .NET / CLI stuff.
**Import in gcc:**
The import in gcc is different from the import in VC++. It is a simple way to include a header at most once. (In VC++ and GCC you can do this via `#pragma once` as well.)
The `#import` directive was officially undeprecated by the gcc team in version 3.4, and works fine 99% of the time in all previous versions of gcc which support it.
**Include:** `#include` is for mostly header files, but to prepend the content to your current file. `#include` is part of the C++ standard. You can read about it at [this MSDN article](http://msdn.microsoft.com/en-us/library/36k2cdd4.aspx).
|
C++ #include and #import difference
|
[
"",
"c++",
"visual-c++",
"gcc",
"import",
"include",
""
] |
I've written a little script to iterate across files in folders to count lines of code.
The heart of the script is this function to count lines of whitespace, comments, and code. (Note that for the moment it is tailored to C# and doesn't know about multi-line comments).
It just doesn't look very nice to me - has anyone got a cleaner version?
```
// from list of strings return tuple with count of (whitespace, comments, code)
let loc (arr:List<string>) =
let innerloc (whitesp, comment, code) (l:string) =
let s = l.Trim([|' ';'\t'|]) // remove leading whitespace
match s with
| "" -> (whitesp + 1, comment, code) //blank lines
| "{" -> (whitesp + 1, comment, code) //opening blocks
| "}" -> (whitesp + 1, comment, code) //closing blocks
| _ when s.StartsWith("#") -> (whitesp + 1, comment, code) //regions
| _ when s.StartsWith("//") -> (whitesp, comment + 1, code) //comments
| _ -> (whitesp, comment, code + 1)
List.fold_left innerloc (0,0,0) arr
```
|
I think what you have is fine, but here's some variety to mix it up. (This solution repeats your problem of ignoring trailing whitespace.)
```
type Line =
| Whitespace = 0
| Comment = 1
| Code = 2
let Classify (l:string) =
let s = l.TrimStart([|' ';'\t'|])
match s with
| "" | "{" | "}" -> Line.Whitespace
| _ when s.StartsWith("#") -> Line.Whitespace
| _ when s.StartsWith("//") -> Line.Comment
| _ -> Line.Code
let Loc (arr:list<_>) =
let sums = Array.create 3 0
arr
|> List.iter (fun line ->
let i = Classify line |> int
sums.[i] <- sums.[i] + 1)
sums
```
"Classify" as a separate entity might be useful in another context.
|
A better site for this might be [refactormycode](http://refactormycode.com/) - it's tailored exactly for these questions.
|
Can you improve this 'lines of code algorithm' in F#?
|
[
"",
"c#",
"algorithm",
"f#",
""
] |
When using IF statements in Python, you have to do the following to make the "cascade" work correctly.
```
if job == "mechanic" or job == "tech":
print "awesome"
elif job == "tool" or job == "rock":
print "dolt"
```
Is there a way to make Python accept multiple values when checking for "equals to"? For example,
```
if job == "mechanic" or "tech":
print "awesome"
elif job == "tool" or "rock":
print "dolt"
```
|
```
if job in ("mechanic", "tech"):
print "awesome"
elif job in ("tool", "rock"):
print "dolt"
```
The values in parentheses are a tuple. The `in` operator checks to see whether the left-hand side item occurs somewhere inside the right-hand tuple.
Note that when Python searches a tuple or list using the `in` operator, it does a linear search. If you have a large number of items on the right hand side, this could be a performance bottleneck. A larger-scale way of doing this would be to use a [`frozenset`](http://docs.python.org/lib/types-set.html):
```
AwesomeJobs = frozenset(["mechanic", "tech", ... lots of others ])
def func():
if job in AwesomeJobs:
print "awesome"
```
The use of `frozenset` over `set` is preferred if the list of awesome jobs does not need to be changed during the operation of your program.
|
You can use in:
```
if job in ["mechanic", "tech"]:
print "awesome"
```
When checking against a large number of items, it may also be worth building a set of the items to check, as membership tests will be faster. Eg.
```
AwesomeJobs = set(["mechanic", "tech", ... lots of others ])
...
def func():
if job in AwesomeJobs:
print "awesome"
```
|
Using OR comparisons with IF statements
|
[
"",
"python",
"comparison",
"boolean",
""
] |
Right now, I have two Eclipse projects - they both use Maven 2 for all their jar-dependency goodness.
Inside Eclipse, I have project Foo included in project Bar's build path, so that I can use Foo's classes from project Bar. This works really well in Eclipse land, but when I try:
```
mvn compile
```
inside Bar's directory, it fails because Maven doesn't know about the project-to-project relationship in Eclipse's build path.
If I were using Ant, I would just use it to do something silly like copy foo.jar into project Bar's classpath, but as far as I can tell, things are done a lot less hackishly in Maven-land.
I'm wondering if there's a standard workaround for this type of problem - it seems like it would be fairly common, and I'm just missing something basic about how Maven works.
|
Maybe you are referencing the other project via Eclipse configure-> build path only. This works as long as you use Eclipse to build your project.
Try running `mvn install` in project Foo first (in order to put Foo in your local Maven repository), and then add Foo as a dependency in Bar's pom.xml.
That should work!
|
Check out the [m2eclipse](https://www.eclipse.org/m2e/) plugin. It will automatically and dynamically update the project build path when you change the pom. There is no need for running `mvn eclipse:eclipse`.
The plugin will also detect if any dependency is in the same workspace and add that project to the build path.
Ideally, if you use m2eclipse, you would never change the project build path manually. You would always edit pom.xml instead, which is the proper way to do it.
As has been previously stated, Maven will not know about the Eclipse project build path. You do need to add all dependencies to the pom, and you need to make sure all dependencies are built and installed first by running `mvn install`.
If you want to build both projects with a single command then you might find [project aggregation](http://maven.apache.org/guides/introduction/introduction-to-the-pom.html#Project_Aggregation) interesting.
|
Adding referenced Eclipse projects to Maven dependencies
|
[
"",
"java",
"eclipse",
"maven-2",
""
] |
When working with large and/or many Javascript and CSS files, what's the best way to reduce the file sizes?
|
In addition to using server side compression, using intelligent coding is the best way to keep bandwidth costs low. You can always use tools like [Dean Edward's Javascript Packer](http://dean.edwards.name/download/#packer), but for CSS, take the time to learn [CSS Shorthand](http://www.webcredible.co.uk/user-friendly-resources/css/css-shorthand-properties.shtml "CSS Shorthand Examples"). E.g. use:
```
background: #fff url(image.gif) no-repeat top left;
```
...instead of:
```
background-color: #fff;
background-image: url(image.gif);
background-repeat: no-repeat;
background-position: top left;
```
Also, use the cascading nature of CSS. For example, if you know that your site will use one font-family, define that for all elements that are in the body tag like this:
```
body{font-family:arial;}
```
One other thing that can help is including your CSS and JavaScript as external files instead of inline or in the head of each page. That way your server only has to serve them once; after that, the browser loads them from its cache.
### Including Javascript
```
<script type="text/javascript" src="/scripts/loginChecker.js"></script>
```
### Including CSS
```
<link rel="stylesheet" href="/css/myStyle.css" type="text/css" media="All" />
```
|
[Minify](http://developer.yahoo.net/blog/archives/2007/07/high_performanc_8.html) seems to be one of the easiest ways to shrink Javascript.
Turning on zip at the web server level can also help.
|
What is the best method to reduce the size of my Javascript and CSS files?
|
[
"",
"javascript",
"css",
"jscompress",
""
] |
I have created my own Tree implementation for [various reasons](https://stackoverflow.com/questions/144642/tree-directed-acyclic-graph-implementation) and have come up with two classes, a 'base' class that is a generic tree node that is chock full of logic and another class that extends that one which is more specialised.
In my base class certain methods involve instantiating new tree nodes (e.g. adding children). These instantiations are inside logic (in a nested loop, say), which makes the logic hard to separate from the instantiation.
So, if I don't override these instantiations in the specific class the wrong type of node will be created. However, I don't *want* to override those methods because they also contain shared logic that shouldn't be duplicated!
The problem can be boiled down to this:
```
public class Foo {
public String value() { return "foo"; }
public Foo doStuff() {
// Logic logic logic..
return new Foo();
}
}
class Bar extends Foo {
public String value() { return "bar"; }
}
new Bar().doStuff().value(); // returns 'foo', we want 'bar'
```
The first thing that popped into my head would have a 'create hook' that extending classes could override:
```
public Foo createFooHook(/* required parameters */) {
return new Foo();
}
```
Now. while it was a fine first thought, there is a stench coming off that code something awful. There is something very... *wrong* about it.
It's like cooking while naked-- it feels dangerous and *unnecessary*.
So, **how would you deal with this situation?**
|
So, after getting my copy of [Design Patterns](https://rads.stackoverflow.com/amzn/click/com/0201633612) and opening it for what I'm fairly sure is the first time ever I discovered what I want.
It's called the [Factory Method](http://en.wikipedia.org/wiki/Factory_method) and it's mostly a perfect fit. It's still a bit ugly because my super class (`Foo` in the above example) is not abstract which means subclasses are not forced to implement the hook.
That can be fixed with some refactoring though, and I'll end up with something to the effect of:
```
abstract class AbstractFoo {
public String value() { return "Foo"; }
public AbstractFoo doStuff() {
// Logic logic logic
return hook();
}
protected abstract AbstractFoo hook();
}
class Foo extends AbstractFoo {
protected AbstractFoo hook() { return new Foo(); }
}
class Bar extends AbstractFoo {
public String value() { return "Bar"; }
protected AbstractFoo hook() { return new Bar(); }
}
new Bar().doStuff().value(); // Returns 'Bar'!
```
|
In addition to the Factory pattern, I'd take a look at the Composite pattern - it tends to lend itself well to working with a Factory in tree-based situations.
[Composite Design Pattern](http://en.wikipedia.org/wiki/Composite_pattern)
|
Is it OK to have object instantiation 'hooks' in base classes?
|
[
"",
"java",
"inheritance",
""
] |
## Summary
Is it possible for webapps deployed on linux + tomcat5.5 to use/see all `/usr/share/java/` jars automatically?
## Details
I'm packaging my java webapp for Ubuntu (yet the question is related to any linux-based distro) and going to make it depend on tomcat.
I'm going to put context descriptor (an xml file) to `/usr/share/tomcat5.5/conf/Catalina/localhost/` to make my app deployed.
Having my web dir here: `/usr/share/<appname>/web`, how can I enable my app to use java jar libs installed in the system (`/usr/share/java`)?
I can't just symlink `/usr/share/java -> <webdir>/WEB-INF/lib`, since my custom jars need to be placed in the `lib` dir.
## Bad Solution
The solution I've found so far is to symlink each required jar to `<webdir>/WEB-INF/lib/`.
This is not so good, because I have to symlink a lot of jars, and even worse, symlink all the jars my direct dependency libs require (and so on). In case a direct dependency changes its list of required jars, I'll have to maintain those symlinks.
|
You should not do that. Java EE applications are supposed to be self-sufficient and not depend on any resources outside the deployment package other than those provided by the container. So you should take the libs you need from that directory and add them to your war or ear package.
This guarantees that your application will behave the same wherever you deploy it and you will not be subject to unexpected changes in the versions of the libs in /usr/share/java....
|
According to [the Tomcat classloading documentation](http://tomcat.apache.org/tomcat-5.5-doc/class-loader-howto.html), you need to put any shared libs that should be available to all Tomcat apps in the $CATALINA\_BASE/shared/lib library -- so one way to do what you're looking to do is to move your libraries from /usr/share/java to $CATALINA\_BASE/shared/lib.
But if I'm not misunderstanding that same documentation, Tomcat also makes the system-wide CLASSPATH variable's contents available to the classloader at launch, so if your directory -- /usr/share/java -- were included in the system-wide CLASSPATH variable, then that should work too. I've never done this, though; Tomcat's method of making the contents of $CATALINA\_BASE/shared/lib available Tomcat-wide has always served me perfectly.
|
How to add /usr/share/java libs to webapp's classpath?
|
[
"",
"java",
"linux",
"tomcat",
"web-applications",
"classpath",
""
] |
Is the sorting algorithm used by .NET's `Array.Sort()` method a [stable](http://en.wikipedia.org/wiki/Stable_sort#Classification) algorithm?
|
From [MSDN](http://msdn.microsoft.com/en-us/library/6tf1f0bc.aspx):
> This implementation performs an unstable sort; that is, if two elements are equal, their order might not be preserved. In contrast, a stable sort preserves the order of elements that are equal.
The sort uses introspective sort. (Quicksort in version 4.0 and earlier of the .NET framework).
If you need a stable sort, you can use [Enumerable.OrderBy](http://msdn.microsoft.com/en-us/library/bb534966.aspx).
|
Adding to [Rasmus Faber's answer](https://stackoverflow.com/questions/148074/net-is-arraysort-stable-sort#148081)β¦
Sorting in LINQ, via [Enumerable.OrderBy](http://msdn.microsoft.com/en-us/library/system.linq.enumerable.orderby.aspx) and [Enumerable.ThenBy](http://msdn.microsoft.com/en-us/library/system.linq.enumerable.thenby.aspx), is a stable sort implementation, which can be used as an alternative to [Array.Sort](http://msdn.microsoft.com/en-us/library/system.array.sort.aspx). From [Enumerable.OrderBy documentation over at MSDN](http://msdn.microsoft.com/en-us/library/system.linq.enumerable.orderby.aspx):
> This method performs a stable sort;
> that is, if the keys of two elements
> are equal, the order of the elements
> is preserved. In contrast, an unstable
> sort does not preserve the order of
> elements that have the same key.
Also, any unstable sort implementation, like that of `Array.Sort`, can be stabilized by using the position of the elements in the source sequence or array as an additional key to serve as a tie-breaker. Below is one such implementation, as a generic extension method on any single-dimensional array and which turns `Array.Sort` into a stable sort:
```
using System;
using System.Collections.Generic;
public static class ArrayExtensions {
public static void StableSort<T>(this T[] values, Comparison<T> comparison) {
var keys = new KeyValuePair<int, T>[values.Length];
for (var i = 0; i < values.Length; i++)
keys[i] = new KeyValuePair<int, T>(i, values[i]);
Array.Sort(keys, values, new StabilizingComparer<T>(comparison));
}
private sealed class StabilizingComparer<T> : IComparer<KeyValuePair<int, T>>
{
private readonly Comparison<T> _comparison;
public StabilizingComparer(Comparison<T> comparison) {
_comparison = comparison;
}
public int Compare(KeyValuePair<int, T> x,
KeyValuePair<int, T> y) {
var result = _comparison(x.Value, y.Value);
return result != 0 ? result : x.Key.CompareTo(y.Key);
}
}
}
```
Below is a sample program using `StableSort` from above:
```
static class Program
{
static void Main()
{
var unsorted = new[] {
new Person { BirthYear = 1948, Name = "Cat Stevens" },
new Person { BirthYear = 1955, Name = "Kevin Costner" },
new Person { BirthYear = 1952, Name = "Vladimir Putin" },
new Person { BirthYear = 1955, Name = "Bill Gates" },
new Person { BirthYear = 1948, Name = "Kathy Bates" },
new Person { BirthYear = 1956, Name = "David Copperfield" },
new Person { BirthYear = 1948, Name = "Jean Reno" },
};
Array.ForEach(unsorted, Console.WriteLine);
Console.WriteLine();
var unstable = (Person[]) unsorted.Clone();
Array.Sort(unstable, (x, y) => x.BirthYear.CompareTo(y.BirthYear));
Array.ForEach(unstable, Console.WriteLine);
Console.WriteLine();
var stable = (Person[]) unsorted.Clone();
stable.StableSort((x, y) => x.BirthYear.CompareTo(y.BirthYear));
Array.ForEach(stable, Console.WriteLine);
}
}
sealed class Person
{
public int BirthYear { get; set; }
public string Name { get; set; }
public override string ToString() {
return string.Format(
"{{ BirthYear = {0}, Name = {1} }}",
BirthYear, Name);
}
}
```
Below is the output from the sample program above (running on a machine with Windows Vista SP1 and .NET Framework 3.5 SP1 installed):
```
{ BirthYear = 1948, Name = Cat Stevens }
{ BirthYear = 1955, Name = Kevin Costner }
{ BirthYear = 1952, Name = Vladimir Putin }
{ BirthYear = 1955, Name = Bill Gates }
{ BirthYear = 1948, Name = Kathy Bates }
{ BirthYear = 1956, Name = David Copperfield }
{ BirthYear = 1948, Name = Jean Reno }

{ BirthYear = 1948, Name = Jean Reno }
{ BirthYear = 1948, Name = Kathy Bates }
{ BirthYear = 1948, Name = Cat Stevens }
{ BirthYear = 1952, Name = Vladimir Putin }
{ BirthYear = 1955, Name = Bill Gates }
{ BirthYear = 1955, Name = Kevin Costner }
{ BirthYear = 1956, Name = David Copperfield }

{ BirthYear = 1948, Name = Cat Stevens }
{ BirthYear = 1948, Name = Kathy Bates }
{ BirthYear = 1948, Name = Jean Reno }
{ BirthYear = 1952, Name = Vladimir Putin }
{ BirthYear = 1955, Name = Kevin Costner }
{ BirthYear = 1955, Name = Bill Gates }
{ BirthYear = 1956, Name = David Copperfield }
```
|
Is the sorting algorithm used by .NET's `Array.Sort()` method a stable algorithm?
|
[
"",
"c#",
".net",
""
] |
I have a method which should be executed in an exclusive fashion. Basically, it's a multi threaded application where the method is invoked periodically by a timer, but which could also be manually triggered by a user action.
Let's take an example :
1. The timer elapses, so the method is
called. The task could take a few
seconds.
2. Right after, the user clicks on some
button, which should trigger the
same task : BAM. It does nothing
since the method is already running.
I used the following solution :
```
public void DoRecurentJob()
{
if(!Monitor.TryEnter(this.lockObject))
{
return;
}
try
{
// Do work
}
finally
{
Monitor.Exit(this.lockObject);
}
}
```
Where `lockObject` is declared like that:
```
private readonly object lockObject = new object();
```
**Edit** : There will be only one instance of the object which holds this method, so I updated the lock object to be non-static.
Is there a better way to do that ? Or maybe this one is just wrong for any reason ?
|
This looks reasonable if you are just interested in not having the method run in parallel. There's nothing to stop it from running twice in quick succession, though - say the user pushed the button half a microsecond after the timer executed the Monitor.Exit().
And having the lock object as readonly also makes sense.
|
You could also use `Mutex` or `Semaphore` if you want it to work cross process (with a slight performance penalty), or if you need to set any other number than one of allowed simultaneous threads running your piece of code.
There are other signalling constructs that would work, but your example looks like it does the trick, and in a simple and straightforward manner.
|
How to make a method exclusive in a multithreaded context?
|
[
"",
"c#",
".net",
"multithreading",
"locking",
""
] |
Which one should I use?
```
catch (_com_error e)
```
or
```
catch (_com_error& e)
```
|
The second. Here is my attempt at quoting Sutter
"Throw by value, catch by reference"
> Learn to `catch` properly: Throw exceptions by value (not pointer) and
> catch them by reference (usually to `const`). This is the combination
> that meshes best with exception semantics. When rethrowing the same
> exception, prefer just `throw;` to `throw e;`.
Here's the full [Item 73. Throw by value, catch by reference.](http://www.informit.com/content/images/0321113586/items/sutter_item73.pdf)
---
The reason to avoid catching exceptions by value is that it implicitly makes a copy of the exception. If the exception is of a subclass, the copy is sliced down to the base class, and information about it will be lost.
```
try { throw MyException("error"); }
catch (Exception e) {
  /* Implies: Exception e(MyException("error")); */
  /* e is an instance of Exception, but not MyException */
}
```
Catching by reference avoids this issue by not copying the exception.
```
try { throw MyException("error"); }
catch (Exception& e) {
  /* Implies: Exception& e = MyException("error"); */
  /* e is an instance of MyException */
}
```
|
Personally, I would go for the third option:
```
catch (const _com_error& e)
```
|
Which is correct? catch (_com_error e) or catch (_com_error& e)?
|
[
"",
"c++",
"exception",
"com",
""
] |
I'm sure this is a subject that's on most python developers' minds considering that Python 3 is coming out soon. Some questions to get us going in the right direction:
1. Will you have a python 2 and python 3 version to be maintained concurrently or will you simply have a python 3 version once it's finished?
2. Have you already started or plan on starting soon? Or do you plan on waiting until the final version comes out to get into full swing?
|
Here's the general plan for Twisted. I was originally going to blog this, but then I thought: why blog about it when I could get *points* for it?
1. **Wait until somebody cares.**
Right now, nobody has Python 3. We're not going to spend a bunch of effort until at least one actual user has come forth and said "I need Python 3.0 support", and has a good reason for it aside from the fact that 3.0 looks shiny.
2. **Wait until our dependencies have migrated.**
A large system like Twisted has a number of dependencies. For starters, ours include:
* [Zope Interface](http://www.zope.org/Products/%5AopeInterface)
* [PyCrypto](http://www.dlitz.net/software/pycrypto/)
* [PyOpenSSL](https://launchpad.net/pyopenssl/)
* [pywin32](http://sourceforge.net/projects/pywin32/)
* [PyGTK](http://www.pygtk.org/) (though this dependency is sadly very light right now, by the time migration rolls around, I hope Twisted will have more GUI tools)
* [pyasn1](http://pyasn1.sourceforge.net/)
* [PyPAM](http://www.pangalactic.org/PyPAM/)
* [gmpy](http://gmpy.sourceforge.net/)
Some of these projects have their own array of dependencies so we'll have to wait for those as well.
3. **Wait until somebody cares enough *to help*.**
There are, charitably, 5 people who work on Twisted - and I say "charitably" because that's counting me, and I haven't committed in months. We have [over 1000 open tickets](http://twistedmatrix.com/trac/report/1) right now, and it would be nice to actually fix some of those (fix bugs, add features, and generally make Twisted a better product in its own right) before spending time on getting it ported over to a substantially new version of the language.
This potentially includes [sponsors](http://twistedmatrix.com/trac/wiki/TwistedSoftwareFoundation) caring enough to pay for us to do it, but I hope that there will be an influx of volunteers who care about 3.0 support and want to help move the community forward.
4. **Follow Guido's advice.**
This means ***[we will not change our API incompatibly](http://www.artima.com/weblogs/viewpost.jsp?thread=227041)***, and we will follow the [transitional development guidelines](http://www.artima.com/weblogs/viewpost.jsp?thread=208549) that Guido posted last year. That starts with having unit tests, and running [the 2to3 conversion tool](http://docs.python.org/library/2to3.html) over the Twisted codebase.
5. **Report bugs against, and file patches for, the 2to3 tool**.
When we get to the point where we're actually using it, I anticipate that there will be a lot of problems with running `2to3` in the future. Running it over Twisted right now takes an extremely long time and (last I checked, which was quite a while ago) can't parse a few of the files in the Twisted repository, so the resulting output won't import. I think there will have to be a fair amount of success stories from small projects and a lot of hammering on the tool before it will actually work for us.
However, the Python development team has been very helpful in responding to our bug reports, and early responses to these problems have been encouraging, so I expect that all of these issues will be fixed in time.
6. **Maintain 2.x compatibility for several years.**
Right now, Twisted supports python 2.3 to 2.5. Currently, we're working on 2.6 support (which we'll obviously have to finish before 3.0!). Our plan is to revise our supported versions of Python based on the long-term supported versions of [Ubuntu](http://en.wikipedia.org/wiki/Ubuntu) - release 8.04, which includes Python 2.5, will be supported until 2013. According to Guido's advice we will need to drop support for 2.5 in order to support 3.0, but I am hoping we can find a way around that (we are pretty creative with version-compatibility hacks).
So, we are planning to support Python 2.5 until at least 2013. In two years, Ubuntu will release another long-term supported version of Ubuntu: if they still exist, and stay on schedule, that will be 10.04. Personally I am guessing that this will ship with Python 2.x, perhaps python 2.8, as `/usr/bin/python`, because there is a huge amount of Python software packaged with the distribution and it will take a long time to update it all. So, five years from *then*, in 2015, we can start looking at dropping 2.x support.
During this period, we will continue to follow Guido's advice about migration: running 2to3 over our 2.x codebase, and modifying the 2.x codebase to keep its tests passing in both versions.
The upshot of this is that Python 3.x will not be a *source* language for Twisted until well after my 35th birthday: it will be a target runtime (and a set of guidelines and restrictions) for my python 2.x code. I expect to be writing programs in Python 2.x for the next ten years or so.
So, that's the plan. I'm hoping that it ends up looking laughably conservative in a year or so; that the 3.x transition is easy as pie, and everyone rapidly upgrades. Other things could happen, too: the 2.x and 3.x branches could converge, someone might end up writing a `3to2`, or another runtime (PyPy comes to mind) might allow for running 2.x and 3.x code in the same process directly, making our conversion process easier.
For the time being, however, we're assuming that, for many years, we will have people with large codebases they're maintaining (or people writing new code who want to use *other* libraries which have not yet been migrated) who still want new features and bug fixes in Twisted. Pretty soon I expect we will also have bleeding-edge users that want to use Twisted on python 3. I'd like to provide all of those people with a positive experience for as long as possible.
|
The Django project uses the library [`six`](http://packages.python.org/six) to maintain a codebase that works simultaneously on Python 2 *and* Python 3 ([blog post](https://www.djangoproject.com/weblog/2012/aug/19/experimental-python-3-support/)).
`six` does this by providing a compatibility layer that intelligently redirects imports and functions to their respective locations (as well as unifying other incompatible changes).
### Obvious advantages:
* No need for separate branches for Python 2 and Python 3
* No conversion tools, such as 2to3.
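The redirection trick that `six` relies on can be sketched in a few lines of plain Python. This is a simplified illustration of the idea, not six's actual API; the helper names below are invented:

```python
import sys

# Simplified illustration of the idea behind six: detect the interpreter
# once, then expose version-specific names under a single alias.
PY3 = sys.version_info[0] >= 3

if PY3:
    string_types = (str,)            # Python 3: text is just str
    from io import StringIO
else:
    string_types = (basestring,)     # Python 2: str and unicode
    from StringIO import StringIO


def ensure_text(value, encoding="utf-8"):
    """Return `value` as text on either interpreter (hypothetical helper)."""
    if isinstance(value, bytes):
        return value.decode(encoding)
    if isinstance(value, string_types):
        return value
    raise TypeError("not a string: %r" % (value,))
```

Code written against such a shim runs unchanged on both interpreters, which is exactly how Django avoided maintaining two branches.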
|
How are you planning on handling the migration to Python 3?
|
[
"",
"python",
"migration",
"python-3.x",
"python-2to3",
""
] |
I'm working with a webservice that offers almost duplicated code across two namesspaces. Lets say for example PigFeet and HorseFeet, both namespaces contain a Feet class and other code that works with the Feet class requires it to be part of that same namespace.
Right now In my code I'm forced to do something like this:
```
if( _animalType == AnimalType.Pig )
{
//namespace is pigfeet
PigFeet.Feet feet = new Feet();
feet.WashFeet();
}
if( _animalType == AnimalType.Horse )
{
//namespace is horsefeet
HorseFeet.Feet feet = new Feet();
feet.WashFeet();
}
```
This is leaving me with lots of duplicated code. Is there a way to choose a namespace more dynamically?
|
The namespace isn't the problem - it's simply that the 2 classes aren't related, so there's no inheritance chain that you can use for polymorphism.
You'll need to look at something like [duck typing](http://www.deftflux.net/blog/page/Duck-Typing-Project.aspx), or an [adapter pattern](http://en.wikipedia.org/wiki/Adapter_pattern), or building your own proxy classes to get yourself to a common interface. For small numbers of implementations, I've gotten away with just building a single adapter class that delegates to whatever non-null instance it has:
```
interface IFeet {
void WashFeet();
}
class FeetAdapter : IFeet {
private PigFeet.Feet _pigFeet;
private HorseFeet.Feet _horseFeet;
private FeetAdapter(PigFeet.Feet pigFeet) {
_pigFeet = pigFeet;
}
private FeetAdapter(HorseFeet.Feet horseFeet) {
_horseFeet = horseFeet;
}
public void WashFeet() {
if (_pigFeet != null) {
_pigFeet.WashFeet();
} else {
_horseFeet.WashFeet();
}
}
public static FeetAdapter Create(AnimalType animalType) {
switch (animalType) {
case AnimalType.Pig:
return new FeetAdapter(new PigFeet.Feet());
case AnimalType.Horse:
return new FeetAdapter(new HorseFeet.Feet());
}
}
}
```
For larger cases, you'd be better off with a separate PigFeetAdapter and HorseFeetAdapter that both implement IFeet, along with a FeetAdapterFactory to create them - but the concept is the same as I show above.
|
In your namespace imports you can assign an [alias](http://msdn.microsoft.com/en-us/library/aa664765(VS.71).aspx) to a specific namespace or member.
```
// aliases named so they don't collide with the namespace names themselves
using Pig = PigFeet.Feet;
using Horse = HorseFeet.Feet;
//now your sample code should look something like
if( _animalType == AnimalType.Pig )
{
//namespace is pigfeet
Pig feet = new Pig();
feet.WashFeet();
}
if( _animalType == AnimalType.Horse )
{
//namespace is horsefeet
Horse feet = new Horse();
feet.WashFeet();
}
```
|
Same class, different namespaces, a way to simplify?
|
[
"",
"c#",
"namespaces",
""
] |
I have two tables, **Book** and **Tag**, and books are tagged using the association table **BookTag**. I want to create a report that contains a list of books, and for each book a list of the book's tags. Tag IDs will suffice, tag names are not necessary.
Example:
```
Book table:
Book ID | Book Name
28 | Dracula
BookTag table:
Book ID | Tag ID
28 | 101
28 | 102
```
In my report, I'd like to show that book #28 has the tags 101 and 102:
```
Book ID | Book Name | Tags
28 | Dracula | 101, 102
```
Is there a way to do this in-line, without having to resort to functions or stored procedures? I am using SQL Server 2005.
*Please note that the same question has already been asked in [Combine multiple results in a subquery into a single comma-separated value](https://stackoverflow.com/questions/111341/combine-multiple-results-in-a-subquery-into-a-single-comma-separated-value), but the solution involves creating a function. I am asking if there is a way to solve this without having to create a function or a stored procedure.*
|
You can almost do it. The only problem I haven't resolved is the comma delimiter. Here is a query on a similar structure that separates the tags using a space.
```
SELECT em.Code,
(SELECT et.Name + ' ' AS 'data()'
FROM tblEmployeeTag et
JOIN tblEmployeeTagAssignment eta ON et.Id = eta.EmployeeTag_Id AND eta.Employee_Id = em.id
FOR XML PATH('') ) AS Tags
FROM tblEmployee em
```
Edit:
Here is the complete version using your tables and using a comma delimiter:
```
SELECT bk.Id AS BookId,
bk.Name AS BookName,
REPLACE((SELECT LTRIM(STR(bt.TagId)) + ', ' AS 'data()'
FROM BookTag bt
WHERE bt.BookId = bk.Id
FOR XML PATH('') ) + 'x', ', x','') AS Tags
FROM Book bk
```
I suppose for future reference I should explain a bit about what is going on. The 'data()' column name is a special value that is related to the FOR XML PATH statement. It causes the XML document to be rendered as if you did an .InnerText on the root node of the resulting XML.
The REPLACE statement is a trick to remove the trailing comma. By appending a unique character (I randomly chose 'x') to the end of the tag list I can search for comma-space-character and replace it with an empty string. That allows me to chop off just the last comma. This assumes that you are never going to have that sequence of characters in your tags.
|
Unless you know what the tag ids/names are and can hard code them into your query, I'm afraid the answer is no.
|
Query showing list of associations in many-to-many relationship
|
[
"",
"sql",
"sql-server",
"database",
""
] |
I am trying to leverage ORM given the following requirements:
1) Using .NET Framework (latest Framework is okay)
2) Must be able to use Sybase, Oracle, MSSQL interchangeably
3) The schema is mostly static, BUT there are dynamic parts.
I am somewhat familiar with SubSonic and NHibernate, but not deeply.
I get the nagging feeling that the ORM can do what I want, but I don't know how to leverage it at the moment.
SubSonic probably isn't optimal, since it doesn't currently support Sybase, and writing my own provider for it is beyond my resources and ability right now.
For #3 (above), there are a couple of metadata tables, which describe tables which the vendors can "staple on" to the existing database.
Let's call these *MetaTables*, and *MetaFields*.
There is a base static schema, which the ORM (NHibernate ATM) handles nicely.
**However**, a vendor can add a table to the database (physically) as long as they also add the data to the metadata tables to describe their structure.
What I'd really like is for me to be able to somehow "feed" the ORM with that metadata (in a way that it understands) and have it at that point allow me to manipulate the data.
My primary **goal** is to reduce the amount of generic SQL statement building I have to do on these dynamic tables.
I'd also like to avoid having to worry about the differences in SQL being sent to Sybase,Oracle, or MSSQL.
My primary **problem** is that I don't have a way to let ORM know about the dynamic tables until runtime, when I'll have access to the metadata
**Edit**: An example of the usage might be like the one [outlined here](http://subsonicproject.com/querying/using-the-original-query-tool/):
IDataReader rdr=new Query("DynamicTable1").WHERE("ArbitraryId",2).ExecuteReader();
(However, it doesn't look like SubSonic will work, as there is no Sybase provider (see above)
|
Acording to this blog you can in fact use [NHibernate with dynamic mapping](http://ayende.com/Blog/archive/2008/05/01/Dynamic-Mapping-with-NHibernate.aspx). It takes a bit of tweaking though...
|
We did some of this using NHibernate; however, we stopped the project since it didn't provide us with the ROI we wanted. We ended up writing our own ORM/SQL layer, which worked very well (since I no longer work there, I'm guessing it still works).
Our system used an open source project to generate the SQL (don't remember the name any more) and we built all our queries in our own Xml based language (Query Markup Language - QML). We could then build an xmlDocument with selects, wheres, groups etc. and then send that to the SqlEngine that would turn it into a Sql statement and execute it. We discussed, but never implemented, a cache in all of this. That would've allowed us to cache the Qmls for frequently used queries.
|
How can I leverage an ORM for a database whose schema is unknown until runtime?
|
[
"",
"c#",
".net",
"database",
"orm",
"dynamic",
""
] |
I am working on a web application, where I transfer data from the server to the browser in XML.
Since I'm danish, I quickly run into problems with the characters `æøå`.
I know that in html, I use the `"&aelig;&oslash;&aring;"` for `æøΓ₯`.
however, as soon as the chars pass through JavaScript, I get black boxes with `"?"` in them when using `æøå`, and `"æøå"` is printed as is.
I've made sure to set it to utf-8, but that isn't helping much.
Ideally, I want it to work with any special characters (naturally).
The example that isn't working is included below:
```
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>Untitled Document</title>
<script type="text/javascript" charset="utf-8">
alert("æøå");
alert("æøΓ₯");
</script>
</head>
<body>
</body>
</html>
```
What am I doing wrong?
---
Ok, thanks to Grapefrukt's answer, I got it working.
I actually needed it for data coming from a MySQL server. Since saving the files in UTF-8 encoding only solves the problem for static content, I figured I'd include the solution for strings from a MySQL server, pulled out using PHP:
`utf8_encode($MyStringHere)`
|
Just specifying UTF-8 in the header is not enough. I'd bet you haven't saved your file as UTF-8. Any reasonably advanced text editor will have this option. Try that and I'm sure it'll work!
|
If you ever can't set the response encoding, you can use `\u` escape sequence in the JavaScript string literal to display these characters.
```
alert("\u00e6\u00f8\u00e5")
```
|
international characters in Javascript
|
[
"",
"javascript",
"unicode",
"internationalization",
"character-encoding",
""
] |
I've heard that `SELECT *` is generally bad practice to use when writing SQL commands because it is more efficient to `SELECT` columns you specifically need.
If I need to `SELECT` every column in a table, should I use
```
SELECT * FROM TABLE
```
or
```
SELECT column1, colum2, column3, etc. FROM TABLE
```
Does the efficiency really matter in this case? I'd think `SELECT *` would be more optimal internally if you really need all of the data, but I'm saying this with no real understanding of database.
I'm curious to know what the best practice is in this case.
**UPDATE:** I probably should specify that the only situation where I would really *want* to do a `SELECT *` is when I'm selecting data from one table where I know all columns will always need to be retrieved, even when new columns are added.
Given the responses I've seen however, this still seems like a bad idea, and `SELECT *` should never be used for a lot more technical reasons than I ever thought about.
|
One reason that selecting specific columns is better is that it raises the probability that SQL Server can access the data from indexes rather than querying the table data.
Here's a post I wrote about it: [**The real reason select queries are bad: index coverage**](http://weblogs.asp.net/jgalloway/archive/2007/07/18/the-real-reason-select-queries-are-bad-index-coverage.aspx)
It's also less fragile to change, since any code that consumes the data will be getting the same data structure regardless of changes you make to the table schema in the future.
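To make the covering-index point concrete, here is a hypothetical sketch (the table, column, and index names are invented for illustration):

```sql
-- Hypothetical schema for illustration only.
CREATE INDEX IX_Orders_CustomerId_OrderDate
    ON Orders (CustomerId, OrderDate);

-- This query can be answered from the index alone ("covered"):
SELECT CustomerId, OrderDate
FROM Orders
WHERE CustomerId = 42;

-- SELECT * forces a lookup into the full table data for each matching row,
-- because no index can cover every column:
SELECT *
FROM Orders
WHERE CustomerId = 42;
```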
|
Given **your** specification that you **are** selecting all columns, there is little difference **at this time**. Realize, however, that database schemas do change. If you use **`SELECT *`** you are going to get any new columns added to the table, even though in all likelihood, your code is not prepared to use or present that new data. This means that you are exposing your system to unexpected performance and functionality changes.
You may be willing to dismiss this as a minor cost, but realize that columns that you don't need still must be:
1. Read from database
2. Sent across the network
3. Marshalled into your process
4. (for ADO-type technologies) Saved in a data-table in-memory
5. Ignored and discarded / garbage-collected
Item #1 has many hidden costs including eliminating some potential covering index, causing data-page loads (and server cache thrashing), incurring row / page / table locks that might be otherwise avoided.
Balance this against the potential savings of specifying the columns versus an **`*`** and the only potential savings are:
1. Programmer doesn't need to revisit the SQL to add columns
2. The network-transport of the SQL is smaller / faster
3. SQL Server query parse / validation time
4. SQL Server query plan cache
For item 1, the reality is that you're going to add / change code to use any new column you might add anyway, so it is a wash.
For item 2, the difference is rarely enough to push you into a different packet-size or number of network packets. If you get to the point where SQL statement transmission time is the predominant issue, you probably need to reduce the rate of statements first.
For item 3, there is NO savings as the expansion of the **`*`** has to happen anyway, which means consulting the table(s) schema anyway. Realistically, listing the columns will incur the same cost because they have to be validated against the schema. In other words this is a complete wash.
For item 4, when you specify specific columns, your query plan cache could get larger but **only** if you are dealing with different sets of columns (which is not what you've specified). In this case, you **do want** different cache entries because you want different plans as needed.
So, this all comes down, because of the way you specified the question, to the issue of resiliency in the face of eventual schema modifications. If you're burning this schema into ROM (it happens), then an **`*`** is perfectly acceptable.
However, my general guideline is that you should only select the columns you need, which means that **sometimes** it will look like you are asking for all of them, but DBAs and schema evolution mean that some new columns might appear that could greatly affect the query.
My advice is that you should **ALWAYS SELECT specific columns**. Remember that you get good at what you do over and over, so just get in the habit of doing it right.
If you are wondering why a schema might change without code changing, think in terms of audit logging, effective/expiration dates and other similar things that get added systemically by DBAs for compliance issues. Another source of underhanded changes is denormalizations for performance elsewhere in the system or user-defined fields.
|
Which is faster/best? SELECT * or SELECT column1, colum2, column3, etc
|
[
"",
"sql",
"database",
"performance",
"select",
"sqlperformance",
""
] |
There are several different methods for converting floating point numbers to Integers in JavaScript. My question is what method gives the best performance, is most compatible, or is considered the best practice?
Here are a few methods that I know of:
```
var a = 2.5;
window.parseInt(a); // 2
Math.floor(a); // 2
a | 0; // 2
```
I'm sure there are others out there. Suggestions?
|
According to [**this website**](http://www.jibbering.com/faq/faq_notes/type_convert.html#tcParseIn):
> parseInt is occasionally used as a means of turning a floating point number into an integer. It is very ill suited to that task because if its argument is of numeric type it will first be converted into a string and then parsed as a number...
>
> For rounding numbers to integers one of Math.round, Math.ceil and Math.floor are preferable...
|
Apparently double bitwise-not is the fastest way to floor a number:
```
var x = 2.5;
console.log(~~x); // 2
```
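One caveat worth noting: `~~x`, like `x | 0`, truncates toward zero, while `Math.floor` rounds toward negative infinity, so they disagree on negative inputs, and the bitwise forms wrap at 32 bits. A quick sketch:

```javascript
// Truncation (toward zero) vs. flooring (toward negative infinity):
// the bitwise tricks only behave like Math.floor for non-negative values.
const x = -2.5;

console.log(Math.floor(x)); // -3  (rounds down)
console.log(~~x);           // -2  (truncates)
console.log(x | 0);         // -2  (truncates)
console.log(Math.trunc(x)); // -2  (explicit truncation, added in ES2015)

// The bitwise forms also wrap at 32 bits:
console.log(~~4294967296.5); // 0, not 4294967296
```

So the "fastest" option is only a drop-in replacement for `Math.floor` when the input is known to be non-negative and within 32-bit range.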
Used to be an article here, getting a 404 now though: <http://james.padolsey.com/javascript/double-bitwise-not/>
~~Google has it cached: `http://74.125.155.132/search?q=cache:wpZnhsbJGt0J:james.padolsey.com/javascript/double-bitwise-not/+double+bitwise+not&cd=1&hl=en&ct=clnk&gl=us`~~
But the Wayback Machine saves the day! <http://web.archive.org/web/20100422040551/http://james.padolsey.com/javascript/double-bitwise-not/>
|
What is the best method to convert floating point to an integer in JavaScript?
|
[
"",
"javascript",
"variables",
"types",
""
] |
What is the best way to generate a Unique ID from two (or more) short ints in C++? I am trying to uniquely identify vertices in a graph. The vertices contain two to four short ints as data, and ideally the ID would be some kind of a hash of them. Prefer portability and uniqueness over speed or ease.
There are a lot of great answers here, I will be trying them all tonight to see what fits my problem the best. A few more words on what I'm doing.
The graph is a collection of samples from an audio file. I use the graph as a Markov Chain to generate a new audio file from the old file. Since each vertex stores a few samples and points to another sample, and the samples are all short ints, it seemed natural to generate an ID from the data. Combining them into a long long sounds good, but maybe something as simple as just a 0 1 2 3 `generateID` is all I need. Not sure how much space is necessary to guarantee uniqueness: if each vertex stores two 16-bit samples, there are 2^32 possible combinations, correct? And if each vertex stores four samples, there are 2^64 possible combinations?
Library and platform specific solutions not really relevant to this question. I don't want anyone else who might compile my program to have to download additional libraries or change the code to suit their OS.
|
A simple solution is to use a 64 bit integer where the lower 16 bits is the first vertex coordinate, next 16 bits is the second, and so on. This will be unique for all your vertices, though not very compact.
So here's some half-assed code to do this. Hopefully I got the casts right.
```
uint64_t generateId( uint16_t v1, uint16_t v2, uint16_t v3, uint16_t v4)
{
uint64_t id;
id = v1 | (((uint64_t)v2) << 16) | (((uint64_t)v3) << 32) | (((uint64_t)v4) << 48);
return id;
}
```
Optionally this could be done with a union (great idea from Leon Timmermans, see comment). Very clean this way:
```
struct vertex
{
uint16_t v1;
uint16_t v2;
uint16_t v3;
uint16_t v4;
};
union vertexWithId
{
vertex v;
uint64_t id;
};
int main()
{
vertexWithId vWithId;
// Setup your vertices
vWithId.v.v1 = 2;
vWithId.v.v2 = 5;
// Your id is automatically setup for you!
std::cout << "Id is " << vWithId.id << std::endl;
return 0;
}
```
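For readers who want to sanity-check the bit-packing idea outside C++, here is a minimal Python sketch of the same scheme (the names `generate_id` and `split_id` are invented for illustration; they mirror the C++ `generateId` above):

```python
def generate_id(v1, v2, v3, v4):
    """Pack four 16-bit sample values into one 64-bit ID."""
    for v in (v1, v2, v3, v4):
        assert 0 <= v <= 0xFFFF, "each field must fit in 16 bits"
    return v1 | (v2 << 16) | (v3 << 32) | (v4 << 48)

def split_id(packed):
    """Recover the four 16-bit fields from a packed 64-bit ID."""
    return tuple((packed >> shift) & 0xFFFF for shift in (0, 16, 32, 48))

# Distinct inputs always yield distinct IDs, and the packing is reversible.
print(split_id(generate_id(2, 5, 0, 0)))  # (2, 5, 0, 0)
```

Because the packing is a bijection, uniqueness is guaranteed as long as the vertex data itself is unique.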
|
Sometimes the simplest things works best.
Can you just add an id field to the Vertex object and assign it a number in order of construction?
```
static int sNextId = 0;
int getNextId() { return ++sNextId; }
```
|
Generating a Unique ID in c++
|
[
"",
"c++",
"hash",
""
] |
I have a method that periodically (e.g. once in every 10 secs) try to connect to a server and read some data from it. The server might not be available all the time. If the server is not available the method throws an exception.
What would be the best way to implement a wrapper method that doesn't throw an exception except if the server wasn't available for at least one minute?
|
Keep track of when the last time you successfully reached the server was. If the server throws an exception, catch it and compare to the last time you reached the server. If that time is more than a minute, rethrow the exception.
|
In pseudocode.
```
//Create Timer
//Start Timer
bool connected = false;
while (!connected) {
    try {
        //Connect To DB
        connected = true;
    }
    catch (Exception ex) {
        if (more than 1 minute has passed)
            throw new Exception(ex);
    }
}
```
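The same idea can be expressed as a reusable wrapper. Below is a hedged Python sketch (the name `call_with_grace_period` and the injectable `clock` parameter are invented for illustration); it swallows failures until they have persisted past the grace period, then rethrows:

```python
import time

def call_with_grace_period(func, grace_seconds=60.0, clock=time.monotonic):
    """Wrap func so its exceptions are swallowed until failures have
    persisted for more than grace_seconds since the last success."""
    last_success = clock()

    def wrapper():
        nonlocal last_success
        try:
            result = func()
            last_success = clock()   # success: reset the grace window
            return result
        except Exception:
            if clock() - last_success > grace_seconds:
                raise                # down for too long: propagate
            return None              # still within the grace period
    return wrapper
```

Injecting the clock makes the wrapper testable without actually waiting a minute.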
|
Delaying an exception
|
[
"",
"java",
"exception",
""
] |
I am writing a C# program which captures signals from a external device, and sends keystrokes to another application. I am using SendKeys and it works fine.
SendKeys "presses" a key by holding and releasing it immediately. I would like to press a key and release it at will.
My question is: is there a way to send a "push" signal for a key, then a "release" signal after a certain amount of time?
I am not sure SendKeys is able to do this. Any clue ?
|
I don't think it's possible from .NET directly.
You could try using keybd\_event native call by p/invoking the function as described here: <http://pinvoke.net/default.aspx/user32.keybd_event>
The MSDN for keybd\_event is here: <http://msdn.microsoft.com/en-us/library/ms646304(VS.85).aspx>
Hope that helps!
|
The accepted answer uses `keybd_event` which is deprecated. The official API is now [`SendInput`](http://msdn.microsoft.com/en-us/library/ms646310%28VS.85%29.aspx). There's also a nice wrapper for it at <http://inputsimulator.codeplex.com>.
None of the above, however, fully caters to the "key holding" scenario. This is due to the fact that holding a key will generate multiple `WM_KEYDOWN` messages, followed by a single `WM_KEYUP` message upon release (you can check this with Spy++).
The frequency of the `WM_KEYDOWN` messages is dependent on hardware, BIOS settings and a couple of Windows settings: [`KeyboardDelay`](http://technet.microsoft.com/en-us/library/cc978658.aspx) and [`KeyboardSpeed`](http://technet.microsoft.com/en-us/library/cc978659.aspx). The latter are accessible from Windows Forms ([`SystemInformation.KeyboardDelay`](http://msdn.microsoft.com/en-us/library/system.windows.forms.systeminformation.keyboarddelay.aspx), [`SystemInformation.KeyboardSpeed`](http://msdn.microsoft.com/en-us/library/system.windows.forms.systeminformation.keyboardspeed.aspx)).
Using the aforementioned Input Simulator library, I've implemented a key holding method which mimics the actual behavior. It's `await/async` ready, and supports cancellation.
```
static Task SimulateKeyHold(VirtualKeyCode key, int holdDurationMs,
int repeatDelayMs, int repeatRateMs, CancellationToken token)
{
var tcs = new TaskCompletionSource<object>();
var ctr = new CancellationTokenRegistration();
var startCount = Environment.TickCount;
Timer timer = null;
timer = new Timer(s =>
{
lock (timer)
{
if (Environment.TickCount - startCount <= holdDurationMs)
InputSimulator.SimulateKeyDown(key);
else if (startCount != -1)
{
startCount = -1;
timer.Dispose();
ctr.Dispose();
InputSimulator.SimulateKeyUp(key);
tcs.TrySetResult(null);
}
}
});
timer.Change(repeatDelayMs, repeatRateMs);
if (token.CanBeCanceled)
ctr = token.Register(() =>
{
timer.Dispose();
tcs.TrySetCanceled();
});
return tcs.Task;
}
```
|
How to push a key and release it using C#?
|
[
"",
"c#",
"keyboard",
""
] |
I've created a web page that lets you input some information and then draws an image in a canvas element based on that info. I have it pretty much working the way I want except for the printing.
Is there a way to print out the canvas element or is creating a new window to draw in, the only way to do it?
Update:
The answer was so simple. I was thinking of a lot more complicated solution.
I wish I could pick more than 1 answer. I wasn't able to get the canvas to print when I used \* to disable display. The simplest solution was to just turn off the form that I was using for input, using form {display:none;} in the CSS inside an @media print{}. Thanks for the quick response.
```
@media print {
form {
display:none;
}
}
```
|
You could try something like this:
```
@media print {
* {
display:none;
}
#SOME-CANVAS-ID {
display:block;
}
}
```
I'm not sure if a canvas is block by default, but you could try something along the lines of that and see if it works. The idea is that it will hide everything (\*) for print media, except for some other arbitrary element as long as the rule's precedence is higher (which is why I used the ID selector).
Edit: If CSS3 (specifically the [negation pseudo-class](http://www.w3.org/TR/css3-selectors/#negation)) had more support, your rule could be as simple as this:
```
*:not(canvas) {
display:none;
}
```
However, this may cause the <html> and <body> tags to be hidden, effectively hiding your canvas as well...
|
I'm not 100% sure of the support, but you can use CSS and put an attribute in the `<link>` tag for `media="print"`. In this CSS file, just hide the elements you don't want to show while printing: `display:none;`
|
Is it possible to print just the canvas element?
|
[
"",
"javascript",
"printing",
"canvas",
""
] |
Unquestionably, I would choose to use the STL for most C++ programming projects. The question was presented to me recently however, "Are there any cases where you wouldn't use the STL?"...
The more I thought about it, the more I realized that perhaps there SHOULD be cases where I choose not to use the STL... For example, a really large, long-term project whose codebase is expected to last years... Perhaps a custom container solution that precisely fits the project's needs is worth the initial overhead? What do you think, are there any cases where you would choose NOT to use the STL?
|
Projects with strict memory requirements such as for embedded systems may not be suited for the STL, as it can be difficult to control and manage what's taken from and returned to the heap. As Evan mentioned, writing proper allocators can help with this, but if you're counting every byte used or concerned with memory fragmentation, it may be wiser to hand-roll a solution that's tailored for your specific problem, as the STL has been optimized for the most general usage.
You may also choose not to use STL for a particular case because more applicable containers exist that are not in the current standard, such as boost::array or boost::unordered\_map.
|
The main reasons not to use STL are that:
1. Your C++ implementation is old and has horrible template support.
2. You can't use dynamic memory allocation.
Both are very uncommon requirements in practice.
For a longterm project rolling your own containers that overlap in functionality with the STL is just going to increase maintenance and development costs.
|
To STL or !STL, that is the question
|
[
"",
"c++",
"stl",
"containers",
""
] |
I have an idea for how to solve this problem, but I wanted to know if there's something easier and more extensible to my problem.
The program I'm working on has two basic forms of data: images, and the information associated with those images. The information associated with the images has been previously stored in a JET database of extreme simplicity (four tables) which turned out to be both slow and incomplete in the stored fields. We're moving to a new implementation of data storage. Given the simplicity of the data structures involved, I was thinking that a database was overkill.
Each image will have information of its own (capture parameters), will be part of a group of images which are interrelated (taken in the same thirty minute period, say), and then part of a larger group altogether (taken of the same person). Right now, I'm storing people in a dictionary with a unique identifier. Each person then has a List of the different groups of pictures, and each picture group has a List of pictures. All of these classes are serializable, and I'm just serializing and deserializing the dictionary. Fairly straightforward stuff. Images are stored separately, so that the dictionary doesn't become astronomical in size.
The problem is: what happens when I need to add new information fields? Is there an easy way to setup these data structures to account for potential future revisions? In the past, the way I'd handle this in C was to create a serializable struct with lots of empty bytes (at least a k) for future extensibility, with one of the bytes in the struct indicating the version. Then, when the program read the struct, it would know which deserialization to use based on a massive switch statement (and old versions could read new data, because extraneous data would just go into fields which are ignored).
Does such a scheme exist in C#? Like, if I have a class that's a group of String and Int objects, and then I add another String object to the struct, how can I deserialize an object from disk, and then add the string to it? Do I need to resign myself to having multiple versions of the data classes, and a factory which takes a deserialization stream and handles deserialization based on some version information stored in a base class? Or is a class like Dictionary ideal for storing this kind of information, as it will deserialize all the fields on disk automatically, and if there are new fields added in, I can just catch exceptions and substitute in blank Strings and Ints for those values?
If I go with the dictionary approach, is there a speed hit associated with file read/writes as well as parameter retrieval times? I figure that if there's just fields in a class, then field retrieval is instant, but in a dictionary, there's some small overhead associated with that class.
Thanks!
|
My brain is fried at the moment, so I'm not sure I can advise for or against a database, but if you're looking for version-agnostic serialization, you'd be a fool to not at least check into [Protocol Buffers](http://code.google.com/apis/protocolbuffers/docs/overview.html).
Here's a quick list of implementations I know about for C#/.NET:
* [protobuf-net](http://code.google.com/p/protobuf-net/)
* [Proto#](http://code.google.com/p/protosharp/)
* [jskeet's dotnet-protobufs](http://github.com/jskeet/dotnet-protobufs/tree/master)
|
[Sqlite](http://www.sqlite.org/) is what you want. It's a fast, embeddable, single-file database that has bindings to most languages.
With regards to extensibility, you can store your models with default attributes, and then have a separate table for attribute extensions for future changes.
A year or two down the road, if the code is still in use, you'll be happy that 1)Other developers won't have to learn a customized code structure to maintain the code, 2) You can export, view, modify the data with standard database tools (there's an ODBC driver for sqlite files and various query tools), and 3) you'll be able to scale up to a database with minimal code changes.
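The "separate table for attribute extensions" idea can be sketched in a few lines with Python's built-in sqlite3 module (the table and column names here are invented for illustration, not part of the answer's design):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE image (id INTEGER PRIMARY KEY, path TEXT);
    -- Extension table: new fields become rows, so the schema
    -- never needs to change when requirements grow.
    CREATE TABLE image_attr (
        image_id INTEGER REFERENCES image(id),
        name TEXT, value TEXT,
        PRIMARY KEY (image_id, name)
    );
""")
conn.execute("INSERT INTO image (id, path) VALUES (1, 'face.png')")
conn.execute("INSERT INTO image_attr VALUES (1, 'exposure', '1/250')")
conn.execute("INSERT INTO image_attr VALUES (1, 'iso', '400')")

attrs = dict(conn.execute(
    "SELECT name, value FROM image_attr WHERE image_id = 1"))
print(attrs)  # {'exposure': '1/250', 'iso': '400'}
```

Adding a new capture parameter later is just another row in `image_attr`; no schema migration or versioned deserialization code is needed.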
|
Best (free) way to store data? How about updates to the file system?
|
[
"",
"c#",
"extensibility",
"data-storage",
""
] |
I haven't a clue about encryption at all. But I need it. How?
Say you have a system of nodes communicating with each other on a network via asynchronous messages. The nodes do not maintain session information about other nodes (this is a design restriction).
Say you want to make sure only your nodes can read the messages being sent. I believe encryption is the solution to that.
Since the nodes are not maintaining a session and communication must work in a stateless, connectionless fashion, I am guessing that asymmetric encryption is ruled out.
So here is what I would like to do:
* messages are sent as UDP datagrams
* each message contains a timestamp to make messages differ (counter replay attacks)
* each message is encrypted with a shared secret symmetric key and sent over the network
* other end can decrypt with shared secret symmetric key
Keys can obviously be compromised by compromising any single node. At the same time, in this scenario, access to any single compromised node reveals all interesting information anyway, so the key is not the weakest link.
What cypher should I use for this encryption? What key length?
I would prefer to use something supported by [ezPyCrypto](http://www.freenet.org.nz/ezPyCrypto/).
Assuming, as most point out, I go with AES. What modes should I be using?
I couldn't figure out how to do it with ezPyCrypto, [PyCrypto](http://www.dlitz.net/software/pycrypto/) seems to be hung on a moderator swap and Google's [keyczar](http://www.keyczar.org/) does not explain how to set this up - I fear if I don't just *get* it, then I run a risk of introducing insecurity. So barebones would be better. [This guy](http://www.josh-davis.org/pythonAES) claims to have a nice module for AES in python, but he also asserts that this is his first python project - although he is probably loads smarter than I, maybe he got tripped up?
**EDIT:** I moved the search for the python implementation to [another question](https://stackoverflow.com/questions/172486/what-pure-python-library-to-use-for-aes-256-encryption) to stop clobber...
|
Your first thought should be channel security - either SSL/TLS, or IPSec.
Admittedly, these both have a certain amount of setup overhead, IPSec more than SSL/TLS, especially when it comes to PKI etc. - but it more than pays for itself in simplicity of development, reliability, security, and more. Just make sure you're using strong cipher suites, as appropriate to the protocol.
If neither SSL/TLS or IPSec fits your scenario/environment, your next choice should be [AES](http://en.wikipedia.org/wiki/Advanced_Encryption_Standard) (aka Rijndael).
Use keys at least 256 bits long, if you want you can go longer.
Keys should be randomly generated, by a cryptographically secure random number generator (and not a simple rnd() call).
Set the cipher mode to [CBC](http://en.wikipedia.org/wiki/Cipher_block_chaining).
Use PKCS7 padding.
Generate a unique, crypto-random Initialization Vector (IV).
Don't forget to properly protect and manage your keys, and maybe consider periodic key rotations.
Depending on your data, you may want to also implement a keyed hash, to provide for message integrity - use [SHA-256](http://en.wikipedia.org/wiki/SHA-256) for hashing.
There are also rare situations where you may want to go with a stream cipher, but that's usually more complicated and I would recommend you avoid it your first time out.
Now, I'm not familiar with ezPyCrypto (or really Python in general), and can't really state that it supports all this; but everything here is pretty standard and recommended best practice. If your crypto library doesn't support it, I would suggest finding one that does ;-).
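The keyed-hash integrity check and timestamp-based replay protection mentioned above can be done with Python's standard library alone. A minimal sketch follows (the names `seal` and `open_sealed` are invented for illustration, and the AES encryption step itself is omitted since it needs a separate crypto library):

```python
import hashlib
import hmac
import os
import struct
import time

SHARED_KEY = os.urandom(32)  # in practice, distributed out of band

def seal(payload: bytes, key: bytes) -> bytes:
    """Prepend a timestamp and append an HMAC-SHA256 tag."""
    msg = struct.pack(">d", time.time()) + payload
    return msg + hmac.new(key, msg, hashlib.sha256).digest()

def open_sealed(blob: bytes, key: bytes, max_age=60.0) -> bytes:
    """Verify the tag and timestamp, then return the payload."""
    msg, tag = blob[:-32], blob[-32:]
    expected = hmac.new(key, msg, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("bad MAC: message tampered or wrong key")
    (ts,) = struct.unpack(">d", msg[:8])
    if abs(time.time() - ts) > max_age:
        raise ValueError("stale message (possible replay)")
    return msg[8:]

assert open_sealed(seal(b"hello", SHARED_KEY), SHARED_KEY) == b"hello"
```

Note that this only shows the integrity/replay half of the design; confidentiality still requires the AES-CBC layer described in the answer.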
|
> I haven't a clue about encryption at all. But I need it. How?
**DANGER!** If you don't know much about cryptography, don't try to implement it yourself. Cryptography is *hard to get right*. There are many, many different ways to break the security of a cryptographic system beyond actually cracking the key (which is usually very hard).
If you just slap a cipher on your streaming data, without careful key management and other understanding of the subtleties of cryptographic systems, you will likely open yourself up to all kinds of vulnerabilities. For example, the scheme you describe will be vulnerable to [man-in-the-middle attacks](http://wikipedia.org/wiki/Man-in-the-middle_attack) without some specific plan for key distribution among the nodes, and may be vulnerable to [chosen-plaintext](http://wikipedia.org/wiki/Chosen-plaintext_attack) and/or [known-plaintext attacks](http://wikipedia.org/wiki/Known-plaintext_attack) depending on how your distributed system communicates with the outside world, and the exact choice of cipher and [mode of operation](http://wikipedia.org/wiki/Block_cipher_mode).
So... you will have to read up on crypto in general before you can use it securely.
|
What symmetric cypher to use for encrypting messages?
|
[
"",
"python",
"security",
"encryption",
""
] |
I have a table that has redundant data and I'm trying to identify all rows that have duplicate sub-rows (for lack of a better word). By sub-rows I mean considering `COL1` and `COL2` only.
So let's say I have something like this:
```
COL1 COL2 COL3
---------------------
aa 111 blah_x
aa 111 blah_j
aa 112 blah_m
ab 111 blah_s
bb 112 blah_d
bb 112 blah_d
cc 112 blah_w
cc 113 blah_p
```
I need a SQL query that returns this:
```
COL1 COL2 COL3
---------------------
aa 111 blah_x
aa 111 blah_j
bb 112 blah_d
bb 112 blah_d
```
|
Does this work for you?
```
select t.* from table t
left join ( select col1, col2, count(*) as count from table group by col1, col2 ) c on t.col1=c.col1 and t.col2=c.col2
where c.count > 1
```
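The join above is easy to verify against the sample data from the question; a quick check using Python's built-in sqlite3 module (the table name `t` is arbitrary):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col1 TEXT, col2 INTEGER, col3 TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    ("aa", 111, "blah_x"), ("aa", 111, "blah_j"), ("aa", 112, "blah_m"),
    ("ab", 111, "blah_s"), ("bb", 112, "blah_d"), ("bb", 112, "blah_d"),
    ("cc", 112, "blah_w"), ("cc", 113, "blah_p"),
])
# Same shape as the answer: join each row to the per-(col1, col2) counts
# and keep only the rows whose sub-row occurs more than once.
rows = conn.execute("""
    SELECT t.* FROM t
    JOIN (SELECT col1, col2, COUNT(*) AS n
          FROM t GROUP BY col1, col2) c
      ON t.col1 = c.col1 AND t.col2 = c.col2
    WHERE c.n > 1
""").fetchall()
print(rows)  # the four duplicate sub-rows from the question
```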
|
With the data you have listed, your query is not possible. The data on rows 5 & 6 is not distinct within itself.
Assuming that your table is named 'quux', if you start with something like this:
```
SELECT a.COL1, a.COL2, a.COL3
FROM quux a, quux b
WHERE a.COL1 = b.COL1 AND a.COL2 = b.COL2 AND a.COL3 <> b.COL3
ORDER BY a.COL1, a.COL2
```
You'll end up with this answer:
```
COL1 COL2 COL3
---------------------
aa 111 blah_x
aa 111 blah_j
```
That's because rows 5 & 6 have the same values for COL3. Any query that returns both rows 5 & 6 will also return duplicates of ALL of the rows in this dataset.
On the other hand, if you have a primary key (ID), then you can use this query instead:
```
SELECT a.COL1, a.COL2, a.COL3
FROM quux a, quux b
WHERE a.COL1 = b.COL1 AND a.COL2 = b.COL2 AND a.ID <> b.ID
ORDER BY a.COL1, a.COL2
```
*[Edited to simplify the WHERE clause]*
And you'll get the results you want:
```
COL1 COL2 COL3
---------------------
aa 111 blah_x
aa 111 blah_j
bb 112 blah_d
bb 112 blah_d
```
I just tested this on SQL Server 2000, but you should see the same results on any modern SQL database.
**[blorgbeard](https://stackoverflow.com/users/369/blorgbeard) proved me [wrong](https://stackoverflow.com/questions/131014/whats-the-sql-query-to-list-all-rows-that-have-2-column-sub-rows-as-duplicates#131036) -- good for him!**
|
What's the SQL query to list all rows that have 2 column sub-rows as duplicates?
|
[
"",
"sql",
"database",
"platform-agnostic",
""
] |
I use both Ruby on Rails and Java. I really enjoy using migrations when I am working on a Rails project, so I am wondering: is there a migrations-like tool for Java? If there is no such tool, is it a good idea to use migrations as a tool to control a database used by a Java project?
|
I've used Hibernate's SchemaUpdate to perform the same function as migrations. It's actually easier than migrations because every time you start up your app, it examines the database structure and syncs it up with your mappings so there's no extra rake:db:migrate step and your app can never be out of sync with the database it's running against. Hibernate mapping files are no more complex than Rails migrations so even if you didn't use Hibernate in the app, you could take advantage of it. The downside is that it's not as flexible as far as rolling back, migrating down, running DML statements. As pointed out in the comments, it also doesn't drop tables or columns. I run a separate method to do those manually as part of the Hibernate initialization process.
I don't see why you couldn't use Rails migrations though - as long as you don't mind installing the stack (Ruby, Rake, Rails), you wouldn't have to touch your app.
|
For a **feature comparison** between
* Flyway
* Liquibase
* c5-db-migration
* dbdeploy
* mybatis
* MIGRATEdb
* migrate4j
* dbmaintain
* AutoPatch
have a look at <http://flywaydb.org>
This should be a good start for you and anyone else to **select the right tool for the job**
|
Migrations for Java
|
[
"",
"java",
"ruby-on-rails",
"migration",
""
] |
I have a number of stored procs which I would like to all run simultaneously on the server. Ideally all on the server without reliance on connections to an external client.
What options are there to launch all these and have them run simultaneously (I don't even need to wait until all the processes are done to do additional work)?
I have thought of:
* Launching multiple connections from a client, having each start the appropriate SP.
* Setting up jobs for each SP and starting the jobs from a SQL Server connection or SP.
* Using xp\_cmdshell to start additional runs equivalent to osql or whatever
* SSIS - I need to see if the package can be dynamically written to handle more SPs, because I'm not sure how much access my clients are going to get to production
In the job and cmdshell cases, I'm probably going to run into permissions level problems from the DBA...
SSIS could be a good option - if I can table-drive the SP list.
This is a datawarehouse situation, and the work is largely independent and NOLOCK is universally used on the stars. The system is an 8-way 32GB machine, so I'm going to load it down and scale it back if I see problems.
I basically have three layers, Layer 1 has a small number of processes and depends on basically all the facts/dimensions already being loaded (effective, the stars are a Layer 0 - and yes, unfortunately they will all need to be loaded), Layer 2 has a number of processes which depend on some or all of Layer 1, and Layer 3 has a number of processes which depend on some or all of Layer 2. I have the dependencies in a table already, and would only initially launch all the procs in a particular layer at the same time, since they are orthogonal within a layer.
|
In the end, I created a C# management console program which launches the processes Async as they are able to be run and keeps track of the connections.
|
Is SSIS an option for you? You can create a simple package with parallel Execute SQL tasks to execute the stored procs simultaneously. However, depending on what your stored procs do, you may or may not get benefit from starting this in parallel (e.g. if they all access the same table records, one may have to wait for locks to be released etc.)
|
Spawning multiple SQL tasks in SQL Server 2005
|
[
"",
"sql",
"sql-server",
"sql-server-2005",
"automation",
"parallel-processing",
""
] |
While it would be very convenient to use inline functions at some situations,
Are there any drawbacks with inline functions?
**Conclusion**:
Apparently, there is nothing wrong with using inline functions.
But it is worth noting the following points!
* Overuse of inlining can actually make programs slower. Depending on a function's size, inlining it can cause the code size to increase or decrease. Inlining a very small accessor function will usually decrease code size while inlining a very large function can dramatically increase code size. On modern processors smaller code usually runs faster due to better use of the instruction cache. [- Google Guidelines](http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml#Inline_Functions)
* The speed benefits of inline functions tend to diminish as the function grows in size. At some point the overhead of the function call becomes small compared to the execution of the function body, and the benefit is lost [- Source](http://www-rocq.inria.fr/who/Marc.Thiriet/Langages/tic0324.html)
* There are a few situations where an inline function may not work:
+ For a function returning values; if a return statement exists.
+ For a function not returning any values; if a loop, switch or goto statement exists.
+ If a function is recursive. [-Source](http://www-rocq.inria.fr/who/Marc.Thiriet/Langages/tic0324.html)
* The `__inline` keyword causes a function to be inlined only if you specify the optimize option. If optimize is specified, whether or not `__inline` is honored depends on the setting of the inline optimizer option. By default, the inline option is in effect whenever the optimizer is run. If you specify optimize , you must also specify the noinline option if you want the `__inline` keyword to be ignored. [-Source](http://support.sas.com/documentation/onlinedoc/sasc/doc750/html/clug/zcoptinl.htm)
|
It worth pointing out that the inline keyword is actually just a hint to the compiler. The compiler may ignore the inline and simply generate code for the function someplace.
The main drawback to inline functions is that it can **increase the size of your executable** (depending on the number of instantiations). This can be a problem on some platforms (eg. embedded systems), especially if the function itself is recursive.
I'd also recommend making inline'd functions **very small** - The speed benefits of inline functions tend to diminish as the function grows in size. At some point the overhead of the function call becomes small compared to the execution of the function body, and the benefit is lost.
|
It could increase the size of the executable, and I don't think compilers will always actually make them inline even though you used the inline keyword. (Or is it the other way around, like what [Vaibhav](https://stackoverflow.com/questions/60830/what-is-wrong-with-using-inline-functions#60833) said?...)
I think it's usually OK if the function has only 1 or 2 statements.
**Edit:** Here's what the linux [CodingStyle](http://lxr.linux.no/linux/Documentation/CodingStyle) document says about it:
> Chapter 15: The inline disease
>
> There appears to be a common
> misperception that gcc has a magic
> "make me faster" speedup option called
> "inline". While the use of inlines can
> be appropriate (for example as a means
> of replacing macros, see Chapter 12),
> it very often is not. Abundant use of
> the inline keyword leads to a much
> bigger kernel, which in turn slows the
> system as a whole down, due to a
> bigger icache footprint for the CPU
> and simply because there is less
> memory available for the pagecache.
> Just think about it; a pagecache miss
> causes a disk seek, which easily takes
> 5 miliseconds. There are a LOT of cpu
> cycles that can go into these 5
> miliseconds.
>
> A reasonable rule of thumb is to not
> put inline at functions that have more
> than 3 lines of code in them. An
> exception to this rule are the cases
> where a parameter is known to be a
> compiletime constant, and as a result
> of this constantness you *know* the
> compiler will be able to optimize most
> of your function away at compile time.
> For a good example of this later case,
> see the kmalloc() inline function.
>
> Often people argue that adding inline
> to functions that are static and used
> only once is always a win since there
> is no space tradeoff. While this is
> technically correct, gcc is capable of
> inlining these automatically without
> help, and the maintenance issue of
> removing the inline when a second user
> appears outweighs the potential value
> of the hint that tells gcc to do
> something it would have done anyway.
|
What is wrong with using inline functions?
|
[
"",
"c++",
"inline-functions",
""
] |
I've got a Win32 C++ app with a suite of unit tests. After the unit tests have finished running, I'd like a human-readable report on any unfreed memory to be automatically generated. Ideally, the report will have a stack with files & line number info for each unfreed allocation. It would be nice to have them generated in a consistent order to make it easy to diff it from one run to the next. (Basically, I would like the results of valgrind --leak-check=full, but on windows).
I've had success with UMDH getting this kind of info from running processes, but that tool only seems to work if you attach to an existing process. I want this to happen automatically every time I run my unit tests.
Is there a tool that can do this? If so, how do I use it?
Thanks!
|
I played around with the CRT Debug Heap functions Mike B pointed out, but ultimately I wasn't satisfied just getting the address of the leaked memory. Getting the stacks like UMDH provides makes debugging so much faster. So, in my main() function now I launch UMDH using CreateProcess before and after I run the tests to take heap snapshots. I also wrote a trivial batch file that runs my test harness and then diffs the heap snapshots. So, I launch the batch file and get my test results and a text file with the full stacks of any unfreed allocations all in one shot.
UMDH picks up a lot of false positives, so perhaps some hybrid of the CrtDebug stuff and what I'm doing now would be a better solution. But for right now I'm happy with what I've got.
Now if I just had a way to detect if I was not closing any handles...
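As an aside, the snapshot-then-diff workflow described above is essentially what Python ships in its standard library as `tracemalloc`; a minimal sketch of the same before/after technique (purely an analogy, not a Win32 solution):

```python
import tracemalloc

tracemalloc.start(25)              # keep up to 25 stack frames per allocation
before = tracemalloc.take_snapshot()

# Simulate memory that is never freed between the two snapshots.
leaked = [bytearray(10_000) for _ in range(10)]

after = tracemalloc.take_snapshot()
# Diffing the snapshots reports each growth site with file and line,
# much like diffing two UMDH dumps.
for stat in after.compare_to(before, "lineno")[:3]:
    print(stat)
```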
|
To obtain this sort of information, we override new/delete and malloc/free, providing our own heap implementations that store stacktraces on allocation and produce a report when the heap is destroyed (as well as adding sentinels to detect buffer overruns).
This is a fair bit of work the first time you do it. [This guy](http://www.codeproject.com/KB/applications/visualleakdetector.aspx) has written a freeware tool that handles all the hard bits - I have not tried it out myself, but his explanation of how he wrote it is useful when rolling your own.
|
Memory leak detection while running unit tests
|
[
"",
"c++",
"winapi",
"memory-leaks",
""
] |
I have a multi-line string that I want to do an operation on each line, like so:
```
inputString = """Line 1
Line 2
Line 3"""
```
I want to iterate on each line:
```
for line in inputString:
doStuff()
```
|
```
inputString.splitlines()
```
This will give you a list of lines; the [`splitlines()`](https://docs.python.org/3/library/stdtypes.html#str.splitlines) method is designed to split a multi-line string into a list with one element per line.
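A complete runnable version of the loop from the question:

```python
input_string = """Line 1
Line 2
Line 3"""

for line in input_string.splitlines():
    print(line)   # each iteration sees one whole line, not one character
```

Note that iterating over the string directly (`for line in input_string:`) would iterate character by character, which is the trap the question runs into.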
|
```
inputString.split('\n') # --> ['Line 1', 'Line 2', 'Line 3']
```
This is identical to the above, but the string module's functions are deprecated and should be avoided:
```
import string
string.split(inputString, '\n') # --> ['Line 1', 'Line 2', 'Line 3']
```
Alternatively, if you want each line to include the break sequence (CR,LF,CRLF), use the `splitlines` method with a `True` argument:
```
inputString.splitlines(True) # --> ['Line 1\n', 'Line 2\n', 'Line 3']
```
|
How do I split a multi-line string into multiple lines?
|
[
"",
"python",
"string",
"split",
""
] |
I'm looking for a getopt library for C#. So far I have found a few ([phpguru](http://www.phpguru.org/static/getopt.html), [XGetOptCS](http://69.10.233.10/KB/cs/XGetoptCS.aspx), [getoptfordotnet](http://www.codeproject.com/KB/cs/geoptfordotnet.aspx)), but these look more like unfinished attempts that only support part of C's getopt.
Is there a full getopt implementation for C#?
|
Here is a .NET Implementation of getopt: <http://www.codeplex.com/getopt>
|
Miguel de Icaza [raves about Mono.Options](http://tirania.org/blog/archive/2008/Oct-14.html). You can use the [nuget package](https://www.nuget.org/packages/Mono.Options/), or just copy the [single C# source file](https://github.com/mono/mono/blob/master/mcs/class/Mono.Options/Mono.Options/Options.cs) into your project.
|
GetOpt library for C#
|
[
"",
"c#",
"getopt",
""
] |