Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I am dealing with a lot of strings in my program.
These strings don't change throughout the program's lifetime after they are read in.
But since C++ strings reserve extra capacity, they waste a lot of space that will never be used.
I tried to release those spaces, but it didn't work.
The following is the simple code that I tried:
```
string temp = "1234567890123456";
string str;
cout << str.capacity() << endl;
str.reserve(16);
cout << str.capacity() << endl;
// capacity is 31 on my computer
str += temp;
cout << str.capacity() << endl;
str.reserve(16);
cout << str.capacity() << endl;
// can't release. The capacity is still 31.
```
(The compiler is Visual C++)
How could I release it? | When you call `reserve`, you're making a *request* to change the capacity. Implementations will only guarantee that a number equal to or greater than this amount is reserved. **Therefore, a request to *shrink* capacity may be safely ignored by a particular implementation.**
However, I encourage you to consider whether this isn't premature optimization. Are you sure that you're really making so many strings that it's a memory bottleneck for you? Are you sure that it's actually memory that's the bottleneck?
From the documentation for `reserve`:
> This can expand or shrink the size of
> the storage space in the string,
> although notice that the resulting
> capacity after a call to this function
> is not necessarily equal to `res_arg`
> but can be either equal or greater
> than `res_arg`, therefore shrinking
> requests may or may not produce an
> actual reduction of the allocated
> space in a particular library
> implementation. In any case, it never
> trims the string content (for that
> purposes, see resize or clear, which
> modify the content). | Why don't you use a char array then? | How to release the unused capacity of a string | [
"c++",
"string",
"visual-c++",
"memory-management"
] |
How can I disable an event handler temporarily in WinForms? | Probably, the simplest way (which doesn't need unsubscribing or other stuff) is to declare a boolean value and check it at the beginning of the handler:
```
bool dontRunHandler;
void Handler(object sender, EventArgs e) {
if (dontRunHandler) return;
// handler body...
}
``` | Disable from what perspective? If you want to remove a method that's in your scope from the list of delegates on the handler, you can just do..
```
object.Event -= new EventHandlerType(your_Method);
```
This will remove that method from the list of delegates, and you can reattach it later with
```
object.Event += new EventHandlerType(your_Method);
``` | WinForms: temporarily disable an event handler | [
"c#",
"winforms"
] |
I am trying to send an array of about 50 elements to a WCF service method, but I'm always receiving a `(404) Bad Request` error.
I think that it has to do with the message size or something like that, because if I send an empty array it works.
I did some research and added some stuff to the web.config of the WCF service, but I still can't manage to get this to work.
Can anyone please provide some additional information as to how I can maybe **increase the size of the message I can send**?
---
**[UPDATE] Solution:**
[Solution](https://stackoverflow.com/questions/740039/c-wcf-wcf-service-returning-a-404-bad-request-when-sending-an-array-of-items/740087#740087) | Stupid, stupid me :(
The thing is that I was creating the binding configuration in the web.config like such:
```
<bindings>
<wsHttpBinding>
<binding name="netTcpBindingConfig" closeTimeout="00:10:00" openTimeout="00:10:00" receiveTimeout="00:10:00" sendTimeout="00:10:00" transactionFlow="false" hostNameComparisonMode="StrongWildcard" maxBufferPoolSize="524288" maxReceivedMessageSize="6000000">
<readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="6000000" maxBytesPerRead="4096" maxNameTableCharCount="16384" />
</binding>
</wsHttpBinding>
</bindings>
```
But then I was not applying the configuration to the endpoint! So, I had to add this to the endpoint tag:
```
bindingConfiguration="netTcpBindingConfig"
```
Now it works like a charm. | It's an obvious one but have you tried setting [MaxReceivedMessageSize](http://msdn.microsoft.com/en-us/library/system.servicemodel.channels.transportbindingelement.maxreceivedmessagesize.aspx) to 65536 and seeing if it still fails? | C# WCF: WCF Service returning a (404) Bad Request when sending an array of items | [
"c#",
"wcf",
"wcf-binding"
] |
I've written a helper class that takes a string in the constructor and provides a lot of Get properties to return various aspects of the string. Currently the only way to set the line is through the constructor and once it is set it cannot be changed. Since this class only has one internal variable (the string) I was wondering if I should keep it this way or should I allow the string to be set as well?
Some example code may help explain why I'm asking:
```
StreamReader stream = new StreamReader("ScannedFile.dat");
ScannerLine line = null;
int responses = 0;
while (!stream.EndOfStream)
{
line = new ScannerLine(stream.ReadLine());
if (line.IsValid && !line.IsKey && line.HasResponses)
responses++;
}
```
Above is a quick example of counting the number of valid responses in a given scanned file. Would it be more advantageous to code it like this instead?
```
StreamReader stream = new StreamReader("ScannedFile.dat");
ScannerLine line = new ScannerLine();
int responses = 0;
while (!stream.EndOfStream)
{
line.RawLine = stream.ReadLine();
if (line.IsValid && !line.IsKey && line.HasResponses)
responses++;
}
```
This code is used in the back end of an ASP.NET web application and needs to be somewhat responsive. I am aware that this may be a case of premature optimization, but I'm coding this for responsiveness on the client side and maintainability.
Thanks!
**EDIT -** I decided to include the constructor of the class as well (Yes, this is what it really is.) :
```
public class ScannerLine
{
private string line;
public ScannerLine(string line)
{
this.line = line;
}
/// <summary>Gets the date the exam was scanned.</summary>
public DateTime ScanDate
{
get
{
DateTime test = DateTime.MinValue;
DateTime.TryParseExact(line.Substring(12, 6).Trim(), "MMddyy", CultureInfo.InvariantCulture, DateTimeStyles.None, out test);
return test;
}
}
/// <summary>Gets a value indicating whether to use raw scoring.</summary>
public bool UseRaw { get { return (line.Substring(112, 1) == "R" ? true : false); } }
/// <summary>Gets the raw points per question.</summary>
public float RawPoints
{
get
{
float test = float.MinValue;
float.TryParse(line.Substring(113, 4).Insert(2, "."), out test);
return test;
}
}
}
```
**EDIT 2 -** I included some sample properties of the class to help clarify. As you can see, the class takes a fixed string from a scanner and simply makes it easier to break apart the line into more useful chunks. The file is a line-delimited file from a Scantron machine and the only way to parse it is a bunch of string.Substring calls and conversions. | I would definitely stick with the immutable version if you really need the class at all. Immutability makes it easier to reason about your code - if you store a reference to a `ScannerLine`, it's useful to know that it's not going to change. The performance is almost certain to be insignificant - the IO involved in reading the line is likely to be more significant than creating a new object. If you're *really* concerned about performance, you should benchmark/profile the code before you decide to make a design decision based on those performance worries.
However, if your state is just a string, are you really providing much benefit over just storing the strings directly and having appropriate methods to analyse them later? Does `ScannerLine` analyse the string and cache that analysis, or is it really just a bunch of parsing methods? | You're first approach is more clear. Performance wise you can gain something but I don't think is worth. | Create new instance or just set internal variables | [
"c#",
".net",
"asp.net",
"optimization",
"class"
] |
I am interested in the possibility that GWT could serve as the basis for my entire presentation layer.
I would be interested to know if anyone has tried this successfully - or unsuccessfully - and could persuade or dissuade me from attempting this. | I worked with GWT about a year ago. At the time it seemed like a great idea, with a number of caveats:
* I had "gotcha" problems with some parts of the API, probably because you're coding as if you're in Java when in fact you're writing for a separately compiled environment that only acts like Java, so you make some incorrect assumptions (in this case, passing nested values to the front end). Another gotcha was rewriting my ant scripts to use a 32-bit JVM for the GWT compile.
* I spent a bit of time trying to tweak the appearance - we never deployed a finished project so I'm not sure how much work this would've taken to get to a professional level, but it seemed (logically) like it'd be comparable to tweaking a Swing interface. Maybe a bit more unwieldy, visually, than HTML.
* Because the ajax is so hidden from you in the final product, I had some concerns about what I might do if the performance was poor.
That being said, it definitely seems worth playing with, and my experiences were a long, long time ago in internet years, especially given that it's probably much more mature now. It's also worth pointing out that it's a very different (and refreshing) way of developing GUI code from most MVC frameworks, and worth a look if for no other reason than that.
My feeling is that if you're building a high-load professional site with very demanding graphical requirements GWT is probably not a good choice, otherwise ok. | You mentioned that GWT would handle the presentational layer. Would you be doing the business layer in Java too? If that's the case, I'd like to point you towards [IT Mill Toolkit](http://itmill.com), that does exactly this: It's a toolkit that uses GWT to render its GUI components, allowing you to do your applications entirely in Java. I think the term it's trying to coin is "server driven RIA".
I come from a PHP background, but instantly came to like the toolkit. But it's probably better that I don't say anything more and let you make your own decisions.
*Disclaimer: I do work at IT Mill, although that's irrelevant to my opinions.* | Does it make sense to use Google Web Toolkit (GWT) as a full-blown Java web framework? | [
"java",
"gwt",
"wicket",
"web-frameworks"
] |
Would you do:
```
this.btSomeButton.Click += btSomeButton_OnClick;
private void btSomeButton_OnClick(object sender, EventArgs e)
{
this.DoFunc1();
this.DoFunc2();
}
```
Or:
```
this.btSomeButton.Click += DoFunc1;
this.btSomeButton.Click += DoFunc2;
```
Are there any hidden implications for using the second method? Like, is it guaranteed that DoFunc2() will run after DoFunc1()? | I think the first method is safer.
AFAIK there is no guarantee when it comes to method execution order, and if the methods need to run in sequence, the first approach makes more sense anyway.
Also, when multiple event handlers are attached to an event, it becomes very easy to overlook some of them when detaching individual event handlers.
Performance should be very close to the same in the 2 examples, but #1 is easier to read imho - however, it sorta depends on what they are doing. I'd prefer this for example:
```
this.btSomeButton.Click += DoSomethingRelatedToDataPersistence;
this.btSomeButton.Click += DoSomethingRelatedToTheDirectionTheMoonSpinsAroundTheEarth;
```
.. I hope you catch my drift :P
If you can tie the 2 event-handlers into 1 common procedure (talking from a domain-perspective) i'd do that. If the handlers are completely un-related I'd keep them in different handlers..
The answer is very subjective imho. No one will be able to provide a "final answer, case closed"-kinda answer. | One generic event name or multiple descriptive event name | [
"c#"
] |
*I am asking this purely to determine the worthwhile-ness of implementing the class in Question ...*
Do you know of a Java utility class that takes an un-synchronized instance, uses reflection to investigate that instance, and returns the input instance "wrapped" within synchronized calls ?
(i.e., a factory that creates a synchronized delegate class for any instance) | I like Jon Skeet's answer; it's seeing the forest instead of the trees. But to answer the question:
Assuming that the instance belongs to some interface, it's easy to use `java.lang.reflect.Proxy` to do this.
```
public final class SynchronizedFactory {
private SynchronizedFactory() {}
public static <T> T makeSynchronized(Class<T> ifCls, T object) {
return ifCls.cast(Proxy.newProxyInstance(
object.getClass().getClassLoader(),
new Class<?>[] {ifCls},
new Handler<T>(object)));
}
private static class Handler<T> implements InvocationHandler {
private final T object;
Handler(T object) {
this.object = object;
}
@Override
public Object invoke(Object proxy, Method method,
Object[] args) throws Throwable {
synchronized (object) {
return method.invoke(object, args);
}
}
}
}
```
This code is not tested, by the way. Use at your own risk. | No, I don't know of anything which does that - and I'd rarely want to use it.
Synchronizing individual operations is rarely a useful feature. Typically you want to synchronize a few operations at a time. Something which simply synchronizes individual operations gives an *illusion* of thread-safety (enough to make some programmers careless) without dealing with the real decisions of which operations need to be performed in an atomic fashion for any particular situation. | Java: Synchronization Utility | [
"java",
"synchronization",
"utilities"
] |
I'm using the .NET 3.0 `System.Security.Cryptography.MACTripleDES` class to generate a MAC value. Unfortunately, I am working with a hardware device that uses "`1111111111111111`" (as hex) as a single-length DES key. The `System.Security.Cryptography` library does some sanity checking on the key and throws an Exception if you try to use a cryptographically weak key.
For example:
```
byte[] key = new byte[24];
for (int i = 0; i < key.Length; i++)
key[i] = 0x11;
byte[] data = new byte[] { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 };
byte[] computedMac = null;
using (MACTripleDES mac = new MACTripleDES(key))
{
computedMac = mac.ComputeHash(data);
}
```
throws an exception
```
System.Security.Cryptography.CryptographicException : Specified key is a known weak key for 'TripleDES' and cannot be used.
```
I know this is not a secure key. In production, the device will be flashed with a new, secure key. In the meantime, is there any way to inhibit this Exception from being thrown? Perhaps an `app.config` or registry setting?
Edit: The key would actually be 101010... due to the algorithm forcing odd parity. I'm not sure if this is universal to the DES algorithm or just a requirement in the payment processing work I do.
Edit 2: Daniel's answer below has some very good information about hacking .NET. Unfortunately, I wasn't able to solve my problem using this technique, but there is still some interesting reading there. | Instead of using MACTripleDES with the DES key repeated to fake a single DES CBC-MAC, you could just implement CBC-MAC yourself on top of [DESCryptoServiceProvider](http://msdn.microsoft.com/en-us/library/system.security.cryptography.descryptoserviceprovider.aspx).
`1111111111111111` is not a weak DES key.
This will calculate a DES CBC-MAC:
```
public static byte[] CalcDesMac(byte[] key, byte[] data){
DESCryptoServiceProvider des = new DESCryptoServiceProvider();
des.Key = key;
des.IV = new byte[8];
des.Padding = PaddingMode.Zeros;
MemoryStream ms = new MemoryStream();
using(CryptoStream cs = new CryptoStream(ms, des.CreateEncryptor(), CryptoStreamMode.Write)){
cs.Write(data, 0, data.Length);
}
byte[] encryption = ms.ToArray();
byte[] mac = new byte[8];
Array.Copy(encryption, encryption.Length-8, mac, 0, 8);
PrintByteArray(encryption);
return mac;
}
``` | I wouldn't really recommend it, but you should be able to modify the IL-code that checks for weak keys using [Reflector](http://sebastien.lebreton.free.fr/reflexil/) and the Add-in [ReflexIL](https://www.red-gate.com/products/dotnet-development/reflector/)
edit:
Sorry, it took a while for me to load all of it up in my Virtual Machine (running Ubuntu) and didn't want to mess with Mono.
* Install the ReflexIL Add-in: View -> Add-ins -> Add
* Open ReflexIL: Tools -> ReflexIL v0.9
* Find the IsWeakKey() function. (You can use Search: F3)
* Two functions will come up, doubleclick the one found in System.Security.Cryptography.TripleDES
* ReflexIL should have come up too. In the Instructions tab, scroll all the way down to line 29 (offset 63).
* Change ldc.i4.1 to ldc.i4.0, this means the function will always return false.
In your assemblies pane (left one), you can now scroll up and click on "Common Language Runtime Library", the ReflexIL pane will give you an option to save it.
Important notes:
* BACK UP your original assembly first! (mscorlib.dll)
* mscorlib.dll is a signed assembly and you will need the .NET SDK (sn.exe tool) for ReflexIL to make it skip verification. I just checked this myself, you should already have this with Visual C# installed. Just click "Register it for verification skipping (on this computer)" when asked to.
* I don't think I have to tell you to only use this on your development machine :)
Good luck! If you need additional instructions, please feel free to use the commentbox.
edit2:
I'm confused!
[](http://i44.tinypic.com/2r6fwbo.png)
I completely removed the IsWeakKey check from the set\_Key function in the mscorlib assembly. I am absolutely certain that I modified the correct function, and that I did it correctly. Reflector's disassembler does no longer show the check. The funny thing is however, that Visual C# still throws the same exception.
This leads me to believe that mscorlib must somehow still be cached somewhere. However, renaming mscorlib.dll to mscorlib.dll\_ leads MSVC# to crash, so it must still be dependent on the original dll.
This is quite interesting stuff, but I think I've reached the point where I have no clue what is going on, it just doesn't make any sense! See attached image. :(
edit3:
I notice in Olly, that unlike assemblies such as mscoree, mscorsec and mscorwks; mscorlib.dll isn't actually located in:
c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\
But instead, in what appears to be a non-existent location:
C:\WINDOWS\assembly\NativeImages\_v2.0.50727\_32\mscorlib\6d667f19d687361886990f3ca0f49816\mscorlib.ni.dll
I think I am missing something here :) Will investigate this some more.
edit4:
Even after having patched out EVERYTHING in IsWeakKey, and played around with both removing and generating new native images (x.**ni**.dll) of mscorlib.dll using "ngen.exe", I am getting the same exception. It must be noted that even after uninstalling the native mscorlib images, it is still using mscorlib.ni.dll... Meh.
I give up. I hope someone will be able to answer what the hell is going on because I sure don't know. :) | TripleDES: Specified key is a known weak key for 'TripleDES' and cannot be used | [
"c#",
".net",
"cryptography",
"cryptographicexception"
] |
I am making multiple sub folders in my App\_Code folder to organize my classes and it seems it is working fine, is there any restriction on that?
like:
* App\_Code
+ Ui
- TextBoxes
- Labels | Absolutely no restrictions. Subdirectories are just filesystem grouping; you will have to assign the namespaces manually to all your classes in App\_Code. | I found [this](http://msdn.microsoft.com/en-us/library/t990ks23%28v=vs.80%29.aspx) interesting piece of information while looking for information on App\_Code, so I thought I'd share.
However, you can configure your Web application to treat subfolders of the App\_Code folder as separate compilable units. Each folder can then contain source code in a different programming language. The configuration is specified by creating a codeSubDirectories element in the compilation element of the Web.config file and adding a reference to the subfolder. The following example illustrates how you would configure subfolders named VBCode and CSCode to compile into separate assemblies:
```
<compilation debug="false">
<codeSubDirectories>
<add directoryName="VBCode" />
<add directoryName="CSCode" />
</codeSubDirectories>
</compilation>
```
The references to the VBCode and CSCode subfolders do not need to include any information about what programming language is contained in the subfolder. As with the App\_Code folder itself, ASP.NET infers the compiler to use based on the files in the subfolder. | multiple folders in App_Code Folder | [
"c#",
".net",
"asp.net"
] |
The code below is looping through a dictionary of strings and **IMyCompanySettings** looking for values that implement **IMyCompanyProductSetting**. Clearly, attempting the cast and catching the exception is a very expensive way to do this.
```
public static List<IMyCompanyProductSetting> GetProductSettings(ConfigurationManager cfm)
{
List<IMyCompanyProductSetting> ret = new List<IMyCompanyProductSetting>();
foreach(IMyCompanySetting setting in cfm.Values)
{
try
{
IMyCompanyProductSetting prod = (IMyCompanyProductSetting)setting;
ret.Add(prod);
}
catch
{
// Do nothing.
}
}
return ret;
}
```
What's a better way to do this? | ## Casting 101 [general info on casting stuff]:
Use `[object] is [interface/class]` expression:
```
if (setting is IMyCompanyProductSetting) {
...
}
```
Alternatively you can use the `as` keyword which tries to cast the object and if it fails, instead of throwing exception, it'll return `null`. Note that the target type must be a reference type in the `as` keyword:
```
var prod = setting as IMyCompanyProductSetting;
if (prod != null) {
...
}
```
You should always use the above code instead of the equivalent exception handling.
## Filtering an `IEnumerable` by type (LINQy):
As Jon Skeet pointed out, you should use `OfType` extension method to filter a sequence easily (assuming you got LINQ):
```
var filteredSequence = sequence.OfType<TargetType>();
```
## Casting an `IEnumerable` to type (LINQy):
If you want to try casting each element to the target type (as opposed to filtering by type), you can use the `Cast` extension method:
```
var castedSequence = sequence.Cast<TargetType>();
``` | The "hard" way (pre-LINQ) is to use "as". This is more efficient than using "is" and then casting each time (as both the "is" and the cast require execution-time checks):
```
IMyCompanyProductSetting prod = setting as IMyCompanyProductSetting;
if (prod != null)
{
ret.Add(prod);
}
```
See [another question](https://stackoverflow.com/questions/496096/casting-vs-using-the-as-keyword-in-the-clr/496167#496167) for when to use "as" and when to use a cast.
If you're using .NET 3.5, however, it's really easy:
```
return cfm.Values.OfType<IMyCompanyProductSetting>().ToList();
```
Very easy :) | What's the right way to check for inheritance from a class/interface? | [
"c#",
".net",
"reflection",
"inheritance"
] |
Let's say I have a class like this:
```
class MonkeyFish
{
public:
    MonkeyFish( GlobalObjectA & a, GlobalObjectB & b, GlobalObjectC & c);

private:
    GlobalObjectA & m_a;
    GlobalObjectB & m_b;
    GlobalObjectC & m_c;
};
```
Without a factory, I need to do the following in order to instantiate a `MonkeyFish`.
```
GlobalObjectA a;
GlobalObjectB b;
GlobalObjectC c;
int main()
{
MonkeyFish * monkey_fish = new MonkeyFish(a, b, c);
monkey_fish->go();
}
```
On the other hand, if I have a `MonkeyFishFactory`, it seems like I have to do this:
```
GlobalObjectA a;
GlobalObjectB b;
GlobalObjectC c;
int main()
{
MonkeyFishFactory mf_factory(a, b, c);
MonkeyFish * monkey_fish = mf_factory.buildMonkeyFish("Bob");
monkey_fish->go();
}
```
1. I still have global objects.
2. Even if the MonkeyFishFactory itself created the `GlobalObjects` internally (so they are now inside the MonkeyFishFactory instead of true globals), it seems like the MonkeyFishFactory **itself** still needs to be a global object so that I can access it anytime I want to create a `MonkeyFish`.
Isn't the Factory pattern the same thing as global state in this case?
(I'm currently operating under the assumption that global state is a Bad Thing, and eliminating it is a Good Thing.) | Are you confusing concepts here?
The Factory pattern is usually applied when you are returning an instance of a concrete class that hides behind an abstract interface. The idea is that the caller will see just the interface and doesn't even have to know what the concrete type of the object is. It is all about creating an object instance based on parameters and decoupling the logic associated with deciding what object to create from the user creating the object.
What you are describing is a mixture of Singleton (or MonoState) and Factory. Your Factory has state so it cannot be made static. In this case, you will need to apply something like the Singleton pattern to control the creation of a single Factory instance with the appropriate globals hidden within it:
```
class IMonkeyFish {
public:
  virtual ~IMonkeyFish() {}  // needs a body; go() keeps the class abstract
virtual void go() = 0;
};
class Factory {
public:
static Factory& instance();
IMonkeyFish* createMonkeyFish();
protected:
Factory(GlobalObjectA& a, GlobalObjectB& b, GlobalObjectC& c);
private:
static Factory *theInstance;
GlobalObjectA& instanceOfA;
GlobalObjectB& instanceOfB;
GlobalObjectC& instanceOfC;
};
Factory& factory = Factory::instance();
IMonkeyFish* fishie = factory.createMonkeyFish();
fishie->go();
```
The `Singleton` pattern governs the creation of the factory instance. The `Factory` pattern hides the details surrounding the creation of objects that implement the `IMonkeyFish` interface. The Good Thing (TM) is the hiding of the global state and decoupling of the `MonkeyFish` concrete details from creating an instance.
The usage or correctness of using the `Singleton` stuff is a whole other issue though. There are probably a bunch of threads floating around about that as well. | Global state is not in-and-of-itself a Bad Thing. **Public** global state is a Bad Thing. The Factory pattern helps encapsulate global state, which is a Good Thing. | Isn't the Factory pattern the same thing as global state? | [
"c++",
"global-variables",
"factory",
"global"
] |
Visual Studio 2008 comes with nice javascript debugging features.
But I played a little with the NetBeans debugger, which installed an ugly Microsoft Script Debugger into my IE... Normally IE should ask what I want to use for debugging, but now I can't start debugging with Visual Studio; the Script Debugger starts automatically... After uninstalling the Script Debugger I can't debug in IE at all. Even attaching to the iexplore.exe process doesn't help...
So I've installed the Script Debugger again... :(((((
How can I get back my Visual Studio debugging working in IE again? | I have the same problem. It's not quite resolved yet, but I'll share what I have.
To let IE see the debugger again do this:
1. Open Visual Studio.
2. Go to Tools > Options > Debugging > Just-In-Time.
3. Enable Script checkbox, click OK.
4. Close Visual Studio.
Now when I try to debug it opens Visual Studio but gives this error:
"Unable to attach to the crashing process. An error occurred that usually indicates a corrupt installation (code 0x80040155). If the problem persists, repair your Visual Studio installation via 'Add or Remove Program' in Control Panel."
When I get back in the office tomorrow I'll try repairing Visual Studio. | make sure you don't have script debugging disabled in IE Settings. May not be the answer to your situation but I forget this all the time.
[](https://i.stack.imgur.com/lKgvQ.png) | (Re)Enabling JavaScript debugger in IE7 with Visual Studio 2008 | [
"javascript",
"visual-studio",
"debugging",
"internet-explorer"
] |
I have:
```
unsigned char *foo();
std::string str;
str.append(static_cast<const char*>(foo()));
```
The error: `invalid static_cast from type ‘unsigned char*’ to type ‘const char*’`
What's the correct way to cast here in C++ style? | `reinterpret_cast` | `char *` and `const unsigned char *` are considered unrelated types. So you want to use `reinterpret_cast`.
But if you were going from `const unsigned char*` to a non `const` type you'd need to use `const_cast` first. `reinterpret_cast` cannot cast away a `const` or `volatile` qualification. | C++ style cast from unsigned char * to const char * | [
"c++",
"constants",
"casting"
] |
I want to get the name of the currently running program, that is the executable name of the program. In C/C++ you get it from `args[0]`. | ```
System.AppDomain.CurrentDomain.FriendlyName
``` | [`System.AppDomain.CurrentDomain.FriendlyName`](http://msdn.microsoft.com/en-us/library/system.appdomain.friendlyname.aspx) - Returns the filename with extension (e.g. MyApp.exe).
[`System.Diagnostics.Process.GetCurrentProcess().ProcessName`](http://msdn.microsoft.com/en-us/library/system.diagnostics.process.processname.aspx) - Returns the filename *without* extension (e.g. MyApp).
[`System.Diagnostics.Process.GetCurrentProcess().MainModule.FileName`](http://msdn.microsoft.com/en-us/library/system.diagnostics.processmodule.modulename.aspx) - Returns the full path and filename (e.g. C:\Examples\Processes\MyApp.exe). You could then pass this into `System.IO.Path.GetFileName()` or `System.IO.Path.GetFileNameWithoutExtension()` to achieve the same results as the above. | How do I get the name of the current executable in C#? | [
"c#",
"command-line"
] |
My team is looking into geospatial features offered by different database platforms.
Are all of the implementations database specific, or is there a ANSI SQL standard, or similar type of standard, which is being offered, or will be offered in the future?
I ask, because I would like the implemented code to be as database agnostic as possible (our project is written to be ANSI SQL standard).
Is there any known plan for standardization of this functionality in the future? | Currently, there are more than one specifications followed by popular proprietary and open source implementations of spatial databases:
* [The OpenGIS - Simple Features for SQL](http://www.opengeospatial.org/standards/sfs/)
* ISO SQL Multimedia Specification for Spatial - [ISO/IEC 13249-3:2006](http://www.iso.org/iso/catalogue_detail.htm?csnumber=38651) - Information technology -- Database languages -- SQL multimedia and application packages -- Part 3: Spatial
PostGIS, Oracle, Microsoft SQL Server and to some limited degree MySQL, all the databases implement the standard interfaces to manipulate spatial data. However, in spite of this fairly standardized features, all databases usually differ on simple SQL level what may make the database-agnostic implementation of your solution tricky. You likely need to survey the features you are interested and compare what various vendors provide. | I haven't tried it, but Google tells me [FDO](http://fdo.osgeo.org/) is "an open-source API for manipulating, defining and analyzing geospatial information regardless of where it is stored". It's listed on osgeo.org - a point in its favour in my opinion.
There are providers for MySQL & Oracle. Disappointingly, though, SQL Server and PostGIS aren't listed on the FDO [providers page](http://fdo.osgeo.org/OSProviderOverviews.html).
"sql",
"t-sql",
"geospatial",
"ansi-sql"
] |
I'm having trouble forming a MySQL query that performs the following action:
Select all threadIDs from the threads table ordering by the timestamp (in descending order) of the most recent post (largest timestamp) with threadID equal to the threadID of each thread. So basically I want to go through the threads and have MySQL check the database for let's say thread 0. It then checks all of the posts that have threadID of 0 and sorts thread0 based on the largest timestamp of the posts inside of thread0. Then it repeats this for thread1, thread2, etc and sorts them accordingly.
Is this even possible? It is to create the "bump-system" effect of the forum, where the most recently active thread is bumped to the top of the list continuously until the thread dies out, then it drops to the bottom. I used to use a different implementation where I stored a lastActivity timestamp in the threads table and updated it when a new post was submitted into the thread, but this query would make things a lot more efficient.
Thanks a lot! There are two tables relevant here: threads and posts. The posts have a threadID field that stores the ID of the thread it belongs to, and it also has a timestamp field. Threads has a field threadID that corresponds to the post's threadID. | The following has worked for me in the past on MySql. If you want to include more about each thread in the query you'll have to add the columns to the `SELECT` and the `GROUP BY`.
```
select thread.threadID, max(comments.modifiedOn) as threadUpdated
from thread inner join comments on thread.threadID = comments.threadID
group by 1
order by 2 desc;
```
This query serves your primary request, which is a list of threads ordered by most recent comment. As written it will not return threads with no comments; you would need to change the join to an outer join to do that. | ```
SELECT *
FROM threads t
LEFT JOIN
posts p
ON p.id =
(
        SELECT  pi.id
FROM posts pi
WHERE pi.threadID = t.threadID
ORDER BY
pi.timestamp DESC
LIMIT 1
)
```
Having an index on `posts (threadID, timestamp)` will greatly improve this query.
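If you want to sanity-check this latest-post-per-thread shape without a MySQL server handy, the same query runs against SQLite via Python's stdlib (the sample data below is invented, and the outer `ORDER BY` is added to show the bump ordering):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE threads(threadID INTEGER);
    CREATE TABLE posts(id INTEGER, threadID INTEGER, timestamp INTEGER);
    INSERT INTO threads VALUES (1), (2);
    -- Thread 1's newest post (ts=300) beats thread 2's (ts=200).
    INSERT INTO posts VALUES (10, 1, 100), (11, 1, 300), (20, 2, 200);
""")

rows = con.execute("""
    SELECT t.threadID
    FROM threads t
    LEFT JOIN posts p
           ON p.id = (SELECT pi.id
                      FROM posts pi
                      WHERE pi.threadID = t.threadID
                      ORDER BY pi.timestamp DESC
                      LIMIT 1)
    ORDER BY p.timestamp DESC
""").fetchall()
print(rows)  # [(1,), (2,)] -- most recently bumped thread first
```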
Note that unlike `GROUP BY` solutions, this query also selects all fields from `posts` and works even if you have duplicates on the latest `posts.timestamp`. | MySQL Query to create a "bump-system" for a forum I am making | [
"",
"php",
"mysql",
"inner-join",
""
] |
In my web app I want to set up a route like this:
```
/document/quality/index
/document/general/index
/document/quality/detail/[id]
/document/general/detail/[id]
```
As you can see I have two kinds of documents: general and quality. What is the best way to set this up in my global.asax file? I tried the following, but I can't get it to work:
```
routes.MapRoute(
"QualityDocument",
"Document/Quality/{action}/{id}",
new { controller = "Document", action="Index", id= ""}
);
routes.MapRoute(
"GeneralDocument",
"Document/General/{action}/{id}",
new { controller = "Document", action = "Index", id = "" }
);
routes.MapRoute(
"Default", // Route name
"{controller}/{action}/{id}", // URL with parameters
new { controller = "Home", action = "Index", id = "" } // Parameter defaults
);
```
**EDIT**
I have it working right now, but when I change the action="" in my asax file it doesn't work anymore:
Global.asax:
```
routes.MapRoute(
"GeneralDocument",
"Document/General/{action}/{id}",
new { controller = "Document", action = "blaat", id = "" }
);
```
DocumentController:
```
public void blaat(int? id)
{
Response.Write("algemeen");
// return View();
}
```
Now I get the Resource not found error. But when I use index instead of blaat it works. How come? | You should be able to just prefix the route with "Document" and then set the defaults like:
```
routes.MapRoute("DocumentView",
"Document/{controller}/{action}/{id}",
new { controller = "General", action="Index", id = "" }
);
```
Remember, with routing you want to be completely explicit when setting out the route. Don't let those unexpected routes through :D
Check out Scott Hanselman's presentation at MIX.
It's very funny and you'll pick up some great tips!
<http://www.hanselman.com/blog/Mix09FirstHalfRollupAndSessionVideos.aspx> | Perhaps add in the controller attribute, but constrain it to be the document controller.
```
routes.MapRoute(
"QualityDocument",
"{controller}/Quality/{action}/{id}",
new { controller = "Document", action="Index", id= ""},
new { controller = "Document" }
);
routes.MapRoute(
"GeneralDocument",
"{controller}/General/{action}/{id}",
new { controller = "Document", action = "Index", id = "" },
new { controller = "Document" } );
``` | C# asp.net mvc 1.0 routing | [
"",
"c#",
"model-view-controller",
"routes",
""
] |
Is anybody familiar with DevExpress in Microsoft Visual C#?
My boss gave me the code to study and this code has the version of DevExpress 8.3.
And I would like to open this code in my laptop which has already a version of DevExpress 9.1
Would that be a problem finding the files?
Thanks a lot!
Regards
tintincute | DevExpress has a convertion tool located under start -> all programs -> developper .NET vx.x -> tools -> ProjectConvertor
That should do the trick | If you want to just look at the code, it should be fine.
For compiling, you have to convert the project to use DevExpress 9.1. You can either use the DevExpress tool to upgrade or manually remove and add the references to use 9.1. | DevExpress Microsoft Visual C# | [
"",
"c#",
".net",
"devexpress",
""
] |
Is it possible to modify the connectionstrings defined in the app.config/web.config at runtime? I want to use different configfiles depending on the machine the app/site is run on (only for debugging purposes, of course. We'll use the regular config files when deployed).
I can write AppSettings, but not ConnectionStrings (AFAIK). Or can I? | Yes it's possible, but AFAIK only via Reflection. The following code should do what you need (read below for usage):
```
public static string SetConnectionString(Type assemblyMember,
Type settingsClass,
string newConnectionString,
string connectionStringKey)
{
Type typSettings = Type.GetType(Assembly.CreateQualifiedName(assemblyMember.Assembly.FullName, settingsClass.FullName));
if (typSettings == null)
{
return null;
}
PropertyInfo prpDefault = typSettings.GetProperty("Default", BindingFlags.Static | BindingFlags.Public);
if (prpDefault == null)
{
return null;
}
object objSettings = prpDefault.GetValue(null, null);
if (objSettings == null)
{
return null;
}
// the default property, this[], is actually named Item
PropertyInfo prpItem = objSettings.GetType().GetProperty("Item", BindingFlags.Instance | BindingFlags.Public);
if (prpItem == null)
{
return null;
}
object[] indexerName = { connectionStringKey };
string oldConnectionString = (string)prpItem.GetValue(objSettings, indexerName);
prpItem.SetValue(objSettings, newConnectionString, indexerName);
return oldConnectionString;
}
```
`assemblyMember` is the calling type
`settingsClass` is the type of your settings class
`newConnectionString` is the full string to set
`connectionStringKey` is the name of the connection string that you defined in your app's settings
You should call this method as soon as possible after your app has started, preferably in the Main() method. | You can't really edit the config file of the running process.
One option (with pros and cons) is to use config data in the machine.config or the master web.config (for "site", you use the "location" nodes) - not an option to rush into, though.
A better way to handle this is to swap the config file as part of your build/deploy process, ideally automated. That way, everything is self-contained, and you can "robocopy" to a vanilla server and have it work.
---
Re your "per developer" point; I found that the easiest way to do this was to standardise the config, and tweak the machines. For example, we run a local web-server on a VM; rather than code against each machine, we standardise on "localserver" (to mirror "localhost"), and add a local DNS record to each machine that the developer can control. Note that this requires fixed IP addresses (or maybe a DHCP reservation) to prevent it changing over time!
Ditto databases; local servers can use "."; remote servers can be aliased on the machine, so "devserver" points to whatever the user wants.
Just a thought... | Is it possible to modify configuration ConnectionStrings at runtime? | [
"",
"c#",
"configuration",
"web-config",
"app-config",
"connection-string",
""
] |
I'm writing a small program that's supposed to execute a command on a remote server (let's say a reasonably dumb wrapper around `ssh [hostname] [command]`).
I want to execute it as such:
```
./floep [command]
```
However, I need to pass certain command lines from time to time:
```
./floep -v [command]
```
so I decided to use optparse.OptionParser for this. Problem is, sometimes the command also has arguments, which works fine if I do:
```
./floep -v "uname -a"
```
But I also want it to work when I use:
```
./floep -v uname -a
```
The idea is, as soon as I come across the first non-option argument, everything after that should be part of my command.
This, however, gives me:
```
Usage: floep [options]
floep: error: no such option: -a
```
Does OptionParser support this syntax? If so: how?
If not: what's the best way to fix this? | Try using [`disable_interspersed_args()`](http://docs.python.org/library/optparse.html#other-methods)
```
#!/usr/bin/env python
from optparse import OptionParser
parser = OptionParser()
parser.disable_interspersed_args()
parser.add_option("-v", action="store_true", dest="verbose")
(options, args) = parser.parse_args()
print "Options: %s args: %s" % (options, args)
```
When run:
```
$ ./options.py foo -v bar
Options: {'verbose': None} args: ['foo', '-v', 'bar']
$ ./options.py -v foo bar
Options: {'verbose': True} args: ['foo', 'bar']
$ ./options.py foo -a bar
Options: {'verbose': None} args: ['foo', '-a', 'bar']
``` | OptionParser instances can actually be manipulated during the parsing operation for complex cases. In this case, however, I believe the scenario you describe is supported out-of-the-box (which would be good news if true! how often does that happen??). See this section in the docs: [Querying and manipulating your option parser](http://docs.python.org/library/optparse.html#querying-and-manipulating-your-option-parser).
To quote the link above:
> disable\_interspersed\_args()
>
> Set parsing to stop on the first non-option. Use this if you have a
> command processor which runs another command which has options of its
> own and you want to make sure these options don’t get confused. For example,
> each command might have a different set of options. | OptionParser - supporting any option at the end of the command line | [
"",
"python",
"optparse",
""
] |
I know it is possible to add new CSS class definitions at runtime through JavaScript. But...
**How to change/remove CSS class definitions at runtime?**
For instance, suppose I have the class below:
```
<style>
.menu { font-size: 12px; }
</style>
```
What I want is, at runtime, change the `font-size` rule of the `.menu` class, so that every element in the page who uses this class will be affected.
And, I also want to know how to remove the `.menu` class definition. | It's not difficult to change CSS rules at runtime, but apparently it is difficult to find the rule you want. PPK has a quick tour of this on [quirksmode.org](http://www.quirksmode.org/dom/changess.html).
You'll want to use `document.styleSheets[i].cssRules` which is an array you need to parse through to find the one you want, and then `rule.style.setProperty('font-size','10px',null);` | I found an answer at <http://twelvestone.com/forum_thread/view/31411> and I'm reproducing parts of the thread here, verbatim, because I'm afraid the thread, and the very helpful answer, will evaporate.
Flip 2006.06.26, 02:45PM —
[ Crunchy Frog ]
posts: 2470 join date: 2003.01.26
Well after about 10 to 12 hours of searching, reading, and tinkering I've done it! I am CSS/JS code Ninja today!
The JS code used is as follows:
```
<script language="JavaScript">
function changeRule(theNumber) {
var theRules = new Array();
if (document.styleSheets[0].cssRules) {
theRules = document.styleSheets[0].cssRules;
} else if (document.styleSheets[0].rules) {
theRules = document.styleSheets[0].rules;
}
theRules[theNumber].style.backgroundColor = '#FF0000';
}
</script>
```
I've tested this on FF(Mac), Safari(Mac), O9(Mac), IE5(Mac), IE6(PC), FF(PC) and they all work. The reason for the 'if' statement is some of the browsers use cssRules... some use just rules... And the only other hair is that you can't use "background-color" to refer to the style, you have to get rid of the hyphen and capitalize the first letter after the hyphen.
To refer to the first CSS rule you'd use "changeRule(0)", the second "changeRule(1)" and the third "changeRule(2)" and so on...
I haven't found a browser it doesn't work on.... yet....
Anything you say can and will be used against you. Over and over and over.
---
BillyBones 2011.01.20, 11:57AM —
[ in the barrel ]
posts: 1 join date: 2011.01.20
Hello, I registered in these forums just to add this little bit as I could not conveniently find it elsewhere:
```
function changeStyle(selectorText)
{
var theRules = new Array();
if (document.styleSheets[0].cssRules) {
theRules = document.styleSheets[0].cssRules;
}
else if (document.styleSheets[0].rules) {
theRules = document.styleSheets[0].rules;
}
for (n in theRules)
{
if (theRules[n].selectorText == selectorText) {
theRules[n].style.color = 'blue';
}
}
}
```
This simply makes the CSS rule identifiable by its selector name rather than by its index number in the cssRules array.
In other words, you can execute the Javascript function with the string argument "selectorText" instead of a number that is difficult to remember and susceptible to frequent changes if new styles are added.
Thank you for your 10 to 12 hours of research, Flip, I hope I made a worthy addition. | How to change/remove CSS classes definitions at runtime? | [
"",
"javascript",
"css",
""
] |
I'm having a very strange issue with Python's subprocess.Popen. I'm using it to call an external exe several times and keep the output in a list.
Every time you call this external exe, it will return **a different string**. However, if I call it several times using Popen, it will always **return the SAME string**. =:-O
It looks like Popen always returns the same value from stdout, without re-running the exe. Maybe it is doing some sort of caching, without actually calling the exe again.
This is my code:
```
def get_key():
from subprocess import Popen, PIPE
args = [C_KEY_MAKER, '/26', USER_NAME, ENCRYPTION_TEMPLATE, '0', ]
process = Popen(args, stdout=PIPE)
output = process.communicate()[0].strip()
return output
if __name__ == '__main__':
print get_key() # Returns a certain string
print get_key() # Should return another string, but returns the same!
```
What on Earth am I doing wrong?! | It is possible (if `C_KEY_MAKER`'s random behaviour is based on the current time in seconds, or similar) that when you run it twice on the command line, the time has changed in between runs and so you get a different output, but when Python runs it, it runs it twice in such quick succession that the time hasn't changed, and so it returns the same value twice in a row. | Nothing. That works fine, in my own tests (aside from your indentation error at the bottom). The problem is either in your exe or elsewhere.
To clarify, I created a python program tfile.py
```
cat > tfile.py
#!/usr/bin/env python
import random
print random.random()
```
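To see that `Popen` itself does no caching, the check can be scripted end to end. This sketch writes a throwaway copy of tfile.py (using `print()` so it also runs under Python 3) and launches it twice; the two values differ because each call spawns a fresh interpreter:

```python
import os
import subprocess
import sys
import tempfile

# Write a throwaway tfile.py that prints a random number.
script_path = os.path.join(tempfile.mkdtemp(), "tfile.py")
with open(script_path, "w") as f:
    f.write("import random\nprint(random.random())\n")

def run_once():
    # Same pattern as the question: Popen + communicate().
    out = subprocess.Popen(
        [sys.executable, script_path], stdout=subprocess.PIPE
    ).communicate()[0]
    return out.decode().strip()

first, second = run_once(), run_once()
print(first != second)  # True: every Popen call really runs the script again
```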
And then altered the program to get rid of the indentation problem at the bottom, and to call tfile.py. It did give two different results. | Python's subprocess.Popen returns the same stdout even though it shouldn't | [
"",
"python",
"subprocess",
"stdout",
"popen",
""
] |
Is it good to send stack allocated object as a pointer parameter to some other function? | Yes, but the more common C++ idiom for this situation is to use a reference (and probably a const reference) instead of a pointer. So instead of
```
void foo( sometype * p ) {
p->func();
}
```
you write:
```
void foo( sometype & p ) {
p.func();
}
```
This has the advantage that you don't need to dereference the object in the caller:
```
void afunc() {
sometype t;
foo( t );
}
```
and also gives a subliminal hint to the reader that you do not intend the function to take ownership of the object. | If you are sure that the call is synchronous then it is perfectly valid to send the stack allocated object's address to the function. In case the call is asynchronous (i.e. the function you called passes the pointer to a different thread) then it will definitely create issues, as you might try to access the memory address from the other thread even after the stack allocated object is destroyed.
"",
"c++",
"function",
"parameters",
""
] |
I was hoping to write a python script to create some appropriate environmental variables by running the script in whatever directory I'll be executing some simulation code, and I've read that I can't write a script to make these env vars persist in the mac os terminal. So two things:
Is this true?
and
It seems like it would be a useful thing to do; why isn't it possible in general? | You can't do it from python, but some clever bash tricks can do something similar. The basic reasoning is this: environment variables exist in a per-process memory space. When a new process is created with fork() it inherits its parent's environment variables. When you set an environment variable in your shell (e.g. bash) like this:
```
export VAR="foo"
```
What you're doing is telling bash to set the variable VAR in its process space to "foo". When you run a program, bash uses fork() and then exec() to run the program, so anything you run from bash inherits the bash environment variables.
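You can watch this inheritance from Python itself. In the sketch below (the variable name is invented for the demo), a change to `os.environ` reaches a child process, but nothing the script does can reach back up into the parent shell:

```python
import os
import subprocess
import sys

os.environ["DEMO_VAR"] = "visible-to-children"

# fork()/exec() copies this process's environment into the child,
# so the child sees the variable we just set...
child_output = subprocess.check_output(
    [sys.executable, "-c", "import os; print(os.environ['DEMO_VAR'])"]
)
result = child_output.decode().strip()
print(result)  # visible-to-children

# ...but the shell that launched this script is a *parent* process,
# and its environment block is untouched by anything done here.
```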
Now, suppose you want to create a bash command that sets some environment variable DATA with content from a file in your current directory called ".data". First, you need to have a command to get the data out of the file:
```
cat .data
```
That prints the data. Now, we want to create a bash command to set that data in an environment variable:
```
export DATA=`cat .data`
```
That command takes the contents of .data and puts it in the environment variable DATA. Now, if you put that inside an alias command, you have a bash command that sets your environment variable:
```
alias set-data="export DATA=`cat .data`"
```
You can put that alias command inside the .bashrc or .bash\_profile files in your home directory to have that command available in any new bash shell you start. | One workaround is to output `export` commands, and have the parent shell evaluate this..
`thescript.py`:
```
import pipes
import random
r = random.randint(1,100)
print("export BLAHBLAH=%s" % (pipes.quote(str(r))))
```
..and the bash alias (the same can be done in most shells.. even tcsh!):
```
alias setblahblahenv="eval $(python thescript.py)"
```
Usage:
```
$ echo $BLAHBLAH
$ setblahblahenv
$ echo $BLAHBLAH
72
```
You can output any arbitrary shell code, including multiple commands like:
```
export BLAHBLAH=23 SECONDENVVAR='something else' && echo 'everything worked'
```
Just remember to be careful about escaping any dynamically created output (the `pipes.quote` function is good for this) | Why can't environmental variables set in python persist? | [
"",
"python",
"persistence",
"environment-variables",
""
] |
Currently I do it like this:
```
IndexSearcher searcher = new IndexSearcher(lucenePath);
Hits hits = searcher.Search(query);
Document doc;
List<string> companyNames = new List<string>();
for (int i = 0; i < hits.Length(); i++)
{
doc = hits.Doc(i);
companyNames.Add(doc.Get("companyName"));
}
searcher.Close();
companyNames = companyNames.Distinct<string>().Skip(offSet ?? 0).ToList();
return companyNames.Take(count??companyNames.Count()).ToList();
```
As you can see, I first collect ALL the fields (several thousands) and then distinct them, possibly skip some and take some out.
I feel like there should be a better way to do this. | I'm not sure there is, honestly, as Lucene doesn't provide 'distinct' functionality. I believe with SOLR you can use a facet search to achieve this, but if you want this in Lucene, you'd have to write some sort of facet functionality yourself. So as long as you don't run into any performance issues, you should be ok this way. | Tying this question to an earlier question of yours (re: "Too many clauses"), I think you should definitely be looking at term enumeration from the index reader. Cache the results (I used a sorted dictionary keyed on the field name, with a list of terms as the data, to a max of 100 terms per field) until the index reader becomes invalid and away you go.
Or perhaps I should say that, when faced with a similar problem to yours, that's what I did.
Hope this helps, | Faster way to get distinct values from Lucene Query | [
"",
"c#",
"lucene",
""
] |
Is there a cross-browser CSS/JavaScript technique to display a long HTML table such that the column headers stay fixed on-screen and do not scroll with the table body? Think of the "freeze panes" effect in Microsoft Excel.
I want to be able to scroll through the contents of the table, but to always be able to see the column headers at the top. | I was looking for a solution to this for a while and found that most of the answers either did not work or were not suitable for my situation, so I wrote a simple solution with jQuery.
This is the solution outline:
1. Clone the table that needs to have a fixed header, and place the
cloned copy on top of the original.
2. Remove the table body from top table.
3. Remove the table header from bottom table.
4. Adjust the column widths. (We keep track of the original column widths)
Below is the code in a runnable demo.
```
function scrolify(tblAsJQueryObject, height) {
var oTbl = tblAsJQueryObject;
// for very large tables you can remove the four lines below
// and wrap the table with <div> in the mark-up and assign
// height and overflow property
var oTblDiv = $("<div/>");
oTblDiv.css('height', height);
oTblDiv.css('overflow', 'scroll');
oTbl.wrap(oTblDiv);
// save original width
oTbl.attr("data-item-original-width", oTbl.width());
oTbl.find('thead tr td').each(function() {
$(this).attr("data-item-original-width", $(this).width());
});
oTbl.find('tbody tr:eq(0) td').each(function() {
$(this).attr("data-item-original-width", $(this).width());
});
// clone the original table
var newTbl = oTbl.clone();
// remove table header from original table
oTbl.find('thead tr').remove();
// remove table body from new table
newTbl.find('tbody tr').remove();
oTbl.parent().parent().prepend(newTbl);
newTbl.wrap("<div/>");
// replace ORIGINAL COLUMN width
newTbl.width(newTbl.attr('data-item-original-width'));
newTbl.find('thead tr td').each(function() {
$(this).width($(this).attr("data-item-original-width"));
});
oTbl.width(oTbl.attr('data-item-original-width'));
oTbl.find('tbody tr:eq(0) td').each(function() {
$(this).width($(this).attr("data-item-original-width"));
});
}
$(document).ready(function() {
scrolify($('#tblNeedsScrolling'), 160); // 160 is height
});
```
```
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.6.4/jquery.min.js"></script>
<div style="width:300px;border:6px green solid;">
<table border="1" width="100%" id="tblNeedsScrolling">
<thead>
<tr><th>Header 1</th><th>Header 2</th></tr>
</thead>
<tbody>
<tr><td>row 1, cell 1</td><td>row 1, cell 2</td></tr>
<tr><td>row 2, cell 1</td><td>row 2, cell 2</td></tr>
<tr><td>row 3, cell 1</td><td>row 3, cell 2</td></tr>
<tr><td>row 4, cell 1</td><td>row 4, cell 2</td></tr>
<tr><td>row 5, cell 1</td><td>row 5, cell 2</td></tr>
<tr><td>row 6, cell 1</td><td>row 6, cell 2</td></tr>
<tr><td>row 7, cell 1</td><td>row 7, cell 2</td></tr>
<tr><td>row 8, cell 1</td><td>row 8, cell 2</td></tr>
</tbody>
</table>
</div>
```
This solution works in Chrome and IE. Since it is based on jQuery, this should work in other jQuery supported browsers as well. | ## This can be cleanly solved in four lines of code.
If you only care about modern browsers, a fixed header can be achieved much more easily by using CSS transforms. Sounds odd, but works great:
* HTML and CSS stay as-is.
* No external JavaScript dependencies.
* Four lines of code.
* Works for all configurations (table-layout: fixed, etc.).
```
document.getElementById("wrap").addEventListener("scroll", function(){
var translate = "translate(0,"+this.scrollTop+"px)";
this.querySelector("thead").style.transform = translate;
});
```
Support for CSS transforms is [widely available](http://caniuse.com/#feat=transforms2d) except for Internet Explorer 8-.
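The transform string the handler assigns is simple enough to factor out and check without a DOM (the helper name is invented):

```javascript
// Build the transform value the scroll handler assigns to the <thead>.
function headerTransform(scrollTop) {
  return "translate(0," + scrollTop + "px)";
}

console.log(headerTransform(120)); // translate(0,120px)
```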
Here is the full example for reference:
```
document.getElementById("wrap").addEventListener("scroll",function(){
var translate = "translate(0,"+this.scrollTop+"px)";
this.querySelector("thead").style.transform = translate;
});
```
```
/* Your existing container */
#wrap {
overflow: auto;
height: 400px;
}
/* CSS for demo */
td {
background-color: green;
width: 200px;
height: 100px;
}
```
```
<div id="wrap">
<table>
<thead>
<tr>
<th>Foo</th>
<th>Bar</th>
</tr>
</thead>
<tbody>
<tr><td></td><td></td></tr>
<tr><td></td><td></td></tr>
<tr><td></td><td></td></tr>
<tr><td></td><td></td></tr>
<tr><td></td><td></td></tr>
<tr><td></td><td></td></tr>
<tr><td></td><td></td></tr>
<tr><td></td><td></td></tr>
<tr><td></td><td></td></tr>
<tr><td></td><td></td></tr>
<tr><td></td><td></td></tr>
<tr><td></td><td></td></tr>
</tbody>
</table>
</div>
``` | HTML table with fixed headers? | [
"",
"javascript",
"css",
"html-table",
""
] |
I have a TextBlock that has Inlines dynamically added to it (basically a bunch of Run objects that are either italic or bold).
In my application I have a search function.
I want to be able to highlight the TextBlock's text that is being searched for.
By highlighting I mean changing the color of certain parts of the TextBlock's text (keeping in mind that it may highlight several different Run objects at a time).
I have tried this example <http://blogs.microsoft.co.il/blogs/tamir/archive/2008/05/12/search-and-highlight-any-text-on-wpf-rendered-page.aspx>
But it seems very unstable :(
Is there an easy way to solve this problem? | This question is similar to [How to display search results in a WPF items control with highlighted query terms](https://stackoverflow.com/questions/3728584/how-to-display-search-results-in-a-wpf-items-control-with-highlighted-query-terms)
In answer to that question, I came up with an approach that uses an IValueConverter. The converter takes a text snippet, formats it into valid XAML markup, and uses a XamlReader to instantiate the markup into framework objects.
The full explanation is rather long, so I've posted it to my blog: [Highlighting Query Terms in a WPF TextBlock](https://web.archive.org/web/20180308102102/http://underground.infovark.com/2011/03/03/highlighting-query-terms-in-a-wpf-textblock/) | I took [dthrasers answer](https://stackoverflow.com/a/5183538/1767377) and took out the need for an XML parser. He does a great job explaining each of the pieces in [his blog](http://underground.infovark.com/2011/03/03/highlighting-query-terms-in-a-wpf-textblock/); however, this version doesn't require you to add any extra libraries. Here's how I did it:
Step one, make a converter class:
```
class StringToXamlConverter : IValueConverter
{
public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
{
string input = value as string;
if (input != null)
{
var textBlock = new TextBlock();
textBlock.TextWrapping = TextWrapping.Wrap;
string escapedXml = SecurityElement.Escape(input);
while (escapedXml.IndexOf("|~S~|") != -1) {
//up to |~S~| is normal
textBlock.Inlines.Add(new Run(escapedXml.Substring(0, escapedXml.IndexOf("|~S~|"))));
//between |~S~| and |~E~| is highlighted
textBlock.Inlines.Add(new Run(escapedXml.Substring(escapedXml.IndexOf("|~S~|") + 5,
escapedXml.IndexOf("|~E~|") - (escapedXml.IndexOf("|~S~|") + 5)))
{ FontWeight = FontWeights.Bold, Background= Brushes.Yellow });
//the rest of the string (after the |~E~|)
escapedXml = escapedXml.Substring(escapedXml.IndexOf("|~E~|") + 5);
}
if (escapedXml.Length > 0)
{
textBlock.Inlines.Add(new Run(escapedXml));
}
return textBlock;
}
return null;
}
public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
{
throw new NotImplementedException("This converter cannot be used in two-way binding.");
}
}
```
Step two:
Instead of a TextBlock, use a ContentControl. Pass in the string (the one you would have used for your TextBlock) to the ContentControl, like so:
```
<ContentControl Margin="7,0,0,0"
HorizontalAlignment="Left"
VerticalAlignment="Center"
Content="{Binding Description, Converter={StaticResource CONVERTERS_StringToXaml}, Mode=OneTime}">
</ContentControl>
```
Step three:
Make sure the text you pass includes `|~S~|` before and `|~E~|` after the text part you want to be highlighted. For example in this string `"my text |~S~|is|~E~| good"` the `is` will be highlighted in yellow.
Notes:
You can change the style in the run to determine what and how your text is highlighted
Make sure you add your Converter class to your namespace and resources. This might also require a rebuild to get working. | WPF TextBlock highlight certain parts based on search condition | [
"",
"c#",
"wpf",
"highlight",
"textblock",
""
] |
What are your thoughts on code that looks like this:
```
public void doSomething()
{
try
{
// actual code goes here
}
catch (Exception ex)
{
throw;
}
}
```
The problem I see is that the actual error is not handled; the exception is just thrown in a different place. I find it more difficult to debug because I don't get a line number where the actual problem is.
So my question is why would this be good?
---- EDIT ----
From the answers it looks like most people are saying it's pointless to do this with no custom or specific exceptions being caught. That's what I wanted comments on: the case when no specific exception is being caught. I can see the point of actually doing something with a caught exception, just not the way this code is. | First, a point of terminology: this is not throwing the exception in a different place. `throw` without a target *rethrows* the exception, which is very different from throwing a new exception. Primarily, a rethrow does not reset the stack trace.
In this particular sample, the catch is pointless because it doesn't do anything. The exception is happily rethrown and it's almost as if the try/catch didn't exist. | I think the construction should be used for handling the exceptions you know you will be throwing inside your code; if any other exception is raised, then just rethrow.
Take into account that `throw;` is different from `throw ex;`: the latter truncates the stack trace to the new point of throwing, losing valuable info about the exception.
```
public void doSomething()
{
try
{
// actual code goes here
}
catch (EspecificException ex)
{
HandleException(ex);
}
catch (Exception ex)
{
throw;
}
}
``` | Thoughts on try-catch blocks | [
"",
"c#",
"exception",
"coding-style",
"try-catch",
""
] |
I have no problem compiling specific code the following way:
```
g++ -I /opt/local/include Code1.cc -o Code1
```
However when I tried to do that in the makefile:
```
CXX = g++ -Wall -Werror -gstabs -pedantic -O2 -g
all: Code3 Code2 Code1
Code3: Code3.cc Tools.cc
$(CXX) $^ -o $@
Code2: Code2.cc Tools.cc
$(CXX) $^ -o $@
Code1: Code1.cc Tools.cc
$(CXX) -I /opt/local/include $^ -o $@
```
It complains. What's the correct way to do it?
Note that only Code1.cc require the external library as include.
The header of Code1.cc looks like this:
```
#include <iostream>
#include <vector>
#include <fstream>
#include <sstream>
#include <iomanip>
#include <boost/math/distributions/chi_squared.hpp>
using namespace std;
using boost::math::chi_squared;
using boost::math::quantile;
```
The error I get is as follows:
```
g++ -Wall -Werror -gstabs -pedantic -O2 -g -I/opt/local/include Code1.cc Tools.cc -o EstimateErrorMean
In file included from /opt/local/include/boost/detail/lcast_precision.hpp:16,
from /opt/local/include/boost/lexical_cast.hpp:31,
from /opt/local/include/boost/math/special_functions/gamma.hpp:23,
from /opt/local/include/boost/math/distributions/chi_squared.hpp:13,
from EstimateErrorMean.cc:19:
/opt/local/include/boost/integer_traits.hpp:164:66: error: use of C99 long long integer constant
/opt/local/include/boost/integer_traits.hpp:164:77: error: use of C99 long long integer constant
/opt/local/include/boost/integer_traits.hpp:170:70: error: use of C99 long long integer constant
``` | -pedantic causes all required warnings to be reported, and -Werror causes warnings to be reported as errors. As C++ does not define the "ULL" long long integer constant syntax (C99 does), this is probably being reported and then promoted to full-on error status by g++.
Try removing -pedantic.
[Or try #including <boost/cstdint.hpp>.](http://www.boost.org/doc/libs/1_38_0/libs/integer/index.html) | In a GNU Make makefile, the convention is to use `CXXFLAGS` for C++ compiler flags, and to make an addition to the flags for a specific target, you can use target-specific variables. For example:
```
CXX=g++
# Set CXXFLAGS to include the base set of flags
CXXFLAGS=-Wall -Werror -gstabs -pedantic -O2 -g
all: Code3 Code2 Code1
Code3: Code3.cc Tools.cc
$(CXX) $(CXXFLAGS) $^ -o $@
Code2: Code2.cc Tools.cc
$(CXX) $(CXXFLAGS) $^ -o $@
# Add a target-specific addition to CXXFLAGS for Code1:
Code1: CXXFLAGS += -I/opt/local/include
Code1: Code1.cc Tools.cc
$(CXX) $(CXXFLAGS) $^ -o $@
```
Note that you may also want to switch to using pattern rules, rather than explicitly declaring rules for all your (very similar) targets. For example, you could replace the Code1, Code2 and Code3 rules with just this:
```
%: %.cc Tools.cc
$(CXX) $(CXXFLAGS) $^ -o $@
```
**EDIT**: In response to the updated post regarding the specific error seen: it looks like you are probably getting burned because you include `-Wall -Werror` in the flags when you are using the Makefile, but not on the command line. `-Wall` causes g++ to be a lot more picky about syntax; `-Werror` causes g++ to promote things that are normally just warnings into full-blown errors. Since the errors are being reported against code in a third-party library, perhaps you can get by with less strict warning options (i.e., remove `-Wall`); or perhaps you need to switch to a newer version of the compiler that can handle the constructs without complaining; or perhaps you just need to pass `-Wno-long-long` so g++ stops diagnosing the `long long` constants (note that `-std=c99` is a C-only option and does not apply when compiling C++).
Hope that helps,
Eric Melski | Howto incorporate -I in makefile | [
"",
"c++",
"boost",
"makefile",
""
] |
I'm learning about Events / Delegates in C#. Could I ask your opinion on the naming/coding style I've chosen (taken from the Head First C# book)?
I am teaching a friend about this tomorrow, and am trying to come up with the most elegant way of explaining the concepts. (I think the best way to understand a subject is to try to teach it!)
```
class Program
{
static void Main()
{
// setup the metronome and make sure the EventHandler delegate is ready
Metronome metronome = new Metronome();
// wires up the metronome_Tick method to the EventHandler delegate
Listener listener = new Listener(metronome);
metronome.OnTick();
}
}
```
---
```
public class Metronome
{
// a delegate
// so every time Tick is called, the runtime calls another method
// in this case Listener.metronome_Tick
public event EventHandler Tick;
public void OnTick()
{
while (true)
{
Thread.Sleep(2000);
// because using EventHandler delegate, need to include the sending object and eventargs
// although we are not using them
Tick(this, EventArgs.Empty);
}
}
}
```
---
```
public class Listener
{
public Listener(Metronome metronome)
{
metronome.Tick += new EventHandler(metronome_Tick);
}
private void metronome_Tick(object sender, EventArgs e)
{
Console.WriteLine("Heard it");
}
}
```
n.b. Code is refactored from <http://www.codeproject.com/KB/cs/simplesteventexample.aspx> | There are a few points that I would mention:
Metronome.OnTick doesn't seem to be named correctly. Semantically, "OnTick" tells me it will be called when it "Tick"s, but that isn't really what's happening. I would call it "Go" instead.
The typically accepted model, however, would be to do the following. `OnTick` is a virtual method that raises the event. This way, you can override the default behavior in inherited classes easily, and call the base to raise the event.
```
class Metronome
{
public event EventHandler Tick;
protected virtual void OnTick(EventArgs e)
{
//Raise the Tick event (see below for an explanation of this)
var tickEvent = Tick;
if(tickEvent != null)
tickEvent(this, e);
}
public void Go()
{
while(true)
{
Thread.Sleep(2000);
OnTick(EventArgs.Empty); //Raises the Tick event
}
}
}
```
---
Also, I know this is a simple example, but if there are no listeners attached, your code will throw on `Tick(this, EventArgs.Empty)`. You should at least include a null guard to check for listeners:
```
if(Tick != null)
Tick(this, EventArgs.Empty);
```
However, this is still vulnerable in a multithreaded environment if the listener is unregistered between the guard and the invocation. The best would be to capture the current listeners first and call them:
```
var tickEvent = Tick;
if(tickEvent != null)
tickEvent(this, EventArgs.Empty);
```
I know this is an old answer, but since it's still gathering upvotes, here's the C# 6 way of doing things. The whole "guard" concept can be replaced with a conditional method call and the compiler does indeed do the Right Thing(TM) in regards to capturing the listeners:
```
Tick?.Invoke(this, EventArgs.Empty);
``` | Microsoft has actually written extensive set of naming guidelines and put it in the MSDN library. You can find the articles here: [Naming Guidelines](https://learn.microsoft.com/en-us/dotnet/standard/design-guidelines/naming-guidelines)
Aside from the general capitalization guidelines, here is what it has for 'Events' on the page [Names of Type Members](https://learn.microsoft.com/en-us/dotnet/standard/design-guidelines/names-of-type-members#names-of-events):
> ✔️ DO name events with a verb or a verb phrase.
>
> Examples include `Clicked`, `Painting`, `DroppedDown`, and so on.
>
> ✔️ DO give events names with a concept of before and after, using the present and past tenses.
>
> For example, a close event that is raised before a window is closed would be called `Closing`, and one that is raised after the window is closed would be called `Closed`.
>
> ❌ DO NOT use "Before" or "After" prefixes or postfixes to indicate pre- and post-events. Use present and past tenses as just described.
>
> ✔️ DO name event handlers (delegates used as types of events) with the "EventHandler" suffix, as shown in the following example:
>
> ```
> public delegate void ClickedEventHandler(object sender, ClickedEventArgs e);
> ```
>
> ✔️ DO use two parameters named `sender` and `e` in event handlers.
>
> The sender parameter represents the object that raised the event. The sender parameter is typically of type `object`, even if it is possible to employ a more specific type.
>
> ✔️ DO name event argument classes with the "EventArgs" suffix. | Events - naming convention and style | [
"",
"c#",
"events",
"delegates",
""
] |
Yesterday I wrote the following c# code (shortened a bit for legibility):
```
var timeObjects = ( from obj in someList
where ( obj.StartTime != null )
select new MyObject()
{
StartTime= obj.StartTime.Value,
EndTime = obj.EndTime
} )
```
So each item has a startTime and some have an EndTime (others have null as EndTime).
If both start and endtime are known I wanted to calculate the elapsed time:
```
foreach ( var item in timeObjects)
{
if ( item.EndTime == null )
{
item.elapsed = 0;
}
else
{
item.elapsed = ( item.EndTime.Value - item.StartTime).Minutes;
}
}
```
But this doesn't work! The timeObjects collection never changes.
If I say:
```
var timeObjects = ( from obj in someList
where ( obj.StartTime != null )
select new MyObject()
{
StartTime= obj.StartTime.Value,
EndTime = obj.EndTime
} ).ToList();
foreach ( var item in timeObjects)
{
if ( item.EndTime == null )
{
item.elapsed = 0;
}
else
{
item.elapsed = ( item.EndTime.Value - item.StartTime).Minutes;
}
}
//(only change is the ToList() at the end of the linq statement)
```
it does work.
I'd very much like to know why this is? | Your `timeObjects` is a delayed-execution enumerable. If you enumerate over the list twice, the results will actually be evaluated twice, creating new objects.
When you performed ToList(), it created a local copy of the RESULTS of that query/enumerable, which is why you saw the changes. This sort of LINQ query doesn't create any sort of list under the covers. The query itself isn't performed until you enumerate over it. All you're doing in the (from ... select) statement is creating the query definition. | Your original timeObjects definition defines a LINQ expression that gets lazily evaluated, so every time you try to go over the timeObjects enumerable, it will create new instances of MyObject. | Why can't I change elements from a linq IEnumerable in a for loop? | [
"",
"c#",
"linq",
""
] |
Does anyone know a smooth / fast way of removing transparency from e.g. PNGs/TIFFs etc. and replacing it with a white background?
Basically what I need this for is I need to create PDF/A compatible images, which may, according to the spec, have -no- transparency (and therefore a fixed white background is fine).
Any ideas / suggestions?
Cheers & thanks,
-Jörg | You could create a bitmap the same size as the png, draw a white rectangle and then draw the image on top of it.
```
void RemoveImageTransparency(string file) {
Bitmap src = new Bitmap(file);
Bitmap target = new Bitmap(src.Size.Width, src.Size.Height);
Graphics g = Graphics.FromImage(target);
// FillRectangle paints a solid white background;
// DrawRectangle would only draw a one-pixel outline
g.FillRectangle(Brushes.White, 0, 0, target.Width, target.Height);
g.DrawImage(src, 0, 0);
target.Save("Your target path");
}
``` | You have to remove the alpha channel. Otherwise you'll still have a transparent image - just without transparent areas.
```
class Program
{
static void Main(string[] args)
{
//this also works for different file formats
ReplaceTransparency(@"C:\Y\transparent.png", System.Drawing.Color.White).Save(@"C:\Y\no_transparency.png");
ReplaceTransparency(@"C:\Y\transparent.gif", System.Drawing.Color.White).Save(@"C:\Y\no_transparency.gif");
}
public static System.Drawing.Bitmap ReplaceTransparency(string file, System.Drawing.Color background)
{
return ReplaceTransparency(System.Drawing.Image.FromFile(file), background);
}
public static System.Drawing.Bitmap ReplaceTransparency(System.Drawing.Image image, System.Drawing.Color background)
{
return ReplaceTransparency((System.Drawing.Bitmap)image, background);
}
public static System.Drawing.Bitmap ReplaceTransparency(System.Drawing.Bitmap bitmap, System.Drawing.Color background)
{
/* Important: you have to set the PixelFormat to remove the alpha channel.
* Otherwise you'll still have a transparent image - just without transparent areas */
var result = new System.Drawing.Bitmap(bitmap.Size.Width, bitmap.Size.Height, System.Drawing.Imaging.PixelFormat.Format24bppRgb);
var g = System.Drawing.Graphics.FromImage(result);
g.Clear(background);
g.CompositingMode = System.Drawing.Drawing2D.CompositingMode.SourceOver;
g.DrawImage(bitmap, 0, 0);
return result;
}
}
``` | Remove transparency in images with C# | [
"",
"c#",
".net",
"image",
"transparency",
""
] |
I have a program in which I want to be able to store certain data (dynamically allocated blocks), on disk for reduced memory usage and persistence.
My first thought was to write my own custom allocator which managed the content of files on the disk, but I want to see what alternatives there are too.
I've looked into custom memory allocators and topics on object serialization but there are subtle differences, both good and bad, when adapting those principles to managing the address space of a file.
In this situation:
1. Memory is accessed only through IO (read/write) functions rather than directly
2. No objects(methods/pointers) are stored, only data.
3. The size of a file is not static, so it should grow when needed rather than being large and static
4. For my uses, it is acceptable to re-map existing pointers after defragmentation
Because the data is not of a fixed size, most database implementations seem not well suited.
I ask, what is the best approach for this problem? Should I implement a simple memory allocator which treats a file as the heap?
For reference, I'm using C++ on embedded devices.
---
Edit: I've implemented my own memory manager which uses buddy memory allocation and block sizes of powers of two. I'm satisfied that it is correct and does not leak, coalesces free blocks, and can do a 'stop the world' defragmentation.
The problem is, as expected, there is quite a bit of internal and external fragmentation. I'm not an expert in this field and although I find it fascinating (I'm still a student), I wonder if there are any other implementations that have done the same or a similar thing? Surely I can't be the only one?
---
Some helpful but so far incompatible topics are:
[mmap](http://en.wikipedia.org/wiki/Mmap)
To be honest I haven't used mmap, but it addresses the file I/O, not the management of the file's address space.
[BOOST:serialization](http://www.boost.org/doc/libs/1_38_0/libs/serialization/doc/index.html) I have a (probably unjustified) reluctance to use boost libraries at the moment.
[STXXL](http://stxxl.sourceforge.net/) Interesting, but doesn't address variable-size memory allocation
[Doug Lea Memory Allocator](http://g.oswego.edu/dl/html/malloc.html) Has very good insights into issues with memory allocators, but I'm not in a position to try and make my own implementation | Your two goals are to reduce memory usage and persist your data. That definitely sounds like a job for a **database**. But then you say
> Because the data is not of a fixed
> size, most database implementations
> seem not well suited.
I think you'll be interested in this [distinctive feature of SQLite](http://www.sqlite.org/different.html) (a very lightweight cross-platform database with public domain source code):
> **Variable-length records**
>
> ...
>
> SQLite, in contrast, use only the amount of disk space actually needed
> to store the information in a row. If
> you store a single character in a
> VARCHAR(100) column, then only a
> single byte of disk space is consumed.
> (Actually two bytes - there is some
> overhead at the beginning of each
> column to record its datatype and
> length.)
It also is a [good choice for embedded development](http://www.sqlite.org/whentouse.html):
> **Embedded devices and applications**
>
> Because an SQLite database requires
> little or no administration, SQLite is
> a good choice for devices or services
> that must work unattended and without
> human support. SQLite is a good fit
> for use in cellphones, PDAs, set-top
> boxes, and/or appliances. It also
> works well as an embedded database in
> downloadable consumer applications. | > I've implemented my own memory manager which uses buddy memory allocation and block sizes of powers of two. I'm satisfied it is correct and has doesnt leak, coalesses free blocks and can do a 'stop the world' defragmentation.
That's a great first step. Once you have a working custom memory allocator you can of course do better!
> The problem is, as expected, there is quite a bit of internal (power-of-2 blocks) and external fragmentation. I'm not an expert in this field and although I find it fascinating (I'm still a student), I wonder if there are any other implementations that have done the same or a similar thing? Surely I can't be the only one?
The power of two is a generic approach. However, note that this may not be the best simply because your allocation pattern may not follow the same geometric progression. In such a case it is best to test as much as you can and see what block sizes are getting allocated the most and optimize accordingly.
I would also like to suggest this a wonderful article by Andrei Alexandrescu and Emery Berger on the topic of memory allocation: [Policy-Based Memory Allocation](http://www.ddj.com/cpp/184402039) and the latter's work in particular: [The Hoard Memory Allocator](http://www.cs.umass.edu/~emery/hoard/).
If possible go through the references mentioned at the end of that article. They may just as well provide additional insights. | Disk Based Dynamic Memory Allocation | [
"",
"c++",
"memory-management",
""
] |
I wish to call a method in my code-behind from JavaScript; I sort of know how to do it... I must call `__DoPostBack`, passing the name of the control and parameters...
But what if an event doesn't exist, i.e. NO CONTROL. Really what I am trying to do is call an event... but the event doesn't exist, as there is no control associated with it...
I sort of could do this:
```
If IsPostBack Then
If Request("__EVENTTARGET").Trim() = "CleanMe" Then
CleanMe()
End If
.....
```
But this means I must do it manually. Can I not wire up an event? Otherwise I will have loads of different Ifs (i.e. if this was passed then call this, etc.).
Any ideas?
Thanks | You may be able to use a PageMethod to call your codebehind function, here is a link to an example: <http://blogs.microsoft.co.il/blogs/gilf/archive/2008/10/04/asp-net-ajax-pagemethods.aspx> | If you want to use \_\_doPostBack(), you must have a control to receive the command. However, you don't have to explicitly wire up an event to handle it. If you want the \_\_doPostBack() to invoke, say, Foo(), do the following:
```
class MyControl : IPostBackEventHandler
{
// interface implementations must be public
public void RaisePostBackEvent(string eventArgument)
{
Foo();
}
}
```
Calling \_\_doPostBack() will invoke the RaisePostBackEvent method on the targeted control. | __doPostback - calling code behind events from JavaScript? | [
"",
"asp.net",
"javascript",
"events",
"postback",
""
] |
In the Northwind Starters Kit, Primary keys from database are mapped to Strings in C#.
Is this good practice? And if so, why?
thx, Lieven Cardoen
ps: Sorry for the maybe wrong question...
In Northwind Starters Kit some tables have an auto-incremental primary key with datatype int and others have a non-auto-incremental primary key with datatype nchar(5). Why is this? ***Well, apparently some primary keys are just codes (nchar(5) format). So sorry to have used your time.***
I thought that a datatype int was mapped to C# string which seemed very wrong to me (but it isn't the case). | For pure efficiency, using an Int as your primary key is better simply due to the support for comparison of Ints at the machine code level. Strings are compared using algorithms implemented at the database level. Unless your strings are very short, an Integer key will take up less space on the page as well (db page).
**Update**: Based on the other answer now on the board, I'm not sure if I've understood your question correctly. Are you asking whether it is better to use an Integer as your key compared to a string (where either could be chosen)? Or are you asking whether your C# type should match your database type? I'm assuming the former...and would be very surprised if it is the latter - whose answer I would think is obvious.
**Update**: Lieven has now clarified his request to say that he was, in fact, asking whether an Int or an nchar field would be better as an index so my original take on this question was correct.
To add to my answer, Lieven, it is almost always better to have an Int as your PK. The exception is when there is a *natural* key that can be captured as a short character string (e.g. in an accounting system where "Item" entries are char strings). The reasons are threefold.
First, Integers are represented as a native machine type (32 or 64-bit word) and manipulated via machine-native operations whereas strings are not but must be compared using a char-by-char approach. So, for example, when traversing the PK Index (usually some variant of a BTree) to locate a record, the comparison operation at each node is a single operation. Is this a huge thing? Probably not unless you are working with a truly massive database or transaction load. If you have a natural character key then, by all means, use it! However, if your "key" is the first five letters of the last name plus the first initial plus a number to make it unique, then you'd obviously be far better off with an Int field.
Second, Integers simply take up less room than almost any char key (except char(1) assuming the use of Unicode). And it isn't just the room in the main table page, remember that the index fields are represented in the Index as well. Again, is this a big deal? Not really, unless you are, again, working with a massive database.
Lastly, our choice of keys often has effects elsewhere. So, for example, if you use the primary key on one table as the foreign key on another, both of the above effects are magnified when you are inserting or updating records in the table using the foreign key.
To sum: use the key that is most natural. However, if you have a choice between Int and Char and both are essentially arbitrary, go with the Int over the Char. | It all depends on the data type of the column in the database.
Good practice is to use a compatible/corresponding data type. If the database uses int, use int. If the database uses uniqueidentifier, use Guid. If the database uses nvarchar, use string.
Anything else will give problems down the line. Guaranteed. | Database Primary Key C# mapping - String or int | [
"",
"c#",
"database",
"architecture",
"mapping",
"primary-key",
""
] |
I want to know whether there is any way to have multiple footer rows in an h:dataTable (or t:dataTable). I want to do something like this (which does not compile):
```
<h:dataTable ... var="row">
<h:column>
<f:facet name="header">
Header
</f:facet>
<h:outputText value="#{row.value}"/>
<f:facet name="footer">
FirstFooter
</f:facet>
<f:facet name="footer">
Second Footer in a new tr
</f:facet>
</h:column>
</h:dataTable>
```
With the result being something like this:
```
<table border=1 style="border:1px solid">
<thead><tr><th>Header</th></tr></thead>
<tbody><tr><td>Data Value 1</td></tr>
<tr><td>Data Value 2</td></tr>
<tr><td>Data Value ...</td></tr>
<tr><td>Data Value n</td></tr>
</tbody>
<tfoot>
<tr><td>FirstFooter</td></tr>
<tr><td>Second Footer in a new tr</td></tr>
</tfoot>
</table>
```
Any ideas how best to accomplish this?
Thanks.
EDIT:
It would be great if I could avoid using a custom control/custom renderer | The solution I ended up using was a custom renderer associated with t:datatable (the tomahawk extension to datatable)
```
public class HtmlMultiHeadTableRenderer extends HtmlTableRenderer
```
I only had to override one method
```
protected void renderFacet(FacesContext facesContext,
ResponseWriter writer, UIComponent component, boolean header)
```
within which I looked for facets with names header, header2, header3 ... headerN (I stop looking as soon as one is missing), and the same with footer. This has allowed me to write code like
```
<h:dataTable ... var="row">
<h:column>
<f:facet name="header">
Header
</f:facet>
<f:facet name="header2">
A second Tr with th's
</f:facet>
<h:outputText value="#{row.value}"/>
<f:facet name="footer">
FirstFooter
</f:facet>
<f:facet name="footer2">
Second Footer in a new tr
</f:facet>
</h:column>
</h:dataTable>
```
This took about one day (with some other extensions, like allowing colspans based on groups) to code and document. | As McDowell said, panelGrid will do what you're asking without custom controls, unless I'm missing something.
```
<f:facet name="footer">
<h:panelGrid columns="1">
<h:outputText value="FirstFooter" />
<h:outputText value="Second Footer in a new tr" />
</h:panelGrid>
</f:facet>
```
This renders as follows
```
<tfoot>
<tr>
<td colspan="3">
<table>
<tbody>
<tr>
<td>FirstFooter</td>
</tr>
<tr>
<td>Second Footer in a new tr</td>
</tr>
</tbody>
</table>
</td>
</tr>
</tfoot>
```
You can use tags other than outputText in the panelGrid as well to get whatever effect you're looking for (just make sure that you wrap them in a panelGroup or something similar). | Multiple Footer Rows in a JSF dataTable | [
"",
"java",
"jsf",
"datatable",
""
] |
In the code below I would like array to be defined as an array of size x when the Class constructor is called. How can I do that?
```
class Class
{
public:
int array[];
Class(int x) : ??? { }
};
``` | You can't initialize the size of an array with a non-const dimension that can't be calculated at compile time (at least not in current C++ standard, AFAIK).
I recommend using `std::vector<int>` instead of array. It provides array like syntax for most of the operations. | You folks have so overcomplicated this. Of course you can do this in C++. It is fine for him to use a normal array for efficiency. A vector only makes sense if he doesn't know the final size of the array ahead of time, i.e., it needs to grow over time.
If you can know the array size one level higher in the chain, a templated class is the easiest, because there's no dynamic allocation and no chance of memory leaks:
```
template < int ARRAY_LEN > // you can even set a default value here as of C++11
class MyClass
{
int array[ARRAY_LEN]; // Don't need to alloc or dealloc in structure! Works like you imagine!
};
// Then you set the length of each object where you declare the object, e.g.
MyClass<1024> instance; // But only works for constant values, i.e. known to compiler
```
If you can't know the length at the place you declare the object, or if you want to reuse the same object with different lengths, or you must accept an unknown length, then you need to allocate it in your constructor and free it in your destructor... (and in theory always check to make sure it worked...)
```
class MyClass
{
int *array;
public: // without this, the constructor would be private and the class unusable
MyClass(int len) { array = (int *)calloc(len, sizeof(int)); assert(array); }
~MyClass() { free(array); array = NULL; } // DON'T FORGET TO FREE UP SPACE!
};
``` | Determine array size in constructor initializer | [
"",
"c++",
"arrays",
"constructor",
"initialization",
""
] |
If I include a JavaScript file in my HTML page, do the variables declared in my JavaScript file also have scope in my `<script />` tags in my HTML page? For example, in my included JS file, I declare a variable:
```
var myVar = "test";
```
Then inside my HTML page, what will this produce (if it's after my include script tag)?
```
alert(myVar);
``` | If you declare the variable *outside of any function* as
```
var myVar = 'test';
```
or at *any* location as
```
myVar = 'test';
```
or
```
window.myVar = 'test';
```
It should be added to the Global Object (window) and be available anywhere as
```
alert(myVar);
```
or
```
alert(window.myVar);
```
or
```
alert(window['myVar']);
``` | It will produce an alert containing "test".
All variables declared at the top level in JavaScript share the same scope. If you want to use variables in one file that won't clash with another, then you can use an anonymous function to introduce a new scope:
```
var myVar = "something else";
(function () {var myVar = "test"; alert(myVar)})();
alert(myVar);
```
**edit**: As BYK points out, you can expand this into something that resembles a full fledged namespace, by assigning an object literal:
```
var MyNamespace = (function () {
var myVar = "something";
return { alert: function() { alert(myVar) },
setVar: function(value) { myVar = value } }
})();
``` | HTML JavaScript Include File Variable Scope | [
"",
"javascript",
"html",
""
] |
I'm looking into options for doing asynchronous event dispatching in a component that has many subscribers to its events. In perusing the options, I ran across this example:
```
public event ValueChangedEvent ValueChanged;
public void FireEventAsync(EventArgs e)
{
Delegate[] delegates = ValueChanged.GetInvocationList();
foreach (Delegate d in delegates)
{
ValueChangedEvent ev = (ValueChangedEvent)d;
ev.BeginInvoke(e, null, null);
}
}
```
Beyond the older syntax (the sample was from .NET 1.1), it looks to me like this is a serious resource leak. There's no completion method, no polling for completion, or any other way that `EndInvoke` will be called.
My understanding is that every `BeginInvoke` *must have* a corresponding `EndInvoke`. Otherwise there are pending `AsyncResult` object instances floating around, along with (potentially) exceptions that were raised during the asynchronous events.
I realize that it's easy enough to change that by supplying a callback and doing an `EndInvoke`, but if I don't need to . . .
Handling the asynchronous exceptions is another matter entirely, and, combined with the need to synchronize with the UI thread (i.e. `InvokeRequired`, etc.), could very well tank the whole idea of doing these asynchronous notifications.
So, two questions:
1. Am I correct in believing that every `BeginInvoke` requires a corresponding `EndInvoke`?
2. Beyond what I've noted above, are there other pitfalls to doing asynchronous event notifications in Windows Forms applications? | A call to `BeginInvoke()` should be paired with a `EndInvoke()` but not doing it will not result in a resource leak. The `IAsyncResult` returned by `BeginInvoke()` will be garbage collected.
The biggest pitfall in this code is you are highly exposed to exceptions terminating the application. You might want to wrap the delegate invocation in an exception handler and put some thought into how you want to propagate the exceptions that happen (report the first, produce an aggregate exception, etc).
Invoking a delegate using `BeginInvoke()` will take a thread from the thread pool to run the event handler. This means that the handlers will not run on the main UI thread. This might make some event handler scenarios harder to handle (e.g. updating the UI). Handlers would need to realize they need to call `SynchronizationContext.Send()` or `.Post()` to synchronize with the primary UI thread. Of course all other multi-thread programming pitfalls also apply. | After thinking about this for a while, I came to the conclusion that it's probably a bad idea to do asynchronous events in Windows Forms controls. Windows Forms events should be raised on the UI thread. Doing otherwise presents an undue burden on clients, and possibly makes a mess with `AsyncResult` objects and asynchronous exceptions.
It's cleaner to let the clients fire off their own asynchronous processing (using `BackgroundWorker` or some other technique), or handle the event synchronously.
There are exceptions, of course. `System.Timers.Timer`, for example, raises the `Elapsed` event on a thread pool thread. But then, the initial notification comes in on a pool thread. It looks like the general rule is: raise the events on the same thread that got the initial notification. At least, that's the rule that works best for me. That way there's no question about leaking objects. | Raising events asynchronously | [
"",
"c#",
".net",
"winforms",
""
] |
I have a C++ program on Linux that crashes after some time with the message:
```
*** glibc detected *** free(): invalid pointer: 0x41e0ce94 ***
```
Inside the program I make extensive use of containers. They have to store objects of a simple class.
**EDIT** 2009-4-17:
In the meantime it seems clear that the error has nothing to do with the simple class. The error still occurs if I change the containers to hold other datatypes. The problem must be somewhere else in my code, I'm trying to figure it out at the moment... | Consider using a std::string to hold the string value instead of a raw char pointer. Then you won't have to worry about managing the string data in your assignment, copy, and destruction methods. Most likely your problem lies there.
Edit: There's no issue with the newer class you posted, and no problem with the first version if you're only using the char \* to point to string constants. The problem lies elsewhere in the program or with the way you're using the class. You'll have to spend more time digging in the debugger and/or valgrind to track down the problem. I would figure out what is pointed to at the specified address and try to determine why it's being freed twice. | At a guess, there is something wrong in your copy ctor, assignment op or destructor - you need to show the code for those.
**Edit:** Just noticed you don't have an assignment operator - assuming your copy constructor & destructor are OK, you need an assignment operator too, as the std:: containers will use it. | problem with containers: *** glibc detected *** free(): invalid pointer: 0x41e0ce94 *** | [
"",
"c++",
"glibc",
""
] |
Is there a way I can put a console application in the system tray when minimizing? | A console has no window to minimize by itself. It runs in a command prompt window. You might hook the window messages and hide the window on minimize. In your application it's possible to add a tray icon just the same as you would in a Windows application. Well, somehow this **smells**...
But: I'm not sure why you want to do this. A console application is by design different from a Windows application. Hence, maybe it's an option to change the app to be a Windows Forms application? | Yes, you can do this.
Create a Windows Forms application and add a [NotifyIcon component](http://msdn.microsoft.com/en-us/library/7yyz6s5c.aspx).
Then use the following methods ([found on MSDN](http://msdn.microsoft.com/en-us/library/ms682073(VS.85).aspx)) to allocate and display a Console
```
[DllImport("kernel32.dll")]
public static extern Boolean AllocConsole();
[DllImport("kernel32.dll")]
public static extern Boolean FreeConsole();
[DllImport("kernel32.dll")]
public static extern Boolean AttachConsole(Int32 ProcessId);
```
When your console is onscreen, capture the minimize button click and use it to hide the console window and update the Notify icon. You can find your window using the following methods ([found on MSDN](http://msdn.microsoft.com/en-us/library/ms633499.aspx)):
```
[DllImport("user32.dll", SetLastError = true)]
static extern IntPtr FindWindow(string lpClassName, string lpWindowName);
// Find window by Caption only. Note you must pass IntPtr.Zero as the first parameter.
// Also consider whether you're being lazy or not.
[DllImport("user32.dll", EntryPoint="FindWindow", SetLastError = true)]
static extern IntPtr FindWindowByCaption(IntPtr ZeroOnly, string lpWindowName);
```
Be sure to call FreeConsole whenever you're ready to close the app. | .Net Console Application in System tray | [
"",
"c#",
".net-3.5",
"console",
"system-tray",
""
] |
How can I instantiate the type T inside my `InstantiateType<T>` method below?
I'm getting the error: **'T' is a 'type parameter' but is used like a 'variable'.**:
## (SCROLL DOWN FOR REFACTORED ANSWER)
```
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
namespace TestGeneric33
{
class Program
{
static void Main(string[] args)
{
Container container = new Container();
Console.WriteLine(container.InstantiateType<Customer>("Jim", "Smith"));
Console.WriteLine(container.InstantiateType<Employee>("Joe", "Thompson"));
Console.ReadLine();
}
}
public class Container
{
public T InstantiateType<T>(string firstName, string lastName) where T : IPerson
{
T obj = T();
obj.FirstName(firstName);
obj.LastName(lastName);
return obj;
}
}
public interface IPerson
{
string FirstName { get; set; }
string LastName { get; set; }
}
public class Customer : IPerson
{
public string FirstName { get; set; }
public string LastName { get; set; }
public string Company { get; set; }
}
public class Employee : IPerson
{
public string FirstName { get; set; }
public string LastName { get; set; }
public int EmployeeNumber { get; set; }
}
}
```
## REFACTORED ANSWER:
Thanks for all the comments, they got me on the right track, this is what I wanted to do:
```
using System;
namespace TestGeneric33
{
class Program
{
static void Main(string[] args)
{
Container container = new Container();
Customer customer1 = container.InstantiateType<Customer>("Jim", "Smith");
Employee employee1 = container.InstantiateType<Employee>("Joe", "Thompson");
Console.WriteLine(PersonDisplayer.SimpleDisplay(customer1));
Console.WriteLine(PersonDisplayer.SimpleDisplay(employee1));
Console.ReadLine();
}
}
public class Container
{
public T InstantiateType<T>(string firstName, string lastName) where T : IPerson, new()
{
T obj = new T();
obj.FirstName = firstName;
obj.LastName = lastName;
return obj;
}
}
public interface IPerson
{
string FirstName { get; set; }
string LastName { get; set; }
}
public class PersonDisplayer
{
private IPerson _person;
public PersonDisplayer(IPerson person)
{
_person = person;
}
public string SimpleDisplay()
{
return String.Format("{1}, {0}", _person.FirstName, _person.LastName);
}
public static string SimpleDisplay(IPerson person)
{
PersonDisplayer personDisplayer = new PersonDisplayer(person);
return personDisplayer.SimpleDisplay();
}
}
public class Customer : IPerson
{
public string FirstName { get; set; }
public string LastName { get; set; }
public string Company { get; set; }
}
public class Employee : IPerson
{
public string FirstName { get; set; }
public string LastName { get; set; }
public int EmployeeNumber { get; set; }
}
}
``` | Declare your method like this:
```
public string InstantiateType<T>(string firstName, string lastName)
where T : IPerson, new()
```
Notice the additional constraint at the end. Then create a `new` instance in the method body:
```
T obj = new T();
``` | Couple of ways.
Without specifying the type must have a constructor:
```
T obj = default(T); //which will produce null for reference types
```
With a constructor:
```
T obj = new T();
```
But this requires the clause:
```
where T : new()
``` | In C#, how to instantiate a passed generic type inside a method? | [
"",
"c#",
"generics",
""
] |
I've got a Mac server and I'm building PHP code to allow users to upload images, documents, and even video files. Researching this has definitely gotten me nervous, I want the uploaded content to be virus free.
Is building something myself going to be a huge challenge? Would you do it, or would you find some OS or OTS product? (And do you know of any you can recommend)? | Conceptually, what you're talking about is pretty straightforward. Accepting and processing uploads is pretty simple, it's definitely not something I think you need to worry about buying a pre-built solution for.
Generally things like images and videos can't really have "viruses" (unless the viewer application is really poor and lets them run code somehow - also known as "Internet Explorer"), but it's not really difficult to virus-scan them anyway if you'd like to. Just find a command-line scanner that can run on the server (something like [Clam AV](http://en.wikipedia.org/wiki/Clam_AV)), and whenever a file is uploaded, run it through the scanner and reject the upload (and log the event) if it fails the scan. | If you're uploading very large files, you might also consider a Flash upload/status bar so that users can see how much of the file is uploaded. [SWFUpload](http://swfupload.org/) is a good choice for that.
You can scan files with ClamAV by doing something like this in PHP:
```
$out = '';
$int = -1;
exec('/usr/local/bin/clamscan --stdout /path/to/file.ext', $out, $int);
if ($int == 0)
{
print('No virus!');
}
/*
Return codes from clamscan:
0 : No virus found.
1 : Virus(es) found.
40: Unknown option passed.
50: Database initialization error.
52: Not supported file type.
53: Can't open directory.
54: Can't open file. (ofm)
55: Error reading file. (ofm)
56: Can't stat input file / directory.
57: Can't get absolute path name of current working directory.
58: I/O error, please check your file system.
59: Can't get information about current user from /etc/passwd.
60: Can't get information about user '' from /etc/passwd.
61: Can't fork.
62: Can't initialize logger.
63: Can't create temporary files/directories (check permissions).
64: Can't write to temporary directory (please specify another one).
70: Can't allocate memory (calloc).
71: Can't allocate memory (malloc).
*/
``` | Building PHP uploader for gif/jpg/png/pdf/doc, wmv files, feasible, or should I buy something? | [
"",
"php",
"security",
"file",
"upload",
""
] |
We started using Google Maps on our web application rather extensively. It worked fine at the beginning, but as we add more markers we find that the performance is not quite there. Although I'm quite sure we don't use it in the most efficient way.
I am looking for information about Google Maps best practices and tips'n tricks. Any suggestions? | You might find some good ideas in [this article](http://www.svennerberg.com/2009/01/handling-large-amounts-of-markers-in-google-maps/), which compares several methods of handling large amounts of markers.
Marker Manager has some limitations, depending on what you're trying to accomplish; for instance, it doesn't allow every marker to be available from every zoom level. I created a clustering function based on the principles discussed in [this tutorial](http://www.appelsiini.net/2008/11/introduction-to-marker-clustering-with-google-maps). It uses the Static Maps API in PHP, but the principles behind the clustering can be used however you want.
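For reference, the grid-based idea behind those clustering tutorials can be sketched in a few lines of plain JavaScript. This is an illustrative stand-alone function, not the Google Maps API: the markers here are plain `{lat, lng}` objects, and the grid cell size is an assumed parameter.

```javascript
// Grid-based clustering sketch: bucket markers into cells, one cluster per cell.
function clusterMarkers(markers, cellSize) {
  var cells = {};
  markers.forEach(function (m) {
    var key = Math.floor(m.lat / cellSize) + ':' + Math.floor(m.lng / cellSize);
    (cells[key] = cells[key] || []).push(m);
  });
  return Object.keys(cells).map(function (key) {
    var members = cells[key];
    return {
      count: members.length,
      // Place the cluster at the centroid of its members.
      lat: members.reduce(function (s, m) { return s + m.lat; }, 0) / members.length,
      lng: members.reduce(function (s, m) { return s + m.lng; }, 0) / members.length
    };
  });
}

var clusters = clusterMarkers(
  [{lat: 10.1, lng: 20.2}, {lat: 10.2, lng: 20.3}, {lat: 50.0, lng: 60.0}], 1);
console.log(clusters.length); // 2
```

You would then render one marker per cluster instead of one per point, which is the main performance win with large data sets.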
**Update:** This clustering utility was just released: [MarkerClusterer](http://gmaps-utility-library.googlecode.com/svn/trunk/markerclusterer/) | Use [Marker Manager](http://googlemapsapi.blogspot.com/2006/11/marker-manager.html). | Google Maps Best Practices? | [
"",
"javascript",
"google-maps",
""
] |
Let's say that I have a binary that I am building, and I include a bunch of files that are never actually used, and then link against the libraries described by those include files (again, these libraries are never used).
What are the negative consequences of this, beyond increased compile time? | A few I can think of are **namespace pollution** and **binary size** | In addition to compile time; Increased complexity, needless distraction while debugging, a maintenance overhead.
Apart from that, nothing. | What are the negative consequences of including and/or linking things that aren't used by your binary? | [
"",
"c++",
"linux",
"linker",
"g++",
"ld",
""
] |
Current situation:
* I have the *current version* of my MVC Framework which uses classes as controllers.
* I have some "vintage" modules from my *old* MVC Framework which uses simple, flat includes as controllers.
Much simplified that means:
**New Version:**
```
<?PHP
class blaController extends baseController {
private $intVar;
function dosomethingFunction() {
$this->intVar = 123;
$this->view('myView');
}
}
?>
```
**Old Version:**
```
<?PHP
$globalVar = 123;
// view "controllername" is automatically shown
?>
```
I'm now trying to write a wrapper to be able to use my old controllers in my new MVC without having to rewrite everything. To do so, I have a "wrapper" controller:
```
class wrapController extends baseController {
function dosomethingFunction() {
require 'old_dosomething.function.php';
$this->view('old_dosomething_view');
}
}
```
(Once again: This is VERY, VERY simplified - just to get the idea over. Not actual code.)
The problem with that approach is, that the previously global variable $globalVar now only exists inside of the method "dosomethingFunction" and cannot be accessed by the view.
This wouldn't be the case if I could force the require to behave as "in global scope" so that $globalVar would once again be available in global scope.
So: Is there some way to achieve "**require\_global**" or something similar?
(One solution for my problem would be to modify my old controllers to start with a bunch of "global" commands, but I'd prefer a solution where I don't have to change so much of that old code.)
(Note: Please don't tell me that GLOBALS are bad. It totally misses the point of this question. Just accept that it is a requirement to keep some old code working in a newer, cleaner environment.) | You can add local variables defined within dosomethingFunction() to global scope:
```
class wrapController extends baseController {
function dosomethingFunction() {
require 'old_dosomething.function.php';
//begin added code
$vararr = get_defined_vars();
foreach($vararr as $varName => $varValue)
$GLOBALS[$varName] = $varValue;
//end added code
$this->view('old_dosomething_view');
}
}
```
Note that, for this to work as expected, you should call require before doing anything else in the function. get\_defined\_vars() returns only variables from the current scope, so no array\_diff hacks are needed. | This is the easiest solution I can think of.
Use the get\_defined\_vars() function twice and get a diff of each call to determine what variables were introduced by the required file.
Example:
```
$__defined_vars = get_defined_vars();
require('old_dosomething.function.php');
$__newly_defined_vars = array_diff_assoc(get_defined_vars(), $__defined_vars);
$GLOBALS = array_merge($GLOBALS, $__newly_defined_vars);
$this->view('old_dosomething_view');
``` | How to achieve "require_global"? | [
"",
"php",
""
] |
Can anyone show me some code of how I could bypass read-only cells in a DataGridView when pressing the TAB key? | ```
private void dataGridView1_CellEnter(object sender, DataGridViewCellEventArgs e)
{
if (dataGridView1.CurrentRow.Cells[e.ColumnIndex].ReadOnly)
{
SendKeys.Send("{tab}");
}
}
``` | Overriding the SelectionChanged event is the right approach. The property CurrentCell can be used to set the current cell. You want something like this:
```
private void dataGridView_SelectionChanged(object sender, EventArgs e)
{
DataGridViewCell currentCell = dataGridView.CurrentCell;
if (currentCell != null)
{
int nextRow = currentCell.RowIndex;
int nextCol = currentCell.ColumnIndex + 1;
if (nextCol == dataGridView.ColumnCount)
{
nextCol = 0;
nextRow++;
}
if (nextRow == dataGridView.RowCount)
{
nextRow = 0;
}
DataGridViewCell nextCell = dataGridView.Rows[nextRow].Cells[nextCol];
if (nextCell != null && nextCell.Visible)
{
dataGridView.CurrentCell = nextCell;
}
}
}
```
You'll need to add a test for the current cell being read only and loop while the next cell is invisible or read only. You'll also need to check to make sure that you don't loop for ever if all cells are read only.
You'll have to cope with the case where the display index is different to the base index too.
To get this behaviour just when pressing Tab you'll need to add a KeyDown handler:
```
private void AlbumChecker_KeyDown(object sender, KeyEventArgs e)
{
if (e.KeyCode == Keys.Tab)
{
        SelectNextEditableCell(dataGridView);
}
}
```
and put the first code in this new method.
You might want to check that the DataGridView has focus too. | Bypass read only cells in DataGridView when pressing TAB key | [
"",
"c#",
"winforms",
"datagridview",
""
] |
In C#, I can initialize a list using the following syntax.
```
List<int> intList= new List<int>() { 1, 2, 3 };
```
I would like to know how that `{}` syntax works, and if it has a name. There is a constructor that takes an `IEnumerable`; you could call that.
```
List<int> intList= new List<int>(new int[]{ 1, 2, 3 });
```
That seems more "standard". When I deconstruct the default constructor for the List I only see
```
this._items = Array.Empty;
```
I would like to be able to do this.
```
CustomClass abc = new CustomClass() {1, 2, 3};
```
And be able to use the `1, 2, 3` list. How does this work?
**Update**
Jon Skeet answered
> It's calling the parameterless
> constructor, and then calling Add:
```
> List<int> tmp = new List<int>();
> tmp.Add(1); tmp.Add(2); tmp.Add(3);
> List<int> intList = tmp;
```
I understand what is does. I want to know how. How does that syntax know to call the Add method?
**Update**
I know, how cliche to accept a Jon Skeet answer. But, the example with the strings and ints is awesome. Also a very helpful MSDN page is:
* *[Object and Collection Initializers (C# Programming Guide)](http://msdn.microsoft.com/en-us/library/bb384062.aspx)* | This is called a *collection initializer*. It's calling the parameterless constructor, and then calling Add:
```
List<int> tmp = new List<int>();
tmp.Add(1);
tmp.Add(2);
tmp.Add(3);
List<int> intList = tmp;
```
The requirements for the type are:
* It implements `IEnumerable`
* It has overloads of `Add` which are appropriate for the argument types you supply. You can supply multiple arguments in braces, in which case the compiler looks for an `Add` method with multiple parameters.
For example:
```
public class DummyCollection : IEnumerable
{
IEnumerator IEnumerable.GetEnumerator()
{
throw new InvalidOperationException("Not a real collection!");
}
public void Add(string x)
{
Console.WriteLine("Called Add(string)");
}
public void Add(int x, int y)
{
Console.WriteLine("Called Add(int, int)");
}
}
```
You can then use:
```
DummyCollection foo = new DummyCollection
{
"Hi",
"There",
{ 1, 2 }
};
```
(Of course, normally you'd want your collection to implement `IEnumerable` properly...) | Read *[Object and Collection Initializers (C# Programming Guide)](http://msdn.microsoft.com/en-us/library/bb384062.aspx)*. Basically you could this with every custom type that is a list (implements IEnumerable). | Initializing a Generic.List in C# | [
"",
"c#",
".net",
"generics",
"constructor",
""
] |
Please recommend a C++ compression (zip) library for a commercial, closed-source application. So, not a GPL license.
This is for my day job... | I know you said C++, but [zlib](http://www.zlib.net/) is a very [permissively licensed](http://www.zlib.net/zlib_license.html) C library that you could use directly from a C++ app.
If I recall correctly, there are various "iostream-like" wrappers around zlib available, too. | 7 Zip is LGPL, meaning so long as you treat it like a 3rd-party library, your usage of it is fine in a closed-source app.
<http://www.7-zip.org/> | C++ compression (zip) library for closed-source app | [
"",
"c++",
"compression",
"zip",
""
] |
I think the following can't be done in Java, but I would be happy to learn how to implement something that resembles it.
Suppose we have a class C, that is already used in compiled code. (We can neither change that code nor the original definition of C).
Suppose further there is interesting code that could be re-used, if only C would implement interface I. It is, in fact, more or less trivial to derive D that is just C + the implementation of the interface methods.
Yet, it seems there is no way, once I have a C, to say: I want you to be a D, that is, a C implementing I.
(Side remark: I think the cast (D)c, where c's runtime type is C, should be allowed if D is a C and the only difference to C are added methods. This should be safe, should it not?)
How could one work around this calamity?
(I know of the factory design pattern, but this is not a solution, it seems. For, once we manage to create D's in all places where formerly were C's, somebody else finds another interface J useful and derives E extends C implements J. But E and D are incompatible, since they both add a different set of methods to C. So while we can always pass an E where a C is expected, we can't pass an E where a D is expected. Rather, now, we'd need a new class F extends C implements I,J.) | If all that you need to be compatible with is interfaces then no problem take a look at [dynamic proxy classes](http://docs.oracle.com/javase/8/docs/api/java/lang/reflect/Proxy.html), its basically how you implement interfaces at runtime in java.
If you need similar runtime compatibility with classes, I suggest you take a look at the cglib or Javassist open-source libraries. | Couldn't you use a delegate class, i.e. a new class which wraps an instance of "Class C", but also implements "Interface I"?
```
public class D implements I {
private C c;
public D (C _c) {
this.c = _c;
}
public void method_from_class_C() {
c.method_from_class_C();
}
// repeat ad-nauseum for all of class C's public methods
...
public void method_from_interface_I() {
// does stuff
}
// and do the same for all of interface I's methods too
}
```
and then, if you need to invoke a function which normally takes a parameter of type `I` just do this:
```
result = some_function(new D(c));
``` | implementing interfaces after the fact | [
"",
"java",
"design-patterns",
"oop",
"interface",
""
] |
Our application uses libcurl for HTTP, and we want to get access to Internet Explorer's proxy settings. An earlier Stack Overflow question [recommends that we use `WinHttpGetIEProxyConfigForCurrentUser` and `WinHttpGetProxyForUrl`](https://stackoverflow.com/questions/202547/how-do-i-find-out-the-browsers-proxy-settings).
Unfortunately, the `winhttp.h` header does not appear to be included with our copies of either Visual C++ 2005 or Visual Studio 2008. Apparently, [it's possible to download an updated Platform SDK and install it in Visual C++ 2005](http://bbulkow.blogspot.com/2006/04/winhttp-and-visual-studio-2005-howto.html), but it's a pretty painful process, and it doesn't necessarily work with newer versions of Visual Studio.
Is there a good, well-supported way to access the WinHTTP 5.1 APIs from C++? Or should we avoid using these APIs? | The best, well-supported way to access the WinHTTP 5.1 APIs from C++ is via the Windows SDK (new name for the Platform SDK) and using those APIs you mentioned.
The article you linked to suggests that installing the SDK is difficult - the good news is its an old article from 2006 and things are much easier these days. Just do the following:
1. Download the latest SDK ISO image from [here](http://www.microsoft.com/downloads/details.aspx?FamilyId=F26B1AA4-741A-433A-9BE5-FA919850BDBF&displaylang=en) - don't worry about it being called the "SDK for Windows Server 2008", it's also for Server 2003, XP and Vista.
2. Burn to DVD
3. Run the setup.exe on the DVD, select full installation and let it run for 1/2 hour or so.
4. After installation, goto Start -> Programs -> Microsoft Windows SDK -> Visual Studio Registration -> **Integrate Windows SDK with Visual Studio 2005**
You may even be prompted to run the integration tool during installation - it's been a while since I installed it so I can't recall exactly.
The Visual C++ compiler will then be able to find winhttp.h and winhttp.lib in the appropriate folders.
**Update** As usual things are never as simple as they should be. EMK has pointed out that the Windows SDK Configuration Tool doesn't work properly with Visual Studio 2008 (any version) and crashes after installing Visual Studio 2008 SP1. Details and workarounds are reported [here](http://social.msdn.microsoft.com/forums/en-US/windowssdk/thread/62b7d1a6-5210-4f1e-8fc5-a06193edce22/) and [here](http://social.msdn.microsoft.com/Forums/en-US/windowssdk/thread/7d0eafe5-477b-40cc-85a1-cd6296c6b745). | In case anyone comes across this post and is curious about the minimal components that can be installed to get winhttp, installing Microsoft Windows Core SDK > Build Environment > Build Environment (x86 32-bit) worked for me. | What's the recommended way to get winhttp.h? | [
"",
"c++",
"windows",
"proxy",
"sdk",
"winhttp",
""
] |
We have a two tables with a one-to-many relationship. We would like to enforce a constraint that at least one child record exist for a given parent record.
Is this possible?
If not, would you change the schema a bit more complex to support such a constraint? If so how would you do it?
Edit: I'm using SQL Server 2005 | Such a constraint isn't possible from a schema perspective, because you run into a "chicken or the egg" type of scenario. Under this sort of scenario, when I insert into the parent table I have to have a row in the child table, but I can't have a row in the child table until there's a row in the parent table.
This is something better enforced client-side. | It's possible if your back-end supports deferrable constraints, as does PostgreSQL. | Constraint for one-to-many relationship | [
"",
"sql",
"sql-server",
"database-design",
"constraints",
""
] |
I'm writing an Event Booking system in C#, which is vexing me greatly because "event" and "delegate" are reserved words. "Delegate" I've changed to "Attendee", but I can't think of an alternative name for the Event class (and instances thereof). The best I've come up with is "Happening", which is a little bit 1970s for my liking. Plus, the classes will be exposed via a customer API so I have to use professional terminology.
Any suggestions would be most gratefully received.
Edit: it is mainly the naming of instances and parameters that is bothering me:
```
public Booking CreateBooking(Event event, Person person);
``` | If you really want to use a C# reserved word, you can prefix it with '@'.
E.g.
```
public class @class
{
...
}
```
Use with caution... | I would just go with Event. Type names are case-sensitive, so there is a distinction between event and Event. It is (in my humble opinion) the least confusing and most clear solution to the problem. | Suggest class names for an Event Booking system | [
"",
"c#",
"naming",
""
] |
Is there a way to click a link programmatically, so that it has the same effects as if the user clicked on it?
Example:
I have an ASP.NET LinkButton:
```
<asp:LinkButton id="lnkExport" runat="server" CssClass="navclass">Export</asp:LinkButton>
```
I have a link on a sidebar directing to the .aspx page that has this LinkButton on it. For various reasons I can't have the code for the LinkButton executed until the page has refreshed -- so I am looking for a way to force-click this LinkButton in my code once the page is completely loaded. Is there a simple/doable way to accomplish this? If it involves triggering an event, please provide a code sample if you can. Thanks. | Rashack's post shows how to do it. You can just do it in JavaScript.
```
function ClickLink() {
    document.getElementById('<%= lnkExport.ClientID %>').click();
}
```
If you want this to fire after some other event, you can add code in c# to add a call to that function on the client side when the page loads.
```
Page.ClientScript.RegisterStartupScript(
    this.GetType(),
"clickLink",
"ClickLink();",
true);
``` | Triggering a click event programatically on a link will trigger the “onclick” event, but not the default action(href).
And since linkbuttons come out as hrefs, so you could try doing this with Javascript.
```
var lnkExport = document.getElementById('<%= lnkExport.ClientID %>');
if(lnkExport){
window.location = lnkExport.href;
}
``` | ASP.NET 2.0 - Need to programmatically click a link | [
"",
"c#",
"asp.net",
"hyperlink",
""
] |
There are two ways to load a driver:
1. `Class.forName()`
2. `DriverManager.registerDriver()`
Method 1 internally also calls DriverManager.registerDriver and method 1 is the preferred way.
But why? Is there any small difference or is performance etc. better?
Any views are appreciated. | If you use Class.forName(), then you are not required to have any compile-time dependencies on a particular JDBC driver. This is particularly useful when you are writing code that can work with a variety of databases.
Consider the following code:
```
// Register the PostgreSQL driver
Class.forName("org.postgresql.Driver");
```
Now compare it to:
```
import org.postgresql.Driver;
// Register the PostgreSQL driver
DriverManager.registerDriver(new Driver());
```
And consider that in the first example, the class name could also have come from a properties file, XML file, etc., depending on what's convenient for your application. | The [JDBC API Tutorial and Reference](https://rads.stackoverflow.com/amzn/click/com/0321173848) is the best reference for such questions, a [section of which addresses the role played by the Driver and DriverManager classes](http://java.sun.com/j2se/1.5.0/docs/guide/jdbc/getstart/drivermanager.html).
**All Driver classes are expected to have a static initializer that is responsible for creating an instance of that Driver and registering it with the DriverManager when the Driver class is loaded.**
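That static-initializer idiom looks roughly like the skeleton below. This is a made-up toy driver for illustration only: the class name and the `jdbc:toy:` URL prefix are invented, and `connect` refuses to open real connections.

```java
import java.sql.Connection;
import java.sql.Driver;
import java.sql.DriverManager;
import java.sql.DriverPropertyInfo;
import java.sql.SQLException;
import java.util.Properties;
import java.util.logging.Logger;

public class ToyDriver implements Driver {
    // The static initializer described above: runs once, when the class loads,
    // which is why Class.forName("ToyDriver") is enough to register it.
    static {
        try {
            DriverManager.registerDriver(new ToyDriver());
        } catch (SQLException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    public boolean acceptsURL(String url) {
        return url != null && url.startsWith("jdbc:toy:");
    }

    public Connection connect(String url, Properties info) throws SQLException {
        if (!acceptsURL(url)) {
            return null; // JDBC contract: return null for URLs we don't handle
        }
        throw new SQLException("toy driver cannot open real connections");
    }

    public DriverPropertyInfo[] getPropertyInfo(String url, Properties info) {
        return new DriverPropertyInfo[0];
    }

    public int getMajorVersion() { return 1; }
    public int getMinorVersion() { return 0; }
    public boolean jdbcCompliant() { return false; }
    public Logger getParentLogger() { return Logger.getGlobal(); }

    public static void main(String[] args) {
        System.out.println(new ToyDriver().acceptsURL("jdbc:toy:demo")); // true
    }
}
```

Loading the class (via `Class.forName` or any other first use) runs the static block, which is all the registration the old idiom needs.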
Additionally, the DriverManager.getConnection() is probably the only user-space friendly method in the class. Most of the other methods are usually not used by most developers using the JDBC API. So the old adage still stands - use Class.forName() to load the driver, and then use DriverManager.getConnection() to get a connection to the database. | Which approach is better to load a JDBC driver? | [
"",
"java",
"jdbc",
""
] |
One of the projects I am working on uses the CSS "attribute" selector [att]
[CSS Selectors](http://www.w3.org/TR/CSS21/selector.html#attribute-selectors)
which is not supported by ie6:
[Support for CSS selectors in IE6](http://msdn.microsoft.com/en-us/library/cc351024%28VS.85%29.aspx#attributeselectors) (look for text "Attribute Selectors")
Is there any workaround/hack which is of course valid html/css to overcome this problem? | Since IE6 is essentially limited to:
* class selectors
* ID selectors
* (space) descendant selectors
* a:-only pseudo-selectors
your only options are:
* Use more classes to identify your elements
* Use JavaScript (**strongly** not recommended except in highly specialized cases)
I find it very helpful to take advantage of the ability to assign multiple classes to an element by separating them with a space: `class="foo bar"` | This isn't possible without peppering your HTML with a stack of extraneous class selectors, sadly.
I'd recommend designing your site so that your entirely valid CSS works for people using modern browsers, and that it's still usable in the IE6, albeit visually not quite right. You just have to find the right balance between getting your site up to standard and bending over backwards for users who won't upgrade. It's a broken browser, treat it as such. | How to workaround: IE6 does not support CSS "attribute" selectors | [
"",
"javascript",
"css",
"internet-explorer",
"internet-explorer-6",
"css-selectors",
""
] |
I have a user control that contains a 2-column TableLayoutPanel and accepts commands to dynamically add rows to display details of an item selected in a separate control. So, the user will select a row in the other control (a DataGridView), and in the SelectedItemChanged event handler for the DataGridView I clear the detail control and then regenerate all the rows for the new selected item (which may have a totally different detail display from the previously selected item). This works great for a while. But if I keep moving from one selected item to another for quite a long time, the refreshes become VERY slow (3-5 seconds each). That makes it sound like I'm not disposing everything properly, but I can't figure out what I'm missing. Here's my code for clearing the TableLayoutPanel:
```
private readonly List<Control> controls;
public void Clear()
{
detailTable.Visible = false;
detailTable.SuspendLayout();
SuspendLayout();
detailTable.RowStyles.Clear();
detailTable.Controls.Clear();
DisposeAndClearControls();
detailTable.RowCount = 0;
detailTable.ColumnCount = 2;
}
private void DisposeAndClearControls()
{
foreach (Control control in controls)
{
control.Dispose();
}
controls.Clear();
}
```
And once I get finished loading up all the controls I want into the TableLayoutPanel for the next detail display here's what I call:
```
public void Render()
{
detailTable.ResumeLayout(false);
detailTable.PerformLayout();
ResumeLayout(false);
detailTable.Visible = true;
}
```
I'm not using anything but labels (and a TextBox very rarely) inside the TableLayoutPanel, and I add the Labels and TextBoxes to the controls list (referenced in DisposeAndClearControls()) when I create them. I tried just iterating over detailTable.Controls and disposing them that way, but it seemed to miss half the controls (determined by stepping through it in the debugger). This way I know I get them all.
I'd be interested in any suggestions to improve drawing performance, but particularly what's causing the degradation over multiple selections. | I changed the containing form to just construct a new version of my user control on each selection change. It disposes the old one and constructs a new one. This seems to perform just fine. I'd originally gone with reusing just one for performance reasons anyway. Clearly that doesn't improve the performance. And the performance isn't a problem if I dispose the old one and create a new one.
Unfortunate that the TableLayoutPanel leaks like that, though. | Just use a custom control that inherits from TableLayoutPanel and set the DoubleBuffered property on true, works great... especially when you dynamically add or remove rows.
```
public CustomLayout()
{
this.DoubleBuffered = true;
InitializeComponent();
}
``` | Dynamically Populated TableLayoutPanel Performance Degradation | [
"",
"c#",
".net",
"winforms",
"tablelayoutpanel",
""
] |
I am having trouble figuring out how to coalesce or pivot on a SQL recordset that looks like this:
```
ID VALUE GROUP
3 John 18
4 Smith 18
5 Microsoft 18
3 Randy 21
4 Davis 21
5 IBM 21
etc
```
and I want formatted like this
```
NEWVALUE GROUP
Smith, John (Microsoft) 18
Davis, Randy (IBM) 21
```
thanks for any suggestions and help! | This is what I did; I hope it works for you:
```
DECLARE @t table (id int, value VARCHAR(20), grupo int)
INSERT @T VALUES (3, 'John', 18)
INSERT @T VALUES (4, 'Smith', 18)
INSERT @T VALUES (5, 'Microsoft', 18)
INSERT @T VALUES (3, 'Randy', 21)
INSERT @T VALUES (4, 'Davis', 21)
INSERT @T VALUES (5, 'IBM', 21)
SELECT grupo, (SELECT value FROM @t t2 WHERE t2.grupo = t.grupo AND id = 4) + ', ' +
(SELECT value FROM @t t2 WHERE t2.grupo = t.grupo AND id = 3) + ' (' +
(SELECT value FROM @t t2 WHERE t2.grupo = t.grupo AND id = 5) + ')'
FROM @t t
GROUP BY grupo
``` | ```
SELECT LEFT(gvalue, LEN(gvalue) - 1) AS newvalue, _group
FROM (
SELECT DISTINCT _group
FROM mytable
) qo
CROSS APPLY
(
SELECT value + ', '
FROM mytable qi
WHERE qi._group = qo._group
FOR XML PATH ('')
) gr(qvalue)
```
If you always have a set of three hardcoded `ID`'s for each `_group`, you can just use:
```
SELECT m3._group, m3.value + ', ' + m4.value + '(' + m5.value + ')' AS newvalue
FROM mytable m3
LEFT JOIN
mytable m4
ON m4._group = m3._group
LEFT JOIN
mytable m5
ON m5._group = m3._group
WHERE m3.id = 3
AND m4.id = 4
AND m5.id = 5
``` | Coalesce and Pivot in TSQL | [
"",
"sql",
"sql-server-2005",
"t-sql",
"aggregate",
""
] |
I am trying to find the fastest way to find all of the records in a parent table that do not have certain records in a child table. For example, I want the query to return all of the family records where there are no male children or no children at all.
## Example 1
This is painfully slow:
SELECT \* FROM Families
WHERE Families.FamilyID NOT IN(SELECT FamilyID FROM Children WHERE Children.Gender="m")
## Example 2
This is faster:
SELECT \* FROM Families
WHERE NOT EXISTS(SELECT \* FROM Children WHERE Children.Gender="m" AND Children.FamilyID = Families.FamilyID)
## Example 3
This is the fastest, but I'm worried about investing in it because I can't find any documentation on it. I don't even know what it's called. For want of a better term, we have been calling it an "anonymous query".
SELECT \* FROM Families
LEFT JOIN
**[SELECT \* FROM Children WHERE Gender="m"].** AS AliasChildren
ON Families.FamilyID=AliasChildren.FamilyID
WHERE AliasChildren.FamilyID IS NULL
So my question is this:
1. What is this thing called in example 3?
2. Is it a "mainstream" feature of Access that
will be supported in the future?
3. Where can I find documentation on
it (this might be easier once I know what it's called)?
Thanks! | Example 3 is termed a derived table, but you can simplify the query to this:
```
SELECT * FROM Families
LEFT JOIN
Children ON Families.FamilyID = Children.FamilyID AND Children.Gender="m"
WHERE Children.FamilyID IS NULL
``` | **What is this thing called in example 3?**
It is called a "derived table".
**Is it a "mainstream" feature of Access that will be supported in the future?**
It is a standard feature of ANSI SQL as far as I know and is very commonly used.
**Where can I find documentation on it (this might be easier once I know what it's called)?**
Here is a [blog article](http://techahead.wordpress.com/2007/10/01/sql-derived-tables/) about them
In any case the query posed by Mitch is your best bet:
```
SELECT *
FROM Families
LEFT JOIN Children
ON (Families.FamilyID = Children.FamilyID) AND (Children.Gender="m")
WHERE (Children.FamilyID IS NULL)
``` | SQL Help in Access – Looking for the Absence of Data | [
"",
"sql",
"ms-access",
"derived-table",
""
] |
How to enable logging of all SQL executed by PostgreSQL 8.3?
*Edited (more info)*
I changed these lines :
```
log_directory = 'pg_log'
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
log_statement = 'all'
```
And restart PostgreSQL service... but no log was created...
I'm using Windows Server 2003.
Any ideas? | In your `data/postgresql.conf` file, change the `log_statement` setting to `'all'`.
---
**Edit**
Looking at your new information, I'd say there may be a few other settings to verify:
* make sure you have turned on the `log_destination` variable
* make sure you turn on the `logging_collector`
* also make sure that the `log_directory` directory already exists inside of the `data` directory, and that the postgres user can write to it. | Edit your `/etc/postgresql/9.3/main/postgresql.conf`, and change the lines as follows.
**Note**: If you didn't find the `postgresql.conf` file, then just type `$locate postgresql.conf` in a terminal
1. `#log_directory = 'pg_log'` **to** `log_directory = 'pg_log'`
2. `#log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'` **to** `log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'`
3. `#log_statement = 'none'` **to** `log_statement = 'all'`
4. `#logging_collector = off` **to** `logging_collector = on`
5. **Optional**: `SELECT set_config('log_statement', 'all', true);`
6. `sudo /etc/init.d/postgresql restart` **or** `sudo service postgresql restart`
7. Fire query in postgresql `select 2+2`
8. Find current log in `/var/lib/pgsql/9.2/data/pg_log/`
The log files tend to grow a lot over time, and might kill your machine. For your safety, write a bash script that'll delete old logs and restart the postgresql server.
Thanks @paul , @Jarret Hardie , @Zoltán , @Rix Beck , @Latif Premani | How to log PostgreSQL queries? | [
"",
"sql",
"database",
"postgresql",
"logging",
""
] |
I've looked at the 101 Linq Samples [here](http://msdn.microsoft.com/en-us/vcsharp/aa336746.aspx) but I can't see anything like this in that list. If I'm just not seeing a relevant example there, please link to it.
If I have these 3 classes:
```
class Student { int id; string name }
class Course { int id, string name }
class Enrolment { int studentId; int courseId; }
```
How would I use LINQ to get a list of courses a student is enrolled on? (assume I have an IList of all three classes) | How about:
```
IEnumerable<Course> FindCoursesForStudent(Student student)
{
return from enrolment in Enrolments
where enrolment.studentId == student.id
join course in Courses
on enrolment.courseId equals course.id
select course;
}
``` | Not an explicit answer, but perhaps implicitly helpful building your system.
Have you considered a richer domain model like so:
```
class Student { int id; string name; IEnumerable<Enrolment> enrolments }
class Course { int id, string name; }
class Enrolment { Student student; Course course; }
```
?
The Student class could then have a method GetEnroledCourses() like so:
```
public IEnumerable<Course> GetEnroledCourses()
{
return enrolments.Select(enrolment => enrolment.Course).ToList().AsReadOnly();
}
```
A model that's merely a one-to-one mapping to the database suffices in some contexts, but if you have complex business logic, such an [«anemic domain model»](http://martinfowler.com/bliki/AnemicDomainModel.html) might bite you.
"",
"c#",
"linq",
""
] |
I'm quite new to Java and from Python and PHP, I'm used to default values for function parameters.
So I have a habit of writing methods that are designed to be called from slightly different situations where you want to set only some of the values. For example, in my PHP code, this would be common where I have factory methods that provide object instances with slightly different properties.
In Java, it seems, I have to have one method that expects all parameters and then several overloaded variations that would call that method with some of the parameters set to defaults and some provided by the caller. Which is, well, OK, but can get on my nerves sometimes.
I remember that this already annoyed me in some brief excursions to C++ and ActionScript. Now, do more experienced Java developers have some shortcut for this?
*It seems to me that, technically, the question has been answered by the sum of all posts as "Yes, it is". I've opened a wiki post below to collect the various alternative solutions, please contribute if you like. I found all of these very helpful as an inspiration and as learning examples for typical Java constructs.* | It seems like, "Yes, it is", except:
Similar effects could be achieved with [varargs](http://java.sun.com/developer/JDCTechTips/2005/tt0104.html#1) as suggested by Paul Whelan or by defining an extra type carrying the parameters as fields with the proper default values as suggested by Jon Skeet. Boris Pavlović adds that this type could be an inner static class to keep things neatly in the right place.
Cons (Note that this whole question is not about any serious implications, just about convenient tweaks):
Varargs seem most suitable for passing a variable-length list of values that have very similar form and mostly equivalent meaning, such as a list of names. If used for the purpose stated in the question, the method would have to perform various checks on the list in order to interpret it, which seems not more, but less convenient.
A special type carrying the parameters seems most useful if the resulting type could be of other uses than just being passed to one particular method, or when it is a more complex set of parameters. If there's 2-4 parameters and some may have default values, it still seems a little more convenient to overload, but that might be a matter of personal taste. | Another option is a variation on the builder pattern - you have a type which represents all the parameters, construct an instance of that type (which defaults accordingly), set the properties you want, and then pass the result into the original method, or add a method in the "parameter type" to call the method for you.
You can see this in action in the standard libraries, with the [`ProcessBuilder`](http://java.sun.com/javase/6/docs/api/java/lang/ProcessBuilder.html) and [`Process`](http://java.sun.com/javase/6/docs/api/java/lang/Process.html) classes. | Is overloading really the only way to get default values for method parameters in Java? | [
"",
"java",
"default",
"overloading",
""
] |
Need to convert the following code from Ruby to C#. However I'm kind of puzzled by the use of the yield keyword and the general syntax of Ruby. Can anyone that knows a little bit Ruby please help out and convert the code
```
class << Cache
STALE_REFRESH = 1
STALE_CREATED = 2
# Caches data received from a block
#
# The difference between this method and usual Cache.get
# is following: this method caches data and allows user
# to re-generate data when it is expired w/o running
# data generation code more than once so dog-pile effect
# won't bring our servers down
#
def smart_get(key, ttl = nil, generation_time = 30.seconds)
# Fallback to default caching approach if no ttl given
return get(key) { yield } unless ttl
# Create window for data refresh
real_ttl = ttl + generation_time * 2
stale_key = "#{key}.stale"
# Try to get data from memcache
value = get(key)
stale = get(stale_key)
# If stale key has expired, it is time to re-generate our data
unless stale
put(stale_key, STALE_REFRESH, generation_time) # lock
value = nil # force data re-generation
end
# If no data retrieved or data re-generation forced, re-generate data and reset stale key
unless value
value = yield
put(key, value, real_ttl)
put(stale_key, STALE_CREATED, ttl) # unlock
end
return value
end
```
end | I don't know C# at all, so anything I say about C# should be taken with a grain of salt. However, I will try to explain what goes on in that piece of Ruby code.
```
class << Cache
```
Ruby has something called *singleton methods*. These have nothing to do with the Singleton Software Design Pattern, they are just methods that are defined for one and only one object. So, you can have two instances of the same class, and add methods to one of those two objects.
There are two different syntaxes for singleton methods. One is to just prefix the name of the method with the object, so `def foo.bar(baz)` would define a method `bar` only for object `foo`. The other method is called *opening up the singleton class* and it looks syntactically similar to defining a class, because that's also what happens semantically: singleton methods actually live in an invisible class that gets inserted between the object and its actual class in the class hierarchy.
This syntax looks like this: `class << foo`. This opens up the singleton class of object `foo` and every method defined inside of that class body becomes a singleton method of object `foo`.
Why is this used here? Well, Ruby is a pure object-oriented language, which means that *everything*, including classes is an object. Now, if methods can be added to individual objects, and classes are objects, this means that methods can be added to individual classes. In other words, Ruby has no need for the artificial distinction between regular methods and static methods (which are a fraud, anyway: they aren't really methods, just glorified procedures). What is a static method in C#, is just a regular method on a class object's singleton class.
All of this is just a longwinded way of explaining that everything defined between `class << Cache` and its corresponding `end` becomes `static`.
```
STALE_REFRESH = 1
STALE_CREATED = 2
```
In Ruby, every variable that starts with a capital letter, is actually a constant. However, in this case we won't translate these as `static const` fields, but rather an `enum`, because that's how they are used.
```
# Caches data received from a block
#
# The difference between this method and usual Cache.get
# is following: this method caches data and allows user
# to re-generate data when it is expired w/o running
# data generation code more than once so dog-pile effect
# won't bring our servers down
#
def smart_get(key, ttl = nil, generation_time = 30.seconds)
```
This method has three parameters (four actually, we will see exactly *why* later), two of them are optional (`ttl` and `generation_time`). Both of them have a default value, however, in the case of `ttl` the default value isn't really used, it serves more as a marker to find out whether the argument was passed in or not.
`30.seconds` is an extension that the `ActiveSupport` library adds to the `Integer` class. It doesn't actually do anything, it just returns `self`. It is used in this case just to make the method definition more readable. (There are other methods which do something more useful, e.g. `Integer#minutes`, which returns `self * 60` and `Integer#hours` and so on.) We will use this as an indication, that the type of the parameter should not be `int` but rather `System.TimeSpan`.
```
# Fallback to default caching approach if no ttl given
return get(key) { yield } unless ttl
```
This contains several complex Ruby constructs. Let's start with the easiest one: trailing conditional modifiers. If a conditional body contains only one expression, then the conditional can be appended to the end of the expression. So, instead of saying `if a > b then foo end` you can also say `foo if a > b`. So, the above is equivalent to `unless ttl then return get(key) { yield } end`.
The next one is also easy: `unless` is just syntactic sugar for `if not`. So, we are now at `if not ttl then return get(key) { yield } end`
Third is Ruby's truth system. In Ruby, truth is pretty simple. Actually, falseness is pretty simple, and truth falls out naturally: the special keyword `false` is false, and the special keyword `nil` is false, everything else is true. So, in this case the conditional will *only* be true, if `ttl` is either `false` or `nil`. `false` isn't a terrible sensible value for a timespan, so the only interesting one is `nil`. The snippet would have been more clearly written like this: `if ttl.nil? then return get(key) { yield } end`. Since the default value for the `ttl` parameter is `nil`, this conditional is true, if no argument was passed in for `ttl`. So, the conditional is used to figure out with how many arguments the method was called, which means that we are not going to translate it as a conditional but rather as a method overload.
Now, on to the `yield`. In Ruby, every method can accept an implicit code block as an argument. That's why I wrote above that the method actually takes *four* arguments, not three. A code block is just an anonymous piece of code that can be passed around, stored in a variable, and invoked later on. Ruby inherits blocks from Smalltalk, but the concept dates all the way back to 1958, to Lisp's lambda expressions. The mention of anonymous code blocks may already have tipped you off, but at the very least now, at the mention of lambda expressions, you should know how to represent this implicit fourth method parameter: a delegate type, more specifically, a `Func`.
So, what's `yield` do? It transfers control to the block. It's basically just a very convenient way of invoking a block, without having to explicitly store it in a variable and then calling it.
```
# Create window for data refresh
real_ttl = ttl + generation_time * 2
stale_key = "#{key}.stale"
```
This `#{foo}` syntax is called *string interpolation*. It means "replace the token inside the string with whatever the result of evaluating the expression between the braces". It's just a very concise version of `String.Format()`, which is exactly what we are going to translate it to.
```
# Try to get data from memcache
value = get(key)
stale = get(stale_key)
# If stale key has expired, it is time to re-generate our data
unless stale
put(stale_key, STALE_REFRESH, generation_time) # lock
value = nil # force data re-generation
end
# If no data retrieved or data re-generation forced, re-generate data and reset stale key
unless value
value = yield
put(key, value, real_ttl)
put(stale_key, STALE_CREATED, ttl) # unlock
end
return value
end
end
```
This is my feeble attempt at translating the Ruby version to C#:
```
public class Cache<Tkey, Tvalue> {
enum Stale { Refresh, Created }
/* Caches data received from a delegate
*
* The difference between this method and usual Cache.get
* is following: this method caches data and allows user
* to re-generate data when it is expired w/o running
* data generation code more than once so dog-pile effect
* won't bring our servers down
*/
public static Tvalue SmartGet(Tkey key, TimeSpan ttl, TimeSpan generationTime, Func<Tvalue> strategy)
{
// Create window for data refresh
var realTtl = ttl + generationTime * 2;
var staleKey = String.Format("{0}.stale", key);
// Try to get data from memcache
var value = Get(key);
var stale = Get(staleKey);
// If stale key has expired, it is time to re-generate our data
if (stale == null)
{
Put(staleKey, Stale.Refresh, generationTime); // lock
value = null; // force data re-generation
}
// If no data retrieved or data re-generation forced, re-generate data and reset stale key
if (value == null)
{
value = strategy();
Put(key, value, realTtl);
Put(staleKey, Stale.Created, ttl); // unlock
}
return value;
}
// Fallback to default caching approach if no ttl given
public static Tvalue SmartGet(Tkey key, Func<Tvalue> strategy) =>
Get(key, strategy);
// Simulate default argument for generationTime
// C# 4.0 has default arguments, so this wouldn't be needed.
public static Tvalue SmartGet(Tkey key, TimeSpan ttl, Func<Tvalue> strategy) =>
SmartGet(key, ttl, new TimeSpan(0, 0, 30), strategy);
// Convenience overloads to allow calling it the same way as
// in Ruby, by just passing in the timespans as integers in
// seconds.
public static Tvalue SmartGet(Tkey key, int ttl, int generationTime, Func<Tvalue> strategy) =>
SmartGet(key, new TimeSpan(0, 0, ttl), new TimeSpan(0, 0, generationTime), strategy);
public static Tvalue SmartGet(Tkey key, int ttl, Func<Tvalue> strategy) =>
SmartGet(key, new TimeSpan(0, 0, ttl), strategy);
}
```
Please note that I do not know C#, I do not know .NET, I have not tested this, I don't even know if it is syntactically valid. Hope it helps anyway. | It appears this code is being passed a block to be evaluated if the cache does not contain the requested data (`yield` is how you call the block). This is fairly idiomatic ruby code; I don't know how (or even if) you could "translate it" to c#.
Look for a use case to see what I mean. You should find something vaguely like this:
```
x = smart_get([:foo,"bar"]) { call_expensive_operation_foo("bar") }
```
A better bet would be to figure out what you need it to do and write something that does that de novo in c#, rather than trying to "translate" from ruby. | Converting Ruby to C# | [
"",
"c#",
"ruby",
"caching",
""
] |
This was the best name I could come up with for the topic and none of my searches yielded information relevant to the question.
How do I call a function from a string, i.e.
```
functions_to_call = ["func_1", "func_2", "func_3"]
for f in functions_to_call:
call f
``` | ```
functions_to_call = ["func_1", "func_2", "func_3"]
for f in functions_to_call:
eval(f+'()')
```
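If you'd rather avoid `eval()` entirely, a dispatch through `globals()` works too (an editorial sketch; `func_1`/`func_2` are stand-in names, not from the question):

```python
def func_1():
    return "one"

def func_2():
    return "two"

functions_to_call = ["func_1", "func_2"]

# globals() maps names to objects in the module namespace;
# an unknown name raises KeyError instead of executing arbitrary code.
results = [globals()[name]() for name in functions_to_call]
print(results)  # -> ['one', 'two']
```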
*Edited to add:*
Yes, eval() generally is a bad idea, but this is what the OP was looking for. | You can use the python builtin locals() to get local declarations, eg:
```
def f():
print "Hello, world"
def g():
print "Goodbye, world"
for fname in ["f", "g"]:
fn = locals()[fname]
print "Calling %s" % (fname)
fn()
```
You can use the "imp" module to load functions from user-specified python files which gives you a bit more flexibility.
Using locals() makes sure you can't call generic python, whereas with eval, you could end up with the user setting your string to something untoward like:
```
f = 'open("/etc/passwd").readlines'
print eval(f+"()")
```
or similar and end up with your programming doing things you don't expect to be possible. Using similar tricks with locals() and dicts in general will just give attackers KeyErrors. | Calling unknown Python functions | [
"",
"python",
""
] |
I need a queue which multiple threads can put stuff into, and multiple threads may read from.
Python has at least two queue classes, `queue.Queue` and `collections.deque`, with the former seemingly using the latter internally. Both claim to be thread-safe in the documentation.
However, the `Queue` docs also state:
> `collections.deque` is an alternative
> implementation of unbounded queues
> with fast atomic `append()` and
> `popleft()` operations **that do not
> require locking** and also support indexing.
Which I guess I don't quite understand: Does this mean `deque` isn't fully thread-safe after all?
If it is, I may not fully understand the difference between the two classes. I can see that `Queue` adds blocking functionality. On the other hand, it loses some `deque` features like support for the `in` operator.
Is accessing the [internal `deque` object](https://github.com/python/cpython/blob/0e2d67457ba245442457432d15115d1dc1b0ecb5/Lib/queue.py#L207) directly
```
x in Queue().queue
```
thread-safe?
Also, why does `Queue` employ a mutex for its operations when `deque` is thread-safe already? | `queue.Queue` and `collections.deque` serve different purposes. `queue.Queue` is intended for allowing different threads to communicate using queued messages/data, whereas `collections.deque` is simply intended as a data structure. That's why `queue.Queue` has methods like `put_nowait()`, `get_nowait()`, and `join()`, whereas `collections.deque` doesn't. `queue.Queue` isn't intended to be used as a collection, which is why it lacks the likes of the `in` operator.
It boils down to this: if you have multiple threads and you want them to be able to communicate without the need for locks, you're looking for `queue.Queue`; if you just want a queue or a double-ended queue as a datastructure, use `collections.deque`.
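A minimal sketch of that first case (an editorial illustration using the Python 3 module name `queue`; in Python 2 it is `Queue`):

```python
import queue
import threading

def consumer(q, out):
    # get() blocks until an item is available -- no explicit locking needed
    while True:
        item = q.get()
        if item is None:  # sentinel: producer is done
            break
        out.append(item * 2)

q = queue.Queue()
results = []
worker = threading.Thread(target=consumer, args=(q, results))
worker.start()

for item in (1, 2, 3):  # producer side: just put items on the queue
    q.put(item)
q.put(None)  # signal completion
worker.join()
print(results)  # -> [2, 4, 6]
```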
Finally, accessing and manipulating the internal deque of a `queue.Queue` is playing with fire - you really don't want to be doing that. | If all you're looking for is **a thread-safe way to transfer objects between threads**, then both would work (both for FIFO and LIFO). For FIFO:
* [`Queue.put()` and `Queue.get()` are thread-safe](http://docs.python.org/2/library/queue.html#)
* [`deque.append()` and `deque.popleft()` are thread-safe](http://docs.python.org/2/library/collections.html#deque-objects)
Note:
* Other operations on `deque` might not be thread safe, I'm not sure.
* `deque` does not block on `pop()` or `popleft()` so you can't base your consumer thread flow on blocking till a new item arrives.
However, it seems that **deque has a significant efficiency advantage**. Here are some benchmark results in seconds using CPython 2.7.3 for inserting and removing 100k items
```
deque 0.0747888759791
Queue 1.60079066852
```
Here's the benchmark code:
```
import time
import Queue
import collections
q = collections.deque()
t0 = time.clock()
for i in xrange(100000):
q.append(1)
for i in xrange(100000):
q.popleft()
print 'deque', time.clock() - t0
q = Queue.Queue(200000)
t0 = time.clock()
for i in xrange(100000):
q.put(1)
for i in xrange(100000):
q.get()
print 'Queue', time.clock() - t0
``` | queue.Queue vs. collections.deque | [
"",
"python",
"queue",
"thread-safety",
"python-multithreading",
"deque",
""
] |
I've been running up against a problem with Java Swing + my Wacom Graphire tablet for a few years in several Java applications and have now encountered it in my own.
I use a pen tablet to get around wrist issues while clicking a mouse, and it works fine under Windows except when I'm using Java applications. In Java applications, the single-click of the pen doesn't work correctly. (Usually the problem only occurs with file-selection dialog boxes or tree controls.) The pen tablet also comes with a wireless mouse that works with the same tablet, and its single-click does work correctly.
I don't know whether the problem is in the WACOM driver or in the Java Swing runtime for Windows or both. Has anyone encountered this before? I'd like to file a bug report with WACOM but I have no idea what to tell them.
I have been able to reproduce this in my own application that has a JEditorPane with an HTML document that I've added a HyperlinkListener to. I get HyperlinkEvent.ACTIVATED events on every single click with the mouse, but I do NOT get HyperlinkEvent.ACTIVATED events on every single click with the pen.
One big difference between a pen and a mouse is that when you click a button on a mouse, it's really easy to cause the button-click without mouse movement. On the pen tablet it is very hard to do this, and that seems to correlate with the lack of HyperlinkEvent.ACTIVATED events -- if I am very careful not to move the pen position when I tap the tablet, I think I can get ACTIVATED events.
Any suggestions for things to try so I can give WACOM some good information on this bug? It's really frustrating to not be able to use my pen with Java apps, especially since the pen works fine with "regular" Windows (non-Java) applications.
Normally I wouldn't ask this question here but I'd like to find out from a **programmer's** standpoint what might be going on so I can file a good bug report. | What you should do is add a `mouseListener` and see when it registers a `mouseClicked()`, `mousePressed()`, `mouseReleased()` event. I'm not sure if Swing reads the tablet pen as a mouse, though. However, it should give you some insight into what's actually going on. | I tried dr.manhattan's suggestion and it works like a charm. I get mousePressed/mouseReleased events correctly; mouseClicked events always happen with the pen tablet mouse, but mouseClicked events do not happen with the pen unless I manage to keep the pen very still. Even a 1-pixel movement is enough to make it fail. I guess I should blame Java for this one: there's no way to specify a "click radius" for acceptable movement.
```
package com.example.bugs;
import java.awt.Dimension;
import java.awt.event.MouseEvent;
import java.awt.event.MouseListener;
import javax.swing.JFrame;
public class WacomMouseClickBug {
public static void main(String[] args) {
JFrame jframe = new JFrame();
jframe.addMouseListener(new MouseListener(){
@Override public void mouseClicked(MouseEvent event) {
System.out.println("mouseClicked: "+event);
}
@Override public void mouseEntered(MouseEvent event) {}
@Override public void mouseExited(MouseEvent event) {}
@Override public void mousePressed(MouseEvent event) {
System.out.println("mousePressed: "+event);
}
@Override public void mouseReleased(MouseEvent event) {
System.out.println("mouseReleased: "+event);
}
});
jframe.setPreferredSize(new Dimension(400,400));
jframe.pack();
jframe.setLocationRelativeTo(null);
jframe.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
jframe.setVisible(true);
}
}
``` | java Swing debugging headaches with Wacom pen tablet | [
"",
"java",
"swing",
"wacom",
"pen-tablet",
""
] |
I have 5 databases which represent different regions of the country. In each database, there are a few hundred tables, each with 10,000-2,000,000 transaction records. Each table is a representation of a customer in the respective region. Each of these tables has the same schema.
I want to query all tables as if they were one table. The only way I can think of doing it is creating a view that unions all tables, and then just running my queries against that. However, the customer tables will change all the time (as we gain and lose customers), so I'd have to change the query for my view to include new tables (or remove ones that are no longer used).
Is there a better way?
## EDIT
In response to the comments, (I also posted this as a response to an answer):
In most cases, I won't be removing any tables, they will remain for historic purposes. As I posted in comment to one response, the idea was to reduce the time it takes a smaller customers (one with only 10,000 records) to query their own history. There are about 1000 customers with an average of 1,000,000 rows (and growing) a piece. If I were to add all records to one table, I'd have nearly a billion records in that table. I also thought I was planning for the future, in that when we get say 5000 customers, we don't have one giant table holding all transaction records (this may be an error in my thinking). So then, is it better not to divide the records as I have done? Should I mash it all into one table? Will indexing on customer Id's prevent delays in querying data for smaller customers? | I think your design may be broken. Why not use one single table with a region and a customer column?
If I were you, I would consider refactoring to one single table, and if necessary (for reverse compatibility for example), I would use views to provide the same info as in the previous tables.
---
Edit to answer OP comments to this post :
One table with 10 000 000 000 rows in it will do just fine, provided you use proper indexing. Database servers are built to cope with this kind of volume.
**Performance is definitely not a valid reason to split one such table into thousands of smaller ones !** | Agree with Brann,
That's an insane DB Schema Design. Why didn't you go with (or is an option to change to) a single normalised structure with columns to filter by region and whatever condition separates each table within a region database.
In that structure you're stuck with some horribly large (~500 tables) unioned view that you would have to dynamically regenerate as regularly as new tables appear in the system. | What is the best way to query data from multiple tables and databases? | [
"",
"sql",
"sql-server",
"sql-server-2000",
""
] |
I'm looking at [a path finding tutorial](http://www.cokeandcode.com/main/tutorials/path-finding/) and I noticed a `return` statement inside a `void` method (class `PathTest`, line 126):
```
if ((x < 0) || (y < 0) || (x >= map.getWidthInTiles()) || (y >= map.getHeightInTiles())) {
return;
}
```
I'm a novice at Java. Can anyone tell me why it's there? As far as I knew, `return` inside a void method isn't allowed. | It just exits the method at that point. Once `return` is executed, the rest of the code won't be executed.
eg.
```
public void test(int n) {
if (n == 1) {
return;
}
else if (n == 2) {
doStuff();
return;
}
doOtherStuff();
}
```
Note that the compiler is smart enough to tell you some code cannot be reached:
```
if (n == 3) {
return;
youWillGetAnError(); //compiler error here
}
``` | You can have `return` in a void method, you just can't *return any value* (as in `return 5;`), that's why they call it a *void* method. Some people always explicitly end void methods with a return statement, but it's not mandatory. It *can* be used to leave a function early, though:
```
void someFunct(int arg)
{
if (arg == 0)
{
//Leave because this is a bad value
return;
}
//Otherwise, do something
}
``` | What does the return keyword do in a void method in Java? | [
"",
"java",
"methods",
"return",
"void",
""
] |
I've been introducing myself to LinqToSQL lately through a poorly-made project at work. I'm curious as to why this works:
```
var territories = db.Territories.Where(t => t.PendingUserCount > 0);
```
But this results in a compilation error:
```
var territories = db.Territories;
if (someCondition)
territories = territories.Where(t => t.PendingUserCount > 0);
// Cannot implicitly convert 'System.Linq.IQueryable<Territory> to System.Data.Linq.Table<Territory>
```
I've also tried to call `db.Territories.ToList()`, but to no avail.
I'm sure it's just a misunderstanding of how Linq works, but I'd be appreciative if someone could help me out. | Alternative:
```
var territories = db.Territories.AsQueryable();
if (someCondition)
territories = territories.Where(t => t.PendingUserCount > 0);
``` | db.Territories returns a table object. Hence the 'var' will be of type System.Data.Linq.Table. Later you try (based on some condition) to assign something of type System.Linq.IQueryable to the variable. As .NET is strongly typed, the compiler throws an error.
Variables of type var will be assigned a type when they get assigned first. That's how I try to remember myself. | Basic LinqToSql question: Why won't this compile? | [
"",
"c#",
"linq-to-sql",
""
] |
Recently I was confused by [this question](https://stackoverflow.com/questions/700410/subtraction-without-minus-sign/700487#700487). Maybe because I didn't read language specifications (it's my fault, I know).
The C99 standard doesn't say which negative-number representation should be used by the compiler. I always thought that the only right way to store negative numbers is two's complement (in most cases).
So here's my question: do you know any present-day compiler that implements by default one's complement or sign-magnitude representation? Can we change default representation with some compiler flag?
What is the simplest way to determine which representation is used?
And what about C++ standard? | I think it's not so much a question of what representation the compiler uses, but rather what representation the underlying machine uses. The compiler would be very stupid to pick a representation not supported by the target machine, since that would introduce loads of overhead for no benefit.
Some checksum fields in the IP protocol suite use one's complement, so perhaps dedicated "network accelerator"-type CPUs implement it. | While [twos-complement](http://en.wikipedia.org/wiki/Twos-complement) representation is by far the most common, it is not the only one ([see some](http://en.wikipedia.org/wiki/Signed_number_representations)). The C and C++ standardisation committees did not want to require non-twos-complement machines to emulate a non-native representation. Therefore neither C nor C++ requires a specific negative integer format.
This leads to the undefined behaviour of bitwise operations on signed types. | Compilers and negative numbers representations | [
"",
"c++",
"c",
"compiler-construction",
"representation",
"negative-number",
""
] |
One of my favorite features about python is that you can write configuration files in python that are very simple to read and understand. If you put a few boundaries on yourself, you can be pretty confident that non-pythonistas will know exactly what you mean and will be perfectly capable of reconfiguring your program.
My question is, what exactly are those boundaries? My own personal heuristic was
1. Avoid flow control. No functions, loops, or conditionals. Those wouldn't be in a text config file, and people aren't expecting to have to understand them. In general, the order in which your statements execute probably shouldn't matter.
2. Stick to literal assignments. Methods and functions called on objects are harder to think through. Anything implicit is going to be a mess. If there's something complicated that has to happen with your parameters, change how they're interpreted.
3. Language keywords and error handling are right out.
I guess I ask this because I came across a situation with my Django config file where it seems to be useful to break these rules. I happen to like it, but I feel a little guilty. Basically, my project is deployed through svn checkouts to a couple different servers that won't all be configured the same (some will share a database, some won't, for example). So, I throw a hook at the end:
```
try:
from settings_overrides import *
LOCALIZED = True
except ImportError:
LOCALIZED = False
```
where settings\_overrides is on the python path but outside the working copy. What do you think, either about this example, or about python config boundaries in general? | There is a Django wiki page, which addresses exactly the thing you're asking.
<http://code.djangoproject.com/wiki/SplitSettings>
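If plain INI files are enough, the standard library can parse them, and the config file then contains nothing executable for a non-Pythonista to break. A minimal sketch (section and option names are made up; on Python 2 the module is spelled `ConfigParser`):

```python
import configparser

# What a hypothetical settings_overrides.ini might contain -- only
# assignments, no flow control, nothing that executes:
INI_TEXT = """
[database]
host = db1.example.com
shared = true
"""

config = configparser.ConfigParser()
config.read_string(INI_TEXT)  # or: config.read("settings_overrides.ini")

host = config.get("database", "host")
shared = config.getboolean("database", "shared")
print(host, shared)
```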
Do not reinvent the wheel. Use [configparser](http://docs.python.org/library/configparser.html) and INI files. Python files are too easy to break for someone who doesn't know Python. | Your heuristics are good. Rules are made so that boundaries are set and only broken when it's obviously a vastly better solution than the alternative.
Still, I can't help but wonder whether the site-checking code should be in the parser, with an additional configuration item added that selects which option should be taken.
I don't think that in this case the alternative is so bad that breaking the rules makes sense... | Is it ever polite to put code in a python configuration file? | [
"",
"python",
"django",
""
] |
I'm trying not to use the ',' char as a thousand separator when displaying a string, but to use a space instead. I guess I need to define a custom culture, but I don't seem to get it right. Any pointers?
eg: display 1000000 as 1 000 000 instead of 1,000,000
(no, `String.Replace()` is not the solution I'd like to use :P) | I suggest you find a [`NumberFormatInfo`](http://msdn.microsoft.com/en-us/library/system.globalization.numberformatinfo.aspx) which most closely matches what you want (i.e. it's right apart from the thousands separator), call [`Clone()`](http://msdn.microsoft.com/en-us/library/system.globalization.numberformatinfo.clone.aspx) on it and then set the [`NumberGroupSeparator`](http://msdn.microsoft.com/en-us/library/system.globalization.numberformatinfo.numbergroupseparator.aspx) property. (If you're going to format the numbers using currency formats, you need to change [`CurrencyGroupSeparator`](http://msdn.microsoft.com/en-us/library/system.globalization.numberformatinfo.currencygroupseparator.aspx) instead/as well.) Use that as the format info for your calls to `string.Format` etc, and you should be fine. For example:
```
using System;
using System.Globalization;
class Test
{
static void Main()
{
NumberFormatInfo nfi = (NumberFormatInfo)
CultureInfo.InvariantCulture.NumberFormat.Clone();
nfi.NumberGroupSeparator = " ";
Console.WriteLine(12345.ToString("n", nfi)); // 12 345.00
}
}
``` | Create your own [NumberFormatInfo](http://msdn.microsoft.com/en-us/library/system.globalization.numberformatinfo(VS.71).aspx) (derivative) with a different thousand separator. | Use a custom thousand separator in C# | [
"",
"c#",
"formatting",
"text-formatting",
""
] |
From the perspective of a cross application/applet java accessibility service, how would you link to a package but only optionally execute an action based on existence/availability of a package (being already loaded) at runtime?
I think what I'm interested in here is a way to resolve the [class identity crisis](http://www.ibm.com/developerworks/java/library/j-dyn0429/), but rather than the issue being between 2 apps sharing objects, it's a service loaded at a higher level of the class-loader hierarchy.
It seems like reflection is the way to go, but I am not sure how or if I can implement a derived class this way. I need to add a specific listener derived from the specific optional classes, I can load the listener using the applet class loader but the internals still fail. Say you wanted to add an JInternalFrameListener, but Swing wasn't guaranteed to be available, using reflection you can find the method to add the listener, but how can you create and have the frame listener work if it cannot find any of the related classes because they can't be found in the base classloader! Do I need to create a thread and use setContextClassLoader to the classloader that knows about swing so that I can get the class to be loaded reliably? simply trying to set the class loader on my existing thread didn't seem to work.
***Earlier description of issues***
*Sorry, I'm not quite sure what to ask or how to make this clear, so it rambles on a bit.*
Say a class uses some feature of another, but the other class may not always be available - say finding the website from JNLP if this is a JNLP app.
At one stage I thought that simply compiling against JNLP would mean that my class would not load unless JNLP was available, and so to identify this optional section I simply wrapped a `try{} catch( NoClassDefFoundError )` around it.
Later something changed (perhaps changing jdk or ?? I don't recall) and it seemed that I should also use a `try{} catch( ClassNotFoundException )`.
Now I wanted to extend this idea to other optional features, but it doesn't seem to work consistently.
Say I wanted to add some feature to do something more advanced in a JRE1.6 runtime using the same jar and classes as I run in a JRE1.3, or say I want to handle some controls in a specific gui toolkit which may not always be used like SWT or oracle.forms.
Is there some way of doing this more reliably? It just seems wrong to cause an exception and catch it to ignore it all the time.
The current issue comes down to being able to compile against oracle.forms but then the accessibility component installed in ext is unable to access the oracle.forms classes even though objects from the oracle.forms package have been created. If I throw the frmall.jar into the ext directory to test then the accessibility component works up to the point that the whole lot gets flakey because of the different versions of the same package.
I seem to be caught up on an issue with the class loader not being the right one or something (??). How do I find the right one?
Edit:
The answers so far are kind of interesting but not quite getting me where I want to be.
In the case of the gui components I currently compile in the form of a factory something like...
```
import oracle.forms.ui.*;
import java.awt.*;
static public IComponentNode newNode( INode parent, Component component ) {
System.out.println( component.getClass().toString() );
try{
if( component instanceof FormDesktopContainer )
... does stuff here like return new FormDesktopNode( parent, (FormDesktopContainer) component )
} catch ( NoClassDefFoundError a ) {
System.out.println( a.getMessage() );
}
}
```
where it prints out `class oracle.forms.ui.FormDesktopContainer` and then throws an exception on the instanceof call with NoClassDefFound, thus printing out `oracle/forms/ui/FormDesktopContainer`
**So how can it have an instance of a class yet not be able to find it?** | getSystemClass loader was not useful for this purpose as there where multiple possible class loaders to interact with based on which applet the given window was in. The accessibility components being loaded at a more base class loader cannot see the applet specific classes.
To interact with the objects, reflection does the job, though it adds much more code to maintain.
```
// statically linking would be
return component.getText();
// dynamically is
try {
return (String)component.getClass().getMethod("getText", new Class [] {}).invoke(component, new Object [] {});
} catch (Throwable e) {
e.printStackTrace();
}
```
The trickier bit is in writing a class derived from an interface that is not directly accessible, using the Proxy service allows this to be accomplished, providing the proxy service the applet specific class loader and the dynamically loaded class for the interface.
```
public void addListener(Container parent) {
if (parent == null) { return; }
if ("oracle.forms".equals(parent.getClass().getName())) {
// Using the class loader of the provided object in the applet
// get the "class" of the interface you want to implement
Class desktopListenerClass = Class.forName( "oracle.DesktopListener"
, true, parent.getClass().getClassLoader());
// Ask the proxy to create an instance of the class,
// providing your implementation through the InvocationHandler::invoke
Object desktopListener = Proxy.newProxyInstance(
parent.getClass().getClassLoader()
, new Class[] { desktopListenerClass }, new InvocationHandler() {
public Object invoke(Object proxy, Method method, Object[] args)
throws Throwable {
if ("functionName".equals(method.getName())) {
// do stuff
}
return null;
}
});
// do something with your new object
Method addDesktopListener = parent.getClass().getMethod("");
addDesktopListener.invoke(parent, desktopListener);
}
}
```
examples cut down to show general method | How about this? messy, but it ought to work:
```
public boolean exists(String className){
try {
Class.forName(className);
return true;
}
    catch (ClassNotFoundException e){
return false;
}
}
``` | How to link to a package but only optionally execute an action based on existence/availability of package at runtime? | [
"",
"java",
"user-interface",
"accessibility",
""
] |
I have created a LINQ to SQL datacontext with a single datatable in it. I am trying to simply insert a new record into that table. The problem I am coming across is LINQ is not offering an Add method to pass in the new record to. I have seen countless examples where there is an Add method, but I can't seem to get it. Am I completely missing something or is it something else?
```
using (praetoriaTestDataContext db = new praetoriaTestDataContext())
{
PageHit hit = new PageHit();
hit.DateViewed = DateTime.Now;
hit.Page = "test";
db.PageHits.Add(hit); //Add method is not available!
db.SubmitChanges();
}
```
Thanks! | Table's Add and Remove methods have been renamed to InsertOnSubmit and DeleteOnSubmit.
```
db.PageHits.InsertOnSubmit(hit);
``` | With LINQ-to-SQL, you want `PageHits.`[`InsertOnSubmit`](http://msdn.microsoft.com/en-us/library/bb763516.aspx) | LINQ to SQL - No Add method available | [
"",
"c#",
"linq-to-sql",
"insert",
""
] |
Is there a way in AJAX or JS to add further inputs upon a button click? | In short, yes you can add more inputs on a button click.
For example, in jQuery, you could have something like this where the buttonID is the id attribute for the button and the formID is the id attribute for your form:
```
$("#buttonID").click(function() {
    //add new inputs here, something like:
    $("#formID").append('<input type="text" id="newInput" name="newInput" />');
});
```
You can also have the additional inputs hidden to start off with and then 'un-hide' them on a click if you want. | Further inputs? Run any JavaScript you want when a user clicks a button by adding an event listener to the button that listens for a click. | Adding inputs with AJAX? | [
"",
"javascript",
"ajax",
""
] |
In my WPF project there is a listbox in which I have to display images and next to each image their text (for example : date the photo was taken, location etc).
I have tried creating a generic List but I still can't assign it to the listbox
Basically, I have been trying something along these lines.
```
public class LoadImages
{
public static List<ImageLoader> LoadImages()
{
List<ImageLoader> img = new List<ImageLoader>();
Uri uri = new Uri(@"http://somedomain.com/pic.jpg", UriKind.Absolute);
BitmapImage bi = new BitmapImage(uri);
img.Add(new ImageLoader("1_1",bi));
return img;
}
}
public class ImageLoader
{
string mediaid;
BitmapImage thumbnail;
public ImageLoader(string mediaid, BitmapImage b)
{
this.mediaid = mediaid;
this.thumbnail = b;
}
}
```
And my XAML looks like this.
```
<ListBox Name="ListBox1" SelectionMode="Extended" ItemsSource="{Binding}"
Width="300" Height="300" IsSynchronizedWithCurrentItem="True">
<ListBox.ItemTemplate>
<DataTemplate>
<Grid>
<Image Source="{Binding thumbnail}"/>
<TextBlock Text="{Binding mediaid}"/>
</Grid>
</DataTemplate>
</ListBox.ItemTemplate>
</ListBox>
```
For the time being I have set the Window.DataContext
```
<Window.DataContext>
<ObjectDataProvider ObjectType="{x:Type local:LoadImages}" MethodName="LoadImages"/>
</Window.DataContext>
```
But every time I run the app, the listbox shows up empty.
Any suggestions? | After a lot of reading and googling I found my answer: all I had to do in the ImageLoader class was to create properties for mediaid and thumbnail, and after that, binding to the listbox works like a charm. So the ImageLoader class asked about in the question above now looks like
```
public class ImageLoader
{
string mediaid;
BitmapImage thumbnail;
public string MediaId
{
get { return mediaid; }
set { mediaid = value; }
}
public BitmapImage Thumbnail
{
get { return thumbnail; }
set { thumbnail = value; }
}
public ImageLoader(string mediaid, BitmapImage b)
{
this.mediaid = mediaid;
this.thumbnail = b;
}
}
``` | The Image element does support Uris for its Source property. Why not make your LoadImages class return a set of Uris instead of images? Also, the Image element can do the async job for you ;)
"",
"c#",
"wpf",
"image",
"listbox",
"generic-list",
""
] |
Is there a way to limit a form input field to a certain number range, say (0, 100)?
I'm filtering the input in the onkeydown event to accept only numbers. The problem is that I want to reject a digit if it would make the input go out of range, so I need a way to see whether the current value of the input plus the key the user is pressing would stay within the range.
I tried using:
```
if((parseFloat(this.value) + parseFloat(String.fromCharCode(e.keyCode))) > 100){
return false;
}
```
The thing is, e.keyCode can return different codes for the same number: right now it returns 57 for the number 9, but 105 if I press the number on the numpad.
Is there a way to accomplish this? | Trying to anticipate what the resulting value is going to be is harder than you think. Remember the user might be pressing backspace, or the cursor might not be at the end of the field, or the user might have part of the value selected, to be replaced on next keypress, and so on. It's also possible to manipulate the text field through mouse operations you won't get any say in.
The traditional approach is to put your validation on the ‘keyup’ event instead of ‘keypress’. Then you get the full, post-change value of the field. You don't get the chance to deny the keypress, but you can reset the field to the last-known-good value instead.
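A minimal sketch of that keyup approach (the function name and wiring are illustrative, not from the question's code):

```javascript
// Keep the last valid value; on keyup, inspect the full post-change value
// and restore the last-known-good one if the new value is out of range.
function attachRangeGuard(input, min, max) {
    var lastGood = input.value;
    input.onkeyup = function () {
        var v = parseFloat(input.value);
        // Allow empty input while the user is still typing; only reject
        // values that parse to a number outside [min, max].
        if (input.value === '' || (!isNaN(v) && v >= min && v <= max)) {
            lastGood = input.value;
        } else {
            input.value = lastGood;
        }
    };
}

// In a page you would wire it up with something like:
//   attachRangeGuard(document.getElementById('amount'), 0, 100);
```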
But either way it's best not to try to constrain input too tightly, because this can make it terribly difficult to type. For example, “12.” is an invalid number you might want to deny... but if you did, it would become very difficult to type “12.3”! Better to allow any input, but signal when the current input is out of bounds, by some mechanism (eg. turning the text red is common). | Personally, I would just check it when the field loses focus (or when the form is submitted). Popping up errors as the user is typing (or preventing their keystrokes from registering in the field) is usually just going to annoy them.
And of course you probably knew this already, but make sure you check the value on the server side after the form is submitted as well. Never rely on javascript validation! | Limit an html form input to a certain float range | [
"",
"javascript",
"filtering",
""
] |
Can anyone confirm the best approach for storing jQuery code? Initially I was inserting everything in the HEAD of a document, but I am seeing more and more code being reused across pages.
Is the best way to use include files with the extension .js?
Anything else or better with respect to this?
What would be the best place to store my .js files if this is the case?
Off the root I have
/css
/scripts
etc. /scripts is where my jQuery files are, but should I be using the same directory for my own .js files?
Anyone have some recommendation on directory structure? | You should store as much code as you can in .js files, because that would allow the browser to download just once and re-use in all requests.
If you include your code directly in the HEAD section as plain text you're forcing that code to be downloaded in each request, slowing down the page transfer.
The directory where you store the files is up to you really... I use `/css` and `/js`, but `/scripts` is used as well in many ocassions. | Where you insert your JavaScript on the page depends on what you're trying to do. There are [arguments](http://wonko.com/post/remote_javascript_includes_without_the_performance_penalty_part) to link your JavaScript at the end of the page for speed's sake, so that the browser can load the page before attempting to parse/execute any JavaScript. Of course, if you need the JavaScript to execute before your page loads, you'd need to put in the head of your document. It's really up to you and your needs.
.js is the standard extensions for JavaScript files, so it'd be preferable to keep using that. It's a good idea to link to your JavaScript files instead of putting them right on the page, so the user's browser can cache the JavaScript files.
Where you store the scripts is up to you, but it seems like a fine idea to store all your jQuery files in a /scripts directory. | Storing jquery code in external file and directory structure? | [
"",
"javascript",
"jquery",
"structure",
"external",
"include",
""
] |
Problem: I want to implement several PHP worker processes that listen on an MQ-server queue for asynchronous jobs. The problem now is that simply running these processes as daemons on a server doesn't really give me any level of control over the instances (load, status, locked up)... except maybe for dumping ps -aux.
Because of that I'm looking for a runtime environment of some kind that lets me monitor and control the instances, either on system (process) level or on a higher layer (some kind of Java-style appserver)
Any pointers? | Here's some code that may be useful.
```
<?
define('WANT_PROCESSORS', 5);
define('PROCESSOR_EXECUTABLE', '/path/to/your/processor');
set_time_limit(0);
$cycles = 0;
$run = true;
$reload = false;
declare(ticks = 30);
function signal_handler($signal) {
switch($signal) {
case SIGTERM :
global $run;
$run = false;
break;
case SIGHUP :
global $reload;
$reload = true;
break;
}
}
pcntl_signal(SIGTERM, 'signal_handler');
pcntl_signal(SIGHUP, 'signal_handler');
function spawn_processor() {
$pid = pcntl_fork();
if($pid) {
global $processors;
$processors[] = $pid;
} else {
if(posix_setsid() == -1)
die("Forked process could not detach from terminal\n");
fclose(stdin);
fclose(stdout);
fclose(stderr);
pcntl_exec(PROCESSOR_EXECUTABLE);
die('Failed to fork ' . PROCESSOR_EXECUTABLE . "\n");
}
}
function spawn_processors() {
global $processors;
if($processors)
kill_processors();
$processors = array();
for($ix = 0; $ix < WANT_PROCESSORS; $ix++)
spawn_processor();
}
function kill_processors() {
global $processors;
foreach($processors as $processor)
posix_kill($processor, SIGTERM);
foreach($processors as $processor)
pcntl_waitpid($processor);
unset($processors);
}
function check_processors() {
global $processors;
$valid = array();
foreach($processors as $processor) {
pcntl_waitpid($processor, $status, WNOHANG);
if(posix_getsid($processor))
$valid[] = $processor;
}
$processors = $valid;
if(count($processors) > WANT_PROCESSORS) {
for($ix = count($processors) - 1; $ix >= WANT_PROCESSORS; $ix--)
posix_kill($processors[$ix], SIGTERM);
for($ix = count($processors) - 1; $ix >= WANT_PROCESSORS; $ix--)
pcntl_waitpid($processors[$ix]);
} elseif(count($processors) < WANT_PROCESSORS) {
for($ix = count($processors); $ix < WANT_PROCESSORS; $ix++)
spawn_processor();
}
}
spawn_processors();
while($run) {
$cycles++;
if($reload) {
$reload = false;
kill_processors();
spawn_processors();
} else {
check_processors();
}
usleep(150000);
}
kill_processors();
pcntl_wait();
?>
``` | It sounds like you already have an MQ up and running on a \*nix system and just want a way to manage workers.
A very simple way to do so is to use GNU screen. To start 10 workers you can use:
```
#!/bin/sh
for x in `seq 1 10` ; do
screen -dmS worker_$x php /path/to/script.php worker$x
done
```
This will start 10 workers in the background using screens named worker\_1,2,3 and so on.
You can reattach to the screens by running screen -r worker\_ and list the running workers by using screen -list.
For more info this guide may be of help:
<http://www.kuro5hin.org/story/2004/3/9/16838/14935>
Also try:
* screen --help
* man screen
* or [google](http://www.google.com/search?q=introduction%20to%20gnu%20screen).
For production servers I would normally recommend using the normal system startup scripts, but I have been running screen commands from the startup scripts for years with no problems. | PHP Daemon/worker environment | [
"",
"php",
"parallel-processing",
"daemon",
"rabbitmq",
"process-control",
""
] |
I'm trying to get the CREATE scripts for existing tables within SQL Server 2008. I assume I can do this by querying sys.tables somehow; however, that isn't returning the CREATE script data. | Possibly this will be helpful for you. This script generates the indexes, FKs, PK, and general structure for any table.
For example -
**DDL:**
```
CREATE TABLE [dbo].[WorkOut](
[WorkOutID] [bigint] IDENTITY(1,1) NOT NULL,
[TimeSheetDate] [datetime] NOT NULL,
[DateOut] [datetime] NOT NULL,
[EmployeeID] [int] NOT NULL,
[IsMainWorkPlace] [bit] NOT NULL,
[DepartmentUID] [uniqueidentifier] NOT NULL,
[WorkPlaceUID] [uniqueidentifier] NULL,
[TeamUID] [uniqueidentifier] NULL,
[WorkShiftCD] [nvarchar](10) NULL,
[WorkHours] [real] NULL,
[AbsenceCode] [varchar](25) NULL,
[PaymentType] [char](2) NULL,
[CategoryID] [int] NULL,
[Year] AS (datepart(year,[TimeSheetDate])),
CONSTRAINT [PK_WorkOut] PRIMARY KEY CLUSTERED
(
[WorkOutID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
ALTER TABLE [dbo].[WorkOut] ADD
CONSTRAINT [DF__WorkOut__IsMainW__2C1E8537] DEFAULT ((1)) FOR [IsMainWorkPlace]
ALTER TABLE [dbo].[WorkOut] WITH CHECK ADD CONSTRAINT [FK_WorkOut_Employee_EmployeeID] FOREIGN KEY([EmployeeID])
REFERENCES [dbo].[Employee] ([EmployeeID])
ALTER TABLE [dbo].[WorkOut] CHECK CONSTRAINT [FK_WorkOut_Employee_EmployeeID]
```
**Query:**
```
DECLARE @table_name SYSNAME
SELECT @table_name = 'dbo.WorkOut'
DECLARE
@object_name SYSNAME
, @object_id INT
SELECT
@object_name = '[' + s.name + '].[' + o.name + ']'
, @object_id = o.[object_id]
FROM sys.objects o WITH (NOWAIT)
JOIN sys.schemas s WITH (NOWAIT) ON o.[schema_id] = s.[schema_id]
WHERE s.name + '.' + o.name = @table_name
AND o.[type] = 'U'
AND o.is_ms_shipped = 0
DECLARE @SQL NVARCHAR(MAX) = ''
;WITH index_column AS
(
SELECT
ic.[object_id]
, ic.index_id
, ic.is_descending_key
, ic.is_included_column
, c.name
FROM sys.index_columns ic WITH (NOWAIT)
JOIN sys.columns c WITH (NOWAIT) ON ic.[object_id] = c.[object_id] AND ic.column_id = c.column_id
WHERE ic.[object_id] = @object_id
),
fk_columns AS
(
SELECT
k.constraint_object_id
, cname = c.name
, rcname = rc.name
FROM sys.foreign_key_columns k WITH (NOWAIT)
JOIN sys.columns rc WITH (NOWAIT) ON rc.[object_id] = k.referenced_object_id AND rc.column_id = k.referenced_column_id
JOIN sys.columns c WITH (NOWAIT) ON c.[object_id] = k.parent_object_id AND c.column_id = k.parent_column_id
WHERE k.parent_object_id = @object_id
)
SELECT @SQL = 'CREATE TABLE ' + @object_name + CHAR(13) + '(' + CHAR(13) + STUFF((
SELECT CHAR(9) + ', [' + c.name + '] ' +
CASE WHEN c.is_computed = 1
THEN 'AS ' + cc.[definition]
ELSE UPPER(tp.name) +
CASE WHEN tp.name IN ('varchar', 'char', 'varbinary', 'binary', 'text')
THEN '(' + CASE WHEN c.max_length = -1 THEN 'MAX' ELSE CAST(c.max_length AS VARCHAR(5)) END + ')'
WHEN tp.name IN ('nvarchar', 'nchar', 'ntext')
THEN '(' + CASE WHEN c.max_length = -1 THEN 'MAX' ELSE CAST(c.max_length / 2 AS VARCHAR(5)) END + ')'
WHEN tp.name IN ('datetime2', 'time2', 'datetimeoffset')
THEN '(' + CAST(c.scale AS VARCHAR(5)) + ')'
WHEN t.name IN ('decimal', 'numeric')
THEN '(' + CAST(c.[precision] AS VARCHAR(5)) + ',' + CAST(c.scale AS VARCHAR(5)) + ')'
ELSE ''
END +
CASE WHEN c.collation_name IS NOT NULL THEN ' COLLATE ' + c.collation_name ELSE '' END +
CASE WHEN c.is_nullable = 1 THEN ' NULL' ELSE ' NOT NULL' END +
CASE WHEN dc.[definition] IS NOT NULL THEN ' DEFAULT' + dc.[definition] ELSE '' END +
CASE WHEN ic.is_identity = 1 THEN ' IDENTITY(' + CAST(ISNULL(ic.seed_value, '0') AS CHAR(1)) + ',' + CAST(ISNULL(ic.increment_value, '1') AS CHAR(1)) + ')' ELSE '' END
END + CHAR(13)
FROM sys.columns c WITH (NOWAIT)
JOIN sys.types tp WITH (NOWAIT) ON c.user_type_id = tp.user_type_id
LEFT JOIN sys.computed_columns cc WITH (NOWAIT) ON c.[object_id] = cc.[object_id] AND c.column_id = cc.column_id
LEFT JOIN sys.default_constraints dc WITH (NOWAIT) ON c.default_object_id != 0 AND c.[object_id] = dc.parent_object_id AND c.column_id = dc.parent_column_id
LEFT JOIN sys.identity_columns ic WITH (NOWAIT) ON c.is_identity = 1 AND c.[object_id] = ic.[object_id] AND c.column_id = ic.column_id
WHERE c.[object_id] = @object_id
ORDER BY c.column_id
FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 2, CHAR(9) + ' ')
+ ISNULL((SELECT CHAR(9) + ', CONSTRAINT [' + k.name + '] PRIMARY KEY (' +
(SELECT STUFF((
SELECT ', [' + c.name + '] ' + CASE WHEN ic.is_descending_key = 1 THEN 'DESC' ELSE 'ASC' END
FROM sys.index_columns ic WITH (NOWAIT)
JOIN sys.columns c WITH (NOWAIT) ON c.[object_id] = ic.[object_id] AND c.column_id = ic.column_id
WHERE ic.is_included_column = 0
AND ic.[object_id] = k.parent_object_id
AND ic.index_id = k.unique_index_id
FOR XML PATH(N''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 2, ''))
+ ')' + CHAR(13)
FROM sys.key_constraints k WITH (NOWAIT)
WHERE k.parent_object_id = @object_id
AND k.[type] = 'PK'), '') + ')' + CHAR(13)
+ ISNULL((SELECT (
SELECT CHAR(13) +
'ALTER TABLE ' + @object_name + ' WITH'
+ CASE WHEN fk.is_not_trusted = 1
THEN ' NOCHECK'
ELSE ' CHECK'
END +
' ADD CONSTRAINT [' + fk.name + '] FOREIGN KEY('
+ STUFF((
SELECT ', [' + k.cname + ']'
FROM fk_columns k
WHERE k.constraint_object_id = fk.[object_id]
FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 2, '')
+ ')' +
' REFERENCES [' + SCHEMA_NAME(ro.[schema_id]) + '].[' + ro.name + '] ('
+ STUFF((
SELECT ', [' + k.rcname + ']'
FROM fk_columns k
WHERE k.constraint_object_id = fk.[object_id]
FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 2, '')
+ ')'
+ CASE
WHEN fk.delete_referential_action = 1 THEN ' ON DELETE CASCADE'
WHEN fk.delete_referential_action = 2 THEN ' ON DELETE SET NULL'
WHEN fk.delete_referential_action = 3 THEN ' ON DELETE SET DEFAULT'
ELSE ''
END
+ CASE
WHEN fk.update_referential_action = 1 THEN ' ON UPDATE CASCADE'
WHEN fk.update_referential_action = 2 THEN ' ON UPDATE SET NULL'
WHEN fk.update_referential_action = 3 THEN ' ON UPDATE SET DEFAULT'
ELSE ''
END
+ CHAR(13) + 'ALTER TABLE ' + @object_name + ' CHECK CONSTRAINT [' + fk.name + ']' + CHAR(13)
FROM sys.foreign_keys fk WITH (NOWAIT)
JOIN sys.objects ro WITH (NOWAIT) ON ro.[object_id] = fk.referenced_object_id
WHERE fk.parent_object_id = @object_id
FOR XML PATH(N''), TYPE).value('.', 'NVARCHAR(MAX)')), '')
+ ISNULL(((SELECT
CHAR(13) + 'CREATE' + CASE WHEN i.is_unique = 1 THEN ' UNIQUE' ELSE '' END
+ ' NONCLUSTERED INDEX [' + i.name + '] ON ' + @object_name + ' (' +
STUFF((
SELECT ', [' + c.name + ']' + CASE WHEN c.is_descending_key = 1 THEN ' DESC' ELSE ' ASC' END
FROM index_column c
WHERE c.is_included_column = 0
AND c.index_id = i.index_id
FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 2, '') + ')'
+ ISNULL(CHAR(13) + 'INCLUDE (' +
STUFF((
SELECT ', [' + c.name + ']'
FROM index_column c
WHERE c.is_included_column = 1
AND c.index_id = i.index_id
FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 2, '') + ')', '') + CHAR(13)
FROM sys.indexes i WITH (NOWAIT)
WHERE i.[object_id] = @object_id
AND i.is_primary_key = 0
AND i.[type] = 2
FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)')
), '')
PRINT @SQL
--EXEC sys.sp_executesql @SQL
```
**Output:**
```
CREATE TABLE [dbo].[WorkOut]
(
[WorkOutID] BIGINT NOT NULL IDENTITY(1,1)
, [TimeSheetDate] DATETIME NOT NULL
, [DateOut] DATETIME NOT NULL
, [EmployeeID] INT NOT NULL
, [IsMainWorkPlace] BIT NOT NULL DEFAULT((1))
, [DepartmentUID] UNIQUEIDENTIFIER NOT NULL
, [WorkPlaceUID] UNIQUEIDENTIFIER NULL
, [TeamUID] UNIQUEIDENTIFIER NULL
, [WorkShiftCD] NVARCHAR(10) COLLATE Cyrillic_General_CI_AS NULL
, [WorkHours] REAL NULL
, [AbsenceCode] VARCHAR(25) COLLATE Cyrillic_General_CI_AS NULL
, [PaymentType] CHAR(2) COLLATE Cyrillic_General_CI_AS NULL
, [CategoryID] INT NULL
, [Year] AS (datepart(year,[TimeSheetDate]))
, CONSTRAINT [PK_WorkOut] PRIMARY KEY ([WorkOutID] ASC)
)
ALTER TABLE [dbo].[WorkOut] WITH CHECK ADD CONSTRAINT [FK_WorkOut_Employee_EmployeeID] FOREIGN KEY([EmployeeID]) REFERENCES [dbo].[Employee] ([EmployeeID])
ALTER TABLE [dbo].[WorkOut] CHECK CONSTRAINT [FK_WorkOut_Employee_EmployeeID]
CREATE NONCLUSTERED INDEX [IX_WorkOut_WorkShiftCD_AbsenceCode] ON [dbo].[WorkOut] ([WorkShiftCD] ASC, [AbsenceCode] ASC)
INCLUDE ([WorkOutID], [WorkHours])
```
**Also check this article -**
[How to Generate a CREATE TABLE Script For an Existing Table: Part 1](http://www.c-sharpcorner.com/UploadFile/67b45a/how-to-generate-a-create-table-script-for-an-existing-table/) | Do you mean you wish to create a T-SQL script which generates a CREATE script, or use the management tools in SQL Server Management Studio to generate a CREATE script?
If it's the latter, it's a simple matter of right-clicking a table and selecting Script Table As -> Create To -> New Query Window.
If you want the whole database scripted, then right-click the database and select Tasks -> Generate Scripts... and then follow the wizard.
Otherwise it's a matter of selecting all sorts of fun things out of the various system tables.
"",
"sql",
"sql-server",
"t-sql",
""
] |
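The T-SQL above builds each clause by stitching per-column fragments together with `STUFF(... FOR XML PATH ...)`. As a language-agnostic illustration of that assembly step, here is a small Python sketch; the metadata dictionaries are hypothetical stand-ins for what `sys.columns`/`sys.types` would return, not real output:

```python
# Hypothetical column metadata, shaped like a simplified sys.columns result.
columns = [
    {"name": "WorkOutID", "type": "BIGINT", "nullable": False, "identity": (1, 1)},
    {"name": "TimeSheetDate", "type": "DATETIME", "nullable": False, "identity": None},
    {"name": "WorkHours", "type": "REAL", "nullable": True, "identity": None},
]

def column_def(col):
    """Render one column definition, mirroring the per-column CASE logic above."""
    parts = ["[" + col["name"] + "]", col["type"]]
    parts.append("NOT NULL" if not col["nullable"] else "NULL")
    if col["identity"]:
        seed, inc = col["identity"]
        parts.append("IDENTITY(%d,%d)" % (seed, inc))
    return " ".join(parts)

def create_table(schema, table, cols):
    # Join fragments with a leading ", " separator, like the STUFF trick.
    body = "\n, ".join(column_def(c) for c in cols)
    return "CREATE TABLE [%s].[%s]\n(\n%s\n)" % (schema, table, body)

print(create_table("dbo", "WorkOut", columns))
```

The real script additionally handles collations, computed columns, defaults, keys and indexes; the sketch only shows the string-assembly idea.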
Is there a Java library for rotating JPEG files in increments of 90 degrees, without incurring image degradation? | I found this: <http://mediachest.sourceforge.net/mediautil/>
API: <http://mediachest.sourceforge.net/mediautil/javadocs/mediautil/image/jpeg/LLJTran.html> | Building on Henry's answer, here's an example of how to use [MediaUtil](http://mediachest.sourceforge.net/mediautil/) to perform lossless JPEG rotation based on the EXIF data:
```
try {
// Read image EXIF data
LLJTran llj = new LLJTran(imageFile);
llj.read(LLJTran.READ_INFO, true);
AbstractImageInfo<?> imageInfo = llj.getImageInfo();
if (!(imageInfo instanceof Exif))
throw new Exception("Image has no EXIF data");
// Determine the orientation
Exif exif = (Exif) imageInfo;
int orientation = 1;
Entry orientationTag = exif.getTagValue(Exif.ORIENTATION, true);
if (orientationTag != null)
orientation = (Integer) orientationTag.getValue(0);
// Determine required transform operation
int operation = 0;
if (orientation > 0
&& orientation < Exif.opToCorrectOrientation.length)
operation = Exif.opToCorrectOrientation[orientation];
if (operation == 0)
throw new Exception("Image orientation is already correct");
OutputStream output = null;
try {
// Transform image
llj.read(LLJTran.READ_ALL, true);
llj.transform(operation, LLJTran.OPT_DEFAULTS
| LLJTran.OPT_XFORM_ORIENTATION);
// Overwrite original file
output = new BufferedOutputStream(new FileOutputStream(imageFile));
llj.save(output, LLJTran.OPT_WRITE_ALL);
} finally {
IOUtils.closeQuietly(output);
llj.freeMemory();
}
} catch (Exception e) {
// Unable to rotate image based on EXIF data
...
}
``` | Lossless JPEG Rotate (90/180/270 degrees) in Java? | [
"",
"java",
"algorithm",
"jpeg",
"rotation",
"lossless",
""
] |
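The MediaUtil snippet above maps the EXIF orientation tag to a transform operation via `Exif.opToCorrectOrientation`. For reference, here is the commonly used orientation-to-rotation mapping sketched in Python; only the pure-rotation values are shown (values 2, 4, 5 and 7 also involve a mirror flip), and the mapping should be verified against the EXIF specification:

```python
# EXIF orientation values and the rotation (degrees clockwise) needed to
# display the image upright. Mirror-flip orientations are omitted here.
ROTATION_FOR_ORIENTATION = {
    1: 0,    # already upright
    3: 180,  # upside down
    6: 90,   # rotate 90 degrees CW to correct
    8: 270,  # rotate 270 degrees CW (i.e. 90 CCW) to correct
}

def needed_rotation(orientation):
    """Return degrees clockwise to rotate, or None for unsupported values."""
    return ROTATION_FOR_ORIENTATION.get(orientation)
```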
I have a static method used to launch a browser with a given URL. When the browser is already open, this takes over the active browser window.
This is a problem if the browser is in use for something else, such as data entry. Is there a way to open a URL in a new Browser window (or tab)?
```
public static void openURL(String urlText)
{
if (Desktop.isDesktopSupported())
{
URI uri = URI.create(urlText);
try
{
Desktop.getDesktop().browse(uri);
}
catch (IOException e)
{
e.printStackTrace();
}
}
}
```
Alternatively, is there a better way to do this? | Have a look at [BrowserLaunch2](http://browserlaunch2.sourceforge.net/). In particular [this method](http://browserlaunch2.sourceforge.net/docs.shtml#forcing_new_browser_win) for forcing a new window to be opened. | You can use Yose's method to identify which browser is the default and then use the following
commands:
* Firefox: `firefox.exe -new-tab`
* IE7: <http://blogs.msdn.com/tonyschr/archive/2007/01/19/ie-automation-amp-tabs.aspx>
* Opera: `opera -newpage`
Note that some browsers do not support tabs, or have no command-line params to open a URL in a tab. In that case you can use the AHK scripting engine. | Can Java's Desktop library launch a URL in a new Browser Tab or Window? | [
"",
"java",
"desktop",
"uri",
""
] |
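The per-browser flags quoted above can be wrapped in a small command builder and handed to a process launcher (in Java, `Runtime.exec`). A hedged Python sketch — the flags are taken from the answer and may vary by browser version, and `build_command` is a hypothetical helper, not a library API:

```python
# "Open in new tab/window" flags per browser, as suggested in the answer.
# Treat the exact flags as assumptions that depend on the browser version.
NEW_TAB_FLAGS = {
    "firefox": ["-new-tab"],
    "opera": ["-newpage"],
}

def build_command(browser, url):
    """Build an argv list asking the browser for a new tab where supported."""
    return [browser] + NEW_TAB_FLAGS.get(browser, []) + [url]

# e.g. subprocess.call(build_command("firefox", "http://example.com"))
print(build_command("firefox", "http://example.com"))
```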
I have this class:
```
public abstract class AbstractIncomingCall {
/*
class properties
*/
public void changeStatus(/*some parameters*/){
//store parameters in class properties
isValid();
}
protected abstract boolean isValid();
}
```
...which is extended by this class:
```
public class IncomingCallImpl extends AbstractIncomingCall{
//I override the parent's method
public void changeStatus(/*same parameters as parent's method*/) {
super.changeStatus(/*same parameters as parent's method*/);
//do something interesting
}
protected boolean isValid() throws StatusChangeNotOccurredException {
//implement my validation algorithm
}
}
```
What I would like to achieve is ensuring that whenever `changeStatus(/*some parameters*/)` gets called, the `isValid()` method is called; note that the `isValid()` method is implemented only in the concrete class, and it uses class properties inherited from the parent class.
Is there a way I can follow to ensure that isValid() is called other than calling super?
I very much dislike the fact that I have to pass parameters all around; I think I'm going totally in the wrong direction and that there is a cleaner way to achieve this.
What I would like to keep in my code is the "isValid() calling logic" in the abstract class, because every call needs to be validated and I can't rely on me remembering this in the future :P
Thanks in advance :] | It sounds like you want changeStatus() to follow the Template Method pattern. In this pattern, you define changeStatus() in the abstract class (making it final if you don't trust people to extend properly), and have it call the required methods:
```
public final void changeStatus()
{
    doSomethingSubclassSpecific();
    isValid();
}

protected abstract void doSomethingSubclassSpecific();
``` | You don't need to override `changeStatus()`. You are very close to implement the [Template Method](http://en.wikipedia.org/wiki/Template_method_pattern) design pattern. | Ensuring method is called | [
"",
"java",
"inheritance",
""
] |
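The accepted Template Method approach above is language-agnostic. Here is a minimal Python sketch of the same shape — the base class owns `change_status()` and guarantees the validation hook runs — using names that mirror the question (the toy validation rule is an assumption for illustration only):

```python
# Template Method sketch: the base class controls the call sequence and
# always invokes is_valid(); subclasses only fill in the hooks.
class AbstractIncomingCall:
    def change_status(self, **params):
        self._params = params            # store parameters in "class properties"
        self.do_something_interesting()  # subclass-specific step
        return self.is_valid()           # validation is guaranteed to run

    def do_something_interesting(self):
        raise NotImplementedError

    def is_valid(self):
        raise NotImplementedError

class IncomingCallImpl(AbstractIncomingCall):
    def do_something_interesting(self):
        self.touched = True

    def is_valid(self):
        return "status" in self._params  # toy validation rule

call = IncomingCallImpl()
print(call.change_status(status="ringing"))  # True
```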
Assuming I have a table `foo` where I have something like this:
`id`, `user_id`, `timestamp`, `some_value`
What I want to do is remove all rows that aren't the newest N **per user**.
The deletion itself could be handled by a:
```
DELETE FROM foo WHERE id NOT IN (...)
```
so you could rephrase the problem as: how do I get the newest N (there might be fewer) rows **for each user**? This means if I have U users I may end up with `N*U` rows, so `LIMIT` won't really work. | MySQL does not support reading from a table with `SELECT` and performing an `UPDATE`/`INSERT`/`DELETE` on the same table in the same query. So doing what you want in one statement is going to be tricky.
I would do it in two stages: first, query the newest `$N` records per user, and store them in a temporary table:
```
CREATE TEMPORARY TABLE foo_top_n
SELECT f1.id
FROM foo f1 LEFT OUTER JOIN foo f2
ON (f1.user_id = f2.user_id AND f1.id < f2.id)
GROUP BY f1.id
HAVING COUNT(f2.id) < $N;
```
Next, use the multi-table `DELETE` syntax and join `foo` to the temporary table, deleting where no match is found:
```
DELETE f1 FROM foo f1 LEFT OUTER JOIN foo_top_n f2 USING (id)
WHERE f2.id IS NULL;
``` | Actually, it is possible to do it in a single query:
```
DELETE l.*
FROM foo l
JOIN (
SELECT user_id,
COALESCE(
(
SELECT timestamp
FROM foo li
WHERE li.user_id = dlo.user_id
ORDER BY
li.user_id DESC, li.timestamp DESC
LIMIT 2, 1
), CAST('0001-01-01' AS DATETIME)) AS mts,
COALESCE(
(
SELECT id
FROM foo li
WHERE li.user_id = dlo.user_id
ORDER BY
li.user_id DESC, li.timestamp DESC, li.id DESC
LIMIT 2, 1
), -1) AS mid
FROM (
SELECT DISTINCT user_id
FROM foo dl
) dlo
) lo
ON l.user_id = lo.user_id
AND (l.timestamp, l.id) < (mts, mid)
```
See detailed explanations here:
* [**Keeping rows**](http://explainextended.com/2009/04/07/keeping-rows/) (how to delete all rows except `TOP N`)
* [**Advanced row sampling**](http://explainextended.com/2009/03/06/advanced-row-sampling/) (how to select `TOP N` rows for each `GROUP`)
* [**Keeping latest rows for a group**](http://explainextended.com/2009/04/26/keeping-latest-rows-for-a-group/) (how to use the approaches above together) | How can I remove all rows that aren't the newest N for each user in MySQL? | [
"",
"sql",
"mysql",
"database",
""
] |
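The self-join in the accepted answer keeps a row when fewer than N rows for the same user have a larger id. The same predicate spelled out in a small Python sketch, with rows reduced to `(id, user_id)` pairs for illustration:

```python
# Keep a row if fewer than n rows for the same user have a larger id,
# i.e. the row is among the newest n per user (assuming id grows over time).
def newest_n_per_user(rows, n):
    keep = set()
    for rid, user in rows:
        newer = sum(1 for rid2, user2 in rows if user2 == user and rid2 > rid)
        if newer < n:
            keep.add(rid)
    return keep

rows = [(1, "a"), (2, "a"), (3, "a"), (4, "b"), (5, "b")]
print(sorted(newest_n_per_user(rows, 2)))  # [2, 3, 4, 5]
```

Everything *not* in that set is what the final `DELETE ... WHERE f2.id IS NULL` removes.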
This morning, I read a very good question about what a person should expect from a SharePoint position. I have a similar question about server-side engineering. What can I expect from server-side engineering positions, and how are they similar to and different from desktop development?
I have experience with WinForms, WPF, some light multithreading experience, some experience using Web services, some experience writing some thin, simple web services, writing data access layers (DAL), and some experience setting up and using SQL Server based database with a CRUD style interface and using stored procedures.
My team doesn't have a person with the server side engineer title, so I really am not sure what those kinds of skills are like. | They are very different worlds. I've worked both sides of the camp, but mostly server side. I work classic client-server rather than web though (yes, we are still around).
* Client-side is all about user interaction, server-side (most often) has no user. This is actually quite liberating.
* Server-side means thinking about managers, rather than users. You need to provide access for fault-finding, reporting, diagnostic logging - your own *and* the Windows event log usually.
* Client-side is ephemeral (users come and go), server-side is persistent. Resource management is therefore paramount: leaks mean death. Memory leaks, handle leaks, heap fragmentation. You end up dreaming about this stuff. I got dragged to Spain on 24 hours notice because one system could blow the top off the Windows DDE memory allocator in an anomalous condition, which tells you how important this stuff is.
* On the client side responsiveness to the user (servicing the GUI) is everything, on the server-side it's more complicated. Threading becomes more important, but threading for scalability rather than for keeping the GUI responsive. I haven't started counting processor cycles or checking interrupt latency figures like I used to in my firmware days, but it's getting close.
* Security becomes important, but less than you might think. Not every server application is internet-facing. Think in terms of levels of access, limiting access through views etc.
* Once upon a time server-side always meant native, but that's changing.
### Edit:
By threading for scalability, I mean that it's quite easy to apply threads in a way that doesn't scale. Spawning one thread for each query is fine, until your modelling tells you that you might have five hundred concurrent queries. So you need to think in terms of thread pools and queuing.
As far as "managers" are concerned, there are really two types of requirement here, one is reporting for SysAdms, and one is reporting for be-suited management types. SysAdms need help with fault-finding, which I take to be systemic faults that happen to impinge on your applications (network outages, network storms, hard disk full/crash, server failover, invoking DR etc) and diagnostics, which I take to be reporting anomalous behaviour of your *own* applications.
SysAdms have very short term needs - this hour, this day, email me, SMS me, get it back up etc. They need detailed technical information available all the time, because you don't know when you'll *really* need it. But no matter how much information you give them, only the very best SysAdms won't pick up the phone and ring you when the ordure hits the air movement device.
Managers need medium to long term performance reporting, how many queries per day/week/month, how did I do last week, what do I need to do to improve this week, how do I make performance to target visible to my staff/peers/superiors etc. This is mostly on request (reports), though things like wallboards with running stats are often asked for... but even those aren't necessarily real-time. You can suck this kind of stuff out of a database on a poll. As a server-side guy, you might have to design some views to facilitate this reporting, but most of your work aims at the anomalous conditions that interest SysAdms. At least, in my experience that's the case. But bear in mind if the SysAdm isn't left happy, you'll be dealing with the manager anyway...
An aside: curiously, although SysAdms seem to lead quite stressful lives, all the best ones I've ever worked with were very laid-back people. Odd, that. I suppose it's a coping mechanism. | Server side skills deal a lot more with scalability, and long term resource consumption. Also shared resource contention.
On a desktop app , if you leak or hold onto memory too long, its probably not a big deal. Do that in a server app, and you just locked up many users.
On the server side app, you really need to be aware of connection and thread pools, deadlocks, and a slew of other issues. | What is the difference between Server side skills and desktop development skills? | [
"",
"c#",
".net",
"asp.net",
"database",
"multithreading",
""
] |
I have an array of 128 booleans that represent bits. How can I convert these 128 bit representations into 16 bytes?
Example:
I have an array that looks like this:
```
0110001100110000100010111011001011010011010001010001101101001100
1000010000000000001000111111111101000011111001111011111011111001
```
(Converted to 1s and 0s to be more concise)
I need to convert those bits to the following byte array:
```
99 48 139 178 211 69 27 76 132 0 35 255 67 231 190 249
```
EDIT: This doesn't seem to work:
```
public byte[] ToByteArray() {
int numBytes = Count / 8;
if (_bits.Count % 8 != 0) numBytes++;
byte[] bytes = new byte[numBytes];
int byteIndex = 0, bitIndex = 0;
for (int i = 0; i < _bits.Count; i++) {
if (_bits[i])
bytes[byteIndex] |= (byte)(1 << bitIndex);
bitIndex++;
if (bitIndex == 8) {
bitIndex = 0;
byteIndex++;
}
}
return bytes;
}
```
It outputs:
```
198 12 209 77 203 162 216 50 33 0 196 255 194 231 125 159
``` | The code is treating the first bit as the low bit of the word, so you end up with each word reversed. As a quick-and-dirty fix, try this:
```
bytes[byteIndex] |= (byte)(1 << (7-bitIndex));
```
That puts the first bit in the array at the highest position in the first byte, etc. | ```
bool[] bools = ...
BitArray a = new BitArray(bools);
byte[] bytes = new byte[a.Length / 8];
a.CopyTo(bytes, 0);
```
---
EDIT: Actually this also returns:
```
198 12 209 77 203 162 216 50 33 0 196 255 194 231 125 159
```
Wrong endianness? I'll leave answer anyway, for reference.
---
EDIT: You can use BitArray.CopyTo() by reversing the arrays like so:
```
bool[] bools = ...
Array.Reverse(bools); // NOTE: this modifies your original array
BitArray a = new BitArray(bools);
byte[] bytes = new byte[a.Length / 8];
a.CopyTo(bytes, 0);
Array.Reverse(bytes);
``` | How can I convert bits to bytes? | [
"",
"c#",
"bit-manipulation",
"byte",
""
] |
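The accepted fix packs the first bit of each group of eight into the byte's highest bit (MSB-first). A language-agnostic Python sketch of that packing, checked against the sample data from the question (it assumes the bit count is a multiple of 8, as in the question):

```python
# MSB-first packing: the first bit of each group of 8 lands in the
# highest bit of the byte, matching the (7 - bitIndex) shift above.
def bits_to_bytes(bits):
    out = []
    for i in range(0, len(bits), 8):
        b = 0
        for bit in bits[i:i + 8]:
            b = (b << 1) | (1 if bit else 0)
        out.append(b)
    return out

bits = [c == "1" for c in
        "0110001100110000100010111011001011010011010001010001101101001100"
        "1000010000000000001000111111111101000011111001111011111011111001"]
print(bits_to_bytes(bits))
# [99, 48, 139, 178, 211, 69, 27, 76, 132, 0, 35, 255, 67, 231, 190, 249]
```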
I am creating my own CMS framework, because many of my clients have the same requirements, like a news module, newsletter module, etc.
Now I am doing fine; the only thing bothering me is that if a client wants to move from my server, he would ask me to give him his files, and of course if I do so, the new person who takes over will see all my code, use it, and benefit from it. This is bad for me: I spent all this time creating my system, and anyone can easily see the code and all the logic of my system, and can easily learn how my other clients' sites work, which is a threat to me. Finally, I am using third-party controls whose licenses I have paid for, and I don't want him to take them on a golden platter.
Now, what is the best way to solve this? I thought of encryption, but how can I do that and how effective is it?
- Should I merge all my CS files and DLLs in the bin folder into one DLL and encrypt it, and how can I do that?
I totally appreciate all the help on this matter as it is really crucial for me. | You should read these:
[Best .NET obfuscation tools/strategy](https://stackoverflow.com/questions/2525/best-net-obfuscation-tools-strategy)
[How effective is obfuscation?](https://stackoverflow.com/questions/551892/how-effective-is-obfuscation) | In my experience, this is rarely worth the effort. Lots of companies who provide libraries like this don't bother obfuscating their code (Telerik, etc).
Especially considering what you are writing (CMSes are everywhere), you'd likely see more benefit from your time spent implementing features that put your product/implementation in a competitive advantage and make companies see that the software you are capable of writing has value, rather than the code itself.
In the end, you want to ensure *you* are a key factor in making software work for a company, not the DLLs you give them. | Encrypt my framework and code | [
"",
"c#",
"asp.net",
"encryption",
""
] |
Is there a simple way to have a code library automatically detect if it's being called from a console application or a windows application? I'd like my library not to report to the Windows Event log if it's being called from a console window, but instead report to the console window. If however, it's not being run from within a console window, it should report to the Windows Event Log.
I thought about requiring my the logging component to be passed the log target, but it would be neat if it could just automatically handle these two targets natively. I don't yet require something as extensive as log4net, indeed, if it becomes necessary to provide support to log to a database/file and other as yet unknown logging targets, then I may recommend such a solution. For now though, just having my component auto-detect the environment and log to the console or the event log according to the environment would be plenty. | Architecturally, passing the logging context into the library component is the right choice. The library doesn't, and indeed shouldn't, know that much context about the environment it's being run in.
Because you want to support these two special cases natively within the library, I'd suggest a unified approach.
1. Go ahead and create the more generalized logging entry point/knob that the caller controls.
2. Create a separate entry point/knob that automatically sets the generalized one for the cases that you want to automatically support.
Even that seems too complicated based on your description, though. Have you considered simply using appropriate TraceListeners in your Diagnostics collection, where your console app adds the appropriate TraceListener to output to the console and the non-console app adds the appropriate EventLog TraceListener to output to the Windows event log? This has the added advantage of working well with all the built-in .net logging support without assuming any external dependencies (e.g., log4net). | Just discovered that "Console.Title" will be a blank string in a windows application and it will be automatically set in a console application.
Still a hack though. | How do I detect if I'm running in the console | [
"",
"c#",
".net",
"vb.net",
"logging",
""
] |
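The routing idea above — console output when interactive, event log otherwise — can be sketched in Python, where the analogous check is whether stdout is a TTY. The `"console"`/`"eventlog"` strings are placeholder target names, not real APIs:

```python
import sys, io

# Pick a log target based on whether an interactive console is attached.
def pick_log_target(stream=None):
    stream = stream or sys.stdout
    try:
        interactive = stream.isatty()
    except Exception:
        interactive = False
    return "console" if interactive else "eventlog"

# A redirected or captured stream is treated as "no console attached".
print(pick_log_target(io.StringIO()))  # eventlog
```

Like the `Console.Title` trick, this is a heuristic: a console app with redirected output will also report "no console".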
Is there any way to truncate jQuery?
I need to use only AJAX related methods in jQuery code.
As you might know the minified version is only 55KB and the uncompressed version is about 110KB. | I think the answer to your question is 'probably not'.
But consider these points:
* You don't have to serve it on every page request, sensible HTTP response headers should mean it only needs to be downloaded once per client browser.
* If you use the Google CDN for jQuery, your client may not need to download it at all, as there is a very good chance they will already have it cached.
i.e.
```
<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.1/jquery.min.js"></script>
``` | Using gzip compression it brings it down to 19kb. It's going to be cached from there on out, so I'm not sure why it's an issue. That's far less than most decent sized images.
Using a CDN is also an option if you don't mind someone else hosting your code and your issue is just overall bandwidth. | 55KB of JQUERY is too big for my application | [
"",
"javascript",
"jquery",
""
] |
The advantages of immutable objects in Java seem clear:
* consistent state
* automatic thread safety
* simplicity
You can favour immutability by using private final fields and constructor injection.
But, what are the downsides to favouring immutable objects in Java?
i.e.
* incompatibility with ORM or web presentation tools?
* Inflexible design?
* Implementation complexities?
Is it possible to design a large-scale system (deep object graph) that predominately uses immutable objects? | > But, what are the downsides to
> favouring immutable objects in Java?
> incompatibility with ORM or web
> presentation tools?
Reflection based frameworks are complicated by immutable objects since they **requires** constructor injection:
* there are no default arguments in Java, which forces us to ALWAYS provide all of the necessary dependencies
* constructor overriding can be messy
* constructor argument names are not usually available through reflection, which forces us to depend on argument order for dependency resolution
> Implementation complexities?
Creating immutable objects is still a boring task; the compiler should take care of the implementation details, as in [groovy](http://groovy.codehaus.org/Immutable+AST+Macro)
> Is it possible to design a large-scale system (deep object graph) that predominately uses immutable objects?
definitely yes; immutable objects makes great building blocks for other objects (they favor composition) since it's much easier to maintain the invariant of a complex object when you can rely on its immutable components. The only true downside to me is about creating many temporary objects (e.g. [String concat was a problem in the past](https://web.archive.org/web/20080324114426/http://java.sun.com/developer/JDCTechTips/2002/tt0305.html)). | With immutability, any time you need to modify data, you need to create a new object. This can be expensive.
Imagine needing to modify one bit in an object that consumes several megabytes of memory: you would need to instantiate a whole new object, allocate memory, etc. If you need to do this many times, mutability becomes very attractive. | Downsides to immutable objects in Java? | [
"",
"java",
"immutability",
""
] |
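The "every change allocates a new object" trade-off discussed above, in miniature: a Python frozen dataclass (Python 3.7+) refuses mutation, so updates go through copy-with-changes instead — the same pattern as Java's immutable value objects:

```python
from dataclasses import dataclass, replace, FrozenInstanceError

# A small immutable value object: mutation is refused at runtime,
# and "changes" produce a brand-new instance.
@dataclass(frozen=True)
class Money:
    amount: int
    currency: str

price = Money(100, "USD")
discounted = replace(price, amount=90)   # new object; original untouched

try:
    price.amount = 90                     # direct mutation is refused
except FrozenInstanceError:
    pass

print(price.amount, discounted.amount)   # 100 90
```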
I have developed a WinForms application at 1280 x 1024 pixels. When using the same screen resolution it displays exactly as designed,
but when I change my screen resolution to 800 x 600 pixels, the window's close button ends up off the screen.
How do I fix this? Is there any restriction on building an application for a particular screen resolution?
Thanks in advance. | 1. There is no default resolution. You can observe what resolutions your clients have and target the average one.
This is a tip if you are developing an app for a company: just go there and gather info about it. If you need to design the app for only one resolution, make sure the company is OK with having all its computers set to that resolution. In 99% of cases that isn't doable, but sometimes it can be done, when your app is so specific and rare that computers are bought only for this one and only app, like netbooks.
2. How to handle different resolutions? Use the `Dock` and `Anchor` properties, available in most GUI application development tools these days.
When you use these, controls will naturally scale to the size of the screen and will no longer slip past the edges of your app.
3. There is no restriction on building an app for a resolution you choose.
But nowadays an app can, I think, assume that the resolution is at least 1024x768. And when it overflows like you describe on 800x600, don't bother implementing fancy logic to handle it; the user will quickly figure out that he needs to change to a higher resolution. | Set the form to fullscreen and the form will always be the correct size.
```
this.WindowState = FormWindowState.Maximized;
```
(Is it me, or is it obvious that a 1280x1024 form doesn't fit on an 800x600 screen?) | Is there is any standard screen resolution to develop winform application in c# | [
"",
"c#",
".net",
"winforms",
""
] |
I am looking for a way to easily split a python list in half.
So that if I have an array:
```
A = [0,1,2,3,4,5]
```
I would be able to get:
```
B = [0,1,2]
C = [3,4,5]
``` | ```
A = [1,2,3,4,5,6]
B = A[:len(A)//2]
C = A[len(A)//2:]
```
If you want a function:
```
def split_list(a_list):
half = len(a_list)//2
return a_list[:half], a_list[half:]
A = [1,2,3,4,5,6]
B, C = split_list(A)
``` | A little more generic solution (you can specify the number of parts you want, not just split 'in half'):
```
def split_list(alist, wanted_parts=1):
length = len(alist)
return [ alist[i*length // wanted_parts: (i+1)*length // wanted_parts]
for i in range(wanted_parts) ]
A = [0,1,2,3,4,5,6,7,8,9]
print split_list(A, wanted_parts=1)
print split_list(A, wanted_parts=2)
print split_list(A, wanted_parts=8)
``` | Split list into smaller lists (split in half) | [
"",
"python",
"list",
"split",
""
] |
In my C++ program, I need to pull a 64 bit float from an external byte sequence. Is there some way to ensure, at compile-time, that doubles are 64 bits? Is there some other type I should use to store the data instead?
Edit: If you're reading this and *actually* looking for a way to ensure storage in the IEEE 754 format, have a look at Adam Rosenfield's answer below. | An improvement on the other answers (which assume a char is 8-bits, the standard does not guarantee this..). Would be like this:
```
char a[sizeof(double) * CHAR_BIT == 64];
```
or
```
BOOST_STATIC_ASSERT(sizeof(double) * CHAR_BIT == 64);
```
You can find `CHAR_BIT` defined in `<limits.h>` or `<climits>`.
```
#ifndef __STDC_IEC_559__
#error "Requires IEEE 754 floating point!"
#endif
```
Alternatively, you can check the predefined constants `__DBL_DIG__` (should be 15), `__DBL_MANT_DIG__` (should be 53), `__DBL_MAX_10_EXP__` (should be 308), `__DBL_MAX_EXP__` (should be 1024), `__DBL_MIN_10_EXP__` (should be -307), and `__DBL_MIN_EXP__` (should be -1021). These should be available in all flavors of C and C++. | Ensuring C++ doubles are 64 bits | [
"",
"c++",
"types",
"precision",
"compiler-construction",
"ieee-754",
""
] |
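On the byte-extraction side of the question, Python's `struct` module makes the width and byte order explicit, which is essentially what the C++ compile-time checks above are guarding. A small sketch — `'<d'` assumes the external format is a little-endian IEEE 754 double; use `'>d'` for big-endian data:

```python
import struct

# '<d' = little-endian, 8-byte IEEE 754 double; calcsize confirms the width.
assert struct.calcsize("<d") == 8

payload = struct.pack("<d", 3.141592653589793)   # simulate the external bytes
(value,) = struct.unpack("<d", payload)          # pull the 64-bit float back out
print(value)  # 3.141592653589793
```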
I'm using Eclipse to help me clean up some code to use Java generics properly. Most of the time it's doing an excellent job of inferring types, but there are some cases where the inferred type has to be as generic as possible: Object. But Eclipse seems to be giving me an option to choose between a type of Object and a type of '?'.
So what's the difference between:
```
HashMap<String, ?> hash1;
```
and
```
HashMap<String, Object> hash2;
``` | An instance of `HashMap<String, String>` matches `Map<String, ?>` but not `Map<String, Object>`. Say you want to write a method that accepts maps from `String`s to anything: If you would write
```
public void foobar(Map<String, Object> ms) {
...
}
```
you can't supply a `HashMap<String, String>`. If you write
```
public void foobar(Map<String, ?> ms) {
...
}
```
it works!
A thing sometimes misunderstood in Java's generics is that `List<String>` is not a subtype of `List<Object>`. (But `String[]` is in fact a subtype of `Object[]`, that's one of the reasons why generics and arrays don't mix well. (arrays in Java are covariant, generics are not, they are *invariant*)).
Sample:
If you'd like to write a method that accepts `List`s of `InputStream`s and subtypes of `InputStream`, you'd write
```
public void foobar(List<? extends InputStream> ms) {
...
}
```
By the way: [Joshua Bloch's Effective Java](https://rads.stackoverflow.com/amzn/click/com/0201310058) is an excellent resource when you'd like to understand the not so simple things in Java. (Your question above is also covered very well in the book.) | Another way to think about this problem is that
```
HashMap<String, ?> hash1;
```
is equivalent to
```
HashMap<String, ? extends Object> hash1;
```
Couple this knowledge with the "Get and Put Principle" in section (2.4) from *[Java Generics and Collections](http://oreilly.com/catalog/9780596527754/)*:
> The Get and Put Principle: use an
> extends wildcard when you only get
> values out of a structure, use super
> wildcard when you only put values into
> a structure, and don't use a wildcard
> when you both get and put.
and the wild card may start making more sense, hopefully. | What is the difference between ? and Object in Java generics? | [
"",
"java",
"generics",
""
] |
Where can I find the application.exe.config file in my C# project?
I'm attempting to enable the JIT debugger but cannot find the config file.
Thanks! | It's usually in the root of your project. If it's not there, you might not have one. You can add it by right-clicking on your project and choosing Add New Item.
It's called App.Config at this point. Once you compile, it is renamed to AppName.exe.Config and placed with your build output. | I had to learn that each project in the solution can have a settings file. But the one in the startup project is the one that becomes application.exe.config in your output folder. | Where can I find the application.exe.config file in my C# project? | [
"",
"c#",
"configuration",
""
] |
During code review I discovered many places in our C# code that look like this:
```
if(IsValid()) {
return true;
}
else {
return false;
}
```
or even "better":
```
return (IsValid()? true : false);
```
I always wondered why not just write the code like this:
```
return IsValid();
```
This is the way I would write this code. I ain't questioning the skills of the developers, but maybe trying to look into the developer's soul. Why would a developer favor more complex code over something simpler and more intuitive? Or maybe the reason is that it is hard to accept the Boolean type as a first-class citizen? | Yes, you should do it as you say. These people are doing these overly verbose things because they first learned it that way, perhaps in CS 101, and it never occurs to them to go outside the space of *what they know works* to see if there is a better, easier way.
This does speak to their competence. Good programmers need to be a little more thoughtful and a lot less hidebound. | I think `return IsValid();` is perfectly valid and readable code.
BTW, I would certainly slap anyone who writes (`IsValid() ? true : false`) in the face. It's unnecessarily complicated.
PS. This is what `svn blame` is designed for. | Boolean types | [
"",
"c#",
"language-agnostic",
"coding-style",
"boolean",
""
] |
I have a scheduled task that runs a script on a regular basis (every hour). This script does some heavy interaction with the database and filesystem and regularly takes several minutes to run. The problem is, the server's cpu-usage spikes while the script is running and slows down normal operations. Is there a way to throttle this process so that it takes longer but does not consume as many resources?
I've looked at different configuration options for PHP but there does not appear to be any that fit my needs.
Setting memory\_limit in php.ini to something lower causes my data objects to overflow quite easily.
I've seen similar posts where people suggested using sleep() at certain points in the script but that does not prevent the script from spiking the server.
The optimal solution would be some way to tell the Lamp (in this case Wamp) stack to only use 10% max cpu utilization. I'm not concerned at all about runtime and would prefer that it take longer if it means saving cpu cycles per second. My alternate solution would be to setup a different server with database replication so the cron could go to town without slowing everything else down.
Environment: Windows Server 2k3, Apache 2.2.11, PHP 5.2.9, MySQL 5.1
I appreciate any insight to this situation.
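(For readers on *nix stacks: the process-priority approach suggested in the answers translates to `nice`/`ionice`. A sketch, with `echo` standing in for the real PHP invocation, which would be something like `nice -n 19 php myscript.php`:)

```shell
# *nix analogue of lowering a process's priority (the Windows equivalent
# mentioned in the answers is `start /low php.exe myscript.php`).
nice -n 19 echo "script runs at the lowest CPU scheduling priority"

# On Linux, ionice can additionally drop the disk I/O priority to idle;
# guarded here since the tool may not be installed.
ionice -c 3 echo "idle I/O class" 2>/dev/null || true
```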
**EDIT:** I appreciate all the answers, even the ones that are \*nix-specific. It's still early enough in my situation to change the hosting environment. Hopefully this question will help others out regardless of the OS. | This is a tricky problem. If you're running the PHP script via the command line, you can set the process's scheduling priority to low (`start /low php.exe myscript.php` I believe). If your PHP script itself is actually doing most of the processing that's eating your CPU, this might work. However, you said you are doing some heavy database and filesystem interaction, which this solution will not help. It looks like there is a MySQL hint "LOW\_PRIORITY" for INSERT and UPDATE queries that may help you there, but I have not tried those. | [You can set processes in Windows to be a lower priority.](http://www.itworld.com/nls_windows_lowpriority060814) I'm not sure how the process is being kicked off, but if you set the process to be a low priority, whatever wants CPU resources will get them if you set the priority to be really low. | Suggestions/Tricks for Throttling a PHP script | [
"",
"php",
"cron",
"lamp",
"wamp",
""
] |
Good evening :)
I'm playing around with g++ and makefiles. I've gotten to this point:
foo.h:
```
#ifndef _FOO_H_
#define _FOO_H_
#include "bar.h"
class foo {
private:
bar something;
public:
bool start();
bool stop();
};
#endif // _FOO_H_
```
Foo.h is eventually included in my main cpp file so I can set things in motion by calling start/stop.
```
void somewhere() {
foo* hihi = new foo;
hihi->start();
delete hihi;
}
```
Then there's bar.h:
```
#ifndef _BAR_H_
#define _BAR_H_
class bar {
};
#endif // _BAR_H_
```
g++ doesn't seem to like it however:
```
g++ (some_flags) -c main.cpp
In file included from main.cpp:2:
foo.h:8: error: ‘bar’ does not name a type
```
I'm using makefiles, and tried a combination of things like:
```
main.o: main.cpp foo.h bar.h
```
Even though I don't think I should have to add bar.h here, shouldn't including it in foo.h be enough?
To clarify, this is roughly how it's set up now (yes I know this can be done in a more efficient manner):
```
main.o: main.cpp foo.h
$(CC) $(CFLAGS) -c main.cpp
foo.o: foo.h foo.cpp
$(CC) $(CFLAGS) -c foo.cpp
bar.o: bar.h bar.cpp
$(CC) $(CFLAGS) -c bar.cpp
```
What's going on? I figure it's something I'm missing about g++ and the way it handles header includes, point me in the right direction please!
**edit - found the solution:**
Doh! I feel dumb right now.
Was messing around with boost::asio and kind of forgot I still left this on top of my headers somewhere:
`using boost::asio::ip::tcp;`
Let's just say there's a boost::asio::ip::tcp::bar function :D
Oh well, thanks anyway! | > Was messing around with boost::asio and kind of forgot I still left this on top of my headers somewhere: using boost::asio::ip::tcp;
>
> Let's just say there's a boost::asio::ip::tcp::bar function
[Dan Saks explains](http://www.ddj.com/article/printableArticle.jhtml?articleID=184403396&dept_url=/cpp/) some reasons why you should typedef your class names, even though it might seem redundant.
Well, you've run into a real-life situation where typedefing the class would probably have helped you find your problem a little more easily:
```
class bar {
// ...
};
typedef class bar bar;
```
Generates this more meaningful message if there's a function named `bar()` already declared:
```
In file included from C:\temp\foo.h:4,
from C:\temp\test.cpp:4:
C:\temp\bar.h:7: error: `typedef class bar bar' redeclared as different kind of symbol
C:\temp\test.cpp:1: error: previous declaration of `void bar(int)'
C:\temp\bar.h:7: error: declaration of `typedef class bar bar'
``` | Double-check everything. If you include `bar.h` into `foo.h`, the compiler should not raise an error. Do you include `foo.h` from `bar.h`? Better not to do this, because that would cause a circular dependency between headers, which causes exactly this kind of bug.
Also check for spelling of header guards. This can be a common source of annoyance:
```
#ifdef _BAR_H_ // OOPS! we wanted #ifndef
#define _BAR_H_
class bar {
};
#endif // _BAR_H_
```
In addition, you should avoid putting the underscore before your header guard macro name. Such names are reserved for the implementation. Call it `INCLUDED_BAR_H` or just `BAR_H_` instead. | g++ header included: still doesn't find definition | [
"",
"c++",
"include",
"makefile",
"g++",
"header-files",
""
] |
Has anyone tried running glassfish with JRockit? I see some references saying it's not possible but they are very outdated. Anyone tried this? | It is possible in Windows. We do this for several production, public facing, web apps.
We had to remove some default Glassfish JVM flags, since they don't apply to JRockit (this is optional, it's just that the start up warnings really annoyed me), and tune the JVM a little differently, but other than that we have not run into any issues.
(We use SJSAS though, but I don't think that should make a difference)
Here are the versions of the software we are using:
* Windows Server 2003
* Sun Java System Application Server 9.1\_01
* JRockit R27.5.0 (Java 6) | AFAIK there was a problem on windows: <https://glassfish.dev.java.net/servlets/ReadMsg?list=dev&msgNo=878>. Linux and Solaris seems ok. | Can you run glassfish with JRockit? | [
"",
"java",
"glassfish",
"sun",
"jrockit",
""
] |
I've got a table with a large number of rows that is not suitable for paging. The rows in this table can be sorted by clicking a column header, which triggers a client-side sorting algorithm based on <http://www.exforsys.com/tutorials/jquery/jquery-basic-alphabetical-sorting.html>. The function dynamically adds an "expando" property to each row, thereby caching the key pre-sort:
```
row.sortKey = $(row).children('td').eq(column).text().toUpperCase();
```
As you can see, the property values are simply set to the contents of the column that was clicked and they are discarded (nulled) once the sorting has finished. Performance is actually surprisingly good - but columns that contain more text appear to be slower to sort.
As the sorting is only done to make it easier for the user to find the row(s) they are looking for, I figured things could be sped up by cropping the key values with `substr(0, 7)` or something (seven chars should provide more than enough precision). However, I found that doing a `substr()` incurred more performance cost than it saved; if anything it made sorting slower.
Does anyone know any (other) optimisations that can be applied to this method?
Here is a more complete example:
```
var rows = $table.find('tbody > tr').get();
$.each(rows, function(index, row) {
row.sortKey = $(row).children('td').eq(column).text().toUpperCase()
})
rows.sort(function(a, b) {
if (a.sortKey < b.sortKey) return -1
if (a.sortKey > b.sortKey) return 1
return 0
})
$.each(rows, function(index, row) {
$table.children('tbody').append(row)
row.sortKey = null
})
```
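The pattern in the snippet above, computing each key once, sorting on the cached keys, then discarding them, is the classic decorate-sort-undecorate idiom. A browser-free sketch of the same idea (function and field names are illustrative):

```javascript
// Decorate–sort–undecorate: compute each item's sort key exactly once,
// sort on the cached key, then strip the decoration.
function sortByKey(items, keyFn) {
  var decorated = items.map(function (item) {
    return { key: keyFn(item), item: item };
  });
  decorated.sort(function (a, b) {
    if (a.key < b.key) return -1;
    if (a.key > b.key) return 1;
    return 0;
  });
  return decorated.map(function (d) { return d.item; });
}

var rows = [{ name: 'banana' }, { name: 'Apple' }, { name: 'cherry' }];
var sorted = sortByKey(rows, function (r) { return r.name.toUpperCase(); });
console.log(sorted.map(function (r) { return r.name; })); // [ 'Apple', 'banana', 'cherry' ]
```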
EDIT: Here is the final version of my code, incorporating many of the optimisations provided in the answers below:
```
$('table.sortable').each(function() {
var $table = $(this);
var storage = new Array();
var rows = $table.find('tbody > tr').get();
$('th', $table).each(function(column) {
$(this).click(function() {
var colIndex = this.cellIndex;
for(var i=0;i<rows.length;i++) {
rows[i].sortKey = $(rows[i].childNodes[colIndex]).text().toUpperCase();
}
rows.sort(function(a, b) {
if (a.sortKey < b.sortKey) return -1;
if (a.sortKey > b.sortKey) return 1;
return 0;
});
storage.length = 0; // reset between sorts so old entries aren't re-appended on the next click
for(var i=0;i<rows.length;i++) {
storage.push(rows[i]);
rows[i].sortKey = null;
}
$table.children('tbody').append(storage);
});
});
});
``` | one optimization that i can think of is to modify this code:
```
$.each(rows, function(index, row) {
$table.children('tbody').append(row)
row.sortKey = null
})
```
so that instead of appending one row at a time, you append larger chunks, or everything at once if possible. To do this, first build a string of all the rows, then append it in one go.
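That batching idea can be sketched independently of the DOM (the helper name is hypothetical):

```javascript
// Build the markup for every row in memory first, then hand the whole
// string to the DOM in a single append instead of one append per row.
function buildRowsHtml(values) {
  var parts = [];                // push + join avoids repeated string copies
  for (var i = 0; i < values.length; i++) {
    parts.push('<tr><td>', String(values[i]), '</td></tr>');
  }
  return parts.join('');
}

console.log(buildRowsHtml(['a', 'b'])); // <tr><td>a</td></tr><tr><td>b</td></tr>
```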
Use `array.push` and `array.join` to concatenate the string. | There are several problems with the example you gave. The number one problem is that you are selecting the columns using jQuery inside the loops. This is a major performance penalty. If you have any control over the HTML code, I would suggest you use normal DOM methods to get the desired column you want to sort on. Note that sometimes when you might expect a table cell node, you may get a text node. I'll get back to that later. A for loop is faster, so you may consider using that instead of `$.each`, but I suggest you benchmark it.
I took your example and created a table with 1000 rows. It took about 750ms on my machine to sort it. I did some optimizations (see code below) and managed to get it down to 200ms. The sorting itself took around 20ms (not bad).
```
var sb = [];
sb.push("<table border='1'>");
var x;
for (var i = 0; i < 1000; i++) {
x = Math.floor(Math.random() * 1000);
sb.push("<tr><td>data");
sb.push(x);
sb.push("</td></tr>");
}
sb.push("</table>");
document.write(sb.join(""));
$table = $("table");
var rows = $table.find('tbody > tr').get();
var columnIndex = 0;
var t = new Date();
$.each(rows, function(index, row) {
row.sortKey = $(row.childNodes[columnIndex]).text();
});
alert("Sort key: " + (new Date() - t) + "ms");
t = new Date();
rows.sort(function(a, b) {
return a.sortKey.localeCompare(b.sortKey);
});
alert("Sort: " + (new Date() - t) + "ms");
t = new Date();
var tbody = $table.children('tbody').get(0);
$.each(rows, function(index, row) {
tbody.appendChild(row);
delete row.sortKey;
})
alert("Table: " + (new Date() - t) + "ms");
```
When you write for speed you want each iteration to be as quick as possible, so don't do stuff in the loops that you can do outside of them. For example, moving $table.children('tbody').get(0); outside of the last loop sped things up enormously.
As for using DOM methods to access a column, what you need is the column index, so you could iterate over the th columns until you find the correct one (provided the html formatting is identical for th tags and td tags). You can then use that index to get the correct row child node.
Also, if the table is static and the users are liable to do more sorting on it, you should cache the rows and not delete the sortKey property. Then you save about 30% sort time. There is also the matter of the table contents. If the content is text, this sorting method is fine. If it contains numbers etc. then you need to consider that, since I am using localeCompare which is a method of the String kind. | Table row sorting & string performance | [
"",
"javascript",
"jquery",
"performance",
"string",
""
] |