I'm having a problem regarding namespaces used by my service references. I have a number of WCF services, say with the namespace `MyCompany.Services.MyProduct` (*the actual namespaces are longer*). As part of the product, I'm also providing a sample C# .NET website. This web application uses the namespace `MyCompany.MyProduct`.

During initial development, the service was added as a project reference to the website and used directly. I used a factory pattern that returns an object instance that implements `MyCompany.Services.MyProduct.IMyService`. So far, so good.

Now I want to change this to use an actual service reference. After adding the reference and typing `MyCompany.Services.MyProduct` in the namespace textbox, it generates classes in the namespace **MyCompany.MyProduct.MyCompany.Services.MyProduct**. *BAD!* I don't want to have to change `using` directives in several places just because I'm using a proxy class. So I tried prepending the namespace with `global::`, but that is not accepted.

Note that I hadn't even deleted the original assembly references yet, and "reuse types" is enabled, but no reusing was done, apparently. **However, I don't want to keep the assembly references around in my sample website for it to work anyway.**

The only solution I've come up with so far is setting the default namespace for my web application to `MyCompany` (because it cannot be empty), and adding the service reference as `Services.MyProduct`. Suppose that a customer wants to use my sample website as a starting point, and they change the default namespace to `OtherCompany.Whatever`; this will obviously break my workaround.

Is there a good solution to this problem?

**To summarize**: I want to generate a service reference proxy in the original namespace, without referencing the assembly.
Note: I have seen [this question](https://stackoverflow.com/questions/1165116/wcf-proxy-types-are-in-a-different-namespace-than-wcf-service-types), but there was no solution provided that is acceptable for my use case. --- Edit: As John Saunders suggested, I've submitted some feedback to Microsoft about this: [Feedback item @ Microsoft Connect](https://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=478305)
*I've added a [write-up of this solution](http://thorarin.net/blog/post.aspx?id=b6e3efee-9058-4da8-99bc-b566136ae512) to my blog. The same information really, but perhaps a little less fragmented.*

I've found an alternative to using `svcutil.exe` to accomplish what I want. It (imo) makes updating the service reference easier than rerunning the utility.

You should explicitly specify a namespace URI on your ServiceContract and DataContracts (*see further below for comment*):

```
[ServiceContract(Namespace = "http://company.com/MyCompany.Services.MyProduct")]
public interface IService
{
    [OperationContract]
    CompositeType GetData();
}

[DataContract(Namespace = "http://company.com/MyCompany.Services.MyProduct")]
public class CompositeType
{
    // Whatever
}
```

The namespace could be anything, but technically it needs to be a valid URI, so I chose this scheme. You might have to build manually for things to work later, so do that.

Once this is done, enable the **Show All Files** option in the Solution Explorer. Expand the service reference you added previously. Double click the `Reference.svcmap` file. There will be a `<NamespaceMappings />` element, which you will need to edit. Continuing my example:

```
<NamespaceMappings>
  <NamespaceMapping TargetNamespace="http://company.com/MyCompany.Services.MyProduct"
                    ClrNamespace="MyCompany.Services.MyProduct" />
</NamespaceMappings>
```

Save the file, right click the service reference and select **Update Service Reference**. You can add as many mappings as you need (I actually needed two). The effect is the same as the `svcutil /namespace:` approach, but without having to use the command line util itself, making for easier updating.

## Difference with svcutil

The downside to this approach is that you need to use explicit namespace mappings. Using `svcutil`, you have the option to map everything not explicitly mapped like this (the solution John Saunders was referring to):

```
svcutil /namespace:*,MyCompany.Services.MyProduct ...
```

You might think to use:

```
<NamespaceMappings>
  <NamespaceMapping TargetNamespace="*" ClrNamespace="MyCompany.Services.MyProduct" />
</NamespaceMappings>
```

but this will **not** work, because Visual Studio already implicitly adds this mapping, pointing to the generated namespace name we're trying to get rid of. The above configuration will cause Visual Studio to complain about a duplicate key.

**Regarding explicit namespaces**: When no explicit namespace is specified in your code, it *seems* that .NET will generate a URI of the form **`http://schemas.datacontract.org/2004/07/MyCompany.Services.MyProduct`**. You could map that just as well as the explicit namespaces in my example, but I don't know if there is any guarantee for this behavior. Therefore, going with an explicit namespace might be better.

*NB: mapping two TargetNamespaces to the same ClrNamespace seems to break code generation.*
Your use case was wrong. You should never have included the service as a reference in the first place. I believe svcutil.exe will accept a switch specifying the full namespace to use.
WCF service reference namespace differs from original
[ "", "c#", ".net", "wcf", "namespaces", "svcutil.exe", "" ]
I am building an application in ASP.NET 2.0 and the value for the view state is huge:

```
<input type="hidden" name="__VIEWSTATE" id="__VIEWSTATE" value="/wEPDwULLTExNz...
```

The value contains 535,000 characters. Is this normal? How can I make it smaller?
Look into [enabling ASP.NET tracing](http://www.codeguru.com/csharp/.net/net_asp/article.php/c19355/The-Basics-of-NET-Tracing.htm) for your web pages - that will tell you what controls are storing how much in view state. You can then go and disable view state for controls that you know aren't using it.
ViewState can grow ugly on you. Basically I would say that the problem is that ViewState is enabled by default on everything, and a *lot* of things don't need it to be. The most basic example would be Label objects. Try disabling ViewState where you find it unnecessary (EnableViewState is the property you're looking for).
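For instance, for a label whose text is set in code on every request anyway, view state can be switched off with a single attribute (the control name here is made up for illustration):

```
<asp:Label ID="lblStatus" runat="server" EnableViewState="false" />
```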
Large ViewState value in ASP.NET
[ "", "c#", "asp.net", "viewstate", "" ]
I will be implementing a version control system in C++ for my final year project. I would like to know:

1. What features must a version control system support?
2. What features do you consider missing in existing implementations (so that my version control system does more than just reinvent the wheel)?
3. References (if any) to start with.
1. *What features must a version control system support?*

   **Core features: Create Project, Check in, Check out, Branch, Get Latest/Previous, View History, Compare, Rollback**

2. *What features do you consider missing in existing implementations (so that my version control system does more than just reinvent the wheel)?*

   **Auto Build, Code Analysis, Email Notification, In-place editor, Database-based storage**
If you want to do something different / innovative, then I'd recommend looking into targeting your revision control at something other than source code. There are other applications where revision control could be useful (e.g. revision control for documents), and source control has been done extensively already - you're unlikely to come up with an innovation that doesn't already exist in some source control system.
What features to implement in a version control system?
[ "", "c++", "version-control", "" ]
We have a handful of developers working on a non-commercial (read: just for fun) cross-platform C++ project. We've already identified all the cross-platform libraries we'll need. However, some of our developers prefer to use Microsoft Visual C++ 2008, others prefer to code in Emacs on GNU/Linux. We're wondering if it is possible for all of us to work more or less simultaneously out of both environments, from the same code repository. Ultimately we want the project to compile cleanly on both platforms from the start.

Any of our developers are happily willing to switch over to the other environment if this is not possible. We all use both Linux and Windows on a regular basis and enjoy both, so this isn't a question of trying to educate one set of devs about the virtues of the other platform. This is about each of us being able to develop in the environment we enjoy most yet still collaborate on a fun project. Any suggestions or experiences to share?
Use [CMake](http://cmake.org/) to manage your build files. This will let you setup a single repository, with one set of text files in it. Each dev can then run the appropriate cmake scripts to build the correct build environment for their system (Visual Studio 2008/2005/GNU C++ build scripts/etc). There are many advantages here: * Each dev can use their own build environment * Dependencies can be handled very cleanly, including platform specific deps. * Builds can be out of source, which helps prevent accidentally committing inappropriate files * Easy migration to new dev. environments (ie: when VS 2010 is released, some devs can migrate there just by rebuilding their build folder)
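As a rough sketch of what that single set of build files can look like (project name, sources, and directory layout are invented for illustration), a minimal top-level `CMakeLists.txt` might be:

```
cmake_minimum_required(VERSION 2.6)
project(FunProject CXX)

# One list of sources shared by every generator (Visual Studio, Makefiles, ...)
set(SOURCES src/main.cc src/engine.cc)

# Platform-specific sources kept cleanly separated
if(WIN32)
    list(APPEND SOURCES src/win/platform.cc)
else()
    list(APPEND SOURCES src/linux/platform.cc)
endif()

add_executable(fun ${SOURCES})
```

Each developer then runs CMake with the generator for their environment (e.g. `cmake -G "Visual Studio 9 2008"` or `cmake -G "Unix Makefiles"`) in a separate build directory, keeping the build out of source.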
I've done it and seen it done without too many problems. You'll want to try and isolate the code that is different for the different platforms. Also, you'll want to think about your **directory structure**. Something like:

* `project/src` <- .cc and .h files
* `project/src/linux|win` <- code that is specific to one platform or the other
* `project/linux` <- make files and other project related stuff
* `project/win` <- .sln and .vcproj files

Basically you just want to be really clear about what is specific to each system and what is common. Also, **unit tests** are going to be really important, since there may be minor differences and you want to make it easy for the Windows guys to run some tests to make sure the Linux code works as expected, and the other way around.
Simultaneous C++ development on Linux and Windows
[ "", "c++", "cross-platform", "" ]
I am sure that I am just not understanding something fundamental about events and/or delegates in C#, but why can't I do the Boolean tests in this code sample:

```
public class UseSomeEventBase {
    public delegate void SomeEventHandler(object sender, EventArgs e);
    public event SomeEventHandler SomeEvent;

    protected void OnSomeEvent(EventArgs e) {
        // CANONICAL WAY TO TEST EVENT. OF COURSE, THIS WORKS.
        if (SomeEvent != null) SomeEvent(this, e);
    }
}

public class UseSomeEvent : UseSomeEventBase {
    public bool IsSomeEventHandlerNull() {
        // "LEFT HAND SIDE" COMPILER ERROR
        return SomeEvent == null;
    }
}

class Program {
    static void Main(string[] args) {
        var useSomeEvent = new UseSomeEvent();
        useSomeEvent.SomeEvent += new UseSomeEventBase.SomeEventHandler(FuncToHandle);
        // "LEFT HAND SIDE" COMPILER ERROR
        if (useSomeEvent.SomeEvent == null) {
        }

        var useSomeEventBase = new UseSomeEventBase();
        useSomeEventBase.SomeEvent += new UseSomeEventBase.SomeEventHandler(FuncToHandle);
        // "LEFT HAND SIDE" COMPILER ERROR
        if (useSomeEventBase.SomeEvent == null) {
        }
    }

    static void FuncToHandle(object sender, EventArgs e) {
    }
}
```
An event is really just an "add" operation and a "remove" operation. You can't get the value, you can't set the value, you can't call it - you can just subscribe a handler for the event (`add`) or unsubscribe one (`remove`). This is fine - it's encapsulation, plain and simple. It's up to the publisher to implement add/remove appropriately, but unless the publisher chooses to make the details available, subscribers can't modify or access the implementation-specific parts.

*Field-like events* in C# (where you don't specify the add/remove bits) hide this - they create a variable of a delegate type *and* an event. The event's add/remove implementations just use the variable to keep track of the subscribers. Inside the class you refer to the variable (so you can get the currently subscribed delegates, execute them etc) and outside the class you refer to the event itself (so you only have add/remove abilities).

The alternative to field-like events is where you explicitly implement the add/remove yourself, e.g.

```
private EventHandler clickHandler; // Normal private field

public event EventHandler Click
{
    add
    {
        Console.WriteLine("New subscriber");
        clickHandler += value;
    }
    remove
    {
        Console.WriteLine("Lost a subscriber");
        clickHandler -= value;
    }
}
```

See [my article on events](http://pobox.com/~skeet/csharp/events.html) for more information.

Of course the event publisher *can* also make more information available - you could write a property like `ClickHandlers` to return the current multi-cast delegate, or `HasClickHandlers` to return whether there are any or not. That's not part of the core event model though.
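The same encapsulation idea can be sketched outside C# as well - here in Python, with the subscriber list kept private and only subscribe/unsubscribe exposed (all class and method names are invented for illustration):

```python
class Event:
    """Exposes only add/remove, like a C# event; the handler list stays private."""

    def __init__(self):
        self._handlers = []           # the "field" behind the event

    def add(self, handler):           # what C# `+=` does
        self._handlers.append(handler)

    def remove(self, handler):        # what C# `-=` does
        self._handlers.remove(handler)

    def _fire(self, *args):           # only the publisher should call this
        for handler in list(self._handlers):
            handler(*args)


class Button:
    def __init__(self):
        self.click = Event()

    def simulate_click(self):         # the publisher raising its own event
        self.click._fire(self, "clicked")


calls = []

def on_click(sender, e):
    calls.append(e)

b = Button()
b.click.add(on_click)     # subscribe
b.simulate_click()        # handler runs once
b.click.remove(on_click)  # unsubscribe
b.simulate_click()        # no effect: handler was removed
```

Outside code can only add and remove handlers; reading or invoking the underlying list is left to the publisher, which mirrors the C# event contract.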
You can easily use a very simple approach here to not repeatedly subscribe to an event. Either of the 2 approaches below can be used:

1. **Flag approach**: `_getWarehouseForVendorCompletedSubscribed` is a private variable initialized to false.

```
if (!_getWarehouseForVendorCompletedSubscribed)
{
    _serviceClient.GetWarehouseForVendorCompleted +=
        new EventHandler<GetWarehouseForVendorCompletedEventArgs>(_serviceClient_GetWarehouseForVendorCompleted);
    _getWarehouseForVendorCompletedSubscribed = true;
}
```

2. **Unsubscribe approach**: Include an unsubscribe every time you want to subscribe.

```
_serviceClient.GetWarehouseForVendorCompleted -=
    new EventHandler<GetWarehouseForVendorCompletedEventArgs>(_serviceClient_GetWarehouseForVendorCompleted);

_serviceClient.GetWarehouseForVendorCompleted +=
    new EventHandler<GetWarehouseForVendorCompletedEventArgs>(_serviceClient_GetWarehouseForVendorCompleted);
```
In C#, why can't I test if a event handler is null anywhere outside of the class that it's defined?
[ "", "c#", "events", "event-handling", "delegates", "" ]
I'm reading through Pro Drupal development right now, and an example on page 82 uses the following function: ``` function menufun_menu_link_alter(&$item, $menu){ if ($item['link_path'] == 'logout'){ $item['link_title'] = 'Sign off'; } } ``` I understand that I can find out what parameters "hook\_menu\_link\_alter" takes by looking at api.drupal.org. What I don't understand is how I'm supposed to find out what $item actually contains. In this example, $item['link\_path'] and $item['link\_title'] were both used - how did he know they existed? I'm not only interested in this specific example, of course. I want to be able to do this for any function that I look at in drupal. Thanks in advance for the help!
If you went to api.drupal.org and looked up `hook_menu_link_alter`, you'd get this page: <http://api.drupal.org/api/function/hook_menu_link_alter/6> Which includes the following note: "`$item` Associative array defining a menu link as passed into `menu_link_save()`." If you follow the link to the `menu_link_save` page, you'd be here: <http://api.drupal.org/api/function/menu_link_save/6> In which the contents of the `$item` parameter is clearly listed under **Parameters**.
If you're using the Devel module I like to throw a ``` dsm($item) ``` in there to see what $item currently contains. It prints a nice interactive array/object navigator. Granted you still have to go to the API to look up all the possibilities, since this only shows what's currently set. But it usually contains enough to get what I'm looking for.
drupal hook functions and their arguments
[ "", "php", "drupal", "drupal-6", "drupal-modules", "" ]
Below is my function. I call it with

```
if (try_strtol(v, rhs))
```

where rhs = `"15\t// comment"`:

```
bool try_strtol(int64_t &v, const string& s)
{
    try
    {
        std::stringstream ss(s);
        if ((ss >> v).fail() || !(ss >> std::ws).eof())
            throw std::bad_cast();
        return true;
    }
    catch(...)
    {
        return false;
    }
}
```

It returns false; I expect true with v=15. How do I fix this?
If you want it to return a boolean, just do this (note the condition has to be inverted once the exception goes away):

```
bool try_strtol(int64_t &v, const string& s)
{
    std::stringstream ss(s);
    return !(ss >> v).fail() && (ss >> std::ws).eof();
}
```

And it's failing because it's a bad cast. Were you hoping the comment would be ignored?
Why do you expect `(ss >> std::ws).eof()` to be true? rhs contains non-whitespace characters after 15, so the if condition will be true and the exception will be thrown, which is why it returns false.
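For comparison, the same strict-parse behaviour shows up in other languages too; Python's `int()` also rejects trailing non-whitespace, so the comment has to be stripped before converting (a sketch of the idea, not a drop-in fix for the C++ code):

```python
def try_strtol(s):
    """Return (True, value) if s holds an integer, ignoring a trailing //-comment."""
    s = s.split("//", 1)[0].strip()   # drop the comment and surrounding whitespace
    try:
        return True, int(s)
    except ValueError:
        return False, None

# Without the split/strip, int("15\t// comment") would raise for the same
# reason the C++ version fails: non-whitespace remains after the number.
```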
why this conversion doesn't work?
[ "", "c++", "std", "stringstream", "" ]
What is the best approach to encrypting a SQLite database file in .Net/C#? I'm using the [sqlite-dotnet2](http://sourceforge.net/projects/sqlite-dotnet2) wrapper.

There are tools like [SQLite Encryption Extension](http://www.hwaci.com/sw/sqlite/see.html) and [SQLite Crypt](http://sqlite-crypt.com/), but both are non-free, while my project is under GPL.

The naive approach I thought of using was to let SQLite handle a temporary file, then to encrypt it on program exit, and overwrite (zero-out) the original. The obvious drawback is that if the program crashes (and while it is running), the plain text DB is accessible. Is there a better way to approach this? Can I pass an encrypted stream to the wrapper (instead of using SQLiteConnection.CreateFile)?

[edit] Maybe I am overthinking this. Is it sufficient to use the Password option in the connection string? Would the file be encrypted properly in that case (or is it some weaker protection)?
I recommend using the `System.Data.Sqlite` wrapper, which includes encryption. It works great, it's easy to use, and it's a complete ADO.Net implementation. You can get the wrapper from <https://system.data.sqlite.org>, and the developer describes how to use the encryption on this forum at: <https://web.archive.org/web/20100207030625/http://sqlite.phxsoftware.com/forums/t/130.aspx>. Hint - you just set the password property. He also describes how he does the encryption using the Microsoft Crypto API elsewhere in the forum.
Take a look at: <http://zetetic.net/software/sqlcipher> It is open source. You can also check the code of wxSQLite3.
Encrypt SQLite database in C#
[ "", "c#", ".net", "file", "sqlite", "encryption", "" ]
I am wondering whether WPF on .NET 3.5 supports touch or multi-touch on laptops? I'd appreciate it if there are some cool demos to show the effect. I am using VSTS2008 + C#. Thanks in advance, George
WPF 4.0 Beta 2 supports full multi-touch, but only on Windows 7, as Windows 7 is the first multi-touch enabled Windows version. For 3.5 on XP you can try out the Breeze for WPF 3.5 multi-touch framework at <http://code.google.com/p/breezemultitouch/>; it's open source and plugs into [TUIO (multi-touch protocol)](http://www.tuio.org). TUIO allows you to bridge between various multi-touch devices and your WPF 3.5 application without the need for operating system multi-touch support.
It's not really WPF's responsibility to support touch-devices, but the O/S. The O/S simply delegates the events of mouseDown == fingerTouchedScreen to WPF (not a 100% accurate statement, but good enough :) ). If you want to develop WPF for touchscreen-devices, you really need to look at your UI design instead of what's supported and what's not. [This post](https://stackoverflow.com/questions/1116342/wpf-touchscreen-user-interface-guidelines) has a nice answer for that. Basically, you work with the same events as you'd do with your standard smith'n'wesson point'n'click devices :)
WPF supports touch or multi-touch screen?
[ "", "c#", ".net", "wpf", "touch", "multi-touch", "" ]
I use more than one class and I need a... let's say global storage for all the classes and methods. Is creating a static class the right way to do this?

```
public static class Storage
{
    public static string filePath { get; set; }
}
```

Or are there other ways to do it?
If you *really* need to make your example a singleton then here is how you do it.

```
public class StorageSingleton
{
    private static readonly StorageSingleton instance;

    static StorageSingleton()
    {
        instance = new StorageSingleton();
    }

    // Mark constructor as private as no one can create it but itself.
    private StorageSingleton()
    {
        // For constructing
    }

    // The only way to access the created instance.
    public static StorageSingleton Instance
    {
        get { return instance; }
    }

    // Note that this will be null when the instance is not set to
    // something in the constructor.
    public string FilePath { get; set; }
}
```

The way to call and set the singleton is the following:

```
// If this is the first time you call "Instance" then it will create itself
var storage = StorageSingleton.Instance;

if (storage.FilePath == null)
{
    storage.FilePath = "myfile.txt";
}
```

Alternatively you can add the following to the constructor to avoid a null reference exception:

```
// Mark constructor as private as no one can create it but itself.
private StorageSingleton()
{
    FilePath = string.Empty;
}
```

A word of warning: making anything global or a singleton will break your code in the long run. Later on you really should be checking out the repository pattern.
You could consider using the Singleton design pattern: [Implementing Singleton in C#](http://msdn.microsoft.com/en-us/library/ms998558.aspx) e.g.

```
using System;

public class Singleton
{
    private static Singleton instance;

    private Singleton() {}

    public static Singleton Instance
    {
        get
        {
            if (instance == null)
            {
                instance = new Singleton();
            }
            return instance;
        }
    }
}
```
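For comparison, the same lazy-singleton idea sketched in Python (names invented for illustration; in Python a plain module-level object is usually preferred over this pattern):

```python
class Storage:
    """Lazy singleton: one shared instance, created on first access."""

    _instance = None

    def __init__(self):
        # Initialize to a harmless default rather than None.
        self.file_path = ""

    @classmethod
    def instance(cls):
        # Create the shared instance the first time it is asked for.
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance


# Every caller sees the same object, so state set here is visible everywhere.
Storage.instance().file_path = "myfile.txt"
```

The same caveat applies as in C#: global state like this tends to hurt testability, so a repository or dependency-injection approach scales better.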
c# separate class for storage
[ "", "c#", "class", "storage", "" ]
I have a Python script which processes a file line by line; if the line matches a regex, it calls a function to handle it. My question is whether there is a better way to refactor my script. The script works, but as it is, I need to keep indenting to the right of the editor as I add more and more regexes for my file. Thank you for any ideas. Now my code ends up like this:

```
for line in fi.readlines():
    result = reg1.match(line)
    if result:
        handleReg1(result)
    else:
        result = reg2.match(line)
        if result:
            handleReg2(result)
        else:
            result = reg3.match(line)
            if result:
                handleReg3(result)
            else:
                result = reg4.match(line)
                if result:
                    handleReg4(result)
                else:
                    result = reg5.match(line)
                    if result:
                        handleReg5(result)
```
I'd switch to using a data structure mapping regexes to functions. Something like:

```
handlers = {
    reg1: handleReg1,
    reg2: handleReg2,
    # etc.
}
```

Then you just loop through them:

```
for reg, handler in handlers.items():
    result = reg.match(line)
    if result:
        handler(result)
        break
```

If you need the matches to happen in a particular order you'll need to use a list instead of a dictionary, but the principle is the same.
Here's a trivial one:

```
handlers = {
    reg1: handleReg1,
    ...
}

for line in fi.readlines():
    for h in handlers:
        x = h.match(line)
        if x:
            handlers[h](x)
```

If there could be a line that matches several regexps, this code will be different from the code you pasted: it will call several handlers. Adding `break` won't help, because the regexps will be tried in a different order, so you'll end up calling the wrong one. So if this is the case you should iterate over a list:

```
handlers = [
    (reg1, handleReg1),
    (reg2, handleReg2),
    ...
]

for line in fi.readlines():
    for reg, handler in handlers:
        x = reg.match(line)
        if x:
            handler(x)
            break
```
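Putting the list-of-pairs version together as a small runnable sketch (the regexes and handler functions here are invented for illustration):

```python
import re

def handle_digits(match):
    return ("digits", match.group(0))

def handle_word(match):
    return ("word", match.group(0))

# Order matters: the first matching regex wins, like the original if/else chain.
handlers = [
    (re.compile(r"\d+"), handle_digits),
    (re.compile(r"[a-z]+"), handle_word),
]

def dispatch(line):
    for reg, handler in handlers:
        match = reg.match(line)
        if match:
            return handler(match)
    return None  # no regex matched

results = [dispatch(line) for line in ["123", "abc", "!?"]]
```

Adding a sixth or seventh pattern is now a one-line change to the `handlers` list instead of another level of nesting.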
Need help in refactoring my python script
[ "", "python", "" ]
When you start messing around with Spring's auto-proxy stuff, you often run into this behaviour as documented: > Classes that implement the > BeanPostProcessor interface are > special, and so they are treated > differently by the container. All > BeanPostProcessors and their directly > referenced beans will be instantiated > on startup, as part of the special > startup phase of the > ApplicationContext, then all those > BeanPostProcessors will be registered > in a sorted fashion - and applied to > all further beans. Since AOP > auto-proxying is implemented as a > BeanPostProcessor itself, no > BeanPostProcessors or directly > referenced beans are eligible for > auto-proxying (and thus will not have > aspects 'woven' into them. > > For any such bean, you should see an > info log message: “Bean 'foo' is not > eligible for getting processed by all > BeanPostProcessors (for example: not > eligible for auto-proxying)”. In other words, if I write my own BeanPostProcessor, and that class directly references other beans in the context, then those referenced beans will not be eligible for auto-proxying, and a message is logged to that effect. My problem is that tracking down where that direct reference is can be very difficult, since the "direct reference" can in fact be a chain of transitive dependencies that ends up taking in half the beans in the application context. All Spring gives you is that single info message, and it's not really much help, beyond telling you when a bean has been caught in this web of references. The BeanPostProcessor I'm developing does have direct references to other beans, but it's a very limited set of references. Despite this, pretty much every bean in my context is then being excluded from being auto-proxied, according to the log messages, but I can't see where that dependency is happening. Has anyone found a better way of tracking this down?
Just to bring some closure to this question, the collapse of the uninitialized object graph was caused by the `BeanPostProcessor` using `@Autowired` to get its dependencies, and the autowire mechanism effectively caused every other bean definition to be initialized before my `BeanPostProcessor` got a chance to have a say in the matter. The solution is not to use autowiring for your BPPs.
Follow this recipe:

1. Open `BeanPostProcessorChecker` in your IDE (it's an inner class of `AbstractApplicationContext`)
2. Set a breakpoint on `if (logger.isInfoEnabled()) {` in the method `postProcessAfterInitialization`
3. Run your code
4. When you hit the breakpoint, look for calls to `getBean(String,Class<T>)` in your stack trace. One of these calls will try to create a `BeanPostProcessor`. That bean should be the culprit.

**Background**

Imagine this situation:

```
public class FooPP implements BeanPostProcessor {
    @Autowire
    private Config config;
}
```

When Spring has to create `config` (since it's a dependency of `FooPP`), it has a problem: The contract says that all `BeanPostProcessor`s must be applied to every bean that is being created. But when Spring needs `config`, there is at least one PP (namely `FooPP`) which isn't ready for service!

This gets worse when you use an `@Configuration` class to define this bean:

```
@Configuration
public class BadSpringConfig {
    @Lazy @Bean public Config config() { return new Config(); }
    @Lazy @Bean public FooPP fooPP() { return new FooPP(); }
}
```

Every configuration class is a bean. That means to build a bean factory from `BadSpringConfig`, Spring needs to apply the post-processor `fooPP`, but in order to do that, it first needs the bean factory...

In this example, it's possible to break one of the cyclic dependencies. You can make `FooPP` implement `BeanFactoryAware` to get Spring to inject the `BeanFactory` into the post processor. That way, you don't need autowiring.
Later in the code, you can lazily ask for the bean:

```
private LazyInit<Config> helper = new LazyInit<Config>() {
    @Override
    protected InjectionHelper computeValue() {
        return beanFactory.getBean( Config.class );
    }
};

@Override
public Object postProcessBeforeInitialization( Object bean, String beanName ) throws BeansException {
    String value = helper.get().getConfig(...);
}
```

([source for LazyInit](http://blog.pdark.de/2010/03/04/java-tricks-lazy-initialization/))

To break the cycle between the bean factory and the post processor, you need to configure the post processor in an XML config file. Spring can read that and build all the structures without getting confused.
Tracking down cause of Spring's "not eligible for auto-proxying"
[ "", "java", "spring", "aop", "" ]
I'm trying to find a Python library that would take an audio file (e.g. .ogg, .wav) and convert it into mp3 for playback on a webpage. Also, any thoughts on setting its quality for playback would be great. Thank you.
Looks like PyMedia does this: <http://pymedia.org/>, and there is some more info here on converting to various formats whilst setting the bitrate: <http://pymedia.org/tut/recode_audio.html> e.g.

```
params = {
    'id': acodec.getCodecId('mp3'),
    'bitrate': r.bitrate,
    'sample_rate': r.sample_rate,
    'ext': 'mp3',
    'channels': r.channels
}
enc = acodec.Encoder( params )
```
I wrote [a library](http://pydub.com) designed to do that =D

```
from pydub import AudioSegment
AudioSegment.from_file("/input/file").export("/output/file", format="mp3")
```

Easy! To specify a bitrate, just use the `bitrate` kwarg…

```
from pydub import AudioSegment
sound = AudioSegment.from_file("/input/file")
sound.export("/output/file", format="mp3", bitrate="128k")
```
Python library for converting files to MP3 and setting their quality
[ "", "python", "audio", "compression", "" ]
Using Javascript how can I identify the element at a given position? Basically I'm looking to write a function that takes two input parameters (the x and y coordinates) and returns the html element at the position on the screen represented by the parameters.
```
document.elementFromPoint(x, y)
document.elementsFromPoint(x, y)
```

<https://drafts.csswg.org/cssom-view/#dom-document-elementfrompoint>

<https://developer.mozilla.org/en-US/docs/Web/API/Document/elementFromPoint>

<https://developer.mozilla.org/en-US/docs/Web/API/Document/elementsFromPoint>
You can use the native JavaScript `elementFromPoint(x, y)` method, which returns the element at coordinates (x, y) in the viewport. See the [elementFromPoint w3c draft](http://dev.w3.org/csswg/cssom-view/#dom-document-elementfrompoint). And, a code sample:

```
function changeColor(newColor) {
    // Get the element placed at coords (2, 2)
    var elem = document.elementFromPoint(2, 2);
    // Set the foreground color to the element
    elem.style.color = newColor;
}
```

```
<p id="para1">Change this text color using the following buttons.</p>
<button onclick="changeColor('blue');">Blue</button>
<button onclick="changeColor('red');">Red</button>
```

You can use `setInterval()` to continuously check the element's hover event, but it's not recommended; try to use `.hover(...)` and CSS instead to enhance the application performance.
Get element at specified position - JavaScript
[ "", "javascript", "html", "dom", "" ]
A recent project called for importing data into an Oracle database. The program that will do this is a C# .Net 3.5 app and I'm using the Oracle.DataAccess connection library to handle the actual inserting.

I ran into a problem where I'd receive this error message when inserting a particular field:

```
ORA-12899 Value too large for column X
```

I used `Field.Substring(0, MaxLength);` but still got the error (though not for every record). Finally I saw what should have been obvious: my string was in ANSI and the field was UTF8. Its length is defined in bytes, not characters.

This gets me to my question. What is the best way to trim my string to fit the MaxLength? My substring code works by character length. Is there a simple C# function that can trim a UTF8 string intelligently by byte length (i.e. not hack off half a character)?
Here are two possible solutions - a LINQ one-liner processing the input left to right and a traditional `for`-loop processing the input from right to left. Which processing direction is faster depends on the string length, the allowed byte length, and the number and distribution of multibyte characters, so it is hard to give a general suggestion. The decision between LINQ and traditional code is probably a matter of taste (or maybe speed).

If speed matters, one could think about just accumulating the byte length of each character until reaching the maximum length instead of calculating the byte length of the whole string in each iteration. But I am not sure if this will work because I don't know UTF-8 encoding well enough. I could theoretically imagine that the byte length of a string does not equal the sum of the byte lengths of all characters.

```
public static String LimitByteLength(String input, Int32 maxLength)
{
    return new String(input
        .TakeWhile((c, i) =>
            Encoding.UTF8.GetByteCount(input.Substring(0, i + 1)) <= maxLength)
        .ToArray());
}

public static String LimitByteLength2(String input, Int32 maxLength)
{
    for (Int32 i = input.Length - 1; i >= 0; i--)
    {
        if (Encoding.UTF8.GetByteCount(input.Substring(0, i + 1)) <= maxLength)
        {
            return input.Substring(0, i + 1);
        }
    }

    return String.Empty;
}
```
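For reference, the same "cut to at most N bytes without splitting a character" operation sketched in Python, where the codec can do the back-tracking for us (`errors="ignore"` drops the trailing partial character after the cut; this is an illustration of the idea, not C# code):

```python
def limit_utf8_bytes(s, max_bytes):
    """Longest prefix of s whose UTF-8 encoding fits in max_bytes bytes."""
    encoded = s.encode("utf-8")
    if len(encoded) <= max_bytes:
        return s
    # Cut at the byte limit, then let the decoder discard any trailing
    # partial multi-byte sequence so the result is always valid UTF-8.
    return encoded[:max_bytes].decode("utf-8", errors="ignore")
```

For example, "é" is 2 bytes in UTF-8, so a 3-byte limit on "aéb" keeps "aé", while a 2-byte limit keeps only "a" rather than half of "é".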
I think we can do better than naively counting the total length of a string with each addition. LINQ is cool, but it can accidentally encourage inefficient code. What if I wanted the first 80,000 bytes of a giant UTF string? That's a *lot* of unnecessary counting. "I've got 1 byte. Now I've got 2. Now I've got 13... Now I have 52,384..." That's silly. Most of the time, at least in l'anglais, we can cut *exactly* on that `nth` byte. Even in another language, we're less than 6 bytes away from a good cutting point. So I'm going to start from @Oren's suggestion, which is to key off of the leading bit of a UTF8 char value. Let's start by cutting right at the `n+1th` byte, and use Oren's trick to figure out if we need to cut a few bytes earlier. **Three possibilities** If the first byte after the cut has a `0` in the leading bit, I know I'm cutting precisely before a single byte (conventional ASCII) character, and can cut cleanly. If I have a `11` following the cut, the next byte after the cut is the *start* of a multi-byte character, so that's a good place to cut too! If I have a `10`, however, I know I'm in the middle of a multi-byte character, and need to go back to check to see where it really starts. That is, though I want to cut the string after the nth byte, if that n+1th byte comes in the middle of a multi-byte character, cutting would create an invalid UTF8 value. I need to back up until I get to one that starts with `11` and cut just before it. **Code** Notes: I'm using stuff like `Convert.ToByte("11000000", 2)` so that it's easy to tell what bits I'm masking (a little more about bit masking [here](https://stackoverflow.com/a/16328309/1028230)). In a nutshell, I'm `&`ing to return what's in the byte's first two bits and bringing back `0`s for the rest. Then I check the `XX` from `XX000000` to see if it's `10` or `11`, where appropriate. 
I found out *today* that [C# 6.0 might actually support binary representations](https://stackoverflow.com/a/23546326/1028230), which is cool, but we'll keep using this kludge for now to illustrate what's going on. The `PadLeft` is just because I'm overly OCD about output to the Console.

So here's a function that'll cut you down to a string that's `n` bytes long or the greatest number less than `n` that ends with a "complete" UTF8 character.

```
public static string CutToUTF8Length(string str, int byteLength)
{
    byte[] byteArray = Encoding.UTF8.GetBytes(str);
    string returnValue = string.Empty;

    if (byteArray.Length > byteLength)
    {
        int bytePointer = byteLength;

        // Check high bit to see if we're [potentially] in the middle of a multi-byte char
        if (bytePointer >= 0
            && (byteArray[bytePointer] & Convert.ToByte("10000000", 2)) > 0)
        {
            // If so, keep walking back until we have a byte starting with `11`,
            // which means the first byte of a multi-byte UTF8 character.
            while (bytePointer >= 0
                && Convert.ToByte("11000000", 2) != (byteArray[bytePointer] & Convert.ToByte("11000000", 2)))
            {
                bytePointer--;
            }
        }

        // See if we had 1s in the high bit all the way back. If so, we're toast. Return empty string.
        if (bytePointer > 0)
        {
            returnValue = Encoding.UTF8.GetString(byteArray, 0, bytePointer); // hat tip to @NealEhardt! Well played. ;^)
        }
    }
    else
    {
        returnValue = str;
    }

    return returnValue;
}
```

I initially wrote this as a string extension. Just add back the `this` before `string str` to put it back into extension format, of course. I removed the `this` so that we could just slap the method into `Program.cs` in a simple console app to demonstrate.

**Test and expected output**

Here's a good test case, with the output it creates below, written to be the `Main` method in a simple console app's `Program.cs`.
```
static void Main(string[] args)
{
    string testValue = "12345“”67890”";

    for (int i = 0; i < 15; i++)
    {
        string cutValue = Program.CutToUTF8Length(testValue, i);
        Console.WriteLine(i.ToString().PadLeft(2)
            + ": " + Encoding.UTF8.GetByteCount(cutValue).ToString().PadLeft(2)
            + ":: " + cutValue);
    }

    Console.WriteLine();
    Console.WriteLine();

    foreach (byte b in Encoding.UTF8.GetBytes(testValue))
    {
        Console.WriteLine(b.ToString().PadLeft(3) + " " + (char)b);
    }

    Console.WriteLine("Return to end.");
    Console.ReadLine();
}
```

Output follows. Notice that the "smart quotes" in `testValue` are three bytes long in UTF8 (though when we write the chars to the console in ASCII, it outputs dumb quotes). Also note the `?`s output for the second and third bytes of each smart quote in the output.

The first five characters of our `testValue` are single bytes in UTF8, so 0-5 byte values should be 0-5 characters. Then we have a three-byte smart quote, which can't be included in its entirety until 5 + 3 bytes. Sure enough, we see that pop out at the call for `8`. Our next smart quote pops out at 8 + 3 = 11, and then we're back to single byte characters through 14.

```
 0:  0::
 1:  1:: 1
 2:  2:: 12
 3:  3:: 123
 4:  4:: 1234
 5:  5:: 12345
 6:  5:: 12345
 7:  5:: 12345
 8:  8:: 12345"
 9:  8:: 12345"
10:  8:: 12345"
11: 11:: 12345""
12: 12:: 12345""6
13: 13:: 12345""67
14: 14:: 12345""678

 49 1
 50 2
 51 3
 52 4
 53 5
226 â
128 ?
156 ?
226 â
128 ?
157 ?
 54 6
 55 7
 56 8
 57 9
 48 0
226 â
128 ?
157 ?
Return to end.
```

So that's kind of fun, and I'm in just before the question's five year anniversary. Though Oren's description of the bits had a small error, that's *exactly* the trick you want to use. Thanks for the question; neat.
Best way to shorten UTF8 string based on byte length
[ "", "c#", "oracle", "utf-8", "ora-12899", "" ]
[Construct](https://construct.readthedocs.io) is a DSL implemented in Python used to describe data structures (binary and textual). Once you have the data structure described, Construct can parse and build it for you. Which is good ("DRY", "Declarative", "Denotational-Semantics"...)

Usage example:

```
# code from construct.formats.graphics.png
itxt_info = Struct("itxt_info",
    CString("keyword"),
    UBInt8("compression_flag"),
    compression_method,
    CString("language_tag"),
    CString("translated_keyword"),
    OnDemand(
        Field("text", lambda ctx: ctx._.length - (len(ctx.keyword) +
            len(ctx.language_tag) + len(ctx.translated_keyword) + 5),
        ),
    ),
)
```

I am in need of such a tool for Haskell and I wonder if something like this exists. I know of:

* Data.Binary: User implements parsing and building separately
* Parsec: Only for parsing? Only for text?

I guess one must use Template Haskell to achieve this?
Currently (afaik) there is no equivalent to Construct in Haskell. One can be implemented using Template Haskell.
I'd say it depends what you want to do, and if you need to comply with any existing format. [Data.Binary](http://hackage.haskell.org/package/binary) will (surprise!) help you with binary data, both reading and writing. You can either write the code to read/write yourself, or let go of the details and generate the required code for your data structures using some additional tools like [DrIFT](http://repetae.net/computer/haskell/DrIFT/) or [Derive](http://community.haskell.org/~ndm/derive/). DrIFT works as a preprocessor, while Derive can work as a preprocessor and with TemplateHaskell. [Parsec](http://www.haskell.org/haskellwiki/Parsec) will only help you with parsing text. No binary data (as easily), and no writing. Work is done with regular `String`s. There are `ByteString` equivalents on hackage. For your example above I'd use Data.Binary and write custom `put`/`get`ers myself. Have a look at the [parser category at hackage](http://hackage.haskell.org/packages/archive/pkg-list.html#cat:parsing) for more options.
Haskell equivalent of Python's "Construct"
[ "", "python", "parsing", "haskell", "dsl", "construct", "" ]
I have a thread running that delegates out some tasks. When a single task is complete, an event is raised saying that it has completed. These tasks need to be run in a specific order and need to wait for the previous task to finish. How can I make the thread wait until it receives the "task completed" event? (Aside from the obvious event handler that sets a flag and then a while loop polling the flag)
I often use the [`AutoResetEvent`](http://msdn.microsoft.com/en-us/library/system.threading.autoresetevent.aspx) wait handle when I need to wait for an asynchronous task to finish:

```
public void PerformAsyncTasks()
{
    SomeClass someObj = new SomeClass();
    AutoResetEvent waitHandle = new AutoResetEvent(false);

    // create and attach event handler for the "Completed" event
    EventHandler eventHandler = delegate(object sender, EventArgs e)
    {
        waitHandle.Set();  // signal that the finished event was raised
    };
    someObj.TaskCompleted += eventHandler;

    // call the async method
    someObj.PerformFirstTaskAsync();

    // Wait until the event handler is invoked
    waitHandle.WaitOne();

    // the completed event has been raised, go on with the next one
    someObj.PerformSecondTaskAsync();
    waitHandle.WaitOne();

    // ...and so on
}
```
One option would be to use an [EventWaitHandle](http://msdn.microsoft.com/en-us/library/system.threading.eventwaithandle.aspx) to signal completion.
Which methods can be used to make thread wait for an event and then continue its execution?
[ "", "c#", ".net", "multithreading", "events", "event-handling", "" ]
I've been storing some simple user click-actions in text files so I can easily refer to them. I'm not looking to go the full db route, as I'm looking to learn Hadoop, and should be able to get the data from my log files, but until then I'm looking for a fairly light-weight solution. I could continue with the txt file route, but it ends up being a lot of files (one for each type of user click). What I was thinking of doing is putting an associative array into a text file (one for each day), and then reading in that associative array and updating key=>value to represent the click area and number of clicks. Of course, I can't store an associative array directly in a txt file, without going through a bunch of 'read into an array', 'check array for existing key', 'update array', 'print out to txt'. All of this just doesn't seem very efficient (even just reading the txt into the array is currently a bit of a pain). I'm hoping somebody has a better suggestion for how I should do this.
> Of course, I can't store an associative array directly in a txt file Yes, you can. Just [serialize](http://nl.php.net/serialize) the array before saving it.
How are you storing them now? JavaScript? Have you considered google.com/analytics/? There are "serialize" functions available for PHP and JavaScript (3rd party) that will let you store arrays as text: php.net/serialize, phpjs.org/functions/serialize:508
storing simple user action/click tracking (associative array in text??)
[ "", "php", "click", "tracking", "" ]
I'm writing a plug-in for a program where I need to track when native objects are added, removed, and edited in the active document. The API has events that are fired when the document is edited. However, the program does not track when the native objects actually change. Instead, an object changing is treated as the object being deleted and then immediately replaced with another modified object with the same ID. This is done this way so the program can keep track of an undo record. After some experimenting I've determined that the events are raised as follows:

**An Object Is Added:** OnAddObject Event

**An Object Is Removed:** OnDeleteObject Event

**An Object Is Changed:** OnReplaceObject Event -> OnDeleteObject Event -> OnAddObject Event

Right now my plug-in is only watching the OnAdd and OnDelete events, where it is adding and removing instances of my custom object to and from a collection. But this also means every time an object changes my plugin is removing and reinitializing a near-identical object. I'd rather just know that the document object has changed so my custom object can be refreshed rather than completely reinstantiated. How can my methods that are subscribed to the OnDelete and OnAdd events tell that the object is not really being added or deleted but is being replaced because it has changed?
I suggest introducing "lock" variables (e.g. bools). When the Replaced event fires, set them to locked. In Added and Removed, check the lock variables: if they are set to locked, set them to unlocked and return.
Keep track of the document ID.
How can I track when multiple events have occured?
[ "", "c#", "events", "event-handling", "" ]
What is the difference between Java exception handling and using `assert` conditions? It's known that Assert is of two types. But when should we use `assert` keyword?
Use assertions for internal logic checks within your code, and normal exceptions for error conditions outside your immediate code's control. Don't forget that assertions can be turned on and off - if you care about things like argument validation, that should be explicit using exceptions. (You could, however, choose to perform argument validation on *private* methods using assertions, on the grounds that a violation at that point is due to an internal bug rather than an external error.) Alternatively it's entirely reasonable (IMO) to use exceptions for everything. I personally don't use assertions much at all, but it's a matter of personal preference to some extent. (There can certainly be objective arguments for and against assertions, but it's not sufficiently clear cut to remove preference altogether.)
Java assertions are built on top of Java exceptions and exception handling. Indeed, when a Java assertion fails, the result is an AssertionError exception that can be caught like any other Java exception. The key differences between exceptions and assertions are:

* Assertions are *intended* to be used solely as a means of detecting programming errors, aka bugs. By contrast, an exception can indicate other kinds of error or "exceptional" condition; e.g. invalid user input, missing files, heap full and so on.
* The Java language provides syntactic support for assertions, in the form of the `assert` statement. Compare the following:

```
if (x != y) {
    throw new SomeException("x != y");
}

assert x != y;
```

* Most importantly, Java allows you to enable or disable assertion checking globally or on individual classes when you start the JVM.

Note: some people say that you should *always* run production code with assertion checking turned off. I tend to disagree with this as a blanket statement. If your production code is known to be stable AND you need to squeeze that last bit of performance out of it, then turning off assertions is good. But, if a (say) 10% performance hit is not a real problem, I'd prefer to have an application die with an assertion error if the alternative is for it to continue and corrupt an important database.

@Mario Ortegón commented thus:

> The "turning off" is because assertions can be used to verify the result of an optimized algorithm by comparing its implementation against a well-known, but slow, algorithm. So, in development it is OK to invoke that `O(N^3)` method to assert that the `O(log N)` algorithm works as intended. But this is something that you do not want in production.

Whether or not you think it is *good practice* to turn off assertions in production, it is definitely *bad practice* to write assertions that have a significant impact on performance when enabled. Why?
Because it means that you no longer have the option of enabling assertions in production (to trace a problem) or in your stress / capacity testing. In my opinion, if you need to do `O(N^3)` pre/post-condition testing, you should do it in your unit tests.
Exception Vs Assertion
[ "", "java", "assert", "" ]
In my mind the ID field of a business object should be read-only (public get and private set), as by definition the ID will never change (it uniquely identifies a record in the database). This creates a problem when you create a new object (ID not set yet) and save it in the database, through a stored procedure for example, which returns the newly created ID: how do you store it back in the object if the ID property is read-only?

Example:

```
Employee employee = new Employee();
employee.FirstName="John";
employee.LastName="Smith";
EmployeeDAL.Save(employee);
```

How does the Save method (which actually connects to the database to save the new employee) update the EmployeeId property in the Employee object if this property is read-only (which it should be, as the EmployeeId will never ever change once it's created)? It looks like the Id should be writable by the DAL and read-only for the rest of the world. How do you implement this, especially if the DAL classes and the business object ones are in different assemblies? I don't want to create a Save method in the Employee class, as this class should have nothing to do with the database.
Another possible solution is to declare Employee as:

```
public class Employee
{
    public int Id { get; internal set; }
}
```

... provided that the Employee and DAL classes are in the same assembly. I don't claim to like it but I have used it.
You can make your DAL method just return an updated object:

```
public class EmployeeDAL
{
    public Employee Save(Employee employee)
    {
        // Save employee
        // Get the newly generated ID
        // Recreate the object with the new ID and return it
    }
}
```

Alternatively, you can generate a new ID in code, instantiate an object with this ID, and then ask your DAL to save it. If you wish that your object was updated during the Save operation, you will have to make this property public. I personally like to create *immutable* objects, those you can only set up once by passing all the values into the constructor. With this approach you would just create an object to be saved, then retrieve it back along with the assigned ID from the database and return it to the caller.
Should the ID in the business object be read-only or not?
[ "", "c#", "business-logic", "" ]
I have two integer values `a` and `b`, but I need their ratio in floating point. I know that `a < b` and I want to calculate `a / b`, so if I use integer division I'll always get 0 with a remainder of `a`. How can I force `c` to be a floating point number in Python 2 in the following? ``` c = a / b ``` --- In 3.x, the behaviour is reversed; see [Why does integer division yield a float instead of another integer?](https://stackoverflow.com/questions/1282945) for the opposite, 3.x-specific problem.
In Python 2, division of two ints produces an int. In Python 3, it produces a float. We can get the new behaviour by importing from `__future__`.

```
>>> from __future__ import division
>>> a = 4
>>> b = 6
>>> c = a / b
>>> c
0.66666666666666663
```
You can cast to float by doing `c = a / float(b)`. If the numerator or denominator is a float, then the result will be also. --- A caveat: as commenters have pointed out, this won't work if `b` might be something other than an integer or floating-point number (or a string representing one). If you might be dealing with other types (such as complex numbers) you'll need to either check for those or use a different method.
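A quick sanity check of both approaches (runnable as-is under Python 3, where true division is already the default; under Python 2 you'd need the `float()` cast or the `__future__` import):

```python
a = 4
b = 6

# Casting one operand to float forces floating-point division
# (this works in Python 2 and Python 3 alike).
c = a / float(b)
print(c)

# Under true division (Python 3, or Python 2 after
# "from __future__ import division"), / on two ints gives a float...
print(a / b)

# ...while // explicitly asks for the old floor-division behaviour.
print(a // b)
```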
How can I force division to be floating point? Division keeps rounding down to 0?
[ "", "python", "floating-point", "integer", "division", "python-2.x", "" ]
Is there a way to collect the IP address of a client connected to your website through a proxy server? The entire setup is an internal LAN and through the sysadmin, I have control over the proxy machine as well. I am using PHP5 for the website server side. I tried `$_SERVER['REMOTE_ADDR']` in PHP but this variable just stores the IP address of the proxy. Any ideas?
The standard solution (in PHP) is:

```
if (!empty($_SERVER['HTTP_X_FORWARDED_FOR'])) {
    $ip = $_SERVER['HTTP_X_FORWARDED_FOR'];
} else {
    $ip = $_SERVER['REMOTE_ADDR'];
}
```

But as the first answer says, this all depends on the header actually being set.
It depends on the proxy. Some proxies add a header which gives the original IP address, the X-Forwarded-For header, but given that most companies use proxies to hide the internal network structure, that's rare. If this is the case then you're not going to be able to do it easily. If you have control over the proxy then it's time to read the proxy documentation to see how to add that header.
Finding IP address of client connected through a proxy
[ "", "php", "proxy", "ip-address", "" ]
The syntax may be wrong.

```
public static class Storage
{
    public static string filePath { get; set; }
}
```

And

```
public class Storage
{
    private Storage() {}
    public static string filePath { get; set; }
}
```

I got this from an example on the internet. What is the use of the second one?
If you look at the IL code, the static class will be `abstract` and `sealed`, which gives two important qualities:

* You cannot create instances from it
* It cannot be inherited

A consequence of the first point is that a static class cannot contain non-static members. There may be many uses of static members in a non-static class. One common use is to have a class factory:

```
public class SomeClass
{
    public int SomeInt { get; set; }

    public static SomeClass Create(int defaultValue)
    {
        SomeClass result = new SomeClass();
        result.SomeInt = defaultValue;
        return result;
    }
}
```
Here is the official/MSDN hot-spot to learn about [static classes](http://msdn.microsoft.com/en-us/library/79b3xss3%28VS.80%29.aspx)

> The main features of a static class are:
>
> * They only contain static members.
> * They cannot be instantiated.
> * They are sealed.
> * They cannot contain Instance Constructors

Basically a static class is identical to a 'normal'/non-static class which has only static methods and a private ctor. Marking it as static helps clarify intent and helps the compiler do some compile-time checks to disallow certain things, e.g. disallow instantiation.

Real-world uses I can think of: use it to house or as a way to organize

* utility methods (methods not associated with any instance of a type), e.g. Math for Min and Max methods
* extension methods, e.g. StopWatchExtensions for a Reset method on a StopWatch
C#: What is the difference between a static class and a non-static class (I am talking about the class itself, not the field)
[ "", "c#", "class", "static", "" ]
If I want to create a function template, where the template parameter isn't used in the argument list, I can do it thusly:

```
template<typename T>
T myFunction()
{
    //return some T
}
```

But the invocation must specify the 'T' to use, as the compiler doesn't know how to work it out.

```
myFunction<int>();
```

But, suppose I wanted to do something similar, but for the '[]' operator.

```
template<typename T>
T SomeObject::operator[](unsigned int)
{
    //Return some T
}
```

Is there any way to invoke this operator? This doesn't appear valid:

```
SomeObject a;
a<int>[3];
```
This should work:

```
class C
{
public:
    template <class T>
    T operator[](int n)
    {
        return T();
    }
};

void foo()
{
    C c;
    int x = c.operator[]<int>(0);
}
```

But it's of no real value because you'd always have to specify the type, and so it looks like a very ugly function call - the point of an operator overload is to look like an operator invocation.
[Boost.Program\_options](http://www.boost.org/doc/libs/1_39_0/doc/html/program_options.html) uses this neat syntax:

```
int& i = a["option"].as<int>();
```

Which is achieved with something like this:

```
class variable_value
{
public:
    variable_value(const boost::any& value) : m_value(value) {}

    template<class T>
    const T& as() const {
        return boost::any_cast<const T&>(m_value);
    }

    template<class T>
    T& as() {
        return boost::any_cast<T&>(m_value);
    }

private:
    boost::any m_value;
};

class variables_map
{
public:
    const variable_value& operator[](const std::string& name) const {
        return m_variables.at(name);
    }

    variable_value& operator[](const std::string& name) {
        return m_variables[name];
    }

private:
    std::map<std::string, variable_value> m_variables;
};
```

You could adapt this idea to suit your own needs.
Operator templates in C++
[ "", "c++", "" ]
I wish to do a select on a table and order the results by a certain keyword or list of keywords. For example, I have a table like so:

| ID | Code |
| --- | --- |
| 1 | Health |
| 2 | Freeze |
| 3 | Phone |
| 4 | Phone |
| 5 | Health |
| 6 | Hot |

So, rather than just doing a simple `Order By asc/desc`, I'd like to `order by Health, Phone, Freeze, Hot`. Is this possible?
Try using this:

```
select * from table
order by FIELD(Code, 'Health', 'Phone', 'Freeze', 'Hot')
```
Here's a horrible hack:

```
select * from table
order by (
    case Code
        when 'Health' then 0
        when 'Phone' then 1
        when 'Freeze' then 2
        when 'Hot' then 3
    end
)
```
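The `CASE`-based ordering is easy to sanity-check, since (unlike MySQL's `FIELD()`) it is portable SQL. Here's a quick sketch using SQLite through Python's `sqlite3` module, with the table and data from the question (the table name `example` is an arbitrary choice):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE example (ID INTEGER, Code TEXT)")
conn.executemany(
    "INSERT INTO example VALUES (?, ?)",
    [(1, "Health"), (2, "Freeze"), (3, "Phone"),
     (4, "Phone"), (5, "Health"), (6, "Hot")],
)

# The CASE expression maps each Code to its rank in the desired order.
rows = conn.execute("""
    SELECT ID, Code FROM example
    ORDER BY CASE Code
        WHEN 'Health' THEN 0
        WHEN 'Phone'  THEN 1
        WHEN 'Freeze' THEN 2
        WHEN 'Hot'    THEN 3
    END
""").fetchall()

print([code for _, code in rows])
# ['Health', 'Health', 'Phone', 'Phone', 'Freeze', 'Hot']
```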
SQL Order By list of strings?
[ "", "sql", "mysql", "" ]
Can someone explain the difference between using ``` define('SOMETHING', true); ``` and ``` $SOMETHING = true; ``` And maybe the benefits between one or the other? I use variables everywhere and even in a config type file that is included to everypage I still use variables as I don't see why to use the define method.
DEFINE makes a constant, and constants are global and can be used anywhere. They also cannot be redefined, which variables can be. I normally use DEFINE for Configs because no one can mess with it after the fact, and I can check it anywhere without global-ling, making for easier checks.
Once defined, a 'constant' cannot be changed at runtime, whereas an ordinary variable assignment can. Constants are better for things like configuration directives which should not be changed during execution. Furthermore, code is easier to read (and maintain & handover) if values which are meant to be constant are *explicitly* made so.
DEFINE vs Variable in PHP
[ "", "php", "" ]
I really appreciate it if someone can help me with how to use a shape drawable as my background XML for my view. This is what I tried, but I never get the color. Android always gives me black text on white background, regardless of what color attribute I put.

```
<?xml version="1.0" encoding="utf-8"?>
<shape xmlns:android="http://schemas.android.com/apk/res/android"
    android:shape="rectangle">
    <stroke android:width="1dip" android:color="#FBBB" />
    <solid android:color="#6000"/>
</shape>
```

I tried this, which does not work:

```
<shape xmlns:android="http://schemas.android.com/apk/res/android"
    android:shape="rectangle"
    android:color="#6000">
</shape>
```

I tried this, which does not work:

```
<shape xmlns:android="http://schemas.android.com/apk/res/android"
    android:shape="rectangle"
    android:background="#6000">
</shape>
```

I googled; this is the limited result I found to try.
You have wrong color settings; you must specify 4-byte colors, e.g. `#ffff8080`:

```
<shape xmlns:android="http://schemas.android.com/apk/res/android">
    <solid android:color="#f0600000"/>
    <stroke android:width="3dp" android:color="#ffff8080"/>
    <corners android:radius="3dp" />
    <padding android:left="10dp" android:top="10dp"
        android:right="10dp" android:bottom="10dp" />
</shape>
```
OK - I'm pretty sure my problem is the same as what drove your question, and that I've found its cause. The problem is conflicting resource definitions (specifically, resource filenames). Say, for example, for some reason you put a file named "dialog\_bg.png" in /res/color/ in your project, and forgot that you did this (or it happened accidentally). If you then try to define a Shape Drawable in your project named "res/drawable/dialog\_bg.xml" - the PNG takes precedence. Since you can have many "res" folders for different DPI, form-factor, SDK, etc., it's fairly easy to wind up with a filename collision.

This can also happen with Android Library projects. If your project has any dependencies on projects which themselves have resources, they can cause conflicts. As I just found today, Eclipse can either hide or fail to show a warning about this in many situations. When this happens it can easily appear that the Shape Drawable is not applied. Since "dialog\_bg.png" probably isn't designed for your view, you get unexpected results and it's easy to be confused about what's really going on.

The easiest way to solve this is to rename the shape drawable in your project. If the problem is with a resource(s) in an Android Library Project, then there may be a better solution found by applying the recommended practice as described at <http://tools.android.com/recent/buildchangesinrevision14>.
Using shape drawable as my background xml
[ "", "java", "android", "" ]
I am writing a program using JSP and Java. How can I use property files to support multiple languages? And by the way, there are always some things like `\u4345`. What is this? Where do they come from?
For the multiple languages, check out the [ResourceBundle](http://java.sun.com/javase/6/docs/api/java/util/ResourceBundle.html) class.

About the `\u4345`: this is one of the dark and very annoying legacy corners of Java. The property files need to be in ASCII, so all non-ASCII characters need to be encoded as `\uxxxx` (their Unicode value). You can convert a file to use this encoding with the [native2ascii command line tool](http://java.sun.com/javase/6/docs/technotes/tools/windows/native2ascii.html). If you are using an IDE or a build tool, there should be an option to invoke this automatically.

If the property file is something you have full control over yourself, you can, starting from Java 6, also use UTF-8 (or any other character set) directly in the property file, and [specify that encoding](http://java.sun.com/javase/6/docs/api/java/util/Properties.html#load%28java.io.Reader%29) when you load it:

```
// new in Java6
props.load(new InputStreamReader(new FileInputStream(file), "UTF-8"));
```

Again, this only works if you load the Properties yourself, not if someone else does it, such as a ResourceBundle (used for internationalization).
There is an entire tutorial at <http://java.sun.com/docs/books/tutorial/i18n/index.html>. It specifies and explains everything you need to know.
How to handle multiple languages in Java apps?
[ "", "java", "internationalization", "" ]
Can anyone explain to me how to do more complex data sets like team stats, weather, dice, or complex number types? I understand all the math and how everything works; I just don't know how to input more complex data, and then how to read the data it spits out. If someone could provide examples in Python, that would be a big help.
You have to encode your input and your output to something that can be represented by the neural network units (for example, 1 for "x has a certain property p" and -1 for "x doesn't have the property p" if your units' range is in [-1, 1]).

The way you encode your input and the way you decode your output depends on what you want to train the neural network for. Moreover, there are many "neural network" algorithms and learning rules for different tasks (back propagation, Boltzmann machines, self-organizing maps).
Your features must be decomposed into parts that can be represented as real numbers. The magic of a Neural Net is it's a black box; the correct associations will be made (with internal weights) during the training.

---

**Inputs**

Choose as few features as are needed to accurately describe the situation, then decompose each into a set of real valued numbers.

* Weather: [temp today, humidity today, temp yesterday, humidity yesterday...] *the association between today's temp and today's humidity is made internally*
* Team stats: [ave height, ave weight, max height, top score,...]
* Dice: *not sure I understand this one, do you mean how to encode discrete values?\**
* Complex number: [a, *ai*, b, *bi*,...]

\* Discrete valued features are tricky, but can still be encoded as (0.0, 1.0). The problem is they don't provide a gradient to learn the threshold on.

---

**Outputs**

You decide what you want the output to mean, and then encode your training examples in that format. The fewer output values, the easier to train.

* Weather: [tomorrow's chance of rain, tomorrow's temp,...] \*\*
* Team stats: [chance of winning, chance of winning by more than 20,...]
* Complex number: [x, *xi*,...]

\*\* Here your *training* vectors would be: 1.0 if it rained the next day, 0.0 if it didn't

---

Of course, whether or not the problem can actually be modeled by a neural net is a different question.
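As a concrete sketch of this decomposition (the feature choices and scaling ranges here are illustrative assumptions, not a prescribed encoding):

```python
def encode_weather(temp_c, humidity_pct, rained):
    """Turn a day's weather into a flat vector of floats in roughly [0, 1].

    The temperature range (-20..45 C) used for rescaling is an
    illustrative assumption, not a fixed rule.
    """
    return [
        (temp_c + 20.0) / 65.0,   # temperature, rescaled toward [0, 1]
        humidity_pct / 100.0,     # humidity already has a natural range
        1.0 if rained else 0.0,   # discrete value encoded as 0/1
    ]

def encode_complex(z):
    """A complex number decomposes naturally into two reals."""
    return [z.real, z.imag]

# One training pair: yesterday + today concatenated as the input vector,
# "chance of rain tomorrow" (0.0 or 1.0 in the training data) as the target.
x = encode_weather(25.0, 80.0, False) + encode_weather(22.0, 95.0, True)
y = [1.0]
print(x, y)
```

The same pattern applies to the team-stats case: each stat becomes one rescaled float, and the target (e.g. "chance of winning") becomes a single 0/1 training value.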
Neural net input/output
[ "python", "neural-network" ]
I was wondering when the destructor is called under these circumstances, and if it is, will it be called on the main UI thread? Let's say I have the following code; when would the destructor be called, and would it wait until I have finished all my function calls? ``` private void Foo() { MyObject myObj = new MyObject(); DoSomeFunThingsWithMyObject(myObj); myObj = new MyObject(); //is the destructor for the first instance called now? DoLongOminousFunctionality(myObj); } //Or will it be called after the DoLongOminousFunctionality? ``` It's just something that interests me: whether the thread is interrupted at myObj = new MyObject(), or whether the destructor call waits until the thread is free. Thanks for the information.
The destructor will be called when the garbage collector decides that it has to clean up some old objects. You cannot rely on destructor execution times in .NET. Instead, you should use Dispose() if you want to clean up resources when they are no longer needed (especially when you hold unmanaged resources such as TCP connections, SQL connections, etc.). See [Implementing a Dispose Method](http://msdn.microsoft.com/en-us/library/fs2xkftw.aspx)
If it is crucial that the lifetime of your objects is managed, implement IDisposable and you can use the [using keyword](http://msdn.microsoft.com/en-us/library/yh598w02.aspx).
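The "don't rely on the finalizer; release resources deterministically" principle is not unique to C#. As a cross-language illustration (a Python sketch, not C#), a context manager plays the same role as IDisposable plus the using keyword: finalizer timing is up to the garbage collector, but the exit hook runs the moment the block ends.

```python
class Resource:
    """Illustrative resource holder. __del__ timing would be up to the GC
    (like a C# destructor); the context-manager protocol below releases the
    resource deterministically (like C#'s using statement)."""

    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.close()   # runs as soon as the with-block exits
        return False   # do not swallow exceptions

with Resource() as r:
    pass
# r.closed is guaranteed to be True here, regardless of when the GC runs
```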
When is my destructor called in this circumstance? (C#)
[ "c#", ".net", "destructor" ]
I worked on a parser for arithmetic expressions. The key there was building a syntax tree, where leaves are variables and nodes are operators. Now I'm thinking about parsing SQL queries. Parsing a simple select won't be a problem, but I'm not so sure about complex queries. Can you point me to a good reference about SQL parsing? Thank you in advance!
Take a look at the [SQL BNF grammars](http://savage.net.au/SQL/)
*Some code samples:* Look at the sourceforge [Open SQL parser](http://sourceforge.net/projects/osqlp/). There was an earlier question about an [sql parser library](https://stackoverflow.com/questions/660609/sql-parser-library-for-java). Look there.
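If a full BNF-driven parser feels like overkill at first, a toy Python tokenizer for simple SELECT statements shows the general shape of the problem. This is an illustrative sketch only; real SQL grammars (see the BNF link above) are far larger, and the helper names here are invented.

```python
import re

# word | punctuation | single-quoted string literal (with '' escaping)
_TOKEN_RE = re.compile(r"\s*(?:(\w+)|([(),*=.])|('(?:[^']|'')*'))")

def tokenize(sql):
    """Split a simple SQL statement into words, punctuation and literals."""
    sql = sql.strip()
    tokens, pos = [], 0
    while pos < len(sql):
        m = _TOKEN_RE.match(sql, pos)
        if not m:
            raise ValueError("unexpected character at position %d" % pos)
        tokens.append(m.group(m.lastindex))
        pos = m.end()
    return tokens

def parse_select(sql):
    """Parse 'SELECT col, ... FROM table' into a dict; no joins or WHERE."""
    toks = tokenize(sql)
    if not toks or toks[0].upper() != "SELECT":
        raise ValueError("not a SELECT statement")
    from_positions = [i for i, t in enumerate(toks) if t.upper() == "FROM"]
    if not from_positions:
        raise ValueError("missing FROM clause")
    i = from_positions[0]
    columns = [t for t in toks[1:i] if t != ","]
    return {"columns": columns, "table": toks[i + 1]}
```

Extending this toward joins, WHERE clauses and subqueries is exactly where the BNF grammar earns its keep.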
SQL - Parsing a query
[ "sql" ]
I have a JavaScript object that is treated as an associative array. Let's call it "fields". It has several elements, e.g.: ``` fields['element1'] = ... fields['element2'] = ... fields['element3'] = ... ``` Given fields[0], is it possible to obtain the name of the property (which is "element1") instead of its value?
Let's say you have an object oObject. It could be: ``` var oObject = {} ; oObject["aaa"] = "AAA" ; oObject["bbb"] = "BBB" ; oObject["ccc"] = "CCC" ; oObject["ddd"] = "DDD" ; oObject["eee"] = "EEE" ; ``` Now, let's say you want to know its properties' names and values, to put into the variables strName and strValue. For that you use the "for(x in o)" construct, as in the following example: ``` var strName, strValue ; for(strName in oObject) { strValue = oObject[strName] ; alert("name : " + strName + " : value : " + strValue) ; } ``` The "for(x in o)" construct will iterate over all properties of an object "o", and at each iteration will put the current property name in the variable "x". All you have to do, then, to get its value is to write o[x], but you already knew that. ## Additional info After some thinking, and after seeing the comment of Hank Gay, I feel additional info could be interesting. Let's be naive (and forget the "in JavaScript, all objects, including arrays, are associative containers" thing). You will usually need two kinds of containers: maps and arrays. Maps are created as in my example above (using the "o = new Object() ;" or the "o = {} ;" notation), and must be accessed through their properties. Of course, maps being maps, no ordering is guaranteed. Arrays are created differently, and even if they can be accessed as maps, they should be accessed only through their indices, to be sure order is maintained. Point is: * If you need a map, use a "new Object()" container * If you need an array, use a "new Array()" container * Don't EVER mix the two: don't EVER access a map through indices, and for arrays, ALWAYS access the data through indices, because if you don't follow those principles, you won't get what you want.
No, for two reasons. 1. fields[0] and fields["element1"] are different properties. 2. properties in an object are explicitly unordered You could loop over the properties: ``` function (obj) { for (var prop in obj) { if (obj.hasOwnProperty(prop)) { return prop; } } }; ``` …to get the "first" property, for some arbitrary value of "first" that could change at any time. <http://ajaxian.com/archives/fun-with-browsers-for-in-loop> explains the hasOwnProperty pattern.
JavaScript - Getting a name of an element in associative array
[ "javascript", "arrays" ]
I currently have an extension method on System.Windows.Forms.Control like this: ``` public static void ExampleMethod(this Control ctrl){ /* ... */ } ``` However, this method doesn't appear on classes derived from Control, such as PictureBox. Can I make an extension method that appears not only in Control, but for classes derived from Control, without having to do an explicit cast?
You must include the using statement for the namespace in which your extensions class is defined or the extension methods will not be in scope. Extension methods work fine on derived types (e.g. the extension methods defined on `IEnumerable<T>` in System.Linq).
An extension method will actually apply to all inheritors/implementors of the type that's being extended (in this case, Control). You might try checking your using statements to ensure the namespace that the extension method is in is being referenced where you're trying to call it.
C#: Adding extension methods to a base class so that they appear in derived classes
[ "c#", "extension-methods", "intellisense" ]
I'm trying to learn how to do Unit Testing and Mocking. I understand some of the principles of TDD and basic testing. However, I'm looking at refactoring the below code that was written without tests and am trying to understand how it needs to change in order to make it testable. ``` public class AgentRepository { public Agent Select(int agentId) { Agent tmp = null; using (IDataReader agentInformation = GetAgentFromDatabase(agentId)) { if (agentInformation.Read()) { tmp = new Agent(); tmp.AgentId = int.Parse(agentInformation["AgentId"].ToString()); tmp.FirstName = agentInformation["FirstName"].ToString(); tmp.LastName = agentInformation["LastName"].ToString(); tmp.Address1 = agentInformation["Address1"].ToString(); tmp.Address2 = agentInformation["Address2"].ToString(); tmp.City = agentInformation["City"].ToString(); tmp.State = agentInformation["State"].ToString(); tmp.PostalCode = agentInformation["PostalCode"].ToString(); tmp.PhoneNumber = agentInformation["PhoneNumber"].ToString(); } } return tmp; } private IDataReader GetAgentFromDatabase(int agentId) { SqlCommand cmd = new SqlCommand("SelectAgentById"); cmd.CommandType = CommandType.StoredProcedure; SqlDatabase sqlDb = new SqlDatabase("MyConnectionString"); sqlDb.AddInParameter(cmd, "AgentId", DbType.Int32, agentId); return sqlDb.ExecuteReader(cmd); } } ``` These two methods are in a single class. The database-related code in the GetAgentFromDatabase is related to Enterprise Libraries. How would I be able to go about making this testable? Should I abstract out the GetAgentFromDatabase method into a different class? Should GetAgentFromDatabase return something other than an IDataReader? Any suggestions or pointers to external links would be greatly appreciated.
You're correct about moving **GetAgentFromDatabase()** into a separate class. Here's how I redefined **AgentRepository**: ``` public class AgentRepository { private IAgentDataProvider m_provider; public AgentRepository( IAgentDataProvider provider ) { m_provider = provider; } public Agent GetAgent( int agentId ) { Agent agent = null; using( IDataReader agentDataReader = m_provider.GetAgent( agentId ) ) { if( agentDataReader.Read() ) { agent = new Agent(); // set agent properties later } } return agent; } } ``` where I defined the **IAgentDataProvider** interface as follows: ``` public interface IAgentDataProvider { IDataReader GetAgent( int agentId ); } ``` So, **AgentRepository** is the class under test. We'll mock **IAgentDataProvider** and inject the dependency. (I did it with **Moq**, but you can easily redo it with a different isolation framework). ``` [TestFixture] public class AgentRepositoryTest { private AgentRepository m_repo; private Mock<IAgentDataProvider> m_mockProvider; [SetUp] public void CaseSetup() { m_mockProvider = new Mock<IAgentDataProvider>(); m_repo = new AgentRepository( m_mockProvider.Object ); } [TearDown] public void CaseTeardown() { m_mockProvider.Verify(); } [Test] public void AgentFactory_OnEmptyDataReader_ShouldReturnNull() { m_mockProvider .Setup( p => p.GetAgent( It.IsAny<int>() ) ) .Returns<int>( id => GetEmptyAgentDataReader() ); Agent agent = m_repo.GetAgent( 1 ); Assert.IsNull( agent ); } [Test] public void AgentFactory_OnNonemptyDataReader_ShouldReturnAgent_WithFieldsPopulated() { m_mockProvider .Setup( p => p.GetAgent( It.IsAny<int>() ) ) .Returns<int>( id => GetSampleNonEmptyAgentDataReader() ); Agent agent = m_repo.GetAgent( 1 ); Assert.IsNotNull( agent ); // verify more agent properties later } private IDataReader GetEmptyAgentDataReader() { return new FakeAgentDataReader() { ... }; } private IDataReader GetSampleNonEmptyAgentDataReader() { return new FakeAgentDataReader() { ... }; } } ``` (I left out the implementation of class **FakeAgentDataReader**, which implements **IDataReader** and is trivial -- you only need to implement **Read()** and **Dispose()** to make the tests work.) The purpose of **AgentRepository** here is to take **IDataReader** objects and turn them into properly formed **Agent** objects. You can expand the above test fixture to test more interesting cases. After unit-testing **AgentRepository** in isolation from the actual database, you will need unit tests for a concrete implementation of **IAgentDataProvider**, but that's a topic for a separate question. HTH
The problem here is deciding what is the SUT and what is the Test. With your example you are trying to test the `Select()` method and therefore want to isolate that from the database. You have several choices: 1. Virtualise `GetAgentFromDatabase()` so you can provide a derived class with code to return the correct values, in this case creating an object that provides `IDataReader` functionality without talking to the DB, i.e. ``` class MyDerivedExample : YourUnnamedClass { protected override IDataReader GetAgentFromDatabase() { return new MyDataReader({"AgentId", "1"}, {"FirstName", "Fred"}, ...); } } ``` 2. As [Gishu suggested](https://stackoverflow.com/questions/1233486/how-could-i-refactor-this-factory-type-method-and-database-call-to-be-testable/1233585#1233585), instead of using IsA relationships (inheritance), use HasA (object composition), where you once again have a class that handles creating a mock `IDataReader`, but this time without inheriting. However, both of these result in lots of code that simply defines a set of results that will be returned when queried. Admittedly we can keep this code in the test code instead of our main code, but it's an effort. All you are really doing is defining a result set for particular queries, and you know what's really good at doing that... A database. 3. I used LinqToSQL a while back and discovered that the `DataContext` objects have some very useful methods, including `DeleteDatabase` and `CreateDatabase`. ``` public const string UnitTestConnection = "Data Source=.;Initial Catalog=MyAppUnitTest;Integrated Security=True"; [FixtureSetUp()] public void Setup() { var context = new MyAppDataContext(UnitTestConnection); if (context.DatabaseExists()) { Console.WriteLine("Removing exisitng test database"); context.DeleteDatabase(); } Console.WriteLine("Creating new test database"); context.CreateDatabase(); context.SubmitChanges(); } ``` Consider it for a while.
The problem with using a database for unit tests is that the data will change. Delete your database and use your tests to evolve your data that can be used in future tests. There are two things to be careful of Make sure your tests run in the correct order. The MbUnit syntax for this is `[DependsOn("NameOfPreviousTest")]`. Make sure only one set of tests is running against a particular database.
How could I refactor this factory-type method and database call to be testable?
[ "c#", "unit-testing", "mocking", "isolation-frameworks" ]
I'm using a C# .NET 2.0 Windows application, and I'm using app.config for my application settings, but a change in AppSettings isn't reflected at runtime; it needs the application to be restarted. How can I avoid that? Here is the code snippet I use to read and write the application settings. I'm reading a setting like this: ``` string temp = ConfigurationManager.AppSettings.Get(key); ``` I'm updating the value like this, where node is the current configuration/appSettings node: ``` node.Attributes["value"].Value = value; xmlDoc.Save(AppDomain.CurrentDomain.SetupInformation.ConfigurationFile); ```
You could try calling ``` ConfigurationManager.RefreshSection("appSettings") ``` to refresh the AppSettings section of the file from disk. Once they have been refreshed, you should be able to read the new values. I've just tested this and it does indeed work.
Alternatively, you could create a singleton 'Options' to hold on to your application settings and perform your read/writes for you. Once loaded, changing the .config doesn't require reloading, you simply set a property on the singleton and call your .Save() method. The 'runtime' version of your settings is in the singleton, no need to read from disk.
A change in AppSettings needs my application restarted; how can I avoid this?
[ "c#", ".net", "configuration", "appsettings", "configurationmanager" ]
What exactly does `Connection Lifetime=0` mean in a connection string?
Updated: A value of zero (0) causes pooled connections to have the maximum connection timeout. [Ref](http://dev.mysql.com/doc/refman/5.0/es/connector-net-examples-mysqlconnection.html).
Based on my research I believe that [Myra](https://stackoverflow.com/questions/1233488/connection-lifetime-0-in-mysql-connection-string/1233557#1233557) is the closest of the other answers. It is *not* the same as the connection timeout. Instead see this pseudocode from [this](https://learn.microsoft.com/en-us/archive/blogs/angelsb/ado-net-the-misunderstood-connection-lifetime-managed-pooler-connection-string-keyword) article: ``` On SqlConnection.Close Check if time the connection has been open is greater than Connection Lifetime if it is, throw the connection away Else Put connection on the pool ``` The same article explains why you would rarely want to use this property (and the situations in which you might). Note that it has been renamed to "Load Balance Timeout" in an attempt to clarify its behavior per the above article.
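The pseudocode from the article can be made concrete. Below is a hedged Python sketch of the close-time check a pooler might perform; the class and method names are invented for illustration and don't correspond to any real driver's API.

```python
import time

class PooledConnection:
    """Stand-in for a physical connection; records when it was opened."""
    def __init__(self, opened_at=None):
        self.opened_at = time.monotonic() if opened_at is None else opened_at

class Pool:
    """Connection Lifetime bounds how long a connection may have been open
    when it is returned; older connections are destroyed, not reused."""
    def __init__(self, lifetime_seconds):
        self.lifetime = lifetime_seconds
        self.idle = []

    def release(self, conn, now=None):
        if now is None:
            now = time.monotonic()
        # A lifetime of 0 means "never expire": always return to the pool
        if self.lifetime and (now - conn.opened_at) > self.lifetime:
            return "discarded"
        self.idle.append(conn)
        return "pooled"
```

Note the check happens on *close*, not while the connection is in use, which is why the setting is about load balancing rather than timing out active work.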
Connection Lifetime=0 in MySql connection string
[ ".net", "sql", "mysql", "connection-string" ]
Does anyone know how I can get the SelectedItem (not the Header) from a TreeView? Here is my code: ``` <TreeView Name="treeView1" DataContext="{Binding Path=PresentationsViewModel}" Grid.Column="1" > <TreeViewItem IsExpanded="True" Header="Objects-A-List" DisplayMemberPath="Name" ItemsSource="{Binding Path=MyItem.ListA}"></TreeViewItem> <TreeViewItem IsExpanded="True" Header="Objects-B-List" DisplayMemberPath="Name" ItemsSource="{Binding Path=MyItem.ListB}"></TreeViewItem> <TreeViewItem IsExpanded="True" Header="Objects-C-List" DisplayMemberPath="Name" ItemsSource="{Binding Path=MyItem.ListC}"></TreeViewItem> </TreeView> ``` Note that there are 3 different Lists, containing 3 different Object-Types. It'd help me a lot to have something like: ``` public Object SelectedObject { set { _selectedObject = value; RunMyMethod(); RaisePropertyChanged("SelectedObject"); } } ```
OK, I know this is an old question and probably dead, but Charlie has it right. This is something that can also be used in code. You could do, for example: ``` <ContentPresenter Content="{Binding ElementName=treeView1, Path=SelectedItem}" /> ``` Which will show the selected item. You can add a style or DataTemplate to that, or use a default DataTemplate for the object you are trying to show.
**Step 1** Install the NuGet: `Install-Package System.Windows.Interactivity.WPF` **Step 2** In your Window tag add: `xmlns:i="clr-namespace:System.Windows.Interactivity;assembly=System.Windows.Interactivity"` **Step 3** In the TreeView add: ``` <TreeView Name="treeView1" ... > <i:Interaction.Triggers> <i:EventTrigger EventName="SelectedItemChanged"> <i:InvokeCommandAction Command="{Binding SelectedItemChangedCommand}" CommandParameter="{Binding ElementName=treeView1, Path=SelectedItem}"/> </i:EventTrigger> </i:Interaction.Triggers> ... </TreeView> ``` **Step 4** In your ViewModel add: ``` private ICommand _selectedItemChangedCommand; public ICommand SelectedItemChangedCommand { get { if (_selectedItemChangedCommand == null) _selectedItemChangedCommand = new RelayCommand(args => SelectedItemChanged(args)); return _selectedItemChangedCommand; } } private void SelectedItemChanged(object args) { //Cast your object } ```
Get SelectedItem from TreeView?
[ "c#", "wpf", "xaml", "treeview", "selecteditem" ]
I have failed to find documentation for the operator % as it is used on strings in Python. What does this operator do when it is used with a string on the left hand side?
It's the string formatting operator. Read up on [string formatting in Python](http://docs.python.org/library/stdtypes.html#printf-style-string-formatting). ``` format % values ``` Creates a string where `format` specifies a format and `values` are the values to be filled in.
It applies [printf-like formatting](http://en.wikipedia.org/wiki/Printf) to a string, so that you can substitute certain parts of a string with values of variables. Example ``` # assuming numFiles is an int variable print "Found %d files" % (numFiles, ) ``` See the link provided by Konrad
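A few more illustrative uses of the operator, showing positional substitution with a tuple, printf-style width and precision specifiers, and named substitution with a dictionary:

```python
# Positional substitution with a tuple
greeting = "Hello, %s! You have %d new messages." % ("Ada", 3)

# Width and precision specifiers work as in C's printf
price = "Total: %8.2f" % 3.14159

# Named substitution with a dictionary
record = "%(name)s scored %(score)d" % {"name": "Bob", "score": 42}
```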
What does % do to strings in Python?
[ "python", "string", "documentation", "operators" ]
How would I determine the country that a specific IP address is originating from, using C#? I need this to check whether connections originate from a specific country.
You can use this SQL data in your project to determine that: [IP address geolocation SQL database](http://ipinfodb.com/ip_database.php). Download that data and import it into your database to run checks locally. Or you can use their free API that returns XML containing the country code and country name. You'd make a request to the following URL with the IP address you wanted to check, as seen in this example: <http://ipinfodb.com/ip_query_country.php?ip=74.125.45.100> Returns: ``` <Response> <Ip>74.125.45.100</Ip> <Status>OK</Status> <CountryCode>US</CountryCode> <CountryName>United States</CountryName> </Response> ```
Here is a free [IP Address to Country database](http://www.countryipblocks.net/).
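Those downloadable databases are essentially sorted ranges of integer-encoded IPs, so the lookup itself is a binary search. Here is a minimal Python sketch of the idea (not C#); the sample ranges below are made up for illustration, and the real data would come from one of the databases linked above.

```python
import bisect
import socket
import struct

def ip_to_int(ip):
    """Convert a dotted-quad IPv4 string to a 32-bit integer."""
    return struct.unpack("!I", socket.inet_aton(ip))[0]

# (range start, range end, country) tuples; sample values for illustration
RANGES = sorted([
    (ip_to_int("74.125.0.0"), ip_to_int("74.125.255.255"), "US"),
    (ip_to_int("81.0.0.0"), ip_to_int("81.0.255.255"), "CZ"),
])
STARTS = [start for start, end, country in RANGES]

def country_for(ip):
    """Binary-search the sorted ranges for the country owning this IP."""
    n = ip_to_int(ip)
    i = bisect.bisect_right(STARTS, n) - 1
    if i >= 0 and RANGES[i][0] <= n <= RANGES[i][1]:
        return RANGES[i][2]
    return None
```

The same integer-plus-binary-search approach translates directly to C# once the ranges are imported into memory or a database table.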
How to determine if an IP address belongs to a country
[ "c#", ".net", "geolocation" ]
> **No surprise here, possible dupes:** > > [Good Books for Learning Web Programming](https://stackoverflow.com/questions/931557/good-books-resources-for-learning-web-programming) > [Required Reading for a Soon to be Web Developer](https://stackoverflow.com/questions/911343/required-reading-for-a-soon-to-be-web-developer) > and there will be more. For a personal project, I'm starting to do some web programming using Django. I've programmed since I was very young in several languages, mostly Pascal/Delphi, C/C++, SQL, Python (and sometimes other languages such as Java, PHP or Perl), so I'm not a newbie programmer, but apart from basic HTML I have little experience on the front-end side of web programming (CSS, HTML, JavaScript and AJAX), because on my former professional projects, if there was web programming, somebody else would do the front-end part. I'm loving the Django framework but I feel very limited and lost with the front-end part and would like to improve on it. What books that won't lose 100 pages explaining what a variable or a function is would you recommend? (Better if all of these technologies are covered together, but different books on each topic are OK too.)
I can recommend a few that I usually provide my Jr. web developers, and that I find myself referencing from time to time. These are great for basic web development, and depending on your back-end server environment, you may want to purchase books on IIS and C#, or Apache and PHP, etc. 1. [Bulletproof Web Design](https://rads.stackoverflow.com/amzn/click/com/0321346939) by Dan Cederholm 2. [CSS Mastery:](https://rads.stackoverflow.com/amzn/click/com/1590596145) Advanced Web Standards Solutions 3. [JavaScript: The Definitive Guide](https://rads.stackoverflow.com/amzn/click/com/0596101996)
My suggestion is: Go over to the [w3schools.com](http://w3schools.com/) tutorials on all these subjects. They will guide you from the basics through to the more advanced stuff. But IMO web design needs a good eye for design; this is something which comes from practice, and lots of it, until you know what looks good to the end user. I also suggest learning XHTML straight away, because it forces good syntax and has a wider range of possibilities. Another piece of advice: don't try learning the styling in HTML; you'll end up wasting your time when you realise it can be done much faster in CSS.
What books to start on web programming for an experienced non-web programmer?
[ "javascript", "html", "css", "django", "dhtml" ]
I want to get the `current datetime` to insert into a `lastModifiedTime` column. I am using a **`MySQL`** database. My questions are: 1. Is there a function available in standard SQL? 2. Or is it implementation-dependent, so each database has its own function for this? 3. What is the function available in MySQL?
Complete answer: **1. Is there a function available in SQL?** Yes, the SQL 92 spec, Oct 97, pg. 171, section 6.16 specifies these functions: ``` CURRENT_TIME Time of day at moment of evaluation CURRENT_DATE Date at moment of evaluation CURRENT_TIMESTAMP Date & Time at moment of evaluation ``` **2. Is it implementation-dependent, so each database has its own function for this?** Each database has its own implementations, but they have to implement the three functions above if they comply with the SQL 92 specification (though this depends on the version of the spec). **3. What is the function available in MySQL?** ``` NOW() returns 2009-08-05 15:13:00 CURDATE() returns 2009-08-05 CURTIME() returns 15:13:00 ```
`NOW()` returns `2009-08-05 15:13:00` `CURDATE()` returns `2009-08-05` `CURTIME()` returns `15:13:00`
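The SQL-92 trio is easy to experiment with from Python: SQLite, for instance, implements the same three standard functions (note that SQLite returns UTC, while MySQL's NOW() uses the server's time zone). The table below is invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
ts, d, t = conn.execute(
    "SELECT CURRENT_TIMESTAMP, CURRENT_DATE, CURRENT_TIME"
).fetchone()
# ts -> 'YYYY-MM-DD HH:MM:SS', d -> 'YYYY-MM-DD', t -> 'HH:MM:SS'

# Typical use: stamp a lastModifiedTime column directly in the INSERT
conn.execute("CREATE TABLE doc (id INTEGER PRIMARY KEY, lastModifiedTime TEXT)")
conn.execute("INSERT INTO doc (lastModifiedTime) VALUES (CURRENT_TIMESTAMP)")
stamped = conn.execute("SELECT lastModifiedTime FROM doc").fetchone()[0]
```

The same `VALUES (CURRENT_TIMESTAMP)` pattern (or `VALUES (NOW())`) works in MySQL.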
how to get current datetime in SQL?
[ "sql", "database", "current-time" ]
I am using: ``` DECLARE @insertsql NVARCHAR(MAX) /* @insertsql for sp_executesql takes only nvarchar as input */ SET @insertsql = '--i am giving More than 10000 characters here -----' EXEC sp_executesql @insertsql, N'@inXMLRequest XML OUTPUT', @inXMLRequest OUTPUT ``` How can I insert more than 10000 characters into an NVARCHAR(MAX) in SQL Server 2005? Can anyone help, please? Thanks in advance.
Thanks to all, I got the answer. Instead of passing the variable to sp_executesql directly, we execute the nvarchar variable with EXEC. Above, we were preparing the @insertsql nvarchar variable with more than 8000 characters and giving it to sp_executesql like this: ``` EXEC sp_executesql @insertsql, N'@inXMLRequest XML OUTPUT', @inXMLRequest OUTPUT ``` We replaced that query with the one below: ``` EXEC ('DECLARE @inXMLRequest XML SET @inXMLRequest = ''------above 8000 characters---''') ``` Finally we execute that nvarchar string and get the output.
This has happened to me when I use inline SQL statements instead of stored procs. If you are hitting that ceiling, you may want to consider moving to stored procs.
How to set more than the max size of characters in NVARCHAR(MAX), SQL Server 2005
[ "sql", "sql-server-2005", "t-sql" ]
Will the \_otherThing field below be protected by the locks? ``` class ThreadSafeThing { private readonly object _sync = new object(); private SomeOtherThing _otherThing; public SomeOtherThing OtherThing { get { lock(_sync) return _otherThing; } } public void UpdateOtherThing(SomeOtherThing otherThing) { lock(_sync) _otherThing = otherThing; } } ```
Yes. This is not specific to lock: C# programs are expressed using statements, and {} groups multiple statements into a block. A block can be used in any context where a single statement is allowed. See the C# language specification, section 1.5.
This construction: ``` lock(_sync) _otherThing = otherThing; ``` ...is the same as this construction: ``` lock(_sync) { _otherThing = otherThing; } ``` So yes, the assignment of `_otherThing` is protected by the lock.
Can I lock(something) on one line in C#?
[ "c#", "multithreading", "syntax", "locking" ]
How do you start a thread with parameters in C#?
Yep : ``` Thread t = new Thread (new ParameterizedThreadStart(myMethod)); t.Start (myParameterObject); ```
One of the 2 overloads of the Thread constructor takes a ParameterizedThreadStart delegate, which allows you to pass a single parameter to the start method. Unfortunately, it only allows a single parameter, and it does so in an unsafe way because it passes it as object. I find it's much easier to use a lambda expression to capture the relevant parameters and pass them in a strongly typed fashion. Try the following: ``` public Thread StartTheThread(SomeType param1, SomeOtherType param2) { var t = new Thread(() => RealStart(param1, param2)); t.Start(); return t; } private static void RealStart(SomeType param1, SomeOtherType param2) { ... } ```
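As a cross-language aside (a Python sketch, not C#), both options exist in Python's threading module too: an args parameter that plays the role of ParameterizedThreadStart, and a lambda that captures the parameters at creation time, mirroring the closure approach above.

```python
import threading

results = []

def real_start(param1, param2):
    results.append((param1, param2))

# Like ParameterizedThreadStart: arguments handed over by the runtime
t1 = threading.Thread(target=real_start, args=("a", 1))

# Like the lambda-capture style: parameters bound at creation time
t2 = threading.Thread(target=lambda: real_start("b", 2))

for t in (t1, t2):
    t.start()
    t.join()
```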
ThreadStart with parameters
[ "c#", "multithreading" ]
I am trying to refactor a piece of code which seems easily refactorable but is proving difficult. There are two methods which seem very similar and I feel should be refactored:- ``` public class MyClass { private void AddBasicData(Receiver receiver) { var aHelper = new AHelper(); var bHelper = new BHelper(); var cHelper = new CHelper(); receiver.ObjA = aHelper.GetBasic(); receiver.ObjB = bHelper.GetBasic(); receiver.ObjC = cHelper.GetBasic(); } private void AddExistingData(Receiver receiver) { var aHelper = new AHelper(); var bHelper = new BHelper(); var cHelper = new CHelper(); receiver.ObjA = aHelper.GetExisting(); receiver.ObjB = bHelper.GetExisting(); receiver.ObjC = cHelper.GetExisting(); } } ``` The reference code for this class is here... ``` public class AHelper : Helper<A> { } public class BHelper : Helper<B> { } public class CHelper : Helper<C> { } public class Helper<T> : IHelper<T> where T : IMyObj { public T GetBasic() { ... } public T GetExisting() { ... } } public interface IHelper<T> { T GetBasic(); T GetExisting(); } public class A : IMyObj {} public class B : IMyObj {} public class C : IMyObj {} public interface IMyObj {} public class Receiver { public A ObjA { get; set; } public B ObjB { get; set; } public C ObjC { get; set; } } ``` My first attempt was to refactor like this... ``` public class MyClass { private void AddBasicData(Receiver receiver) { Func<Helper<IMyObj>, IMyObj> func = x => x.GetBasic(); AddData(receiver, func); } private void AddExistingData(Receiver receiver) { Func<Helper<IMyObj>, IMyObj> func = x => x.GetExisting(); AddData(receiver, func); } private void AddData(Receiver receiver, Func<Helper<IMyObj>, IMyObj> func) { var aHelper = new AHelper(); var bHelper = new BHelper(); var cHelper = new CHelper(); receiver.ObjA = func(aHelper); receiver.ObjB = func(bHelper); receiver.ObjC = func(cHelper); } } ``` The problem with this is that objects like `new AHelper()` are not assignable to `Helper<IMyObj>` :-( Can anyone see how this could be nicely refactored? Thanks in advance Russell
Try using a templated function. It should infer the type based on the type of parameter you pass, so you shouldn't need to explicitly specify the type in the AddData call. ``` public class MyClass { private void AddData<T>(Receiver receiver, Func<Helper<T>, T> func) { var aHelper = new AHelper(); var bHelper = new BHelper(); var cHelper = new CHelper(); receiver.ObjA = func(aHelper); receiver.ObjB = func(bHelper); receiver.ObjC = func(cHelper); } } ``` Attempt #2: Tricky problem. I think you need a more generic IHelper interface. Would something like this help? ``` public interface IHelper { IMyObj GetBasic(); IMyObj GetExisting(); } public interface IHelper<T> : IHelper { T GetBasic(); T GetExisting(); } ``` You'll have to work out the name conflict between the derived interface and the base interface, but I'm not sure exactly how you'd want to do that, and I'm running out of time, so I'll leave that as it is for the moment. Attempt #3 (I'm determined to get this!): Would this be cheating? ``` public enum RetrievalMethod { Basic, Existing } public class Helper<T> : IHelper<T> where T : IMyObj { public T Get(RetrievalMethod rm) { switch(rm) { case RetrievalMethod.Basic: return GetBasic(); case RetrievalMethod.Existing: return GetExisting(); default: throw new ArgumentOutOfRangeException("rm"); } } ... } ... private void AddData(Receiver receiver, RetrievalMethod rm) { var aHelper = new AHelper(); var bHelper = new BHelper(); var cHelper = new CHelper(); receiver.ObjA = aHelper.Get(rm); receiver.ObjB = bHelper.Get(rm); receiver.ObjC = cHelper.Get(rm); } ```
You can use casting to solve the assignment problem. If AHelper actually returns A, I think this works: ``` private void AddData(Receiver receiver, Func<Helper<IMyObj>, IMyObj> func) { var aHelper = new AHelper(); var bHelper = new BHelper(); var cHelper = new CHelper(); receiver.ObjA = (A) func(aHelper); receiver.ObjB = (B) func(bHelper); receiver.ObjC = (C) func(cHelper); } ``` If you override the methods, you can do the casting without needing to change the definition of `Func<Helper<IMyObj>, IMyObj>`: ``` public class AHelper : Helper<A> { public override A GetBasic() { return new A(); } } ```
C# Generics Refactoring
[ "c#", "generics", "refactoring" ]
I was hoping to make a list of some of the differences in terms between java and objective c to abate my confusion. It makes explaining my difficulties easier when I know what things are called. For example: * null - nil * this - self * map - dictionary (not even quite sure about this one) What I'm looking for are similar comparisons or corrections to ones I have listed. Not just limited to elements in the language, but concepts as well...
You're right about map = dictionary. I'll add: * @public, @private, @protected (the default) are for instance variables only (not methods) and work like C++'s visibility modifiers (i.e. you declare them as section headers rather than before each variable). * Class methods are like static methods in Java * There are only two data structures: NSDictionary and NSArray (in immutable and mutable variants). These are highly optimized and work well for most situations. For everything else, there's [CHDataStructures](http://cocoaheads.byu.edu/code/CHDataStructures) * @interface doesn't work like Java's interfaces - it defines the instance variables and methods of a single class. * You need header files. C is to blame for this. This pretty much sucks, as maintaining the things is an unnecessary pain. * There is no concept of a "package". The closest thing is a framework, but these should not be used as Java's packages (i.e. don't create them just to organize your classes). * Instead of "new Class()", say [[Class alloc] init]. Think of "alloc" like the new operator and init as the constructor. * id's are generic object pointers, like references of type Object in Java. * The base class is NSObject. Inheritance from NSObject is not automatic and must be explicitly specified.
Another conceptual difference is the behavior when calling methods (sending messages) on null objects (to nil). **Java** ``` MyClass myObject = null; myObject.doSomething(); <-- results in a NullPointerException ``` **Obj-C** ``` id myObject = nil; [myObject doSomething]; <-- is valid ``` [Here](https://stackoverflow.com/questions/156395/sending-a-message-to-nil) is a very good SO question about that behavior.
What are some of the different names of terms and concepts in Objective C as compared with Java?
[ "java", "objective-c", "cocoa" ]
Suppose I have these two tables: ``` Invoice ------- iInvoiceID int PK not null dtCompleted datetime null InvoiceItem ----------- iInvoiceItemID int PK not null iInvoiceID int FK (Invoice.iInvoiceID) not null dtCompleted datetime null ``` Each InvoiceItem might be fulfilled by a different process (executable) that runs on a different machine. When the process is complete, I want it to call a stored procedure to stamp the InvoiceItem.dtCompleted field, and I want this stored procedure to return back a flag indicating whether the entire invoice has been completed. Whichever process happens to be the one that finished the invoice is going to kick off another process to do some final business logic on the invoice, e.g. stamp the dtCompleted and send a receipt email. Obviously I want this other process to fire only once for a given Invoice. Here is my naive implementation: ``` CREATE PROCEDURE dbo.spuCompleteInvoiceItem @iInvoiceItemID INT AS BEGIN BEGIN TRAN UPDATE InvoiceItem SET dtCompleted = GETDATE() WHERE iInvoiceItemID = @iInvoiceItemID IF EXISTS(SELECT * FROM InvoiceItem WHERE dtCompleted IS NULL AND iInvoiceID = (SELECT iInvoiceID FROM InvoiceItem WHERE iInvoiceItemID=@iInvoiceItemID)) SELECT 'NotComplete' AS OverallInvoice ELSE SELECT 'Complete' AS OverallInvoice COMMIT END ``` Is this sufficient? Or do I need to increase the transaction serialization level and if so, what level would provide the best balance of performance and safety? Pre-emptive comments: * I know I could achieve the same business goal by implementing a central concurrency service at the process/executable level, but I think that's overkill. My instinct is that if I craft my stored procedure and transaction well, I can use SQL Server as my inter-process concurrency service for this simple operation without heavily impacting performance or increasing deadlock frequency (have my cake and eat it too.) * I'm not worrying about error handling in this example. 
I'll add the proper TRY/CATCH/ROLLBACK/RAISERROR stuff after. ## Update 1: According to the experts, not only do I need the most restrictive transaction isolation level -- serializable -- but I also need to lock all the InvoiceItems of a particular invoice before I do anything else, to ensure that other concurrent calls to the stored procedure will block until the current one completes. Otherwise I might get deadlocks. Here's my latest version of the implementation: ``` CREATE PROCEDURE dbo.spuCompleteInvoiceItem @iInvoiceItemID INT AS BEGIN IF @iInvoiceItemID IS NULL RAISERROR('@iInvoiceItemID cannot be null.', 16, 1) BEGIN TRAN SET TRANSACTION ISOLATION LEVEL SERIALIZABLE DECLARE @iInvoiceID INT SELECT @iInvoiceID = iInvoiceID FROM InvoiceItem WHERE dtCompleted IS NULL AND iInvoiceID = (SELECT iInvoiceID FROM InvoiceItem WHERE iInvoiceItemID=@iInvoiceItemID) IF @iInvoiceID IS NULL BEGIN -- Should never happen SELECT 'AlreadyComplete' AS Result END ELSE BEGIN UPDATE InvoiceItem SET dtCompleted = GETDATE() WHERE iInvoiceItemID = @iInvoiceItemID IF EXISTS(SELECT * FROM InvoiceItem WHERE iInvoiceID=@iInvoiceID AND dtCompleted IS NULL) SELECT 'NotComplete' AS Result ELSE SELECT 'Complete' AS Result END COMMIT END ``` Thanks, Jordan Rieger
Your problem here is that if two processes simultaneously complete the remaining two items, they are both going to think they finished the invoice. What you want is a lock on all the other detail items in the invoice to ensure nothing else can change the status whilst a process is updating the status. This will of course reduce concurrency, but it should be limited to only the current invoice and should be fairly short. You can do this using the SERIALIZABLE isolation level to ensure that you cannot get phantom reads: ``` DECLARE @iInvoiceId int BEGIN TRAN SET TRANSACTION ISOLATION LEVEL SERIALIZABLE SET NOCOUNT ON -- This select locks the rows and ensures that repeated -- selects will produce the same result -- ie no other transaction can affect these rows, -- or insert a row into this invoice SELECT @iInvoiceId = iInvoiceId FROM InvoiceItem WITH (xlock) WHERE iInvoiceId = ( SELECT iInvoiceId FROM InvoiceItem WHERE iInvoiceItemId = @iInvoiceItemId) SET NOCOUNT OFF -- perform rest of query as before COMMIT ```
You have two alternatives: 1. Stateful with short transactions. Mark the status of invoices being processed. The job picks an invoice to be processed and updates its status to 'processing' (pick-update atomically), then commits. It processes the invoice, then comes back and updates the status as 'complete'. There cannot be another job processing the same invoice because the invoice is marked 'processing'. This is the typical queue-based workflow. 2. Stateless with long transactions. Look up an invoice to process and lock it (UPDLOCK). In practice this is done by doing the complete update at the beginning of the transaction, thus locking the invoice in X mode. Keep the transaction open while the invoice is processed. At the end, mark it as complete and commit. There is nothing transaction isolation levels can do to help you here. They only affect the duration and scope of S-locks, and S-locks have no way of preventing two jobs from attempting to process the same invoice, leading to blocking and deadlocks. If the 'processing' is of any length, then you must use the stateful short transactions, since holding long transaction locks in the database will kill every other activity. The drawback is that jobs can crash without completing the processing and leave invoices in an abandoned 'processing' state. Usually this is resolved by a 'garbage collecting' job that resets the status back to 'available' if they don't complete in the allotted time. **Update** OK. Then the EXISTS query should have a WHERE clause with the InvoiceID, shouldn't it? As it is now, it would return 'Complete' when all invoice items, from all invoices, have been stamped with a complete date. Anyway, that final check for complete is a guaranteed deadlock, on any isolation level: T1 updates item N-1 and selects the EXISTS. T2 updates item N and selects the EXISTS. T1 blocks on the T2 update, T2 blocks on the T1 update, deadlock. No isolation level can help there and it is an *extremely* likely scenario.
No isolation level can prevent this, because the cause of the deadlock is the pre-existing update, not the SELECT. Ultimately the problem is caused by the fact that parallel processors dequeue **correlated** items. As long as you allow this to happen, deadlocks are going to be an everyday (or even every-second...) fact of life with your processing. I know this because, as a developer with SQL Server in Redmond, I've spent the better part of the past 10 years in this problem space. This is why Service Broker (the built-in queues of SQL Server) does [Conversation Group Locking](http://msdn.microsoft.com/en-us/library/ms171615.aspx): to isolate the processing of correlated messages. Unless you ensure that items from one invoice are only processed by one job, you will spend the rest of your days solving new deadlock scenarios in item processing. The best you can do is come up with a very restrictive locking scheme that blocks the entire invoice upfront, but that in effect is exactly what I'm telling you to do (block access to correlated items).
How do I make this transaction safe in a concurrent environment? (SQL Server 2005)
[ "sql", "sql-server", "sql-server-2005", "concurrency", "locking" ]
I've built a console application that references version 4.3.2.1 of another dll we've built. It worked fine and did its job. Then version 4.3.2.2 of the dll is built, and the console application starts to die because it wants to see 4.3.2.1. Is there any way to tell the console application to use 4.3.2.1 or higher? The methods that are present in 4.3.2.1 are also present in 4.3.2.2 and will be in all subsequent versions of the dll.
Use the [`<assemblyBinding>`](http://msdn.microsoft.com/en-us/library/twy1dw1e.aspx) element of app.config: ``` <configuration> <runtime> <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1"> <dependentAssembly> <assemblyIdentity name="Foo" publicKeyToken="1234567890abcdef" culture="neutral" /> <bindingRedirect oldVersion="4.3.2.1" newVersion="4.3.2.2"/> </dependentAssembly> </assemblyBinding> </runtime> </configuration> ``` See also ["Redirecting Assembly Versions"](http://msdn.microsoft.com/en-us/library/7wd6ex19.aspx) for more information. This is assuming you don't want to recompile the app, of course - if you don't mind recompiling, then just setting "Use specific version" to false should be fine.
Pull up the properties window when you have the reference selected to the other DLL. Make sure the "Specific Version" property is set to false.
C# versioning of references for a console application
[ "c#", "visual-studio", "dll", "versioning" ]
Situation: I need a tiny button, with some text on it. Problem: The button seems to think displaying empty space near its edges is more important than displaying my Text. I can't for the life of me figure out how to remove that blank stuff at the edges. Any help is greatly appreciated! Thanks in advance. -MonsterMaw
Assuming you're talking about WinForms, you can set the `FlatStyle` property of the button to `System`. That will let you resize the button so that it is small enough for the text to fit exactly, without any internal padding.
You could set the Text property of your Button to empty. Then place a label over the top of the button, if all else fails... Edit. Don't forget to set your label's backcolor to transparent.
C#: Remove the spacing at the edges inside a button
[ "c#", "string", "button" ]
I'm trying to send the PHPSESSID via an HTTP GET variable for a cookie-less client. I've seen this in various drupal implementations where `?PHPSESSID=123ABC` is appended to each link, but how do I specify this in PHP, and is there any way of changing the GET parameter so it could be `?token=123ABC`, or even sent via HTTP POST? Standard LAMP stack, running the Zend framework. Thanks!
Using a cookie or not is configured by these PHP options: * [`session.use_cookies`](http://php.net/manual/en/session.configuration.php#ini.session.use-cookies) * [`session.use_only_cookies`](http://php.net/manual/en/session.configuration.php#ini.session.use-only-cookies) If the first one is set, cookies will be used if possible. PHP should detect if cookies are enabled or not, and use them only if they are supported by the client. To enable passing of the session id by GET instead of cookies, you might have to activate [`session.use_trans_sid`](http://php.net/manual/en/session.configuration.php#ini.session.use-trans-sid), which is disabled by default *(which means that, by default, the session id is only passed by cookies -- never by GET)*. But note that, with this option activated, PHP will pass the session id by GET at least for the first page each user of your site comes to... as they won't have the cookie at first, and the only way to check if they support cookies is by setting one, and trying to read it back on the next page. And users that don't support cookies, including search engines I'd probably say, will have that session id -- and that is not nice :-( And, you might also want to take a look at [`session.name`](http://php.net/manual/en/session.configuration.php#ini.session.name) to set the name of the key *(set it to "token" instead of "PHPSESSID", I mean)*. For more details, you can take a look at the [Session Handling](http://php.net/manual/en/book.session.php) section of the manual :-)
You can change PHPSESSID using [`session_name()`](http://php.net/session_name) or `session.name` in your php.ini file (or using [`ini_set()`](http://php.net/ini_set)). For cookieless clients, there's the `session.use_trans_sid` php.ini option - you should be aware that this can cause problems - for example users passing URLs with session IDs in to each other, or search engines picking up such URLs.
How can I send PHPSESSID in the URL?
[ "php", "url", "session" ]
> **Possible Duplicate:** > [How to format a JSON date?](https://stackoverflow.com/questions/206384/how-to-format-a-json-date) I have the following result from a `$.getJSON` call from JavaScript. How do I convert the start property to a proper date in JavaScript? > [ > {"id":1,"start":"/Date(1238540400000)/"}, > {"id":2,"start":"/Date(1238626800000)/"} > ] Thanks!
You need to extract the number from the string, and pass it into the `Date` constructor: ``` var x = [{ "id": 1, "start": "\/Date(1238540400000)\/" }, { "id": 2, "start": "\/Date(1238626800000)\/" }]; var myDate = new Date(x[0].start.match(/\d+/)[0] * 1); ``` The parts are: ``` x[0].start - get the string from the JSON x[0].start.match(/\d+/)[0] - extract the numeric part x[0].start.match(/\d+/)[0] * 1 - convert it to a numeric type new Date(x[0].start.match(/\d+/)[0] * 1) - create a date object ```
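To extend the extraction idea above to the whole `$.getJSON` result at once, the array can be mapped in one pass. This is just a sketch, assuming every `start` value matches the `/Date(ms)/` pattern with a non-negative timestamp:

```javascript
var x = [
  { "id": 1, "start": "\/Date(1238540400000)\/" },
  { "id": 2, "start": "\/Date(1238626800000)\/" }
];

// Replace each "/Date(ms)/" string with a real Date object.
var parsed = x.map(function (item) {
  var ms = parseInt(item.start.match(/\d+/)[0], 10); // digits only
  return { id: item.id, start: new Date(ms) };
});
```

Each `parsed[i].start` is then a normal `Date`, so methods like `getFullYear()` work as usual.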
I use this: ``` function parseJsonDate(jsonDateString){ return new Date(parseInt(jsonDateString.replace('/Date(', ''))); } ``` --- Update 2018: This is an old question. Instead of still using this old non-standard serialization format, I would recommend modifying the server code to return a better format for the date: either an ISO string containing time zone information, or only the milliseconds. If you use only the milliseconds for transport, it should be UTC on server and client. * `2018-07-31T11:56:48Z` - ISO string can be parsed using `new Date("2018-07-31T11:56:48Z")` and obtained from a `Date` object using `dateObject.toISOString()` * `1533038208000` - milliseconds since midnight January 1, 1970, UTC - can be parsed using `new Date(1533038208000)` and obtained from a `Date` object using `dateObject.getTime()`
Converting JSON results to a date
[ "javascript", "json" ]
I am trying to convert a Python dictionary to a string for use as URL parameters. I am sure that there is a better, more [Pythonic](http://en.wikipedia.org/wiki/Pythonic) way of doing this. What is it? ``` x = "" for key, val in {'a':'A', 'b':'B'}.items(): x += "%s=%s&" %(key,val) x = x[:-1] ```
Here is the correct way of using it in Python 3. ``` from urllib.parse import urlencode params = {'a':'A', 'b':'B'} print(urlencode(params)) ```
Use [`urllib.parse.urlencode()`](https://docs.python.org/3/library/urllib.parse.html#urllib.parse.urlencode). It takes a dictionary of key-value pairs, and converts it into a form suitable for a URL (e.g., `key1=val1&key2=val2`). For your example: ``` >>> import urllib.parse >>> params = {'a':'A', 'b':'B'} >>> urllib.parse.urlencode(params) 'a=A&b=B' ``` If you want to make a URL with repetitive params such as: `p=1&p=2&p=3` you have two options: ``` >>> a = (('p',1),('p',2), ('p', 3)) >>> urllib.parse.urlencode(a) 'p=1&p=2&p=3' ``` or: ``` >>> urllib.parse.urlencode({'p': [1, 2, 3]}, doseq=True) 'p=1&p=2&p=3' ``` If you are still using Python 2, use [`urllib.urlencode()`](http://docs.python.org/2/library/urllib.html#urllib.urlencode).
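Not part of the original answer, but a quick round-trip through `parse_qs()` shows what `urlencode()` produces (note the `+` encoding the space):

```python
from urllib.parse import urlencode, parse_qs

params = {'a': 'A', 'b': 'B', 'q': 'hello world'}

query = urlencode(params)   # keys and values are percent-encoded as needed
decoded = parse_qs(query)   # every key maps back to a *list* of values

print(query)    # a=A&b=B&q=hello+world
print(decoded)  # {'a': ['A'], 'b': ['B'], 'q': ['hello world']}
```

`parse_qs` returns lists because a key may legally repeat in a query string, which is exactly the repetitive-params case handled by `doseq=True` above.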
Python Dictionary to URL Parameters
[ "python", "dictionary", "url-parameters" ]
Is there an easy way to have the Ant logger (default or other) add a timestamp to each message? The only way I can think of is to use the Log4jListener and have its settings include the timestamp. Or write a custom logger that subclasses DefaultLogger and writes the timestamp. If there's a better or easier way (preferably without requiring that users install a new jar file into their Ant lib directory), I'd be interested in hearing about it.
You can define an Ant macrodef to set the current timestamp, then call the macrodef each time you need to reference it throughout your build.xml The following macrodef will set the timestamp to a property (you can add an attribute to the macrodef if you want to customise the property it sets): ``` <macrodef name="set.timestamp"> <sequential> <tstamp> <format property="current.time" pattern="MM/dd/yyyy hh:mm"/> </tstamp> </sequential> </macrodef> ``` Then to use it, just access the property set by the macrodef as you need: ``` <target name="doFoo" depends="dir.check" if="dir.exists"> <set.timestamp/> <!--in this example, just echo the timestamp --> <echo message="${current.time}"/> </target> ``` For more information on ant macrodefs, check out the [documentation](http://ant.apache.org/manual/Tasks/macrodef.html).
Try this: ``` ant -logger org.apache.tools.ant.listener.ProfileLogger ``` It prints the entry time and exit time for each target, along with the time taken for each target in ms.
Timestamps in Ant log?
[ "java", "ant", "timestamp" ]
I have this query: ``` SELECT g.title, g.asin, g.platform_id, r.rank FROM games g INNER JOIN ranks r ON ( g.id = r.game_id ) ORDER BY r.rank DESC LIMIT 5 ``` Now, this is my JOIN using `Zend_Db_Select`, but it gives me an array error ``` $query = $this->select(); $query->from(array('g' => 'games'), array()); $query->join(array('r' => 'ranks'), 'g.id = r.game_id', array('g.title', 'g.asin', 'g.platform_id', 'r.rank')); $query->order('r.rank DESC'); $query->limit($top); $resultRows = $this->fetchAll($query); return $resultRows; ``` Anyone know what I could be doing wrong? I want to get all the columns in 'games' to show and the 'rank' column in the ranks table.
I am going to assume you've solved this, but it would be nice to leave the answer for others. Add this below the instantiation of the select object. ``` $query->setIntegrityCheck(false); ```
You could also type fewer characters.... ``` $query = $this->select() ->from(array('g' => 'games'), array('title', 'asin', 'platform_id')) ->join(array('r' => 'ranks'), 'g.id = r.game_id', array('rank')) ->order('r.rank DESC') ->limit($top); return $this->fetchAll($query); ``` Good luck!
Zend Framework Db Select Join table help
[ "sql", "zend-framework", "select", "join", "zend-db" ]
How do I write a query to find the time difference? The time format is like this: **2009-08-12 02:59:59**. I want to compare this time with **2009-08-12 02:59:10**. How do I check these two? I want to return the rows where the time difference is 30 seconds. How do I write the SQL statement?
``` select date_part('epoch', date1) - date_part('epoch', date2) ``` In PostgreSQL you can do it like this, which gives you the difference in seconds
``` SELECT * FROM your_table WHERE time1_column - time2_column = interval '30 seconds' ``` Sorry this is the best I can do, given your description of the problem...
pgsql time difference?
[ "sql", "datetime", "plpgsql" ]
I'm looking for a way to convert a list of tuples like this: ``` [(1,4),(2,4),(3,4),(4,15),(5,15),(6,23),(7,23),(8,23),(9,15),(10,23),(11,15),(12,15)] ``` into a dictionary like this: ``` {4:[1,2,3] ,15:[4,5,9,11,12], 23:[6,7,8,10]} ``` The second element from each tuple becomes a dictionary key, and all the first tuple elements associated with that key are stored in a value list. Can you show me how that can be done?
``` >>> from collections import defaultdict >>> l= [(1,4),(2,4),(3,4),(4,15),(5,15),(6,23),(7,23),(8,23),(9,15),(10,23),(11,15),(12,15)] >>> d= defaultdict( list ) >>> for v, k in l: ... d[k].append(v) ... >>> d defaultdict(<type 'list'>, {23: [6, 7, 8, 10], 4: [1, 2, 3], 15: [4, 5, 9, 11, 12]}) >>> [ {k:d[k]} for k in sorted(d) ] [{4: [1, 2, 3]}, {15: [4, 5, 9, 11, 12]}, {23: [6, 7, 8, 10]}] ```
``` >>> a = [(1,4),(2,4),(3,4),(4,15),(5,15),(6,23),(7,23),(8,23),(9,15),(10,23),(11,15),(12,15)] >>> b = {} >>> for i, j in a: ... b.setdefault(j, []).append(i) ... >>> b {23: [6, 7, 8, 10], 4: [1, 2, 3], 15: [4, 5, 9, 11, 12]} >>> ```
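For completeness: if you don't mind sorting first, `itertools.groupby` gives a third way to build the same dictionary (`groupby` only merges consecutive items, hence the sort):

```python
from itertools import groupby
from operator import itemgetter

a = [(1, 4), (2, 4), (3, 4), (4, 15), (5, 15), (6, 23), (7, 23),
     (8, 23), (9, 15), (10, 23), (11, 15), (12, 15)]

# Sort by the second element, then bucket consecutive runs of equal keys.
b = {key: [first for first, _ in group]
     for key, group in groupby(sorted(a, key=itemgetter(1)), key=itemgetter(1))}

print(b)  # {4: [1, 2, 3], 15: [4, 5, 9, 11, 12], 23: [6, 7, 8, 10]}
```

The sort is stable, so the first elements keep their original relative order within each bucket.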
Converting list of tuples into a dictionary
[ "python", "list", "dictionary" ]
Is there some clever way of getting the date and time of when the dll was built/compiled? I’m using the assembly version numbering and reflection to retrieve and display this info when the app is deployed. But in some scenarios it would be more convenient to know *when* the dll was actually compiled rather than the auto-incrementing version number. I don’t think the modified date on the dll file itself is reliable due to the way the app is deployed. ``` Dim assemblies = AppDomain.CurrentDomain.GetAssemblies Dim assemblyName As String Dim assemblyVersion As String For Each assembly In assemblies assemblyName = assembly.GetName.Name.ToString assemblyVersion = assembly.GetName.Version.ToString ' How to get the date/time of the build?? ' ... Next ``` Any suggestions?
If you set the assembly version (usually in AssemblyInfo.cs) to `Major.Minor.*` (e.g. `1.0.*`), then you can probably retrieve the build date at runtime with something like this: ``` var version = Assembly.GetExecutingAssembly().GetName().Version; DateTime buildDate = new DateTime(2000, 1, 1) .AddDays(version.Build) .AddSeconds(version.Revision*2); ``` When using a `*` for the third and fourth part of the assembly version, then these two parts are set automatically at compile time to the following values: * third part is the number of days since 2000-01-01 * fourth part is the number of seconds since midnight divided by two (although [some MSDN pages](http://msdn.microsoft.com/en-us/library/system.reflection.assemblyversionattribute.aspx) say it is a random number) Oh, and you have to take care of daylight saving time yourself (e.g. add one hour if it's daylight saving time).
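The date arithmetic is independent of .NET, so it can be sanity-checked in a few lines. Here is a sketch in Python, with a made-up version number of 1.0.3506.21600 for illustration:

```python
from datetime import datetime, timedelta

# For a hypothetical version 1.0.3506.21600 produced by "1.0.*":
#   build    = 3506  -> days since 2000-01-01
#   revision = 21600 -> seconds since local midnight, divided by two
build, revision = 3506, 21600

build_date = datetime(2000, 1, 1) + timedelta(days=build, seconds=revision * 2)
print(build_date)  # 2009-08-07 12:00:00
```

As in the C# snippet above, the revision has to be doubled to recover the seconds, and any daylight-saving adjustment is still up to you.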
You can get it this way: ``` File.GetLastWriteTime(Assembly.GetExecutingAssembly().Location); ``` Returns a `DateTime` object.
Compile date and time
[ "c#", ".net", "vb.net", "reflection" ]
I am using the `file_get_contents` function to get the URL into a string: ``` $contents = file_get_contents("http://google.com"); ``` But, I am getting the following error: > file\_get\_contents() [function.file-get-contents]: php\_network\_getaddresses: getaddrinfo failed: No such host is known. How can I solve this in my localhost?
Your "localhost" cannot resolve the name google.com to an IP address, which means your machine doesn't/can't reach a valid DNS server. This probably doesn't only affect PHP. Try `ping google.com` on the console of that machine. Does this complain about the unknown host, too? The steps necessary to solve the problem depend on what your "localhost" is (operating system, flavour/distribution, ...) and how it is connected to the internet.
It is because you need to turn on `allow_url_fopen` in php.ini
file_get_contents(): No such host is known error
[ "php", "networking" ]
I wrote the following C++ program ``` class MyClass { public: int i; int j; MyClass() {}; }; int main(void) { MyClass inst; inst.i = 1; inst.j = 2; } ``` and I compiled. ``` # g++ program.cpp # ls -l a.out -rwxr-xr-x 1 root wheel 4837 Aug 7 20:50 a.out ``` Then, I `#include`d the header file iostream in the source file and I compiled again. ``` # g++ program.cpp # ls -l a.out -rwxr-xr-x 1 root wheel 6505 Aug 7 20:54 a.out ``` The file size, as expected, was increased. I also wrote the following C program ``` int main(void) { int i = 1; int j = 2; } ``` and I compiled ``` # gcc program.c # ls -l a.out -rwxr-xr-x 1 root wheel 4570 Aug 7 21:01 a.out ``` Then, I `#include`d the header file stdio.h and I compiled again ``` # gcc program.c # ls -l a.out -rwxr-xr-x 1 root wheel 4570 Aug 7 21:04 a.out ``` Oddly enough, the executable files' size remained the same.
By including `iostream` in your source file, the compiler needs to generate code to setup and tear down the C++ standard I/O library. You can see this by looking at the output from `nm`, which shows the symbols (generally functions) on your object file: ``` $ nm --demangle test_with_iostream 08049914 d _DYNAMIC 08049a00 d _GLOBAL_OFFSET_TABLE_ 08048718 t global constructors keyed to main 0804883c R _IO_stdin_used w _Jv_RegisterClasses 080486d8 t __static_initialization_and_destruction_0(int, int) 08048748 W MyClass::MyClass() U std::string::size() const@@GLIBCXX_3.4 U std::string::operator[](unsigned int) const@@GLIBCXX_3.4 U std::ios_base::Init::Init()@@GLIBCXX_3.4 U std::ios_base::Init::~Init()@@GLIBCXX_3.4 080485cc t std::__verify_grouping(char const*, unsigned int, std::string const&) 0804874e W unsigned int const& std::min<unsigned int>(unsigned int const&, unsigned int const&) 08049a3c b std::__ioinit 08049904 d __CTOR_END__ ... (remaining output snipped) ... ``` (`--demangle` takes the C++ function names "mangled" by by the compiler and produces more meaningful names. The first column is the address, if the function is included in the executable. The second column is the type. "t" is code in the "text" segment. "U" are symbols linked in from other places; in this case, from the C++ shared library.) Compare this with the functions generated from your source file without including `iostream`: ``` $ nm --demangle test_without_iostream 08049508 d _DYNAMIC 080495f4 d _GLOBAL_OFFSET_TABLE_ 080484ec R _IO_stdin_used w _Jv_RegisterClasses 0804841c W MyClass::MyClass() 080494f8 d __CTOR_END__ ... (remaining output snipped) ... ``` When your source file included `iostream`, the compiler generated several functions not present without `iostream`. 
When your source file includes only `stdio.h`, the generated binary is similar to the test without `iostream`, since the C standard I/O library doesn't need any extra initialization above and beyond what's already happening in the C dynamic library. You can see this by looking at the `nm` output, which is identical. In general, though, trying to intuit information about the amount of code generated by a particular source file based on the size of the executable is not going to be meaningful; there's too much that could change, and simple things like the location of the source file may change the binary if the compiler includes debugging information. You may also find `objdump` useful for poking around at the contents of your executables.
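The symbol difference is easy to reproduce yourself. Assuming `g++` and `nm` are on your PATH (exact symbol names vary by compiler and library version), two minimal translation units show the extra machinery pulled in by `iostream`:

```shell
# One file without <iostream>, one with it
printf 'int main() { return 0; }\n' > plain.cpp
printf '#include <iostream>\nint main() { return 0; }\n' > with_io.cpp

g++ -c plain.cpp -o plain.o
g++ -c with_io.cpp -o with_io.o

# The plain object has no C++ standard library symbols at all;
# the iostream one typically references std::ios_base::Init and friends.
nm -C plain.o
nm -C with_io.o
```

On most g++ versions, `nm -C with_io.o` lists undefined references such as `std::ios_base::Init::Init()`, while `plain.o` shows little more than `main`.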
Header files are typically just declarations and don't directly result in machine code being generated. The linker is smart enough not to pull in unused functions from the CRT, so just including stdio.h without using any of its functions would not result in more code in your executable. EDIT: They can include inline functions, classes, and so on which do include code, but those should not result in an increase in your executable size until they are actually used.
Why don't C header files increase the binary's size?
[ "c++", "executable", "header-files" ]
I need to refresh the controls in the form; for that I use `this.Refresh()`, but the form flickers. How can I smoothly refresh the controls? Using `Application.DoEvents()` does this smoothly, but is it a good idea to refresh using `Application.DoEvents()`?
Here's a link on MSDN about reducing flicker in WinForms applications: <http://msdn.microsoft.com/en-us/library/3t7htc9c(VS.80).aspx> The easiest way is to set the DoubleBuffered property for the Form to true.
``` Form.ActiveForm.Update(); ```
Refresh all the controls on the form
[ "c#" ]
OK, it's a little more complicated than the question. ``` class A { static int needsToBeThreadSafe = 0; public static void M1() { needsToBeThreadSafe = RandomNumber(); } public static void M2() { print(needsToBeThreadSafe); } } ``` Now I require that between the M1() and M2() calls, `needsToBeThreadSafe` stays thread-safe.
What you might be trying to ask about is the [**ThreadStatic**] attribute. If you want each thread that uses the class **`A`** to have its own separate value of **`needsToBeThreadSafe`** then you just need to decorate that field with the [**ThreadStatic**] attribute. For more info refer to the [MSDN documentation for `ThreadStaticAttribute`](http://msdn.microsoft.com/en-us/library/system.threadstaticattribute(VS.71).aspx).
How about: ``` public static void M1() { Interlocked.Exchange( ref needsToBeThreadSafe, RandomNumber() ); } public static void M2() { // Interlocked.Read only has overloads for 64-bit values; for an int field, // CompareExchange with identical operands performs an atomic read. print( Interlocked.CompareExchange( ref needsToBeThreadSafe, 0, 0 ) ); } ```
I need to create a Thread-safe static variable in C# .Net
[ "c#", "static", "thread-safety" ]
I have an aspx page that contains regular html, some uicomponents, and multiple tokens of the form {tokenname} . When the page loads, I want to parse the page content and replace these tokens with the correct content. The idea is that there will be multiple template pages using the same codebehind. I've no trouble parsing the string data itself, (see [named string formatting](https://stackoverflow.com/questions/159017/named-string-formatting-in-c "named string formatting"), [replace tokens in template](https://stackoverflow.com/questions/20267/best-way-to-replace-tokens-in-a-large-text-template "replace tokens")) my trouble lies in when to read, and how to write the data back to the page... What's the best way for me to rewrite the page content? I've been using a streamreader, and the replacing the page with Response.Write, but this is no good - a page containing other .net components does not render correctly. Any suggestions would be greatly appreciated!
Many thanks to those that contributed to this question, however I ended up using a different solution - Overriding the render function as per [this page](http://www.aspcode.net/Last-second-HTML-changes-in-your-ASPNET-page.aspx), except I parsed the page content for multiple different tags using regular expressions. ``` protected override void Render(HtmlTextWriter writer) { if (!Page.IsPostBack) { using (System.IO.MemoryStream stream = new System.IO.MemoryStream()) { using (System.IO.StreamWriter streamWriter = new System.IO.StreamWriter(stream)) { HtmlTextWriter htmlWriter = new HtmlTextWriter(streamWriter); base.Render(htmlWriter); htmlWriter.Flush(); stream.Position = 0; using (System.IO.StreamReader oReader = new System.IO.StreamReader(stream)) { string pageContent = oReader.ReadToEnd(); pageContent = ParseTagsFromPage(pageContent); writer.Write(pageContent); oReader.Close(); } } } } else { base.Render(writer); } } ``` Here's the regex tag parser ``` private string ParseTagsFromPage(string pageContent) { string regexPattern = "{zeus:(.*?)}"; //matches {zeus:anytagname} string tagName = ""; string fieldName = ""; string replacement = ""; MatchCollection tagMatches = Regex.Matches(pageContent, regexPattern); foreach (Match match in tagMatches) { tagName = match.ToString(); fieldName = tagName.Replace("{zeus:", "").Replace("}", ""); //get data based on my found field name, using some other function call replacement = GetFieldValue(fieldName); pageContent = pageContent.Replace(tagName, replacement); } return pageContent; } ``` Seems to work quite well, as within the GetFieldValue function you can use your field name in any way you wish.
Take a look at the System.Web.UI.Adapters.PageAdapter method TransformText - generally it is used for multi-device support, but you can postprocess your page with it.
Replace tokens in an aspx page on load
[ "c#", "asp.net", ".net-2.0", "c#-2.0" ]
Is it possible to create hot-spots in C# so that when the mouse is over a certain area an event gets triggered?
Your standard Form object exposes an OnMouseMove event. Given that you don't have any controls where the hot spots will be, you could just handle the coordinates in that event: ``` protected override void OnMouseMove(MouseEventArgs mouseEvent) { string X = mouseEvent.X.ToString(); string Y = mouseEvent.Y.ToString(); //Add code here to match X & Y to your hot spot coordinates. } ```
Create a transparent `Panel` (truly transparent - by setting the WS\_EX\_TRANSPARENT bit in its extended window style - [here's how](http://saftsack.fs.uni-bayreuth.de/~dun3/archives/creating-a-transparent-panel-in-net/108.html)), put it in the position you want on top of other controls, and handle `MouseMove` on it.
Creating HotSpots in C#
[ "c#", "winforms" ]
I'm a bit confused about the way Java treats `==` and `equals()` when it comes to `int`, `Integer` and other types of numbers. For example: ``` Integer X = 9000; int x = 9000; Short Y = 9000; short y = 9000; List<Boolean> results = new ArrayList<Boolean>(); // results.add(X == Y); DOES NOT COMPILE 1) results.add(Y == 9000); // 2) results.add(X == y); // 3) results.add(X.equals(x)); // 4) results.add(X.equals(Y)); // 5) results.add(X.equals(y)); // 6) System.out.println(results); ``` outputs (maybe you should make your guess first): ``` [true, true, true, false, false] ``` 1. That `X == Y` does not compile is to be expected, being different objects. 2. I'm a little surprised that `Y == 9000` is `true`, given that 9000 is by default an `int`, and given that 1) didn't even compile. Note that you can't put an `int` into a method expecting a `Short`, yet here they are equal. 3. This is surprising for the same reason as two, but it seems worse. 4. Not surprising, as `x` is autoboxed to an `Integer`. 5. Not surprising, as objects in different classes should not be `equal()`. 6. What?? `X == y` is `true` but `X.equals(y)` is `false`? Shouldn't `==` always be stricter than `equals()`? I'd appreciate it if anyone can help me make sense of this. For what reason do `==` and `equals()` behave this way? **Edit:** I have changed 9 to 9000 to show that this behavior is not related to any unusual ways that the integers from -128 to 127 behave. **2nd Edit:** OK, if you think you understand this stuff, you should consider the following, just to make sure: ``` Integer X = 9000; Integer Z = 9000; short y = 9000; List<Boolean> results = new ArrayList<Boolean>(); results.add(X == Z); // 1) results.add(X == y); // 2) results.add(X.equals(Z)); // 3) results.add(X.equals(y)); // 4) System.out.println(results); ``` outputs: ``` [false, true, true, false] ``` The reason, as best as I understand it: 1. Different instance, so different. 2. `X` unboxed, then same value, so equal. 3. Same value, so equal. 4. `y` cannot be boxed to an `Integer`, so it cannot be equal.
The reason for ``` X == y ``` being true has to do with [binary numeric promotion.](http://java.sun.com/docs/books/jls/third_edition/html/conversions.html#170983) When at least one operand to the equality operator is convertible to a numeric type, the [numeric equality operator](http://java.sun.com/docs/books/jls/third_edition/html/expressions.html#5198) is used. First, the first operand is unboxed. Then, both operands are converted to `int`. ``` X.equals(y) ```, on the other hand, is a normal method call. As has been mentioned, `y` will be autoboxed to a `Short` object, and `Integer.equals` always returns false if the argument is not an `Integer` instance. This can easily be seen by inspecting the implementation. One could argue that this is a design flaw.
Small `Integer` instances are cached, so the invariant x == y holds for small values (by default -128 to 127; the upper bound depends on the JVM): ``` Integer a = 10; Integer b = 10; assert(a == b); // ok, same instance reused a = 1024; b = 1024; assert(a == b); // fail, not the same instance.... assert(a.equals(b)); // but same _value_ ``` EDIT 4) and 5) yield false because `equals` checks types: `X` is an `Integer` whereas `Y` is a `Short`. This is the [java.lang.Integer#equals](http://java.sun.com/javase/6/docs/api/java/lang/Integer.html#equals(java.lang.Object)) method: ``` public boolean equals(Object obj) { if (obj instanceof Integer) { return value == ((Integer)obj).intValue(); } return false; } ```
Why are these == but not `equals()`?
[ "java", "equals" ]
I have found this popular PHP/MySQL script called Zip Location by SaniSoft and it works great, except that in some instances it doesn't. It seems that any radius under 20 miles returns the same number of zip codes as 20 miles. I have searched all over Google, but to no avail, and I was wondering if someone had some insight on this situation. I would rather figure out this problem before having to pay for a program, and I could also use the learning experience. The database is a list of zip codes and the longitude and latitude of each zip code. The script uses a method that determines the distance around the zip code entered and returns the zip codes in that radius based on their lon/lat. Thank you!! Edit: From using the distance function that the script provides I have discovered that the distances between my zip code and the zip codes that the program gives me are coming up as 0 miles. **MAJOR UPDATE** From research it turns out that the database has duplicate lat/lon values. Please be aware of this when using Zip Locator. Although the PHP does its job, you will need to find a new database of zip codes. I will post my findings at a later date.
Second try! Seeing your edited problem statement, I'd look at how you assign your zip values. Lots of errors can be introduced if your zip codes are integers instead of strings. The biggest problem is that American zip codes can start with 0. The examples in ziptest.php aren't good, since they treat zips as integers. If I write my own zip code as an integer: ``` $zip1 = 02446; ``` it's interpreted by PHP as the octal value 2446. phpZipLocator then uses that value as a string without any explicit conversion, so PHP gives it the decimal value of octal 2446 as a string (1318), which is not a zip code at all. Instead of reporting that it didn't find a zip code, phpZipLocator does a radius search of all zip codes within a given radius of something that doesn't exist (which it decides should be 1). If I set the zip code using a string ``` $zip1 = '02446'; ``` I get the correct result. IMHO it seems like phpZipLocator could use a little work.
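The octal arithmetic above is easy to sanity-check; a quick sketch (Python here, purely to verify the conversion, since PHP follows the same C-style leading-zero octal convention):

```python
# In PHP (as in C), an integer literal with a leading zero is octal, so
# 02446 is read as 2*8**3 + 4*8**2 + 4*8 + 6 = 1318 in decimal.
def octal_literal_value(digits: str) -> int:
    """Value a C-style parser assigns to a zero-prefixed integer literal."""
    return int(digits, 8)

print(octal_literal_value("02446"))  # 1318 -- not a zip code at all
```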
The following approximate distance calculations are relatively simple, but can produce distance errors of 10% or more. These approximate calculations are performed using latitude and longitude values in **degrees**. The first approximation requires only simple math functions: ``` Approximate distance in miles = sqrt(x * x + y * y) where x = 69.1 * (lat2 - lat1) and y = 53 * (lon2 - lon1) ``` You can improve the accuracy of this approximate distance calculation by adding the cosine math function: ``` Approximate distance in miles = sqrt(x * x + y * y) where x = 69.1 * (lat2 - lat1) and y = 69.1 * (lon2 - lon1) * cos(lat1/57.3) ``` If you need greater accuracy, you must use the exact distance calculation. The exact distance calculation requires use of spherical geometry, since the Earth is a sphere. The exact distance calculation also requires a high level of floating point mathematical accuracy - about 15 digits of accuracy (sometimes called "double-precision"). In addition, the trig math functions used in the exact calculation require conversion of the latitude and longitude values from degrees to radians. To convert latitude or longitude from degrees to radians, divide the latitude and longitude values in this database by 180/pi, or 57.2958. The radius of the Earth is assumed to be 6,371 kilometers, or 3,958.75 miles. You must include the degrees-to-radians conversion in the calculation. Substituting degrees for radians, the calculation is: ``` Exact distance in miles = 3958.75 * arccos[sin(lat1/57.2958) * sin(lat2/57.2958) + cos(lat1/57.2958) * cos(lat2/57.2958) * cos(lon2/57.2958 - lon1/57.2958)] ```
Finding Zip Codes in a Specific Radius
[ "php", "mysql" ]
I am using VSTS 2008 + C# + .Net 3.0. I have two input strings, and I think they are different. But the following C# code thinks they are the same, and throws a System.Data.ConstraintException saying that column Name is constrained to be unique but the value already exists. Any ideas what is wrong? Here are my code and my input strings, Hex view of my input strings, <http://i30.tinypic.com/2anx2b.jpg> Notepad view of my input strings, <http://i30.tinypic.com/2q03hn4.jpg> My code, ``` static void Main(string[] args) { string[] buf = new string[] { "2ch", "2ch" }; DataTable bulkInserTable = new DataTable("BulkTable"); DataColumn column = null; DataRow row = null; column = new DataColumn(); column.DataType = System.Type.GetType("System.String"); column.ColumnName = "Name"; column.ReadOnly = true; column.Unique = true; bulkInserTable.Columns.Add(column); foreach (string item in buf) { row = bulkInserTable.NewRow(); row["Name"] = item; bulkInserTable.Rows.Add(row); } } ``` EDIT 1: My confusion is, why does the C# Dictionary think they are different, while the DataSet thinks they are the same? Any solution to make the behavior consistent? Here is my code to prove the C# Dictionary thinks they are different; the returned buf array has two elements. ``` Dictionary<string, bool> dic = new Dictionary<string, bool>(); foreach (string s in buf) { dic[s] = true; } buf = new List<string>(dic.Keys).ToArray(); // we got two strings here, rather than one, which proves Dictionary thinks the two strings are different. ```
Well, for a start you need your sample code to be: ``` foreach (string item in buf) { row = bulkInserTable.NewRow(); row["Name"] = item; bulkInserTable.Rows.Add(row); } ``` Though that still exhibits the issue, at least now it fails for the real reason. The reason for this is that, when creating a data table, the default compare options in effect are: ``` this._compareFlags = CompareOptions.IgnoreWidth | CompareOptions.IgnoreKanaType | CompareOptions.IgnoreCase; ``` From the [docs](http://msdn.microsoft.com/en-us/library/system.globalization.compareoptions.aspx) on IgnoreWidth: > Indicates that the string comparison must ignore the character width. For example, Japanese katakana characters can be written as full-width or half-width. If this value is selected, the katakana characters written as full-width are considered equal to the same characters written as half-width. ``` System.Globalization.CultureInfo.CurrentCulture.CompareInfo.Compare( "2ch", "2ch", System.Globalization.CompareOptions.IgnoreWidth); ``` returns 0, i.e. identical. I strongly suggest you *do* consider such values identical, or you'll cause further confusion down the line; however, if you really want to change it: ``` //CaseSensitive property uses this under the hood internal bool SetCaseSensitiveValue( bool isCaseSensitive, bool userSet, bool resetIndexes) { if (!userSet && ( this._caseSensitiveUserSet || (this._caseSensitive == isCaseSensitive))) { return false; } this._caseSensitive = isCaseSensitive; if (isCaseSensitive) { this._compareFlags = CompareOptions.None; } else { this._compareFlags = CompareOptions.IgnoreWidth | CompareOptions.IgnoreKanaType | CompareOptions.IgnoreCase; } if (resetIndexes) { this.ResetIndexes(); foreach (Constraint constraint in this.Constraints) { constraint.CheckConstraint(); } } return true; } ``` Thus, setting `CaseSensitive` to true switches the flags to `CompareOptions.None`, which totally disables the complex comparison options.
If you want to make a Dictionary with the same behaviour use the following comparer: ``` public class DataTableIgnoreCaseComparer : IEqualityComparer<string> { private readonly System.Globalization.CompareInfo ci = System.Globalization.CultureInfo.CurrentCulture.CompareInfo; private const System.Globalization.CompareOptions options = CompareOptions.IgnoreCase | CompareOptions.IgnoreKanaType | CompareOptions.IgnoreWidth; public DataTableIgnoreCaseComparer() {} public bool Equals(string a, string b) { return ci.Compare(a, b, options) == 0; } public int GetHashCode(string s) { return ci.GetSortKey(s, options).GetHashCode(); } } ```
It depends on what you mean by "the same". The two strings have different Unicode values, but I suspect under some normalization rules they would be the same. Just so that others can reproduce it easily without cut and paste issues, the second string is: ``` "\uff12\uff43\uff48" ``` These are the ["full width"](http://www.unicode.org/charts/PDF/UFF00.pdf) versions of "2ch". EDIT: To respond to your edit, clearly the `DataSet` uses a different idea of equality, whereas unless you provide anything specific, `Dictionary` will use ordinal comparisons (as provided by string itself). EDIT: I'm pretty sure the problem is that the DataTable is using CompareOptions.IgnoreWidth: ``` using System; using System.Data; using System.Globalization; class Test { static void Main() { string a = "2ch"; string b = "\uff12\uff43\uff48"; DataTable table = new DataTable(); CompareInfo ci = table.Locale.CompareInfo; // Prints 0, i.e. equal Console.WriteLine(ci.Compare(a, b, CompareOptions.IgnoreWidth)); } } ``` EDIT: If you set the `DataTable`'s `CaseSensitive` property to true, I suspect it will behave the same as `Dictionary`.
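The "full width" relationship between the two strings can also be seen through Unicode compatibility normalization; a small sketch (Python used here only to show the characters are compatibility-equivalent, which is essentially what `IgnoreWidth` keys on):

```python
import unicodedata

fullwidth = "\uff12\uff43\uff48"  # fullwidth forms of "2", "c", "h"
halfwidth = "2ch"

# NFKC folds compatibility variants (including fullwidth forms) together,
# so the two strings normalize to the same text even though their code
# points -- and therefore ordinal comparisons -- differ.
print(fullwidth == halfwidth)                                  # False
print(unicodedata.normalize("NFKC", fullwidth) == halfwidth)   # True
```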
C# string duplication issue
[ "c#", ".net", "string", "visual-studio-2008", "ado.net" ]
I have a Java EE project which builds fine with Ant, deploys perfectly to JBoss, and runs without any trouble. This project includes a few **custom** tag libraries (which are not [JSTL](https://stackoverflow.com/tags/jstl/info)!), which are also working without any difficulties. The problem is with the Eclipse IDE (Ganymede): in every single JSP file which uses our custom tags, the JSP parser flags the taglib include line with this error: `Cannot find the tag library descriptor for (example).tld` This also causes every use of the tag library to be flagged as an error, and since the IDE doesn't have their definition, it can't check tag parameters, etc. Our perfectly-working JSP files are a sea of red errors, and my eyes are beginning to burn. How can I simply tell Eclipse, "The tag library descriptor you are looking for is "src/web/WEB-INF/(example)-taglib/(example).tld"? I've already asked this question on the Eclipse support forums, with no helpful results.
It turns out that the cause was that this project wasn't being considered by Eclipse to actually be a Java EE project at all; it was an old project from 3.1, and the Eclipse 3.5 we are using now requires several "natures" to be set in the project configuration file. ``` <natures> <nature>org.eclipse.jdt.core.javanature</nature> <nature>InCode.inCodeNature</nature> <nature>org.eclipse.dltk.javascript.core.nature</nature> <nature>net.sf.eclipsecs.core.CheckstyleNature</nature> <nature>org.eclipse.wst.jsdt.core.jsNature</nature> <nature>org.eclipse.wst.common.project.facet.core.nature</nature> <nature>org.eclipse.wst.common.modulecore.ModuleCoreNature</nature> <nature>org.eclipse.jem.workbench.JavaEMFNature</nature> </natures> ``` I was able to find the cause by creating a new "Dynamic Web Project" which properly read its JSP files, and diffing against the config of the older project. The only way I could find to add these was by editing the .project file, but after re-opening the project, everything magically worked. The settings referenced by pribeiro, above, weren't necessary since the project already conformed to the default settings. Both pribeiro and nitind's answers gave me ideas to jumpstart my search, thanks. Is there a way of editing these "natures" from within the UI?
In Eclipse Helios "Java EE Module Dependencies" in the project properties has been replaced with "Deployment Assembly". So for solving this problem with Eclipse Helios, the way I did it is the following: * Right click on the project in package explorer and choose "Import..." * Accept the default selection "File System" and press "Next" * Press "Browse" in the From directory line, go to your tomcat installation and locate the file webapps/examples/WEB-INF/lib (I have tomcat 6, other versions of Tomcat may have the path webapps/jsp-examples/WEB-INF/lib). Once in the path press OK. * Click besides jstl.jar and standard.jar to activate the check boxes * On the line Into folder click on Browse and choose the library folder. I use /lib inside the project. * Click "Finish" * Right click on the project in Package Explorer view and choose properties (or press Alt + Enter) * Click on "Java Build Path" * Click "Add Jar", click on your project, folder lib, select jstl.jar, press OK * Click "Add Jar", click on your project, folder lib, select standard.jar, press OK * Press OK to dismiss the properties dialog * Click on the Problems view and select the message "Classpath entry .../jstl.jar will not be exported or published. Runtime ClassNotFoundExceptions may result.". * Right click on it and select "Quick Fix". * Accept the default "Mark the associated raw classpath entry as a publish/export dependency" and press Finish. * Do the same for standard.jar This solves the problem, but if you want to check what has happened in "Deployment Assembly", open the project properties again, select "Deployment Assembly" and you'll see that standard.jar and jstl.jar have been added to WEB-INF/lib folder.
Eclipse "cannot find the tag library descriptor" for custom tags (not JSTL!)
[ "java", "eclipse", "jsp", "taglib", "custom-tags" ]
I have an input field where I only want users to type numbers. HTML: `<input id="num" type="text" name="page" size="4" value="" />` jQuery/JS: ``` $("#num").keypress(function (e){ if( e.which!=8 && e.which!=0 && (e.which<48 || e.which>57)){ return false; } }); ``` I hope someone can help me. By the way: I'm not interested in a larger jQuery plugin to make this work. (I have found some jQuery plugins, but there must be some other way to fix it with less code.)
Try this: ``` $("#num").keypress(function (e){ var charCode = (e.which) ? e.which : e.keyCode; if (charCode > 31 && (charCode < 48 || charCode > 57)) { return false; } }); ``` Values 48 through 57 represent the digits 0-9.
Never do this. A user can update a textbox without pressing a key: he can copy, paste, or drag some text. Also this will be ***irritating to the user***. Just display a label next to the field saying that it accepts only numbers, and then **validate your input at submission**.
input field, only numbers jquery/js
[ "javascript", "jquery" ]
I need to check if a process with a given HANDLE is still running, I tried to do it using the following code however it always returns at the second return false, even if the process is running. ``` bool isProcessRunning(HANDLE process) { if(process == INVALID_HANDLE_VALUE)return false; DWORD exitCode; if(GetExitCodeProcess(process, &exitCode) != 0) return false;//always returns here return GetLastError() == STILL_ACTIVE;//still running } ```
You can test the process life by using ``` bool isProcessRunning(HANDLE process) { return WaitForSingleObject( process, 0 ) == WAIT_TIMEOUT; } ```
<http://msdn.microsoft.com/en-us/library/ms683189%28VS.85%29.aspx> > **Return Value** > > If the function succeeds, the return > value is nonzero. > > If the function > fails, the return value is zero. To > get extended error information, call > *GetLastError*.
Detecting if a process is still running
[ "c++", "windows", "process" ]
I came across a situation where I have a pretty big file that I need to read binary data from. Consequently, I realized that the default BinaryReader implementation in .NET is pretty slow. Upon looking at it with [.NET Reflector](http://en.wikipedia.org/wiki/.NET_Reflector) I came across this: ``` public virtual int ReadInt32() { if (this.m_isMemoryStream) { MemoryStream stream = this.m_stream as MemoryStream; return stream.InternalReadInt32(); } this.FillBuffer(4); return (((this.m_buffer[0] | (this.m_buffer[1] << 8)) | (this.m_buffer[2] << 0x10)) | (this.m_buffer[3] << 0x18)); } ``` Which strikes me as extremely inefficient, thinking at how computers were designed to work with 32-bit values since the 32 bit CPU was invented. So I made my own (unsafe) FastBinaryReader class with code such as this instead: ``` public unsafe class FastBinaryReader :IDisposable { private static byte[] buffer = new byte[50]; //private Stream baseStream; public Stream BaseStream { get; private set; } public FastBinaryReader(Stream input) { BaseStream = input; } public int ReadInt32() { BaseStream.Read(buffer, 0, 4); fixed (byte* numRef = &(buffer[0])) { return *(((int*)numRef)); } } ... } ``` Which is much faster - I managed to shave off 5-7 seconds off the time it took to read a 500 MB file, but it's still pretty slow overall (29 seconds initially and ~22 seconds now with my `FastBinaryReader`). It still kind of baffles me as to why it still takes so long to read such a relatively small file. If I copy the file from one disk to another it takes only a couple of seconds, so disk throughput is not an issue. I further inlined the ReadInt32, etc. 
calls, and I ended up with this code: ``` using (var br = new FastBinaryReader(new FileStream(cacheFilePath, FileMode.Open, FileAccess.Read, FileShare.Read, 0x10000, FileOptions.SequentialScan))) { while (br.BaseStream.Position < br.BaseStream.Length) { var doc = DocumentData.Deserialize(br); docData[doc.InternalId] = doc; } } ``` --- ``` public static DocumentData Deserialize(FastBinaryReader reader) { byte[] buffer = new byte[4 + 4 + 8 + 4 + 4 + 1 + 4]; reader.BaseStream.Read(buffer, 0, buffer.Length); DocumentData data = new DocumentData(); fixed (byte* numRef = &(buffer[0])) { data.InternalId = *((int*)&(numRef[0])); data.b = *((int*)&(numRef[4])); data.c = *((long*)&(numRef[8])); data.d = *((float*)&(numRef[16])); data.e = *((float*)&(numRef[20])); data.f = numRef[24]; data.g = *((int*)&(numRef[25])); } return data; } ``` Any further ideas on how to make this even faster? I was thinking maybe I could use marshalling to map the entire file straight into memory on top of some custom structure, since the data is linear, fixed size and sequential. **SOLVED:** I came to the conclusion that FileStream's buffering/BufferedStream are flawed. Please see the accepted answer and my own answer (with the solution) below.
When you do a filecopy, large chunks of data are read and written to disk. You are reading the entire file four bytes at a time. This is bound to be slower. Even if the stream implementation is smart enough to buffer, you still have at least 500 MB/4 = 131072000 API calls. Isn't it more wise to just read a large chunk of data, and then go through it sequentially, and repeat until the file has been processed?
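The chunk-then-parse approach can be sketched generically; here is a hypothetical illustration (Python and its `struct` module, chosen only for brevity) using the same 29-byte record layout as the question's `Deserialize`:

```python
import io
import struct

# Record layout from the question: int, int, long, float, float, byte, int.
# "<iiqffbi" is little-endian with no padding and packs to 29 bytes,
# matching the manual pointer arithmetic in the C# version.
RECORD = struct.Struct("<iiqffbi")

def parse_records(stream, chunk_size=0x10000):
    """Read large chunks and slice fixed-size records out of each one."""
    pending = b""
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        pending += chunk
        # Only whole records are unpacked; any tail bytes wait for the
        # next chunk.
        full = len(pending) // RECORD.size * RECORD.size
        for fields in RECORD.iter_unpack(pending[:full]):
            yield fields
        pending = pending[full:]
```

Reading 64 KB at a time turns the ~131 million tiny read calls into a few thousand large ones, while all the per-record slicing happens in memory.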
I ran into a similar performance issue with BinaryReader/FileStream, and after profiling, I discovered that the problem isn't with `FileStream` buffering, but instead with this line: ``` while (br.BaseStream.Position < br.BaseStream.Length) { ``` Specifically, the property `br.BaseStream.Length` on a `FileStream` makes a (relatively) slow system call to get the file size on each loop. After changing the code to this: ``` long length = br.BaseStream.Length; while (br.BaseStream.Position < length) { ``` and using an appropriate buffer size for the `FileStream`, I achieved similar performance to the `MemoryStream` example.
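The fix above is loop-invariant hoisting; a minimal sketch of the same idea (Python used purely for illustration, with a helper that mimics `FileStream.Length` by seeking to the end of the stream):

```python
import io

def stream_length(f):
    """Mimic FileStream.Length: seek to the end, record it, seek back."""
    pos = f.tell()
    f.seek(0, io.SEEK_END)
    length = f.tell()
    f.seek(pos)
    return length

def count_records_slow(f, record_size=4):
    """Re-queries the length on every iteration, like the original loop."""
    n = 0
    while f.tell() < stream_length(f):
        f.read(record_size)
        n += 1
    return n

def count_records_fast(f, record_size=4):
    """Hoists the length query out of the loop, like the fixed loop."""
    length = stream_length(f)
    n = 0
    while f.tell() < length:
        f.read(record_size)
        n += 1
    return n

print(count_records_fast(io.BytesIO(b"\x00" * 40)))  # 10
```

Both versions produce the same answer; the fast one just avoids one extra length query (a system call on a real file) per record.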
Faster (unsafe) BinaryReader in .NET
[ "c#", "performance", "pointers", "unsafe", "binaryreader" ]
I want to handle some SAPI messages from a DLL, which is some sort of plugin. How to handle messages/events inside a VC++ dll. The SAPI event handling is shown in the example at: <http://msdn.microsoft.com/en-us/library/ms720165%28VS.85%29.aspx>
To process "normal" messages, you still need a Window object. It can be a special "message-only" window that only shares the messaging queue infrastructure with normal windows. To create it, first register your message handling class with `RegisterClass()`. Next, create a message queue by passing `HWND_MESSAGE` as the parent window to `CreateWindow()`. You will get back an HWND you can then pass to SAPI. However, SAPI supports other interfaces as well. The ISpNotifySource documentation names 4: Windows messages, callbacks, events and COM (ISpNotifySink). To use callbacks, simply pass the address of one of your DLL methods to `SetNotifyCallbackFunction`.
If your code is running as a plugin, you might want to look at having SAPI call you back directly using [ISpNotifySource::SetNotifyCallbackFunction](http://msdn.microsoft.com/en-us/library/ms717971(VS.85).aspx) instead of [ISpNotifySource::SetNotifyWindowMessage](http://msdn.microsoft.com/en-us/library/ms717985(VS.85).aspx). SAPI will then call your function directly when an event occurs.
How to use WndProc from a C++ dll?
[ "c++", "dll", "sapi", "wndproc" ]
I have a C++ source file and a Python source file. I'd like the C++ source file to be able to use the contents of the Python source file as a big string literal. I could do something like this: ``` char* python_code = " #include "script.py" " ``` But that won't work because there need to be \'s at the end of each line. I could manually copy and paste in the contents of the Python code and surround each line with quotes and a terminating \n, but that's ugly. Even though the python source is going to effectively be compiled into my C++ app, I'd like to keep it in a separate file because it's more organized and works better with editors (emacs isn't smart enough to recognize that a **C** string literal is python code and switch to python mode while you're inside it). Please don't suggest I use PyRun\_File, that's what I'm trying to avoid in the first place ;)
The C/C++ preprocessor acts in units of tokens, and a string literal is a *single* token. As such, you can't intervene in the middle of a string literal like that. You could preprocess script.py into something like: ``` "some code\n" "some more code that will be appended\n" ``` and #include that, however. Or you can use [`xxd`](http://linux.die.net/man/1/xxd)`​ -i` to generate a C static array ready for inclusion.
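The preprocessing step can be automated with a tiny generator script run before the compiler; here is a hypothetical sketch (Python, all names invented) that turns a source file's text into an includable C declaration built from adjacent string literals:

```python
def to_c_string_header(text: str, varname: str = "python_code") -> str:
    """Emit a C declaration whose adjacent string literals concatenate
    back into the original file contents, one source line per literal."""
    lines = []
    for line in text.splitlines():
        # Escape backslashes first, then double quotes, so the literal
        # round-trips exactly; append an explicit newline per line.
        escaped = line.replace("\\", "\\\\").replace('"', '\\"')
        lines.append('    "%s\\n"' % escaped)
    return "static const char %s[] =\n%s;\n" % (varname, "\n".join(lines))

print(to_c_string_header('print "hello"'))
```

The generated header can then be pulled in with a plain `#include "script_py.h"`, and the Python source file itself stays untouched, so editors keep treating it as Python.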
This won't get you all the way there, but it will get you pretty damn close. Assuming `script.py` contains this: ``` print "The current CPU time in seconds is: ", time.clock() ``` First, wrap it up like this: ``` STRINGIFY(print "The current CPU time in seconds is: ", time.clock()) ``` Then, just before you include it, do this: ``` #define STRINGIFY(x) #x const char * script_py = #include "script.py" ; ``` There's probably an even tighter answer than that, but I'm still searching.
C/C++, can you #include a file into a string literal?
[ "c++", "c", "include", "c-preprocessor", "string-literals" ]
> **Possible Duplicate:** > [Multi value Dictionary](https://stackoverflow.com/questions/569903/multi-value-dictionary) I just need to store 2 value. How can I do that?
[Multi value Dictionary](https://stackoverflow.com/questions/569903/multi-value-dictionary)
If you need a variable number of values per key, you can use ``` Dictionary<TypeOfKey, List<TypeOfValue>> ``` E.g. if you want to store an arbitrary number of integers per string, you could do this: ``` var numbersPerString = new Dictionary<string, List<int>>(); ```
Something like Dictionary but can store more than one value?
[ "c#", "dictionary" ]
> **Possible Duplicate:** > [Regular cast vs. static\_cast vs. dynamic\_cast](https://stackoverflow.com/questions/28002/regular-cast-vs-staticcast-vs-dynamiccast) I don't quite get when to use static cast and when dynamic. Any explanation please?
Use `dynamic_cast` when casting from a base class type to a derived class type. It checks that the object being cast is actually of the derived class type and returns a null pointer if the object is not of the desired type (unless you're casting to a reference type -- then it throws a `bad_cast` exception). Use `static_cast` if this extra check is not necessary. As Arkaitz said, since `dynamic_cast` performs the extra check, it requires RTTI information and thus has a greater runtime overhead, whereas `static_cast` is performed at compile-time.
In some contexts, like this one, "static" refers to compile-time and "dynamic" refers to run-time. For compile-time checking, use static\_cast (limited to what the compiler knows). For run-time checking, use dynamic\_cast (limited to classes with RTTI). For no checking, use reinterpret\_cast.
static cast versus dynamic cast
[ "c++" ]
I have a netbook with 1.20Ghz Processor & 1GB Ram. I'm running a C# WinForms app on it which, at 5 minute intervals, reads every line of a text file and depending on what the content of that line is, either skips it or writes it to an xml file. Sometimes it may be processing about 2000 lines. When it begins this task, the processor gets maxed out, 100% use. However on my desktop with 2.40Ghz Processor and 3GB Ram it's untouched (for obvious reasons)... is there any way I can actually reduce this processor issue dramatically? The code isn't complex, I'm not bad at coding either and I'm not constantly opening the file, reading and writing... it's all done in one fell swoop. Any help greatly appreciated!? **Sample Code** \*\*\*Timer..... ``` #region Timers Setup aTimer.Tick += new EventHandler(OnTimedEvent); aTimer.Interval = 60000; aTimer.Enabled = true; aTimer.Start(); radioButton60Mins.Checked = true; #endregion Timers Setup private void OnTimedEvent(object source, EventArgs e) { string msgLoggerMessage = "Checking For New Messages " + DateTime.Now; listBoxActivityLog.Items.Add(msgLoggerMessage); MessageLogger messageLogger = new MessageLogger(); messageLogger.LogMessage(msgLoggerMessage); if (radioButton1Min.Checked) { aTimer.Interval = 60000; } if (radioButton60Mins.Checked) { aTimer.Interval = 3600000; } if (radioButton5Mins.Checked) { aTimer.Interval = 300000; } // split the file into a list of sms messages List<SmsMessage> messages = smsPar.ParseFile(smsPar.CopyFile()); // sanitize the list to get rid of stuff we don't want smsPar.SanitizeSmsMessageList(messages); ApplyAppropriateColoursToRecSMSListinDGV(); } public List<SmsMessage> ParseFile(string filePath) { List<SmsMessage> list = new List<SmsMessage>(); using (StreamReader file = new StreamReader(filePath)) { string line; while ((line = file.ReadLine()) != null) { var sms = ParseLine(line); list.Add(sms); } } return list; } public SmsMessage ParseLine(string line) { string[] words = line.Split(','); for 
(int i = 0; i < words.Length; i++) { words[i] = words[i].Trim('"'); } SmsMessage msg = new SmsMessage(); msg.Number = int.Parse(words[0]); msg.MobNumber = words[1]; msg.Message = words[4]; msg.FollowedUp = "Unassigned"; msg.Outcome = string.Empty; try { //DateTime Conversion!!! string[] splitWords = words[2].Split('/'); string year = splitWords[0].Replace("09", "20" + splitWords[0]); string dateString = splitWords[2] + "/" + splitWords[1] + "/" + year; string timeString = words[3]; string wholeDT = dateString + " " + timeString; DateTime dateTime = DateTime.Parse(wholeDT); msg.Date = dateTime; } catch (Exception e) { MessageBox.Show(e.ToString()); Application.Exit(); } return msg; } public void SanitizeSmsMessageList(List<SmsMessage> list) { // strip out unwanted messages // list.Remove(some_message); etc... List<SmsMessage> remove = new List<SmsMessage>(); foreach (SmsMessage message in list) { if (message.Number > 1) { remove.Add(message); } } foreach (SmsMessage msg in remove) { list.Remove(msg); } //Fire Received messages to xml doc ParseSmsToXMLDB(list); } public void ParseSmsToXMLDB(List<SmsMessage> list) { try { if (File.Exists(WriteDirectory + SaveName)) { xmlE.AddXMLElement(list, WriteDirectory + SaveName); } else { xmlE.CreateNewXML(WriteDirectory + SaveName); xmlE.AddXMLElement(list, WriteDirectory + SaveName); } } catch (Exception e) { MessageBox.Show(e.ToString()); Application.Exit(); } } public void CreateNewXML(string writeDir) { try { XElement Database = new XElement("Database"); Database.Save(writeDir); } catch (Exception e) { MessageBox.Show(e.ToString()); } } public void AddXMLElement(List<SmsMessage> messages, string writeDir) { try { XElement Database = XElement.Load(writeDir); foreach (SmsMessage msg in messages) { if (!DoesExist(msg.MobNumber, writeDir)) { Database.Add(new XElement("SMS", new XElement("Number", msg.MobNumber), new XElement("DateTime", msg.Date), new XElement("Message", msg.Message), new XElement("FollowedUpBy", 
msg.FollowedUp), new XElement("Outcome", msg.Outcome), new XElement("Quantity", msg.Quantity), new XElement("Points", msg.Points))); EventNotify.SendNotification("A New Message Has Arrived!", msg.MobNumber); } } Database.Save(writeDir); EventNotify.UpdateDataGridView(); EventNotify.UpdateStatisticsDB(); } catch (Exception e) { MessageBox.Show(e.ToString()); } } public bool DoesExist(string number, string writeDir) { XElement main = XElement.Load(writeDir); return main.Descendants("Number") .Any(element => element.Value == number); } ```
Use a profiler and/or Performance Monitor and/or `\\live.sysinternals.com\tools\procmon.exe` and/or ResourceMonitor to determine what's going on
If the 5 minute process is a background task, you can make use of Thread Priority. MSDN [here](http://msdn.microsoft.com/en-us/library/system.threading.thread.priority.aspx). If you do the processing on a separate thread, change your timer to be a System.Threading.Timer and use callback events, you should be able to set a lower priority on that thread than the rest of your application.
C# WinForms App Maxing Processor But Doing Nothing Strenuous!
[ "c#", "winforms", "processor" ]
In my log4j.properties I have: ``` log4j.rootLogger=DEBUG,stdout log4j.logger.notRootLogger=DEBUG,somewhereelse ``` The appenders stdout and somewhereelse are both configured properly: stdout writes to the console and somewhereelse writes to a file. In my code, in each class I set either: ``` static Logger log = Logger.getLogger("notRootLogger"); ``` ^ When I don't want stuff going to the console. -OR- ``` static Logger log = Logger.getRootLogger(); ``` ^ When I do. What do I have to do in log4j.properties to stop the things that are written to notRootLogger ending up in stdout? Is there some sort of inheritance of wherever the root logger writes to going on that needs to be turned off somehow? I don't want to have to configure a logger for every single class individually that I just want to log to the console.
You need to set `additivity = false`, IIRC. From the [log4j manual](http://logging.apache.org/log4j/1.2/manual.html): > Each enabled logging request for a > given logger will be forwarded to all > the appenders in that logger as well > as the appenders higher in the > hierarchy. In other words, appenders > are inherited additively from the > logger hierarchy. For example, if a > console appender is added to the root > logger, then all enabled logging > requests will at least print on the > console. If in addition a file > appender is added to a logger, say C, > then enabled logging requests for C > and C's children will print on a file > and on the console. It is possible to > override this default behavior so that > appender accumulation is no longer > additive by setting the additivity > flag to false. Try this: ``` log4j.rootLogger=DEBUG,stdout log4j.logger.notRootLogger=DEBUG,somewhereelse log4j.additivity.notRootLogger=false ```
Hmm, I should have read the short introduction to log4j more carefully. ``` log4j.additivity.notRootLogger=false ``` fixes it, because a logger inherits the appenders of the loggers above it in the hierarchy, and the root logger is at the top of that hierarchy, obviously.
log4j directs all log output to stdout even though it's not supposed to
[ "", "java", "log4j", "" ]
I am running a client/server application using JBoss. How can I connect to the server JVM's MBeanServer? I want to use the MemoryMXBean to track memory consumption. I can connect to the JBoss MBeanServer using a JNDI lookup, but the java.lang MemoryMXBean is not registered with the JBoss MBeanServer. EDIT: The requirement is for programmatic access to the memory usage from the client.
Unlike the JBoss server's MBeanServer, the JVM's MBean server doesn't allow remote monitoring by default. You need to set various system properties to allow that: <http://java.sun.com/javase/6/docs/technotes/guides/management/agent.html>
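Once the management agent is reachable, one way to read memory usage programmatically is to query the platform `java.lang:type=Memory` MBean through whatever `MBeanServerConnection` you have. This is a sketch (the class and method names are invented for illustration, and it is not JBoss-specific); the same call works with a remote connection obtained via `JMXConnectorFactory` once the server JVM has the system properties above set:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.openmbean.CompositeData;

// Reads current heap usage from the platform Memory MBean through a generic
// MBeanServerConnection (remote JMX connections implement it as well).
class HeapReader {
    static long usedHeapBytes(MBeanServerConnection conn) {
        try {
            ObjectName memory = new ObjectName("java.lang:type=Memory");
            // HeapMemoryUsage is exposed as open-type CompositeData over JMX
            CompositeData heap = (CompositeData) conn.getAttribute(memory, "HeapMemoryUsage");
            return (Long) heap.get("used");
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

Locally you can pass `ManagementFactory.getPlatformMBeanServer()`; remotely, the `MBeanServerConnection` from `JMXConnector.getMBeanServerConnection()`.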
I wrote a class like this: ``` import javax.management.remote.JMXServiceURL; import javax.management.MBeanAttributeInfo; import javax.management.MBeanInfo; import javax.management.MBeanServerConnection; import javax.management.ObjectName; import javax.management.remote.JMXConnector; import javax.management.remote.JMXConnectorFactory; public class JVMRuntimeClient { public static void main(String[] args) throws Exception { if (args == null || args.length < 2) { System.out.println("Usage: java JVMRuntimeClient HOST PORT"); return; } try { JMXServiceURL target = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://"+args[0]+":"+args[1]+"/jmxrmi"); JMXConnector connector = JMXConnectorFactory.connect(target); MBeanServerConnection remote = connector.getMBeanServerConnection(); /** * this is the part where you MUST know which MBean to get * com.digitalscripter.search.statistics:name=requestStatistics,type=RequestStatistics * YOURS WILL VARY! */ ObjectName bean = new ObjectName("com.digitalscripter.search.statistics:name=requestStatistics,type=RequestStatistics"); MBeanInfo info = remote.getMBeanInfo(bean); MBeanAttributeInfo[] attributes = info.getAttributes(); for (MBeanAttributeInfo attr : attributes) { System.out.println(attr.getDescription() + " " + remote.getAttribute(bean,attr.getName())); } connector.close(); } catch(Exception e) { e.printStackTrace(); System.exit(1); } } } ```
Accessing a remote MBean server
[ "", "java", "jboss", "jmx", "mbeans", "" ]
What are the advantages/disadvantages of using NHibernate ? What kind of applications should be (& should not be) built using NHibernate ?
Since other people have listed the advantages, I will just list the disadvantages. --- Disadvantages 1. Increased startup time due to metadata preparation (not good for desktop-like apps). 2. Huge learning curve without an ORM background. 3. Comparatively hard to fine-tune the generated SQL. 4. Hard to get session management right if used in non-typical environments (read: non-web apps). 5. Not suited for apps without a clean domain object model (not all apps in the world need clean domain object models). 6. You have to jump through hoops if you have a badly designed (legacy) DB schema.
Advantages: 1. Flexible and very powerful mapping capabilities. 2. Caching. 3. Very polished UnitOfWork implementation. 4. Future queries ([article](http://elegantcode.com/2009/01/25/nhibernate-and-future-queries/)). 5. Model classes are POCOs - which effectively means you can easily implement the anemic domain antipattern. 6. Interceptors - you can do a kind of aspect-oriented programming... like very easily implementing auditing, logging, authorization, validation, etc. for your domain. 7. Lucene.NET and NHibernate are well integrated with each other - this gives you a very fast and effective implementation of full-text indexing. 8. It's very mature and popular in enterprise environments. 9. Big community. Disadvantages: 1. The already mentioned learning curve. You can start using NHibernate very fast, but it will take you months to master it. I'd highly recommend reading the Manning NHibernate book. 2. Writing XML mappings can be very tedious, especially for big databases with hundreds or thousands of tables, views and stored procedures. Yes, there are tools that will help you by generating those mappings, but you will still have to do quite a lot of manual work. [Fluent NHibernate](http://fluentnhibernate.org/) seems to simplify this process by getting rid of the XML mappings, as does [Castle ActiveRecord](http://www.castleproject.org/activerecord/index.html) (though AR is impossible to use for an anemic domain, as you define mappings in attributes on your model classes). 3. Performance may be low for certain scenarios. For instance, large bulk operations. For those you might have to use IStatelessSession, but it's an awkward experience, to say the least...
Advantages and Disadvantages of NHibernate
[ "", "sql", "nhibernate", "orm", "" ]
I am aware of `COLUMNS_UPDATED`; well, I need a quick shortcut (if anyone has made one; I am already making my own, but if anyone can save me the time, I will appreciate it). Basically, I need an XML of only the updated column values, for replication purposes. SELECT \* FROM inserted gives me each column, but I need only the updated ones. Something like the following... ``` CREATE TRIGGER DBCustomers_Insert ON DBCustomers AFTER UPDATE AS BEGIN DECLARE @sql as NVARCHAR(1024); SET @sql = 'SELECT '; I NEED HELP FOR FOLLOWING LINE ...., I can manually write every column, but I need an automated routine which can work regardless of column specification: for each column, if it's modified append $sql = ',' + columnname... SET @sql = $sql + ' FROM inserted FOR XML RAW'; DECLARE @x as XML; SET @x = CAST(EXEC(@sql) AS XML); .. use @x END ```
Inside the trigger, you can use `COLUMNS_UPDATED()` like this in order to get the updated columns: ``` -- Get the table id of the trigger -- DECLARE @idTable INT SELECT @idTable = T.id FROM sysobjects P JOIN sysobjects T ON P.parent_obj = T.id WHERE P.id = @@procid -- Get COLUMNS_UPDATED if update -- DECLARE @Columns_Updated VARCHAR(50) SELECT @Columns_Updated = ISNULL(@Columns_Updated + ', ', '') + name FROM syscolumns WHERE id = @idTable AND CONVERT(VARBINARY,REVERSE(COLUMNS_UPDATED())) & POWER(CONVERT(BIGINT, 2), colorder - 1) > 0 ``` But this snippet of code fails with an arithmetic overflow when you have a table with more than 62 columns. Here is the final version, which handles more than 62 columns but gives only the ordinal numbers of the updated columns. It's easy to join with 'syscolumns' to get the names: ``` DECLARE @Columns_Updated VARCHAR(100) SET @Columns_Updated = '' DECLARE @maxByteCU INT DECLARE @curByteCU INT SELECT @maxByteCU = DATALENGTH(COLUMNS_UPDATED()), @curByteCU = 1 WHILE @curByteCU <= @maxByteCU BEGIN DECLARE @cByte INT SET @cByte = SUBSTRING(COLUMNS_UPDATED(), @curByteCU, 1) DECLARE @curBit INT DECLARE @maxBit INT SELECT @curBit = 1, @maxBit = 8 WHILE @curBit <= @maxBit BEGIN IF CONVERT(BIT, @cByte & POWER(2,@curBit - 1)) <> 0 SET @Columns_Updated = @Columns_Updated + '[' + CONVERT(VARCHAR, 8 * (@curByteCU - 1) + @curBit) + ']' SET @curBit = @curBit + 1 END SET @curByteCU = @curByteCU + 1 END ```
I've another completely different solution that doesn't use COLUMNS\_UPDATED at all, nor does it rely on building dynamic SQL at runtime. (You might want to use dynamic SQL at design time but thats another story.) Basically you start with [the inserted and deleted tables](http://msdn.microsoft.com/en-us/library/ms191300.aspx), unpivot each of them so you are just left with the unique key, field value and field name columns for each. Then you join the two and filter for anything that's changed. Here is a full working example, including some test calls to show what is logged. ``` -- -------------------- Setup tables and some initial data -------------------- CREATE TABLE dbo.Sample_Table (ContactID int, Forename varchar(100), Surname varchar(100), Extn varchar(16), Email varchar(100), Age int ); INSERT INTO Sample_Table VALUES (1,'Bob','Smith','2295','bs@example.com',24); INSERT INTO Sample_Table VALUES (2,'Alice','Brown','2255','ab@example.com',32); INSERT INTO Sample_Table VALUES (3,'Reg','Jones','2280','rj@example.com',19); INSERT INTO Sample_Table VALUES (4,'Mary','Doe','2216','md@example.com',28); INSERT INTO Sample_Table VALUES (5,'Peter','Nash','2214','pn@example.com',25); CREATE TABLE dbo.Sample_Table_Changes (ContactID int, FieldName sysname, FieldValueWas sql_variant, FieldValueIs sql_variant, modified datetime default (GETDATE())); GO -- -------------------- Create trigger -------------------- CREATE TRIGGER TriggerName ON dbo.Sample_Table FOR DELETE, INSERT, UPDATE AS BEGIN SET NOCOUNT ON; --Unpivot deleted WITH deleted_unpvt AS ( SELECT ContactID, FieldName, FieldValue FROM (SELECT ContactID , cast(Forename as sql_variant) Forename , cast(Surname as sql_variant) Surname , cast(Extn as sql_variant) Extn , cast(Email as sql_variant) Email , cast(Age as sql_variant) Age FROM deleted) p UNPIVOT (FieldValue FOR FieldName IN (Forename, Surname, Extn, Email, Age) ) AS deleted_unpvt ), --Unpivot inserted inserted_unpvt AS ( SELECT ContactID, FieldName, 
FieldValue FROM (SELECT ContactID , cast(Forename as sql_variant) Forename , cast(Surname as sql_variant) Surname , cast(Extn as sql_variant) Extn , cast(Email as sql_variant) Email , cast(Age as sql_variant) Age FROM inserted) p UNPIVOT (FieldValue FOR FieldName IN (Forename, Surname, Extn, Email, Age) ) AS inserted_unpvt ) --Join them together and show what's changed INSERT INTO Sample_Table_Changes (ContactID, FieldName, FieldValueWas, FieldValueIs) SELECT Coalesce (D.ContactID, I.ContactID) ContactID , Coalesce (D.FieldName, I.FieldName) FieldName , D.FieldValue as FieldValueWas , I.FieldValue AS FieldValueIs FROM deleted_unpvt d FULL OUTER JOIN inserted_unpvt i on D.ContactID = I.ContactID AND D.FieldName = I.FieldName WHERE D.FieldValue <> I.FieldValue --Changes OR (D.FieldValue IS NOT NULL AND I.FieldValue IS NULL) -- Deletions OR (D.FieldValue IS NULL AND I.FieldValue IS NOT NULL) -- Insertions END GO -- -------------------- Try some changes -------------------- UPDATE Sample_Table SET age = age+1; UPDATE Sample_Table SET Extn = '5'+Extn where Extn Like '221_'; DELETE FROM Sample_Table WHERE ContactID = 3; INSERT INTO Sample_Table VALUES (6,'Stephen','Turner','2299','st@example.com',25); UPDATE Sample_Table SET ContactID = 7 where ContactID = 4; --this will be shown as a delete and an insert -- -------------------- See the results -------------------- SELECT *, SQL_VARIANT_PROPERTY(FieldValueWas, 'BaseType') FieldBaseType, SQL_VARIANT_PROPERTY(FieldValueWas, 'MaxLength') FieldMaxLength from Sample_Table_Changes; -- -------------------- Cleanup -------------------- DROP TABLE dbo.Sample_Table; DROP TABLE dbo.Sample_Table_Changes; ``` So no messing around with bigint bitfields and arth overflow problems. If you know the columns you want to compare at design time then you don't need any dynamic SQL. 
On the downside, the output is in a different format and all the field values are converted to sql\_variant. The first issue could be fixed by pivoting the output again, and the second by recasting back to the required types based on your knowledge of the table's design, but both would require some complex dynamic SQL. Both of these might not be an issue in your XML output. This [question](https://stackoverflow.com/questions/49758169/merge-pending-data-changes-into-a-view) does something similar to getting the output back in the same format. Edit: Reviewing the comments below, if you have a natural primary key that could change, then you can still use this method. You just need to add a column that is populated by default with a GUID using the NEWID() function. You then use this column in place of the primary key. You may want to add an index on this field, but as the deleted and inserted tables in a trigger are in memory, it might not get used and may even have a negative effect on performance.
SQL Server Update Trigger, Get Only modified fields
[ "", "sql", "sql-server", "triggers", "sql-update", "database-replication", "" ]
I was reading the Wikipedia article on [SFINAE](http://en.wikipedia.org/wiki/Substitution_failure_is_not_an_error) and encountered following code sample: ``` struct Test { typedef int Type; }; template < typename T > void f( typename T::Type ) {} // definition #1 template < typename T > void f( T ) {} // definition #2 void foo() { f< Test > ( 10 ); //call #1 f< int > ( 10 ); //call #2 without error thanks to SFINAE } ``` Now I've actually written code like this before, and somehow intuitively I knew that I needed to type "typename T" instead of just "T". However, it would be nice to know the actual logic behind it. Anyone care to explain?
In general, C++'s syntax (inherited from C) has a technical defect: the parser MUST know whether something names a type, or not, otherwise it just can't solve certain ambiguities (e.g., is `X * Y` a multiplication, or the declaration of a pointer Y to objects of type X? it all depends on whether X names a type...!-). The `typename` "adjective" lets you make that perfectly clear and explicit when needed (which, as another answer mentions, is typical when template parameters are involved;-).
The short version is that you need to write `typename X::Y` whenever X is, or depends on, a template parameter. Until X is known, the compiler can't tell whether Y is a type or a value, so you have to add `typename` to specify that it is a type. For example: ``` template <typename T> struct Foo { typename T::some_type x; // T is a template parameter. `some_type` may or may not exist depending on what type T is. }; template <typename T> struct Bar { typename some_template<T>::some_type x; // `some_template` may or may not have a `some_type` member, depending on which specialization is used when it is instantiated for type `T` }; ``` As sbi points out in the comments, the cause of the ambiguity is that `Y` might be a static member, an enum or a function. Without knowing the type of `X`, we can't tell. The standard specifies that the compiler should assume it is a value unless it is explicitly labelled a type by using the `typename` keyword. And it sounds like the commenters really want me to mention another related case as well: ;) If the dependent name is a member function template, and you call it with an explicit template argument (`foo.bar<int>()`, for example), you have to add the `template` keyword before the function name, as in `foo.template bar<int>()`. The reason for this is that without the `template` keyword, the compiler assumes that `bar` is a value, and that you wish to invoke the less-than operator (`operator<`) on it.
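As a minimal compilable illustration of the rule above (the names `Test` and `echo` are invented for this example), here is a dependent type used both as a parameter and a return type; removing either `typename` makes the declaration unparseable:

```cpp
#include <cassert>

struct Test {
    typedef int Type;
};

// Without 'typename' the compiler would have to assume T::Type is a value
// (a static member or enumerator), and this declaration would fail to parse.
template <typename T>
typename T::Type echo(typename T::Type value) {
    return value;
}
```

Since `T` appears only in dependent positions it cannot be deduced, so calls must supply it explicitly, e.g. `echo<Test>(10)`.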
Why do you sometimes need to write `typename T` instead of just `T`?
[ "", "c++", "sfinae", "" ]
I have a bunch of files in a single directory that I would like to organize in sub-directories. This directory structure (which file would go in which directory) is specified in a file list that looks like this: `Directory: Music\` `-> 01-some_song1.mp3` `-> 02-some_song2.mp3` `-> 03-some_song3.mp3` `Directory: Images\` `-> 01-some_image1.jpg` `-> 02-some_image2.jpg` `......................` I was thinking of extracting the data (directory name and file name) and store it in a dictionary that would look like this: ``` dictionary = {'Music': (01-some_song1.mp3, 02-some_song2.mp3, 03-some_song3.mp3), 'Images': (01-some_image1.jpg, 02-some_image2.jpg), ...................................................... } ``` After that I would copy/move the files in their respective directories. I already extracted the directory names and created the empty dirs. For the dictionary values I tried to get a list of lists by doing the following: ``` def get_values(file): values = [] tmp = [] pattern = re.compile(r'^-> (.+?)$') for line in file: if line.strip().startswith('->'): match = re.search(pattern, line.strip()) if match: tmp.append(match.group(1)) elif line.strip().startswith('Directory'): values.append(tmp) del tmp[:] return values ``` This doesn't seem to work. Each list from the `values` list contains the same 4 file names over and over again. What am I doing wrong? I would also like to know what are the other ways of doing this whole thing? I'm sure there's a better/simpler/cleaner way.
No need to use regular expressions: ``` d = {} for line in open("file"): line=line.strip() if line.endswith("\\"): directory = line.split(":")[-1].strip().replace("\\","") d.setdefault(directory,[]) if line.startswith("->"): song=line.split(" ")[-1] d[directory].append(song) print d ``` output ``` # python python.py {'Images': ['01-some_image1.jpg', '02-some_image2.jpg'], 'Music': ['01-some_song1.mp3', '02-some_song2.mp3', '03-some_song3.mp3']} ```
I think the cause is that you are always reusing the same list. `del tmp[:]` clears the list in place and doesn't create a new instance. In your case, you need to create a new list with `tmp = []`. The following fix should work (I didn't test it): ``` def get_values(file): values = [] tmp = [] pattern = re.compile(r'^-> (.+?)$') for line in file: if line.strip().startswith('->'): match = re.search(pattern, line.strip()) if match: tmp.append(match.group(1)) elif line.strip().startswith('Directory'): values.append(tmp) tmp = [] return values ```
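To see the aliasing difference concretely, here is a minimal sketch (function names invented for the example) contrasting `del tmp[:]`, which clears the one list object already appended to `values`, with `tmp = []`, which rebinds the name to a fresh list and leaves the stored one untouched:

```python
def collect_broken():
    values, tmp = [], []
    for item in ("a", "b"):
        tmp.append(item)
        values.append(tmp)   # stores a reference to tmp, not a copy
        del tmp[:]           # clears that same shared list in place
    return values            # every stored reference now points at an empty list

def collect_fixed():
    values, tmp = [], []
    for item in ("a", "b"):
        tmp.append(item)
        values.append(tmp)
        tmp = []             # rebinds the name; the stored list is preserved
    return values
```

`collect_broken()` returns `[[], []]` because `values` holds two references to the same (emptied) list, while `collect_fixed()` returns `[['a'], ['b']]`.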
Copying files to directories as specified in a file list with python
[ "", "python", "file", "copy", "directory", "" ]
In an application I'm building I had an enumeration of account statuses: ``` public enum AccountStatus { Active = 1, Trial = 2, Canceled = 3 } ``` However, I needed more information from an AccountStatus so I made a class which has a few extra useful properties: ``` public class AccountStatus { public int Id {get; set;} public string Description {get; set;} public bool IsActive {get; set;} public bool CanReactivate {get; set;} } ``` This class get populated from a database table that might look like this: ``` 1, "Active", True, True 2, "Trial", True, True 3, "ExpiredTrial", False, True 4, "Expelled", False, False ``` This is really handy when I have a customer object that uses the AccountStatus because I can write code like: ``` if(customer.Status.CanReactivate) // Show reactivation form ``` However, I have lost something equally important. I can no longer do this: ``` if(customer.Status == AccountStatus.Active) // allow some stuff to happen ``` What would be the best way, if its even possible, to include something that will allow me to mimic the enumeration within the class. I know that I could add public static fields to the AccountStatus class, but ultimately this doesn't work because if the database changes the code would have to be manually updated. By this, I mean: ``` public static readonly AccountStatus Active = new AccountStatus(1); public static readonly AccountStatus Trial = new AccountStatus(2); // etc, etc ... ``` I imagine there is probably a pattern for this somewhere, I just don't know what its called. Any ideas? ***CLARIFICATION*** Based on the answers so far I need to clarify a couple of things. The table above is a brief example. In my actual table there a many records, I have 12 in there right now. Plus we can add more or remove some existing. This is what I meant by "dynamic" in my question title. Secondly, I gave a very simple use case for the ability I lost which apparently confused matters. 
Here is another real example: ``` if(customer.Status == AccountStatus.Trial || customer.Status == AccountStatus.ExpiredTrial) ``` ... neither Trial nor ExpiredTrial are boolean values on the property. I don't want to add them either. That would set an even worse precedent than the one I'm trying to avoid (meaning I would have to add a new property to the class every time I added a new record to the table). ***UPDATE*** I selected an answer which didn't really meet was I was looking for, but suggests that I was looking for something unnecessary. After thinking about this, I concur. While adding an enum or static fields does duplicate some work (ie, having the values in both code and in a table) I think the benefits outweigh the negatives.
But why can't you use the enumeration as a property of that class? ``` public enum State { Active = 1, Trial = 2, Canceled = 3 } public class AccountStatus { public int Id {get; set;} public State State {get; set;} public string Description {get; set;} public bool IsActive {get; set;} public bool CanReactivate {get; set;} } ``` And then: ``` if(customer.Status.State == State.Active) // allow some stuff to happen ```
Rather than working with a strongly-typed `enum`, you could just do comparisons using a string: ``` public static readonly AccountStatus Active = new AccountStatus("Active"); ``` or load the type from your database: ``` public static readonly AccountStatus Trial = new AccountStatus( reader["StatusField"] ); ``` You can then do explicit comparisons: ``` if(customer.Status == "Active") ``` You lose the strong typing, but that's what dynamic means :-). You can store the known string values in constants to get some of this back. **edit** You could of course do this using the corresponding integer values, like you hinted at the end of your post. But strings are easier to read, and in this case using integers doesn't offer any sort of typing benefits.
How can I make something like a dynamic enumeration in C#?
[ "", "c#", "design-patterns", "enums", "" ]
I'm trying to extract the shortUrl from the Bit.ly JSON response. The problem is that the original URL is used as a key in the response, so using dot notation to traverse the response doesn't work. I can get the other attributes (errorCode, errorMessage etc.), but I can't get anything under results because of the URL. Am I missing something? This is the response: ``` { "errorCode": 0, "errorMessage": "", "results": { "http://www.google.com/": { "hash": "2V6CFi", "shortKeywordUrl": "", "shortUrl": "http://bit.ly/1F5ewS", "userHash": "1F5ewS" } }, "statusCode": "OK" } ```
Javascript objects can be accessed via dot notation (`obj.property`) *only if* the property name is also a valid Javascript identifier. In your example, since a URL is clearly not a valid identifier, you can use the other method, array-style access (`obj[property]`): ``` var obj = { yahoo: 5, 'http://www.google.com': 10 }; // Both of these work just fine. var yahoo = obj.yahoo; var google = obj['http://www.google.com']; ```
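Applied to the response shown in the question, the lookup would be something like this sketch (`getShortUrl` is just an illustrative helper name):

```javascript
// The keys under 'results' are the original long URLs, which are not valid
// identifiers, so bracket notation is the only way to reach them.
function getShortUrl(response, longUrl) {
  return response.results[longUrl].shortUrl;
}

// The response object from the question, as parsed JSON.
const response = {
  errorCode: 0,
  errorMessage: "",
  results: {
    "http://www.google.com/": {
      hash: "2V6CFi",
      shortKeywordUrl: "",
      shortUrl: "http://bit.ly/1F5ewS",
      userHash: "1F5ewS"
    }
  },
  statusCode: "OK"
};
```

`getShortUrl(response, "http://www.google.com/")` then yields `"http://bit.ly/1F5ewS"`.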
`eval` will work to parse JSON, but it is often considered unsafe because it allows the JSON file to execute whatever code it likes. [This question](https://stackoverflow.com/questions/945015/alternatives-to-javascript-eval-for-parsing-json) discusses why and indicates some safer ways to parse JSON.
Parsing Bit.ly JSON response in Javascript (url in json response)
[ "", "javascript", "json", "bit.ly", "" ]
I am trying to let the user upload an image from my facebook app using the following php ``` <?php echo render_header('Your'); define("MAX_SIZE", "1536"); function getExtension($str){ $i = strpos($str,"."); if(!$i) {return "";} $l = strlen($str) - $i; $ext = substr($str, $i+1, $l); return $ext; } $errors = 0; if(isset($_POST['Upload'])){ $image = $_FILES["file1"]["name"]; if($image){ $filename = stripslashes($_FILES["file1"]["name"]); $extension = getExtension($filename); $extension = strtolower($extension); if((strcasecmp($extension,"jpg") != 0) && (strcasecmp($extension,"jpeg") != 0) && (strcasecmp($extension,"png") != 0) && (strcasecmp($extension,"gif") != 0)) { $errors = 1; } else{ $size = filesize($_FILES['file1']['tmp_name']); if($size > MAX_SIZE*1024){ $errors = 2; } else{ $image_name = md5(uniqid()) . '.' . $extension; $newname = "../images/" . $image_name; $flName = "/images/" . $image_name; $copied = move_uploaded_file($_FILES['file1']['tmp_name'], $newname); if(!$copied){ $errors = 3; } } } } } if(isset($_POST['Upload']) && $errors == 0){ //add to database here ... if($errors == 0){ include "uploadedFile.php"; } } else{ $user_details = $fb->api_client->users_getInfo($user, 'first_name,last_name,pic_square'); $image_url = $user_details[0]['pic_square']; if($image_url == ""){ $image_url = "http://static.ak.fbcdn.net/pics/q_silhouette.gif"; } $user_name = $user_details[0]['first_name'] . " " . $user_details[0]['last_name']; if(isset($_POST['Upload']) && $errors == 0){ ?> <div id="error" class="error"> <h2 id="standard_error" name="standard_error">Failed to upload tattoo.</h2> <p id="standard_explanation" name="standard_explanation"> Error uploading file. This error occurred because either the photo was a size we don't support or there was a problem with the image file. 
<br/> </p> </div> <?php } ?> <div id="newalbum" align="center"> <form id="upload" enctype="multipart/form-data" name="upload" action="" method="post"> <table class="formtable" cellspacing="0" border="0"> <tbody> <tr class="tallrow"> <td class="label"> Upload Image: <br/> <small> You can upload <br/> JPG, GIF or PNG <br/> files. </small> </td> <td> <div id="files"> <div id="1"> <input id="file1" class="inputfile" type="file" name="file1" size="22"/> </div> </div> </td> </tr> </tbody> </table> <div class="formbuttons"> <input id="" class="inputbutton" type="submit" value="Upload Tattoo" name="Upload" /> <br/> <small>The file size limit 1.5 MB. If your upload does not work, try uploading a smaller picture.</small> <br/> </div> <?php } ?> ``` But when I execute this code and the user presses the "Upload" button, the value of $\_FILES['file1']['name'] comes out to be blank. Is this code permissible in Facebook apps? If not what is the correct way to upload files? Thanks **Edit** Ok so the problem is with facebook. They strip out all file tags from any request! It is suggested that we use iframes instead! Thanks for the help everyone!
Ok. Found the problem. Facebook strips all file tags before sending in a request. The solution is to use iframes instead.
Before you try to access the file name, try this... ``` switch ($_FILES['file1']['error']) { case UPLOAD_ERR_INI_SIZE: echo '<p class="warning">File Upload Failed! File too large.</p>'; break; case UPLOAD_ERR_FORM_SIZE: echo '<p class="warning">File Upload Failed! File exceeds limit.</p>'; break; case UPLOAD_ERR_PARTIAL: echo '<p class="warning">File Upload Failed! Please try again.</p>'; break; case UPLOAD_ERR_NO_TMP_DIR: echo '<p class="warning">File Upload Failed! No temp directory.</p>'; break; case UPLOAD_ERR_CANT_WRITE: echo '<p class="warning">File Upload Failed! Failed to write to disk.</p>'; break; case UPLOAD_ERR_EXTENSION: echo '<p class="warning">File Upload Failed!</p>'; break; } ``` This should tell you where the problem lies.
Facebook - cannot upload file using the <input type="file"> tag
[ "", "php", "facebook", "" ]
I have a dropdown menu inside a DIV. I want the dropdown to be hidden when the user clicks anywhere else. ``` $('div').blur(function() { $(this).hide(); }); ``` is not working. I know .blur works only with `<a>`, but in this case what is the simplest solution?
I think the issue is that divs don't fire the `onfocusout` event. You'll need to capture click events on the body and then work out whether the target was the menu div (or the button that opens it). If it wasn't, then the user has clicked elsewhere and the div needs to be hidden. ``` <head> <script> $(document).ready(function(){ $("body").click(function(e) { // ignore clicks on the menu itself and on the button that opens it if(e.target.id !== 'menu' && e.target.id !== 'menu_button'){ $("#menu").hide(); } }); }); </script> <style>#menu { display: none; }</style> </head> <body> <div id="menu_button" onclick="$('#menu').show();">Menu....</div> <div id="menu"> <!-- Menu options here --> </div> <p>Other stuff</p> </body> ```
Try giving your div a tabindex attribute so that it can receive focus (and therefore fire blur). Check [this](http://www.barryvan.com.au/2009/01/onfocus-and-onblur-for-divs-in-fx/) post for more information and a demo.
How to blur the div element?
[ "", "javascript", "jquery", "html", "onblur", "" ]
I'm essentially looking for a "@Ignore" type annotation with which I can stop a particular field from being persisted. How can this be achieved?
[`@Transient`](http://docs.oracle.com/javaee/7/api/javax/persistence/Transient.html) complies with your needs.
To ignore a field, annotate it with [`@Transient`](http://docs.oracle.com/javaee/7/api/javax/persistence/Transient.html) so it will not be mapped by hibernate. but then **jackson will not serialize** the field when converting to JSON. **If you need mix JPA with JSON**(omit by JPA but still include in Jackson) use [`@JsonInclude`](http://fasterxml.github.io/jackson-annotations/javadoc/2.0.0/com/fasterxml/jackson/annotation/JsonInclude.Include.html) : ``` @JsonInclude() @Transient private String token; ``` **TIP:** You can also use [JsonInclude.Include.NON\_NULL](http://fasterxml.github.io/jackson-annotations/javadoc/2.0.0/com/fasterxml/jackson/annotation/JsonInclude.Include.html#NON_NULL) and hide fields in JSON during deserialization when `token == null`: ``` @JsonInclude(JsonInclude.Include.NON_NULL) @Transient private String token; ```
What is the easiest way to ignore a JPA field during persistence?
[ "", "java", "database", "hibernate", "jpa", "" ]
I'm looking for a way to use the length of a match group in the replace expression with the c# regex.replace function. That is, what can I replace ??? with in the following example to get the desired output shown below? **Example:** ``` val = Regex.Replace("xxx", @"(?<exes>x{1,6})", "${exes} - ???"); ``` **Desired output** ``` X - 3 ``` **Note:** This is an extremely contrived/simplified example to demonstrate the question. I realize for this example a regular expression is not the ideal way of doing this. Just trust me that the real world application of the answer is part of a more complex problem that does necessitate the use of a RegEx replace here.
Try using the version of `Regex.Replace` that calls a function to determine what the replacement text should be: ``` public string Replace(string, MatchEvaluator); ``` <http://msdn.microsoft.com/en-us/library/aa332127(VS.71).aspx> The function could then look at the matched text (the `Match` object is supplied as the argument to the evaluator function) and return a string with the proper calculated value.
If you are using C# 3 you can simply create a MatchEvaluator from a lambda expression: ``` string val = Regex.Replace( "xxx", @"(?<exes>x{1,6})", new MatchEvaluator( m => m.Groups["exes"].Value[0] + " - " + m.Groups["exes"].Value.Length.ToString() ) ); ``` In C# 2 you can use a delegate: ``` string val = Regex.Replace( "xxx", @"(?<exes>x{1,6})", new MatchEvaluator( delegate(Match m) { return m.Groups["exes"].Value[0] + " - " + m.Groups["exes"].Value.Length.ToString(); } ) ); ```
How can I use a calculated value in a RegEx replace operation in C#?
[ "", "c#", "regex", "" ]
I have an XML message like so: ``` <root> <elementA>something</elementA> <elementB>something else</elementB> <elementC>yet another thing</elementC> </root> ``` I want to compare a message of this type produced by a method under test to an expected message, but I don't care about `elementA`. So, I'd like the above message to be considered equal to: ``` <root> <elementA>something different</elementA> <elementB>something else</elementB> <elementC>yet another thing</elementC> </root> ``` I'm using the latest version of [XMLUnit](http://xmlunit.sourceforge.net/). I'm imagining that the answer involves creating a custom `DifferenceListener`; I just don't want to reinvent the wheel if there's something ready to use out there. Suggestions that use a library other than XMLUnit are welcome.
Things have changed a lot for [XMLUnit](http://www.xmlunit.org/api/java/2.1.1/index.html) since this question was answered. You can now easily ignore a node when using a `DiffBuilder`: ``` final Diff documentDiff = DiffBuilder .compare(expectedSource) .withTest(actualSource) .withNodeFilter(node -> !node.getNodeName().equals(someName)) .build(); ``` If you then call `documentDiff.hasDifferences()` nodes added to filter will be ignored.
I wound up implementing a `DifferenceListener` that takes a list of node names (with namespaces) to ignore textual differences for: ``` public class IgnoreNamedElementsDifferenceListener implements DifferenceListener { private Set<String> blackList = new HashSet<String>(); public IgnoreNamedElementsDifferenceListener(String ... elementNames) { for (String name : elementNames) { blackList.add(name); } } public int differenceFound(Difference difference) { if (difference.getId() == DifferenceConstants.TEXT_VALUE_ID) { if (blackList.contains(difference.getControlNodeDetail().getNode().getParentNode().getNodeName())) { return DifferenceListener.RETURN_IGNORE_DIFFERENCE_NODES_IDENTICAL; } } return DifferenceListener.RETURN_ACCEPT_DIFFERENCE; } public void skippedComparison(Node node, Node node1) { } } ```
How do I ignore certain elements when comparing XML?
[ "", "java", "xml", "junit", "xmlunit", "" ]
I'm working on a class library and have opted for a design that makes implementation and thread safety slightly easier, but I'm wondering if there might be a better approach. A brief background: I have a multi-threaded heuristic algorithm within a class library that, once set up with a scenario, should attempt to solve it. I obviously want it to be thread safe, so that if someone changes anything while it is solving, that doesn't cause crashes or errors. The current approach is that for a class A, I create a number of InternalA instances for each A instance. InternalA has many of the important properties of the A class, but is internal and inaccessible outside the library. The downside of this is that if I wish to extend the decision-making logic (or actually let someone do this outside the library), I need to change the code within InternalA (or provide some sort of delegate function). Does this sound like the right approach?
It's hard to really say from just that - but I *can* say that if you can make everything immutable, your life will be a lot easier. Look at how functional languages approach immutable data structures and collections. The less shared mutable data you have, the simpler the threading will be.
Why not create a generic class that accepts a locking policy as a type parameter (e.g. with Lock/Unlock members)? That way you could provide: * A thread-safe implementation (the implementation can use Monitor.Enter/Exit inside) * A system-wide safe implementation (using a Mutex) * An unsafe but fast implementation (using an empty implementation).
Thread Safe Class Library Design
[ "", "c#", "oop", "" ]
I have been able to work the date picker into jqGrid when editing inline, but I am unable to use it inside the add/edit window. Does anyone have instructions on how to do this or an example I can look at? A demo from that site of what I am trying to do: <http://www.the-di-lab.com/demo/apples> I read that I could use the following method but am not sure how to integrate it: ``` dataInit : function (elem) { $(elem).datepicker(); } ```
It looks like they are using 'afterShowForm' to attach a date/color picker to a div. (view source) ``` jQuery("#list").navGrid("#pager",{edit:true,add:true,del:true}, {width:400,height:400,closeAfterEdit:true, afterShowForm:function(){ $("#jsrs").load("/demo/apples/jsrs"); }, onclickSubmit:function() { $("#jsrs").empty(); } }, ``` (view source) ``` http://www.the-di-lab.com/demo/apples/jsrs //Js for colorPicker $('#color').ColorPicker({ onSubmit: function(hsb, hex, rgb) { $('#color').val("#"+hex); }, onBeforeShow: function () { $(this).ColorPickerSetColor(this.value); } }).bind('keyup', function(){ $(this).ColorPickerSetColor(this.value); }); //Js for datePicker $('#date').DatePicker({ format:'Y-m-d', date: $('#date').val(), current: $('#date').val(), starts: 1, position: 'bottom', onBeforeShow: function(){ $('#date').DatePickerSetDate($('#date').val(), true); }, onChange: function(formated, dates){ $('#date').val(formated); } }); ``` Thanks for finding this example, I was looking for how to do this as well.
Adding a datepicker is an easy task: ``` colModel: [ ... other column definitions ... { name:'my_date', index:'my_date', label: 'Date', width: 80, editable: true, edittype: 'text', editoptions: { size: 10, maxlength: 10, dataInit: function(element) { $(element).datepicker({dateFormat: 'yy.mm.dd'}) } } }, ... other column definitions ... ] ``` Of course, instead of `.datepicker` you can use any plugin like colorpicker or autocomplete.
JQGrid / Date Picked within Add/Edit window
[ "", "javascript", "jquery", "jqgrid", "datepicker", "jqmodal", "" ]
I'm terrible at SQL. I do not know if what I am trying to do is possible. But, because of our data structure, I need to solve this problem this way or do a massive architectural change. I am trying to count the number of 'Provinces' (a.k.a. states) for a country. However, there are just a few provinces that need to be excluded from the count. Because of this, I am trying to retrieve a list of countries, with a count of the provinces in each country. As an example, I need to query for the United States and exclude 'Washington D.C.' from the count. The reason is that, by our requirements, Washington D.C. is not a state. Here is what I am trying at the moment (it does not work): ``` SELECT c.Name AS 'CountryName', ISNULL(COUNT(p.[ID]), 0) as 'ProvinceCount' FROM Country c LEFT OUTER JOIN [Province] p ON p.[CountryID]=c.[ID] WHERE c.[ID]=@idParameter and p.[Name] <> 'Washington D.C.' ``` As you can imagine, this query does not return the correct results when the idParameter matches that of the United States. How do I get the correct count while figuring in exceptions? Thank you very much for your help.
You need a `GROUP BY` clause to get a proper count, and the province-name filter has to go in the `ON` clause rather than the `WHERE` clause - otherwise it turns the outer join back into an inner join, and countries with no valid provinces disappear instead of showing '0'. ``` select c.Name as 'CountryName', count(p.[ID]) as 'ProvinceCount' from Country c left outer join Province p on p.CountryID = c.[ID] and p.[Name] not in ('Washington D.C.', 'Another State') where c.[ID] = @idParameter group by c.Name ```
You don't want to list out text; a typo can cause a hard-to-see bug. You also want to eventually make it so the user can have a page to maintain this themselves. So: ``` ALTER TABLE Province ADD IsState bit GO UPDATE Province set IsState = 1 where Name not in ('Washington D.C', 'Another State') GO UPDATE Province SET IsState = 0 WHERE IsState IS NULL GO -- double check the data at this point by browsing it... SELECT c.name AS 'country name', isnull(count(1), 0) AS 'province count' FROM Country c INNER JOIN Province p ON p.CountryID = c.[ID] WHERE c.[ID] = @idParameter AND p.IsState = 1 GROUP BY c.name ORDER BY 1 GO ```
SQL - LEFT OUTER JOIN and WHERE clause
[ "", "sql", "join", "" ]
I have an array of float rtmp1[NMAX \* 3][3], and it is used as rtmp1[i][n], where n is from 0 to 2, and i is from 0 to 3 \* NMAX - 1. However, I would like to convert rtmp1 to be rtmp1[3 \* 3 \* NMAX]. Would addressing this new 1D array as rtmp1[3 \* i + n] be equivalent to rtmp1[i][n]? Thanks in advance for the clarifications.
rtmp1[i][n] is equivalent to rtmp1[i\*3 + n] - the multiplier is the inner dimension (the row width, which is 3 here), not NMAX. See <http://www.cplusplus.com/doc/tutorial/arrays/>, where the multiplier is what they call the width.
Yes, but what are you trying to prove by doing this? rtmp1[i][n] would most likely have better execution time and is easier to read. "Actually, you'd want to use rtmp[i + 3\*n]" what is the difference? All you are doing is swapping addresses.
C++: 2D arrays vs. 1D array differences
[ "", "c++", "arrays", "multidimensional-array", "" ]
As a starting programmer with little industry experience, I find it frustrating to learn a new project which is based on many technologies that I am not familiar with. The project I am working on has about 150 classes with 15000 lines of code, and with its extensive focus on socket and security programming, which I have zero experience in, I am not sure where I should start. With limited documentation and help at hand, self-study is my best resource in trying to understand this project as a whole. I often find myself spending way too much time learning insignificant features of the product while missing out on the crucial classes that I should really be focusing on... The main reason why it takes so much time is that I often have to look into the Java API reference every few minutes to understand a small code block... I am sure I will eventually learn it through trial and error, but I am sure that there are some useful advice and guidelines I can use :)
A few things come to mind: 1. Spend a little time getting familiar with the JDK and its standard classes. Having knowledge off the top of your head will take time and a lot more checking the API spec, but you can also spend some time just browsing without a particular thing you're looking up. 2. If your project is using some frameworks or libraries, you can often get a high-level view of what these bring to the project by reading the "intro" page on the project site. I think this might be of particular help to you, since you cited unfamiliarity with some of the technologies used in the project as a source of frustration. 3. If there are any functional integration or regression tests, these can often be a good way to get a handle on what the main entry points into the project are. Having a good grasp of the high-level functionality of a project is often helpful when trying to understand the little details. 4. If you can find a mentor on your team to show you the ropes, that will probably help a lot. I think, based on the size of the project you mentioned, that this can be a gentle introduction to production code for you. It might seem big now, but 15000 lines of code is on the smaller side of the projects you might eventually work on during the course of your career. Remember also that this is necessarily going to be a learning experience for you. It's one of your first projects in the industry, so it might take a little while to get used to things. Keep in mind that you're not the first person to have to swim in library / framework soup in an unfamiliar code base. Good luck!
Initially, you don't need to understand every line of code. Borrow a senior developer for a half-hour and ask him to give you the birds-eye view of the architecture - what the major blocks of code are, how they interact, and how the user / usage drives data through the system. Then spend some time investigating the source for the modules you feel (after the explanation) will give you the best insight into "how it all works". I have a (probably quite odd) habit of printing out large blocks of sourcecode, and covering a floor with the printouts. Then I can kneel down and crawl around on the floor with a pen and literally draw arrows from point to point, and draw around sections - I find that displaying code in 2D makes it easier to figure things out. It also enables making copious notes that help me understand the flow in more detail. Before long, you'll start to recognise idioms (stylised ways of doing things) that characterise the code, and eventually you'll find your way into the mindset of the authors. Then everything will be a lot simpler. While you're on the floor, crawling around, have a laptop+google handy, so you can decipher anything odd you encounter. Also: Coloured highlighter pens FTW. Make (at least) two passes at understanding the source. The first time don't try to understand any of the minutiae... try to get a feel for "movement" - where data goes, and where execution goes. That will give you a framework for your mental model of the code. When you go through next time, you can start to break down details, but a top-down approach always makes things easier for me. If you're not familiar with the technologies, language or environment, then do see if there are any books around you can grab. There's a lot more visible space in the real world than you can fit on a computer screen, and having google on a laptop, syntax/library references in a book, and the code all around you makes (for me at least) the whole process VASTLY simpler.
Java: what are some steps to learn a project where you have a little or no experience
[ "", "java", "legacy-code", "" ]
I want to check if a generic variable is of a certain type but don't want to check the generic part. Let's say I have a variable of `List<int>` and another of `List<double>`. I just want to check if it is of type `List<>` ``` if(variable is List) {} ``` And not ``` if (variable is List<int> || variable is List<double>) {} ``` is this possible? Thanks
``` variable.GetType().IsGenericType && variable.GetType().GetGenericTypeDefinition() == typeof(List<>) ``` Of course, this only works if variable is of type `List<T>`, and isn't a derived class. If you want to check if it's `List<T>` or inherited from it, you should traverse the inheritance hierarchy and check the above statement for each base class: ``` static bool IsList(object obj) { Type t = obj.GetType(); do { if (t.IsGenericType && t.GetGenericTypeDefinition() == typeof(List<>)) return true; t = t.BaseType; } while (t != null); return false; } ```
You can test an *exact* type via reflection: ``` object list = new List<int>(); Type type = list.GetType(); if (type.IsGenericType && type.GetGenericTypeDefinition() == typeof(List<>)) { Console.WriteLine("is a List-of-" + type.GetGenericArguments()[0].Name); } ``` Personally, though, I'd look for `IList<T>` - more versatile than the concrete `List<T>`: ``` foreach (Type interfaceType in type.GetInterfaces()) { if (interfaceType.IsGenericType && interfaceType.GetGenericTypeDefinition() == typeof(IList<>)) { Console.WriteLine("Is an IList-of-" + interfaceType.GetGenericArguments()[0].Name); } } ```
check for generic type
[ "", "c#", "generics", "" ]
Ok, so I'm a complete newb with Oracle. Now that that's out of the way, I think you can get an understanding of what I'm trying to do below. For each stored procedure found, output the DDL to a file named after it. The problem is I can't figure out how to get the spool target to pick up the value of FileName, which is being set by the cursor. ``` DECLARE objName varchar2(50); FileName varchar2(50); cursor curProcs is select OBJECT_NAME into objName FROM ALL_PROCEDURES WHERE OWNER = 'AMS' ORDER BY OBJECT_NAME; -- get all procs in db BEGIN open curProcs; if curProcs%ISOPEN THEN LOOP FETCH curProcs into objName; EXIT WHEN curProcs%NOTFOUND; FileName := 'C:\ ' || objName || '.PRC.SQL'; spool FileName; --BREAKS DBMS_METADATA.GET_DDL('PROCEDURE',objName); spool off; END LOOP; END IF; END; ``` Any ideas as to where I'm going wrong? If anyone has an example of this I would appreciate it greatly. I feel I must be dancing around it, because if I create a column initially then ``` spool &ColName ``` I get a result; I just can't seem to dynamically change that &colname. Thanks for your help.
SPOOL is a SQL*Plus directive and you can't mix it into the PL/SQL anonymous block. If you're going to do this purely in SQL*Plus, I think the general idea would be to process in two passes, i.e. use a first script that dynamically generates the spool filename references into a second script that actually makes the dbms\_metadata call. [Edit] This should be close to what you need - maybe a line termination problem, depending on your platform. Note that `dbms_metadata.get_ddl` is a function, so the generated script should select it from `dual` rather than wrap it in a `begin`/`end` block: ``` set pagesize 0 set linesize 300 set long 100000 spool wrapper.sql select 'spool '||object_name||'.sql'||chr(10)|| 'select dbms_metadata.get_ddl('||chr(39)||object_type||chr(39)||','||chr(39)||object_name||chr(39)||') from dual;'||chr(10)|| 'spool off' from user_objects where object_type = 'PROCEDURE' ; spool off ```
I found a better solution without the need to declare/begin/end code blocks or query statements. A sample spool filename with the date and time can be achieved by: ``` sql> column dt new_value _dt sql> select to_char(sysdate,'ddMONyyyy_hh24mi') dt from dual; sql> spool &_dt ``` My file name: 27JUN2011\_1727.lst You can even specify the file extension if you need to (eg .txt). Just create another variable. source: <http://oracle.ittoolbox.com/groups/technical-functional/oracle-apps-l/variable-file-name-with-spool-1508529>
ORACLE How to use spool with dynamic spool location
[ "", "sql", "oracle", "oracle10g", "" ]
I'm having a dilemma with the Google Maps API - I'm using the `GDirections` object to get the time needed to travel between two points. I've tested the following code in Firebug, so that the chances of the rest of my code affecting this one call are low (I can't link to the actual map; it's on a local computer). ``` var start = "NY, USA"; var end = "CA, USA"; var searchString = "from: " + start + " to: " + end; C.console(searchString); var myDir = new GDirections(); GEvent.addListener(myDir,"load",C.console(myDir,getDuration())); myDir.load(searchString); ``` `C.console` is just a function I wrote to print its arguments to the Firebug debug log. When I run that code, it outputs the searchString `from: NY, USA to: CA, USA` and the callback function is called. However, it outputs `null` rather than the duration of the `GDirections` object. Next, I run ``` C.console(myDir.getDuration()) ``` and it outputs ``` Object seconds=157994 html=1 day 20 hours ``` as desired. Does anyone know why it takes two calls for this to work? I think it's time dependent, because in the code if I simply call it twice it'll give me null two times in a row, which wasn't quite surprising. I already use the event listener, though, to wait for it to finish loading. I've also tried using the `addoverlay` event instead of the `load` event, but that won't work either. Has anyone seen this before or have any ideas as to how I can fix it? Any help would be greatly appreciated!
This has truly boggled me. I played with the code, and this works: ``` var start = "NY, USA"; var end = "CA, USA"; var searchString = "from: " + start + " to: " + end; C.console(searchString); var directions = new GDirections(); GEvent.addListener(directions,"load",function() { C.console(directions.getDuration()); }); directions.load(searchString); ``` Apart from a variable change (`myDir` is now `directions`), I used an anonymous function for the callback function. It seems it was a scope problem before, and maybe the second time through the variable had been defined and so it wasn't `null` anymore. Anyhow, this seems to work properly now - I hope this helps others too.
Your problem is that this line ``` GEvent.addListener(myDir,"load",C.console(myDir,getDuration())); ``` causes C.console to be executed immediately, and whatever it returns is used as the callback function. That's not what you want. What you want is for C.console to be called when the callback occurs. One way to do that is ``` GEvent.addListener(myDir,"load",function() { C.console(myDir.getDuration()) }); ```
Google Maps API - GDirections.getDuration() returns null first time, works second time
[ "", "javascript", "google-maps", "event-handling", "" ]
I have a form with lots of data to be posted, so i was wondering if there is any way to get all the data to be posted automatically. like for example i sent data this way ``` $.ajax({ type: 'post', url: 'http://mymegafiles.com/rapidleech/index.php', data: 'link=' + $('#link').val() + '&yt_fmt' + $('#yt_fmt').val(), }); ``` but there are so many fields that it doesnt look a good idea to me. Thank You.
``` $.ajax({ type: 'post', url: 'http://mymegafiles.com/rapidleech/index.php', data: $('#formId').serialize() }); ```
Assuming the fields are all part of a form, you'll be wanting to use [$('form').serialize()](http://docs.jquery.com/Ajax/serialize)
automatic form post data
[ "", "javascript", "jquery", "" ]
I'm using the Qt framework, which has non-blocking I/O by default, to develop an application navigating through several web pages (online stores) and carrying out different actions on these pages. I'm "mapping" each specific web page to a state machine which I use to navigate through that page. This state machine has these transitions; `Connect, LogIn, Query, LogOut, Disconnect` and these states; `Start, Connecting, Connected, LoggingIn, LoggedIn, Querying, QueryDone, LoggingOut, LoggedOut, Disconnecting, Disconnected` Transitions from \*ing to \*ed states (`Connecting->Connected`) are due to `LoadFinished` asynchronous network events received from the network object when the currently requested url is loaded. Transitions from \*ed to \*ing states (`Connected->LoggingIn`) are due to events sent by me. I want to be able to send several events (commands) to this machine (like Connect, LogIn, Query("productA"), Query("productB"), LogOut, LogIn, Query("productC"), LogOut, Disconnect) **at once** and have it process them. I **don't want to block** waiting for the machine to finish processing all the events I sent to it. The problem is they have to be interleaved with the above-mentioned network events informing the machine about the url being downloaded. Without interleaving the machine can't advance its state (and process my events), because advancing from \*ing to \*ed occurs only after receiving a network type of event. How can I achieve my design goal? **EDIT** 1. The state machine I'm using has its own event loop and events are not queued in it, so they could be missed by the machine if they come when the machine is busy. 2. Network I/O events are not posted directly to either the state machine or the event queue I'm using. They are posted to my code (handler) and I have to handle them. I can forward them as I wish, but please keep in mind remark no. 1. 3. Take a look at my *answer* to this question, where I described my current design in detail. 
The question is if and how I can improve this design by making it * More robust * Simpler
Sounds like you want the state machine to have an event queue. Queue up the events, start processing the first one, and when that completes pull the next event off the queue and start on that. So instead of the state machine being driven by the client code directly, it's driven by the queue. This means that any logic which involves using the result of one transition in the next one has to be in the machine. For example, if the "login complete" page tells you where to go next. If that's not possible, then the event could perhaps include a callback which the machine can call, to return whatever it needs to know.
When asking this question I already had a working design, which I didn't want to write about so as not to skew the answers in any direction :) I'm going to describe in this pseudo-answer what the design I have is. In addition to the state machine I have a queue of events. Instead of posting events directly to the machine I place them in the queue. There is however a problem with network events, which are asynchronous and can come at any moment. If the queue is not empty and a network event comes, I can't place it in the queue, because the machine will be stuck waiting for it before processing the events already in the queue. And the machine will wait forever, because this network event is waiting behind all the events placed in the queue earlier. To overcome this problem I have two types of messages: normal and priority ones. Normal ones are those sent by me and priority ones are all the network ones. When I get a network event I don't place it in the queue but instead send it directly to the machine. This way it can finish its current task and progress to the next state before pulling the next event from the queue of events. Designed this way, it works only because there is exactly a 1:1 interleaving of my events and network events. Because of this, when the machine is waiting for a network event it's not busy doing anything (so it's ready to accept it and does not miss it), and vice versa - when the machine waits for my task it's only waiting for my task and not another network one. I asked this question in hope of some simpler design than what I have now.
How to design a state machine in face of non-blocking I/O?
[ "", "c++", "qt", "events", "state-machine", "qstatemachine", "" ]
I asked [this question in regard to SQL Server](https://stackoverflow.com/questions/612480/can-i-use-sql-to-plot-actual-dates-based-on-schedule-information), but what's the answer for an Oracle environment (10g)? If I have a table containing schedule information that implies particular dates, is there a SQL statement that can be written to convert that information into actual rows, using something like MSSQL's Commom Table Expressions, perhaps? Consider a payment schedule table with these columns: * **StartDate** - the date the schedule begins (1st payment is due on this date) * **Term** - the length in months of the schedule * **Frequency** - the number of months between recurrences * **PaymentAmt** - the payment amount :-) ``` SchedID StartDate Term Frequency PaymentAmt ------------------------------------------------- 1 05-Jan-2003 48 12 1000.00 2 20-Dec-2008 42 6 25.00 ``` Is there a single SQL statement to allow me to go from the above to the following? ``` Running SchedID Payment Due Expected Num Date Total -------------------------------------- 1 1 05-Jan-2003 1000.00 1 2 05-Jan-2004 2000.00 1 3 05-Jan-2005 3000.00 1 4 05-Jan-2006 4000.00 2 1 20-Dec-2008 25.00 2 2 20-Jun-2009 50.00 2 3 20-Dec-2009 75.00 2 4 20-Jun-2010 100.00 2 5 20-Dec-2010 125.00 2 6 20-Jun-2011 150.00 2 7 20-Dec-2011 175.00 ``` Your thoughts are appreciated.
I didn't set out to answer my own question, but I'm doing work with Oracle now and I have had to learn some new Oracle-flavored things. Anyway, the CONNECT BY statement is really nice--yes, much nicer than MSSQL's hierarchical query approach, and using that construct, I was able to produce a very clean query that does what I was looking for: ``` SELECT DISTINCT t.SchedID ,level as PaymentNum ,add_months(T.StartDate,level - 1) as DueDate ,(level * t.PaymentAmt) as RunningTotal FROM SchedTest t CONNECT BY level <= (t.Term / t.Frequency) ORDER BY t.SchedID, level ``` My only remaining issue is that I had to use DISTINCT because I couldn't figure out how to select my rows from DUAL (the affable one-row Oracle table) instead of from my table of schedule data, which has at least 2 rows. If I could do the above with FROM DUAL, then my DISTINCT indicator wouldn't be necessary. Any thoughts? Other than that, I think this is pretty nice. Et tu?
Oracle actually has syntax for hierarchical queries using the CONNECT BY clause. SQL Server's use of the WITH clause looks like a hack in comparison: ``` SELECT t.SchedId, CASE LEVEL WHEN 1 THEN t.StartDate ELSE ADD_MONTHS(t.StartDate, t.frequency) END 'DueDate', CASE LEVEL WHEN 1 THEN t.PaymentAmt ELSE SUM(t.paymentAmt) END 'RunningExpectedTotal' FROM PaymentScheduleTable t WHERE t.PaymentNum <= t.Term / t.Frequency CONNECT BY PRIOR t.startdate = t.startdate GROUP BY t.schedid, t.startdate, t.frequency, t.paymentamt ORDER BY t.SchedId, t.PaymentNum ``` I'm not 100% on that - I'm more confident about using: ``` SELECT t.SchedId, t.StartDate 'DueDate', t.PaymentAmt 'RunningExpectedTotal' FROM PaymentScheduleTable t WHERE t.PaymentNum <= t.Term / t.Frequency CONNECT BY PRIOR t.startdate = t.startdate ORDER BY t.SchedId, t.PaymentNum ``` ...but it doesn't include the logic to handle when you're dealing with the 2nd+ entry in the chain to add months & sum the amounts. The summing could be done with GROUP BY CUBE or ROLLUP depending on the detail needed.
Can I use Oracle SQL to plot actual dates from Schedule Information?
[ "", "sql", "oracle", "oracle10g", "" ]