| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I am using SmtpClient to send an email with an attachment.
However, for a certain batch we need to somehow save the MailMessage objects instead of sending them.
We are then thinking/hoping to manually upload the messages to the user's drafts folder.
Is it possible to save these messages with the attachment intact (impossible, I would have thought)? Or, alternatively, to upload the messages to a folder in the user's account?
If anyone has any experience of this, I'd much appreciate a bit of help or a pointer. | When testing in ASP.NET we save our emails to a folder rather than send them through an email server. Maybe you could change your `web.config` settings like this for your batch?
```
<system.net>
  <mailSettings>
    <smtp deliveryMethod="SpecifiedPickupDirectory">
      <specifiedPickupDirectory pickupDirectoryLocation="c:\Temp\mail\"/>
    </smtp>
  </mailSettings>
</system.net>
```
**Additional Info:**
* [MSDN: <specifiedPickupDirectory> Element (Network Settings)](https://msdn.microsoft.com/en-us/library/ms164241.aspx)
* [Configuring SmtpClient to drop emails in a folder on disk](http://mikehadlow.blogspot.com/2010/01/configuring-smtpclient-to-drop-emails.html) | As well as the `SpecifiedPickupDirectory` information in the other answers, if you want to ensure your emails are sent to a folder relative to the site root - handy when testing on build servers where you don't know the paths - you can add a quick check in your email-sending code:
```
SmtpClient client = new SmtpClient();
...
// Add "~" support for pickupdirectories.
if (client.DeliveryMethod == SmtpDeliveryMethod.SpecifiedPickupDirectory && client.PickupDirectoryLocation.StartsWith("~"))
{
    string root = AppDomain.CurrentDomain.BaseDirectory;
    string pickupRoot = client.PickupDirectoryLocation.Replace("~/", root);
    pickupRoot = pickupRoot.Replace("/", @"\");
    client.PickupDirectoryLocation = pickupRoot;
}
```
And your tests will look something like this (make sure you use App_Data so IIS can write to the folder):
```
// Arrange - get SitePath from AppDomain.CurrentDomain.BaseDirectory + ..\
string pickupPath = Path.Combine(SitePath, "App_Data", "TempSmtp");

if (!Directory.Exists(pickupPath))
    Directory.CreateDirectory(pickupPath);

foreach (string file in Directory.GetFiles(pickupPath, "*.eml"))
{
    File.Delete(file);
}

// Act (send some emails)

// Assert
Assert.That(Directory.GetFiles(pickupPath, "*.eml").Count(), Is.EqualTo(1));
``` | How can I save an email instead of sending when using SmtpClient? | [
"c#",
".net",
"email",
"smtp"
] |
I need to split the different pieces of code stored in one file into many files.
The file is apparently shared by AWK's creators on their homepage.
The file is also [here](http://dpaste.com/12282/) for easy use.
**My attempt at the problem**
I can print the first word of each line (which names the file each piece of code belongs to) with
```
awk '{ print $1 }'
```
However, I do not know how
1. to get the exact line numbers so that I can use them
2. to collect the code between the specific lines so that the first word of each line is ignored
3. to put these separate pieces of code into new files named after the first word of the line
I am sure that the problem can be solved with AWK, and with Python too. Perhaps we need to use them together.
**[edit]** after the first answer
I get the following error when I try to execute it with awk
```
$awk awkcode.txt
awk: syntax error at source line 1
context is
>>> awkcode <<< .txt
awk: bailing out at source line 1
``` | Did you try to:
1. Create a file unbundle.awk with the following content:
$1 != prev { close(prev); prev = $1 }
{ print substr($0, index($0, " ") + 1) >$1 }
2. Remove the following lines from the file awkcode.txt:
**# unbundle - unpack a bundle into separate files**
$1 != prev { close(prev); prev = $1 }
{ print substr($0, index($0, " ") + 1) >$1 }
3. Run the following command:
awk -f unbundle.awk awkcode.txt | Are you trying to unpack a file in that format? It's a kind of shell archive. For more information, see <http://en.wikipedia.org/wiki/Shar>
If you execute that program with awk, awk will create all those files. You don't need to write or rewrite much. You can simply run that awk program, and it should still work.
First, view the file in "plain" format. <http://dpaste.com/12282/plain/>
Second, save the plain version of the file as 'awkcode.shar'
Third, I think you need to use the following command.
```
awk -f awkcode.shar
```
---
If you want to replace it with a Python program, it would be something like this.
```
import urllib2

data = urllib2.urlopen("http://dpaste.com/12282/plain/")

currName, currFile = None, None
for line in data:
    fileName, _, text = line.strip().partition(' ')
    if fileName != currName:
        # starting a new output file: close the previous one, open the next
        if currFile is not None:
            currFile.close()
        currName = fileName
        currFile = open(currName, "w")
    # write the line without its leading file-name word
    currFile.write(text + "\n")
if currFile is not None:
    currFile.close()
``` | Unable to separate codes in one file to many files in AWK/Python | [
"python",
"awk"
] |
I have the following layout for my mvc project:
* /Controllers
+ /Demo
+ /Demo/DemoArea1Controller
+ /Demo/DemoArea2Controller
+ etc...
* /Views
+ /Demo
+ /Demo/DemoArea1/Index.aspx
+ /Demo/DemoArea2/Index.aspx
However, when I have this for `DemoArea1Controller`:
```
public class DemoArea1Controller : Controller
{
    public ActionResult Index()
    {
        return View();
    }
}
```
I get the "The view 'index' or its master could not be found" error, with the usual search locations.
How can I specify that controllers in the "Demo" namespace search in the "Demo" view subfolder? | You can easily extend the WebFormViewEngine to specify all the locations you want to look in:
```
public class CustomViewEngine : WebFormViewEngine
{
    public CustomViewEngine()
    {
        var viewLocations = new[] {
            "~/Views/{1}/{0}.aspx",
            "~/Views/{1}/{0}.ascx",
            "~/Views/Shared/{0}.aspx",
            "~/Views/Shared/{0}.ascx",
            "~/AnotherPath/Views/{0}.ascx"
            // etc
        };

        this.PartialViewLocationFormats = viewLocations;
        this.ViewLocationFormats = viewLocations;
    }
}
```
Make sure you remember to register the view engine by modifying the Application_Start method in your Global.asax.cs:
```
protected void Application_Start()
{
    ViewEngines.Engines.Clear();
    ViewEngines.Engines.Add(new CustomViewEngine());
}
``` | Now in MVC 6 you can implement `IViewLocationExpander` interface without messing around with view engines:
```
public class MyViewLocationExpander : IViewLocationExpander
{
    public void PopulateValues(ViewLocationExpanderContext context) {}

    public IEnumerable<string> ExpandViewLocations(ViewLocationExpanderContext context, IEnumerable<string> viewLocations)
    {
        return new[]
        {
            "/AnotherPath/Views/{1}/{0}.cshtml",
            "/AnotherPath/Views/Shared/{0}.cshtml"
        }; // add `.Union(viewLocations)` to add default locations
    }
}
```
where `{0}` is the target view name, `{1}` is the controller name, and `{2}` is the area name.
You can return your own list of locations, merge it with default `viewLocations` (`.Union(viewLocations)`) or just change them (`viewLocations.Select(path => "/AnotherPath" + path)`).
To register your custom view location expander in MVC, add next lines to `ConfigureServices` method in `Startup.cs` file:
```
public void ConfigureServices(IServiceCollection services)
{
    services.Configure<RazorViewEngineOptions>(options =>
    {
        options.ViewLocationExpanders.Add(new MyViewLocationExpander());
    });
}
``` | Can I specify a custom location to "search for views" in ASP.NET MVC? | [
"c#",
"asp.net",
"webforms"
] |
[RDFLib](http://rdflib.net/) needs C extensions to be compiled to install on ActiveState Python 2.5; as far as I can tell, there's no binary installer anywhere obvious on the web. On attempting to install with `python setup.py install`, it produces the following message:
`error: Python was built with Visual Studio 2003;`
`extensions must be built with a compiler than can generate compatible binaries.`
`Visual Studio 2003 was not found on this system. If you have Cygwin installed,`
`you can try compiling with MingW32, by passing "-c mingw32" to setup.py.`
There are [various](http://boodebr.org/main/python/build-windows-extensions) [resources](http://isegserv.itd.rl.ac.uk/blogs/alistair/) on the web about configuring a compiler for distutils that discuss using MinGW, although I haven't got this to work yet. As an alternative I have VS2005.
Can anyone categorically tell me whether you can use the C compiler in VS2005 to build Python extension modules for a VS2003 compiled Python (in this case ActiveState Python 2.5). If this is possible, what configuration is needed? | I can't tell you categorically, but I don't believe you can. I've only run into this problem in the inverse situation (Python built with VS2005, trying to build with VS2003). Searching the web did not turn up any way to hack around it. My eventual solution was to get VC Express, since VC2005 is when Microsoft started releasing the free editions. But that's obviously not an option for you.
I don't use ActiveState Python, but is there a newer version you could use? The source ships with project files for VS2008, and I'm pretty sure the python.org binary builds stopped using VS2003 a while ago. | The main problem is the C run-time library. Python 2.4/2.5 is linked against msvcr71.dll, and therefore all C extensions should be linked against this DLL as well.
Another option is to use gcc (MinGW) instead of VS2005; you can use it to compile Python extensions only. There is a decent installer that allows you to configure gcc as the default compiler for your Python version:
<http://www.develer.com/oss/GccWinBinaries> | Can I use VS2005 to build extensions for a Python system built with VS2003 | [
"python",
"visual-studio-2005",
"visual-studio-2003",
"distutils"
] |
I've got an application here that I wrote many years ago that consists of a heavy-weight front end that directly queries a database server. This application runs on about 7 dedicated workstations. There is also a web-based front-end that I whipped up that shares the same feature set, and a web-based administration tool for managing and reporting on the data -- they all just hit the database directly.
The application is quite simple and I understand the problem it solves very well. It could use an update, and I don't even have access to the tools necessary to work on the GUI anymore. I've been getting into Java lately, and it seems like a rewrite of this app would be a good project to get started with.
So my question then is this:
The application will require a non-web GUI, I suppose in Swing. This is necessary for very particular reasons. The application will also require a web-based GUI with the same exact features as the Swing front that will probably be deployed as a JSR-168 portlet, and a web-based administration tool (portlet also). With my previous design I ended up with a lot of duplicate code because each component had its own code base, and I foolishly used stored procedures to help to ensure that critical calculations were at least consistent.
Where should I start? I'm having such a hard time wrapping my mind around how this should all work in the Java world. I guess what I'm having the hardest time with is how do I create an application that can have both a Swing (or whatever) front-end and a web-based front end with as little duplication as possible?
Edit: I know conceptually how this *can* work. What I'm asking is for advice specifically related to Java technologies. Which frameworks to consider, etc. | Build a core that contains the business logic. Use JDepend or a similar tool to ensure that it references nothing from Swing or anything web/JSP/servlet-related.
Build the two UIs: for the web version, pick a web framework of your choice and call your business logic from there.
For the Swing client you have two options: access the business logic through web services (you could use RMI or whatever, but I wouldn't), i.e. the logic lives on the same web server that serves the webapp (I'd probably prefer that). The alternative is to ship the business logic with the Swing GUI. That makes coding and debugging easier, but now you have multiple points that access the database, which causes headaches when you want to use caching.
In any case you should only duplicate the GUI itself: once in HTML/CSS/JavaScript and once in Swing.
Congrats on that project; it will teach you tons about design and software architecture. | Use a middle-tier server.
---
Swing Client -> middle-server with spring-remoting -> database
Web Client -> middle-server with spring-remoting -> database
---
Web client: write once. Any MVC framework will work: Stripes, Struts, even Grails if you are brave. Remember to keep it thin.
---
Swing client: write once, using MigLayout and GlazedLists.
**<http://www.miglayout.com/>**
**<http://publicobject.com/glazedlists/glazedlists-1.8.0/>**
Take a look at this posting:
**[Java Swing: Libraries, Tools, Layout Managers](https://stackoverflow.com/questions/458817/java-swing-libraries-tools-layout-managers)**
---
Middle server: write once, using JDBC, since you already have the database.
**<http://www.springsource.org/>**
---
Database: write once using whatever you like. It seems you already have this. | Where to start? Java application with multiple front ends
"java"
] |
I'm doing some Wave file handling and have them read from disk into an array of bytes. I want to quickly copy portions from this byte array into another buffer for intermediate processing. Currently I use something like this:
```
float[] fin = new float[size - offset];
byte[] buf;

// fill buf code omitted

for (int i = offset; i < size; i++)
{
    fin[i - offset] = (float) buf[i];
}
```
I feel that this is a slow method, because there is as much computation going on in the for loop conditional and increment as there is over in the actual body. If there was a block copy available in C#, or some other way I could implement a block copy, that would be great.
Maybe it isn't too slow, but it sure looks like a lot of work to move some data over. Here "size" is between 2^10 and 2^14. I am then handing the "fin" off to a FFT library, so this is by no means the slowest part of the code, maybe I'm barking up the wrong tree.
EDIT UPDATE:
I realize that micro optimizations are not where someone should spend their time, and I realize that profiling is a better way to achieve speedups overall, but I know that this code is in a 'hot path' and must be completed in under a third of a second on varying end user architectures to minimize our hardware system requirements. Even though I know that the following FFT code will be much more time consuming, I am looking for speedups where I can get them.
Array.Copy sure looks nice, I didn't know about that before, and I consider this Q&A a success already! | There is also:
```
Array.Copy
Array.CopyTo
```
but whether these will be faster will require profiling.
*But be warned* about focusing on micro-optimisations to the extent you miss the big picture, on modern PCs the effect of multi-level memory caching is likely to be greater than one approach or another to the copy.
Edit: Quick check in reflector: both of the above methods boil down to a common native implementation (good).
Note the docs for Array.Copy cover valid type conversions; a value-to-value widening conversion like byte to float should be OK. | Have a look at Array.Copy; it should be faster. | What is the fastest way to copy my array?
"c#",
"arrays",
"loops",
"optimization"
] |
Please look at the following file: (it is a complete file)
```
#ifndef TEES_ALGORITHM_LIBRARY_WRAPPER_H
#define TEES_ALGORITHM_LIBRARY_WRAPPER_H
#ifdef _TEES_COMPILE_AS_LIB
#include <dfa\Includes\DFC_algorithms.hpp>
#include <DFA\FuzzyClassifier\FuzzyAlgorithmIntialization\InitFuzzyAlgorithm.hpp>
typedef teesalgorithm::tees_fuzzy_algorithms algorithms_switchyard_class;
#else
#include <DFA\Includes\commercial_algorithms.hpp>
//An incomplete class to hide implementation
class algorithms_switchyard_class;
#endif
class AlgorithmLibraryWrapper {
algorithms_switchyard_class * algorithmPtr_;
typedef teesalgorithm::tees_paramObj paramObj_type;
typedef teesalgorithm::tees_errorObj errorObj_type;
typedef teesalgorithm::tees_statusObj statusObj_type;
typedef teesalgorithm::tees_dataObj dataObj_type;
typedef teesalgorithm::tees_outputObj outputObj_type;
public:
AlgorithmLibraryWrapper(const std::string& sAlgName, paramObj_type& paramObj, errorObj_type& errObj, statusObj_type& statusObj, const char* sFilePath);
static bool dataReader(const std::string& sFileName, dataObj_type& dataObj, errorObj_type& errObj, statusObj_type& statusObj);
bool runalgorithm(const dataObj_type& dataObj, outputObj_type& outObj, errorObj_type& errObj, statusObj_type& statusObj);
~AlgorithmLibraryWrapper();
};
#ifdef _TEES_USE_COMPILED_ALGORITHM_LIB
# ifdef _MSC_VER
#if _MSC_VER < 1400 // If VC 2003
#ifdef _DEBUG
#error No AlgorithmLibWrapper libraries compiled for this version of VC
#else
#error No AlgorithmLibWrapper libraries compiled for this version of VC
#endif
#elif defined(UNDER_CE) // Win CE
#ifdef _DEBUG
#pragma comment( lib, "AlgorithmLibWrapperCEd" )
#else
#pragma comment( lib, "AlgorithmLibWrapperCE" )
#endif
#else // If VC 2005
#ifdef _DEBUG
#pragma comment( lib, "AlgorithmLibWrapperd" )
#else
#pragma comment( lib, "AlgorithmLibWrapper" )
#endif
#endif
#endif
#endif
#endif //TEES_ALGORITHM_LIBRARY_WRAPPER_H
```
I am getting the following errors; I don't know why. I manually counted the preprocessor directives also.
> AlgorithmLibraryWrapper.hpp:10:1: unterminated #ifdef
> AlgorithmLibraryWrapper.hpp:7:1: unterminated #ifndef
I am using the poor vxWorks gcc compiler. Please let me know if the fault is mine or the compiler's. | It could be that the problem is in the included files (if there actually are unbalanced `#if`/`#endif`s).
I would try preprocessing with another compiler. You can use gcc for that; it doesn't matter that the code wouldn't compile. Just get gcc (or MinGW if you're on Windows) and run
```
cpp -Iinclude_direcories your_file
```
Or, if you don't like gcc, get MSVC Express edition. Again, you can preprocess code that even doesn't compile, so no problem with nonworking library etc.
Most compilers have an option that will give you the output from the preprocessor so you can check what it's doing. For example,
```
gcc -E file.c >file.preproc
```
will give you the pre-processed source so you can check the balancing of #if against #endif. | At a guess, one of the files you are #including from this one has a mismatched #ifdef/#endif pair. You need to look at all the files (as the preprocesor does), not just this one. | C++ preprocessor unexpected compilation errors | [
"c++",
"compiler-construction",
"c-preprocessor"
] |
I am following the tutorial here:
<http://nutch.sourceforge.net/docs/en/tutorial.html>
Crawling works fine, as does the test search from the command line.
When I try to fire up Tomcat after moving ROOT.war into place (it unarchives and creates a new ROOT folder during startup), I get a page with the 500 error and some errors in the Tomcat logs.
HTTP Status 500 - No Context configured to process this request
```
2009-02-19 15:55:46 WebappLoader[]: Deploy JAR /WEB-INF/lib/xerces-2_6_2.jar to C:\Program Files\Apache Software Foundation\Tomcat 4.1\webapps\ROOT\WEB-INF\lib\xerces-2_6_2.jar
2009-02-19 15:55:47 ContextConfig[] Parse error in default web.xml
org.apache.commons.logging.LogConfigurationException: User-specified log class 'org.apache.commons.logging.impl.Log4JLogger' cannot be found or is not useable.
at org.apache.commons.digester.Digester.createSAXException(Digester.java:3181)
at org.apache.commons.digester.Digester.createSAXException(Digester.java:3207)
at org.apache.commons.digester.Digester.endElement(Digester.java:1225) ............ etc.
```
So it looks like the root of the error is the default web.xml, not the Log4JLogger - although I know very little about Java. I did not edit the web.xml in the Tomcat dir.
Anyone know what is going on here?
versions/info:
nutch 0.9
Tomcat 4.1
jre1.5.0_08
jdk1.6.0_12
NUTCH_JAVA_HOME=C:\Program Files\Java\jdk1.6.0_12
JAVA_HOME=C:\Program Files\Java\jdk1.6.0_12 | For me that says it can't find the logger, which is itself reported as a parse error. An odd or misleading way to express it, I guess. Anyway, I think you need to add the [Commons Logging .jar](http://commons.apache.org/logging/) to your libraries (`WEB-INF/lib`) and restart Tomcat, and then it should work.
Also, your Tomcat seems to be ancient; if possible I'd recommend getting 5.5.x or 6.x. | In Java, applications sometimes rely on third party libraries. In this case, it appears that your Tomcat installation does not include one such library. Judging by the error you received, it appears that you are missing the [Apache Commons Logging](http://commons.apache.org/logging/) library (a commonly used library in the Java world that just so happens to not come bundled with Tomcat).
The typical way to distribute a library in Java is via a JAR (Java Archive) file. Simply put, a JAR file is a bunch of Java classes that have been zipped into a file renamed from *.zip to *.jar.
To obtain the Commons Logging JAR file, you can download it from the [Apache Commons download site](http://commons.apache.org/downloads/download_logging.cgi). You will want the binary version, not the source version. Should you happen to download version 1.1.1 (for example), you should unzip the `commons-logging-1.1.1-bin.zip` file. Inside, you will find a file named `commons-logging-1.1.1.jar`. Copy this JAR file to the `lib` directory wherever your Tomcat software is installed. You may be required to restart Tomcat before it notices this new file.
The next time you try to use the application, you may receive yet another error indicating that some other class cannot be found. In that case, I welcome you to the wonderful world of JAR hunting! :) Hopefully the application will not require too many libraries above and beyond Commons Logging, but we will see (considering you're trying to run Nutch, I can foresee it requiring [Lucene](http://www.apache.org/dyn/closer.cgi/lucene/java/), so be prepared for that).
Have fun with Nutch! | Problem running Java .war on Tomcat | [
"java",
"tomcat",
"nutch"
] |
The use case is long term serialization of complex object graphs in a textual format. | ## Short answer
if you expect humans to create/read the document (configuration files, reports, etc) then you may consider YAML, otherwise choose XML (for machine-to-machine communication).
## Long answer
### Length
Both XML and YAML documents are approximately the same length. Good XML libraries can skip all whitespace, while in YAML it is significant. A complex YAML document contains a lot of indentation spaces (do not use tabs!).
### Network failure
A prefix of a YAML document is often itself a valid document, so if a YAML document is incomplete there is no automatic way to detect it. An XML parser will always check whether a document is at least well-formed, and can check validity against a schema automatically.
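To make the contrast concrete, here is a minimal sketch using Python's standard-library XML parser; the sample document and the halfway truncation point are invented for the demonstration:

```python
# An XML parser always checks well-formedness, so a truncated
# document is rejected outright rather than silently accepted.
import xml.etree.ElementTree as ET

doc = "<root><item>1</item><item>2</item></root>"
truncated = doc[: len(doc) // 2]  # simulate a transfer cut off midway

ET.fromstring(doc)  # the complete document parses fine

try:
    ET.fromstring(truncated)
    detected = False
except ET.ParseError:
    detected = True  # the incomplete document is detected automatically

print(detected)  # True
```

A truncated YAML document, by contrast, will often still load as a shorter but valid document, so the damage can go unnoticed.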
### Language support
Many major programming languages support both [YAML](http://yaml.org/) and XML.
### General knowledge
You do not need to explain to a developer (even a junior one) what XML is. YAML is not that widely used yet.
### Schema
With XML both producer and consumer can agree on a Schema to establish a reliable data exchange format.
### Syntax
XML is very rich: namespaces, entities, attributes.
### External dependencies
Java and Python have XML support in the standard libraries. YAML requires an external dependency for these languages.
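As a small illustration of the standard-library point, a sketch that serializes a tiny made-up object graph to XML and parses it back without any third-party package (doing the same round trip in YAML would require installing something like PyYAML):

```python
# Round-trip a small object graph through XML using only the stdlib.
import xml.etree.ElementTree as ET

root = ET.Element("fund")                     # hypothetical sample data
ET.SubElement(root, "name").text = "Growth"
ET.SubElement(root, "balance").text = "1000"

text = ET.tostring(root, encoding="unicode")  # serialize to a string
parsed = ET.fromstring(text)                  # parse it back

print(parsed.findtext("name"))  # Growth
```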
### Maturity
XML specification is older and it is rock solid; whereas, YAML is still under construction. YAML 1.1 contains inconsistencies (there is even a [wiki](http://pyyaml.org/wiki/BugsInTheYAMLSpecification) to maintain the list of mistakes).
### XSLT
If you need to transform an XML document to another format (XML, HTML, YAML, PDF) you can use XSLT, while for YAML you have to write a program. | XML has XPath and XQuery.
But YAML has complex keys and lets you copy one value into many places with aliases (the `&`/`*` syntax).
I think YAML's strength is not human readability, like they say... I can read properly formatted XML just fine.
Actually YAML's strength is human *writability*. If you are on an iPhone typing up a quick bit of data as you architect a data model or such, YAML is so much easier to type in. The aliases are awesome for that.
Then you can convert it to whatever.
The more advanced features of YAML like strict typing and tags, I never use that. The YAML guys are currently discussing issuing a "micro YAML" to compete with JSON, for those of us who use it basically just as cleaner JSON with aliases.
But XML is better as a markup language where you want to visually distinguish what parts of the data are visible to the end user, as someone else pointed out. | What are the advantages and disadvantes of yaml vs xml for Object graph de/serialization? | [
"java",
"xml",
"serialization",
"yaml"
] |
I'm wondering if there's a better way to add dynamic methods to an existing object. Basically, I am trying to assemble new methods dynamically and then append them to an existing function.
This demo code works.
```
builder = function(fn, methods){
    //method builder
    for(p in methods){
        method = 'fn.' + p + '=' + methods[p];
        eval(method);
    }
    return fn;
}
test = {}
test = builder(test, {'one':'function(){ alert("one"); }','two':'function(){ alert("two"); }'} );
test.one();
test.two();
``` | You don't need to eval them each time.
You can create the function objects once, then assign them as properties to your objects.
```
var methods = {
    'increment': function() { this.value++; },
    'display':   function() { alert(this.value); }
};

function addMethods(object, methods) {
    for (var name in methods) {
        object[name] = methods[name];
    }
}

var obj = { value: 3 };
addMethods(obj, methods);
obj.display(); // "3"
obj.increment();
obj.display(); // "4"
```
The canonical, object-oriented way, however, is to use constructors and prototypes, but this isn't really dynamic in that each object you construct will have the same methods:
```
function MyObj(value) {
    this.value = value;
}

MyObj.prototype.increment = function() {
    this.value++;
};

MyObj.prototype.display = function() {
    alert(this.value);
};

var obj = new MyObj(3);
obj.display(); // "3"
obj.increment();
obj.display(); // "4"
``` | mhmh - I may be a bit late, but anyway:
```
new Function(argName1,...,argNameN, body)
```
for example:
```
x = new Function("y","return y*5");
x(3)
```
not *much* better than eval, though.
(it's a pity, but strings are used as code-description, not something more structured as in LISP) | Javascript: better way to add dynamic methods? | [
"javascript",
"dynamic",
"methods"
] |
I am running a wiki server for my group at work and recently moved it to a Fedora 8 OS. Everything works great except that an extension I wrote that contacts an MS SQL server fails because the function mssql_connect is not there.
On my old server I used FreeTDS (with ./configure --prefix=/usr/local/freetds --enable-msdblib) and built PHP with:
./configure \
--with-apxs2=/usr/local/apache/bin/apxs \
--with-mysql=/usr/local/mysql \
**--with-mssql=/usr/local/freetds** \
--enable-safe-mode \
--enable-ftp \
--enable-inline-optimization \
--enable-magic-quotes --enable-xml \
--with-gd \
--with-zlib-dir=/usr/lib \
--with-jpeg-dir=/usr/local/lib
Is there a way I can easily rebuild PHP with the "--with-mssql=/usr/local/freetds" configuration on Fedora? I would like to use yum to do this, but I don't see how. More to the point, I would like to avoid having to build everything from scratch. It's not that I don't know how, I would just like to avoid it.
Thanks for your your advice,
~Eric | So, the answer to adding mssql is as easy as "yum install php-mssql". Fixed my problem without any messing around. FreeTDS is installed and easily configurable. Thanks to everyone who tried to answer or left a comment.
If it is, can you use yum to install it, or download and compile [freetds](http://www.freetds.org/) yourself. I haven't done this but it'll be something like this...
1. Open shell and navigate to directory with freetds archive.
2. Execute tar zxvf to extract.
3. Execute cd
4. Run ./configure for the freetds build
5. Run make, (sudo) make install
Then find where freetds is installed and pass that folder to the ./configure command for PHP. | Fedora 8 howto rebuild custom PHP? | [
"php",
"sql-server",
"fedora",
"rebuild"
] |
I have two tables and I need to remove rows from the first table if an exact copy of a row exists in the second table.
Does anyone have an example of how I would go about doing this in MSSQL server? | Well, at some point you're going to have to check all the columns - might as well get joining...
```
DELETE a
FROM a -- first table
INNER JOIN b -- second table
ON b.ID = a.ID
AND b.Name = a.Name
AND b.Foo = a.Foo
AND b.Bar = a.Bar
```
That should do it... there is also `CHECKSUM(*)`, but this only *helps* - you'd still need to check the actual values to preclude hash-conflicts. | If you're using SQL Server 2005, you can use [intersect](http://msdn.microsoft.com/en-us/library/ms188055.aspx):
```
delete * from table1 intersect select * from table2
``` | How can I compare two tables and delete the duplicate rows in SQL? | [
"sql",
"sql-server",
"duplicate-data"
] |
Please help me - I'm new to NHibernate and I cannot seem to find what I'm looking for.
I have two tables in a database: `Fund` and `FundBalance`. A `Fund` can have many `FundBalances` and a `FundBalance` has only one `Fund`.
In C#, there is only the `FundBalance` class. Columns from the `Fund` table joined with columns from the `FundBalance` table need to be mapped onto properties of the `FundBalance` class.
For example, the `Fund` table contains the `FundName` column and the `FundBalance` table contains the `AvailableBalance` column. These two tables are joined, and the result of the join needs to be mapped to the `FundName` and `AvailableBalance` properties on the `FundBalance` class.
The question: how do I do this with NHibernate? Bonus: How do I specify the mapping using FluentNHibernate?
One solution that I thought of was to create a view in the database, but I would prefer it if the mapping could be done purely in NHibernate. | As I've asked in the comments: what exactly does this FundBalance class look like?
What goes in there?
Can you do something with the `<join table>` element in the NHibernate mapping?
For example:
<http://ayende.com/Blog/archive/2007/04/24/Multi-Table-Entities-in-NHibernate.aspx> | You don't need to use a view to solve your problem. You just need to be specific about the join when you create the mapping for the FundBalance table.
If my understanding is correct, you want your FundBalance class to be more complete and include some properties from the Fund table.
Try this:
```
<class name="FundBalance" table="FundBalance" lazy="true">
  <id name="Id" column="FundBalanceId" unsaved-value="0">
    <generator class="native"/>
  </id>
  <property name="AvailableBalance" not-null="true"/>
  <join table="Fund">
    <key column="FundId"/>
    <property name="FundName"/>
  </join>
</class>
```
For your FundBalance entity class the code will be:
```
public class FundBalance
{
    private long _Id;
    public virtual long Id
    {
        get { return _Id; }
        set { _Id = value; }
    }

    private decimal _AvailableBalance;
    public virtual decimal AvailableBalance
    {
        get { return _AvailableBalance; }
        set { _AvailableBalance = value; }
    }

    private string _FundName;
    public virtual string FundName
    {
        get { return _FundName; }
        set { _FundName = value; }
    }
}
```
If you need more samples and explanation of the problems of mapping multiple tables to one entity, try these links:
<https://svn.sourceforge.net/svnroot/nhibernate/trunk/nhibernate/src/NHibernate.Test/Join/>
<http://ayende.com/Blog/archive/2007/04/24/Multi-Table-Entities-in-NHibernate.aspx>
I gave you the HBM mapping files; for Fluent NHibernate I don't know if you can use the JOIN keyword, but basically it's the same problem.
Hope this helps. | NHibernate mapping after a normalisation | [
"",
"c#",
"nhibernate",
"fluent-nhibernate",
"nhibernate-mapping",
""
] |
I'm soon going to be starting the development stage of my coursework and the current design is to have a single class that handles database connections and many classes that call it; the database class is supposed to open the connection and pass queries along blindly; other classes are responsible for the contents of those queries.
What I want to know is: what is the best way for these references to know about the database class? My instinct would be to make the methods in the Database class static and call them as Database.method(), but is there a better way?
P.S. Was "reference" the right word for this? If not, what is the right word? It would help next time I have a similar question. | Be cautious.
Singleton will be a bottleneck.
java.sql.Connection is NOT thread safe, so you could run into issues there.
I'd recommend writing your app with service and persistence layers completely separated. Both should be based on interfaces. The service layer is the one that knows about units of work and transactions, using model and persistence objects to fulfill them. The service should be responsible for acquiring the Connection, making it available to the persistence tier, handling transactions, and closing the Connection in the same method scope in which it was created.
Scope and cleanup are key. If you don't do this, you'll exhaust the database connections.
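As an illustration of that scoping rule (all names here are hypothetical, and a tiny stub stands in for `java.sql.Connection` so the sketch runs anywhere):

```java
import java.util.ArrayList;
import java.util.List;

// Stub standing in for java.sql.Connection -- just enough to show the scoping.
interface Connection { void close(); }

// Persistence layer: uses a Connection it is GIVEN; it never creates one.
class PersonDao {
    private final Connection conn;
    PersonDao(Connection conn) { this.conn = conn; }
    void save(String name, List<String> log) { log.add("saved " + name); }
}

// Service layer: owns the unit of work. It acquires, uses, and closes the
// connection in the same method scope, so a connection can never leak.
class PersonService {
    List<String> register(String name) {
        List<String> log = new ArrayList<>();
        log.add("opened");
        Connection conn = () -> log.add("closed"); // pretend acquisition from a pool
        try {
            new PersonDao(conn).save(name, log);
        } finally {
            conn.close(); // always runs, even if save() throws
        }
        return log;
    }
}

public class LayeringDemo {
    public static void main(String[] args) {
        System.out.println(new PersonService().register("Ada"));
    }
}
```

The point is the `try`/`finally` in one method scope: whoever opens the connection closes it, no matter what the persistence code does in between.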
You don't mention connection pools. I'd recommend one.
Have a look at Spring. Its JDBC module handles all this beautifully. If you can't use Spring for your assignment, at least it'll be a good model for how to design your implementation. | The traditional approach is to have a DAO (Data Access Object) for each data class.
I.e. if you have a data class "Person" you also have a class "PersonDAO" that implements methods like findById(), findAll(), save(Person) etc. Basically the DAO class handles all the DB interaction.
The constructor of the DAO class could simply accept a Connection object and thus externalize the problem of creating connections or it could invoke a factory method somewhere that doled out a Connection object.
In either case you'll likely want to have such a factory method.
```
public class Database{
public static Connection getConnection(){
// Create a new connection or use some connection pooling library
}
}
```
As someone pointed out java.sql.Connection is not thread safe so you should not hand out the same connection each time unless you are sure that multiple threads will not be accessing the method.
Of course if you need to create a new connection for each call you'll also need to close the connections once you're done with them. The simple approach is to add a close() method to the DAOs and have them take care of it. This does impose a burden on the code using the DAO.
Even if you use connection pooling it is still necessary to close the connections (return to the pool) once you are done with them.
Someone suggested using ThreadLocal to have a per-thread connection. This works in some instances, but wouldn't be useful for a web application where each request is a new thread (that is never reused, so you might as well not store a reference).
You could however take advantage of this in a webapp if you've configured it so that after handling each request a call is made to Database.closeConnection(), which then takes care of closing a ThreadLocal connection if one exists. | Best way for many classes to reference a database connection class | [
"",
"java",
"database",
"oop",
""
] |
I realise the second one avoids the overhead of a function call (**update**: `unset()` is actually a language construct, not a function), but it would be interesting to know if one is better than the other. I have been using `unset()` for most of my coding, but I've recently looked through a few respectable classes found off the net that use `$var = null` instead.
Is there a preferred one, and what is the reasoning? | It was mentioned in the [unset manual's page in 2009](http://web.archive.org/web/20090217225355/https://www.php.net/manual/en/function.unset.php):
> `unset()` does just what its name says - unset a variable. It does not force immediate memory freeing. PHP's garbage collector will do it when it see fits - by intention as soon, as those CPU cycles aren't needed anyway, or as late as before the script would run out of memory, whatever occurs first.
>
> If you are doing `$whatever = null;` then you are rewriting variable's data. You might get memory freed / shrunk faster, but it may steal CPU cycles from the code that truly needs them sooner, resulting in a longer overall execution time.
(Since 2013, that [`unset` man page](https://www.php.net/manual/en/function.unset.php) doesn't include that section anymore.)
Note that until php5.3, if you have [two objects in circular reference](http://paul-m-jones.com/?p=262), such as in a parent-child relationship, calling unset() on the parent object will not free the memory used for the parent reference in the child object. (Nor will the memory be freed when the parent object is garbage-collected.) ([bug 33595](http://bugs.php.net/bug.php?id=33595))
---
The question "[difference between unset and = null](https://stackoverflow.com/q/13667137/6309)" details some differences:
---
`unset($a)` also removes `$a` from the symbol table; for example:
```
$a = str_repeat('hello world ', 100);
unset($a);
var_dump($a);
```
> Outputs:
```
Notice: Undefined variable: a in xxx
NULL
```
> But when `$a = null` is used:
```
$a = str_repeat('hello world ', 100);
$a = null;
var_dump($a);
```
> Outputs:
```
NULL
```
> It seems that `$a = null` is a bit faster than its `unset()` counterpart: updating a symbol table entry appears to be faster than removing it.
---
* when you try to use a non-existent (`unset`) variable, an error will be triggered and the value for the variable expression will be null. (Because, what else should PHP do? Every expression needs to result in some value.)
* A variable with null assigned to it is still a perfectly normal variable though. | `unset` is not actually a function, but a **language construct**. It is no more a function call than a `return` or an `include`.
Aside from performance issues, using `unset` makes your code's *intent* much clearer. | What's better at freeing memory with PHP: unset() or $var = null | [
"",
"php",
""
] |
What would be a nice and good way to temporarily disable a message listener? The problem I want to solve is:
* A JMS message is received by a message listener
* I get an error when trying to process the message.
* I wait for my system to get ready again to be able to process the message.
* Until my system is ready, I don't want any more messages, so...
* ...I want to disable the message listener.
* My system is ready for processing again.
* The failed message gets processed, and the JMS message gets acknowledged.
* Enable the message listener again.
Right now, I'm using Sun App Server. I disabled the message listener by setting it to null in the MessageConsumer, and enabled it again using setMessageListener(myOldMessageListener), but after this I don't get any more messages. | How about if you don't return from the `onMessage()` listener method until your system is ready to process messages again? That'll prevent JMS from delivering another message on that consumer.
That's the async equivalent of not calling `receive()` in a synchronous case.
There's no multi-threading for a given JMS session, so the pipeline of messages is held up until the `onMessage()` method returns.
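That session-level serialization can be seen with a stdlib-only simulation (this is not the JMS API; a single-threaded executor merely plays the role of the session's dispatch thread, and the names and timings are illustrative):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// NOT the JMS API -- a stdlib simulation of the claim above: one dispatch
// thread per session means a listener that doesn't return stalls later messages.
public class SerialDeliveryDemo {
    static List<List<String>> run() throws InterruptedException {
        ExecutorService session = Executors.newSingleThreadExecutor(); // stands in for one JMS session
        CountDownLatch systemReady = new CountDownLatch(1);
        List<String> processed = Collections.synchronizedList(new ArrayList<>());

        session.execute(() -> {                        // "onMessage" for message 1
            try {
                systemReady.await();                   // don't return until the system recovers
            } catch (InterruptedException ignored) { }
            processed.add("msg1");
        });
        session.execute(() -> processed.add("msg2"));  // message 2 queues up behind it

        Thread.sleep(100);
        List<String> whileBlocked = new ArrayList<>(processed); // snapshot: still empty
        systemReady.countDown();                       // system is ready again
        session.shutdown();
        session.awaitTermination(2, TimeUnit.SECONDS);
        return List.of(whileBlocked, new ArrayList<>(processed));
    }

    public static void main(String[] args) throws InterruptedException {
        List<List<String>> states = run();
        System.out.println(states.get(0)); // [] -- nothing delivered while the listener blocks
        System.out.println(states.get(1)); // [msg1, msg2] -- delivery resumes, order kept
    }
}
```

This is why blocking inside `onMessage()` works as a coarse throttle: nothing else on that session is delivered until you return.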
I'm not familiar with the implications of dynamically calling `setMessageListener()`. The javadoc says [there's undefined behavior](http://java.sun.com/j2ee/1.4/docs/api/javax/jms/MessageConsumer.html#setMessageListener(javax.jms.MessageListener)) if called "when messages are being consumed by an existing listener or sync consumer". If you're calling from within `onMessage()`, it sounds like you're hitting that undefined case.
There are [start](http://java.sun.com/j2ee/1.4/docs/api/javax/jms/Connection.html#start())/[stop](http://java.sun.com/j2ee/1.4/docs/api/javax/jms/Connection.html#stop()) methods at the Connection level, if that's not too coarse-grained for you. | Problem solved by a workaround replacing the message listener by a receive() loop, but I'm still interested in how to disable a message listener and enable it shortly again. | How to temporarily disable a message listener | [
"",
"java",
"jms",
""
] |
```
class A {
public synchronized void myOneMethod() {
// ...
}
}
class B extends A {
public synchronized void myOtherMethod() {
// ...
}
}
// ...
B myObject;
// ...
myObject.myOneMethod(); // acquires lock
myObject.myOtherMethod(); // same lock?
```
As I understand the synchronization model, I'd say that yes, it does, because the lock / monitor is associated with the instance myObject, and it doesn't matter where the method was defined. But am I right? If not, why? If yes, why are you sure, and I'm not? :-) | Yes, you are right, and you got the explanation right too. Nothing much to add.
Note that if the methods were *static*, then they would synchronize on different objects, namely their respective classes (A and B).
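A quick empirical check (an illustration, not part of the original answer): `Thread.holdsLock(obj)` reports whether the current thread owns `obj`'s monitor, so it can show that both methods guard the same lock:

```java
// Mirrors the question's A/B pair with illustrative names.
class Parent {
    public synchronized boolean heldInParent() {
        return Thread.holdsLock(this); // true while any synchronized method of this instance runs
    }
}

class Child extends Parent {
    public synchronized boolean heldInChild() {
        // Calling the superclass method needs no second acquisition:
        // both synchronized methods guard the monitor of the same instance.
        return Thread.holdsLock(this) && heldInParent();
    }
}

public class SameLockDemo {
    public static void main(String[] args) {
        System.out.println(new Child().heldInChild()); // prints: true
    }
}
```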
*EDIT*: Why am I sure? I don't know, why are you not sure? ;-) `myObject` is just one object - there isn't any distinction between the `myObject` attributes that come from class A and those that come from class B. (Well, technically you could probably use reflection to find out which are which, so there must be *some* distinction, but forget about reflection for now. For common operations on the object there's no distinction.) | Yes, synchronized is equivalent to synchronized(this).
To be more [precise](http://java.sun.com/docs/books/jls/second_edition/html/classes.doc.html#260369):
> For a class (static) method, the lock associated with the Class object for the method's class is used. For an instance method, the lock associated with this (the object for which the method was invoked) is used. | Java: A synchronized method in the superclass acquires the same lock as one in the subclass, right? | [
"",
"java",
"concurrency",
""
] |
Is it possible to build a string or fix the label in the GUI so that I can show square-meter information to the user? So that the output will look like 21 m2, but with the 2 raised.
Regards | Use the "²" character:
21 m²
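In C#, if you'd rather keep the source file ASCII-only, the same character can be written with a Unicode escape (the label name here is just an example, not from the question):

```
areaLabel.Text = "21 m\u00B2"; // \u00B2 is ² (superscript two)
```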
The unicode code point is U+00B2 and the UTF-8 hexadecimal is c2 b2 | Use a font that has the ² character.
Eg MS Sans Serif : 0xB2 | How can I show a superscript character in .NET GUI labels? | [
"",
"c#",
"string",
""
] |
Is there any method of programmatically determining the current security zone settings for Internet Explorer?
I'd like to know when my site will have the XMLHttpRequest ActiveX control blocked due to IE security policy, but before the site actually tries to create it and thus causes the yellow bar to appear up the top (saying "To help protect your security, Internet Explorer has restricted this webpage from running scripts or ActiveX controls that could access your computer.")
Thanks. | **Note**: I can't delete an accepted answer, but you may find [this answer useful](https://stackoverflow.com/a/8820863/75525).
Original answer:
There's no handle in JavaScript to detect the security zone being used by IE.
In order to do what you need to do, you could check document.location and determine the security zone from that. | In IE7, you could use the following JavaScript to get some idea of whether your site is trusted or not:
```
window.status = "test";
if (window.status == "test")
alert("Trusted, or local intranet");
else
alert("Not trusted, or internet");
```
This is based on the fact that in IE7 and onwards, scripts can no longer set the status bar text through the window.status method in the Internet and Restricted Zones. See the [Release Notes for Internet Explorer 7](http://msdn.microsoft.com/en-us/ie/aa740486) | Detecting Current Internet Explorer Security Zone | [
"",
"javascript",
"html",
"security",
"internet-explorer",
"xmlhttprequest",
""
] |
I am creating an application that uses a certain file format as its data source. I want this application to open whenever the user double clicks on this file, like how MS Word will open when a user double clicks on a Word document. How do I accomplish this? Also how would I populate the data fields using the file that the user selected. Would I use args[] from the program.cs class? I am using c# to code this application.
N.B. I want this association to be made when the application is installed on the host machine without the user doing anything. | I managed to solve this issue. I used WIX to create an install file and asked it to associate the file with the application when it installs. | **FIRST**, you need to set up file association, so that your file type is associated with your application and opening the file type will run your application.
You can do the file association programmatically; there is some detail here, as mentioned:
<http://www.codeproject.com/KB/dotnet/System_File_Association.aspx>
You can also do it via your Setup project for your application if you have one. This is an **easier** path for "newbies". Details for using Visual Studio to get the setup project to add the file association and also set the icon for the file are here:
<http://www.dreamincode.net/forums/topic/58005-file-associations-in-visual-studio/>
Otherwise if you use InnoSetup, Wix etc then I suppose you could just see instructions for those installers to create the association for you.
**SECOND**, you need to have your application accept command line arguments. The opened file(s) are passed as command line argument(s). You need to process the arguments to get the file path/name(s) and open the given file(s). There is a nice description of this here with code:
[C# Command Line arguments problem in Release build](https://stackoverflow.com/questions/728317/c-command-line-arguments-problem-in-release-build/730545#730545)
In your case, rather than `MessageBox.Show(s)` in the form shown handler, you would call your bespoke argument parsing method.
For a simple application which only accepts file names to open as arguments, this could be as simple as
```
foreach (string filePathName in Args)
DoNamedFileOpen(filePathName);
```
Your code can also have a method that might extract from the file the values for the datafields you are interested in etc.
This is a nice simple approach to the issue of having file associations set on installation of your application, with icons, and having your application handle the opening of those files.
Of course, there are plenty of other options, like run-time file association (asking the user if they want the association), detecting "broken" associations, etc.
This question has been here a long time, but I hope this is useful for new searches | How can I make an application open when a user double clicks on its associated file? | [
"",
"c#",
""
] |
I have a SQL query, that returns a set of rows:
```
SELECT id, name FROM users where group = 2
```
I need to also include a column that has an incrementing integer value, so the first row needs to have a 1 in the counter column, the second a 2, the third a 3 etc
The query shown here is just a simplified example, in reality the query could be arbitrarily complex, with several joins and nested queries.
I know this could be achieved using a temporary table with an autonumber field, but is there a way of doing it within the query itself? | For starters, something along the lines of:
```
SELECT my_first_column, my_second_column,
ROW_NUMBER() OVER (ORDER BY my_order_column) AS Row_Counter
FROM my_table
```
However, it's important to note that the `ROW_NUMBER() OVER (ORDER BY ...)` construct only determines the values of `Row_Counter`, it doesn't guarantee the ordering of the results.
Unless the `SELECT` itself has an explicit `ORDER BY` clause, the results could be returned in any order, dependent on how SQL Server decides to optimise the query. ([See this article for more info](http://blogs.msdn.com/queryoptteam/archive/2006/05/02/588731.aspx).)
The only way to guarantee that the results will *always* be returned in `Row_Counter` order is to apply exactly the same ordering to both the `SELECT` and the `ROW_NUMBER()`:
```
SELECT my_first_column, my_second_column,
ROW_NUMBER() OVER (ORDER BY my_order_column) AS Row_Counter
FROM my_table
ORDER BY my_order_column -- exact copy of the ordering used for Row_Counter
```
The above pattern will always return results in the correct order and works well for simple queries, but what about an "arbitrarily complex" query with perhaps dozens of expressions in the `ORDER BY` clause? In those situations I prefer something like this instead:
```
SELECT t.*
FROM
(
SELECT my_first_column, my_second_column,
ROW_NUMBER() OVER (ORDER BY ...) AS Row_Counter -- complex ordering
FROM my_table
) AS t
ORDER BY t.Row_Counter
```
Using a nested query means that there's no need to duplicate the complicated `ORDER BY` clause, which means less clutter and easier maintenance. The outer `ORDER BY t.Row_Counter` also makes the intent of the query much clearer to your fellow developers. | In SQL Server 2005 and up, you can use the [`ROW_NUMBER()`](http://msdn.microsoft.com/en-us/library/ms186734.aspx) function, which has options for the sort order and the groups over which the counts are done (and reset). | SQLServer SQL query with a row counter | [
"",
"sql",
"sql-server",
""
] |
I often find myself in a situation where I am facing multiple compilation/linker errors in a C++ project due to some bad design decisions (made by someone else :) ) which led to circular dependencies between C++ classes in different header files *(it can also happen in the same file)*. But fortunately(?) this doesn't happen often enough for me to remember the solution to this problem for the next time it happens again.
So for the purposes of easy recall in the future, I am going to post a representative problem and a solution along with it. Better solutions are of course welcome.
---
* `A.h`
```
class B;
class A
{
int _val;
B *_b;
public:
A(int val)
:_val(val)
{
}
void SetB(B *b)
{
_b = b;
_b->Print(); // COMPILER ERROR: C2027: use of undefined type 'B'
}
void Print()
{
cout<<"Type:A val="<<_val<<endl;
}
};
```
---
* `B.h`
```
#include "A.h"
class B
{
double _val;
A* _a;
public:
B(double val)
:_val(val)
{
}
void SetA(A *a)
{
_a = a;
_a->Print();
}
void Print()
{
cout<<"Type:B val="<<_val<<endl;
}
};
```
---
* `main.cpp`
```
#include "B.h"
#include <iostream>
int main(int argc, char* argv[])
{
A a(10);
B b(3.14);
a.Print();
a.SetB(&b);
b.Print();
b.SetA(&a);
return 0;
}
``` | The way to think about this is to "think like a compiler".
Imagine you are writing a compiler. And you see code like this.
```
// file: A.h
class A {
B _b;
};
// file: B.h
class B {
A _a;
};
// file main.cc
#include "A.h"
#include "B.h"
int main(...) {
A a;
}
```
When you are compiling the **.cc** file (remember that the **.cc** and not the **.h** is the unit of compilation), you need to allocate space for object `A`. So, well, how much space then? Enough to store `B`! What's the size of `B` then? Enough to store `A`! Oops.
Clearly a circular reference that you must break.
You can break it by allowing the compiler to instead reserve as much space as it knows about upfront - pointers and references, for example, will always be 32 or 64 bits (depending on the architecture) and so if you replaced (either one) by a pointer or reference, things would be great. Let's say we replace in `A`:
```
// file: A.h
class A {
// both these are fine, so are various const versions of the same.
B& _b_ref;
B* _b_ptr;
};
```
Now things are better. Somewhat. `main()` still says:
```
// file: main.cc
#include "A.h" // <-- Houston, we have a problem
```
`#include`, for all intents and purposes (if you take the preprocessor out), just copies the file into the **.cc**. So really, the **.cc** looks like:
```
// file: partially_pre_processed_main.cc
class A {
B& _b_ref;
B* _b_ptr;
};
#include "B.h"
int main (...) {
A a;
}
```
You can see why the compiler can't deal with this - it has no idea what `B` is - it has never even seen the symbol before.
So let's tell the compiler about `B`. This is known as a [forward declaration](http://en.cppreference.com/w/cpp/language/class), and is discussed further in [this answer](https://stackoverflow.com/a/4757718/391161).
```
// main.cc
class B;
#include "A.h"
#include "B.h"
int main (...) {
A a;
}
```
This *works*. It is not *great*. But at this point you should have an understanding of the circular reference problem and what we did to "fix" it, albeit the fix is bad.
The reason this fix is bad is because the next person to `#include "A.h"` will have to declare `B` before they can use it and will get a terrible `#include` error. So let's move the declaration into **A.h** itself.
```
// file: A.h
class B;
class A {
B* _b; // or any of the other variants.
};
```
And in **B.h**, at this point, you can just `#include "A.h"` directly.
```
// file: B.h
#include "A.h"
class B {
// note that this is cool because the compiler knows by this time
// how much space A will need.
A _a;
};
```
HTH. | You can avoid compilation errors if you remove the method definitions from the header files and let the classes contain only the method declarations and variable declarations/definitions. The method definitions should be placed in a .cpp file (just like a best practice guideline says).
The down side of the following solution is (assuming that you had placed the methods in the header file to inline them) that the methods are no longer inlined by the compiler and trying to use the inline keyword produces linker errors.
```
//A.h
#ifndef A_H
#define A_H
class B;
class A
{
int _val;
B* _b;
public:
A(int val);
void SetB(B *b);
void Print();
};
#endif
//B.h
#ifndef B_H
#define B_H
class A;
class B
{
double _val;
A* _a;
public:
B(double val);
void SetA(A *a);
void Print();
};
#endif
//A.cpp
#include "A.h"
#include "B.h"
#include <iostream>
using namespace std;
A::A(int val)
:_val(val)
{
}
void A::SetB(B *b)
{
_b = b;
cout<<"Inside SetB()"<<endl;
_b->Print();
}
void A::Print()
{
cout<<"Type:A val="<<_val<<endl;
}
//B.cpp
#include "B.h"
#include "A.h"
#include <iostream>
using namespace std;
B::B(double val)
:_val(val)
{
}
void B::SetA(A *a)
{
_a = a;
cout<<"Inside SetA()"<<endl;
_a->Print();
}
void B::Print()
{
cout<<"Type:B val="<<_val<<endl;
}
//main.cpp
#include "A.h"
#include "B.h"
int main(int argc, char* argv[])
{
A a(10);
B b(3.14);
a.Print();
a.SetB(&b);
b.Print();
b.SetA(&a);
return 0;
}
``` | Resolve build errors due to circular dependency amongst classes | [
"",
"c++",
"compiler-errors",
"circular-dependency",
"c++-faq",
""
] |
I wish to initialize an array of java methods in the child class, **as a class field** like so
```
void callme() {System.out.println("hi!");}
Method[] actions = new Method[] {&callme,&callme};
```
and call all methods in this array at parent class like so:
```
for (meth:actions) {meth.invoke();}
```
However currently I cannot find a way to implicitly initialize the actions array, not through the constructor. The following gives me an error due to unhandled exception:
```
Method[] actions = new Method[] {
this.getClass().getDeclaredMethod("count")
};
```
As said, I cannot catch the exception when initializing this array explicitly as a field, and not in the constructor.
I'm a newbie to java reflection, so this is probably an obvious question, still I found no answer to that at google, any help would be appreciated.
Thanks
P.S.
As Scott below guessed, I "want a superclass to call a specific set of methods defined in a subclass". | Create a static method in your class which will return an array of declared methods and do the correct handling of exceptions.
```
private static Method[] declaredMethods(Class<?> clazz, String methodName) {
Method[] result = new Method[1];
try{
result[0] = clazz.getDeclaredMethod(methodName);
} catch (NoSuchMethodException nsme) {
// respond to the error
} catch (SecurityException se) {
// Respond to the error
}
return result;
}
Method[] actions = declaredMethods(getClass(), "count");
``` | Are you sure reflection is the right thing to do? Normally an interface with several anonymous classes implementing it would be better style.
You can write an initializer block to be able to catch the exception during initialization.
Why don't you use getMethod()? | Receiving a Java Method without using getDeclaredMethod | [
"",
"java",
"reflection",
"initialization",
""
] |
If you have one main thread that starts two other threads.
what is the cleanest way to make the primary thread wait for the two other threads?
I could use bgndworker and sleep spinner that checks for both the bgnd workers's IsBusy, but I would think there's a better way.
**EDIT**
Some more requirements:
* The main thread has some other work to do (e.g. GUI).
* The two spawned threads should be able to report exceptions and return result values | See the answers on [this thread](https://stackoverflow.com/questions/540078/wait-for-pooled-threads-to-complete). I like [this option](https://stackoverflow.com/questions/540078/wait-for-pooled-threads-to-complete/540380#540380) ;-p
```
Forker p = new Forker();
p.Fork(delegate { DoSomeWork(); });
p.Fork(delegate { DoSomeOtherWork(); });
p.Join();
```
Re returning values / reporting exceptions - just have each fork do that as a callback at the end of the logic... (you can use captured variables to pass state into both the forks, including a shared logger etc). | Quick example using Thread.Join();
```
Thread t1 = new Thread(new ThreadStart(delegate()
{
System.Threading.Thread.Sleep(2000);
}));
Thread t2 = new Thread(new ThreadStart(delegate()
{
System.Threading.Thread.Sleep(4000);
}));
t1.Start();
t2.Start();
t1.Join();
t2.Join();
```
**EDIT: Another example using wait handles:**
```
ManualResetEvent[] waitHandles = new ManualResetEvent[]{
new ManualResetEvent(false),
new ManualResetEvent(false)
};
Thread t1 = new Thread(new ParameterizedThreadStart(delegate(object state)
{
ManualResetEvent handle = (ManualResetEvent)state;
System.Threading.Thread.Sleep(2000);
handle.Set();
}));
Thread t2 = new Thread(new ParameterizedThreadStart(delegate(object state)
{
ManualResetEvent handle = (ManualResetEvent)state;
System.Threading.Thread.Sleep(4000);
handle.Set();
}));
t1.Start(waitHandles[0]);
t2.Start(waitHandles[1]);
WaitHandle.WaitAll(waitHandles);
Console.WriteLine("Finished");
``` | Wait for two threads to finish | [
"",
"c#",
"multithreading",
""
] |
Is it at all possible to disable the "The webpage you are viewing is trying to close the window. Do you want to close it?"?
I understand that this is the product of years of virus, and malicious script activity, but in legit app code, (ASP.NET), is there any way to like "register" your app as an APP, or flags you can pass to an IE Popup so that it will not display this when it closes?
The code I'm using is done from within the C# code behind:
```
ClientScript.RegisterClientScriptBlock(GetType(), "save", Utils.MakeScriptBlock("window.close();"));
```
The Utils.MakeScriptBlock is just a function that does what you might expect. It 'injects' a <script...> tag with the code in it...
It's probably not possible to get around this, or else, all the script kiddies would just use that trick, but I thought I'd ask, as I can't be the ONLY one using simple IE "popups" as (pseudo)modal dialog boxes.
This code happens in my ButtonSave\_Click() routine, after everything has passed validation, etc...
**EDIT**
Just for reference, here is the code that OPENS the popup, when the ADD button is clicked:
This is in Page\_Init()...
```
ButtonAdd.Attributes.Add("onclick", "window.open('Add.aspx', 'ADD_WINDOW', 'scrollbars=no,width=550,height=550,location=no,menubar=no,resizable=yes,directories=no,status=no,toolbar=no'); return false;");
``` | You can close the window without the popup if the window was opened by your script. Does that help?
**Edit:**
You're already opening the window with script. Change your client script to call self.close().
```
ClientScript.RegisterClientScriptBlock(GetType(), "save", Utils.MakeScriptBlock("self.close();"));
``` | Bizarre, but I've found that
```
<script type="text/javascript">
function closeWindow()
{
window.close();
return false;
}
</script>
```
and then calling
```
return closeWindow();
```
usually gets around this. | Closing an IE 'popup' window with script | [
"",
"c#",
"asp.net",
"internet-explorer",
"asp.net-ajax",
""
] |
I've done several projects and packaged them into jar files, but I've noticed that my jar files run much more slowly than in my IDE.
I use Eclipse to compile and run my programs. In Eclipse, I have everything working. When I package my project as a runnable Jar and execute it by double-clicking, everything still works. But when I have animation, the program runs *extremely* slowly. Instead of 30 frames per second in Eclipse, I now get about 2 frames per second. The animation is very laggy.
Interesting thing is, when I run the same Jar from the command prompt (`java -jar MyCode.jar`), the lag disappears and my program works as normal.
I currently have no idea what's causing this problem. My computer is running Windows Vista. I'm using the latest version of JRE and JDK. My program contains an animation, nothing fancy.
Thanx
**Solution:**
There were several different versions of Java on my computer and an incorrect version was used. | The version of java.exe that comes with Windows is very out of date.
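For reference, the association can be inspected and re-pointed from an (elevated) Windows command prompt; the JRE path below is illustrative, so point it at whichever javaw.exe you actually want:

```
assoc .jar=jarfile
ftype jarfile="C:\Program Files\Java\jre6\bin\javaw.exe" -jar "%1" %*
```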
By changing the association of the .jar extension to the more recent JVM you should see much improvement. | You may also consider whether any running antivirus software is affecting performance. Some software treats .jar files as the .zip archives that they are and scans accordingly. Their shell integration might explain the difference between double-clicking and command line as well. | Executable Jars running very slowly | [
"",
"java",
"performance",
"windows-vista",
"jar",
"executable-jar",
""
] |
I'd like to create a directory in the user's 'Documents' folder, but so far I've only found out how to get the user's home directory:
```
javax.swing.JFileChooser fr = new javax.swing.JFileChooser();
javax.swing.filechooser.FileSystemView fw = fr.getFileSystemView();
this.userDirectory = fw.getDefaultDirectory();
```
In Windows the above code returns the 'My Documents' directory, which is great, that's where the new documents are supposed to go. On OS X it only returns the home directory.
Adding 'Documents' to the returned path would cause problems with localization.
How can I do this? | `System.getProperty("user.home") + File.separator + "Documents";`
And don't worry about localization; look:
```
macb:Documents laullon$ pwd
/Users/laullon/Documents
```
My OS X is in Spanish. | You want to use Apple's eio.FileManager extension:
```
static public String documentsDirectory()
throws java.io.FileNotFoundException {
// From CarbonCore/Folders.h
final String kDocumentsDirectory = "docs";
return com.apple.eio.FileManager.findFolder(
com.apple.eio.FileManager.kUserDomain,
com.apple.eio.FileManager.OSTypeToInt(kDocumentsDirectory)
);
}
```
[Documentation](http://developer.apple.com/documentation/Java/Reference/1.5.0/appledoc/api) | How do I find the user's 'Documents' folder with Java in OS X? | [
"",
"java",
"macos",
""
] |
Is there a PHP function that would 'pop' first element of array?
`array_pop()` pops last element, but I'd like to pop the first. | You are looking for array\_shift().
[PHP Array Shift](http://www.php.net/manual/en/function.array-shift.php) | **Quick Cheatsheet** If you are not familiar with the lingo, here is a quick translation of alternate terms, which may be easier to remember:
```
* array_unshift() - (aka Prepend ;; InsertBefore ;; InsertAtBegin )
* array_shift() - (aka UnPrepend ;; RemoveBefore ;; RemoveFromBegin )
* array_push() - (aka Append ;; InsertAfter ;; InsertAtEnd )
* array_pop() - (aka UnAppend ;; RemoveAfter ;; RemoveFromEnd )
``` | Pop first element of array instead of last (reversed array_pop)? | [
"",
"php",
"arrays",
""
] |
I am using the excellent [jQuery MultiSelect](http://abeautifulsite.net/notebook/62) plugin.
The problem I have is that I would like to submit the form when the values have changed.
Having all sorts of trouble getting this one working, does anyone have insight into how to achieve this?
Also open to alternative jQuery plugins/scripts if there are any that handle this nicely. | You could try patching jQueryMultiSelect (Untested)
Line:34 --
```
multiSelect: function(o, callback ) {
```
Line:34 ++
```
multiSelect: function(o, callback, postback) {
```
Line 221: ++ (note: `postback` is a parameter of `multiSelect`, so it is not in scope inside `multiSelectOptionsHide`; stash it during setup with `$(select).next('.multiSelect').data('postback', postback);`, then read it back here)
```
var postback = $(this).data('postback');
if( postback ) postback($(this));
```
Full Codez
```
if(jQuery) (function($){
$.extend($.fn, {
multiSelect: function(o, callback, postback) {
// Default options
if( !o ) var o = {};
if( o.selectAll == undefined ) o.selectAll = true;
if( o.selectAllText == undefined ) o.selectAllText = "Select All";
if( o.noneSelected == undefined ) o.noneSelected = 'Select options';
if( o.oneOrMoreSelected == undefined ) o.oneOrMoreSelected = '% selected';
// Initialize each multiSelect
$(this).each( function() {
var select = $(this);
var html = '<input type="text" readonly="readonly" class="multiSelect" value="" style="cursor: default;" />';
html += '<div class="multiSelectOptions" style="position: absolute; z-index: 99999; display: none;">';
if( o.selectAll ) html += '<label class="selectAll"><input type="checkbox" class="selectAll" />' + o.selectAllText + '</label>';
$(select).find('OPTION').each( function() {
if( $(this).val() != '' ) {
html += '<label><input type="checkbox" name="' + $(select).attr('name') + '" value="' + $(this).val() + '"';
if( $(this).attr('selected') ) html += ' checked="checked"';
html += ' />' + $(this).html() + '</label>';
}
});
html += '</div>';
$(select).after(html);
// Keep the postback callback reachable from multiSelectOptionsHide below
$(select).next('.multiSelect').data('postback', postback);
// Events
$(select).next('.multiSelect').mouseover( function() {
$(this).addClass('hover');
}).mouseout( function() {
$(this).removeClass('hover');
}).click( function() {
// Show/hide on click
if( $(this).hasClass('active') ) {
$(this).multiSelectOptionsHide();
} else {
$(this).multiSelectOptionsShow();
}
return false;
}).focus( function() {
// So it can be styled with CSS
$(this).addClass('focus');
}).blur( function() {
// So it can be styled with CSS
$(this).removeClass('focus');
});
// Determine if Select All should be checked initially
if( o.selectAll ) {
var sa = true;
$(select).next('.multiSelect').next('.multiSelectOptions').find('INPUT:checkbox').not('.selectAll').each( function() {
if( !$(this).attr('checked') ) sa = false;
});
if( sa ) $(select).next('.multiSelect').next('.multiSelectOptions').find('INPUT.selectAll').attr('checked', true).parent().addClass('checked');
}
// Handle Select All
$(select).next('.multiSelect').next('.multiSelectOptions').find('INPUT.selectAll').click( function() {
if( $(this).attr('checked') == true ) $(this).parent().parent().find('INPUT:checkbox').attr('checked', true).parent().addClass('checked'); else $(this).parent().parent().find('INPUT:checkbox').attr('checked', false).parent().removeClass('checked');
});
// Handle checkboxes
$(select).next('.multiSelect').next('.multiSelectOptions').find('INPUT:checkbox').click( function() {
$(this).parent().parent().multiSelectUpdateSelected(o);
$(this).parent().parent().find('LABEL').removeClass('checked').find('INPUT:checked').parent().addClass('checked');
$(this).parent().parent().prev('.multiSelect').focus();
if( !$(this).attr('checked') ) $(this).parent().parent().find('INPUT:checkbox.selectAll').attr('checked', false).parent().removeClass('checked');
if( callback ) callback($(this));
});
// Initial display
$(select).next('.multiSelect').next('.multiSelectOptions').each( function() {
$(this).multiSelectUpdateSelected(o);
$(this).find('INPUT:checked').parent().addClass('checked');
});
// Handle hovers
$(select).next('.multiSelect').next('.multiSelectOptions').find('LABEL').mouseover( function() {
$(this).parent().find('LABEL').removeClass('hover');
$(this).addClass('hover');
}).mouseout( function() {
$(this).parent().find('LABEL').removeClass('hover');
});
// Keyboard
$(select).next('.multiSelect').keydown( function(e) {
// Is dropdown visible?
if( $(this).next('.multiSelectOptions').is(':visible') ) {
// Dropdown is visible
// Tab
if( e.keyCode == 9 ) {
$(this).addClass('focus').trigger('click'); // esc, left, right - hide
$(this).focus().next(':input').focus();
return true;
}
// ESC, Left, Right
if( e.keyCode == 27 || e.keyCode == 37 || e.keyCode == 39 ) {
// Hide dropdown
$(this).addClass('focus').trigger('click');
}
// Down
if( e.keyCode == 40 ) {
if( !$(this).next('.multiSelectOptions').find('LABEL').hasClass('hover') ) {
// Default to first item
$(this).next('.multiSelectOptions').find('LABEL:first').addClass('hover');
} else {
// Move down, cycle to top if on bottom
$(this).next('.multiSelectOptions').find('LABEL.hover').removeClass('hover').next('LABEL').addClass('hover');
if( !$(this).next('.multiSelectOptions').find('LABEL').hasClass('hover') ) {
$(this).next('.multiSelectOptions').find('LABEL:first').addClass('hover');
}
}
return false;
}
// Up
if( e.keyCode == 38 ) {
if( !$(this).next('.multiSelectOptions').find('LABEL').hasClass('hover') ) {
// Default to first item
$(this).next('.multiSelectOptions').find('LABEL:first').addClass('hover');
} else {
// Move up, cycle to bottom if on top
$(this).next('.multiSelectOptions').find('LABEL.hover').removeClass('hover').prev('LABEL').addClass('hover');
if( !$(this).next('.multiSelectOptions').find('LABEL').hasClass('hover') ) {
$(this).next('.multiSelectOptions').find('LABEL:last').addClass('hover');
}
}
return false;
}
// Enter, Space
if( e.keyCode == 13 || e.keyCode == 32 ) {
// Select All
if( $(this).next('.multiSelectOptions').find('LABEL.hover INPUT:checkbox').hasClass('selectAll') ) {
if( $(this).next('.multiSelectOptions').find('LABEL.hover INPUT:checkbox').attr('checked') ) {
// Uncheck all
$(this).next('.multiSelectOptions').find('INPUT:checkbox').attr('checked', false).parent().removeClass('checked');
} else {
// Check all
$(this).next('.multiSelectOptions').find('INPUT:checkbox').attr('checked', true).parent().addClass('checked');
}
$(this).next('.multiSelectOptions').multiSelectUpdateSelected(o);
if( callback ) callback($(this));
return false;
}
// Other checkboxes
if( $(this).next('.multiSelectOptions').find('LABEL.hover INPUT:checkbox').attr('checked') ) {
// Uncheck
$(this).next('.multiSelectOptions').find('LABEL.hover INPUT:checkbox').attr('checked', false);
$(this).next('.multiSelectOptions').multiSelectUpdateSelected(o);
$(this).next('.multiSelectOptions').find('LABEL').removeClass('checked').find('INPUT:checked').parent().addClass('checked');
// Select all status can't be checked at this point
$(this).next('.multiSelectOptions').find('INPUT:checkbox.selectAll').attr('checked', false).parent().removeClass('checked');
if( callback ) callback($(this));
} else {
// Check
$(this).next('.multiSelectOptions').find('LABEL.hover INPUT:checkbox').attr('checked', true);
$(this).next('.multiSelectOptions').multiSelectUpdateSelected(o);
$(this).next('.multiSelectOptions').find('LABEL').removeClass('checked').find('INPUT:checked').parent().addClass('checked');
if( callback ) callback($(this));
}
}
return false;
} else {
// Dropdown is not visible
if( e.keyCode == 38 || e.keyCode == 40 || e.keyCode == 13 || e.keyCode == 32 ) { // down, enter, space - show
// Show dropdown
$(this).removeClass('focus').trigger('click');
$(this).next('.multiSelectOptions').find('LABEL:first').addClass('hover');
return false;
}
// Tab key
if( e.keyCode == 9 ) {
// Shift focus to next INPUT element on page
$(this).focus().next(':input').focus();
return true;
}
}
// Prevent enter key from submitting form
if( e.keyCode == 13 ) return false;
});
// Eliminate the original form element
$(select).remove();
});
},
// Hide the dropdown
multiSelectOptionsHide: function() {
$(this).removeClass('active').next('.multiSelectOptions').hide();
// 'postback' (a multiSelect parameter) is not in scope here; use the stored copy
var postback = $(this).data('postback');
if( postback ) postback($(this));
},
// Show the dropdown
multiSelectOptionsShow: function() {
// Hide any open option boxes
$('.multiSelect').multiSelectOptionsHide();
$(this).next('.multiSelectOptions').find('LABEL').removeClass('hover');
$(this).addClass('active').next('.multiSelectOptions').show();
// Position it
var offset = $(this).offset();
$(this).next('.multiSelectOptions').css({ top: offset.top + $(this).outerHeight() + 'px' });
$(this).next('.multiSelectOptions').css({ left: offset.left + 'px' });
// Disappear on hover out
multiSelectCurrent = $(this);
var timer = '';
$(this).next('.multiSelectOptions').hover( function() {
clearTimeout(timer);
}, function() {
timer = setTimeout('$(multiSelectCurrent).multiSelectOptionsHide(); $(multiSelectCurrent).unbind("hover");', 250);
});
},
// Update the textbox with the total number of selected items
multiSelectUpdateSelected: function(o) {
var i = 0, s = '';
$(this).find('INPUT:checkbox:checked').not('.selectAll').each( function() {
i++;
})
if( i == 0 ) {
$(this).prev('INPUT.multiSelect').val( o.noneSelected );
} else {
if( o.oneOrMoreSelected == '*' ) {
var display = '';
$(this).find('INPUT:checkbox:checked').each( function() {
if( $(this).parent().text() != o.selectAllText ) display = display + $(this).parent().text() + ', ';
});
display = display.substr(0, display.length - 2);
$(this).prev('INPUT.multiSelect').val( display );
} else {
$(this).prev('INPUT.multiSelect').val( o.oneOrMoreSelected.replace('%', i) );
}
}
}
});
})(jQuery);
``` | OK, if you're running into problems because you are trying to get the value for an ASP.NET postback, you can try this. It is a bit of a hack, but change this line in the `renderOption` function:
```
var html = '<label><input type="checkbox" name="' + id + '[]" value="' + option.value + '"';
```
to:
```
var html = '<label><input type="checkbox" name="' + id.replace(/_/g,"$") + '" value="' + option.value + '"';
``` | jQuery MultiSelect submit form on change | [
"",
"javascript",
"jquery",
""
] |
I've been mulling about a [post](http://googletesting.blogspot.com/2008/12/static-methods-are-death-to-testability.html) by Misko Hevery that static methods in Java are a *death to testability*. I don't want to discuss the testability issue but more on the concept of static methods. Why do people hate it so much?
It's true that we don't have closures (though we do have the slightly more awkward anonymous classes), lambdas & functions as first class objects. In a way, I think static methods can be used to mimic functions as first class objects. | One characteristic of functional programming is immutability of data. `static` does imply that you don't need an object (instance) representing state, so that's not a bad start. You do however have state on the class level, but you can make this `final`. Since (static) methods aren't first-class functions at all, you will still need ugly constructions like anonymous classes to approach a certain *style* of functional programming in Java.
FP is best done in an functional language, since it has the necessary language *support* for things like higher-order functions, immutability, referential transparency and so on.
However, this does not mean that you can't program in a functional style in an imperative language like Java. Other examples can be given as well. It's not because you are programming in Java that you are doing OOP. You can program with global data and unstructured control flows (`goto`) in a structured language as C++. I can do OOP in a functional language like Scheme. Etc.
Steve McConnell mentions the difference of programming *in* a language versus programming *into* a language in Code Complete (also a very popular reference on SO).
So, in short, if you say that "static methods mimic first-class functions", I do not agree.
If, however, and I think that this was more the point you were trying to get across, you would say that "static methods can help for programming in a functional style in Java", I agree. | Static methods make testing hard because they can't be replaced, it's as simple as that.
How can static methods "mimic" functions as first class objects1? Arguably they're worse than anything else on this front. You can "mimic" functions as first class objects by creating single-method interfaces, and indeed Google's Java Collections does exactly this in a number of places (for predicates, projections, etc.). That can't be done with static methods - there's no way (other than with reflection) to pass the concept of "when you want to apply a function, use *this* method".
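That single-method-interface technique can be sketched as follows; the `Predicate` interface and `count` helper here are illustrative stand-ins, not the actual Google Collections API:

```java
// A single-method interface standing in for a first-class function.
interface Predicate<T> {
    boolean apply(T input);
}

public class Main {
    // A higher-order "function": the behaviour is passed in as an object.
    static <T> int count(Iterable<T> items, Predicate<T> p) {
        int n = 0;
        for (T item : items) {
            if (p.apply(item)) n++;
        }
        return n;
    }

    // An anonymous class plays the role of a function value.
    static final Predicate<Integer> EVEN = new Predicate<Integer>() {
        public boolean apply(Integer x) { return x % 2 == 0; }
    };

    public static void main(String[] args) {
        java.util.List<Integer> xs = java.util.Arrays.asList(1, 2, 3, 4);
        System.out.println(count(xs, EVEN));
    }
}
```

The point stands: the anonymous class instance can be handed around as a value, while a static method (pre-Java 8) cannot be referenced this way without reflection.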
No, I can't see how static methods help here. They discourage state-changing (as the only state available is the global state and any mutable state passed in via the parameters) but they don't help on the "functions as first class objects" side.
C# has *better* support for this (with lambda expressions and delegates) but even that's not as general as it might be. (Compare it with F#, for example.)
---
1 As of Java 8, method references will allow methods to be converted to instances of appropriate single-method interfaces, which will make all of this *more* relevant. Back in 2009 that was a long way off though... | static methods make Java a pseudo functional language? | [
"",
"java",
"closures",
"functional-programming",
""
] |
What would be an easy way to process command line arguments if I'm expecting anything like 001 or 999 (let's limit expectations to the 001...999 range for now), plus a few other arguments, and would like to ignore anything unexpected?
I understand if for example I need to find out if "debug" was passed among parameters it'll be something like that:
```
if 'debug' in argv[1:]:
print 'Will be running in debug mode.'
```
How to find out if 009 or 575 was passed?
All those are expected calls:
```
python script.py
python script.py 011
python script.py 256 debug
python script.py 391 xls
python script.py 999 debug pdf
```
At this point I don't care about calls like that:
```
python script.py 001 002 245 568
python script.py some unexpected argument
python script.py 0001
python script.py 02
```
...first one - because of more than one "numeric" argument; second - because of... well, unexpected arguments; third and fourth - because of non-3-digits arguments. | As others answered, optparse is the best option, but if you just want quick code try something like this:
```
import sys, re
first_re = re.compile(r'^\d{3}$')
if len(sys.argv) > 1:
if first_re.match(sys.argv[1]):
print "Primary argument is : ", sys.argv[1]
else:
raise ValueError("First argument should be ...")
args = sys.argv[2:]
else:
args = ()
# ... anywhere in code ...
if 'debug' in args:
print 'debug flag'
if 'xls' in args:
print 'xls flag'
```
**EDIT**: Here's an optparse example because so many people are answering optparse without really explaining why, or explaining what you have to change to make it work.
The primary reason to use optparse is it gives you more flexibility for expansion later, and gives you more flexibility on the command line. In other words, your options can appear in any order and usage messages are generated automatically. However to make it work with optparse you need to change your specifications to put '-' or '--' in front of the optional arguments and you need to allow all the arguments to be in any order.
So here's an example using optparse:
```
import sys, re, optparse
first_re = re.compile(r'^\d{3}$')
parser = optparse.OptionParser()
parser.set_defaults(debug=False,xls=False)
parser.add_option('--debug', action='store_true', dest='debug')
parser.add_option('--xls', action='store_true', dest='xls')
(options, args) = parser.parse_args()
if len(args) == 1:
if first_re.match(args[0]):
print "Primary argument is : ", args[0]
else:
raise ValueError("First argument should be ...")
elif len(args) > 1:
raise ValueError("Too many command line arguments")
if options.debug:
print 'debug flag'
if options.xls:
print 'xls flag'
```
The differences here with optparse and your spec is that now you can have command lines like:
```
python script.py --debug --xls 001
```
and you can easily add new options by calling parser.add\_option() | Have a look at the [optparse](http://docs.python.org/library/optparse.html) module. Dealing with sys.argv yourself is fine for really simple stuff, but it gets out of hand quickly.
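For newer code, the same rules can be expressed with `argparse`, optparse's successor (a sketch under the same assumptions: one optional three-digit positional argument plus `--debug`/`--xls` flags):

```python
import argparse
import re

def parse(argv):
    parser = argparse.ArgumentParser()
    # The positional number is optional, mirroring "python script.py" alone.
    parser.add_argument('number', nargs='?', default=None)
    parser.add_argument('--debug', action='store_true')
    parser.add_argument('--xls', action='store_true')
    opts = parser.parse_args(argv)
    if opts.number is not None and not re.match(r'^\d{3}$', opts.number):
        parser.error('first argument must be exactly three digits')
    return opts

opts = parse(['256', '--debug'])
print("number=%s debug=%s xls=%s" % (opts.number, opts.debug, opts.xls))
# -> number=256 debug=True xls=False
```

Invalid three-digit arguments are rejected by `parser.error()`, which prints a usage message and exits, matching optparse's behaviour.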
Note that you may find optparse easier to use if you can change your argument format a little; e.g. replace `debug` with `--debug` and `xls` with `--xls` or `--output=xls`. | How can I process command line arguments in Python? | [
"",
"python",
"command-line",
"command-line-arguments",
""
] |
Right now, my JUnit tests look like one long story:
* I create 4 users
* I delete 1 user
* I try to login with the deleted user and make sure it fails
* I login with one of the 3 remaining user and verify I can login
* I send a message from one user to the other and verify that it appears in the outbox of the sender and in the inbox of the receiver.
* I delete the message
* ...
* ...
**Advantages**:
The tests are quite effective (very good at detecting bugs) and very stable, because they only use the API; if I refactor the code, the tests are refactored too. As I don't use "dirty tricks" such as saving and reloading the db in a given state, my tests are oblivious to schema changes and implementation changes.
**Disadvantages**:
The tests are getting difficult to maintain; any change in a test affects other tests. The tests run 8-9 min, which is great for continuous integration but a bit frustrating for developers. Tests cannot be run in isolation; the best you can do is stop after the test you are interested in has run - but you absolutely must run all the tests that come before it.
How would you go about improving my tests? | unit tests should - ideally - be independent, and able to run in any order. So, I would suggest that you:
* break up your tests to be independent
* consider using an in-memory database as the backend for your tests
* consider wrapping each test or suite in a transaction that is rolled back at the end
* profile the unit tests to see where the time is going, and concentrate on that
if it takes 8 minutes to create a few users and send a few messages, the performance problem may not be in the tests, rather this may be a symptom of performance problems with the system itself - only your profiler knows for sure!
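The first bullet - independent tests that each build their own fixture - can be sketched in plain Java (no JUnit wiring; `UserStore` and all names are illustrative):

```java
import java.util.HashSet;
import java.util.Set;

public class Main {
    // Minimal stand-in for the system under test.
    static class UserStore {
        private final Set<String> users = new HashSet<String>();
        void create(String name) { users.add(name); }
        void delete(String name) { users.remove(name); }
        boolean canLogin(String name) { return users.contains(name); }
    }

    // Plays the role of a JUnit @Before: every test gets a fresh fixture
    // instead of inheriting state left behind by tests that ran earlier.
    static UserStore freshStore() {
        UserStore store = new UserStore();
        store.create("alice");
        store.create("bob");
        return store;
    }

    static boolean deletedUserCannotLogin() {
        UserStore store = freshStore();
        store.delete("alice");
        return !store.canLogin("alice");
    }

    static boolean remainingUserCanLogin() {
        UserStore store = freshStore(); // independent of the other test
        return store.canLogin("bob");
    }

    public static void main(String[] args) {
        System.out.println(deletedUserCannotLogin() && remainingUserCanLogin());
    }
}
```

Because neither test depends on the other's leftovers, they can run in any order or in isolation.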
[caveat: i do NOT consider these kinds of tests to be 'integration tests', though i may be in the minority; i consider these kinds of tests to be *unit tests of features*, a la TDD] | First, understand the tests you have are integration tests (probably access external systems and hit a wide range of classes). Unit tests should be a lot more specific, which is a challenge on an already built system. The main issue achieving that is usually the way the code is structured:
i.e. classes tightly coupled to external systems (or to other classes that are). To be able to do so, you need to build the classes in such a way that you can actually avoid hitting external systems during the unit tests.
**Update 1:** Read the following, and consider that the resulting design will allow you to actually test the encryption logic without hitting files/databases - <http://www.lostechies.com/blogs/gabrielschenker/archive/2009/01/30/the-dependency-inversion-principle.aspx> (not in Java, but it illustrates the issue very well) ... also note that you can do really focused integration tests for the readers/writers, instead of having to test it all together.
I suggest:
* Gradually include real unit tests on your system. You can do this when doing changes and developing new features, refactoring appropriately.
* When doing the previous, include focused integration tests where appropriate. Make sure you are able to run the unit tests separated from the integration tests.
* Consider your tests are close to testing the system as a whole, thus are different from automated acceptance tests only in that they operate on the border of the API. Given this think about factors related to the importance of the API for the product (like if it will be used externally), and whether you have good coverage with automated acceptance tests. This can help you understand what is the value of having these on your system, and also why they naturally take so long. Take a decision on whether you will be testing the system as a whole on the interface level, or both the interface+api level.
**Update 2:** Based on other answers, I want to clarify something regarding doing TDD. Let's say you have to check whether some given logic sends an email, logs the info to a file, saves data to the database, and calls a web service (not all at once, I know, but you start adding tests for each of those). In each test you don't want to hit the external systems; what you really want to test is whether the logic makes the calls to those systems that you expect it to. So when you write a test that checks that an email is sent when you create a user, what you test is whether the logic calls the dependency that does that. Notice that you can write these tests and the related logic without actually having to implement the code that sends the email (and then having to access the external system to know what was sent ...). This will help you focus on the task at hand and help you get a decoupled system. It will also make it simple to test what is being sent to those systems.
"",
"java",
"unit-testing",
"spring",
"junit",
""
] |
We are building an application that may need to be replicated to many servers (I'm hoping not, but there's no way to know if clients will need their own client/server copy until after we release the online product).
Since the number of servers is unknown, I've decided to use GUIDs for any transactional table IDs. Since each client has their own database, I intend to use the NEWSEQUENTIALID() default, and the largest table will add no more than 1.5M rows per year (but on average 15K rows), I don't expect much of a performance problem.
However, I'm not sure how to handle cases where we want the foreign key to indicate 'none selected'. For example, a client has a single admin user. This is set up as a foreign key (login\_id) to login.id (a GUID). However, if a client doesn't yet have an admin user, how would we easily set up a "None Selected" key?
In prior applications we used IDENTITY columns, and inserted a dummy entry in most tables with an ID of 0. Is there an accepted approach to providing similar functionality with a GUID? | You have 3 options:
1) In your referenced table, add the blank value row with a guid of all zeros, then link to this record
2) Just store a null for the blank references
3) Don't have the blank records in the reference table, but store an all-zero GUID. This may cause problems if joins are done in the database, or with reports. You'll have to code for this special case.
I'd say 1 and 2 are the only good options. | The only safe way to do this is to set the FK value to NULL. If you need to replicate data between multiple servers, make sure every table that you replicate has GUIDs as its primary key; that way, foreign key relationships are never a problem.
Replication can work very simply this way: replicate inserts and updates first, in parent-to-child table order; after that, replicate the deleted records from child to parent (reverse order).
Hope this helps. | 'None' or 'Not Used' foreign key when using GUIDs as primary key | [
"",
"sql",
"sql-server",
"primary-key",
"guid",
""
] |
I am trying to add a rich text editor to my web page where users can write reviews and format what they have written... something similar to the editor in which we write our posts on this site...
Can anyone point me in the right direction regarding this... any tutorial that would help me build such a component?
Also, I want a free product. (Forgot to mention that earlier.) | After much research, here is what I did:
I needed a control that was free and easy to use. All the editors I went through came with a license, so I decided to make my own control.
Well, I couldn't do that entirely on my own, so I used the help of a few sites:
[javascript Rich Text Editors](https://stackoverflow.com/questions/299577/javascript-rich-text-editors)
<http://aspalliance.com/1092_Rich_Text_Editor_Part_I>
<http://ws.aspalliance.com/1092_Rich_Text_Editor_Part_II> and a few others too...
So the best option for me was to build my own control, which I could customize according to my needs. | Something along the lines of [TinyMCE](http://tinymce.moxiecode.com/) or [FCKeditor](http://www.fckeditor.net/), perhaps.
They're quite complete, and customisable. | Rich Text Editor on a web page | [
"",
"c#",
"asp.net",
"rich-text-editor",
""
] |
I'm making a game and I am using Python for the server side.
It would be fairly trivial to implement chat myself using Python - that's not my question.
**My question is**
I was just wondering if there were any pre-made chat servers or some kind of service that I would be able to implement inside of my game instead of rolling my own chat server?
Maybe like a different process I could run next to my game server process? | I recommend using XMPP/Jabber. There are a lot of libraries for clients and servers in different languages. It's free/open source.
<http://en.wikipedia.org/wiki/XMPP> | Honestly, I think it'd be best for you to roll your own and get it tightly integrated with your program. I know there's no sense in reinventing the wheel, but there are several advantages to doing so in your case: integration, learning, security, and simplicity. | Implementing chat in an application? | [
"",
"python",
"chat",
""
] |
Given the following table:
```
Table events
id
start_time
end_time
```
Is there a way to quickly find the rows whose time range contains a given constant?
E.g.
```
SELECT *
FROM events
WHERE start_time<='2009-02-18 16:27:12'
AND end_time>='2009-02-18 16:27:12'
```
I am using MySQL. Even with an index on either field, a range still has to be scanned. Moreover, an index on both fields will not make a difference (only the first will be used).
I can add fields / indexes to the table (so adding an indexed constructed field containing the info of both fields would be acceptable).
P.S. The need for this came from this question: [Optimize SQL that uses between clause](https://stackoverflow.com/questions/557425/optimize-sql-that-uses-between-clause) | There is one caveat to my solution:
1) The caveat to this solution is that you must be using the MyISAM engine for the events table. If you cannot use MyISAM then this solution wont work because only MyISAM is supported for Spatial Indexes.
So, assuming that the above isn't an issue for you, the following should work and give you good performance:
This solution makes use of MySQL's support for Spatial Data (see [documentation here](http://dev.mysql.com/doc/refman/5.0/en/spatial-extensions.html)). While spatial data types can be added to a variety of storage engines, only MyISAM supports Spatial R-Tree Indexes (see [documentation here](http://dev.mysql.com/doc/refman/5.0/en/creating-spatial-indexes.html)), which are needed in order to get the required performance. One other limitation is that spatial data types only work with numerical data, so you cannot use this technique with string-based range queries.
I won't go into the details of the theory behind how spatial types work and how the spatial index is useful, but you should look at [Jeremy Cole's explanation here](http://jcole.us/blog/archives/2007/11/24/on-efficiently-geo-referencing-ips-with-maxmind-geoip-and-mysql-gis/) with regard to how to use spatial data types and indexes for GeoIP lookups. Also look at the comments, as they raise some useful points and alternatives if you need raw performance and can give up some accuracy.
The basic premise is that we can take the start/end and use the two of them to create four distinct points, one for each corner of a rectangle centered around 0,0 on a xy grid, and then do a quick lookup into the spatial index to determine if the particular point in time we care about is within the rectangle or not. As mentioned previously, see Jeremy Cole's explanation for a more thorough overview of how this works.
In your particular case we will need to do the following:
1) Alter the table to be a MyISAM table (note you shouldn't do this unless you are fully aware of the consequences of such a change like the lack of transactions and the table locking behavior that are associated with MyISAM).
```
alter table events engine = MyISAM;
```
2) Next we add the new column that will hold the spatial data. We will use the polygon data type as we need to be able to hold a full rectangle.
```
alter table events add column time_poly polygon NOT NULL;
```
3) Next we populate the new column with the data (please keep in mind that any processes that update or insert into table events will need to get modified to make sure they are populating the new column also). Since the start and end ranges are times, we will need to convert them to numbers with the unix\_timestamp function (see [documentation here](http://dev.mysql.com/doc/refman/5.1/en/date-and-time-functions.html#function_unix-timestamp) for how it works).
```
update events set time_poly := LINESTRINGFROMWKB(LINESTRING(
POINT(unix_timestamp(start_time), -1),
POINT(unix_timestamp(end_time), -1),
POINT(unix_timestamp(end_time), 1),
POINT(unix_timestamp(start_time), 1),
POINT(unix_timestamp(start_time), -1)
));
```
4) Next we add the spatial index to the table (as mentioned previously, this will only work for a MyISAM table and will produce the error "ERROR 1464 (HY000): The used table type doesn't support SPATIAL indexes").
```
alter table events add SPATIAL KEY `IXs_time_poly` (`time_poly`);
```
5) Next you will need to use the following select in order to make use of the spatial index when querying the data.
```
SELECT *
FROM events force index (IXs_time_poly)
WHERE MBRCONTAINS(events.time_poly, POINTFROMWKB(POINT(unix_timestamp('2009-02-18 16:27:12'), 0)));
```
The force index is there to make 100% sure that MySQL will use the index for the lookup. If everything went well running an explain on the above select should show something similar to the following:
```
mysql> explain SELECT *
-> FROM events force index (IXs_time_poly)
-> on MBRCONTAINS(events.time_poly, POINTFROMWKB(POINT(unix_timestamp('2009-02-18 16:27:12'), 0)));
+----+-------------+-------+-------+---------------+---------------+---------+------+------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+-------+---------------+---------------+---------+------+------+-------------+
| 1 | SIMPLE | B | range | IXs_time_poly | IXs_time_poly | 32 | NULL | 1 | Using where |
+----+-------------+-------+-------+---------------+---------------+---------+------+------+-------------+
1 row in set (0.00 sec)
```
Please refer to Jeremy Cole's analysis for details about the performance benefits of this method as compared with a between clause.
Let me know if you have any questions.
Thanks,
-Dipin | There is no efficient way to do exactly this query in `MySQL`.
If your ranges do not overlap, though, you can just use `start_time <= const` along with `ORDER BY start_time DESC LIMIT 1` and further checking for `end_time >= const`.
You'll need to do it in a function, as `MySQL` for some reason doesn't use `INDEX RANGE SCAN` for `ORDER BY` in a subquery if the range condition is taken from a superquery.
```
CREATE UNIQUE INDEX ux_b_start ON b (start_date);
CREATE FUNCTION `fn_get_last_b`(event_date TIMESTAMP) RETURNS int(11)
BEGIN
DECLARE id INT;
SELECT b.id
INTO id
FROM b
FORCE INDEX (ux_b_start)
WHERE b.start_time <= event_date
ORDER BY
b.start_time DESC
LIMIT 1;
RETURN id;
END;
SELECT COUNT(*) FROM a;
1000
SELECT COUNT(*) FROM b;
200000
SELECT *
FROM (
SELECT fn_get_last_b(a.event_time) AS bid,
a.*
FROM a
) ao, b FORCE INDEX (PRIMARY)
WHERE b.id = ao.bid
AND b.end_time >= ao.event_time
1000 rows fetched in 0,0143s (0,1279s)
``` | Optimize query selecting a period | [
"",
"sql",
"mysql",
"query-optimization",
""
] |
I have a left outer join (below) returning results as expected. I need to limit the results from the 'right' table to the 'first' hit. Can I do that somehow? Currently, I get a result for every record in both tables; I only want to see one result from the table on the left (items) no matter how many results I have in the right table (photos).
```
var query = from i in db.items
join p in db.photos
on i.id equals p.item_id into tempPhoto
from tp in tempPhoto.DefaultIfEmpty()
orderby i.date descending
select new
{
itemName = i.name,
itemID = i.id,
id = i.id,
photoID = tp.PhotoID.ToString()
};
GridView1.DataSource = query;
GridView1.DataBind();
``` | This will do the job for you.
```
from i in db.items
let p = db.photos.Where(p2 => i.id == p2.item_id).FirstOrDefault()
orderby i.date descending
select new
{
itemName = i.name,
itemID = i.id,
id = i.id,
    photoID = p == null ? null : p.PhotoID.ToString()
}
```
I got this sql when I generated it against my own model (and without the name and second id columns in the projection).
```
SELECT [t0].[Id] AS [Id], CONVERT(NVarChar,(
SELECT [t2].[PhotoId]
FROM (
SELECT TOP (1) [t1].[PhotoId]
FROM [dbo].[Photos] AS [t1]
WHERE [t1].[Item_Id] = ([t0].[Id])
) AS [t2]
)) AS [PhotoId]
FROM [dbo].[Items] AS [t0]
ORDER BY [t0].[Id] DESC
```
When I asked for the plan, it showed that the subquery is implemented by this join:
```
<RelOp LogicalOp="Left Outer Join" PhysicalOp="Nested Loops">
``` | What you want to do is group the table. The best way to do this is:
```
var query = from i in db.items
join p in (from p in db.photos
group p by p.item_id into gp
where gp.Count() > 0
                       select new { item_id = gp.Key, Photo = gp.First() })
on i.id equals p.item_id into tempPhoto
from tp in tempPhoto.DefaultIfEmpty()
orderby i.date descending
select new
{
itemName = i.name,
itemID = i.id,
id = i.id,
photoID = tp.Photo.PhotoID.ToString()
};
```
---
Edit: This is Amy B speaking. I'm only doing this because Nick asked me to. Nick, please modify or remove this section as you feel is appropriate.
The SQL generated is quite large. The int 0 (to be compared with the count) is passed in via parameter.
```
SELECT [t0].X AS [id], CONVERT(NVarChar(MAX),(
SELECT [t6].Y
FROM (
SELECT TOP (1) [t5].Y
FROM [dbo].[Photos] AS [t5]
WHERE (([t4].Y IS NULL) AND ([t5].Y IS NULL)) OR (([t4].Y IS NOT NULL) AND ([t5].Y IS NOT NULL) AND ([t4].Y = [t5].Y))
) AS [t6]
)) AS [PhotoId]
FROM [dbo].[Items] AS [t0]
CROSS APPLY ((
SELECT NULL AS [EMPTY]
) AS [t1]
OUTER APPLY (
SELECT [t3].Y
FROM (
SELECT COUNT(*) AS [value], [t2].Y
FROM [dbo].[Photos] AS [t2]
GROUP BY [t2].Y
) AS [t3]
WHERE (([t0].X) = [t3].Y) AND ([t3].[value] > @p0)
) AS [t4])
ORDER BY [t0].Z DESC
```
The execution plan reveals three left joins. At least one is trivial and should not be counted (it brings in the zero). There is enough complexity here that I cannot clearly point to any problem for efficiency. It might run great. | How to limit a LINQ left outer join to one row | [
"",
"c#",
"asp.net",
"database",
"linq",
"join",
""
] |
I have a textbox control Super1 in my MasterPage.
I am using javascript to access this control from my content page like this:
```
<asp:Content ID="ContentPage" ContentPlaceHolderID="ContentPlaceHolder1" Runat="Server">
<script type="text/javascript">
function Somethin() {
    document.forms[0].elements['Super1'].value = "sdfsd";
    //document.getElementById('<%=Super1.ClientID%>').value = "sdfsdf";
}
</script>
</asp:Content>
```
But on page load it says Super1 is not found. How can I access Super1? | In your master page's OnLoad, add this code:
```
string script = @"<script>
function Somethin() {
document.getElementById('" + Super1.ClientID + @"').value = 'sdfsd';
}
Somethin();
</script>";
if (!Page.ClientScript.IsClientScriptBlockRegistered("somethin_script_block"))
{
Page.ClientScript.RegisterClientScriptBlock(this.GetType(), "somethin_script_block", script);
}
```
this will add your script to the end of the page.
**EDIT:** I just realized you use your control's ID directly in your JavaScript code; this may cause the exception. I updated your code to fix it.
I hope this helps. | You have to make sure the document has loaded; call any functions that rely on the DOM from the window's onload handler. E.g.:
```
<script type="text/javascript">
window.onload = function() {
Somethin();
}
</script>
``` | Javascript to access control in MasterPage | [
"",
"asp.net",
"javascript",
""
] |
Suppose I have a code executed in Unix this way:
```
$ ./mycode
```
My question is: is there a way I can time the running time of my code
executed K times? The value of K = 1000, for example.
I am aware of the Unix "time" command, but that only times 1 instance. | try
```
$ time ( your commands )
```
write a loop to go in the parens to repeat your command as needed.
### Update
Okay, we can solve the command-line-too-long issue. This is bash syntax; if you're using another shell you may have to use *expr(1)*.
```
$ time (
> while ((n++ < 100)); do echo "n = $n"; done
> )
real 0m0.001s
user 0m0.000s
sys 0m0.000s
``` | to improve/clarify on Charlie's answer:
```
time (for i in $(seq 10000); do ./mycode; done)
``` | Unix Command For Benchmarking Code Running K times | [
"",
"c++",
"unix",
"benchmarking",
"performance",
""
] |
I need to create an empty map.
```
if (fileParameters == null)
fileParameters = (HashMap<String, String>) Collections.EMPTY_MAP;
```
The problem is that the above code produces this warning:
**Type safety: Unchecked cast from Map to HashMap**
What is the best way to create this empty map? | **1)** If the Map can be immutable:
```
Collections.emptyMap()
// or, in some cases:
Collections.<String, String>emptyMap()
```
You'll have to use the latter sometimes when the compiler cannot automatically figure out what kind of Map is needed (this is called [*type inference*](http://docs.oracle.com/javase/tutorial/java/generics/genTypeInference.html)). For example, consider a method declared like this:
```
public void foobar(Map<String, String> map){ ... }
```
When passing the empty Map directly to it, you have to be explicit about the type:
```
foobar(Collections.emptyMap()); // doesn't compile
foobar(Collections.<String, String>emptyMap()); // works fine
```
**2)** If you need to be able to modify the Map, then for example:
```
new HashMap<String, String>()
```
(as [tehblanx pointed out](https://stackoverflow.com/questions/636126/best-way-to-create-an-empty-map-in-java/636134#636134))
---
**Addendum**: If your project uses [**Guava**](https://github.com/google/guava), you have the following alternatives:
**1)** Immutable map:
```
ImmutableMap.of()
// or:
ImmutableMap.<String, String>of()
```
Granted, no big benefits here compared to `Collections.emptyMap()`. [From the Javadoc](https://google.github.io/guava/releases/19.0/api/docs/com/google/common/collect/ImmutableMap.html#of%28%29):
> This map behaves and performs comparably to `Collections.emptyMap()`,
> and is preferable mainly for consistency and maintainability of your
> code.
**2)** Map that you can modify:
```
Maps.newHashMap()
// or:
Maps.<String, String>newHashMap()
```
[`Maps`](https://google.github.io/guava/releases/19.0/api/docs/com/google/common/collect/Maps.html) contains similar factory methods for instantiating other types of maps as well, such as [`TreeMap`](https://google.github.io/guava/releases/19.0/api/docs/com/google/common/collect/Maps.html#newTreeMap%28%29) or [`LinkedHashMap`](https://google.github.io/guava/releases/19.0/api/docs/com/google/common/collect/Maps.html#newLinkedHashMap%28%29).
---
**Update (2018)**: On **Java 9** or newer, the shortest code for creating an immutable empty map is:
```
Map.of()
```
...using the new [convenience factory methods](https://docs.oracle.com/javase/9/docs/api/java/util/Map.html#of--) from [JEP 269](http://openjdk.java.net/jeps/269). | [Collections.emptyMap()](http://java.sun.com/javase/6/docs/api/java/util/Collections.html#emptyMap()) | Best way to create an empty map in Java | [
"",
"java",
"dictionary",
"collections",
"hashmap",
""
] |
I am a developer working on Visual C++, but in my project there are some [Delphi](http://en.wikipedia.org/wiki/Embarcadero_Delphi) components. I need to debug the Delphi components to fix some issues.
What must I do to generate a DLL file in debug mode and then start debugging it in Delphi? | In Delphi 7 you would do this:
Project | Options | Compiler | Debugging | Debug information (check)
Then go to Run | Parameters | Host Application and enter the name of your exe.
Add some breakpoints in your DLL code and then click run. Your exe will be loaded and you can debug the DLL parts in the Delphi IDE.
If your exe is already running, click Run | Attach to process
-- I've tested this and found that I also needed to check the "Include remote debug symbols" on the Linker page of project options in Delphi 7.
I was able to get a breakpoint to hit using the Run | Parameters as well as Run | Attach to process methods. The test DLL I had created had a single stdcall function and was dynamically loaded by a Visual C++ console application. | We use this quite often (using Delphi).
Be sure to:
1. Enable all debug options on all projects (DLL file(s)). And disable optimization.
2. Be sure to set the host application to the right EXE file.
3. Build DLL file(s).
You can now put breakpoints in both dll and exe. And run the DLL file from the IDE. It starts the EXE file and stops at the requested breakpoints.
It even works when DLL files are dynamically linked (if they are unloaded, the blue dots disappear).
"",
"c++",
"delphi",
""
] |
The way I currently handle this is by having multiple config files such as:
```
web.config
web.Prod.config
web.QA.config
web.Dev.config
```
When the project gets deployed to the different environments I just rename the corresponding file with the correct settings.
Anyone have suggestions on how to handle this better?
EDIT:
Here are some of the things that change in each config:
* WCF Client Endpoint urls and security
* Custom Database configs
* Session connection strings
* log4net settings | Scott Gu had an [article](http://weblogs.asp.net/scottgu/archive/2007/09/21/tip-trick-automating-dev-qa-staging-and-production-web-config-settings-with-vs-2005.aspx) on this once. The solution he presented was to use a Pre-build event to copy the correct config into place depending on the build configuration chosen.
I also noticed that there already is a similar [question](https://stackoverflow.com/questions/305447/using-different-web-config-in-development-and-production-environment) here on SO. | Transforms seem really helpful for this. You can replace certain sections with different rules.
<http://msdn.microsoft.com/en-us/library/dd465318(v=vs.100).aspx> | How do you handle multiple web.config files for multiple environments? | [
"",
"c#",
"asp.net",
"web-applications",
""
] |
What is the best (easiest) way to take a screenshot of a running application with C++ under Windows? | You have to get the device context of the window ([`GetWindowDC()`](http://msdn.microsoft.com/en-us/library/dd144947(VS.85).aspx)) and copy the image ([`BitBlt()`](http://msdn.microsoft.com/en-us/library/dd183370(VS.85).aspx)) from it. Depending on what else you know about the application, you will use different methods to find which window's handle to pass into `GetWindowDC()`. | `GetDC(NULL)` + `BitBlt()`
To capture translucent/alpha/layered windows, you must pass the `CAPTUREBLT` flag to `BitBlt`. If you do that, the cursor blinks; read [this technet article](http://technet.microsoft.com/en-us/magazine/2009.02.windowsconfidential.aspx) to find out why.
On NT6+, you might be able to use the [Magnification API](http://msdn.microsoft.com/en-us/library/ms692402(VS.85).aspx) to do what you want. | What is the best way to take screenshots of a Window with C++ in Windows? | [
"",
"c++",
"windows",
"screenshot",
""
] |
Let's say I have a class that exposes one property. Is it considered to be a good approach to use the private "holder variable" for internal use in the class? Or should I use the property for internal use also?
To explain, should I use:
```
public class foo
{
String _statusHolder;
public String myStaus
{
get { return _statusHolder; }
set{ _statusHolder = value; }
}
public void DisplayMyStatus()
{
Console.WriteLine(_statusHolder);
}
}
```
Or:
```
public class foo
{
String _statusHolder;
public String myStaus
{
get { return _statusHolder; }
set{ _statusHolder = value; }
}
public void DisplayMyStatus()
{
Console.WriteLine(myStaus);
}
}
```
I could see it as being more consistent and more readable to use the second approach. It would also be more effective if I later make some modifications in the set-statement. But are there any performance issues, or is it considered bad practice for some reason?
---
**EDIT:**
It seems that everybody is leaning towards using the property internally. My initial thoughts were the same, but as a novice programmer, you can never know.
Thanks everyone for the quick feedback! | I tend to go with calling the properties, because once stuff gets tricky you can put locking and business logic in the getter.
For C# 3.0 I would go with something along these lines (and only explicitly create the backing field when it's really needed)
```
public class foo
{
public String Status
{
get;
set;
}
public void DisplayMyStatus()
{
Console.WriteLine(Status);
}
}
``` | Performance issues should be negligable, as the JITer or compiler will happily work out that your function call (the getter of the property) doesn't do anything exciting, and can be inlined.
The benefit is future changes to business logic that might be put in the getter, which your class will then automatically take advantage of, without refactoring too much.
Of course, the downside is, you might want to avoid that new business logic in some circumstances, so it is something that needs to be considered based on how likely a) logic will change, and b) that logic might need to be circumvented.
The other (potential) advantage of using the property internally is that you can easily move to, or from, automatic properties. | Class internal usage of public properties | [
"",
"c#",
""
] |
I have taken a dump from my mysql database on my own server (version 5.0.67) and I have tried to import it to my other database (version 5.0.54). However, I get the error:
```
ERROR 1193 (00000) at line 23: Unknown system variable 'character_set_client'
```
I think it refers to this part:
```
SET @saved_cs_client = @@character_set_client;
SET character_set_client = utf8;
```
I have tried removing all the above lines but that made the error worse:
```
ERROR 1064 (00000) at line 23: You have an error in your SQL syntax. Check the manual that corresponds to your MySQL server version for the right syntax to use near 'DEFAULT CHARSET=latin1 COMMENT='InnoDB free: 10240 kB; (`member
```
Any help would be greatly appreciated.
## UPDATE
I used the compatibility option like so but when importing I still get the same error!? WTF
```
mysqldump -u root -p dbname --compatible=mysql40 > mrdb.sql
``` | If you're using `mysqldump` to do the data dump, then there's a `--compatible` option which you can set to some earlier version. 5.0.54 is kinda buggy so upgrade that beast when you have a chance. | If you install phpMyAdmin, you can use it to export the database. When you click 'Export' you will see an option for 'compatibility mode' where you can select MySQL 4 and 3. | MySQL import/export version incompatibility | [
"",
"mysql",
"sql",
"mysql-error-1193",
""
] |
How can I (easily) take a string such as `"sin(x)*x^2"` which might be entered by a user at runtime and produce a Python function that could be evaluated for any value of `x`? | Python's own internal compiler can parse this, if you use Python notation.
If you change the notation slightly, you'll be happier.
```
import compiler
eq = "sin(x)*x**2"
ast = compiler.parse(eq)
```
You get an abstract syntax tree that you can work with. | **EDIT** `parser` is deprecated in Python 3.9: <https://docs.python.org/3/whatsnew/3.9.html#new-parser>
You can use Python `parser`:
```
import parser
from math import sin
formula = "sin(x)*x**2"
code = parser.expr(formula).compile()
x = 10
print(eval(code))
```
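For completeness, a rough Python 3 sketch of the same idea (both the `compiler` and `parser` modules are gone there), using `ast.parse` plus `compile` and evaluating against a restricted namespace. The name whitelist is my addition, not part of either answer above:

```python
import ast
import math

formula = "sin(x)*x**2"

# Parsing first means malformed input fails with SyntaxError
# before anything is evaluated.
tree = ast.parse(formula, mode="eval")
code = compile(tree, "<formula>", "eval")

def f(x):
    # Evaluate with empty builtins and an explicit whitelist of names.
    return eval(code, {"__builtins__": {}}, {"sin": math.sin, "x": x})

print(f(10))  # sin(10) * 10**2
```

Compiling once and calling `f` repeatedly avoids re-parsing the formula on every evaluation.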
It performs better than pure `eval` and, of course, avoids code injection! | Equation parsing in Python | [
"",
"python",
"parsing",
"equation",
""
] |
I found this code:
```
this.Invoke(new EventHandler(EventGetSum));
```
Is this not the same as writing:
```
EventGetSum();
```
What's the use of this? | If you write `EventGetSum()` that immediately *calls* the EventGetSum method.
If you write `new EventHandler(EventGetSum)` that creates a delegate which will (in turn) call EventGetSum when it's invoked.
The call to [`Control.Invoke`](http://msdn.microsoft.com/en-us/library/system.windows.forms.control.invoke.aspx) invokes the given delegate from the UI thread responsible for the control. This is necessary because you mustn't access UI elements from arbitrary threads. | It executes the `EventGetSum` method in the thread that the window `this` belongs to. | What's the use of Invoke() in .net? | [
"",
"c#",
".net",
"invoke",
""
] |
What would be the best way to go about getting a function that returns a random English word (preferably a noun), without keeping a list of all possible words in a file beforehand? | Word lists need not take up all that much space.
Here's a [JSON wordlist with 2,465 words, all nouns](https://www.randomlists.com/data/words.json). It clocks in at under 50K, the size of a medium-sized jpeg image.
I'll leave choosing a random one as an exercise for the reader. | You can't. There is no algorithm to generate meaningful words. You can only generate words that sound like English, but they won't have any meaning. | How to pick a random English word from a list | [
"",
"python",
"random",
"word-list",
""
] |
What size do you use for common database fields, like firstName, lastName, Email, password, etc.? I see these common fields in a lot of databases for blogs, forums, e-commerce sites, etc. But I don't know if there is some reference or default for the size of these common fields. So, I want to know what method/reference/basis you use for selecting sizes for common fields. | Partly, it depends on your DBMS. Some, like MySQL 5, care about the length of a VARCHAR(n) column as opposed to an unlimited-length TEXT column; others, like PostgreSQL, consider TEXT and VARCHAR(n) as internally identical, with the exception that a length is checked on VARCHAR(n) columns. Writing something like VARCHAR(65536) in PostgreSQL is silly; if you want an unlimited-length column, choose TEXT and be done with it.
Of course, sometimes trying to store too long of a value will break your layout, or allow someone to abuse the system by choosing a long name with no spaces (for example). Usually what I do for username fields like that is just choose an upper length such that anyone who wants a longer username is trying to cause trouble; 64 characters is a nice round value and seems to work well. For real names and addresses (which aren't frequently displayed to users like a username is), you'll want to go with something longer. You want a value large enough that it can accept any valid input, but not so large that someone could stuff a gigabyte-long string in the field just to attack your system. 1024 characters is pretty reasonable: 1k is an small enough amount of text to easily work with, a round number, and larger than any sane address line or name.
Email addresses can, per the associated RFC whose number I am too lazy to look up right now, be no longer than 320 characters. So there's your email field length. Turns out that SMTP limits the length of fields to 256 characters; since email addresses must be bracketed, the longest valid email address is actually 254 characters. ([This page](http://www.eph.co.uk/resources/email-address-length-faq/) goes into more detail.) So *there's* your email field length.
Passwords should ***NEVER*** be stored in plaintext, so your password field should be a byte array or BLOB type exactly long enough to store the output of the hash function you are using (or largest element of the cryptographic group in use, for more advanced schemes like SRP-6a). | What I tend to do is think of how long a field's value could possibly be, then double that to be safe.
E.g. Name: varchar(70)
Email: varchar(200) | Default size for database fields | [
"",
"sql",
"database",
"database-design",
"field",
""
] |
Is there an easy and fast way to calculate the rank of a field in a database using Ruby on Rails? For example, if I have a math\_scores table and would like to find where a given score ranks, I could do a
```
MathScore.find(:all, :conditions => ..., :order => ...)
```
then iterate through all of them to find out where the test score falls, but there's got to be a more straightforward way... Any advice?
Here's some info on the schema, it's just a simple table:
`first_name varchar(50)`
`last_name varchar(50)`
`test_id int`
`score float`
Clarification:
I guess my question is closer to: how would I retrieve the rank value when doing:
```
rank = MathScore.find_by_sql(
"select count(*) as rank
from (
select * from math_scores
where score > (select score from high_scores where test_id = 33
AND first_name = 'John' AND last_name = 'Doe'
)
order by score desc
) as s"
)
```
I get `[#<HighScore:0x6ca4724 @attributes={"rank"=>"3"}>]:Array` based upon the query, but how do I get at the rank value?
Thanks in advance,
Ben | ```
sql = "select count(*) as rank from (select * from math_scores where score > (select score from high_scores where test_id = 33 AND first_name = 'John' AND last_name = 'Doe') order by score desc) as s"
rank = MathScore.find_by_sql(sql)[0].rank
``` | You can use [named scopes](http://railscasts.com/episodes/108-named-scope) with Rails 2.2 to help with this, eg.
```
class MathScore
named_scope :passed, :conditions => {:score => 60..100 }
named_scope :failed, :conditions => {:score => 0..59}
end
```
Which would allow you to do the following in a controller:
```
@passing_scores = MathScore.passed
@failing_scores = MathScore.failed
```
So you can iterate through them in a view.
---
In response to your clarification (in erb):
```
<% your_array.each do |hs| %>
<%= hs.rank %>
<% end %>
``` | Ruby on Rails calculating "rank" based upon database values? | [
"",
"sql",
"ruby-on-rails",
"database",
""
] |
I am new to using RMI and I am relatively new to using exceptions.
I want to be able to throw an exception over RMI (is this possible?)
I have a simple server which serves up students, and I have a delete method which, if the student doesn't exist, should throw a custom StudentNotFoundException that extends RemoteException (is this a good thing to do?)
Any advice or guidance would be greatly appreciated.
Server Interface method
```
/**
* Delete a student on the server
*
* @param id of the student
* @throws RemoteException
* @throws StudentNotFoundException when a student is not found in the system
*/
void removeStudent(int id) throws RemoteException, StudentNotFoundException;
```
Server method implementation
```
@Override
public void removeStudent(int id) throws RemoteException, StudentNotFoundException
{
Student student = studentList.remove(id);
if (student == null)
{
throw new StudentNotFoundException("Student with id:" + id + " not found in the system");
}
}
```
Client method
```
private void removeStudent(int id) throws RemoteException
{
try
{
server.removeStudent(id);
System.out.println("Removed student with id: " + id);
}
catch (StudentNotFoundException e)
{
System.out.println(e.getMessage());
}
}
```
StudentNotFoundException
```
package studentserver.common;
import java.rmi.RemoteException;
public class StudentNotFoundException extends RemoteException
{
private static final long serialVersionUID = 1L;
public StudentNotFoundException(String message)
{
super(message);
}
}
```
Thank you for your reply. I have now managed to fix my problem and realised that extending RemoteException was a bad idea. | It's OK to throw any kind of exception (even custom ones), just make sure to package them up in your export .jar file (if you're using a version of Java where you need to do this manually).
I wouldn't subclass RemoteException, though. Those are typically thrown if there is some kind of connection problem. Presumably, your client will handle connection problems differently from other types of problems. You might tell the user the server is down when you catch a RemoteException, or connect to a different server. For StudentNotFoundException, you probably want to give the user another chance at entering the student info. | Yes, it's possible to throw exceptions via RMI.
No, it's not a good idea to extend `RemoteException` to report application failures. `RemoteException` signals a failure in the remoting mechanism, like a network failure. Use an appropriate exception, extending `java.lang.Exception` yourself if necessary.
For a more detailed explanation, [look at another of my answers](https://stackoverflow.com/questions/351205/bad-idea-to-chain-exceptions-with-rmi/351279#351279). In a nutshell, be careful about chaining exceptions when using RMI. | RMI and exceptions | [
"",
"java",
"networking",
"rmi",
""
] |
I'd like to write a simple application which keeps track of its current memory usage, number of created objects etc. In C++ I'd normally override the new operator, but for obvious reasons I can't do this in C#. Is there any way to do this without using a profiler? | You might want to start with the Garbage Collector. MSDN has some members listed [here](http://msdn.microsoft.com/en-us/library/system.gc_members.aspx) that can show you how to do a few things, like get the total amount of memory it thinks is allocated, how many times the GC has collected. Anything more advanced than that, like getting a count of objects of your loaded assembly and you'll have to probably use a profiler or write something yourself. | Using WMI try :
To get process usage (W2K3/2K8) :
```
"SELECT IDProcess, PercentPrivilegedTime, PercentProcessorTime, PercentUserTime FROM Win32_PerfFormattedData_PerfProc_Process where Name='process_name.exe'"
```
To identify your site use this :
```
"SELECT ProcessId, CommandLine, WorkingSetSize, ThreadCount, PrivatePageCount, PageFileUsage, PageFaults, HandleCount, CreationDate, Caption FROM Win32_Process where Caption='process_name.exe'"
```
Use this tool for [testing WQL queries](http://code.msdn.microsoft.com/NitoWMI)
Or use PerfMon tool.
For more information about counters, see [Windows System Resource Manager Accounting](http://download.microsoft.com/download/d/2/5/d2524d17-b893-46f9-bebe-b1f7b927e144/Windows%20System%20Resource%20Manager%20Accounting.doc), at the end of the doc.
Good luck. | Is it possible to track memory usage in a C# application without using a profiler? | [
"",
"c#",
".net",
"memory",
"memory-management",
""
] |
I have a datagridview that is linked to three columns in the database, and it's only displaying them (no editing). When I run the program in debug, and exit, when I return to the form in Visual Studio, it has my three columns, plus all the columns in the table it's linked to. If I don't remove these, the next time I run the program, they show up on the form. If I remove them, I have to do it every time I run the program.
Any ideas on how to fix this? | According to [this](http://74.125.77.132/search?q=cache:JQGWmnOG24YJ:social.msdn.microsoft.com/Forums/en-US/winformsdesigner/thread/39e239fa-ea7e-4e72-a12b-07948b066b5a/+visual+studio+after+running+program+datagridview&hl=en&ct=clnk&cd=3&gl=us&client=firefox-a) there is a bug in Visual Studio that requires you to set the binding source to none/null before changing it (It seems to have been around since 2005). I removed all the columns, set binding source to none, then reset the binding source to what I wanted and edited the columns. It is working now. | Is `AutoGenerateColumns` set to `True` ?
You should set it to `False` if you want to prevent the DGV from creating columns (in addition to those you created manually) from the datasource.
Edit: To clarify, this admittedly weird behaviour could result if the property is not set to False in design mode itself. I'm thinking of the corner case in which you set it to `False` at runtime. I would also take a look at the designer file (you might have to click the "show all files" option to view it) and see what it contains pertaining to the DGV. Might be a problem there.
Alternatively, have you tried deleting the control itself and creating a new DGV with the same bindings? | Visual Studio adds columns to DataGridView after running program | [
"",
"c#",
"winforms",
"visual-studio-2008",
"datagridview",
""
] |
I'm trying to store a number as a binary string in an array but I need to specify how many bits to store it as.
For example, if I need to store 0 with two bits I need a string "00". Or 1010 with 6 bits so "001010".
Can anyone help?
EDIT: Thanks guys, as I'm rubbish at maths/programming in general I've gone with the simplest solution which was David's. Something like:
```
binaryString.append(Integer.toBinaryString(binaryNumber));
for(int n=binaryString.length(); n<numberOfBits; n++) {
binaryString.insert(0, "0");
}
```
It seems to work fine, so unless it's very inefficient I'll go with it. | Use `Integer.toBinaryString()` then check the string length and prepend it with as many zeros as you need to make your desired length. | Forget about home-made solutions. Use standard [BigInteger](http://java.sun.com/j2se/1.4.2/docs/api/java/math/BigInteger.html) instead. You can specify number of bits and then use toString(int radix) method to recover what you need (I assume you need radix=2).
**EDIT:** I would leave bit control to BigInteger. The object will internally resize its bit buffer to fit the new number dimension. Moreover arithmetic operations can be carried out by means of this object (you do not have to implement binary adders/multipliers etc.). Here is a basic example:
```
package test;
import java.math.BigInteger;
public class TestBigInteger
{
public static void main(String[] args)
{
String value = "1010";
BigInteger bi = new BigInteger(value,2);
// Arithmetic operations
System.out.println("Output: " + bi.toString(2));
bi = bi.add(bi); // 10 + 10
System.out.println("Output: " + bi.toString(2));
bi = bi.multiply(bi); // 20 * 20
System.out.println("Output: " + bi.toString(2));
/*
         * Padded to the next even number of bits
*/
System.out.println("Padded Output: " + pad(bi.toString(2), bi.bitLength() + bi.bitLength() % 2));
}
static String pad(String s, int numDigits)
{
StringBuffer sb = new StringBuffer(s);
int numZeros = numDigits - s.length();
while(numZeros-- > 0) {
sb.insert(0, "0");
}
return sb.toString();
}
}
``` | (Java) Specify number of bits (length) when converting binary number to string? | [
"",
"java",
"binary",
""
] |
I am using iPython to run my code. I wonder if there is any module or command which would allow me to check the memory usage of an object. For instance:
```
In [1]: a = range(10000)
In [2]: %memusage a
Out[2]: 1MB
```
Something like `%memusage <object>` that returns the memory used by the object.
**Duplicate**
> [Find out how much memory is being used by an object in Python](https://stackoverflow.com/questions/33978/find-out-how-much-memory-is-being-used-by-an-object-in-python) | Unfortunately this is not possible, but there are a number of ways of approximating the answer:
1. for very simple objects (e.g. ints, strings, floats, doubles) which are represented more or less as simple C-language types you can simply calculate the number of bytes as with [John Mulder's solution](https://stackoverflow.com/a/563921/1922357).
2. For more complex objects a good approximation is to serialize the object to a string using cPickle.dumps. The length of the string is a good approximation of the amount of memory required to store an object.
There is one big snag with solution 2, which is that objects usually contain references to other objects. For example a dict contains string-keys and other objects as values. Those other objects might be shared. Since pickle always tries to do a complete serialization of the object it will always over-estimate the amount of memory required to store an object. | If you are using a [numpy array](http://docs.scipy.org/doc/numpy/reference/arrays.ndarray.html), then you can use the attribute `ndarray.nbytes` to evaluate its size in memory:
```
from pylab import *
d = array([2,3,4,5])
d.nbytes
#Output: 32
``` | How can I check the memory usage of objects in iPython? | [
"",
"python",
"memory",
"ipython",
""
] |
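The accepted answer's two approximations above (shallow size for simple objects, serialized length as an over-estimating upper bound) can be sketched in a few lines; `rough_size` here is a hypothetical helper, not a standard API:

```python
import pickle
import sys

def rough_size(obj):
    """Approximate an object's memory footprint.

    sys.getsizeof gives the shallow size; the pickled length is a rough
    upper bound that also counts referenced objects (and so over-estimates
    shared data, as noted above).
    """
    return max(sys.getsizeof(obj), len(pickle.dumps(obj)))

print(rough_size(list(range(100))))
```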
Scenario: I'd like to run commands on remote machines from a Java program over ssh (I am using OpenSSH on my development machine). I'd also like to make the ssh connection by passing the password rather than setting up keys as I would with 'expect'.
Problem: When trying to do the 'expect'-like password login, the Process that is created with ProcessBuilder cannot seem to see the password prompt. When running regular non-ssh commands (e.g. 'ls') I can get the streams and interact with them just fine. I am combining standard error and standard out into one stream with `redirectErrorStream(true);` so I am not missing it in standard error... When I run ssh with the '-v' option, I see all of the logging in the stream but I do not see the prompt. This is my first time trying to use ProcessBuilder for something like this. I know it would be easier to use Python, Perl or good ol' expect, but my boss wants to utilize what we are trying to get back (remote log files and running scripts) within an existing Java program, so I am kind of stuck.
Thanks in advance for the help! | The prompt might only be shown when `ssh` is connected to a TTY, which it isn't in the case of Java.
There's probably a way to supply the password on the command-line in your `ssh` application. That will be the way to get past the prompt.
Alternately, consider connecting directly to the host server from native Java code rather than running an external application. There's [a million libraries](http://www.google.com/search?q=java+ssh+library) that will do this. | Rather than using an external ssh program, why not use a Java ssh library:
* [Trilead](http://www.trilead.com/Products/Trilead_SSH_for_Java/)
* [JTA](http://javassh.org/space/start)
Are two I found with Google - that'll avoid the problem that OpenSSH works very hard to prevent entering the password on stdin - it opens the terminal directly. Expect has to work very hard to simulate a tty in order to work.
"",
"java",
"ssh",
"openssh",
""
] |
I have an N-tier structure composed of WCF nodes. I need to occasionally pass very large volumes of data from a terminal node to the top node, and I would like to avoid deserializing the very large data field during the intermediate hops. I can't pass directly to the top due to our failover strategy. Is there any way to avoid deserializing my field? Thanks for any help.
See [this](http://msdn.microsoft.com/en-us/library/system.runtime.serialization.ondeserializingattribute.aspx).
Also, the serialization events are covered in "[Programming WCF Services](http://oreilly.com/catalog/9780596521301/index.html)" (2nd Ed) by Juval Lowy in Chapter 3, pgs 107-110.
I'm not sure if you can completely short-circuit deserialization though... I've never tried. | I think Terry's on the right track. I would look at that event and by using a message contract you should be able to mark the part of the message you just want to pass through. You'll probably need to do some message manipulation (tear apart the incoming message, create a "custom" outgoing message) but you should be able to have the message continue on without being looked at.
Do a search for WS-Addressing too; it may provide a pattern for doing this. | WCF Passthrough | [
"",
"c#",
"wcf",
"serialization",
""
] |
I have a windows service that calls a page after a certain interval of time. The page in turn creates some reports.
The problem is that the service stops doing anything after 2-3 calls: it calls the page 2-3 times and then does no work at all, though it shows that the service is running... I am using timers in my service.
Please can someone help me with a solution here?
Thank you.
The code (where t1 is my timer):
```
protected override void OnStart(string[] args)
{
GetRecords();
t1.Elapsed += new ElapsedEventHandler(OnElapsedTime);
t1.Interval = //SomeTimeInterval
t1.Enabled = true;
t1.Start();
}
private void OnElapsedTime(object source, ElapsedEventArgs e)
{
try
{
GetRecords();
}
catch (Exception ex)
{
EventLog.WriteEntry(ex.Message);
}
}
public void GetRecords()
{
try
{
string ConnectionString = //Connection string from web.config
WebRequest Request = HttpWebRequest.Create(ConnectionString);
Request.Timeout = 100000000;
HttpWebResponse Response = (HttpWebResponse)Request.GetResponse();
}
catch (Exception ex)
{
}
}
``` | I think you're missing something about disposing your objects like StreamReader, WebRequest, etc. You should dispose your expensive objects (in particular the HttpWebResponse) after using them; each undisposed response keeps a connection open, which is exactly why the service hangs after 2-3 calls. | Well, what does the code look like? `WebClient` is the easiest way to query a page:
```
string result;
using (WebClient client = new WebClient()) {
result = client.DownloadString(address);
}
// do something with `result`
```
The timer code might also be glitchy if it is stalling... | Calling an aspx page from a windows service - Problem | [
"",
"c#",
"windows-services",
"timer",
""
] |
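The accepted answer's point above (undisposed responses pile up until further requests block) is really about deterministic cleanup; a hypothetical Python sketch of the same pattern, with `with` playing the role of C#'s `using`:

```python
class Resource:
    """A stand-in for an expensive handle such as an HTTP response."""

    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.close()   # runs even if the body raised
        return False

r = Resource()
with r:
    pass               # use the resource
print(r.closed)  # True
```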
There's been some debate going on in [this question](https://stackoverflow.com/questions/596162/can-you-remove-elements-from-a-stdlist-while-iterating-through-it/596180#596180) about whether the following code is legal C++:
```
std::list<item*>::iterator i = items.begin();
while (i != items.end())
{
bool isActive = (*i)->update();
if (!isActive)
{
items.erase(i++); // *** Is this undefined behavior? ***
}
else
{
other_code_involving(*i);
++i;
}
}
```
The problem here is that `erase()` will invalidate the iterator in question. If that happens before `i++` is evaluated, then incrementing `i` like that is technically undefined behavior, even if it appears to work with a particular compiler. One side of the debate says that all function arguments are fully evaluated before the function is called. The other side says, "the only guarantees are that i++ will happen before the next statement and after i++ is used. Whether that is before erase(i++) is invoked or afterwards is compiler dependent."
I opened this question to hopefully settle that debate. | Quoth the [C++ standard](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2798.pdf) 1.9.16:
> When calling a function (whether or
> not the function is inline), every
> value computation and side effect
> associated with any argument
> expression, or with the postfix
> expression designating the called
> function, is sequenced before
> execution of every expression or
> statement in the body of the called
> function. (Note: Value computations
> and side effects associated with the
> different argument expressions are
> unsequenced.)
So it would seem to me that this code:
```
foo(i++);
```
is perfectly legal. It will increment `i` and then call `foo` with the previous value of `i`. However, this code:
```
foo(i++, i++);
```
yields undefined behavior because paragraph 1.9.16 also says:
> If a side effect on a scalar object is
> unsequenced relative to either another
> side effect on the same scalar object
> or a value computation using the value
> of the same scalar object, the
> behavior is undefined. | To build on [Kristo's answer](https://stackoverflow.com/questions/598148/is-it-legal-to-use-the-increment-operator-in-a-c-function-call/598150#598150),
```
foo(i++, i++);
```
yields undefined behavior because the order in which function arguments are evaluated is unspecified (and, in the more general case, because reading and writing the same variable in unsequenced parts of an expression is undefined). You don't know which argument will be incremented first.
```
int i = 1;
foo(i++, i++);
```
might result in a function call of
```
foo(2, 1);
```
or
```
foo(1, 2);
```
or even
```
foo(1, 1);
```
Run the following to see what happens on your platform:
```
#include <iostream>
using namespace std;
void foo(int a, int b)
{
cout << "a: " << a << endl;
cout << "b: " << b << endl;
}
int main()
{
int i = 1;
foo(i++, i++);
}
```
On my machine I get
```
$ ./a.out
a: 2
b: 1
```
every time, but this code is **not portable**, so I would expect to see different results with different compilers. | Is it legal to use the increment operator in a C++ function call? | [
"",
"c++",
"function",
"standards",
""
] |
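For contrast with the unsequenced C++ arguments debated above, Python pins down left-to-right evaluation of call arguments, so the analogous program is fully defined; a small sketch (Python used purely for illustration):

```python
counter = {"i": 0}

def bump():
    """Side-effecting increment, standing in for i++."""
    counter["i"] += 1
    return counter["i"]

def foo(a, b):
    return (a, b)

# Python evaluates arguments left to right, so this is deterministic,
# unlike foo(i++, i++) in C++.
print(foo(bump(), bump()))  # (1, 2)
```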
I'm working in VS2008 and C#, and I'm looking for a (free) code generator tool to generate a property with getter and setter, as well as the backing private field to go with. The template thingy in VS does not make the field to go with it. Just looking for something a little bit better.
I once saw a web site where you could build this code, then cut-and-paste it from the web page to your code.
```
<?xml version="1.0" encoding="utf-8" ?>
<CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
<CodeSnippet Format="1.0.0">
<Header>
<Title>prop</Title>
<!-- the shortcut below will show in your intellisense
window - set it to whatever you wish -->
<Shortcut>_prop</Shortcut>
<Description>Code snippet for a property</Description>
<Author>Andrew</Author>
<SnippetTypes>
<SnippetType>Expansion</SnippetType>
</SnippetTypes>
</Header>
<Snippet>
<Declarations>
<Literal>
<ID>type</ID>
<Default>String</Default>
<ToolTip>property type</ToolTip>
</Literal>
<Literal>
<ID>pname</ID>
<Default>_name</Default>
<ToolTip>private field name</ToolTip>
</Literal>
<Literal>
<ID>name</ID>
<Default>Name</Default>
<ToolTip>property name</ToolTip>
</Literal>
</Declarations>
<Code Language="csharp">
<![CDATA[$type$ $pname$;
public $type$ $name$
{
get { return this.$pname$; }
set { this.$pname$ = value; }
}$end$]]>
</Code>
</Snippet>
</CodeSnippet>
</CodeSnippets>
```
Save this in a file called `whatever.snippet` in this location:
> `"C:\Documents and Settings\<YOU>\My Documents\Visual Studio 2005\Code Snippets\Visual C#\My Code Snippets"` | I looked around on StackOverlfow before I posted to try to find the answer, because I was sure this was already addressed before. I hated to post, but I did look first, I promise.
I kept looking more, a I found this other helpful thread here:
[How to generate getters and setters in Visual Studio?](https://stackoverflow.com/questions/3017/how-to-generate-getters-and-setters-in-visual-studio) | Code generator tool to generate a property and backing field | [
"",
"c#",
""
] |
```
class B
{
public:
int a;
void fn();
};
```
If I create an object of B, using
```
B* pb = new B;
```
Where is the memory of fn() located?
Is there a pointer in the object pointing at the memory location of fn()?
If yes, why does sizeof(B) return a value as if there were no pointer in the object at all? | > Where is the memory of fn() located?
Since it's a normal function, somewhere in the code section of your program. This location is the same for *all instances* of the class. In fact, it has got nothing to do with the instantiation of `B` via `pb`.
> Is there a pointer in the object pointing at the memory location of fn()?
No. For a normal member function this isn't required since the address is known at compile time (or, at the latest, at link time); it therefore doesn't have to be stored separately at runtime.
For virtual functions, the situation is different. Virtual function pointers are stored in an array (called “virtual function-pointer table” or “vtable” for short). Each class has one such vtable, and each instance of a class stores a pointer to that vtable. This is necessary because if a pointer/reference of type `Base` points to a sub-class `Derived`, the compiler has no way of knowing which function to call; rather, the correct function is found at runtime by looking it up in the associated vtable. The vtable pointer is also evident in the `sizeof` of the object.
```
class B
{
public:
int a;
void fn();
};
```
Is for all practical purposes equivalent to the C code:
```
struct B
{
int a;
};
void fn(B* bInstance);
```
Except in the C++ version bInstance is replaced with the this pointer. Both functions' code is stored outside the object itself. So converting to the struct equivalent, what do you think sizeof(B) would be?
"",
"c++",
"sizeof",
""
] |
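The same principle discussed in the answers above (member functions live with the class, not inside each instance) can be observed in Python too; a small illustrative sketch:

```python
import sys

class Plain:
    pass

class WithMethods:
    def f(self):
        return 1

    def g(self):
        return 2

# Methods are attributes of the class object, not of each instance,
# so adding methods does not grow the per-instance size.
print(sys.getsizeof(Plain()) == sys.getsizeof(WithMethods()))  # True
```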
So what I want to do is to coordinate some effects using jQuery for some AJAX calls that I'm working with. My problem is that the fadeIn for the second div fires at the same time with the fadeOut for the first div.
This could apply to other events as well, so I'm curious: is there any way to make fadeIn launch ONLY after fadeOut is done?
```
jQuery("#div1").fadeOut("slow");
jQuery("#div2").fadeIn("slow");
```
Thanks | You can do this by using a callback. Check out the jQuery [docs](http://docs.jquery.com/Effects/fadeOut#speedcallback).
```
jQuery("#div1").fadeOut("slow", function() { jQuery("#div2").fadeIn("slow"); });
```
Pretty much all of the jQuery effects take a callback to execute after the effect is over. | That's usually good enough for a quick animation but if you want to queue up a number of effects, you can also utilize the queue() and dequeue() methods like this:
```
$("#div1").fadeOut();
$("#div1").queue(function()
{
$(this).fadeIn();
$(this).dequeue();
});
$("#div1").queue(function()
{
$(this).html("And now I'm sliding up. Wee!");
$(this).slideUp("slow");
$(this).dequeue();
});
```
Here, queue() adds a function to the object's queue (which doesn't have to be an animation) and dequeue() executes that function.
See the [docs](http://docs.jquery.com/Core/queue) for more information. | jQuery queue events | [
"",
"javascript",
"jquery",
""
] |
I have a block of text and I want to get its lines **without** losing the \r and \n at the end. Right now, I have the following (suboptimal code):
```
string[] lines = tbIn.Text.Split('\n')
.Select(t => t.Replace("\r", "\r\n")).ToArray();
```
So I'm wondering - is there a better way to do it?
**Accepted answer**
```
string[] lines = Regex.Split(tbIn.Text, @"(?<=\r\n)(?!$)");
``` | The following seems to do the job:
```
string[] lines = Regex.Split(tbIn.Text, @"(?<=\r\n)(?!$)");
```
(?<=\r\n) uses 'positive lookbehind' to match after \r\n without consuming it.
(?!$) uses negative lookahead to prevent matching at the end of the input and so avoids a final line that is just an empty string. | If you are just going to replace the newline (`\n`) then do something like this:
```
string[] lines = tbIn.Text.Split('\n')
.Select(t => t + "\r\n").ToArray();
```
### Edit: Regex.Split allows you to split on a string.
```
string[] lines = Regex.Split(tbIn.Text, "\r\n")
.Select(t => t + "\r\n").ToArray();
``` | How to split a string while preserving line endings? | [
"",
"c#",
".net",
"string",
"split",
""
] |
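The accepted lookbehind pattern from the thread above carries over almost verbatim to Python's `re` module (zero-width splits require Python 3.7+); a quick sketch:

```python
import re

text = "first\r\nsecond\r\nthird\r\n"

# Split at each point that follows \r\n but is not end-of-string,
# keeping the line endings attached (same pattern as the accepted answer).
lines = re.split(r"(?<=\r\n)(?!$)", text)
print(lines)  # ['first\r\n', 'second\r\n', 'third\r\n']
```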
We logon users to Active Directory via LDAP using the Java LDAP API. We want to enhance our logon functionality to further check if the user is in a given AD group. Does anyone know how to do this?
Current code:
```
import javax.naming.*;
import javax.naming.ldap.*;
LdapContext ctx = null;
Hashtable env = new Hashtable();
env.put(Context.INITIAL_CONTEXT_FACTORY,"com.sun.jndi.ldap.LdapCtxFactory");
env.put(Context.SECURITY_AUTHENTICATION,"simple");
env.put(Context.PROVIDER_URL, Config.get("ldap-url"));
try {
Control[] connCtls = new Control[] {new FastBindConnectionControl()};
ctx = new InitialLdapContext(env, connCtls);
ctx.addToEnvironment(Context.SECURITY_PRINCIPAL, "DOMAIN\\" + username);
ctx.addToEnvironment(Context.SECURITY_CREDENTIALS, password);
ctx.reconnect(connCtls);
/* TODO: Only return true if user is in group "ABC" */
return true; //User authenticated
} catch (Exception e) {
return false; //User could NOT be authenticated
} finally {
...
}
```
---
**Update**: See the solution below. | We solved this with the class below. Just call the authenticate method:
```
import java.text.MessageFormat;
import java.util.*;
import javax.naming.*;
import org.apache.log4j.Level;
public class LdapGroupAuthenticator {
public static final String DISTINGUISHED_NAME = "distinguishedName";
public static final String CN = "cn";
public static final String MEMBER = "member";
public static final String MEMBER_OF = "memberOf";
public static final String SEARCH_BY_SAM_ACCOUNT_NAME = "(SAMAccountName={0})";
public static final String SEARCH_GROUP_BY_GROUP_CN = "(&(objectCategory=group)(cn={0}))";
/*
* Prepares and returns CN that can be used for AD query
* e.g. Converts "CN=**Dev - Test Group" to "**Dev - Test Group"
* Converts CN=**Dev - Test Group,OU=Distribution Lists,DC=DOMAIN,DC=com to "**Dev - Test Group"
*/
public static String getCN(String cnName) {
if (cnName != null && cnName.toUpperCase().startsWith("CN=")) {
cnName = cnName.substring(3);
}
int position = cnName.indexOf(',');
if (position == -1) {
return cnName;
} else {
return cnName.substring(0, position);
}
}
public static boolean isSame(String target, String candidate) {
if (target != null && target.equalsIgnoreCase(candidate)) {
return true;
}
return false;
}
public static boolean authenticate(String domain, String username, String password) {
Hashtable env = new Hashtable();
env.put(Context.INITIAL_CONTEXT_FACTORY,"com.sun.jndi.ldap.LdapCtxFactory");
env.put(Context.PROVIDER_URL, "ldap://1.2.3.4:389");
env.put(Context.SECURITY_AUTHENTICATION, "simple");
env.put(Context.SECURITY_PRINCIPAL, domain + "\\" + username);
env.put(Context.SECURITY_CREDENTIALS, password);
DirContext ctx = null;
String defaultSearchBase = "DC=DOMAIN,DC=com";
String groupDistinguishedName = "DN=CN=DLS-APP-MyAdmin-C,OU=DLS File Permissions,DC=DOMAIN,DC=com";
try {
ctx = new InitialDirContext(env);
// userName is SAMAccountName
SearchResult sr = executeSearchSingleResult(ctx, SearchControls.SUBTREE_SCOPE, defaultSearchBase,
MessageFormat.format( SEARCH_BY_SAM_ACCOUNT_NAME, new Object[] {username}),
new String[] {DISTINGUISHED_NAME, CN, MEMBER_OF}
);
String groupCN = getCN(groupDistinguishedName);
HashMap processedUserGroups = new HashMap();
HashMap unProcessedUserGroups = new HashMap();
// Look for and process memberOf
Attribute memberOf = sr.getAttributes().get(MEMBER_OF);
if (memberOf != null) {
for ( Enumeration e1 = memberOf.getAll() ; e1.hasMoreElements() ; ) {
String unprocessedGroupDN = e1.nextElement().toString();
String unprocessedGroupCN = getCN(unprocessedGroupDN);
// Quick check for direct membership
if (isSame (groupCN, unprocessedGroupCN) && isSame (groupDistinguishedName, unprocessedGroupDN)) {
Log.info(username + " is authorized.");
return true;
} else {
unProcessedUserGroups.put(unprocessedGroupDN, unprocessedGroupCN);
}
}
if (userMemberOf(ctx, defaultSearchBase, processedUserGroups, unProcessedUserGroups, groupCN, groupDistinguishedName)) {
Log.info(username + " is authorized.");
return true;
}
}
Log.info(username + " is NOT authorized.");
return false;
} catch (AuthenticationException e) {
Log.info(username + " is NOT authenticated");
return false;
} catch (NamingException e) {
throw new SystemException(e);
} finally {
if (ctx != null) {
try {
ctx.close();
} catch (NamingException e) {
throw new SystemException(e);
}
}
}
}
public static boolean userMemberOf(DirContext ctx, String searchBase, HashMap processedUserGroups, HashMap unProcessedUserGroups, String groupCN, String groupDistinguishedName) throws NamingException {
HashMap newUnProcessedGroups = new HashMap();
for (Iterator entry = unProcessedUserGroups.keySet().iterator(); entry.hasNext();) {
String unprocessedGroupDistinguishedName = (String) entry.next();
String unprocessedGroupCN = (String)unProcessedUserGroups.get(unprocessedGroupDistinguishedName);
if ( processedUserGroups.get(unprocessedGroupDistinguishedName) != null) {
Log.info("Found : " + unprocessedGroupDistinguishedName +" in processedGroups. skipping further processing of it..." );
// We already traversed this.
continue;
}
if (isSame (groupCN, unprocessedGroupCN) && isSame (groupDistinguishedName, unprocessedGroupDistinguishedName)) {
Log.info("Found Match DistinguishedName : " + unprocessedGroupDistinguishedName +", CN : " + unprocessedGroupCN );
return true;
}
}
for (Iterator entry = unProcessedUserGroups.keySet().iterator(); entry.hasNext();) {
String unprocessedGroupDistinguishedName = (String) entry.next();
String unprocessedGroupCN = (String)unProcessedUserGroups.get(unprocessedGroupDistinguishedName);
processedUserGroups.put(unprocessedGroupDistinguishedName, unprocessedGroupCN);
// Fetch Groups in unprocessedGroupCN and put them in newUnProcessedGroups
NamingEnumeration ns = executeSearch(ctx, SearchControls.SUBTREE_SCOPE, searchBase,
MessageFormat.format( SEARCH_GROUP_BY_GROUP_CN, new Object[] {unprocessedGroupCN}),
new String[] {CN, DISTINGUISHED_NAME, MEMBER_OF});
// Loop through the search results
while (ns.hasMoreElements()) {
SearchResult sr = (SearchResult) ns.next();
// Make sure we're looking at correct distinguishedName, because we're querying by CN
String userDistinguishedName = sr.getAttributes().get(DISTINGUISHED_NAME).get().toString();
if (!isSame(unprocessedGroupDistinguishedName, userDistinguishedName)) {
Log.info("Processing CN : " + unprocessedGroupCN + ", DN : " + unprocessedGroupDistinguishedName +", Got DN : " + userDistinguishedName +", Ignoring...");
continue;
}
Log.info("Processing for memberOf CN : " + unprocessedGroupCN + ", DN : " + unprocessedGroupDistinguishedName);
// Look for and process memberOf
Attribute memberOf = sr.getAttributes().get(MEMBER_OF);
if (memberOf != null) {
for ( Enumeration e1 = memberOf.getAll() ; e1.hasMoreElements() ; ) {
String unprocessedChildGroupDN = e1.nextElement().toString();
String unprocessedChildGroupCN = getCN(unprocessedChildGroupDN);
Log.info("Adding to List of un-processed groups : " + unprocessedChildGroupDN +", CN : " + unprocessedChildGroupCN);
newUnProcessedGroups.put(unprocessedChildGroupDN, unprocessedChildGroupCN);
}
}
}
}
if (newUnProcessedGroups.size() == 0) {
Log.info("newUnProcessedGroups.size() is 0. returning false...");
return false;
}
// process unProcessedUserGroups
return userMemberOf(ctx, searchBase, processedUserGroups, newUnProcessedGroups, groupCN, groupDistinguishedName);
}
private static NamingEnumeration executeSearch(DirContext ctx, int searchScope, String searchBase, String searchFilter, String[] attributes) throws NamingException {
// Create the search controls
SearchControls searchCtls = new SearchControls();
// Specify the attributes to return
if (attributes != null) {
searchCtls.setReturningAttributes(attributes);
}
// Specify the search scope
searchCtls.setSearchScope(searchScope);
// Search for objects using the filter
NamingEnumeration result = ctx.search(searchBase, searchFilter,searchCtls);
return result;
}
private static SearchResult executeSearchSingleResult(DirContext ctx, int searchScope, String searchBase, String searchFilter, String[] attributes) throws NamingException {
NamingEnumeration result = executeSearch(ctx, searchScope, searchBase, searchFilter, attributes);
SearchResult sr = null;
// Loop through the search results
while (result.hasMoreElements()) {
sr = (SearchResult) result.next();
break;
}
return sr;
}
}
``` | None of above code snippets didn't worked for me. After 1 day spending on Google and tomcat source following code worked well to find user groups.
```
import java.util.Hashtable;
import javax.naming.CompositeName;
import javax.naming.Context;
import javax.naming.Name;
import javax.naming.NameParser;
import javax.naming.NamingEnumeration;
import javax.naming.NamingException;
import javax.naming.directory.Attribute;
import javax.naming.directory.Attributes;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;
public class MemberOfTest{
private static final String contextFactory = "com.sun.jndi.ldap.LdapCtxFactory";
private static final String connectionURL = "ldap://HOST:PORT";
private static final String connectionName = "CN=Query,CN=Users,DC=XXX,DC=XX";
private static final String connectionPassword = "XXX";
// Optioanl
private static final String authentication = null;
private static final String protocol = null;
private static String username = "XXXX";
private static final String MEMBER_OF = "memberOf";
private static final String[] attrIdsToSearch = new String[] { MEMBER_OF };
public static final String SEARCH_BY_SAM_ACCOUNT_NAME = "(sAMAccountName=%s)";
public static final String SEARCH_GROUP_BY_GROUP_CN = "(&(objectCategory=group)(cn={0}))";
private static String userBase = "DC=XXX,DC=XXX";
public static void main(String[] args) throws NamingException {
Hashtable<String, String> env = new Hashtable<String, String>();
// Configure our directory context environment.
env.put(Context.INITIAL_CONTEXT_FACTORY, contextFactory);
env.put(Context.PROVIDER_URL, connectionURL);
env.put(Context.SECURITY_PRINCIPAL, connectionName);
env.put(Context.SECURITY_CREDENTIALS, connectionPassword);
if (authentication != null)
env.put(Context.SECURITY_AUTHENTICATION, authentication);
if (protocol != null)
env.put(Context.SECURITY_PROTOCOL, protocol);
InitialDirContext context = new InitialDirContext(env);
String filter = String.format(SEARCH_BY_SAM_ACCOUNT_NAME, username);
SearchControls constraints = new SearchControls();
constraints.setSearchScope(SearchControls.SUBTREE_SCOPE);
constraints.setReturningAttributes(attrIdsToSearch);
NamingEnumeration results = context.search(userBase, filter,constraints);
// Fail if no entries found
if (results == null || !results.hasMore()) {
System.out.println("No result found");
return;
}
// Get result for the first entry found
SearchResult result = (SearchResult) results.next();
// Get the entry's distinguished name
NameParser parser = context.getNameParser("");
Name contextName = parser.parse(context.getNameInNamespace());
Name baseName = parser.parse(userBase);
Name entryName = parser.parse(new CompositeName(result.getName())
.get(0));
// Get the entry's attributes
Attributes attrs = result.getAttributes();
Attribute attr = attrs.get(attrIdsToSearch[0]);
NamingEnumeration e = attr.getAll();
System.out.println("Member of");
while (e.hasMore()) {
String value = (String) e.next();
System.out.println(value);
}
}
}
``` | Java LDAP - Determine if user in a given group? | [
"",
"java",
"active-directory",
""
] |
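Stripped of the JNDI plumbing, the accepted answer above is a graph walk over `memberOf` links with a visited set to survive cycles; a hypothetical sketch of just that algorithm:

```python
def is_member(direct_groups, member_of, target):
    """Walk nested group memberships looking for target.

    direct_groups: groups the user belongs to directly.
    member_of: dict mapping a group to the groups *it* belongs to.
    """
    seen = set()
    frontier = list(direct_groups)
    while frontier:
        group = frontier.pop()
        if group == target:
            return True
        if group in seen:
            continue           # cycle guard, as in the Java version
        seen.add(group)
        frontier.extend(member_of.get(group, ()))
    return False

graph = {"Devs": ["Staff"], "Staff": ["Everyone"], "Everyone": ["Staff"]}
print(is_member(["Devs"], graph, "Everyone"))  # True
```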
Hi, I'm quite new to PHP. I have created a form for very large CSV files to be uploaded to my server. Someone mentioned to me that the browser can time out due to the uploaded file being too big. Is this true? And if so, can it be prevented?
Thanks for your help! | You need a proper value for the following php.ini settings:
1. [max\_input\_time](http://www.php.net/manual/en/info.configuration.php#ini.max-input-time) (**not** max\_execution\_time!)
2. [upload\_max\_filesize](http://www.php.net/manual/en/ini.core.php#ini.upload-max-filesize)
3. [post\_max\_size](http://www.php.net/manual/en/ini.core.php#ini.post-max-size)
and maybe
4. [memory\_limit](http://www.php.net/manual/en/ini.core.php#ini.memory-limit) | There are some configuration directives that can cause large uploads to fail if their values are too small:
PHP
* **[`max_input_time`](http://docs.php.net/manual/en/info.configuration.php#ini.max-input-time)** Maximum time in seconds a script is allowed to parse input data, like POST, GET and file uploads
* **[`upload_max_filesize`](http://docs.php.net/manual/en/ini.core.php#ini.upload-max-filesize)** Maximum size of an uploaded file.
* **[`post_max_size`](http://docs.php.net/manual/en/ini.core.php#ini.post-max-size)** Maximum size of post data allowed.
Apache
* **[`TimeOut`](http://httpd.apache.org/docs/2.2/mod/core.html#timeout)** Amount of time the server will wait for certain events before failing a request
* **[`LimitRequestBody`](http://httpd.apache.org/docs/2.2/mod/core.html#limitrequestbody)** Restricts the total size of the HTTP request body sent from the client
There are probably some more than this. | Can file uploads time out in PHP? | [
"",
"php",
"upload",
""
] |
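A php.ini fragment raising the limits listed in the answers above might look like this (the values are purely illustrative, not recommendations):

```ini
; allow large uploads (example values only)
max_input_time      = 300
upload_max_filesize = 64M
post_max_size       = 72M   ; should exceed upload_max_filesize
memory_limit        = 128M
```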
I have a template matrix class defined in a header called "Matrix.h".
Certain matrices are used repeatedly in my program. I thought that I would define these in the "Matrix.h" header file, like so:
```
const Matrix<GLfloat> B_SPLINE_TO_BEZIER_MATRIX(4, 4, values);
```
When I do this, g++ complains that I redefined the constant in question. This happens because I include Matrix.h in two different source files. When the object files for these are compiled, both end up with a definition of the matrix above, causing the error message.
My question is how do I avoid this situation? I want a constant that is accessible to more than one file, but I don't know where to put it. | If you don't want to split it between a header and implementation file,
1. Declare your constant `static` (or declare it in an anonymous namespace) to make the definition private. The linker will not complain, but it will result in multiple private copies across compilation units.
```
static Matrix<GLfloat> B_SPLINE_TO_BEZIER_MATRIX(4, 4, values);
```
2. Make an inline function that returns the constant. Inline function definitions produce "weak" symbols in the object file, so the linker will eliminate duplicates and pick one.
```
inline const Matrix<GLfloat>&
GET_B_SPLINE_TO_BEZIER_MATRIX() {
const static Matrix<GLfloat> B_SPLINE_TO_BEZIER_MATRIX(4, 4, values);
return B_SPLINE_TO_BEZIER_MATRIX;
}
``` | You avoid it by:
* Declaring it `extern` in the header. A symbol can be declared any number of times.
* Defining it in the implementation, only once. | How does one avoid accidentally redeclaring global constants in C++? | [
"",
"c++",
"global-variables",
"constants",
""
] |
I have an enum which is defined like this:
```
public enum eRat { A = 0, B=3, C=5, D=8 };
```
So given the value `eRat.B`, I want to get the next one, which is `eRat.C`.
The solution I see is (without range checking)
```
Array a = Enum.GetValues(typeof(eRat));
int i = 0;
for (i = 0; i < a.GetLength(0); i++)
{
    if (a.GetValue(i).Equals(eRat.B))
        break;
}
return (eRat)a.GetValue(i + 1);
```
Now that is too much complexity, for something that simple. Do you know any better solution?? Something like `eRat.B+1` or `Enum.Next(Erat.B)`?
Thanks | Thanks to everybody for your answers and feedback. I was surprised to get so many of them. Looking at them and using some of the ideas, I came up with this solution, which works best for me:
```
public static class Extensions
{
public static T Next<T>(this T src) where T : struct
{
if (!typeof(T).IsEnum) throw new ArgumentException(String.Format("Argument {0} is not an Enum", typeof(T).FullName));
T[] Arr = (T[])Enum.GetValues(src.GetType());
int j = Array.IndexOf<T>(Arr, src) + 1;
return (Arr.Length==j) ? Arr[0] : Arr[j];
}
}
```
The beauty of this approach is that it is simple and universal to use. Implemented as a generic extension method, you can call it on any enum this way:
```
return eRat.B.Next();
```
Notice that I am using a generalized extension method, so I don't need to specify the type upon call, just `.Next()`. | Probably a bit overkill, but:
```
eRat value = eRat.B;
eRat nextValue = Enum.GetValues(typeof(eRat)).Cast<eRat>()
.SkipWhile(e => e != value).Skip(1).First();
```
or if you want the first that is numerically bigger:
```
eRat nextValue = Enum.GetValues(typeof(eRat)).Cast<eRat>()
.First(e => (int)e > (int)value);
```
or for the next bigger numerically (doing the sort ourselves):
```
eRat nextValue = Enum.GetValues(typeof(eRat)).Cast<eRat>()
.Where(e => (int)e > (int)value).OrderBy(e => e).First();
```
Hey, with LINQ as your hammer, the world is full of nails ;-p | How to get next (or previous) enum value in C# | [
"",
"c#",
".net",
"enums",
""
] |
I just recently started a new personal project, with a goal of having it able to scale from the start.
I got a suggestion for the structure, to create something like this:
```
<solution>
|-- project.client.sql.queries
|-- project.admin.sql.queries
|-- project.client.business.logic
|-- project.admin.business.logic
|-- project.client.web.ui (include references of the business logic + SQL queries projects )
|-- project.admin.web.ui
```
This way, I would have everything structured and easy to follow for future expansion. My problem resides in the fact that I want to use only SQL express to start, and maybe move on to SQL server later when necessary.
So if I add the `.mdf` file into the `app_code` of the client side and create a `.dbml` (the LINQ structure file), how can I use LINQ in the SQL queries project? I don't have access to the `ConfigurationManager` of the web.ui project. Do I have to include a reference into the SQL queries project just as I did for the web.ui? Otherwise LINQ doesn't seem to work properly.
I'm looking mostly for the best practices, since I've been told that code behind should not include any business logic or SQL queries - they should have their own class libraries. | Attach the database and create a linq to sql dbml file in your data project. When you initialise your dataservice class, pass in the connection string from web.config and set the connection string in your datacontext constructor. More info here: [Connection string hell in .NET / LINQ-SQL / ASP.NET](https://stackoverflow.com/questions/236412/connection-string-hell-in-net-linq-sql-asp-net) | Here is the best advice anyone can ever give you at this point in time:
> The crappy first version is infinitely better than the perfect version that doesn't exist.
I forget where I got that from; I've seen that advice in many places. It is 100% true. | Solution structure / best practices | [
"",
"c#",
".net",
"asp.net",
"visual-studio-2008",
""
] |
I'm faced with a situation that I think can only be solved by using a ref parameter. However, this will mean changing a method to always accept a ref parameter when I only need the functionality provided by a ref parameter 5% of the time.
This makes me think "whoa, crazy, must find another way". Am I being stupid? What sort of problems can be caused by a ref parameter?
**Edit**
Further details were requested, I don't think they are entirely relevant to what I was asking but here we go.
I'm wanting to either save a new instance (which will update with the ID which may later be used) or retrieve an existing instance that matches some logic and update that, save it then change the reference of the new instance to point to the existing one.
Code may make it clearer:
```
protected override void BeforeSave(Log entity)
{
var newLog = entity;
var existingLog = (from log in repository.All()
where log.Stuff == newLog.Stuff
&& log.Id != newLog.Id
select log).SingleOrDefault();
if (existingLog != null)
{
// update the time
existingLog.SomeValue = entity.SomeValue;
// remove the reference to the new entity
entity = existingLog;
}
}
// called from base class which usually does nothing before save
public void Save(TEntity entity)
{
var report = validator.Validate(entity);
if (report.ValidationPassed)
{
BeforeSave(entity);
repository.Save(entity);
}
else
{
throw new ValidationException { Report = report };
}
}
```
It's the fact that I would be adding it in only for one child (so far) of the base class that prevents me from using an overload (since I would have to duplicate the Save method). I also have the problem whereby I need to force callers to use the ref version in this instance, otherwise things won't work as expected. | Can you add an overload? Have one signature without the ref parameter, and one with it.
Ref parameters *can* be useful, and I'm glad they exist in C#, but they shouldn't be used without thought. Often if a method is effectively returning two values, it would be better either to split the method into two parts, or encapsulate both values in a single type. Neither of these covers every case though - there are definitely times when `ref` is the best option. | Perhaps use an **overloaded function** for this **5% case** and leave the other function as is.
Unnecessary ref parameters can lead to bad design patterns, but if you have a specific need, there's *no problem* with doing this. | What's so bad about ref parameters? | [
"",
"c#",
"oop",
""
] |
I read in an earlier [answer](https://stackoverflow.com/questions/576988/python-specific-antipatterns/577731#577731) that exception handling is cheap in Python so we shouldn't do pre-conditional checking.
I have not heard of this before, but I'm relatively new to Python. Exception handling means a dynamic call and a static return, whereas an `if` statement is static call, static return.
How can doing the checking be bad and the `try-except` be good? It seems like it should be the other way around. Can someone explain this to me? | You might find this post helpful: [**Try / Except Performance in Python: A Simple Test**](http://paltman.com/2008/jan/18/try-except-performance-in-python-a-simple-test/) where Patrick Altman did some simple testing to see what the performance is of pre-conditional checking (specific to dictionary keys in this case) versus using only exceptions. Code is provided as well if you want to adapt it to test other conditionals.
The conclusions he came to:
> From these results, I think it is fair
> to quickly determine a number of
> conclusions:
>
> 1. If there is a high likelihood that the element doesn't exist, then
> you are better off checking for it
> with has\_key.
> 2. If you are not going to do anything with the Exception if it is
> raised, then you are better off not
> having the try/except at all
> 3. If it is likely that the element does exist, then there is a very
> slight advantage to using a try/except
> block instead of using has\_key,
> however, the advantage is very slight. | Don't sweat the small stuff. You've already picked one of the slower scripting languages out there, so trying to optimize down to the opcode is not going to help you much. The reason to choose an interpreted, dynamic language like Python is to optimize your time, not the CPU's.
If you use common language idioms, then you'll see all the benefits of fast prototyping and clean design and your code will naturally run faster as new versions of Python are released and the computer hardware is upgraded.
If you have performance problems, then profile your code and optimize your slow algorithms. But in the mean time, use exceptions for exceptional situations since it will make any refactoring you ultimately do along these lines a lot easier. | Cheap exception handling in Python? | [
"",
"python",
"performance",
"exception",
""
] |
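The trade-off those numbers describe corresponds to the two standard Python idioms, LBYL ("look before you leap") and EAFP ("easier to ask forgiveness than permission"); a minimal sketch for the dictionary-key case the linked test measured:

```python
def get_lbyl(d, key, default=None):
    # Pre-conditional check: cheapest when the key is usually missing.
    if key in d:
        return d[key]
    return default

def get_eafp(d, key, default=None):
    # try/except: slightly ahead when the key is usually present, but
    # noticeably slower whenever the exception actually fires.
    try:
        return d[key]
    except KeyError:
        return default

settings = {"colour": "blue"}
assert get_lbyl(settings, "colour") == "blue"
assert get_eafp(settings, "missing", "n/a") == "n/a"
```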
I've got a bunch of DLL projects that I'm pulling into my application, each contains their own Settings.settings/app.config. When I compile the app and run for debugging, everything works just fine, but come deployment time I can't get my DLLs to read their own settings files.
I've been doing some reading and it has become apparent that there's a couple of methods to getting each dll to read its own configuration - one is to dedicate a .dll.config to the library and the other is to embed the dll's configuration in the process.exe.config.
I'm having significant issues trying to implement either and I wondered if anyone has any good docs on this - there appears to be a shortage on the Net.
I'd like a separate .dll.config for each of the libraries if possible, but in a pinch, getting each of my libraries to read their own section of the process.exe.config will do.
Can anyone point me in the right direction because I'm so close to rolling this application out but this stumbling block is causing me a significant headache.
**Edit:** When I merge the configuration files, I start getting TypeInitializer exceptions when I initialize objects within my libraries. This is likely just me being dense, but does someone have a working example of a merged config file and some basic demonstrative code for reading it from multiple assemblies?
Here's an example of a merged app.config that works:
```
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
<configSections>
<sectionGroup name="applicationSettings" type="System.Configuration.ApplicationSettingsGroup, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" >
<section name="SharedConfig.Client.Properties.Settings" type="System.Configuration.ClientSettingsSection, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" />
<!-- Begin copy from library app.config -->
<section name="SharedConfig.Library.Properties.Settings" type="System.Configuration.ClientSettingsSection, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" />
<!-- End copy from library app.config -->
</sectionGroup>
</configSections>
<applicationSettings>
<SharedConfig.Client.Properties.Settings>
<setting name="Bar" serializeAs="String">
<value>BarFromClient</value>
</setting>
</SharedConfig.Client.Properties.Settings>
<!-- Begin copy from library app.config -->
<SharedConfig.Library.Properties.Settings>
<setting name="Bar" serializeAs="String">
<value>BarFromLibrary</value>
</setting>
</SharedConfig.Library.Properties.Settings>
<!-- End copy from library app.config -->
</applicationSettings>
</configuration>
``` | Have each class library define configuration settings in a custom ConfigurationSection.
Then add custom section handlers to your process.exe.config file.
[This MSDN article](http://msdn.microsoft.com/en-us/library/2tw134k3.aspx) is pretty comprehensive in its explanation, with examples in both VB and C#. | How do I reference configuration information from within multiple class libraries? | [
"",
"c#",
".net",
"vb.net",
".net-3.5",
".net-2.0",
""
] |
I want to be better at using NUnit for testing the applications I write, but I often find that the unit tests I write have a direct link to the environment or underlying database on the development machine instead.
Let me make an example.
I'm writing a class which has the single responsibility of retrieving a string, which has been stored in the registry by another application. The key is stored in HKCU\Software\CustomApplication\IniPath.
The Test I end up writing looks like this;
```
[Test]
public void GetIniDir()
{
RegistryReader r = new RegistryReader();
Assert.AreEqual(@"C:\Programfiles\CustomApplication\SomeDir", r.IniDir);
}
```
But the problem here is that the string @"C:\Programfiles\CustomApplication\SomeDir" is really just correct right now. Tomorrow it might have changed to @"C:\Anotherdir\SomeDir", and suddenly that breaks my unit tests, even though the code hasn't changed.
This problem is also seen when I create a class which does CRUD operations against a database. The data in the database can change all the time, and this in turn makes the tests fail. So even if my class does what it is intended to do, it will fail because the database returns more customers than it had when I originally wrote the test.
```
[Test]
public void GetAllCustomersCount()
{
DAL d = new DAL();
Assert.AreEqual(249, d.GetCustomerCount());
}
```
Do you guys have any tips on writing Tests which do not rely on the surrounding environment as much? | The solution to this problem is well-known: [mocking](http://en.wikipedia.org/wiki/Mock_object). Refactor your code to interfaces, then develop fake classes to implement those interfaces or mock them with a mocking framework, such as [RhinoMocks](http://ayende.com/projects/rhino-mocks.aspx), [easyMock](http://www.easymock.org/), [Moq](http://code.google.com/p/moq/), et. al. Using fake or mock classes allow you to define what the interface returns for your test without having to actually interact with the external entity, such as a database.
For more info on mocking via SO, try this Google search: <http://www.google.com/search?q=mock+site:stackoverflow.com>. You may also be interesting in the definitions at: [What's the difference between faking, mocking, and stubbing?](https://stackoverflow.com/questions/346372/whats-the-difference-between-faking-mocking-and-stubbing)
Additionally, good development practices, such as dependency injection (as @Patrik suggests), which allows the decoupling of your classes from its dependencies, and the avoidance of static objects, which makes unit testing harder, will facilitate your testing. Using TDD practices -- where the tests are developed first -- will help you to naturally develop applications that incorporate these design principles. | The easiest way is to make the dependencies explicit using dependency injection. For example, your first example has a dependency on the registry, make this dependency explicit by passing an IRegistry (an interface that you'll define) instance and then only use this passed in dependency to read from the registry. This way you can pass in an IRegistry-stub when testing that always return a known value, in production you instead use an implementation that actually reads from the registry.
```
public interface IRegistry
{
string GetCurrentUserValue(string key);
}
public class RegistryReader
{
public RegistryReader(IRegistry registry)
{
...
// make the dependency explicit in the constructor.
}
}
[TestFixture]
public class RegistryReaderTests
{
[Test]
public void Foo_test()
{
var stub = new StubRegistry();
stub.ReturnValue = "known value";
RegistryReader testedReader = new RegistryReader(stub);
// test here...
}
public class StubRegistry
: IRegistry
{
public string ReturnValue;
public string GetCurrentUserValue(string key)
{
return ReturnValue;
}
}
}
```
In this quick example I use manual stubbing; of course you could use any mocking framework for this. | NUnit testing the application, not the environment or database | [
"",
"c#",
"unit-testing",
"nunit",
""
] |
I encountered what may be a leap year bug in .NET's `DateTime` handling, specifically `ToLocalTime()`. Here's some code which reproduces the problem (I'm in the Pacific time zone):
```
DateTime dtStartLocal = DateTime.Parse("2009-02-28T23:00:00.0-08:00");
DateTime dtEndLocal = dtStartLocal.AddYears(3);
DateTime dtStartUtc = dtStartLocal.ToUniversalTime();
DateTime dtEndUtc = dtStartUtc.AddYears(3);
DateTime dtEndLocal2 = dtEndUtc.ToLocalTime();
DateTime dtStartLocal2 = dtStartUtc.ToLocalTime();
Console.WriteLine("START: 1={0}, 2={1}", dtStartLocal, dtStartLocal2);
Console.WriteLine("END : 1={0}, 2={1}", dtEndLocal, dtEndLocal2);
Console.ReadLine();
```
The output is:
> START: 1=2/28/2009 11:00:00 PM, 2=2/28/2009 11:00:00 PM
> END : 1=2/28/2012 11:00:00 PM, 2=2/29/2012 11:00:00 PM
Notice that the variable where I did `ToUniversalTime().AddYears(3).ToLocalTime()` is different from the one with just `AddYears(3)`: it's one day ahead.
Has anyone encountered this? If this is expected, can someone explain the logic behind it?
NOTE: Yes, the best approach is to work entirely in UTC and not flip-flop between them. This isn't something which is affecting me, but a peculiarity I encountered. Essentially I misunderstood how `AddYears()` worked and now I can see why it's doing what it's doing (see my selected answer below). | I think that this is working correctly.
```
DateTime dtStartUtc = dtStartLocal.ToUniversalTime();
```
PST is UTC-8. Therefore, this converts the time to March 1, 2009, 07:00:00.
```
DateTime dtEndUtc = dtStartUtc.AddYears(3);
```
This adds three years to the previous time, putting it at March 1, 2012, 07:00:00.
```
DateTime dtEndLocal2 = dtEndUtc.ToLocalTime();
```
This converts the end time back to PST, which would be February 29, 2012, 11:00:00 PM.
I'd say this is just a side effect of converting between local and UTC time. | Print the timezone/correction factor. When you do the .ToUniversalTime() it essentially adds the 8 hours from your original time ("-08:00"), which would put it at 11:00 the next day starting from 23:00 hours February 28th. So when you add 3 years to it, it's the 11:00 AM on the 29th. Had you done 2 years, it would have been March 1st, it has nothing to do with the leap year.
"",
"c#",
".net",
"datetime",
"leap-year",
""
] |
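The accepted answer's arithmetic can be checked outside .NET; this Python sketch uses a fixed UTC-8 offset (ignoring DST, and using `replace(year=...)` as a stand-in for `AddYears`) and lands on the same Feb 29 result:

```python
from datetime import datetime, timedelta, timezone

PST = timezone(timedelta(hours=-8))  # fixed UTC-8, ignoring DST

local_start = datetime(2009, 2, 28, 23, 0, tzinfo=PST)

# ToUniversalTime(): Feb 28 23:00 PST + 8h = Mar 1 07:00 UTC.
utc_start = local_start.astimezone(timezone.utc)
assert utc_start == datetime(2009, 3, 1, 7, 0, tzinfo=timezone.utc)

# Adding three years to the UTC value starts from Mar 1 2009...
utc_end = utc_start.replace(year=2012)
# ...so converting back crosses into Feb 29 (2012 is a leap year).
local_end = utc_end.astimezone(PST)
assert local_end == datetime(2012, 2, 29, 23, 0, tzinfo=PST)

# Adding three years to the local value never leaves Feb 28.
assert local_start.replace(year=2012).day == 28
```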
Can JS submit name/value pairs through a document.testform.submit()?
or does it have to be submitted through the html tags, for example
```
<INPUT TYPE="text" NAME="inputbox1" VALUE="This is such a great form!" SIZE=50><P>
``` | Typically you include an <input type="hidden"> in the form, and set the value you want in the event handler before it gets submitted.
```
<form method="post" action="thing" id="sandwich"><fieldset>
<input type="text" name="inputbox1" value="This is such a great form!" />
<input type="hidden" name="jsremark" />
</fieldset></form>
<script type="text/javascript">
document.getElementById('sandwich').onsubmit= function() {
this.elements.jsremark.value= 'Secretly it aint that great';
return true;
}
</script>
``` | no, you'll have to mash it yourself into JSON using javascript | Submit name value pair from javascript? | [
"",
"javascript",
"forms",
"submit",
""
] |
I've got four variables and I want to check if any one of them is null. I can do
```
if (null == a || null == b || null == c || null == d) {
...
}
```
but what I really want is
```
if (anyNull(a, b, c, d)) {
...
}
```
but I don't want to write it myself. Does this function exist in any common Java library? I checked Commons Lang and didn't see it. It should use varargs to take any number of arguments. | The best you can do with the Java library is, I think:
```
if (asList(a, b, c, d).contains(null)) {
``` | I don't know if it's in commons, but it takes about ten seconds to write:
```
public static boolean anyNull(Object... objs) {
for (Object obj : objs)
if (obj == null)
return true;
return false;
}
``` | Is there a varargs null check function in Java or Apache Commons? | [
"",
"java",
"validation",
"null",
"variadic-functions",
""
] |
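Combining the two answers above (the varargs signature from one, the `contains(null)` body from the other), a sketch:

```java
import java.util.Arrays;

public class NullCheckDemo {
    // Varargs wrapper around the standard-library contains(null) trick.
    static boolean anyNull(Object... objs) {
        return Arrays.asList(objs).contains(null);
    }

    public static void main(String[] args) {
        if (!anyNull("a", null, "c", "d")) throw new AssertionError();
        if (anyNull("a", "b", "c", "d")) throw new AssertionError();
        System.out.println("ok");
    }
}
```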
I just inherited a Java application and upon inspection of the code, I'm seeing what IMHO is a bastardization of the Spring framework. You see, the Java team seems to have an aversion to interfaces, so we end up with things like this:
```
@Autowired
private SomeConcreteFinalClass _myField;
```
There's no Spring configuration, no bean defined, no chance that I can test the containing object in isolation. This is essentially an annotation-based factory with the overhead of Spring.
Am I out-of-line, or is this like using an elephant gun to kill flies? I just have to have a reality check since *everyone else on the team* thinks this is perfectly acceptable.
**Edit**
In many cases these annotated factories appear in complex processing classes that would benefit immensely from isolated testing. The team also frowns upon testing though.
There's no mystery here, I hope. If I have a concrete class, that's not behind an interface, and there's no corresponding Spring bean to "setup" the object, then it's inarguably a glorified factory that can be implemented with 10 lines of code.
Spring is not used in any other way in the system; this is it.
My goals right now:
* Institute a testing policy
* Educate
the team on the virtues of component
isolation
* Move these Autowired fields
behind interfaces
I guess the final question is: is there any benefit to keeping these fields Autowired if we're not testing or in any other way utilizing the framework. I'd just as well `new` the object if the dependency is immutable. | I would agree with you that this is an abuse of what Spring can do (as opposed to what Spring SHOULD do). Is the person who designed the application this way around? It would be interesting to hear their justification for building an application this way. | It's certainly limiting what is possible, but it's not a total waste of time either.
Since the type in question is `final`, it can't be mocked-up easily for isolated testing.
However, instances of that type may still be highly configurable through their properties. Using Spring to inject an instance that is properly configured at runtime simplifies the containing class immensely. The class can focus on its responsibilities, relying on Spring to provide it with the collaborators that it needs.
Interfaces *are* over used; this may be a response to that. I think that starting with final, concrete classes for many types, then using readily-available refactoring tools to extract interfaces later, when a specific need is identified, is a defensible position in many contexts. | Spring as a glorified factory; is this acceptable? | [
"",
"java",
"spring",
""
] |
Instead of having the typical disks with labels to the right, I want the options to be presented as clickable buttons. The selected option should appear to be pushed or pressed in.
I want to do this in HTML, but an example of this are the top left buttons in the program Audacity where you select the cursor/tool mode.
What's the best way to do this?
[](https://i.stack.imgur.com/ELh9n.jpg)
(source: [freemusicsoftware.info](http://freemusicsoftware.info/screenshots/audacity-linux-small.jpg)) | There are a number of JavaScript plugins for doing this:
* [Prototype demo](http://codetalks.org/source/widgets/checkbox/checkbox1.html)
Just replace the images they're using with your images and you should be good to go. | Probably the best way is to create a real radio button, and then control the rendering of an element based upon the status on the radio button with javascript. If the radio button is selected, render background-a, else background-b (or use a sprite). Control the status of the radio button via the click event of your custom element. | HTML Radio buttons styled as Toggle Buttons | [
"",
"javascript",
"html",
"controls",
""
] |
Does it make sense to start learning JavaFx if I do not have any background in UI programming? Is it more advisable to learn Swing first and then move on to JavaFx ?
I tried the [getting started tutorial](http://javafx.com/docs/gettingstarted/javafx/) on JavaFx website in Netbeans and the code looked extremely complicated to me. I am wondering if JavaFx is too advanced for a beginnner GUI developer. | Looking at the JavaFX tutorial I would say that:
1. you don't need to know Swing to use JavaFX
2. if you find the JavaFX tutorial hard learning Swing won't be easier
If you have no programming background at all then starting in any language is going to be a challenge. If you know a little programming in a language then it is still going to be a challenge.
My advice is to dive in and work at it. A quick google search (JavaFX Hello World) has a number of hits... I took a quick look at [this one](http://www.dieajax.com/2007/08/23/10-minute-tutorial-javafx-hello-world/) and I'd say start with it. | I don't think knowing swing will give you much of an upper hand with JavaFX. JavaFX seems more like scripting rather than actual java programming. You can learn JavaFX fine without swing. | Should I learn Swing before learning JavaFx? | [
"",
"java",
"javafx",
""
] |
I use the JAMA Matrix package. How do I print the columns of a matrix? | You can invoke the getArray() method on the matrix to get a double[][] representing the elements.
Then you can loop through that array to display whatever columns/rows/elements you want.
See the [API](http://math.nist.gov/javanumerics/jama/doc/) for more methods. | The easiest way would probably be to [transpose](http://math.nist.gov/javanumerics/jama/doc/Jama/Matrix.html#transpose()) the matrix, then print each row. Taking part of the example from the [API](http://math.nist.gov/javanumerics/jama/doc/Jama/Matrix.html):
```
double[][] vals = {{1.,2.,3},{4.,5.,6.},{7.,8.,10.}};
Matrix a = new Matrix(vals);
Matrix aTransposed = a.transpose();
double[][] valsTransposed = aTransposed.getArray();
// now loop through the rows of valsTransposed to print
for(int i = 0; i < valsTransposed.length; i++) {
for(int j = 0; j < valsTransposed[i].length; j++) {
System.out.print( " " + valsTransposed[i][j] );
}
}
```
As duffymo pointed out in a comment, it *would* be more efficient to bypass the transposition and just write the nested for loops to print down the columns instead of across the rows. If you need to print both ways that would result in twice as much code. That's a common enough tradeoff (speed for code size) that I leave it to you to decide. | How do i print the columns of a JAMA matrix? | [
"",
"java",
"matrix",
"jama",
""
] |
Is there a simple solution to do the equivalent of Java's comments:
```
<%-- this is a comment inside a template, it does not appear in the output HTML --%>
```
Even if you use short php tags, you still have to wrap the comments with comment syntax, on top of the php tags:
```
<? /* this is a comment of the html template */ ?>
```
I'm considering doing some kind of filter on the output templates, to remove all html comments, or better yet, custom comments like the Java syntax above, but how would you do that in the most efficient way? You'd have to run a regexp right?
The reason for my question is simply that in an MVC framework, using components and re-usable html templates (think YUI), I need to document those templates clearly, in a readable way. | Not that I know of, but the short-tag plus the block comments are very easy - about as easy to type as the JSP comments you mentioned above:
```
<?/* This is a comment */?>
```
or even
```
<?// this is a comment ?>
```
With a more elaborate PHP templating systems, such as Smarty, there are other syntaxes. | I'll just add my 2 cents for this about your short tag thing.
You will need to think carefully about this before putting such comments all around your templates. Short tags are not supported everywhere; they are not standard. They're useful, but likely to cause trouble.
Therefore make sure to use the full php tag ( | How do you comment html templates in Php (in a practical way)? | [
"",
"php",
"regex",
"templates",
"comments",
""
] |
> **Possible Duplicate:**
> [What is object serialization?](https://stackoverflow.com/questions/447898/what-is-object-serialization)
I've made a small RSS Reader app using Swing and Eclipse keeps telling me "The serializable class MochaRSSView does not declare a static final serialVersionUID field of type long"
What is serialization and what benefits would it have? | Serializable is a marker interface that tells the JVM it can write out the state of the object to some stream (basically read all the members, and write out their state to a stream, or to disk or something). The default mechanism is a binary format. You can also use it to clone things, or keep state between invocations, send objects across the network etc.
You can let eclipse generate one for you (basically just a long random but unique ID). That means you can control when you think a class would be compatible with a serialized version, or not.
(Note that all the non-transient member variables must be of a serializable class, or you will get an error, as the JVM will recurse through the structure writing out the state of each object down to the level of writing primitives to the ObjectOutputStream.) | **Java Serialization-----Have you ever seen what is inside a serialized object? I will explain what java serialization is, then provide you with a sample of serialization. Finally, most importantly, let's explore what is inside a serialized object and what it means - that is, the internals of java serialization and how it works. If you want to have your own implementation of java serialization, this article will provide you with a good platform to launch.**
What is Java Serialization?
Primary purpose of java serialization is to write an object into a stream, so that it can be transported through a network and that object can be rebuilt again. When there are two different parties involved, you need a protocol to rebuild the exact same object again. Java serialization API just provides you that. Other ways you can leverage the feature of serialization is, you can use it to perform a deep copy.
Why I used ‘primary purpose’ in the above definition is, sometimes people use java serialization as a replacement for database. Just a placeholder where you can persist an object across sessions. This is not the primary purpose of java serialization. Sometimes, when I interview candidates for Java I hear them saying java serialization is used for storing (to preserve the state) an object and retrieving it. They use it synonymously with database. This is a wrong perception for serialization.
How do you serialize?
When you want to serialize an object, that respective class should implement the marker interface serializable. It just informs the compiler that this java class can be serialized. You can tag properties that should not be serialized as transient. You open a stream and write the object into it. Java API takes care of the serialization protocol and persists the java object in a file in conformance with the protocol. De-serialization is the process of getting the object back from the file to its original form.
Here protocol means, understanding between serializing person and de-serializing person. What will be the contents of file containing the serialized object? | What is serialization in Java? | [
"",
"java",
""
] |
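A minimal round-trip makes the mechanics concrete; this sketch (class and field names are hypothetical) declares the `serialVersionUID` Eclipse is warning about, writes an object to bytes, and reads it back:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializationDemo {
    static class FeedSettings implements Serializable {
        // The field Eclipse asks for: a version stamp used to decide
        // whether previously serialized data is still compatible.
        private static final long serialVersionUID = 1L;
        String feedUrl = "http://example.com/rss";
        transient String cachedBody = "not persisted";  // skipped on write
    }

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(new FeedSettings());
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            FeedSettings restored = (FeedSettings) in.readObject();
            if (!"http://example.com/rss".equals(restored.feedUrl))
                throw new AssertionError();
            if (restored.cachedBody != null)  // transient comes back null
                throw new AssertionError();
            System.out.println("round-trip ok");
        }
    }
}
```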
Let's say you have two instances of the same bean type, and you'd like to display a summary of what has changed between the two instances - for example, you have a bean representing a user's settings in your application, and you'd like to be able to display a list of what has changed in the new settings the user is submitting (instance #1) versus what is stored already for the user (instance #2).
Is there a commonly used algorithm or design pattern for a task such as this, perhaps something that can be abstracted and re-used for different types of beans? (I'm having a hard time thinking of a good name for this type of problem to know what to Google on). I've checked commons-beanutils and nothing popped out at me. | If you are talking about comparing values, I would consider using reflection and just comparing them field by field.
Something like this:
```
Field[] oldFields = oldInstance.getClass().getDeclaredFields();
Field[] newFields = newInstance.getClass().getDeclaredFields();
StringBuilder changes = new StringBuilder();
// Field is not Comparable, so sort both arrays by field name.
Comparator<Field> byName = Comparator.comparing(Field::getName);
Arrays.sort(oldFields, byName);
Arrays.sort(newFields, byName);
int i = 0;
for(Field f : oldFields)
{
if(!f.equals(newFields[i]))
{
changes.append(f.getName()).append(" has changed.\n");
}
i++;
}
```
This code hasn't been tested. You might need to get the values in the fields and compare them instead of just comparing fields to each other, but it should work in theory. | Reflection does not guarantee the order of the Fields from one call to the next: it's safer to order the arrays.
```
/*
*declarations of variables
*/
Arrays.sort(oldFields);//natural order - choice 1
Arrays.sort(newFields, new Ordinator());//custom Comparator - choice 2
/*
*logic of comparations between elements
*/
```
In choice 2 you can decide the sorting logic (how the elements are ordered) with an inner class Ordinator [implementing Comparator](http://java.sun.com/j2se/1.5.0/docs/api/java/util/Comparator.html).
PS the code is a draft | Common algorithm for generating a diff of the fields in two beans? | [
"",
"java",
"algorithm",
"diff",
"javabeans",
""
] |
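Following the caveat in the accepted answer ("you might need to get the values in the fields and compare them"), here is a hedged sketch that compares field values rather than the `Field` objects themselves. The `Settings` bean and the output format are invented for illustration; real code would also need to handle inherited fields and nested objects:

```java
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;

public class BeanDiff {
    /** Returns "field: old -> new" entries for fields whose values differ. */
    static <T> List<String> diff(T oldBean, T newBean) {
        List<String> changes = new ArrayList<>();
        for (Field f : oldBean.getClass().getDeclaredFields()) {
            f.setAccessible(true); // reach private fields
            try {
                Object oldVal = f.get(oldBean);
                Object newVal = f.get(newBean);
                if (!Objects.equals(oldVal, newVal)) {
                    changes.add(f.getName() + ": " + oldVal + " -> " + newVal);
                }
            } catch (IllegalAccessException e) {
                throw new RuntimeException(e);
            }
        }
        return changes;
    }

    // Illustrative settings bean.
    static class Settings {
        private String theme;
        private int pageSize;
        Settings(String theme, int pageSize) { this.theme = theme; this.pageSize = pageSize; }
    }

    public static void main(String[] args) {
        System.out.println(diff(new Settings("dark", 10), new Settings("light", 10)));
        // prints [theme: dark -> light]
    }
}
```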
Just a quick and no doubt easy question. I'm pretty new to PHP and am looking through some existing code. I have tried to find the answer to my question on google, but to no avail.
Can somebody please let me know what the '&' sign before the parameter $var does??
```
function setdefault(&$var, $default="")
{
if (! isset($var))
{
$var = $default;
}
}
``` | Passes it by reference.
**Huh?**
Passing by reference means that you pass the address of the variable instead of the value. Basically you're making a pointer to the variable.
<http://us.php.net/language.references.pass> | It means that the function gets the reference to the original value of the argument `$var`, instead of a copy of the value.
Example:
```
function add(&$num) { $num++; }
$number = 0;
add($number);
echo $number; // this outputs "1"
```
If `add()` would not have the ampersand-sign in the function signature, the echo would output "0", because the original value was never changed. | '&' before the parameter name | [
"",
"php",
"parameters",
""
] |
I am using GUIDs as my primary key for all my other tables, but I have a requirement that needs to have an incrementing number. I tried to create a field in the table with the auto increment but MySql complained that it needed to be the primary key.
My application uses MySql 5, nhibernate as the ORM.
Possible solutions I have thought of are:
* change the primary key to the auto-increment field but still have the Id as a GUID so the rest of my app is consistent.
* create a composite key with both the GUID and the auto-increment field.
My thoughts at the moment are leaning towards the composite key idea.
EDIT: The Row ID (Primary Key) is currently the GUID. I would like to add an INT field that is auto-incremented so that it is human readable. I just didn't want to move away from the app's current standard of having GUIDs as primary keys. | A GUID value is intended to be unique across tables and even databases, so make the auto\_increment column the primary key and add a UNIQUE index for the GUID | I would lean the other way.
Why? Because creating a composite key gives the impression to the next guy who comes along that it's OK to have the same GUID in the table twice but with different sequence numbers. | Can you use auto-increment in MySql with out it being the primary Key | [
"",
"sql",
"mysql",
"nhibernate",
"primary-key",
"auto-increment",
""
] |
I'm building a movies website... I need to display info about each movie, including genres, actors, and a lot of info (IMDB.com like)...
I created a 'movies' table including an ID and some basic information.
For the genres I created a 'genres' table including 2 columns: ID and genre.
Then I use a 'genres2movies' table with two columns: movieID and genreID, to connect the genres and movies tables...
This way, for example, if a movie has 5 different genres I get the movieID in 5 different rows of the 'genres2movies' table. It's better than including the genre each time for each movie, but...
Is there a better way of doing this?
I need to do this also for actors, languages and countries so performance and database size is really important.
Thanks!!! | You are on the right track. That's the way to do many-to-many relationships. Database size won't grow much because you use integers, and for speed you must set up correct indexes for those IDs. When making SELECT queries, check them with EXPLAIN - it helps to find the speed bottlenecks. | It sounds like you are following proper normalisation rules at the moment, which is exactly what you want.
However, you may find that if performance is a key factor you may want to de-normalise some parts of your data, since JOINs between tables are relatively expensive operations.
It's usually a trade-off between proper/full normalisation and performance | Elegant database design help... (MySQL/PHP) | [
"",
"php",
"mysql",
"database",
"database-design",
""
] |
I have a page with an iframe whose source page is in a separate domain. From time to time, the source page generates an alert. When it does so, it stops what it is doing until the user clicks OK to the alert.
What I would like to do is programmatically click OK on this alert so the source page can get back to being useful. Is this possible? | JavaScript is single-threaded, which means when you call a function, it blocks until it returns. When you call alert(), that passes control to the browser which decides how to handle it. It is not Javascript which is popping the UI dialog, it is the browser. alert() does not return until the browser receives the "OK" event and returns control. The javascript thread is halted until that happens.
So for at least two different reasons stated in the above paragraph, the answer is **no** :) | I'm pretty sure it's not possible. | Is there a way to simulate a click on an alert in JavaScript? | [
"",
"javascript",
"alerts",
""
] |
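As an aside to the "no" above: if the frame were same-origin (which the question rules out), a page could at least pre-empt *future* dialogs by replacing `window.alert` before the framed code calls it. This sketch uses a stand-in object instead of a real window, purely to illustrate the idea:

```javascript
// Stand-in for a same-origin frame's window object.
const frameWindow = {
  alert: function (msg) { /* would open the native blocking dialog */ }
};

// Replace alert with a no-op that just records the message,
// so later calls return immediately instead of blocking.
const suppressed = [];
frameWindow.alert = function (msg) { suppressed.push(String(msg)); };

// Code inside the frame now "alerts" without blocking anything.
frameWindow.alert("Something happened");
console.log(suppressed); // logs the recorded message
```

This cannot dismiss a dialog that is already open, and it is impossible across domains, which is why the answer stands.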
In JavaScript, `xmlHttpRequest.responseXML()` returns a `DOM Document` object. The `DOM Document` object is created from an XML-structured HTTP response body.
At what point during the life of an `xmlHttpRequest` object is the XML string parsed into the `DOM Document`?
I can imagine it may occur in one of two places.
* When `responseXML()` is called.
No need to waste resources parsing the XML string into a DOM until you know it's actually needed.
* When the HTTP response has been received.
If the server returns a text/xml content-type, it's clear you've requested XML and you're probably going to want the response body parsed into a DOM as you otherwise can't do much with the requested data.
Both options have some merit, although I'm inclined to say that the XML string is parsed only when `responseXML` is called.
At what point does parsing of the XML string occur?
Reasons for asking: I need to measure browser-based XML deserialisation performance, with the aim of comparing this to JSON deserialisation performance. | It would make a great deal of sense for the stream to be parsed as it's received. Waiting until the response is complete (or the responseXml property is called) introduces an extra delay between receiving the final bytes and the DOM being built. A better approach would be to build the DOM in parallel with receiving the stream, so that on completion the DOM is ready for use.
Note a big clue is the fact that MSXML doesn't provide a DOM if the Content-Type doesn't specify an XML type, and that Mozilla's implementation allows you to tell it to treat the received content as XML despite a Content-Type indicating otherwise.
Neither of the above would be necessary if the DOM weren't parsed until the property is accessed; the property could simply attempt the parse then. It's because a DOM may be built as the content is received that the above is necessary. | I wouldn't be surprised if this is browser dependent. Why not profile all three? | At which point in the life of an xmlHttpRequest object is serialised XML parsed into a DOM? | [
"",
"javascript",
"ajax",
"dom",
"xmlhttprequest",
""
] |
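In the spirit of the "why not profile it" suggestion, a minimal timing harness is easy to sketch. The JSON half below runs anywhere; the XML half would use `DOMParser` in a browser, shown only as a comment since it is not available outside one:

```javascript
// Repeat a parse many times and report elapsed milliseconds.
function timeIt(fn, iterations) {
  const start = Date.now();
  for (let i = 0; i < iterations; i++) fn();
  return Date.now() - start;
}

const jsonText = JSON.stringify({ id: 1, tags: ["a", "b", "c"] });
const jsonMs = timeIt(() => JSON.parse(jsonText), 20000);
console.log("JSON.parse:", jsonMs, "ms");

// In a browser, the XML side would look like:
//   const xmlMs = timeIt(
//     () => new DOMParser().parseFromString(xmlText, "text/xml"), 20000);
```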
```
for (int z = 0; z < alParmValues.Count; z++)
{
//string[] def;
string[] asd = alParmValues[z].ToString().Split(',');//this is of type string.collections and u cant cast it to a arraylist or array
//if (HUTT.clsParameterValues.bCustomObj == false)
string[] def = alMethSign[z].ToString().Substring(alMethSign[z].ToString().IndexOf('(') + 1, alMethSign[z].ToString().IndexOf(')') - (alMethSign[z].ToString().IndexOf('(') + 1)).Split(',');
}
```
I have to access both the string arrays outside the loop. Is there a better solution to this? I can't use an ArrayList or declare them as public so how can I access them? | To access something outside of a loop, just declare it outside of the loop, then work with it after your loop processing is done:
```
string[] arr = ...
for (int z = 0; z < alParmValues.Count; z++)
{
// work with arr...
}
var item = arr[3]; // Accessed outside of loop.
```
However, there seem to be a few things wrong with your code. I'd recommend thinking a little bit more about the loop body and what you're trying to do there. Consider this line, for example:
```
for (int z = 0; z < alParmValues.Count; z++)
{
// ...
string[] asd = alParmValues[z].ToString().Split(',');
// There aren't any more references to asd after this point in the loop,
// so this assignment serves no purpose and only keeps its last assigned
// value.
}
```
This assignment is pointless; every time you go through the loop, you just overwrite the previous value of `asd`, and you never use it later in the loop. | The scope of both `asd` and `def` is limited to the body of the `for` loop. If you have to access them, you need to declare them outside the loop. Is there a problem with moving them out?
Take a look at the [Collection Classes Tutorial](http://msdn.microsoft.com/en-us/library/aa288462(VS.71).aspx) on MSDN. | how to access a string array outside loop | [
"",
"c#",
"arrays",
"loops",
""
] |
What's the simplest way to concatenate two [WAV](http://en.wikipedia.org/wiki/WAV) files in Java 1.6? (Equal frequency and all, nothing fancy.)
(This is probably sooo simple, but my [Google-fu](http://en.wiktionary.org/wiki/Google-fu) seems weak on this subject today.) | Here is the barebones code:
```
import java.io.File;
import java.io.IOException;
import java.io.SequenceInputStream;
import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
public class WavAppender {
public static void main(String[] args) {
String wavFile1 = "D:\\wav1.wav";
String wavFile2 = "D:\\wav2.wav";
try {
AudioInputStream clip1 = AudioSystem.getAudioInputStream(new File(wavFile1));
AudioInputStream clip2 = AudioSystem.getAudioInputStream(new File(wavFile2));
AudioInputStream appendedFiles =
new AudioInputStream(
new SequenceInputStream(clip1, clip2),
clip1.getFormat(),
clip1.getFrameLength() + clip2.getFrameLength());
AudioSystem.write(appendedFiles,
AudioFileFormat.Type.WAVE,
new File("D:\\wavAppended.wav"));
} catch (Exception e) {
e.printStackTrace();
}
}
}
``` | The WAV header should not be too hard to parse, and if I read [this header description](http://ccrma.stanford.edu/courses/422/projects/WaveFormat/) correctly, you can just strip the first 44 bytes from the second WAV and append the remaining bytes to the first one. After that, you should of course update the header fields of the first WAV so that they contain the correct new lengths. | Join two WAV files from Java? | [
"",
"java",
"audio",
"javasound",
""
] |
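As a companion to the second answer's header-splicing idea, here is a hedged sketch of the canonical 44-byte PCM header that the linked description lays out. It assumes plain 16-bit PCM with no extra chunks, which real files are not guaranteed to be:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class WavHeader {
    /** Builds a 44-byte RIFF/WAVE header for 16-bit PCM audio. */
    static byte[] pcmHeader(int sampleRate, short channels, int dataLength) {
        short bitsPerSample = 16;
        short blockAlign = (short) (channels * bitsPerSample / 8);
        int byteRate = sampleRate * blockAlign;
        ByteBuffer b = ByteBuffer.allocate(44).order(ByteOrder.LITTLE_ENDIAN);
        b.put("RIFF".getBytes());
        b.putInt(36 + dataLength);          // chunk size = rest of the file
        b.put("WAVE".getBytes());
        b.put("fmt ".getBytes());
        b.putInt(16);                       // fmt sub-chunk size for PCM
        b.putShort((short) 1);              // audio format 1 = PCM
        b.putShort(channels);
        b.putInt(sampleRate);
        b.putInt(byteRate);
        b.putShort(blockAlign);
        b.putShort(bitsPerSample);
        b.put("data".getBytes());
        b.putInt(dataLength);               // size of the sample bytes
        return b.array();
    }

    public static void main(String[] args) {
        System.out.println(pcmHeader(44100, (short) 2, 4096).length); // 44
    }
}
```

Concatenating two such files then amounts to appending the second file's sample bytes and rewriting the two length fields, exactly as the answer describes.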
I am using MSSQL 2005 Server and I have the following SQL query.
```
IF @CategoryId IN (1,2,3)
BEGIN
INSERT INTO @search_temp_table
SELECT * FROM (SELECT d.DataId, (SELECT [Name] FROM Category WHERE CategoryId = d.CategoryId) AS 'Category', d.Description, d.CompanyName, d.City, d.CategoryId,
d.CreatedOn, d.Rank, d.voteCount, d.commentCount, d.viewCount
FROM Data d
INNER JOIN Keyword k
ON d.DataId = k.DataId
WHERE FREETEXT(k.Keyword, @SearchQ) AND d.CategoryId=@CategoryId AND d.IsSearch=1 AND d.IsApproved=1 ) AS Search_Data
END
ELSE
BEGIN
INSERT INTO @search_temp_table
SELECT * FROM (SELECT d.DataId, (SELECT [Name] FROM Category WHERE CategoryId = d.CategoryId) AS 'Category', d.Description, d.CompanyName, d.City, d.CategoryId,
d.CreatedOn, d.Rank, d.voteCount, d.commentCount, d.viewCount
FROM Data d
INNER JOIN Keyword k
ON d.DataId = k.DataId
WHERE FREETEXT(k.Keyword, @SearchQ) AND d.IsSearch=1 AND d.IsApproved=1 ) AS Search_Data
END
```
In the above query I have the category condition,
```
d.CategoryId=@CategoryId
```
which is applied when a category is passed; if no category is passed, I am not including the category condition in the WHERE clause. To apply the category condition only when the category is in (1,2,3) I have used the IF clause, but can't we do this in a single WHERE clause? That is: if the value is in the category list (or, if it's easier, just check for the values 1, 2, 3) then the condition is applied; otherwise the query ignores the category condition.
Is there any way, using CASE or NOT NULL statements? | Similar to Mark's answer, you can do the following:
```
WHERE FREETEXT(k.Keyword, @SearchQ)
AND d.IsSearch=1
AND d.IsApproved=1
AND ((@CategoryId NOT IN (1,2,3)) OR (d.CategoryId = @CategoryId))
) AS Search_Data
```
This way you eliminate the function call | If @CategoryId is NULL when you don't want to filter by it, you can use the condition below...
```
ISNULL(@CategoryId, d.CategoryId) = d.CategoryId
```
So if it's NULL then it equals itself and won't filter
**EDIT**
I like Marc Miller's COALESCE example and you could use either and I really shouldn't comment on the performance of one verses the other but...
My gut tells me ISNULL should win out but have a [look](http://sqlblog.com/blogs/adam_machanic/archive/2006/07/12/performance-isnull-vs-coalesce.aspx) at some of the debates on this issue if you have nothing better to do (or if performance is REALLY critical in this query).
**NOTE:** If the d.CategoryId in the table can be NULL then this approach will fail and the CASE WHEN THEN approach elsewhere on this question should be used | How to make single where condition for this SQL query? | [
"",
"sql",
"sql-server-2005",
"stored-procedures",
""
] |
I work in a shop that has a number of very good C# developers who have been using ASP.NET WebForms and would like to move to a MVC framework. To make things more complicated, we would also like to be able to run this under mono.
So my question is: are there any good MVC frameworks for Mono that have been tried and tested in the real world, or are we better off with Windows Server and IIS? | I've used [Castle Monorail](http://www.castleproject.org/MonoRail/) in Mono 2.0, and haven't had any problems. | I'm pretty sure you can run Django with IronPython under Mono. I haven't ever tried it but maybe you should check it out.
Sorry if this was unhelpful as you mentioned C# which django is not... | MVC Web Framework and Mono | [
"",
"c#",
".net",
"mono",
""
] |
I am working on a project in C# at the moment which is quite simple.
I have a status box, two buttons and a dataGridView.
When the Form loads the dataGridView is filled correctly.
What I would like to do is then update that table every 45 seconds to reflect any changes in the database.
I am looking for suggestions on a technique to achieve this. I have been searching for clear information but it seems somewhat lacking. | 1. Add a `Timer` control to your form. (It's in the components category)
2. Set its `Interval` property to `45000` (the value represents milliseconds)
3. Either set the `Enabled` property of the timer to `True` in the form designer, or somewhere in your code.
4. Add a handler for the timer's `Tick` event (you can get this by double-clicking the timer)
5. Inside the `Tick` handler, update your `dataGridView`
Your handler will look like this:
```
private void timer1_Tick(object sender, EventArgs e)
{
// Update DataGridView
}
```
If you need to suspend updates for some reason, you can call `timer1.Stop()` to stop the timer from running, and use `timer1.Start()` to start it up again. | Like others have suggested, use a Timer to re-query the database. The only thing I'd like to add is that when you re-query the database, don't just set the DataGridView's DataSource to the new table; rather, Merge it with the existing table. The reason is that if the user is in the middle of the grid, for example looking at a particular row, and you reset the DataSource to a new table, the entire grid will refresh and they will lose their place. Annoying as hell! If you *merge* it though, it will be seamless to the user.
[DataTable.Merge](http://msdn.microsoft.com/en-us/library/system.data.datatable.merge.aspx)
The one thing to be aware of when using the Merge method is that the table needs to have a primary key. Double check that the DataTable itself has a primary key; it is not always pulled back from the database. You may need to do something like:
```
table.PrimaryKey = new DataColumn[] {table.Columns["ID"]};
``` | How do you update a datagridview in C# every minute | [
"",
"c#",
"winforms",
"datagridview",
""
] |
I'm relatively new to Object Oriented Programming. I pretty much understand the concepts, but practically speaking, I am having a really hard time finding information about how to best use Models in my Zend Framework applications.
Specifically, I have a Model (that doesn't extend anything) that doesn't use a Database Table. It uses getters and setters to access its protected members. I find myself struggling with how to best display this model in the view. I don't want logic in my view templates, but I find myself in the following situation:
In my controller:
```
$object = new Object();
$object->setName('Foo Bar');
$this->view->object = $object;
```
In my view template:
```
<h2><?= $this->object->getName() ?></h2>
```
I don't really like calling functions in my view templates but I don't know a better way to do this. I don't want my Model's members to be public, but I basically want to achieve the same results:
```
<h2><?= $this->object->name ?></h2>
```
I don't want my controller to do all the work of having to know everything about the model:
```
$object = new Object();
$object->setName('Foo Bar');
$this->view->object = $object;
$this->view->object->name = $object->getName();
```
What is the best practice of using models in the Zend Framework? Can anyone recommend any tutorial that would help me understand this Model/View dilemma in Zend Framework? | One possibility is to use the magic \_\_set and \_\_get methods in PHP. I use them like so within my abstract Model class:
```
abstract class Model_Abstract
{
protected $_data;
// Private Data Members assigned to protected $_data
public function __construct($data = null)
{
// Makes it so that I can pass in an associative array as well as
// an StdObject.
if(!is_object($data)) {
$data = (object) $data;
}
$this->_data = $data;
}
public function __get($key)
{
if (method_exists($this, '_get' . ucfirst($key))) {
$method = '_get' . ucfirst($key);
return $this->$method();
}
else {
return $this->_data->$key;
}
}
public function __set($key, $val)
{
if ( method_exists( $this, '_set' . ucfirst($key) ) ) {
$method = '_set' . ucfirst($key);
return $this->$method($val);
}
else {
$this->_data->$key = $val;
return $this->_data->$key;
}
}
}
class Model_User extends Model_Abstract
{
//Example overriding method for the property firstName in the $_data collection.
protected function _getFirstName()
{
// Do some special processing and then output the first name.
}
}
```
This lets you specify getters and setters for properties as necessary without defining boilerplate functions for every property - just the ones where you want to do some processing before returning the value. For example I use the functionality in a number of places to change ISO-compliant dates (as stored in MySQL) into a more compact and readable format for users.
As far as what to place in your controller, I would recommend looking at [this post](https://stackoverflow.com/questions/432435/zendframework-where-to-place-get-and-post-http-request-handling) for some specific feedback on what handling to place within your controller.
Some feel that they would rather have a helper that automatically loads models into the view and skirts the controller altogether. Personally I would say that within the context of Zend Framework and PHP it makes plenty of sense to pass models into the view from the controller because the state of the models in the view frequently depends on what came from the request (which should definitely be handled in the controller).
**Update:** As per criticisms in the comments, one thing that I would point out is that your database access layer and domain (or model) layer are really two different things, though with the Active Record they are blended together. I asked [this question](https://stackoverflow.com/questions/373054/how-to-properly-create-domain-using-zend-framework) a while back and received some useful feedback on this matter. Whatever you decide to do with the model, you'll want to provide a consistent API for all domain objects regardless of where the data for the model comes from.
I suppose that one benefit offered by Saem's answer is that it offers the ability to directly map properties / function return values from one or more domain objects to the view object. Theoretically the usage within the view then looks like this:
```
// Mapped from Model_User::_data->last_name and Model_User::_data->first_name
$this->name
``` | If only other developers are going to be working with the templates, I would recommend just passing in the models. Here is a link to a Jeff Atwood post on MVC [Understanding Model-View-Controller](http://www.codinghorror.com/blog/archives/001112.html) | Zend Framework: Using Models and Views, best practices | [
"",
"php",
"zend-framework",
"model",
""
] |
I know how to do this with pure PHP but I need to do this without reloading the page. Is there anyway with jQuery to effectively pull back some database results (based on what a user has input in the first text field on a form) then populate some of the remaining fields with data pulled back from a db query?
Essentially I would like to see the user move away from the text field (either by tabbing out or by click in the next field) and boom, the query is submitted using the value entered in that field and the subsequent fields are then populated w/o a page reload.
I am familiar with the basics of jQuery but I haven't used it to do anything like this in which I am pulling data back from the server and trying to populate it client side.
Any suggestions / examples on how to best get started with this would be very much appreciated. Thanks.
* Nicholas | Assuming this example HTML:
```
<input type="text" name="email" id="email" />
<input type="text" name="first_name" id="first_name" />
<input type="text" name="last_name" id="last_name" />
```
You could have this javascript:
```
$("#email").bind("change", function(e){
$.getJSON("http://yourwebsite.com/lookup.php?email=" + encodeURIComponent($("#email").val()),
function(data){
$.each(data, function(i,item){
if (item.field == "first_name") {
$("#first_name").val(item.value);
} else if (item.field == "last_name") {
$("#last_name").val(item.value);
}
});
});
});
```
Then you just have a PHP script (in this case lookup.php) that takes an email in the query string and returns a JSON-formatted array with the values you want to access. This is the part that actually hits the database to look up the values:
```
<?php
//look up the record based on email and get the firstname and lastname
...
//build the JSON array for return
$json = array(array('field' => 'first_name',
'value' => $first_name),
array('field' => 'last_name',
'value' => $last_name));
echo json_encode($json );
?>
```
You'll want to do other things like sanitize the email input, etc, but should get you going in the right direction. | Automatically fill all form fields from an array
<http://jsfiddle.net/brynner/wf0rk7tz/2/>
JS
```
function fill(a){
for(var k in a){
$('[name="'+k+'"]').val(a[k]);
}
}
array_example = {"God":"Jesus","Holy":"Spirit"};
fill(array_example);
```
HTML
```
<form>
<input name="God">
<input name="Holy">
</form>
``` | Dynamically fill in form values with jQuery | [
"",
"javascript",
"jquery",
"database",
"ajax",
"dynamic",
""
] |
Say I have the following EJB (using ejb3):
```
@Stateless(name="Queries")
@Remote(Queries.class)
@Local(Queries.class)
public final class QueriesEJB implements Queries {
...
}
```
The class is available through both a local and a remote interface.
**How can I inject the local interface for this EJB in another part of the app?**
Specifically, I'm not sure how to create an @EJB annotation that selects the *local* interface. For example, is the following sufficient?
```
@EJB(name="Queries") private Queries queries;
```
In particular I want to avoid creating separate local and remote interfaces simply for the purpose of distinguishing via @EJB's 'beanInterface' property. | According to the spec you cannot have an interface that is Remote and Local at the same time. However, you can create a super-interface, put all methods there, and then create 2 sub-interfaces. Having done that, simply use @EJB. This way you only need to maintain a single interface.
EDIT: See section 3.2 in "EJB3 spec simplified" at <http://jcp.org/aboutJava/communityprocess/final/jsr220/index.html> | When an EJB is deployed, the container looks at the interfaces and identifies local and remote interfaces. I would say that the EJB container already uses the local interface in your example. It simply does not make sense to use a remote interface in this case because the container has the choice to use the local one.
If you want to be sure try to use the JNDI name of the local interface as parameter of the @EJB annotation.
```
@EJB(name="java:comp/env/ejb/EntitySupplierLocal")
```
In the example above I added "Local" to the interface name. In your case you have to take a look at the JNDI context to get the right name, or you may already know it ;).
Generally I recommend using a base interface that defines the business methods, and extending it with a local and a remote interface. That way you do not have to duplicate methods, and you are able to extend functionality for local and remote access separately.
```
public interface Queries { .. }
@Local
public interface QueriesLocal extends Queries { .. }
@Remote
public interface QueriesRemote extends Queries { .. }
``` | How can I select the local interface for an EJB? | [
"",
"java",
"jakarta-ee",
"ejb-3.0",
"ejb",
""
] |
I'm trying to build cairomm for gtkmm on windows using mingw. Compilation breaks at a function call which has a parameter which does a reinterpret\_cast of a bool to a void\*.
```
cairo_font_face_set_user_data(cobj(), &USER_DATA_KEY_DEFAULT_TEXT_TO_GLYPHS, reinterpret_cast<void*>(true), NULL);
```
This is where the code breaks, and the reason is "invalid reinterpret_cast from bool to void*". Why is this happening, and how can I modify this line to get it to compile? Need help | Since this is user data and you have control over what is done with the value, cast the bool to an int first: `reinterpret_cast<void *> (static_cast<int> (true))`. Doing this makes sense in that the void\* parameter takes the place of template functions in this ANSI-C library. All you need is a true/false value. So, there should be no danger in temporarily encoding this as a pointer as long as it is well documented as such. Really, you would be better off with this: `reinterpret_cast<void *> (1)` or `reinterpret_cast<void *> (+true)`. | It looks like it should work, according to the standard. Section 3.9.1-7 says bool is an integral type, and 5.2.10-5 says a value of integral type can be explicitly converted to a pointer using reinterpret\_cast. It appears that your compiler is not fully standard.
Could you get away with changing the "true" to a 1? Converting between integers and pointer types is an old and dishonorable tradition in C and hence C++, and it would be surprising to find a compiler that wouldn't do it.
Or, if you really really have to do this, try (void \*)true. Then wash your hands. | How to cast from bool to void*? | [
"",
"c++",
"type-conversion",
""
] |
I'm working on a web form where I wish to (after form submission) highlight those input fields that weren't entered correctly.
The highlight effect I wish to create is an endlessly looping animation between `background-color: #fcc`; and `#fff`; in the faulty input fields, using jQuery. When one of those fields gain focus, I wish to stop the animation of that field.
I'm fairly off-beat in jQuery and JS, so if anyone could point me in the right direction, I'd be sincerely grateful. | Check out these two jQuery plugins:
**Pulse**: <http://james.padolsey.com/javascript/simple-pulse-plugin-for-jquery/>
**Seek Attention**: <http://enhance.qd-creative.co.uk/demo/seekAttention/> (link now dead)
I think Pulse is what you were asking for, but Seek Attention could be useful in some cases as well.
Here is a very *rudimentary* sample I created using the pulse plug in.
```
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.1/jquery.min.js" type="text/javascript"></script>
<script src="http://enhance.qd-creative.co.uk/demos/pulse/pulse.jquery.js" type="text/javascript"></script>
<script type="text/javascript">
function doSomething() {
if ($('.BadTextBox').val() == "") {
$('.BadTextBox').pulse({ backgroundColors: ['#fcc', '#fff'] });
}
else {
$('.BadTextBox').css({'background-color': '#fff'}).stop();
}
}
</script>
<input type="text" class="BadTextBox" onblur="doSomething();" />
```
When the user navigates away from the text box it starts pulsing if empty. If they go back and fill it out, it stops pulsing. | I did something similar
Firstly create the javascript function variable
```
var PulsePut = function (){
if ($(this).val() == "") {
$(this).pulse({ backgroundColors: ['#ffffee', '#fff'] });
}
else {
$(this).css({'background-color': '#fff'}).stop();
} }
```
Then add a class to the inputs
```
<input type="text" class="PulsePut" />
```
Finally, to initialise the function
```
$(document).ready(function(){
$('.PulsePut').blur(PulsePut);
});
```
This will make any input you have with the class .PulsePut pulse if empty. | Endless background-color animation in jQuery, how? | [
"",
"javascript",
"jquery",
"loops",
"jquery-animate",
""
] |
I have a Stream object that is populated with the contents of an XSD file I have as an embedded resource on a project I am working on like so:
```
using ( Stream xsdStream = assembly.GetManifestResourceStream( xsdFile ) )
{
// Save the contents of the xsdStream here...
}
```
Within this using block I would like to prompt the user with a Save File dialog on the web where they can choose to save off this XSD file contained within the stream.
What is the best way to accomplish this? I am completely lost and can't seem to Google the right terms to get a relevant answer.
Thanks! | If you aren't using AJAX, you can use Response.WriteFile. Else I'd use a MemoryStream. That's how I did it [here](http://zi255.com/?Req=Post&PID=187). Sorry it's in VB.NET, I haven't transcoded it. Note this also lets you download a file THROUGH the webserver, i.e. if your file is on an app server w/o public access.
```
Imports System.Data
Imports System.Data.SqlClient
Imports System.Data.Sql
Imports System.Net
Imports System.IO
Partial Class DownloadFile
Inherits System.Web.UI.Page
Protected Sub page_load(ByVal sender As Object, ByVal e As EventArgs) Handles Me.Load
Dim url As String = Request.QueryString("DownloadUrl")
If url Is Nothing Or url.Length = 0 Then Exit Sub
'Initialize the input stream
Dim req As HttpWebRequest = WebRequest.Create(url)
Dim resp As HttpWebResponse = req.GetResponse()
Dim bufferSize As Integer = 1
'Initialize the output stream
Response.Clear()
Response.AppendHeader("Content-Disposition", "attachment; filename=download.zip")
Response.AppendHeader("Content-Length", resp.ContentLength.ToString)
Response.ContentType = "application/download"
'Populate the output stream
Dim ByteBuffer As Byte() = New Byte(bufferSize) {}
Dim ms As MemoryStream = New MemoryStream(ByteBuffer, True)
Dim rs As Stream = req.GetResponse.GetResponseStream()
Dim bytes() As Byte = New Byte(bufferSize) {}
While rs.Read(ByteBuffer, 0, ByteBuffer.Length) > 0
Response.BinaryWrite(ms.ToArray())
Response.Flush()
End While
'Cleanup
Response.End()
ms.Close()
ms.Dispose()
rs.Dispose()
ByteBuffer = Nothing
End Sub
End Class
``` | You will want to set the content-disposition:
```
Response.AddHeader("Content-Disposition", "attachment; filename=" + xsdFile);
```
You will also want to set the Content-Type (text/xml is appropriate for an XSD) and Content-Length to the size of the file. Then you write the contents of the file. | Best way to save a Stream to a file in asp.net 3.5? | [
"",
"c#",
"asp.net",
"resources",
"stream",
""
] |
> **Possible Duplicate:**
> [When do you use Java’s @Override annotation and why?](https://stackoverflow.com/questions/94361/when-do-you-use-javas-override-annotation-and-why)
Is there any reason to annotate a method with `@Override` other than to have the compiler check that the superclass has that method? | As you describe, @Override creates a compile-time check that a method is being overridden. This is very useful to make sure you do not have a silly signature issue when trying to override.
For example, I have seen the following error:
```
public class Foo {
private String id;
public boolean equals(Foo f) { return id.equals(f.id);}
}
```
This class compiles as written, but adding the @Override tag to the equals method will cause a compilation error as it does not override the equals method on Object. This is a simple error, but it can escape the eye of even a seasoned developer | It not only makes the compiler check - although that would be enough to make it useful; it also documents the developer's intention.
For instance, if you override a method but don't use it anywhere from the type itself, someone coming to the code later may wonder why on earth it's there. The annotation explains its purpose. | What is @Override for in Java? | [
"",
"java",
"overriding",
""
] |
I want to show the user that the result is loading.
How can I change the cursor or show a loading gif while the result loads into the div via $MyDiv.load("page.php")?
2. Change the style to visible when you begin loading.
3. Change the style to non-visible when loading finished using a callback argument to load().
Example:
```
$("#loadImage").show();
$("#MyDiv").load("page.php", {limit: 25}, function(){
$("#loadImage").hide();
});
``` | ```
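For reference, the show/load/hide pattern above can be sketched in plain JavaScript without jQuery (loadInto, fetchPage, and the indicator object are illustrative stand-ins, not part of any API): show the indicator before starting the asynchronous load, and hide it in the completion callback.

```javascript
// Minimal sketch of the show-load-hide pattern (names are illustrative).
function loadInto(el, fetchPage, indicator) {
  indicator.visible = true;            // step 1: show the loading indicator
  return fetchPage().then(function (html) {
    el.innerHTML = html;               // step 2: insert the loaded content
    indicator.visible = false;         // step 3: hide the indicator
    return html;
  });
}
```

The key point is the same as with jQuery's load(): the hide step must happen inside the completion callback, not on the line after the call, because the load is asynchronous.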
```
$("body").css("cursor", "progress");
```
Just remember to set it back afterwards:
```
$MyDiv.load("page.php", function () {
// this is the callback function, called after the load is finished.
$("body").css("cursor", "auto");
});
``` | jquery wait cursor while loading html in div | [
"",
"javascript",
"jquery",
"html",
"dom",
""
] |