I'm looking for the best JavaScript editor available as an Eclipse plugin. I've been using Spket, which is good. But is there a better one?
|
Disclaimer, I work at Aptana. I would point out there are some nice features for JS that you might not get so easily elsewhere. One is plugin-level integration of JS libraries that provide CodeAssist, samples, snippets and easy inclusion of the libraries files into your project; we provide the plugins for many of the more commonly used libraries, including YUI, jQuery, Prototype, dojo and EXT JS.
Second, we have a server-side JavaScript engine called Jaxer that not only lets you run any of your JS code on the server but adds file, database and networking functionality so that you don't have to use a scripting language but can write the entire app in JS.
|
[Eclipse HTML Editor Plugin](http://amateras.sourceforge.jp/cgi-bin/fswiki_en/wiki.cgi?page=EclipseHTMLEditor)
I too have struggled with this totally obvious question. It seemed crazy that this wasn't an extremely easy-to-find feature with all the web development happening in Eclipse these days.
I was very turned off by Aptana because of how bloated it is, and the fact that it starts up a local web server (by default on port 8000) every time you start Eclipse and [you can't disable this functionality](http://forums.aptana.com/viewtopic.php?t=5269). Adobe's port of JSEclipse is now a 400 MB plugin, which is equally insane.
However, I just found a super-lightweight JavaScript editor called [Eclipse HTML Editor Plugin](http://amateras.sourceforge.jp/cgi-bin/fswiki_en/wiki.cgi?page=EclipseHTMLEditor), made by Amateras, which was exactly what I was looking for.
|
JavaScript editor within Eclipse
|
[
"",
"javascript",
"eclipse",
"plugins",
"editor",
""
] |
On the other end of the spectrum, I would be happy if I could install a wiki and share the login credentials between [WordPress](http://en.wikipedia.org/wiki/WordPress) and the wiki. I hacked [MediaWiki](http://en.wikipedia.org/wiki/MediaWiki) a while ago to share logins with another site (in [ASP Classic](http://en.wikipedia.org/wiki/Active_Server_Pages)) via session cookies, and it was a pain to do and even worse to maintain. Ideally, I would like to find a plug-in or someone who knows a more elegant solution.
|
The tutorial *[WordPress, bbPress & MediaWiki](https://bbpress.org/forums/topic/mediawiki-bbpress-and-wordpress-integration/)* should get you on the right track to integrating MediaWiki into your WordPress install. It's certainly going to be a *lot* easier than hacking WordPress to have wiki features, especially with the sort of granular permissions you're describing.
|
[WPMW](http://ciarang.com/wiki/page/WPMW), a solution for integrating a MediaWiki within a WordPress installation, might help.
|
WordPress MediaWiki integration
|
[
"",
"php",
"mysql",
"wordpress",
"lamp",
"mediawiki",
""
] |
I'm currently writing an ASP.Net app from the UI down. I'm implementing an MVP architecture because I'm sick of Winforms and wanted something that had a better separation of concerns.
So with MVP, the Presenter handles events raised by the View. Here's some code that I have in place to deal with the creation of users:
```
public class CreateMemberPresenter
{
private ICreateMemberView view;
private IMemberTasks tasks;
public CreateMemberPresenter(ICreateMemberView view)
: this(view, new StubMemberTasks())
{
}
public CreateMemberPresenter(ICreateMemberView view, IMemberTasks tasks)
{
this.view = view;
this.tasks = tasks;
HookupEventHandlersTo(view);
}
private void HookupEventHandlersTo(ICreateMemberView view)
{
view.CreateMember += delegate { CreateMember(); };
}
private void CreateMember()
{
if (!view.IsValid)
return;
try
{
int newUserId;
tasks.CreateMember(view.NewMember, out newUserId);
view.NewUserCode = newUserId;
view.Notify(new NotificationDTO() { Type = NotificationType.Success });
}
catch(Exception e)
{
this.LogA().Message(string.Format("Error Creating User: {0}", e.Message));
view.Notify(new NotificationDTO() { Type = NotificationType.Failure, Message = "There was an error creating a new member" });
}
}
}
```
I have my main form validation done using the built-in .NET validation controls, but now I need to verify that the data sufficiently satisfies the criteria for the Service Layer.
Let's say the following Service Layer messages can show up:
* E-mail account already exists (failure)
* Referring user entered does not exist (failure)
* Password length exceeds datastore allowed length (failure)
* Member created successfully (success)
Let's also say that more rules will be in the service layer that the UI cannot anticipate.
Currently I'm having the service layer throw an exception if things didn't go as planned. Is that a sufficient strategy? Does this code smell to you guys? If I wrote a service layer like this, would you be annoyed at having to write Presenters that use it in this way? Return codes seem too old school and a bool is just not informative enough.
---
> **Edit not by OP: merging in follow-up comments that were posted as answers by the OP**
---
Cheekysoft, I like the concept of a ServiceLayerException. I already have a global exception module for the exceptions that I don't anticipate. Do you find making all these custom exceptions tedious? I was thinking that catching the base Exception class was a bit smelly but wasn't exactly sure how to progress from there.
tgmdbm, I like the clever use of the lambda expression there!
---
Thanks Cheekysoft for the follow-up. So I'm guessing that would be the strategy if you don't mind the user being displayed a separate page (I'm primarily a web developer) if the Exception is not handled.
However, if I want to return the error message in the same view where the user submitted the data that caused the error, I would then have to catch the Exception in the Presenter?
Here's what the CreateUserView looks like when the Presenter has handled the ServiceLayerException:

For this kind of error, it's nice to report it to the same view.
Anyways, I think we're going beyond the scope of my original question now. I'll play around with what you've posted and if I need further details I'll post a new question.
|
That sounds just right to me. Exceptions are preferable as they can be thrown up to the top of the service layer from anywhere inside the service layer, no matter how deeply nested inside the service method implementation it is. This keeps the service code clean as you know the calling presenter will always get notification of the problem.
**Don't catch Exception**
However, [don't catch Exception](https://stackoverflow.com/questions/21938/is-it-really-that-bad-to-catch-a-general-exception) in the presenter. I know it's tempting because it keeps the code shorter, but you need to catch specific exceptions to avoid catching the system-level exceptions.
**Plan a Simple Exception Hierarchy**
If you are going to use exceptions in this way, you should design an exception hierarchy for your own exception classes.
At a minimum, create a ServiceLayerException class and throw one of these in your service methods when a problem occurs. Then, if you need to throw an exception that should/could be handled differently by the presenter, you can throw a specific subclass of ServiceLayerException: say, AccountAlreadyExistsException.
Your presenter then has the option of doing
```
try {
// call service etc.
// handle success to view
}
catch (AccountAlreadyExistsException) {
// set the message and some other unique data in the view
}
catch (ServiceLayerException) {
// set the message in the view
}
// system exceptions, and unrecoverable exceptions are allowed to bubble
// up the call stack so a general error can be shown to the user, rather
// than showing the form again.
```
Using inheritance in your own exception classes means you are not required to catch multiple exceptions in your presenter -- you can if there's a need to -- and you don't end up accidentally catching exceptions you can't handle. If your presenter is already at the top of the call stack, add a catch( Exception ) block to handle the system errors with a different view.
I always try to think of my service layer as a separate distributable library, and throw as specific an exception as makes sense. It is then up to the presenter/controller/remote-service implementation to decide if it needs to worry about the specific details or just treat problems as a generic error.
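To make the hierarchy concrete, here is a minimal sketch (shown in Python for brevity; the C# version is a pair of classes deriving from Exception and ServiceLayerException respectively -- `create_member`, the e-mail value, and the message strings are all illustrative, not part of the original code):

```python
class ServiceLayerException(Exception):
    """Base class for recoverable service-layer failures."""

class AccountAlreadyExistsException(ServiceLayerException):
    """Thrown when the e-mail account is already registered."""

def create_member(email):
    # hypothetical service method: raises a specific subclass on failure
    if email == "taken@example.com":
        raise AccountAlreadyExistsException(email)
    return 42  # new user id

# presenter-side handling: most specific exception first
try:
    new_user_id = create_member("taken@example.com")
    message = "Success"
except AccountAlreadyExistsException:
    message = "That account already exists"
except ServiceLayerException:
    message = "There was an error creating a new member"
```

Because the specific exception derives from the base one, a presenter that doesn't care about the distinction can catch only the base class and still handle both.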
|
As Cheekysoft suggests, I would tend to move all major exceptions into an ExceptionHandler and let those exceptions bubble up. The ExceptionHandler would render the appropriate view for the type of exception.
Any validation exceptions however should be handled in the view but typically this logic is common to many parts of your application. So I like to have a helper like this
```
public static class Try {
public static List<string> This( Action action ) {
var errors = new List<string>();
try {
action();
}
catch ( SpecificException e ) {
errors.Add( "Something went 'orribly wrong" );
}
// ...catch other specific exceptions as needed
return errors;
}
}
```
Then when calling your service just do the following
```
var errors = Try.This( () => {
// call your service here
tasks.CreateMember( ... );
} );
```
Then if errors is empty, you're good to go.
You can take this further and extend it with custom exception handlers which handle *uncommon* exceptions.
|
How Do You Communicate Service Layer Messages/Errors to Higher Layers Using MVP?
|
[
"",
"c#",
"asp.net",
"exception",
"mvp",
"n-tier-architecture",
""
] |
Prior to C# generics, everyone would code collections for their business objects by creating a collection base that implemented IEnumerable, i.e.:
```
public class CollectionBase : IEnumerable
```
and then would derive their Business Object collections from that.
```
public class BusinessObjectCollection : CollectionBase
```
Now with the generic list class, does anyone just use that instead? I've found that I use a compromise of the two techniques:
```
public class BusinessObjectCollection : List<BusinessObject>
```
I do this because I like to have strongly typed names instead of just passing Lists around.
What is **your** approach?
|
I am generally in the camp of just using a List directly, unless for some reason I need to encapsulate the data structure and provide a limited subset of its functionality. This is mainly because if I don't have a specific need for encapsulation then doing it is just a waste of time.
However, with the collection initializer feature in C# 3.0, there are some new situations where I would advocate using customized collection classes.
Basically, C# 3.0 allows any class that implements `IEnumerable` and has an accessible Add method to use the new collection initializer syntax. For example, because Dictionary defines a method Add(K key, V value), it is possible to initialize a dictionary using this syntax:
```
var d = new Dictionary<string, int>
{
{"hello", 0},
{"the answer to life the universe and everything is:", 42}
};
```
The great thing about the feature is that it works for add methods with any number of arguments. For example, given this collection:
```
class c1 : IEnumerable
{
public void Add(int x1, int x2, int x3)
{
//...
}
//...
}
```
it would be possible to initialize it like so:
```
var x = new c1
{
{1,2,3},
{4,5,6}
}
```
This can be really useful if you need to create static tables of complex objects. For example, if you were just using `List<Customer>` and you wanted to create a static list of customer objects you would have to create it like so:
```
var x = new List<Customer>
{
new Customer("Scott Wisniewski", "555-555-5555", "Seattle", "WA"),
new Customer("John Doe", "555-555-1234", "Los Angeles", "CA"),
new Customer("Michael Scott", "555-555-8769", "Scranton", "PA"),
new Customer("Ali G", "", "Staines", "UK")
};
```
However, if you use a customized collection, like this one:
```
class CustomerList : List<Customer>
{
public void Add(string name, string phoneNumber, string city, string stateOrCountry)
{
Add(new Customer(name, phoneNumber, city, stateOrCountry));
}
}
```
You could then initialize the collection using this syntax:
```
var customers = new CustomerList
{
{"Scott Wisniewski", "555-555-5555", "Seattle", "WA"},
{"John Doe", "555-555-1234", "Los Angeles", "CA"},
{"Michael Scott", "555-555-8769", "Scranton", "PA"},
{"Ali G", "", "Staines", "UK"}
};
```
This has the advantage of being both easier to type and easier to read because there is no need to retype the element type name for each element. The advantage can be particularly strong if the element type name is long or complex.
That being said, this is only useful if you need static collections of data defined in your app. Some types of apps, like compilers, use them all the time. Others, like typical database apps don't because they load all their data from a database.
My advice would be that if you either need to define a static collection of objects, or need to encapsulate away the collection interface, then create a custom collection class. Otherwise I would just use `List<T>` directly.
|
It's [recommended](http://blogs.msdn.com/fxcop/archive/2006/04/27/faq-why-does-donotexposegenericlists-recommend-that-i-expose-collection-lt-t-gt-instead-of-list-lt-t-gt-david-kean.aspx) that public APIs not expose List<T>, but Collection<T> instead.
If you are inheriting from it though, you should be fine, afaik.
|
List<BusinessObject> or BusinessObjectCollection?
|
[
"",
"c#",
".net",
"generics",
"collections",
"class-design",
""
] |
What is the best way to iterate through a strongly-typed generic List in C#.NET and VB.NET?
|
For C#:
```
foreach(ObjectType objectItem in objectTypeList)
{
// ...do some stuff
}
```
Answer for VB.NET from **Purple Ant**:
```
For Each objectItem as ObjectType in objectTypeList
'Do some stuff '
Next
```
|
With any generic implementation of IEnumerable the best way is:
```
//C#
foreach( var item in listVariable) {
//do stuff
}
```
There is an important exception, however. IEnumerable involves the overhead of the Current property and MoveNext() calls, which is what the foreach loop is actually compiled into.
When you have a simple array of structs:
```
//C#
int[] valueTypeArray;
for(int i=0; i < valueTypeArray.Length; ++i) {
int item = valueTypeArray[i];
//do stuff
}
```
is quicker.
---
**Update**
Following a discussion with @Steven Sudit (see comments) I think my original advice may be out of date or mistaken, so I ran some tests:
```
// create a list to test with
var theList = Enumerable.Range(0, 100000000).ToList();
// time foreach
var sw = Stopwatch.StartNew();
foreach (var item in theList)
{
int inLoop = item;
}
Console.WriteLine("list foreach: " + sw.Elapsed.ToString());
sw.Reset();
sw.Start();
// time for
int cnt = theList.Count;
for (int i = 0; i < cnt; i++)
{
int inLoop = theList[i];
}
Console.WriteLine("list for : " + sw.Elapsed.ToString());
// now run the same tests, but with an array
var theArray = theList.ToArray();
sw.Reset();
sw.Start();
foreach (var item in theArray)
{
int inLoop = item;
}
Console.WriteLine("array foreach: " + sw.Elapsed.ToString());
sw.Reset();
sw.Start();
// time for
cnt = theArray.Length;
for (int i = 0; i < cnt; i++)
{
int inLoop = theArray[i];
}
Console.WriteLine("array for : " + sw.Elapsed.ToString());
Console.ReadKey();
```
So, I ran this in release with all optimisations:
```
list foreach: 00:00:00.5137506
list for : 00:00:00.2417709
array foreach: 00:00:00.1085653
array for : 00:00:00.0954890
```
And then debug without optimisations:
```
list foreach: 00:00:01.1289015
list for : 00:00:00.9945345
array foreach: 00:00:00.6405422
array for : 00:00:00.4913245
```
So it appears fairly consistent, `for` is quicker than `foreach` and arrays are quicker than generic lists.
However, this is across 100,000,000 iterations and the difference is about .4 of a second between the fastest and slowest methods. Unless you're doing massive performance critical loops it just isn't worth worrying about.
|
What is the best way to iterate through a strongly-typed generic List<T>?
|
[
"",
"c#",
".net",
"vb.net",
"generics",
"collections",
""
] |
I am writing a webapp using CodeIgniter that requires authentication. I created a model which handles all my authentication. However, I can't find a way to access this authentication model from inside another model. Is there a way to access a model from inside another model, or a better way to handle authentication inside CodeIgniter?
|
In general, you don't want to create objects inside an object. That's a bad habit; instead, write a clear API and inject a model into your model.
```
<?php
// in your controller
$model1 = new Model1();
$model2 = new Model2();
$model2->setWhatever($model1);
?>
```
|
It seems you can load models inside models, although you probably should solve this another way. See [CodeIgniter forums](http://codeigniter.com/forums/viewthread/49625) for a discussion.
```
class SomeModel extends Model
{
function doSomething($foo)
{
$CI =& get_instance();
$CI->load->model('SomeOtherModel','NiceName',true);
// use $CI instead of $this to query the other models
$CI->NiceName->doSomethingElse();
}
}
```
Also, I don't understand what Till is saying about not creating objects inside objects. Of course you should! Sending objects as arguments looks much less clear to me.
|
Can you access a model from inside another model in CodeIgniter?
|
[
"",
"php",
"codeigniter",
"authentication",
"model",
""
] |
One of the fun parts of multi-cultural programming is number formats.
* Americans use 10,000.50
* Germans use 10.000,50
* French use 10 000,50
My first approach would be to take the string, parse it backwards until I encounter a separator and use this as my decimal separator. There is an obvious flaw with that: 10.000 would be interpreted as 10.
Another approach: if the string contains two different non-numeric characters, use the last one as the decimal separator and discard the others. If I only have one, check if it occurs more than once and discard it if it does. If it only appears once, check if it has 3 digits after it. If yes, discard it; otherwise, use it as the decimal separator.
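That second heuristic is mechanical enough to sketch out. The following is purely illustrative (Python; the function name and exact behavior are my own, not part of any framework):

```python
def parse_number(s):
    """Heuristic parser following the rules described above (illustrative)."""
    seps = [c for c in s if not c.isdigit()]
    uniq = list(dict.fromkeys(seps))  # unique separators, in order seen
    if len(uniq) >= 2:
        dec = seps[-1]                # last separator is the decimal point
        for c in uniq:
            if c != dec:
                s = s.replace(c, "")  # drop the grouping separators
        s = s.replace(dec, ".")
    elif len(uniq) == 1:
        c = uniq[0]
        if seps.count(c) > 1:
            s = s.replace(c, "")      # repeated -> grouping separator
        else:
            digits_after = len(s) - s.rindex(c) - 1
            if digits_after == 3:
                s = s.replace(c, "")  # exactly 3 trailing digits -> grouping
            else:
                s = s.replace(c, ".") # otherwise it's the decimal separator
    return float(s)
```

Note that the flaw called out above remains: `"10.000"` still parses as ten thousand, because a lone separator followed by 3 digits is treated as grouping.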
The obvious "best solution" would be to detect the User's culture or Browser, but that does not work if you have a Frenchman using an en-US Windows/Browser.
Does the .NET Framework contain some mythical black-magic floating-point parser that is better than `Double.(Try)Parse()` at auto-detecting the number format?
|
I think the best you can do in this case is to take their input and then show them what you think they meant. If they disagree, show them the format you're expecting and get them to enter it again.
|
I don't know the ASP.NET side of the problem but .NET has a pretty powerful class: [System.Globalization.CultureInfo](https://learn.microsoft.com/dotnet/api/system.globalization.cultureinfo). You can use the following code to parse a string containing a double value:
```
double d = double.Parse("100.20", CultureInfo.CurrentCulture);
// -- OR --
double d = double.Parse("100.20", CultureInfo.CurrentUICulture);
```
If ASP.NET somehow (i.e. using HTTP Request headers) passes current user's CultureInfo to either CultureInfo.CurrentCulture or CultureInfo.CurrentUICulture, these will work fine.
|
Floating Point Number parsing: Is there a Catch All algorithm?
|
[
"",
"c#",
".net",
"asp.net",
"internationalization",
"globalization",
""
] |
Reading through [this question](https://stackoverflow.com/questions/39879/why-doesnt-javascript-support-multithreading) on multi-threaded JavaScript, I was wondering if there would be any security implications in allowing JavaScript to spawn multiple threads. For example, would there be a risk of a malicious script repeatedly spawning thread after thread in an attempt to overwhelm the operating system or interpreter and trigger entrance into "undefined behavior land", or is it pretty much a non-issue? Any other ways in which an attack might exploit a hypothetical implementation of JavaScript that supports threads that a non-threading implementation would be immune to?
**Update:** Note that locking up a browser isn't the same as creating an undefined behavior exploit.
|
No, multiple threads would not add extra security problems in a perfect implementation. Threaded javascript would add complexity to the javascript interpreter which makes it more likely to have an exploitable bug. But threads alone are not going to add any security issues.
Threads are not present in javascript because "Threads Suck" - read more from the language designer (<http://weblogs.mozillazine.org/roadmap/archives/2007/02/threads_suck.html>)
|
Well, you can already lock up a browser and *seriously* slow down a system with badly-behaved JS. Enlightened browsers have implemented checks for this sort of thing, and will stop it before it gets out of hand.
I would tend to assume that threads would be dealt with in a similar manner.
---
Perhaps you could explain what you mean by "undefined behavior" then? An interpreter that allowed untrusted script to *directly* control the number of OS-native threads being run would be *incredibly* naive - I don't know how Gears runs things, but since the API is centered around `Worker`s in `WorkerPool`s, I would be very surprised if they aren't limiting the total number of native threads in use to some very low number.
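As a sketch of that mitigation: with a bounded worker pool, submitting an arbitrary number of tasks never grows the number of native threads past a fixed cap. Python's executor is shown purely for illustration; how Gears actually schedules workers may differ:

```python
from concurrent.futures import ThreadPoolExecutor

# No matter how many tasks a script submits, at most 4 OS threads run them;
# the remaining 96 tasks simply queue up and wait their turn.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(lambda x=i: x * x) for i in range(100)]
    results = [f.result() for f in futures]
```

The hostile "spawn thread after thread" script then degrades into a hostile "queue task after task" script, which is the same denial-of-service surface a non-threaded interpreter already has.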
|
Security implications of multi-threaded javascript
|
[
"",
"javascript",
"multithreading",
""
] |
Using C# and System.Data.SqlClient, is there a way to retrieve a list of parameters that belong to a stored procedure on a SQL Server before I actually execute it?
I have a "multi-environment" scenario where there are multiple versions of the same database schema. Examples of environments might be "Development", "Staging", & "Production". "Development" is going to have one version of the stored procedure and "Staging" is going to have another.
All I want to do is validate that a parameter is going to be there before passing it a value and calling the stored procedure. Avoiding that SqlException rather than having to catch it is a plus for me.
Joshua
|
You can use SqlCommandBuilder.DeriveParameters() (see [SqlCommandBuilder.DeriveParameters - Get Parameter Information for a Stored Procedure - ADO.NET Tutorials](https://web.archive.org/web/20110304121600/http://www.davidhayden.com/blog/dave/archive/2006/11/01/SqlCommandBuilderDeriveParameters.aspx)) or there's [this way](http://www.codeproject.com/KB/database/enumeratesps.aspx) which isn't as elegant.
|
You want the [SqlCommandBuilder.DeriveParameters(SqlCommand)](http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlcommandbuilder.deriveparameters.aspx) method. Note that it requires an additional round trip to the database, so it is a somewhat significant performance hit. You should consider caching the results.
An example call:
```
using (SqlConnection conn = new SqlConnection(CONNSTRING))
using (SqlCommand cmd = new SqlCommand("StoredProc", conn)) {
cmd.CommandType = CommandType.StoredProcedure;
SqlCommandBuilder.DeriveParameters(cmd);
cmd.Parameters["param1"].Value = "12345";
// ....
}
```
|
How can I retrieve a list of parameters from a stored procedure in SQL Server
|
[
"",
"c#",
"sql-server",
"ado.net",
""
] |
We're working on a Log Viewer. The user will have the option to filter by user, severity, etc. In the SQL days I'd add to the query string, but I want to do it with Linq. How can I conditionally add where-clauses?
|
If you want to filter only when certain criteria are met, do something like this:
```
var logs = from log in context.Logs
select log;
if (filterBySeverity)
logs = logs.Where(p => p.Severity == severity);
if (filterByUser)
logs = logs.Where(p => p.User == user);
```
Doing so this way will allow your Expression tree to be exactly what you want. That way the SQL created will be exactly what you need and nothing less.
|
If you need to filter based on a List / Array, use the following:
```
public List<Data> GetData(List<string> Numbers, List<string> Letters)
{
if (Numbers == null)
Numbers = new List<string>();
if (Letters == null)
Letters = new List<string>();
var q = from d in database.table
where (Numbers.Count == 0 || Numbers.Contains(d.Number))
where (Letters.Count == 0 || Letters.Contains(d.Letter))
select new Data
{
Number = d.Number,
Letter = d.Letter,
};
return q.ToList();
}
```
|
How can I conditionally apply a Linq operator?
|
[
"",
"c#",
"linq",
"linq-to-sql",
""
] |
I'm using the Infragistics grid and I'm having a difficult time using a drop-down list as the value selector for one of my columns.
I tried reading the documentation but Infragistics' documentation is not so good. I've also taken a look at this [discussion](http://news.infragistics.com/forums/p/9063/45792.aspx) with no luck.
What I'm doing so far:
```
col.Type = ColumnType.DropDownList;
col.DataType = "System.String";
col.ValueList = myValueList;
```
where `myValueList` is:
```
ValueList myValueList = new ValueList();
myValueList.Prompt = "My text prompt";
myValueList.DisplayStyle = ValueListDisplayStyle.DisplayText;
foreach(MyObjectType item in MyObjectTypeCollection)
{
myValueList.ValueItems.Add(item.ID, item.Text); // Note that the ID is a string (not my design)
}
```
When I look at the page, I expect to see a drop-down list in the cells for this column, but my columns are empty.
|
I've found what was wrong.
The column must allow updates.
```
uwgMyGrid.Columns.FromKey("colTest").AllowUpdate = AllowUpdate.Yes;
```
|
Here's an example from one of my pages:
```
UltraWebGrid uwgMyGrid = new UltraWebGrid();
uwgMyGrid.Columns.Add("colTest", "Test Dropdown");
uwgMyGrid.Columns.FromKey("colTest").Type = ColumnType.DropDownList;
uwgMyGrid.Columns.FromKey("colTest").ValueList.ValueListItems.Insert(0, "ONE", "Choice 1");
uwgMyGrid.Columns.FromKey("colTest").ValueList.ValueListItems.Insert(1, "TWO", "Choice 2");
```
|
UltraWebGrid: How to use a drop-down list in a column
|
[
"",
"c#",
"asp.net",
"grid",
"infragistics",
"ultrawebgrid",
""
] |
We have an application with a good amount of jQuery JSON calls to server side code. Because of this, we have a large amount of binding code to parse responses and bind the appropriate values to the form. This is a two part question.
1. What is the recommended approach for dealing with a large number of forms that all have different data? Right now we are trying to take a structured approach in setting up a js "class" for each page, with an init, wireClickEvents etc., to try to have everything conform.
2. Are there any "best practices" for creating repetitive jQuery code, or any type of recommended structure other than just throwing a bunch of functions in a js file?
|
Not 100% sure what you are asking, but personally (and I use MochiKit), I create JavaScript "classes" (or widgets, if you prefer) for every significant client-side UI structure. These know, of course, how to populate themselves with data.
I don't know what more there is to say - writing UI code for the browser in JavaScript is no different than writing UI code for other types of apps, as far as I am concerned. Build classes and instantiate them as needed, populate them with data, have them throw events, etc. etc.
Am I up in the night on this? :)
---
EDIT: In other words, yes - do what you are doing, for the most part. I see too many novice JavaScript hackers write a bunch of poorly-cohesive functions that don't appear to be a part of anything specific other than they are all in a single file. Hope that makes sense.
|
You should probably look into a framework like [knockout.js](http://www.knockoutjs.com) This way you can just update your models and the forms will update automatically.
|
Best practices with jQuery form binding code in an application
|
[
"",
"javascript",
"jquery",
"ooad",
""
] |
A question related to [Regular cast vs. static\_cast vs. dynamic\_cast](https://stackoverflow.com/questions/28002):
What cast syntax style do you prefer in C++?
* C-style cast syntax: `(int)foo`
* C++-style cast syntax: `static_cast<int>(foo)`
* constructor syntax: `int(foo)`
They may not translate to exactly the same instructions (do they?) but their effect should be the same (right?).
If you're just casting between the built-in numeric types, I find C++-style cast syntax too verbose. As a former Java coder I tend to use C-style cast syntax instead, but my local C++ guru insists on using constructor syntax.
What do you think?
|
It's best practice *never* to use C-style casts for three main reasons:
* as already mentioned, no checking is performed here. The programmer simply cannot know which of the various casts is used which weakens strong typing
* the new casts are intentionally visually striking. Since casts often reveal a weakness in the code, it's argued that making casts visible in the code is a good thing.
* this is especially true if searching for casts with an automated tool. Finding C-style casts reliably is nearly impossible.
As palm3D noted:
> I find C++-style cast syntax too verbose.
This is intentional, for the reasons given above.
The constructor syntax (official name: function-style cast) is semantically *the same* as the C-style cast and should be avoided as well (except for variable initializations on declaration), for the same reasons. It is debatable whether this should be true even for types that define custom constructors but in Effective C++, Meyers argues that even in those cases you should refrain from using them. To illustrate:
```
void f(auto_ptr<int> x);
f(static_cast<auto_ptr<int> >(new int(5))); // GOOD
f(auto_ptr<int>(new int(5))); // BAD
```
The `static_cast` here will actually call the `auto_ptr` constructor.
|
According to [Stroustrup](http://www.research.att.com/~bs/bs_faq2.html#static-cast):
> The "new-style casts" were introduced
> to give programmers a chance to state
> their intentions more clearly and for
> the compiler to catch more errors.
So really, it's for safety, as it does extra compile-time checking.
|
C++ cast syntax styles
|
[
"",
"c++",
"coding-style",
"casting",
""
] |
Can anyone tell me how I can display a status message like "12 seconds ago" or "5 minutes ago" etc in a web page?
|
Here is the PHP code for this:
```
function time_since($since) {
$chunks = array(
array(60 * 60 * 24 * 365 , 'year'),
array(60 * 60 * 24 * 30 , 'month'),
array(60 * 60 * 24 * 7, 'week'),
array(60 * 60 * 24 , 'day'),
array(60 * 60 , 'hour'),
array(60 , 'minute'),
array(1 , 'second')
);
for ($i = 0, $j = count($chunks); $i < $j; $i++) {
$seconds = $chunks[$i][0];
$name = $chunks[$i][1];
if (($count = floor($since / $seconds)) != 0) {
break;
}
}
$print = ($count == 1) ? '1 '.$name : "$count {$name}s";
return $print;
}
```
The function takes the number of seconds as input and outputs text such as:
* 10 seconds
* 1 minute
etc
|
```
function timeAgo($timestamp){
$datetime1=new DateTime("now");
$datetime2=date_create($timestamp);
$diff=date_diff($datetime1, $datetime2);
$timemsg='';
if($diff->y > 0){
$timemsg = $diff->y .' year'. ($diff->y > 1 ? "s" : '');
}
else if($diff->m > 0){
$timemsg = $diff->m . ' month'. ($diff->m > 1 ? "s" : '');
}
else if($diff->d > 0){
$timemsg = $diff->d .' day'. ($diff->d > 1 ? "s" : '');
}
else if($diff->h > 0){
$timemsg = $diff->h .' hour'. ($diff->h > 1 ? "s" : '');
}
else if($diff->i > 0){
$timemsg = $diff->i .' minute'. ($diff->i > 1 ? "s" : '');
}
else if($diff->s > 0){
$timemsg = $diff->s .' second'. ($diff->s > 1 ? "s" : '');
}
$timemsg = $timemsg.' ago';
return $timemsg;
}
```
|
How to display "12 minutes ago" etc in a PHP webpage?
|
[
"",
"php",
""
] |
What's the best/easiest way to obtain a count of items within an IEnumerable collection without enumerating over all of the items in the collection?
Possible with LINQ or Lambda?
|
You will have to enumerate to get a count. Other constructs like the List keep a running count.
|
In any case, you have to loop through it. Linq offers the `Count` method:
```
var result = myenum.Count();
```
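The same distinction exists in other languages: a materialized collection knows its own size, while a lazy sequence must be walked (and may be consumed in the process). A minimal Python illustration, using a generator as the analogue of a lazy `IEnumerable` (LINQ's `Count()` follows the same pattern, checking for `ICollection<T>` before falling back to iteration):

```
def count_items(iterable):
    """Count items, using len() when the object already tracks its size."""
    try:
        return len(iterable)             # lists, tuples, sets: O(1), no enumeration
    except TypeError:
        return sum(1 for _ in iterable)  # generators: must enumerate (and consume) them

items = [1, 2, 3]
lazy = (x * x for x in items)

print(count_items(items))  # uses len(); the list is untouched
print(count_items(lazy))   # enumerates the generator; it is now exhausted
```

Note the side effect: counting the generator exhausts it, which is exactly why counting an `IEnumerable` you intend to iterate again can be a trap.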
|
The best way to get a count of IEnumerable<T>
|
[
"",
"c#",
"linq",
""
] |
For a given class I would like to have tracing functionality i.e. I would like to log every method call (method signature and actual parameter values) and every method exit (just the method signature).
How do I accomplish this assuming that:
* I don't want to use any 3rd party
AOP libraries for C#,
* I don't want to add duplicate code to all the methods that I want to trace,
* I don't want to change the public API of the class - users of the class should be able to call all the methods in exactly the same way.
To make the question more concrete let's assume there are 3 classes:
```
public class Caller
{
public static void Call()
{
Traced traced = new Traced();
traced.Method1();
traced.Method2();
}
}
public class Traced
{
public void Method1(String name, Int32 value) { }
public void Method2(Object object) { }
}
public class Logger
{
public static void LogStart(MethodInfo method, Object[] parameterValues);
public static void LogEnd(MethodInfo method);
}
```
How do I invoke *Logger.LogStart* and *Logger.LogEnd* for every call to *Method1* and *Method2* without modifying the *Caller.Call* method and without adding the calls explicitly to *Traced.Method1* and *Traced.Method2*?
Edit: What would be the solution if I'm allowed to slightly change the Call method?
|
C# is not an AOP-oriented language. It has some AOP features, and you can emulate others, but doing AOP in C# is painful.
I looked for ways to do exactly what you wanted to do and found no easy way to do it.
As I understand it, this is what you want to do:
```
[Log()]
public void Method1(String name, Int32 value);
```
and in order to do that you have two main options
1. Inherit your class from MarshalByRefObject or ContextBoundObject and define an attribute which inherits from IMessageSink. [This article](http://www.developerfusion.co.uk/show/5307/3/) has a good example. You have to consider, nonetheless, that using a MarshalByRefObject will make performance drop dramatically, and I mean it: I'm talking about a 10x performance loss, so think carefully before trying that.
2. The other option is to inject code directly. At runtime, this means you'll have to use reflection to "read" every class, get its attributes and inject the appropriate call (and for that matter I don't think you could use Reflection.Emit, as I don't believe it allows you to insert new code inside an already existing method). At design time this will mean creating an extension to the CLR compiler, which I honestly have no idea how to do.
The final option is using an [IoC framework](http://en.wikipedia.org/wiki/Inversion_of_control). Maybe it's not the perfect solution, as most IoC frameworks work by defining entry points that allow methods to be hooked, but depending on what you want to achieve, it might be a fair approximation.
|
The simplest way to achieve that is probably to use [PostSharp](http://www.postsharp.net). It injects code inside your methods based on the attributes that you apply to it. It allows you to do exactly what you want.
Another option is to use the [profiling API](http://msdotnetsupport.blogspot.com/2006/08/net-profiling-api-tutorial.html) to inject code inside the method, but that is really hardcore.
|
How do I intercept a method call in C#?
|
[
"",
"c#",
"reflection",
"aop",
""
] |
I've been taking a look at some different products for .NET which propose to speed up development time by providing a way for business objects to map seamlessly to an automatically generated database. I've never had a problem writing a data access layer, but I'm wondering if this type of product will really save the time it claims. I also worry that I will be giving up too much control over the database and make it harder to track down any data level problems. Do these type of products make it better or worse in the already tough case that the database and business object structure must change?
For example:
[Object Relation Mapping from Dev Express](http://www.devexpress.com/Products/NET/ORM/)
In essence, is it worth it? Will I save "THAT" much time, effort, and future bugs?
|
I have used SubSonic and EntitySpaces. Once you get the hang of them, I believe they can save you time, but as the complexity of your app and the volume of data grow, you may outgrow these tools. You start to lose time trying to figure out whether something like a performance issue is related to the ORM or to your code. So, to answer your question, I think it depends. I tend to agree with Eric on this: high-volume enterprise apps are not a good place for general-purpose ORMs, but in standard-fare smaller CRUD-type apps, you might see some saved time.
|
I've found [iBatis](http://ibatis.apache.org/) from the Apache group to be an excellent solution to this problem. My team is currently using iBatis to map all of our calls from Java to our MySQL backend. It's been a huge benefit as it's easy to manage all of our SQL queries and procedures because they're all located in XML files, not in our code. Separating SQL from your code, no matter what the language, is a great help.
Additionally, iBatis allows you to write your own data mappers to map data to and from your objects to the DB. We wanted this flexibility, as opposed to a Hibernate type solution that does everything for you, but also (IMO) limits your ability to perform complex queries.
There is a .NET version of iBatis as well.
|
Simple Object to Database Product
|
[
"",
"c#",
".net",
"database",
"orm",
""
] |
If I have Python code
```
class A():
pass
class B():
pass
class C(A, B):
pass
```
and I have class `C`, is there a way to iterate through its superclasses (`A` and `B`)? Something like pseudocode:
```
>>> magicGetSuperClasses(C)
(<type 'A'>, <type 'B'>)
```
One solution seems to be [inspect module](http://docs.python.org/lib/module-inspect.html) and `getclasstree` function.
```
def magicGetSuperClasses(cls):
return [o[0] for o in inspect.getclasstree([cls]) if type(o[0]) == type]
```
but is this a "Pythonian" way to achieve the goal?
|
`C.__bases__` is an array of the super classes, so you could implement your hypothetical function like so:
```
def magicGetSuperClasses(cls):
return cls.__bases__
```
But I imagine it would be easier to just reference `cls.__bases__` directly in most cases.
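If you need all ancestors transitively rather than just the direct bases, the method resolution order gives you the full linearized chain. A short sketch (using `inspect.getmro`, which also works for old-style classes):

```
import inspect

class A: pass
class B: pass
class C(A, B): pass

print(C.__bases__)        # direct superclasses only: (A, B)
print(inspect.getmro(C))  # full linearized chain: (C, A, B, object)
```

`C.__mro__` returns the same tuple for new-style classes; `getmro` is just the uniform spelling.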
|
@John: Your snippet doesn't work -- you are returning the *class* of the base classes (which are also known as metaclasses). You really just want `cls.__bases__`:
```
class A: pass
class B: pass
class C(A, B): pass
c = C() # Instance
assert C.__bases__ == (A, B) # Works
assert c.__class__.__bases__ == (A, B) # Works
def magicGetSuperClasses(clz):
return tuple([base.__class__ for base in clz.__bases__])
assert magicGetSuperClasses(C) == (A, B) # Fails
```
Also, if you're using Python 2.4+ you can use [generator expressions](http://www.python.org/dev/peps/pep-0289/) instead of creating a list (via []), then turning it into a tuple (via `tuple`). For example:
```
def get_base_metaclasses(cls):
    """Returns the metaclass of all the base classes of cls."""
    return tuple(base.__class__ for base in cls.__bases__)
```
That's a somewhat confusing example, but genexps are generally easy and cool. :)
|
Python super class reflection
|
[
"",
"python",
"reflection",
""
] |
I know that you can insert multiple rows at once, is there a way to update multiple rows at once (as in, in one query) in MySQL?
Edit:
For example I have the following
```
Name id Col1 Col2
Row1 1 6 1
Row2 2 2 3
Row3 3 9 5
Row4 4 16 8
```
I want to combine all the following Updates into one query
```
UPDATE table SET Col1 = 1 WHERE id = 1;
UPDATE table SET Col1 = 2 WHERE id = 2;
UPDATE table SET Col2 = 3 WHERE id = 3;
UPDATE table SET Col1 = 10 WHERE id = 4;
UPDATE table SET Col2 = 12 WHERE id = 4;
```
|
Yes, that's possible - you can use INSERT ... ON DUPLICATE KEY UPDATE.
Using your example:
```
INSERT INTO table (id,Col1,Col2) VALUES (1,1,1),(2,2,3),(3,9,3),(4,10,12)
ON DUPLICATE KEY UPDATE Col1=VALUES(Col1),Col2=VALUES(Col2);
```
|
Since you have dynamic values, you need to use an IF or CASE for the columns to be updated. It gets kinda ugly, but it should work.
Using your example, you could do it like:
```
UPDATE table SET Col1 = CASE id
WHEN 1 THEN 1
WHEN 2 THEN 2
WHEN 4 THEN 10
ELSE Col1
END,
Col2 = CASE id
WHEN 3 THEN 3
WHEN 4 THEN 12
ELSE Col2
END
WHERE id IN (1, 2, 3, 4);
```
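When a single statement isn't a hard requirement, a common alternative is to batch the updates client-side: one prepared statement per column, many parameter sets, sent in one call. A minimal Python sketch using sqlite3 for runnability (table and values mirror the example above; with MySQL you would use your driver's `executemany` the same way):

```
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE t (id INTEGER PRIMARY KEY, Col1 INTEGER, Col2 INTEGER)')
conn.executemany('INSERT INTO t (id, Col1, Col2) VALUES (?, ?, ?)',
                 [(1, 6, 1), (2, 2, 3), (3, 9, 5), (4, 16, 8)])

# One prepared statement, many parameter sets -- the driver batches them.
conn.executemany('UPDATE t SET Col1 = ? WHERE id = ?', [(1, 1), (2, 2), (10, 4)])
conn.executemany('UPDATE t SET Col2 = ? WHERE id = ?', [(3, 3), (12, 4)])

print(conn.execute('SELECT id, Col1, Col2 FROM t ORDER BY id').fetchall())
# -> [(1, 1, 1), (2, 2, 3), (3, 9, 3), (4, 10, 12)]
```

This avoids both the giant `CASE` expression and the round trip per row, at the cost of more than one statement.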
|
Multiple Updates in MySQL
|
[
"",
"mysql",
"sql",
"sql-update",
""
] |
Since both a `Table Scan` and a `Clustered Index Scan` essentially scan all records in the table, why is a Clustered Index Scan supposedly better?
As an example - what's the performance difference between the following when there are many records?:
```
declare @temp table(
SomeColumn varchar(50)
)
insert into @temp
select 'SomeVal'
select * from @temp
-----------------------------
declare @temp table(
RowID int not null identity(1,1) primary key,
SomeColumn varchar(50)
)
insert into @temp
select 'SomeVal'
select * from @temp
```
|
In a table without a clustered index (a heap table), data pages are not linked together - so traversing pages requires a [lookup into the Index Allocation Map](http://msdn.microsoft.com/en-us/library/ms188270.aspx).
A clustered table, however, has its [data pages linked in a doubly linked list](http://msdn.microsoft.com/en-us/library/ms177443.aspx) - making sequential scans a bit faster. Of course, in exchange, you have the overhead of dealing with keeping the data pages in order on `INSERT`, `UPDATE`, and `DELETE`. A heap table, however, requires a second write to the IAM.
If your query has a `RANGE` operator (e.g.: `SELECT * FROM TABLE WHERE Id BETWEEN 1 AND 100`), then a clustered table (being in a guaranteed order) would be more efficient - as it could use the index pages to find the relevant data page(s). A heap would have to scan all rows, since it cannot rely on ordering.
And, of course, a clustered index lets you do a CLUSTERED INDEX SEEK, which is pretty much optimal for performance...a heap with no indexes would always result in a table scan.
So:
* For your example query where you select all rows, the only difference is the doubly linked list a clustered index maintains. This should make your clustered table just a tiny bit faster than a heap with a large number of rows.
* For a query with a `WHERE` clause that can be (at least partially) satisfied by the clustered index, you'll come out ahead because of the ordering - so you won't have to scan the entire table.
* For a query that is not satisfied by the clustered index, you're pretty much even...again, the only difference being that doubly linked list for sequential scanning. In either case, you're suboptimal.
* For `INSERT`, `UPDATE`, and `DELETE` a heap may or may not win. The heap doesn't have to maintain order, but does require a second write to the IAM. I think the relative performance difference would be negligible, but also pretty data dependent.
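The range-query advantage above can be illustrated with a toy model (purely a sketch, not how SQL Server stores pages): ordered data lets you binary-search to the first qualifying row and stop early, while a heap must examine every row.

```
import bisect

heap = [42, 7, 99, 13, 58, 21, 86, 3]   # unordered rows: a heap
clustered = sorted(heap)                # kept in key order: clustered

def heap_range(rows, lo, hi):
    # Heap: no ordering guarantee, so every row must be checked.
    return [r for r in rows if lo <= r <= hi]

def clustered_range(rows, lo, hi):
    # Clustered: binary-search to the first qualifying row, then read sequentially.
    start = bisect.bisect_left(rows, lo)
    out = []
    for r in rows[start:]:
        if r > hi:
            break  # ordering lets us stop early
        out.append(r)
    return out

print(sorted(heap_range(heap, 10, 60)))    # full scan of 8 rows
print(clustered_range(clustered, 10, 60))  # touches only the qualifying run
```

Both return the same rows; the difference is how much data had to be read to find them.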
Microsoft has a [whitepaper](http://www.microsoft.com/technet/prodtechnol/sql/bestpractice/clusivsh.mspx) which compares a clustered index to an equivalent non-clustered index on a heap (not exactly the same as I discussed above, but close). Their conclusion is basically to put a clustered index on all tables. I'll do my best to summarize their results (again, note that they're really comparing a non-clustered index to a clustered index here - but I think it's relatively comparable):
* `INSERT` performance: clustered index wins by about 3% due to the second write needed for a heap.
* `UPDATE` performance: clustered index wins by about 8% due to the second lookup needed for a heap.
* `DELETE` performance: clustered index wins by about 18% due to the second lookup needed and the second delete needed from the IAM for a heap.
* single `SELECT` performance: clustered index wins by about 16% due to the second lookup needed for a heap.
* range `SELECT` performance: clustered index wins by about 29% due to the random ordering for a heap.
* concurrent `INSERT`: heap table wins by 30% under load due to page splits for the clustered index.
|
<http://msdn.microsoft.com/en-us/library/aa216840(SQL.80).aspx>
The Clustered Index Scan logical and physical operator scans the clustered index specified in the Argument column. When an optional WHERE:() predicate is present, only those rows that satisfy the predicate are returned. If the Argument column contains the ORDERED clause, the query processor has requested that the rows' output be returned in the order in which the clustered index has sorted them. If the ORDERED clause is not present, the storage engine will scan the index in the optimal way (not guaranteeing the output to be sorted).
<http://msdn.microsoft.com/en-us/library/aa178416(SQL.80).aspx>
The Table Scan logical and physical operator retrieves all rows from the table specified in the Argument column. If a WHERE:() predicate appears in the Argument column, only those rows that satisfy the predicate are returned.
|
What's the difference between a Table Scan and a Clustered Index Scan?
|
[
"",
"sql",
"sql-server",
"indexing",
""
] |
In Maven, dependencies are usually set up like this:
```
<dependency>
<groupId>wonderful-inc</groupId>
<artifactId>dream-library</artifactId>
<version>1.2.3</version>
</dependency>
```
Now, if you are working with libraries that have frequent releases, constantly updating the <version> tag can be somewhat annoying. Is there any way to tell Maven to always use the latest available version (from the repository)?
|
***NOTE:***
*The mentioned `LATEST` and `RELEASE` metaversions [have been dropped **for plugin dependencies** in Maven 3 "for the sake of reproducible builds"](https://cwiki.apache.org/confluence/display/MAVEN/Maven+3.x+Compatibility+Notes#Maven3.xCompatibilityNotes-PluginMetaversionResolution), over 6 years ago.
(They still work perfectly fine for regular dependencies.)
For plugin dependencies please refer to this **[Maven 3 compliant solution](https://stackoverflow.com/a/1172805/363573)***.
---
If you always want to use the newest version, Maven has two keywords you can use as an alternative to version ranges. You should use these options with care as you are no longer in control of the plugins/dependencies you are using.
> When you depend on a plugin or a dependency, you can use a version value of LATEST or RELEASE. LATEST refers to the latest released or snapshot version of a particular artifact, the most recently deployed artifact in a particular repository. RELEASE refers to the last non-snapshot release in the repository. In general, it is not a best practice to design software which depends on a non-specific version of an artifact. If you are developing software, you might want to use RELEASE or LATEST as a convenience so that you don't have to update version numbers when a new release of a third-party library is released. When you release software, you should always make sure that your project depends on specific versions to reduce the chances of your build or your project being affected by a software release not under your control. Use LATEST and RELEASE with caution, if at all.
See the [POM Syntax section of the Maven book](http://www.sonatype.com/books/maven-book/reference/pom-relationships-sect-pom-syntax.html#pom-relationships-sect-latest-release) for more details. Or see this doc on [Dependency Version Ranges](http://www.mojohaus.org/versions-maven-plugin/examples/resolve-ranges.html), where:
* A square bracket ( `[` & `]` ) means "closed" (inclusive).
* A parenthesis ( `(` & `)` ) means "open" (exclusive).
Here's an example illustrating the various options. In the Maven repository, com.foo:my-foo has the following metadata:
```
<?xml version="1.0" encoding="UTF-8"?><metadata>
<groupId>com.foo</groupId>
<artifactId>my-foo</artifactId>
<version>2.0.0</version>
<versioning>
<release>1.1.1</release>
<versions>
<version>1.0</version>
<version>1.0.1</version>
<version>1.1</version>
<version>1.1.1</version>
<version>2.0.0</version>
</versions>
<lastUpdated>20090722140000</lastUpdated>
</versioning>
</metadata>
```
If a dependency on that artifact is required, you have the following options (other [version ranges](https://cwiki.apache.org/confluence/display/MAVENOLD/Dependency+Mediation+and+Conflict+Resolution#DependencyMediationandConflictResolution-DependencyVersionRanges) can be specified of course, just showing the relevant ones here):
Declare an exact version (will always resolve to 1.0.1):
```
<version>[1.0.1]</version>
```
Declare an explicit version (will always resolve to 1.0.1 unless a collision occurs, when Maven will select a matching version):
```
<version>1.0.1</version>
```
Declare a version range for all 1.x (will currently resolve to 1.1.1):
```
<version>[1.0.0,2.0.0)</version>
```
Declare an open-ended version range (will resolve to 2.0.0):
```
<version>[1.0.0,)</version>
```
Declare the version as LATEST (will resolve to 2.0.0) (removed from maven 3.x)
```
<version>LATEST</version>
```
Declare the version as RELEASE (will resolve to 1.1.1) (removed from maven 3.x):
```
<version>RELEASE</version>
```
Note that by default your own deployments will update the "latest" entry in the Maven metadata, but to update the "release" entry, you need to activate the "release-profile" from the [Maven super POM](http://maven.apache.org/guides/introduction/introduction-to-the-pom.html). You can do this with either "-Prelease-profile" or "-DperformRelease=true"
---
It's worth emphasising that any approach that allows Maven to pick the dependency versions (LATEST, RELEASE, and version ranges) can leave you open to build time issues, as later versions can have different behaviour (for example the dependency plugin has previously switched a default value from true to false, with confusing results).
It is therefore generally a good idea to define exact versions in releases. As [Tim's answer](https://stackoverflow.com/questions/30571/how-do-i-tell-maven-to-use-the-latest-version-of-a-dependency/1172805#1172805) points out, the [maven-versions-plugin](http://www.mojohaus.org/versions-maven-plugin/) is a handy tool for updating dependency versions, particularly the [versions:use-latest-versions](http://www.mojohaus.org/versions-maven-plugin/use-latest-versions-mojo.html) and [versions:use-latest-releases](http://www.mojohaus.org/versions-maven-plugin/use-latest-releases-mojo.html) goals.
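The bracket/parenthesis semantics above can be sketched as a tiny membership check (purely illustrative; Maven's real resolver also handles multiple range sets, qualifiers, and repository metadata):

```
def in_range(version, spec):
    """Check a dotted version against a single Maven-style range like '[1.0.0,2.0.0)'."""
    def key(v):
        return tuple(int(p) for p in v.split('.'))

    lo_inc = spec[0] == '['    # '[' closed / '(' open on the left
    hi_inc = spec[-1] == ']'   # ']' closed / ')' open on the right
    lo, hi = spec[1:-1].split(',')
    v = key(version)
    if lo and (v < key(lo) or (not lo_inc and v == key(lo))):
        return False
    if hi and (v > key(hi) or (not hi_inc and v == key(hi))):
        return False
    return True

print(in_range('1.1.1', '[1.0.0,2.0.0)'))  # True  -- inside the half-open range
print(in_range('2.0.0', '[1.0.0,2.0.0)'))  # False -- upper bound is exclusive
print(in_range('2.0.0', '[1.0.0,)'))       # True  -- open-ended upper bound
```

An empty bound (as in `[1.0.0,)`) means that side is unbounded, which is why the open-ended range resolves to the newest version available.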
|
Now I know this topic is old, but reading the question and the OP supplied answer it seems the [Maven Versions Plugin](http://www.mojohaus.org/versions-maven-plugin/) might have actually been a better answer to his question:
In particular the following goals could be of use:
* **versions:use-latest-versions** searches the pom for all versions
which have been a newer version and
replaces them with the latest
version.
* **versions:use-latest-releases** searches the pom for all non-SNAPSHOT
versions which have been a newer
release and replaces them with the
latest release version.
* **versions:update-properties** updates properties defined in a
project so that they correspond to
the latest available version of
specific dependencies. This can be
useful if a suite of dependencies
must all be locked to one version.
The following other goals are also provided:
* **versions:display-dependency-updates** scans a project's dependencies and
produces a report of those
dependencies which have newer
versions available.
* **versions:display-plugin-updates** scans a project's plugins and
produces a report of those plugins
which have newer versions available.
* **versions:update-parent** updates the parent section of a project so
that it references the newest
available version. For example, if
you use a corporate root POM, this
goal can be helpful if you need to
ensure you are using the latest
version of the corporate root POM.
* **versions:update-child-modules** updates the parent section of the
child modules of a project so the
version matches the version of the
current project. For example, if you
have an aggregator pom that is also
the parent for the projects that it
aggregates and the children and
parent versions get out of sync, this
mojo can help fix the versions of the
child modules. (Note you may need to
invoke Maven with the -N option in
order to run this goal if your
project is broken so badly that it
cannot build because of the version
mis-match).
* **versions:lock-snapshots** searches the pom for all -SNAPSHOT
versions and replaces them with the
current timestamp version of that
-SNAPSHOT, e.g. -20090327.172306-4
* **versions:unlock-snapshots** searches the pom for all timestamp
locked snapshot versions and replaces
them with -SNAPSHOT.
* **versions:resolve-ranges** finds dependencies using version ranges and
resolves the range to the specific
version being used.
* **versions:use-releases** searches the pom for all -SNAPSHOT versions
which have been released and replaces
them with the corresponding release
version.
* **versions:use-next-releases** searches the pom for all non-SNAPSHOT
versions which have been a newer
release and replaces them with the
next release version.
* **versions:use-next-versions** searches the pom for all versions
which have been a newer version and
replaces them with the next version.
* **versions:commit** removes the pom.xml.versionsBackup files. Forms
one half of the built-in "Poor Man's
SCM".
* **versions:revert** restores the pom.xml files from the
pom.xml.versionsBackup files. Forms
one half of the built-in "Poor Man's
SCM".
Just thought I'd include it for any future reference.
|
How do I tell Maven to use the latest version of a dependency?
|
[
"",
"java",
"maven",
"dependencies",
"maven-2",
"maven-metadata",
""
] |
Is anyone else having trouble running Swing applications from IntelliJ IDEA 8 Milestone 1? Even the simplest application of showing an empty JFrame seems to crash the JVM. I don't get a stack trace or anything, it looks like the JVM itself crashes and Windows shows me a pop-up that says the usual "This process is no longer responding" message.
Console applications work fine, and my Swing code works fine when launching from Netbeans or from the command line. I'm running Windows Vista x64 with the JDK 1.6 Update 10 beta, which may be a configuration the Jetbrains guys haven't run into yet.
|
I have actually experienced problems using the JDK 6u10 beta myself and had to downgrade to JDK 6u7 for the time being. This solved some of my problems, among other things with Swing.
Also, I have been running IJ8M1 since the 'release' and I am very satisfied with it, especially considering the "beta" tag. It feels snappier and also supports multiple cores, which makes my development machine rejoice. ;p
Anyway, I use WinXP32 and IJ8M1 with JDK 6u7, and that is, afaik, very stable indeed.
|
Ask your question directly on the IDEA website. They always react fast and the problem you have is probably either fixed or documented.
|
Problems running Swing application with IDEA 8M1
|
[
"",
"java",
"swing",
"ide",
"jvm",
"intellij-idea",
""
] |
I'm using LINQ to SQL classes in a project where the database design is still in a bit of flux.
Is there an easy way of synchronising the classes with the schema, or do I need to manually update the classes if a table design changes?
|
You can use SQLMetal.exe to generate your dbml and or cs/vb file. Use a pre-build script to start it and target the directory where your datacontext project belongs.
```
C:\Program Files\Microsoft SDKs\Windows\v6.0A\Bin\x64\sqlmetal.exe
/server:<SERVER>
/database:<database>
/code:"path\Solution\DataContextProject\dbContext.cs"
/language:csharp
/namespace:<your namespace>
```
|
I haven't tried it myself, but [Huagati DBML/EDMX Tools](http://www.huagati.com/dbmltools/) is recommended by other people.
> Huagati DBML/EDMX Tools is an add-in
> for Visual Studio that adds
> functionality to the Linq2SQL/DBML
> diagram designer in Visual Studio
> 2008, and to the ADO.NET Entity
> Framework designer in Visual Studio
> 2008 SP1. The add-in adds new menu
> options for updating Linq2SQL designer
> diagrams with database changes, for
> renaming Linq-to-SQL (DBML) and EF
> (EDMX) classes and properties to use
> .net naming conventions, and for
> adding documentation/descriptions to
> Linq-to-SQL generated classes from the
> database properties.

|
Best way to update LINQ to SQL classes after database schema change
|
[
"",
"c#",
".net",
"linq-to-sql",
""
] |
I'm trying to create a webapplication where I want to be able to plug-in separate assemblies. I'm using MVC preview 4 combined with Unity for dependency injection, which I use to create the controllers from my plugin assemblies. I'm using WebForms (default aspx) as my view engine.
If I want to use a view, I'm stuck on the ones that are defined in the core project, because of the dynamic compiling of the ASPX part. I'm looking for a proper way to enclose ASPX files in a different assembly, without having to go through the whole deployment step. Am I missing something obvious? Or should I resort to creating my views programmatically?
---
Update: I changed the accepted answer. Even though Dale's answer is very thorough, I went for the solution with a different virtual path provider. It works like a charm, and takes only about 20 lines of code altogether, I think.
|
Essentially this is the same issue as people had with WebForms and trying to compile their UserControl ASCX files into a DLL. I found this <http://www.codeproject.com/KB/aspnet/ASP2UserControlLibrary.aspx> that might work for you too.
|
It took me way too long to get this working properly from the various partial samples, so here's the full code needed to get views from a Views folder in a shared library structured the same as a regular Views folder but with everything set to build as embedded resources. It will only use the embedded file if the usual file does not exist.
The first line of Application\_Start:
```
HostingEnvironment.RegisterVirtualPathProvider(new EmbeddedViewPathProvider());
```
The VirtualPathProvider
```
public class EmbeddedVirtualFile : VirtualFile
{
public EmbeddedVirtualFile(string virtualPath)
: base(virtualPath)
{
}
internal static string GetResourceName(string virtualPath)
{
if (!virtualPath.Contains("/Views/"))
{
return null;
}
var resourcename = virtualPath
.Substring(virtualPath.IndexOf("Views/"))
.Replace("Views/", "OrangeGuava.Common.Views.")
.Replace("/", ".");
return resourcename;
}
public override Stream Open()
{
Assembly assembly = Assembly.GetExecutingAssembly();
var resourcename = GetResourceName(this.VirtualPath);
return assembly.GetManifestResourceStream(resourcename);
}
}
public class EmbeddedViewPathProvider : VirtualPathProvider
{
private bool ResourceFileExists(string virtualPath)
{
Assembly assembly = Assembly.GetExecutingAssembly();
var resourcename = EmbeddedVirtualFile.GetResourceName(virtualPath);
var result = resourcename != null && assembly.GetManifestResourceNames().Contains(resourcename);
return result;
}
public override bool FileExists(string virtualPath)
{
return base.FileExists(virtualPath) || ResourceFileExists(virtualPath);
}
public override VirtualFile GetFile(string virtualPath)
{
if (!base.FileExists(virtualPath))
{
return new EmbeddedVirtualFile(virtualPath);
}
else
{
return base.GetFile(virtualPath);
}
}
}
```
The final step to get it working is that the root Web.Config must contain the right settings to parse strongly typed MVC views, as the one in the views folder won't be used:
```
<pages
validateRequest="false"
pageParserFilterType="System.Web.Mvc.ViewTypeParserFilter, System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"
pageBaseType="System.Web.Mvc.ViewPage, System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"
userControlBaseType="System.Web.Mvc.ViewUserControl, System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
<controls>
<add assembly="System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" namespace="System.Web.Mvc" tagPrefix="mvc" />
</controls>
</pages>
```
A couple of additional steps are required to get it working with Mono. First, you need to implement GetDirectory, since all files in the views folder get loaded when the app starts rather than as needed:
```
public override VirtualDirectory GetDirectory(string virtualDir)
{
Log.LogInfo("GetDirectory - " + virtualDir);
var b = base.GetDirectory(virtualDir);
return new EmbeddedVirtualDirectory(virtualDir, b);
}
public class EmbeddedVirtualDirectory : VirtualDirectory
{
private VirtualDirectory FileDir { get; set; }
public EmbeddedVirtualDirectory(string virtualPath, VirtualDirectory filedir)
: base(virtualPath)
{
FileDir = filedir;
}
public override System.Collections.IEnumerable Children
{
get { return FileDir.Children; }
}
public override System.Collections.IEnumerable Directories
{
get { return FileDir.Directories; }
}
public override System.Collections.IEnumerable Files
{
get {
if (!VirtualPath.Contains("/Views/") || VirtualPath.EndsWith("/Views/"))
{
return FileDir.Files;
}
var fl = new List<VirtualFile>();
foreach (VirtualFile f in FileDir.Files)
{
fl.Add(f);
}
var resourcename = VirtualPath.Substring(VirtualPath.IndexOf("Views/"))
.Replace("Views/", "OrangeGuava.Common.Views.")
.Replace("/", ".");
Assembly assembly = Assembly.GetExecutingAssembly();
var rfl = assembly.GetManifestResourceNames()
.Where(s => s.StartsWith(resourcename))
.Select(s => VirtualPath + s.Replace(resourcename, ""))
.Select(s => new EmbeddedVirtualFile(s));
fl.AddRange(rfl);
return fl;
}
}
}
```
Finally, strongly typed views will almost but not quite work perfectly. Model will be treated as an untyped object, so to get strong typing back you need to start your shared views with something like
```
<% var Model2 = Model as IEnumerable<AppModel>; %>
```
|
Views in separate assemblies in ASP.NET MVC
|
[
"",
"c#",
"asp.net-mvc",
"plugins",
""
] |
I'm going to start a new project - rewriting an existing system (PHP + SQL Server) from scratch because of some very serious limitations by design.
We have some quite good knowledge of SQL Server (currently we're using SQL Server 2000 in existing system) and we would like to employ its newer version (2008 I guess) in our new project.
I am really fond of technologies that Java offers - particularly Spring Framework and Wicket and I am quite familiar with Java from others projects and assignments before.
Therefore, we consider using Java and Microsoft SQL Server.
There are two JDBC drivers for SQL Server - jTDS and Microsoft's one - <http://msdn.microsoft.com/en-us/data/aa937724.aspx>. I think we should test both of them.
Are there any limitations in such solution I should know of? Has someone experience with such a technology combination?
|
I've worked on a project using MS SQL Server in conjunction with a Java stack. It works very well, since JDBC shouldn't really care about your database. We used ehcache together with Hibernate and had problems with the MS JDBC driver, so we switched to jTDS and it works really well.
That was quite a while ago, though, so you still might want to give the MS driver a chance...
|
I don't know about Java and 2008... but you shouldn't have too much trouble with Java and SQL2000. As lubos suggested, you'd be doing yourself a favour to look at c# but if you're much more comfortable with Java then there shouldn't be any real limitations as the JDBC connector [is supported by Microsoft](http://msdn.microsoft.com/en-us/data/aa937724.aspx)
|
Java + SQL Server - a viable solution?
|
[
"",
"java",
"sql-server",
""
] |
What would be the best way to have a list of items with a checkbox each in Java Swing?
I.e. a JList with items that have some text and a checkbox each?
|
Create a custom `ListCellRenderer` and assign it to the `JList`.
This custom `ListCellRenderer` must return a `JCheckBox` in the implementation of the `getListCellRendererComponent(...)` method.
But this `JCheckBox` will not be editable; it is simply painted on the screen, so it is up to you to choose when the `JCheckBox` must be 'ticked' or not.
For example, show it ticked when the row is selected (parameter `isSelected`), but this way the check status will not be maintained if the selection changes. It's better to show it checked by consulting the data behind the `ListModel`, but then it is up to you to implement the method that changes the check status of the data and to notify the change to the `JList` so it is repainted.
I will post sample code later if you need it.
[ListCellRenderer](http://java.sun.com/javase/6/docs/api/javax/swing/ListCellRenderer.html)
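A minimal sketch of such a renderer (using the generified `ListCellRenderer` available since Java 7; it assumes the model holds the `JCheckBox` items themselves, so the checked state lives in the model):

```java
import java.awt.Component;
import javax.swing.JCheckBox;
import javax.swing.JList;
import javax.swing.ListCellRenderer;

// Renderer sketch: paints each model item (a JCheckBox) with its own
// checked state, and uses the list's selection colors for the row.
class CheckBoxCellRenderer extends JCheckBox implements ListCellRenderer<JCheckBox> {
    public Component getListCellRendererComponent(JList<? extends JCheckBox> list,
            JCheckBox value, int index, boolean isSelected, boolean cellHasFocus) {
        setText(value.getText());
        setSelected(value.isSelected()); // checked state comes from the model item
        setBackground(isSelected ? list.getSelectionBackground() : list.getBackground());
        setForeground(isSelected ? list.getSelectionForeground() : list.getForeground());
        return this;
    }
}
```

Install it with `list.setCellRenderer(new CheckBoxCellRenderer())`; toggling the box on click still has to be wired up yourself (e.g. in a `MouseListener`), as the answer notes.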
|
A wonderful answer is this [`CheckBoxList`](http://www.devx.com/tips/Tip/5342). It implements Telcontar's answer (though 3 years before :)... I'm using it in Java 1.6 with no problems. I've also added an `addCheckbox` method like this (surely could be shorter, haven't used Java in a while):
```
public void addCheckbox(JCheckBox checkBox) {
ListModel currentList = this.getModel();
JCheckBox[] newList = new JCheckBox[currentList.getSize() + 1];
for (int i = 0; i < currentList.getSize(); i++) {
newList[i] = (JCheckBox) currentList.getElementAt(i);
}
newList[newList.length - 1] = checkBox;
setListData(newList);
}
```
I tried out the demo for the Jidesoft stuff, playing with the `CheckBoxList` I encountered some problems (behaviors that didn't work). I'll modify this answer if I find problems with the `CheckBoxList` I linked to.
|
How do I make a list with checkboxes in Java Swing?
|
[
"",
"java",
"swing",
"jcheckbox",
""
] |
I'm able to connect to and read an excel file no problem. But when importing data such as zipcodes that have leading zeros, how do you prevent excel from guessing the datatype and in the process stripping out leading zeros?
|
I believe you have to set the option in your connect string to force textual import rather than auto-detecting it.
```
Provider=Microsoft.ACE.OLEDB.12.0;
Data Source=c:\path\to\myfile.xlsx;
Extended Properties="Excel 12.0 Xml;IMEX=1";
```
Your mileage may vary depending on the version you have installed. The IMEX=1 extended property tells Excel to treat intermixed data as text.
|
Prefix with '
|
How do you prevent leading zeros from being stripped when importing an excel doc using c#
|
[
"",
"c#",
"asp.net",
"excel",
"ado.net",
""
] |
I am building a public website using ASP.NET, as part of the deliverable I need to do an Admin Site for data entry of the stuff shown in the public site, I was wondering what techniques or procedures are people using to validate entries using ASP.NET MVC.
|
Take a look at the [JQuery Validation plugin](http://bassistance.de/jquery-plugins/jquery-plugin-validation/) this plugin is amazing,it's clean to implement and has all the features you could ever need, including remote validation via AJAX.
Also a sample MVC controller method can be found [here](http://httpcode.com/blogs/PermaLink,guid,f8970f20-90d1-4af6-b2da-0285f1963b14.aspx) which basically uses the JsonResult action type like:
```
public JsonResult CheckUserName(string username)
{
return Json(CheckValidUsername(username));
}
```
|
IMO using xVal with jQuery and DataAnnotationsModelBinder is the best combination.
Sometimes however, there are validation rules which cannot be checked entirely on the client side, so you need to use remote client-side validation.
I figured out way to generically implement remote client-side validation with xVal / jQuery.validate so that
* Validation rules remain solely in your ASP.NET MVC model
* You write each validation rule just once, and only in easily testable C# code. There is no JavaScript or other client-side counterpart.
* There is no need to branch or otherwise modify xVal or jquery.validate
* All you have to do for each new remote form validation rule is to derive from the base class shown in this article.
I wrote a [blog article](https://devermind.wordpress.com/2009/06/21/asp-net-mvc-tip-3-how-to-cover-all-your-client-side-form-validation-needs-without-writing-any-javascript-part1/) on this describing all the details.
|
What's the best way to implement field validation using ASP.NET MVC?
|
[
"",
"c#",
"asp.net-mvc",
"validation",
""
] |
The following PHP code snippet uses GD to resize a browser-uploaded PNG to 128x128. It works great, except that the transparent areas in the original image are being replaced with a solid color (black in my case).
Even though `imagesavealpha` is set, something isn't quite right.
What's the best way to preserve the transparency in the resampled image?
```
$uploadTempFile = $myField[ 'tmp_name' ];
list( $uploadWidth, $uploadHeight, $uploadType )
= getimagesize( $uploadTempFile );
$srcImage = imagecreatefrompng( $uploadTempFile );
imagesavealpha( $targetImage, true );
$targetImage = imagecreatetruecolor( 128, 128 );
imagecopyresampled( $targetImage, $srcImage,
0, 0,
0, 0,
128, 128,
$uploadWidth, $uploadHeight );
imagepng( $targetImage, 'out.png', 9 );
```
|
```
imagealphablending( $targetImage, false );
imagesavealpha( $targetImage, true );
```
did it for me. Thanks ceejayoz.
note, the target image needs the alpha settings, not the source image.
Edit:
full replacement code. See also answers below and their comments. This is not guaranteed to be perfect in any way, but did achieve my needs at the time.
```
$uploadTempFile = $myField[ 'tmp_name' ];
list( $uploadWidth, $uploadHeight, $uploadType )
= getimagesize( $uploadTempFile );
$srcImage = imagecreatefrompng( $uploadTempFile );
$targetImage = imagecreatetruecolor( 128, 128 );
imagealphablending( $targetImage, false );
imagesavealpha( $targetImage, true );
imagecopyresampled( $targetImage, $srcImage,
0, 0,
0, 0,
128, 128,
$uploadWidth, $uploadHeight );
imagepng( $targetImage, 'out.png', 9 );
```
|
Why do you make things so complicated? The following is what I use and so far it has done the job for me.
```
$im = ImageCreateFromPNG($source);
$new_im = imagecreatetruecolor($new_size[0],$new_size[1]);
imagecolortransparent($new_im, imagecolorallocate($new_im, 0, 0, 0));
imagecopyresampled($new_im,$im,0,0,0,0,$new_size[0],$new_size[1],$size[0],$size[1]);
```
|
Can PNG image transparency be preserved when using PHP's GDlib imagecopyresampled?
|
[
"",
"php",
"png",
"transparency",
"gd",
"alpha",
""
] |
Instead of relying on my host to send an email, I was thinking of sending the email messages using my **Gmail** account. The emails are personalized emails to the bands I play on my show.
Is it possible to do it?
|
Be sure to use `System.Net.Mail`, not the deprecated `System.Web.Mail`. Doing SSL with `System.Web.Mail` is a gross mess of hacky extensions.
```
using System.Net;
using System.Net.Mail;
var fromAddress = new MailAddress("from@gmail.com", "From Name");
var toAddress = new MailAddress("to@example.com", "To Name");
const string fromPassword = "fromPassword";
const string subject = "Subject";
const string body = "Body";
var smtp = new SmtpClient
{
Host = "smtp.gmail.com",
Port = 587,
EnableSsl = true,
DeliveryMethod = SmtpDeliveryMethod.Network,
UseDefaultCredentials = false,
Credentials = new NetworkCredential(fromAddress.Address, fromPassword)
};
using (var message = new MailMessage(fromAddress, toAddress)
{
Subject = subject,
Body = body
})
{
smtp.Send(message);
}
```
Additionally go to the [*Google Account > Security*](https://myaccount.google.com/security) page and look at the *Signing in to Google > 2-Step Verification* setting.
* If it is enabled, then you have to generate a password allowing .NET to bypass the 2-Step Verification. To do this, click on [*Signing in to Google > App passwords*](https://myaccount.google.com/apppasswords), select app = Mail, and device = Windows Computer, and finally generate the password. Use the generated password in the `fromPassword` constant instead of your standard Gmail password.
* If it is disabled, then you have to turn on [*Less secure app access*](https://myaccount.google.com/lesssecureapps), which is not recommended! So better enable the 2-Step verification.
|
The above answer doesn't work. You have to set `DeliveryMethod = SmtpDeliveryMethod.Network` or it will come back with a "**client was not authenticated**" error. Also it's always a good idea to put a timeout.
Revised code:
```
using System.Net.Mail;
using System.Net;
var fromAddress = new MailAddress("from@gmail.com", "From Name");
var toAddress = new MailAddress("to@yahoo.com", "To Name");
const string fromPassword = "password";
const string subject = "test";
const string body = "Hey now!!";
var smtp = new SmtpClient
{
Host = "smtp.gmail.com",
Port = 587,
EnableSsl = true,
DeliveryMethod = SmtpDeliveryMethod.Network,
Credentials = new NetworkCredential(fromAddress.Address, fromPassword),
Timeout = 20000
};
using (var message = new MailMessage(fromAddress, toAddress)
{
Subject = subject,
Body = body
})
{
smtp.Send(message);
}
```
|
Sending email in .NET through Gmail
|
[
"",
"c#",
".net",
"email",
"smtp",
"gmail",
""
] |
When developing, whether for Web or Desktop, at what point should a developer switch between SQLite, MySQL, MS SQL, etc.?
|
It depends on what you are doing. You might switch if:
* You need more scalability or better performance - say from SQLite to SQL Server or Oracle.
* You need access to more specific datatypes.
* You need to support a customer that only runs a particular database.
* You need better DBA tools.
* Your application is using a different platform where your database no longer runs, or its libraries do not run.
* You have the ability/time/budget to actually make the change. Depending on the situation, the migration could be a bigger project than everything in the project up to that point. Migrations like these are great places to introduce inconsistencies, or to lose data, so a lot of care is required.
There are many more reasons for switching and it all depends on your requirements and the attributes of the databases.
|
You should switch databases at milestone 2.3433, 3ps prior to the left branch of dendrite 8,151,215.
You should switch databases when you have a reason to do so, would be my advice. If your existing database is performing to your expectations, supports the load that is being placed on it by your production systems, has the features you require in your applications and you aren't bored with it, why change? However, if you find your application isn't scaling, or you are designing an application that has high load or scalability requirements and your research tells you your current database platform is weak in that area, or, as was already mentioned, you need some spatial analysis or feature that a particular database has, well there you go.
Another consideration might be taking up the use of a database agnostic ORM tool that can allow you to experiment freely with different database platforms with a simple configuration setting. That was the trigger for us to consider trying out something new in the DB department. If our application can handle any DB the ORM can handle, why pay licensing fees on a commercial database when an open source DB works just as well for the levels of performance we require?
The bottom line, though, is that with databases or any other technology, I think there are no "business rules" that will tell you when it is time to switch - your scenario will tell you it is time to switch because something in your solution won't be quite right, and if you aren't at that point, no need to change.
|
What point should someone decide to switch Database Systems
|
[
"",
"sql",
"database",
""
] |
I have a table of tags and want to get the highest count tags from the list.
Sample data looks like this
```
id (1) tag ('night')
id (2) tag ('awesome')
id (3) tag ('night')
```
using
```
SELECT COUNT(*), `Tag` from `images-tags`
GROUP BY `Tag`
```
gets me back the data I'm looking for perfectly. However, I would like to organize it, so that the highest tag counts are first, and limit it to only send me the first 20 or so.
I tried this...
```
SELECT COUNT(id), `Tag` from `images-tags`
GROUP BY `Tag`
ORDER BY COUNT(id) DESC
LIMIT 20
```
and I keep getting an "Invalid use of group function - ErrNr 1111"
What am I doing wrong?
I'm using MySQL 4.1.25-Debian
|
In all versions of MySQL, simply alias the aggregate in the SELECT list, and order by the alias:
```
SELECT COUNT(id) AS theCount, `Tag` from `images-tags`
GROUP BY `Tag`
ORDER BY theCount DESC
LIMIT 20
```
|
MySQL prior to version 5 did not allow aggregate functions in ORDER BY clauses.
You can get around this limit with the deprecated syntax:
```
SELECT COUNT(id), `Tag` from `images-tags`
GROUP BY `Tag`
ORDER BY 1 DESC
LIMIT 20
```
1, since it's the first column you want to group on.
|
SQL Group By with an Order By
|
[
"",
"mysql",
"sql",
"mysql-error-1111",
""
] |
I am in the middle of reading the excellent [Clean Code](https://rads.stackoverflow.com/amzn/click/com/0132350882)
One discussion is regarding passing nulls into a method.
```
public class MetricsCalculator {
public double xProjection(Point p1, Point p2) {
return (p2.x - p1.x) * 1.5;
}
}
...
calculator.xProjection(null, new Point(12,13));
```
It represents different ways of handling this:
```
public double xProjection(Point p1, Point p2) {
if (p1 == null || p2 == null) {
throw new IllegalArgumentException("Invalid argument for xProjection");
}
return (p2.x - p1.x) * 1.5;
}
public double xProjection(Point p1, Point p2) {
assert p1 != null : "p1 should not be null";
assert p2 != null : "p2 should not be null";
return (p2.x - p1.x) * 1.5;
}
```
I prefer the [assertions](http://java.sun.com/j2se/1.4.2/docs/guide/lang/assert.html) approach, but I don't like the fact that assertions are turned off by default.
The book finally states:
> In most programming languages there is no good way to deal with a null that is passed by a caller accidentally. Because this is the case, the rational approach is to forbid passing null by default.
It doesn't really go into how you would enforce this restriction.
Do any of you have strong opinions either way.
|
Both the use of assertions and the throwing of exceptions are valid approaches here. Either mechanism can be used to indicate a programming error, not a runtime error, as is the case here.
* Assertions have the advantage of performance as they are typically disabled on production systems.
* Exceptions have the advantage of safety, as the check is always performed.
The choice really depends on the development practices of the project. The project as a whole needs to decide on an assertion policy: if the choice is to enable assertions during all development, then I'd say to use assertions to check this kind of invalid parameter - in a production system, a NullPointerException thrown due to a programming error is unlikely to be able to be caught and handled in a meaningful way anyway and so will act just like an assertion.
Practically though, I know a lot of developers that don't trust that assertions will be enabled when appropriate and so opt for the safety of throwing a NullPointerException.
Of course if you can't enforce a policy for your code (if you're creating a library, for example, and so are dependent on how other developers run your code), you should opt for the safe approach of throwing NullPointerException for those methods that are part of the library's API.
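On Java 7 and later (so, after the book's examples were written), `java.util.Objects.requireNonNull` packages the throwing check into one line. A sketch, where the `Point` class is a minimal stand-in for the one in the excerpt:

```java
import java.util.Objects;

// Minimal stand-in for the Point type used in the excerpt above.
class Point {
    final double x;
    Point(double x) { this.x = x; }
}

class MetricsCalculator {
    public double xProjection(Point p1, Point p2) {
        // Throws NullPointerException with the given message if the argument is null.
        Objects.requireNonNull(p1, "p1 must not be null");
        Objects.requireNonNull(p2, "p2 must not be null");
        return (p2.x - p1.x) * 1.5;
    }
}
```

This gives you the always-on safety of the exception approach without the boilerplate of hand-written `if` checks.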
|
The general rule is that if your method doesn't expect `null` arguments then you should throw [System.ArgumentNullException](http://msdn.microsoft.com/en-us/library/system.argumentnullexception.aspx). Throwing the proper `Exception` not only protects you from resource corruption and other bad things but also serves as a guide for users of your code, saving time spent debugging it.
Also read an article on [Defensive programming](http://en.wikipedia.org/wiki/Defensive_programming)
|
Passing null to a method
|
[
"",
"java",
"null",
"assert",
""
] |
Most of my C/C++ development involves monolithic module files and absolutely no classes whatsoever, so usually when I need to make a **DLL** with accessible functions I just export them using the standard `__declspec(dllexport)` directive. Then access them either dynamically via `LoadLibrary()` or at compile time with a header and lib file.
How do you do this when you want to export an entire class (and all its public methods and properties)?
Is it possible to dynamically load that class at runtime and if so, how?
How would you do it with a header and lib for compile time linking?
|
> What about late-binding? As in loading
> it with LoadLibrary() and
> GetProcAddress() ? I'm used being able
> to load the library at run time and it
> would be great if you could do that
> here.
So there are two ways to load the DLL. The first is to reference one or more symbols from the DLL (your classname, for example), supply an appropriate import .LIB and let the linker figure everything out.
The second is to explicitly load the DLL via LoadLibrary.
Either approach works fine for C-level function exports. You can either let the linker handle it or call GetProcAddress as you noted.
But when it comes to exported **classes**, typically only the first approach is used, i.e., implicitly link to the DLL. In this case the DLL is loaded at application start time, and the application fails to load if the DLL can't be found.
If you want to link to a class defined in a DLL, and you want that DLL to be loaded dynamically, sometime after program initiation, you have two options:
1. Create objects of the class using a special factory function, which internally will have to use (a tiny bit of) assembler to "hook up" newly created objects to their appropriate offsets. This has to be done at run-time AFTER the DLL has been loaded, obviously. A good explanation of this approach can be found [here](http://www.codeproject.com/KB/DLL/classesexportedusingLL.aspx).
2. Use a [delay-load DLL](http://msdn.microsoft.com/en-us/library/151kt790.aspx).
All things considered... probably better to just go with implicit linking, in which case you definitely want to use the preprocessor technique shown above. In fact, if you create a new DLL in Visual Studio and choose the "export symbols" option these macros will be created for you.
Good luck...
|
When you build the DLL and the module that will use the DLL, have some kind of #define that you can use to distinguish between one and the other, then you can do something like this in your class header file:
```
#if defined( BUILD_DLL )
#define IMPORT_EXPORT __declspec(dllexport)
#else
#define IMPORT_EXPORT __declspec(dllimport)
#endif
class IMPORT_EXPORT MyClass {
...
};
```
*Edit: crashmstr beat me to it!*
|
Exporting a C++ class from a DLL
|
[
"",
"c++",
"windows",
"dll",
""
] |
I've been writing C and C++ code for almost twenty years, but there's one aspect of these languages that I've never really understood. I've obviously used regular casts i.e.
```
MyClass *m = (MyClass *)ptr;
```
all over the place, but there seem to be two other types of casts, and I don't know the difference. What's the difference between the following lines of code?
```
MyClass *m = (MyClass *)ptr;
MyClass *m = static_cast<MyClass *>(ptr);
MyClass *m = dynamic_cast<MyClass *>(ptr);
```
|
## static\_cast
`static_cast` is used for cases where you basically want to reverse an implicit conversion, with a few restrictions and additions. `static_cast` performs no runtime checks. This should be used if you know that you refer to an object of a specific type, and thus a check would be unnecessary. Example:
```
void func(void *data) {
// Conversion from MyClass* -> void* is implicit
MyClass *c = static_cast<MyClass*>(data);
...
}
int main() {
MyClass c;
start_thread(&func, &c) // func(&c) will be called
.join();
}
```
In this example, you know that you passed a `MyClass` object, and thus there isn't any need for a runtime check to ensure this.
## dynamic\_cast
`dynamic_cast` is useful when you don't know what the dynamic type of the object is. It returns a null pointer if the object referred to doesn't contain the type casted to as a base class (when you cast to a reference, a `bad_cast` exception is thrown in that case).
```
if (JumpStm *j = dynamic_cast<JumpStm*>(&stm)) {
...
} else if (ExprStm *e = dynamic_cast<ExprStm*>(&stm)) {
...
}
```
You can **not** use `dynamic_cast` for downcast (casting to a derived class) **if** the argument type is not polymorphic. For example, the following code is not valid, because `Base` doesn't contain any virtual function:
```
struct Base { };
struct Derived : Base { };
int main() {
Derived d; Base *b = &d;
dynamic_cast<Derived*>(b); // Invalid
}
```
An "up-cast" (cast to the base class) is always valid with both `static_cast` and `dynamic_cast`, and also without any cast, as an "up-cast" is an implicit conversion (assuming the base class is accessible, i.e. it's a `public` inheritance).
## Regular Cast
These casts are also called C-style cast. A C-style cast is basically identical to trying out a range of sequences of C++ casts, and taking the first C++ cast that works, without ever considering `dynamic_cast`. Needless to say, this is much more powerful as it combines all of `const_cast`, `static_cast` and `reinterpret_cast`, but it's also unsafe, because it does not use `dynamic_cast`.
In addition, C-style casts not only allow you to do this, but they also allow you to safely cast to a private base-class, while the "equivalent" `static_cast` sequence would give you a compile-time error for that.
Some people prefer C-style casts because of their brevity. I use them for numeric casts only, and use the appropriate C++ casts when user defined types are involved, as they provide stricter checking.
|
## Static cast
The static cast performs conversions between compatible types. It is similar to the C-style cast, but is more restrictive. For example, the C-style cast would allow an integer pointer to point to a char.
```
char c = 10; // 1 byte
int *p = (int*)&c; // 4 bytes
```
Since this results in a 4-byte pointer pointing to 1 byte of allocated memory, writing to this pointer will either cause a run-time error or will overwrite some adjacent memory.
```
*p = 5; // run-time error: stack corruption
```
In contrast to the C-style cast, the static cast will allow the compiler to check that the pointer and pointee data types are compatible, which allows the programmer to catch this incorrect pointer assignment during compilation.
```
int *q = static_cast<int*>(&c); // compile-time error
```
## Reinterpret cast
To force the pointer conversion, in the same way as the C-style cast does in the background, the reinterpret cast would be used instead.
```
int *r = reinterpret_cast<int*>(&c); // forced conversion
```
This cast handles conversions between certain unrelated types, such as from one pointer type to another incompatible pointer type. It will simply perform a binary copy of the data without altering the underlying bit pattern. Note that the result of such a low-level operation is system-specific and therefore not portable. It should be used with caution if it cannot be avoided altogether.
## Dynamic cast
This one is only used to convert object pointers and object references into other pointer or reference types in the inheritance hierarchy. It is the only cast that makes sure that the object pointed to can be converted, by performing a run-time check that the pointer refers to a complete object of the destination type. For this run-time check to be possible the object must be polymorphic. That is, the class must define or inherit at least one virtual function. This is because the compiler will only generate the needed run-time type information for such objects.
**Dynamic cast examples**
In the example below, a `MyChild` pointer is converted into a `MyBase` pointer using a dynamic cast. This derived-to-base conversion succeeds, because the Child object includes a complete Base object.
```
class MyBase
{
public:
virtual void test() {}
};
class MyChild : public MyBase {};
int main()
{
MyChild *child = new MyChild();
MyBase *base = dynamic_cast<MyBase*>(child); // ok
}
```
The next example attempts to convert a `MyBase` pointer to a `MyChild` pointer. Since the Base object does not contain a complete Child object this pointer conversion will fail. To indicate this, the dynamic cast returns a null pointer. This gives a convenient way to check whether or not a conversion has succeeded during run-time.
```
MyBase *base = new MyBase();
MyChild *child = dynamic_cast<MyChild*>(base);
if (child == 0)
std::cout << "Null pointer returned";
```
If a reference is converted instead of a pointer, the dynamic cast will then fail by throwing a `bad_cast` exception. This needs to be handled using a `try-catch` statement.
```
#include <typeinfo>
// …
try
{
MyChild &child = dynamic_cast<MyChild&>(*base);
}
catch(std::bad_cast &e)
{
std::cout << e.what(); // bad dynamic_cast
}
```
## Dynamic or static cast
The advantage of using a dynamic cast is that it allows the programmer to check whether or not a conversion has succeeded during run-time. The disadvantage is that there is a performance overhead associated with doing this check. For this reason using a static cast would have been preferable in the first example, because a derived-to-base conversion will never fail.
```
MyBase *base = static_cast<MyBase*>(child); // ok
```
However, in the second example the conversion may lead to run-time error. The error will happen if the `MyBase` object contains a `MyBase` instance but not if it contains a `MyChild` instance. In some situations this may not be known until run-time. When this is the case dynamic cast is a better choice than static cast.
```
// Succeeds for a MyChild object
MyChild *child = dynamic_cast<MyChild*>(base);
```
If the base-to-derived conversion had been performed using a static cast instead of a dynamic cast the conversion would not have failed. It would have returned a pointer that referred to an incomplete object. Dereferencing such a pointer can lead to run-time errors.
```
// Allowed, but invalid
MyChild *child = static_cast<MyChild*>(base);
// Incomplete MyChild object dereferenced
(*child);
```
## Const cast
This one is primarily used to add or remove the `const` modifier of a variable.
```
const int myConst = 5;
int *nonConst = const_cast<int*>(&myConst); // removes const
```
Although `const` cast allows the value of a constant to be changed, doing so is still invalid code that may cause a run-time error. This could occur for example if the constant was located in a section of read-only memory.
```
*nonConst = 10; // potential run-time error
```
`const` cast is instead used mainly when there is a function that takes a non-constant pointer argument, even though it does not modify the pointee.
```
void print(int *p)
{
std::cout << *p;
}
```
The function can then be passed a constant variable by using a `const` cast.
```
print(&myConst); // error: cannot convert
// const int* to int*
print(nonConst); // allowed
```
[Source and More Explanations](https://web.archive.org/web/20140104001826/http://www.pvtuts.com/cpp/cpp-type-conversion-ii)
|
Regular cast vs. static_cast vs. dynamic_cast
|
[
"",
"c++",
"pointers",
"casting",
""
] |
I'm looking for the best method to parse various XML documents using a Java application. I'm currently doing this with SAX and a custom content handler and it works great - zippy and stable.
I've decided to explore the option of having the same program, which currently receives a single XML document format, receive two additional XML document formats with various XML element changes. I was hoping to just swap out the ContentHandler with an appropriate one based on the first "startElement" in the document... but, uh-duh, the ContentHandler is set and **then** the document is parsed!
```
... constructor ...
{
SAXParserFactory spf = SAXParserFactory.newInstance();
try {
SAXParser sp = spf.newSAXParser();
parser = sp.getXMLReader();
parser.setErrorHandler(new MyErrorHandler());
} catch (Exception e) {}
... parse StringBuffer ...
try {
parser.setContentHandler(pP);
parser.parse(new InputSource(new StringReader(xml.toString())));
return true;
} catch (IOException e) {
e.printStackTrace();
} catch (SAXException e) {
e.printStackTrace();
}
...
```
So, it doesn't appear that I can do this in the way I initially thought I could.
That being said, am I looking at this entirely wrong? What is the best method to parse multiple, discrete XML documents with the same XML handling code? [I tried to ask in a more general post earlier... but, I think I was being too vague](https://stackoverflow.com/questions/23106/best-method-to-parse-various-custom-xml-documents-in-java). For speed and efficiency purposes I never really looked at DOM because these XML documents are fairly large and the system receives about 1200 every few minutes. It's just a one-way send of information.
To make this question too long and add to my confusion; following is a mockup of some various XML documents that I would like to have a single SAX, StAX, or ?? parser cleanly deal with.
products.xml:
```
<products>
<product>
<id>1</id>
<name>Foo</name>
</product>
<product>
<id>2</id>
<name>bar</name>
</product>
</products>
```
stores.xml:
```
<stores>
<store>
<id>1</id>
<name>S1A</name>
<location>CA</location>
</store>
<store>
<id>2</id>
<name>A1S</name>
<location>NY</location>
</store>
</stores>
```
managers.xml:
```
<managers>
<manager>
<id>1</id>
<name>Fen</name>
<store>1</store>
</manager>
<manager>
<id>2</id>
<name>Diz</name>
<store>2</store>
</manager>
</managers>
```
|
As I understand it, the problem is that you don't know what format the document is prior to parsing. You could use a delegate pattern. I'm assuming you're not validating against a DTD/XSD/etcetera and that it is OK for the DefaultHandler to have state.
```
public class DelegatingHandler extends DefaultHandler {
private Map<String, DefaultHandler> saxHandlers;
private DefaultHandler delegate = null;
public DelegatingHandler(Map<String, DefaultHandler> delegates) {
saxHandlers = delegates;
}
@Override
public void startElement(String uri, String localName, String name,
Attributes attributes) throws SAXException {
if(delegate == null) {
delegate = saxHandlers.get(name);
}
delegate.startElement(uri, localName, name, attributes);
}
@Override
public void endElement(String uri, String localName, String name)
throws SAXException {
delegate.endElement(uri, localName, name);
}
//etcetera...
```
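A self-contained usage sketch of that delegate idea (the counting handler here is a trivial stand-in, not part of the answer, and the dispatcher is a compact inline variant of the class above):

```java
import java.io.StringReader;
import java.util.HashMap;
import java.util.Map;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.InputSource;
import org.xml.sax.SAXException;
import org.xml.sax.helpers.DefaultHandler;

public class DelegateDemo {
    // Trivial stand-in delegate: counts occurrences of one element name.
    static class CountingHandler extends DefaultHandler {
        final String element;
        int count;
        CountingHandler(String element) { this.element = element; }
        @Override public void startElement(String uri, String local, String name, Attributes atts) {
            if (name.equals(element)) count++;
        }
    }

    // Parses xml, picking the delegate keyed by the document's root element.
    static DefaultHandler parseWithDelegates(String xml, Map<String, DefaultHandler> delegates)
            throws Exception {
        final DefaultHandler[] chosen = new DefaultHandler[1];
        DefaultHandler dispatcher = new DefaultHandler() {
            @Override public void startElement(String uri, String local, String name, Attributes atts)
                    throws SAXException {
                if (chosen[0] == null) chosen[0] = delegates.get(name); // root element decides
                chosen[0].startElement(uri, local, name, atts);
            }
        };
        SAXParserFactory.newInstance().newSAXParser()
                .parse(new InputSource(new StringReader(xml)), dispatcher);
        return chosen[0];
    }

    public static void main(String[] args) throws Exception {
        String products = "<products><product><id>1</id><name>Foo</name></product>"
                        + "<product><id>2</id><name>bar</name></product></products>";
        Map<String, DefaultHandler> delegates = new HashMap<>();
        delegates.put("products", new CountingHandler("product"));
        delegates.put("stores", new CountingHandler("store"));
        CountingHandler h = (CountingHandler) parseWithDelegates(products, delegates);
        System.out.println(h.count); // 2 <product> elements seen
    }
}
```

The same parser and dispatcher then handle products.xml, stores.xml, and managers.xml; only the per-root delegates differ.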
|
You've done a good job of explaining what you want to do but not why. There are several XML frameworks that simplify marshalling and unmarshalling Java objects to/from XML.
The simplest is [Commons Digester](http://commons.apache.org/digester/), which I typically use to parse configuration files. But if you want to deal with Java objects then you should look at [Castor](http://castor.org/), [JiBX](http://jibx.sourceforge.net/), [JAXB](http://jaxb.java.net/), [XMLBeans](http://xmlbeans.apache.org/), [XStream](http://xstream.codehaus.org/), or something similar. Castor and JiBX are my two favourites.
|
How would you use Java to handle various XML documents?
|
[
"",
"java",
"xml",
"sax",
"stax",
""
] |
I noticed that many people here use [TextMate](http://macromates.com/) for coding on OS X. I've recently started using it, and although I like its minimalistic interface, it makes it harder to stumble upon cool features if you don't know what you're looking for.
So, what feature have you found most helpful for coding (mainly in Python)? Are there any third-party bundles I should know about, besides what's included?
|
Don't neglect the 'mate' command line tool. You can use it to pipe output into TextMate, so if you do the following...
```
diff file1.py file2.py | mate
```
...it will not only open in TextMate, but it is smart enough to know that you're looking at a diff and highlight lines on screen.
TextMate's SVN integration is great; it also seems to have bundles for some other version control systems as well.
Add GetBundle to browse the bundle repository. I found the jQuery bundle through it and it's very handy.
As others have mentioned, rolling your own bundle for frequently used snippets is very helpful. If you have some snippets that are specific to a project or framework, you might want to prefix all of them with a common letter to keep the namespace tidy.
|
Holding down option while dragging allows you to highlight a block of text. If you type while the highlight is active, your keystrokes appear on multiple lines.
|
What are some useful TextMate features?
|
[
"",
"python",
"macos",
"text-editor",
"textmate",
""
] |
I'm writing a PHP script that involves scraping web pages. Currently, the script analyzes the page line by line, but it breaks if there is a tag that spans multiple lines, like
```
<img src="example.jpg"
alt="example">
```
If worse comes to worst, I could possibly preprocess the page by removing all line breaks, then re-inserting them at the closest `>`, but this seems like a kludge.
Ideally, I'd be able to detect a tag that spans lines, join only those two lines, and continue processing.
So what's the best method to detect this?
|
Perhaps for future projects I'll use a parsing library, but that's kind of aside from the question at hand. This is my current solution. `rstrpos` is strpos, but from the reverse direction. Example use:
```
for($i=0; $i<count($lines); $i++)
{
    $line = handle_multiline_tags(&$i, $line, $lines);
}
```
And here's that implementation:
```
function rstrpos($string, $charToFind, $relativePos)
{
$searchPos = $relativePos;
$searchChar = '';
while (($searchChar != $charToFind)&&($searchPos>-1))
{
$newPos = $searchPos-1;
$searchChar = substr($string,$newPos,strlen($charToFind));
$searchPos = $newPos;
}
if (!empty($searchChar))
{
    return $searchPos;
}
else
{
return FALSE;
}
}
function handle_multiline_tags(&$i, $line, $lines)
{
//if a tag is opened but not closed before a line break,
$open = rstrpos($line, '<', strlen($line));
$close = rstrpos($line, '>', strlen($line));
if(($open > $close)&&($open > -1)&&($close > -1))
{
$i++;
return trim($line).trim(handle_multiline_tags(&$i, $lines[$i], $lines));
}
else
{
return trim($line);
}
}
```
This could probably be optimized in some way, but for my purposes, it's sufficient.
|
This is one of my pet peeves: *never* parse HTML by hand. *Never* parse HTML with regexps. *Never* parse HTML with string comparisons. *Always* use an HTML parser to parse HTML – that's what they're there for.
It's been a long time since I've done any PHP, but a quick search turned up [this PHP5 HTML parser](http://SimpleHTMLDom.SourceForge.Net/ "PHP Simple HTML DOM").
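To illustrate why a parser handles this case for free, here is a quick sketch using Python's standard-library `html.parser` (the markup is the example from the question):

```python
from html.parser import HTMLParser

class ImgCollector(HTMLParser):
    """Collects attributes of <img> tags, regardless of line breaks."""
    def __init__(self):
        super().__init__()
        self.images = []
    def handle_starttag(self, tag, attrs):
        if tag == "img":
            self.images.append(dict(attrs))
    def handle_startendtag(self, tag, attrs):
        self.handle_starttag(tag, attrs)

# The tag from the question, split across two lines:
html_src = '<p><img src="example.jpg"\n     alt="example"></p>'
parser = ImgCollector()
parser.feed(html_src)
parser.close()
print(parser.images)  # [{'src': 'example.jpg', 'alt': 'example'}]
```

The line break inside the tag is simply whitespace to the parser, which is exactly the point of the answer above.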
|
How to determine if an html tag splits across multiple lines
|
[
"",
"php",
"html",
"scripting",
""
] |
Using VS2008, C#, .Net 2 and Winforms how can I make a regular Button look "pressed"?
Imagine this button is an on/off switch.
`ToolStripButton` has the Checked property, but the regular Button does not.
|
One way to obtain this behavior is to place a "CheckBox" control and change its "Appearance" property from "Normal" to "Button". This will give you the functionality I believe you are looking for.
|
You could probably also use the ControlPaint class for this.
|
How to make a button appear as if it is pressed?
|
[
"",
"c#",
".net",
"winforms",
"user-interface",
"button",
""
] |
I saw this in [an answer to another question](https://stackoverflow.com/a/4384/697449), in reference to shortcomings of the Java spec:
> There are more shortcomings and this is a subtle topic. Check [this](http://kiranthakkar.blogspot.com/2007/05/method-overloading-with-new-features-of.html) out:
>
> ```
> public class methodOverloading{
> public static void hello(Integer x){
> System.out.println("Integer");
> }
>
> public static void hello(long x){
> System.out.println("long");
> }
>
> public static void main(String[] args){
> int i = 5;
> hello(i);
> }
> }
> ```
>
> Here "long" would be printed (haven't checked it myself), because the compiler chooses widening over auto-boxing. Be careful when using auto-boxing or don't use it at all!
*Are we sure that this is actually an example of widening instead of autoboxing, or is it something else entirely?*
On my initial scanning, I would agree with the statement that the output would be "long" on the basis of `i` being declared as a primitive and not an object. However, if you changed
```
hello(long x)
```
to
```
hello(Long x)
```
the output would print "Integer"
What's really going on here? I know nothing about the compilers/bytecode interpreters for java...
|
In the first case, you have a widening conversion happening. This can be seen by running the "javap" utility program (included with the JDK) on the compiled class:
```
public static void main(java.lang.String[]);
Code:
   0:   iconst_5
   1:   istore_1
   2:   iload_1
3: i2l
4: invokestatic #6; //Method hello:(J)V
7: return
}
```
Clearly, you can see the i2l instruction, the mnemonic for the int-to-long widening bytecode instruction. See the reference [here](http://java.sun.com/docs/books/jvms/second_edition/html/Instructions2.doc6.html).
And in the other case, replacing the "long x" with the object "Long x" signature, you'll have this code in the main method:
```
public static void main(java.lang.String[]);
Code:
   0:   iconst_5
   1:   istore_1
   2:   iload_1
3: invokestatic #6; //Method java/lang/Integer.valueOf:(I)Ljava/lang/Integer;
6: invokestatic #7; //Method hello:(Ljava/lang/Integer;)V
9: return
}
```
So you see the compiler has created the instruction Integer.valueOf(int), to box the primitive inside the wrapper.
|
Yes it is, try it out in a test. You will see "long" printed. It is widening because Java will choose to widen the int into a long before it chooses to autobox it to an Integer, so the hello(long) method is chosen to be called.
Edit: [the original post being referenced](https://stackoverflow.com/questions/4242/why-cant-i-call-tostring-on-a-java-primitive#4384).
Further Edit: The reason the second option prints "Integer" is that there is no widening into a larger primitive available, so the compiler MUST box the value; thus Integer is the only option. Furthermore, Java will only autobox to the original wrapper type, so you would get a compiler error if you kept hello(Long) and removed hello(Integer).
|
Is this really widening vs autoboxing?
|
[
"",
"java",
"primitive",
"autoboxing",
""
] |
As the title says, is there a way to run the same Adobe AIR app more than once? I have a little widget I wrote that shows thumbnails from a couple of photo streams, and I'd like to fix it so I can look at more than one stream at a time. Thanks!
|
It seems that this is not possible. From the [documentation](http://livedocs.adobe.com/flex/3/html/help.html?content=app_launch_1.html):
> Only one instance of an AIR application is started. When an already running application is invoked again, AIR dispatches a new invoke event to the running instance.
It also gives a possible workaround:
> It is the responsibility of an AIR to respond to an invoke event and take the appropriate action (such as opening a new document window).
There is [already a bug](http://bugs.adobe.com/jira/browse/SDK-12915) related to this on the bugtracker, but it is marked closed with no explicit resolution given...
|
No, it can't. AIR only allows one running instance of any app with the same ID defined in the app.xml file.
```
<application xmlns="http://ns.adobe.com/air/application/1.0">
<id>ApplicationID</id>
```
To work around this you'll either have to create individually ID'd apps for each stream, or create a master app with child windows for each stream.
|
Can the same Adobe AIR app run more than once?
|
[
"",
"javascript",
"air",
"adobe",
""
] |
I've been unable to find a source for this information, short of looking through the Python source code myself to determine how the objects work. Does anyone know where I could find this online?
|
Check out the [TimeComplexity](http://wiki.python.org/moin/TimeComplexity) page on the python.org wiki. It covers sets/dicts/lists/etc., at least as far as time complexity goes.
|
Raymond D. Hettinger does [an excellent talk](http://www.youtube.com/watch?v=hYUsssClE94) ([slides](http://wenku.baidu.com/view/9c6fb20dcc1755270722089d.html)) about Python's built-in collections called 'Core Python Containers - Under the Hood'. The version I saw focussed mainly on `set` and `dict`, but `list` was covered too.
There are also some photos of the pertinent slides from EuroPython in [a blog](http://paranoid-engineering.blogspot.com/2008/07/europython-2008-day-3.html).
Here is a summary of my notes on `list`:
* Stores items as an array of pointers. Subscript costs O(1) time. Append costs amortized O(1) time. Insert costs O(n) time.
* Tries to avoid `memcpy` when growing by over-allocating. Many small lists will waste a lot of space, but large lists never waste more than about 12.5% due to over-allocation.
* Some operations pre-size. Examples given were `range(n)`, `map()`, `list()`, `[None] * n`, and slicing.
* When shrinking, the array is `realloc`ed only when it is wasting 50% of space. `pop` is cheap.
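The over-allocation mentioned above is easy to observe in CPython itself: `sys.getsizeof` on a growing list only changes at each reallocation (the exact sizes are implementation details, so treat this as a sketch):

```python
import sys

lst = []
sizes = []
for _ in range(100):
    lst.append(None)
    sizes.append(sys.getsizeof(lst))

# The allocated size only changes when the array is realloc'ed, so the
# number of distinct sizes approximates the number of reallocations.
reallocations = len(set(sizes))
print(f"{len(lst)} appends, ~{reallocations} reallocations")
```

One hundred appends trigger only a handful of reallocations, which is what makes `append` amortized O(1).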
|
Where can I find the time and space complexity of the built-in sequence types in Python
|
[
"",
"python",
"performance",
"complexity-theory",
"big-o",
"sequences",
""
] |
I have an application which *really* should be installed, but does work fine when deployed using JNLP.
However, it would seem that some Java functions such as `Runtime.exec` don't work using the default security options.
I would like to therefore [disable](http://www.joelonsoftware.com/items/2008/07/01.html) UI functionality that relies upon such functions.
So my question is, **how do I detect at runtime whether certain functions are available or not?**
The case study, here of course, is `Runtime.exec`.
|
You want to ask the [SecurityManager](http://java.sun.com/j2se/1.5.0/docs/api/java/lang/SecurityManager.html) whether you have exec rights, using the checkExec method.
|
I have also found that adding the following to the JNLP file:
```
<security>
<all-permissions/>
</security>
```
And signing the JAR file allows the app to run with all the permissions needed for `Runtime.exec`.
|
How do I detect if a function is available during JNLP execution?
|
[
"",
"java",
"security",
"deployment",
"permissions",
"jnlp",
""
] |
What would be the easiest way to be able to send and receive raw network packets. Do I have to write my own JNI wrapping of some c API, and in that case what API am I looking for?
EDIT: I want to be able to do what wireshark does, i.e. record all incoming packets on an interface, and in addition be able to send back my own crafted packets. And I want to do it on a Mac.
|
If you start with the idea that you need something *like* a packet sniffer, you'll want to look at <http://netresearch.ics.uci.edu/kfujii/jpcap/doc/>.
|
Raw socket support in Java has been a JDK feature request for a looong time. See the request [here](https://bugs.java.com/bugdatabase/view_bug.do?bug_id=4727550). There's a long discussion there where you can look for workarounds and solutions. I once needed this for a simple PING operation, but I can't remember how I resolved it. Sorry :)
|
How do I read and write raw ip packets from java on a mac?
|
[
"",
"java",
"macos",
"networking",
""
] |
I'm implementing a tagging system for a website. There are multiple tags per object and multiple objects per tag. This is accomplished by maintaining a table with two values per record, one for the ids of the object and the tag.
I'm looking to write a query to find the objects that match a given set of tags. Suppose I had the following data (in [object] -> [tags]\* format)
```
apple -> fruit red food
banana -> fruit yellow food
cheese -> yellow food
firetruck -> vehicle red
```
If I want to match (red), I should get apple and firetruck. If I want to match (fruit, food) I should get (apple, banana).
How do I write a SQL query to do what I want?
@Jeremy Ruten,
Thanks for your answer. The notation used was used to give some sample data - my database does have a table with 1 object id and 1 tag per record.
Second, my problem is that I need to get all objects that match all tags. Substituting your OR for an AND like so:
```
SELECT object WHERE tag = 'fruit' AND tag = 'food';
```
Yields no results when run.
|
Given:
* object table (primary key id)
* objecttags table (foreign keys objectId, tagid)
* tags table (primary key id)
```
SELECT distinct o.*
from object o join objecttags ot on o.Id = ot.objectid
join tags t on ot.tagid = t.id
where t.Name = 'fruit' or t.name = 'food';
```
This seems backwards, since you want AND, but the issue is that two tags never appear on the same row; an AND therefore yields nothing, since a single row cannot be both a fruit and a food.
This query will yield duplicates usually, because you will get 1 row of each object, per tag.
If you really want an AND in this case, you will need a `group by` and a `having count = <number of tags>` in your query, for example:
```
SELECT distinct o.name, count(*) as count
from object o join objecttags ot on o.Id = ot.objectid
join tags t on ot.tagid = t.id
where t.Name = 'fruit' or t.name = 'food'
group by o.name
having count = 2;
```
|
Oh gosh I may have mis-interpreted your original comment.
The easiest way to do this in SQL would be to have three tables:
```
1) Tags ( tag_id, name )
2) Objects (whatever that is)
3) Object_Tag( tag_id, object_id )
```
Then you can ask virtually any question you want of the data quickly, easily, and efficiently (provided you index appropriately). If you want to get fancy, you can allow multi-word tags, too (there's an elegant way, and a less elegant way, I can think of).
I assume that's what you've got, so this SQL below will work:
The literal way:
```
SELECT obj
FROM object
WHERE EXISTS( SELECT *
FROM tags
WHERE tag = 'fruit'
AND oid = object_id )
AND EXISTS( SELECT *
FROM tags
WHERE tag = 'Apple'
AND oid = object_id )
```
There are also other ways you can do it, such as:
```
SELECT oid
FROM tags
WHERE tag = 'Apple'
INTERSECT
SELECT oid
FROM tags
WHERE tag = 'Fruit'
```
|
SQL many-to-many matching
|
[
"",
"sql",
"many-to-many",
"tagging",
""
] |
Jeff actually posted about this in [Sanitize HTML](http://refactormycode.com/codes/333-sanitize-html). But his example is in C# and I'm actually more interested in a Java version. Does anyone have a better version for Java? Is his example good enough to just convert directly from C# to Java?
[Update] I have put a bounty on this question because SO wasn't as popular when I asked the question as it is today (\*). As for anything related to security, the more people look into it, the better it is!
(\*) In fact, I think it was still in closed beta
|
Don't do this with regular expressions. Remember, you're not protecting just against valid HTML; you're protecting against the DOM that web browsers create. Browsers can be tricked into producing valid DOM from invalid HTML quite easily.
For example, see this list of [obfuscated XSS attacks](http://ha.ckers.org/xss.html). Are you prepared to tailor a regex to prevent this real world attack on [Yahoo and Hotmail](http://www.greymagic.com/security/advisories/gm005-mc/) on IE6/7/8?
```
<HTML><BODY>
<?xml:namespace prefix="t" ns="urn:schemas-microsoft-com:time">
<?import namespace="t" implementation="#default#time2">
<t:set attributeName="innerHTML" to="XSS<SCRIPT DEFER>alert("XSS")</SCRIPT>">
</BODY></HTML>
```
How about this attack that works on IE6?
```
<TABLE BACKGROUND="javascript:alert('XSS')">
```
How about attacks that are not listed on this site? The problem with Jeff's approach is that it's not a whitelist, as claimed. As someone on [that page](http://refactormycode.com/codes/333-sanitize-html#refactor_13642) adeptly notes:
> The problem with it, is that the html
> must be clean. There are cases where
> you can pass in hacked html, and it
> won't match it, in which case it'll
> return the hacked html string as it
> won't match anything to replace. This
> isn't strictly whitelisting.
I would suggest a purpose built tool like [AntiSamy](http://www.owasp.org/index.php/Category:OWASP_AntiSamy_Project). It works by actually parsing the HTML, and then traversing the DOM and removing anything that's not in the *configurable* whitelist. The major difference is the ability to gracefully handle malformed HTML.
The best part is that it actually unit tests for all the XSS attacks on the above site. Besides, what could be easier than this API call:
```
public String toSafeHtml(String html) throws ScanException, PolicyException {
Policy policy = Policy.getInstance(POLICY_FILE);
AntiSamy antiSamy = new AntiSamy();
CleanResults cleanResults = antiSamy.scan(html, policy);
return cleanResults.getCleanHTML().trim();
}
```
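AntiSamy is the right tool in Java, but just to illustrate the parse-then-whitelist idea (as opposed to regex matching), here is a deliberately minimal, non-production Python sketch; the whitelist and the attribute-dropping policy are arbitrary choices for the example:

```python
import html
from html.parser import HTMLParser

ALLOWED_TAGS = {"b", "i", "em", "strong", "p"}  # arbitrary whitelist for the example

class WhitelistSanitizer(HTMLParser):
    """Rebuilds the input, escaping any tag that is not whitelisted.

    Attributes are dropped even on allowed tags -- a real sanitizer
    would whitelist attributes (and URL schemes) per tag.
    """
    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.out = []
    def handle_starttag(self, tag, attrs):
        if tag in ALLOWED_TAGS:
            self.out.append(f"<{tag}>")
        else:
            self.out.append(html.escape(self.get_starttag_text()))
    def handle_endtag(self, tag):
        text = f"</{tag}>"
        self.out.append(text if tag in ALLOWED_TAGS else html.escape(text))
    def handle_data(self, data):
        self.out.append(html.escape(data))
    def result(self):
        return "".join(self.out)

s = WhitelistSanitizer()
s.feed('<b>hi</b><script>alert(1)</script>')
s.close()
print(s.result())  # the <script> tag comes out escaped, the <b> survives
```

Because the decision is made on the parsed DOM-like event stream rather than on the raw text, obfuscated markup is normalized before the whitelist is applied — which is the property the regex approaches lack.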
|
I extracted this from NoScript, the best anti-XSS addon. Here is its regex, which works flawlessly:
```
<[^\w<>]*(?:[^<>"'\s]*:)?[^\w<>]*(?:\W*s\W*c\W*r\W*i\W*p\W*t|\W*f\W*o\W*r\W*m|\W*s\W*t\W*y\W*l\W*e|\W*s\W*v\W*g|\W*m\W*a\W*r\W*q\W*u\W*e\W*e|(?:\W*l\W*i\W*n\W*k|\W*o\W*b\W*j\W*e\W*c\W*t|\W*e\W*m\W*b\W*e\W*d|\W*a\W*p\W*p\W*l\W*e\W*t|\W*p\W*a\W*r\W*a\W*m|\W*i?\W*f\W*r\W*a\W*m\W*e|\W*b\W*a\W*s\W*e|\W*b\W*o\W*d\W*y|\W*m\W*e\W*t\W*a|\W*i\W*m\W*a?\W*g\W*e?|\W*v\W*i\W*d\W*e\W*o|\W*a\W*u\W*d\W*i\W*o|\W*b\W*i\W*n\W*d\W*i\W*n\W*g\W*s|\W*s\W*e\W*t|\W*i\W*s\W*i\W*n\W*d\W*e\W*x|\W*a\W*n\W*i\W*m\W*a\W*t\W*e)[^>\w])|(?:<\w[\s\S]*[\s\0\/]|['"])(?:formaction|style|background|src|lowsrc|ping|on(?:d(?:e(?:vice(?:(?:orienta|mo)tion|proximity|found|light)|livery(?:success|error)|activate)|r(?:ag(?:e(?:n(?:ter|d)|xit)|(?:gestur|leav)e|start|drop|over)?|op)|i(?:s(?:c(?:hargingtimechange|onnect(?:ing|ed))|abled)|aling)|ata(?:setc(?:omplete|hanged)|(?:availabl|chang)e|error)|urationchange|ownloading|blclick)|Moz(?:M(?:agnifyGesture(?:Update|Start)?|ouse(?:PixelScroll|Hittest))|S(?:wipeGesture(?:Update|Start|End)?|crolledAreaChanged)|(?:(?:Press)?TapGestur|BeforeResiz)e|EdgeUI(?:C(?:omplet|ancel)|Start)ed|RotateGesture(?:Update|Start)?|A(?:udioAvailable|fterPaint))|c(?:o(?:m(?:p(?:osition(?:update|start|end)|lete)|mand(?:update)?)|n(?:t(?:rolselect|extmenu)|nect(?:ing|ed))|py)|a(?:(?:llschang|ch)ed|nplay(?:through)?|rdstatechange)|h(?:(?:arging(?:time)?ch)?ange|ecking)|(?:fstate|ell)change|u(?:echange|t)|l(?:ick|ose))|m(?:o(?:z(?:pointerlock(?:change|error)|(?:orientation|time)change|fullscreen(?:change|error)|network(?:down|up)load)|use(?:(?:lea|mo)ve|o(?:ver|ut)|enter|wheel|down|up)|ve(?:start|end)?)|essage|ark)|s(?:t(?:a(?:t(?:uschanged|echange)|lled|rt)|k(?:sessione|comma)nd|op)|e(?:ek(?:complete|ing|ed)|(?:lec(?:tstar)?)?t|n(?:ding|t))|u(?:ccess|spend|bmit)|peech(?:start|end)|ound(?:start|end)|croll|how)|b(?:e(?:for(?:e(?:(?:scriptexecu|activa)te|u(?:nload|pdate)|p(?:aste|rint)|c(?:opy|ut)|editfocus)|deactivate)|gin(?:Event)?)|oun(?:dary|ce)|l(?:ocked|ur)|roadcast|usy)|a(?:n(?:imation(?
:iteration|start|end)|tennastatechange)|fter(?:(?:scriptexecu|upda)te|print)|udio(?:process|start|end)|d(?:apteradded|dtrack)|ctivate|lerting|bort)|DOM(?:Node(?:Inserted(?:IntoDocument)?|Removed(?:FromDocument)?)|(?:CharacterData|Subtree)Modified|A(?:ttrModified|ctivate)|Focus(?:Out|In)|MouseScroll)|r(?:e(?:s(?:u(?:m(?:ing|e)|lt)|ize|et)|adystatechange|pea(?:tEven)?t|movetrack|trieving|ceived)|ow(?:s(?:inserted|delete)|e(?:nter|xit))|atechange)|p(?:op(?:up(?:hid(?:den|ing)|show(?:ing|n))|state)|a(?:ge(?:hide|show)|(?:st|us)e|int)|ro(?:pertychange|gress)|lay(?:ing)?)|t(?:ouch(?:(?:lea|mo)ve|en(?:ter|d)|cancel|start)|ime(?:update|out)|ransitionend|ext)|u(?:s(?:erproximity|sdreceived)|p(?:gradeneeded|dateready)|n(?:derflow|load))|f(?:o(?:rm(?:change|input)|cus(?:out|in)?)|i(?:lterchange|nish)|ailed)|l(?:o(?:ad(?:e(?:d(?:meta)?data|nd)|start)?|secapture)|evelchange|y)|g(?:amepad(?:(?:dis)?connected|button(?:down|up)|axismove)|et)|e(?:n(?:d(?:Event|ed)?|abled|ter)|rror(?:update)?|mptied|xit)|i(?:cc(?:cardlockerror|infochange)|n(?:coming|valid|put))|o(?:(?:(?:ff|n)lin|bsolet)e|verflow(?:changed)?|pen)|SVG(?:(?:Unl|L)oad|Resize|Scroll|Abort|Error|Zoom)|h(?:e(?:adphoneschange|l[dp])|ashchange|olding)|v(?:o(?:lum|ic)e|ersion)change|w(?:a(?:it|rn)ing|heel)|key(?:press|down|up)|(?:AppComman|Loa)d|no(?:update|match)|Request|zoom))[\s\0]*=
```
Test: <http://regex101.com/r/rV7zK8>
I think it blocks 99% of XSS because it is part of NoScript, an addon that gets updated regularly.
|
Best regex to catch XSS (Cross-site Scripting) attack (in Java)?
|
[
"",
"java",
"html",
"regex",
"xss",
""
] |
I'm using `ColdFusion` to populate a template that includes HTML unordered lists (`<ul>`s).
Most of these aren't that long, but a few have ridiculously long lengths and could really stand to be in 2-3 columns.
Is there an HTML, ColdFusion or perhaps JavaScript (I'm accepting jQuery solutions) way to do this easily? It's not worth some over-complicated heavyweight solution to save some scrolling.
|
So I dug up this article from A List Apart [CSS Swag: Multi-Column Lists](http://www.alistapart.com/articles/multicolumnlists). I ended up using the first solution, it's not the best but the others require either using complex HTML that can't be generated dynamically, or creating a lot of custom classes, which could be done but would require loads of in-line styling and possibly a huge page.
Other solutions are still welcome though.
|
If Safari and Firefox support is good enough for you, there is a CSS solution:
```
ul {
-webkit-column-count: 3;
-moz-column-count: 3;
column-count: 3;
-webkit-column-gap: 2em;
-moz-column-gap: 2em;
column-gap: 2em;
}
```
I'm not sure about Opera.
|
Wrapping lists into columns
|
[
"",
"javascript",
"jquery",
"html",
"css",
"cfml",
""
] |
Most of time we represent concepts which can never be less than 0. For example to declare length, we write:
```
int length;
```
The name expresses its purpose well but you can assign negative values to it. It seems that for some situations, you can represent your intent more clearly by writing it this way instead:
```
uint length;
```
Some disadvantages that I can think of:
* unsigned types (uint, ulong, ushort) are not CLS compliant so you can't use it with other languages that don't support this
* .Net classes use signed types most of the time so you have to cast
Thoughts?
|
“When in Rome, do as the Romans do.”
While there is theoretically an advantage in using unsigned values where applicable because it makes the code more expressive, this is simply not done in C#. I'm not sure why the developers initially didn't design the interfaces to handle `uints` and make the type CLS compliant but now the train has left the station.
Since consistency is generally important I'd advise taking the C# road and using `int`s.
|
If you decrement a signed number with a value of 0, it becomes negative and you can easily test for this. If you decrement an unsigned number with a value of 0, it underflows and becomes the maximum value for the type - somewhat more difficult to check for.
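The wrap-around is easy to demonstrate by simulating 32-bit unsigned arithmetic (a sketch; Python's own integers are arbitrary-precision, so we reduce modulo 2^32):

```python
BITS = 32

def u32_sub(a, b):
    """Subtraction as a 32-bit unsigned machine would perform it."""
    return (a - b) % (1 << BITS)

print(u32_sub(0, 1))  # 4294967295 -- decrementing 0 wraps to the max value
print(0 - 1)          # -1 -- the signed result is trivially testable with < 0
```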
|
Using Unsigned Primitive Types
|
[
"",
"c#",
"primitive-types",
""
] |
I'm trying to add support for stackoverflow feeds in my rss reader but **SelectNodes** and **SelectSingleNode** have no effect. This is probably something to do with ATOM and xml namespaces that I just don't understand yet.
I have gotten it to work by removing all attributes from the **feed** tag, but that's a hack and I would like to do it properly. So, how do you use **SelectNodes** with atom feeds?
Here's a snippet of the feed.
```
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:creativeCommons="http://backend.userland.com/creativeCommonsRssModule" xmlns:thr="http://purl.org/syndication/thread/1.0">
<title type="html">StackOverflow.com - Questions tagged: c</title>
<link rel="self" href="http://stackoverflow.com/feeds/tag/c" type="application/atom+xml" />
<subtitle>Check out the latest from StackOverflow.com</subtitle>
<updated>2008-08-24T12:25:30Z</updated>
<id>http://stackoverflow.com/feeds/tag/c</id>
<creativeCommons:license>http://www.creativecommons.org/licenses/by-nc/2.5/rdf</creativeCommons:license>
<entry>
<id>http://stackoverflow.com/questions/22901/what-is-the-best-way-to-communicate-with-a-sql-server</id>
<title type="html">What is the best way to communicate with a SQL server?</title>
<category scheme="http://stackoverflow.com/feeds/tag/c/tags" term="c" /><category scheme="http://stackoverflow.com/feeds/tag/c/tags" term="c++" /><category scheme="http://stackoverflow.com/feeds/tag/c/tags" term="sql" /><category scheme="http://stackoverflow.com/feeds/tag/c/tags" term="mysql" /><category scheme="http://stackoverflow.com/feeds/tag/c/tags" term="database" />
<author><name>Ed</name></author>
<link rel="alternate" href="http://stackoverflow.com/questions/22901/what-is-the-best-way-to-communicate-with-a-sql-server" />
<published>2008-08-22T05:09:04Z</published>
<updated>2008-08-23T04:52:39Z</updated>
<summary type="html"><p>I am going to be using c/c++, and would like to know the best way to talk to a MySQL server. Should I use the library that comes with the server installation? Are they any good libraries I should consider other than the official one?</p></summary>
<link rel="replies" type="application/atom+xml" href="http://stackoverflow.com/feeds/question/22901/answers" thr:count="2"/>
<thr:total>2</thr:total>
</entry>
</feed>
```
## The Solution
```
XmlDocument doc = new XmlDocument();
XmlNamespaceManager nsmgr = new XmlNamespaceManager(doc.NameTable);
nsmgr.AddNamespace("atom", "http://www.w3.org/2005/Atom");
doc.Load(feed);
// successful
XmlNodeList itemList = doc.DocumentElement.SelectNodes("atom:entry", nsmgr);
```
|
Don't confuse the namespace names in the XML file with the namespace names for your namespace manager. They're both shortcuts, and they don't necessarily have to match.
So you can register "<http://www.w3.org/2005/Atom>" as "atom", and then do a SelectNodes for "atom:entry".
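The same principle — register the namespace URI under any local prefix you choose — applies in other XML APIs as well. For instance, a Python ElementTree sketch against a cut-down Atom document:

```python
import xml.etree.ElementTree as ET

atom = """<feed xmlns="http://www.w3.org/2005/Atom">
  <title type="html">Example feed</title>
  <entry><id>urn:example:1</id></entry>
  <entry><id>urn:example:2</id></entry>
</feed>"""

root = ET.fromstring(atom)
# The prefix "atom" is our own choice; only the URI has to match the document.
ns = {"atom": "http://www.w3.org/2005/Atom"}
entries = root.findall("atom:entry", ns)
print(len(entries))  # 2
```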
|
You might need to add a XmlNamespaceManager.
```
XmlDocument document = new XmlDocument();
XmlNamespaceManager nsmgr = new XmlNamespaceManager(document.NameTable);
nsmgr.AddNamespace("creativeCommons", "http://backend.userland.com/creativeCommonsRssModule");
// AddNamespace for other namespaces too.
document.Load(feed);
```
It is needed if you want to call SelectNodes on a document that uses them. What error are you seeing?
|
SelectNodes not working on stackoverflow feed
|
[
"",
"c#",
".net",
"rss",
"atom-feed",
""
] |
In Ruby on Rails, I'm attempting to update the `innerHTML` of a div tag using the `form_remote_tag` helper. This update happens whenever an associated select tag receives an onchange event. The problem is, `<select onchange="this.form.submit();">` doesn't work. Nor does `document.forms[0].submit()`. The only way to get the onsubmit code generated in the form\_remote\_tag to execute is to create a hidden submit button, and invoke the click method on the button from the select tag. Here's a working ERb partial example.
```
<% form_remote_tag :url => product_path, :update => 'content', :method => 'get' do -%>
<% content_tag :div, :id => 'content' do -%>
<%= select_tag :update, options_for_select([["foo", 1], ["bar", 2]]), :onchange => "this.form.commit.click" %>
<%= submit_tag 'submit_button', :style => "display: none" %>
<% end %>
<% end %>
```
What I want to do is something like this, but it doesn't work.
```
<% form_remote_tag :url => product_path, :update => 'content', :method => 'get' do -%>
<% content_tag :div, :id => 'content' do -%>
# the following line does not work
<%= select_tag :update, options_for_select([["foo", 1], ["bar", 2]]), :onchange => "this.form.onsubmit()" %>
<% end %>
<% end %>
```
So, is there any way to remove the invisible submit button for this use case?
There seems to be some confusion. So, let me explain. The basic problem is that `submit()` doesn't call the `onsubmit()` code rendered into the form.
The actual HTML form that Rails renders from this ERb looks like this:
```
<form action="/products/1" method="post" onsubmit="new Ajax.Updater('content', '/products/1', {asynchronous:true, evalScripts:true, method:'get', parameters:Form.serialize(this)}); return false;">
<div style="margin:0;padding:0">
<input name="authenticity_token" type="hidden" value="4eacf78eb87e9262a0b631a8a6e417e9a5957cab" />
</div>
<div id="content">
<select id="update" name="update" onchange="this.form.commit.click">
<option value="1">foo</option>
<option value="2">bar</option>
</select>
<input name="commit" style="display: none" type="submit" value="submit_button" />
</div>
</form>
```
I want to axe the invisible submit button, but using a straight form.submit appears to not work. So, I need some way to call the form's onsubmit event code.
Update: Orion Edwards' solution would work if there wasn't a `return(false);` generated by Rails. I'm not sure which is worse though: sending a phantom click to an invisible submit button, or calling eval on the `getAttribute('onsubmit')` result after removing the return call with a JavaScript string replacement!
|
If you didn't actually want to submit the form, but just invoke whatever code happened to be in the onsubmit, you could possibly do this: (untested)
```
var code = document.getElementById('formId').getAttribute('onsubmit');
eval(code);
```
|
I realize this question is kind of old, but what the heck are you doing eval for?
```
document.getElementById('formId').onsubmit();
document.getElementById('formId').submit();
```
or
```
document.formName.onsubmit();
document.formName.submit();
```
When the DOM of a document is loaded, the events are not strings any more, they are functions.
```
alert(typeof document.formName.onsubmit); // function
```
So there's no reason to convert a function to a string just so you can eval it.
|
Is it possible to call Javascript's onsubmit event programmatically on a form?
|
[
"",
"javascript",
"ruby-on-rails",
"ruby",
""
] |
Currently we have a hybrid ASP/PHP setup connecting to a SQL Server 2005 database. But all the query work is done on the client side, I'd like to move some of this to PHP.
What driver and/or connection string is needed to connect to Sql Svr and what is the syntax to use in PHP?
---
Update: OK so I was definitely trying to avoid using anything to do with copying DLLs etc. I'll look into the [SQL2K5PHP](http://www.codeplex.com/SQL2K5PHP) driver (thanks Vincent). @jcarrascal for the sake of clarity, by "client side" I mean our application is an internal web app that runs as an [HTA](http://msdn.microsoft.com/en-us/library/ms536471.aspx), with all queries done via javascript calls to an ASP which actually submits the DB request.
|
You have two options:
1) **php\_mssql extension**: If you'd like something that has the same API as mysql and mysqli, use the php\_mssql extension. But there is a catch: the ntwdblib.dll file bundled with PHP does not work. You have to get this file from a SQL Server 2000 installation, or you can find it on the Internet. This API is supposedly not very reliable, but I have been using it without problems for about a year.
<http://ca.php.net/mssql>
2) **Microsoft SQL Server 2005 PHP Driver**: A more modern option, but it does not have the same API and is missing some important functions (mssql\_num\_rows). The big plus is that it is supported by Microsoft and is likely to work with future versions.
<http://msdn.microsoft.com/en-us/data/cc299381.aspx>
|
Just use the mssql\_connect() function like this:
```
$conn = mssql_connect('localhost', 'sa' , '123456')
or die('Can\'t connect.');
mssql_select_db('database', $conn)
or die('Can\'t select the database');
```
Functions relating to SQL Server are defined in the [PHP manual for the MSSQL driver](http://www.php.net/manual/en/book.mssql.php).
One question though, "all the query work is done on the client side" WTF? :D
|
PHP with SQL Server 2005+
|
[
"",
"php",
"sql-server",
""
] |
I have a ListBox that has a style defined for ListBoxItems. Inside this style, I have some labels and a button. On that button, I want to define a click event that can be handled on my page (or any page that uses that style). How do I create an event handler on my WPF page to handle the event from my ListBoxItems style?
Here is my style (affected code only):
```
<Style x:Key="UsersTimeOffList" TargetType="{x:Type ListBoxItem}">
...
<Grid>
<Button x:Name="btnRemove" Content="Remove" Margin="0,10,40,0" Click="btnRemove_Click" />
</Grid>
</Style>
```
Thanks!
|
Take a look at [RoutedCommand](http://msdn.microsoft.com/en-us/library/system.windows.input.routedcommand.aspx)s.
Define your command in myclass somewhere as follows:
```
public static readonly RoutedCommand Login = new RoutedCommand();
```
Now define your button with this command:
```
<Button Command="{x:Static myclass.Login}" />
```
You can use CommandParameter for extra information..
Now last but not least, start listening to your command:
In the constructor of the class you wish to do some nice stuff, you place:
```
CommandBindings.Add(new CommandBinding(myclass.Login, ExecuteLogin));
```
or in XAML:
```
<UserControl.CommandBindings>
<CommandBinding Command="{x:Static myclass.Login}" Executed="ExecuteLogin" />
</UserControl.CommandBindings>
```
And you implement the delegate the CommandBinding needs:
```
private void ExecuteLogin(object sender, ExecutedRoutedEventArgs e)
{
//Your code goes here... e has your parameter!
}
```
You can start listening to this command everywhere in your visual tree!
Hope this helps
PS You can also define the CommandBinding with a CanExecute delegate which will even disable your command if the CanExecute says so :)
PPS Here is another example: [RoutedCommands in WPF](http://www.wpfwiki.com/Default.aspx?Page=WPF%20Q13.8&AspxAutoDetectCookieSupport=1)
|
As Arcturus posted, RoutedCommands are a great way to achieve this. However, if there's only the one button in your DataTemplate then this might be a bit simpler:
You can actually handle any button's Click event from the host ListBox, like this:
```
<ListBox Button.Click="removeButtonClick" ... />
```
Any buttons contained within the ListBox will fire that event when they're clicked on. From within the event handler you can use e.OriginalSource to get a reference back to the button that was clicked on.
Obviously this is too simplistic if your ListBoxItems have more than one button, but in many cases it works just fine.
|
WPF Listbox style with a button
|
[
"",
"c#",
"wpf",
""
] |
How do I convert the value of a PHP variable to string?
I was looking for something better than concatenating with an empty string:
```
$myText = $myVar . '';
```
Like the `ToString()` method in Java or .NET.
|
You can use the [casting operators](https://www.php.net/manual/en/language.types.type-juggling.php):
```
$myText = (string)$myVar;
```
There are more details for string casting and conversion in the [Strings section](https://www.php.net/manual/en/language.types.string.php#language.types.string.casting) of the PHP manual, including special handling for booleans and nulls.
|
This is done with typecasting:
```
$strvar = (string) $var; // Casts to string
echo $var; // Will cast to string implicitly
var_dump($var); // Will show the true type of the variable
```
In a class you can define what is output by using the magical method [`__toString`](http://www.php.net/manual/en/language.oop5.magic.php#language.oop5.magic.tostring). An example is below:
```
class Bottles {
public function __toString()
{
return 'Ninety nine green bottles';
}
}
$ex = new Bottles;
var_dump($ex, (string) $ex);
// Returns: instance of Bottles and "Ninety nine green bottles"
```
Some more type casting examples:
```
$i = 1;
// int 1
var_dump((int) $i);
// bool true
var_dump((bool) $i);
// string "1"
var_dump((string) 1);
```
|
PHP equivalent of .NET/Java's toString()
|
[
"",
"php",
"string",
""
] |
We have a PHP project that we would like to version control. Right now there are three of us working on a development version of the project, which resides in an external folder to which all of our Eclipse IDEs are linked, and which thus has no version control.
What is the right way and the best way to version control this?
We have an SVN set up, but we just need to find a good way to check in and out that allows us to test on the development server. Any ideas?
|
We were in a similar situation, and here's what we ended up doing:
* Set up two branches -- the release and development branch.
* For the development branch, include a post-commit hook that deploys the repository to the dev server, so you can test.
* Once you're ready, you merge your changes into the release branch. I'd also suggest putting in a post-commit hook for deployment there.
You can also set up individual development servers for each of the team members, on their workstations. I find that it speeds things up a bit, although you do have some more setup time.
We had to use a single development server, because we were using a proprietary CMS and ran into licensing issues. So our post-commit hook was a simple FTP bot.
|
Here is what we do:
* Each dev has a VM that is configured like our integration server
* The integration server has space for Trunk, each user, and a few slots for branches
* The production server
* Hooks are in Subversion to e-mail when commits are made
At the beginning of a project, the user makes a branch and checks it out on their personal VM as well as grabs a clean copy of the database. They do their work, committing as they go.
Once they have finished everything in their own personal space they log into the integration server and check out their branch, run their tests, etc. When all that passes their branch is merged into Trunk.
Trunk is rebuilt, the full suite of tests are run, and if all is good it gets the big ol' stamp of approval, tagged in SVN, and promoted to Production at the end of the night.
If at any point a commit by someone else is made, we get an e-mail and can merge those changes into our individual branches.
|
Version control PHP Web Project
|
[
"",
"php",
"svn",
"version-control",
"cvs",
""
] |
We would like to have user-defined formulas in our C++ program.
e.g. the value *v = x + (y - (z - 2)) / 2*. Later in the program the user would define x, y and z, and the program should return the result of the calculation.
Sometime later the formula may be changed, so the next time the program should parse the new formula and apply the new values.
Any ideas / hints on how to do something like this?
So far my only idea is to write a parser to evaluate these formulas. Any thoughts on that approach?
|
If it will be used frequently and if it will be extended in the future, I would almost recommend adding either Python or Lua into your code. [Lua](http://www.lua.org/) is a very lightweight scripting language which you can hook into and provide new functions, operators etc. If you want to do more robust and complicated things, use Python instead.
|
You can represent your formula as a tree of operations and sub-expressions. You may want to define types or constants for Operation types and Variables.
You can then easily enough write a method that recurses through the tree, applying the appropriate operations to whatever values you pass in.
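The tree-of-operations idea can be sketched in C++ like this (a minimal sketch; the node and member names are invented for illustration, and a parser would be the part that builds such a tree from the formula text):

```cpp
#include <map>
#include <memory>
#include <stdexcept>
#include <string>

// Each node evaluates itself given the current variable bindings.
struct Node {
    virtual ~Node() = default;
    virtual double eval(const std::map<std::string, double>& vars) const = 0;
};

// A literal constant, e.g. the 2 in (z - 2).
struct Const : Node {
    double value;
    explicit Const(double v) : value(v) {}
    double eval(const std::map<std::string, double>&) const override { return value; }
};

// A named variable, looked up in the bindings at evaluation time.
struct Var : Node {
    std::string name;
    explicit Var(std::string n) : name(std::move(n)) {}
    double eval(const std::map<std::string, double>& vars) const override {
        auto it = vars.find(name);
        if (it == vars.end()) throw std::runtime_error("unbound variable: " + name);
        return it->second;
    }
};

// A binary operation on two sub-expressions.
struct BinOp : Node {
    char op;
    std::unique_ptr<Node> lhs, rhs;
    BinOp(char o, std::unique_ptr<Node> l, std::unique_ptr<Node> r)
        : op(o), lhs(std::move(l)), rhs(std::move(r)) {}
    double eval(const std::map<std::string, double>& vars) const override {
        double a = lhs->eval(vars), b = rhs->eval(vars);
        switch (op) {
            case '+': return a + b;
            case '-': return a - b;
            case '*': return a * b;
            case '/': return a / b;
        }
        throw std::runtime_error("unknown operator");
    }
};
```

Changing the formula later then just means parsing the new text into a fresh tree; the evaluation code stays the same.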
|
calculating user defined formulas (with c++)
|
[
"",
"c++",
""
] |
For whatever reason, our company has a coding guideline that states:
`Each class shall have its own header and implementation file.`
So if we wrote a class called `MyString` we would need an associated **MyString.h** and **MyString.cxx**.
Does anyone else do this? Has anyone seen any compiling performance repercussions as a result? Does 5000 classes in 10000 files compile just as quickly as 5000 classes in 2500 files? If not, is the difference noticeable?
[We code C++ and use GCC 3.4.4 as our everyday compiler]
|
The term here is **translation unit** and you really want to (if possible) have one class per translation unit ie, one class implementation per .cpp file, with a corresponding .h file of the same name.
It's usually more efficient (from a compile/link standpoint) to do things this way, especially if you're doing things like incremental linking and so forth. The idea is that translation units are isolated such that, when one translation unit changes, you don't have to rebuild a lot of stuff, as you would if you started lumping many abstractions into a single translation unit.
Also you'll find many errors/diagnostics are reported via file name ("Error in Myclass.cpp, line 22") and it helps if there's a one-to-one correspondence between files and classes. (Or I suppose you could call it a 2 to 1 correspondence).
|
## Overwhelmed by thousands of lines of code?
Having one set of header/source files per class in a directory can seem overkill. And if the number of classes goes toward 100 or 1000, it can even be frightening.
But having worked with sources following the philosophy "let's put everything together", the conclusion is that only the person who wrote the file has any hope of not getting lost inside. Even with an IDE, it is easy to miss things because **when you're playing with a source of 20,000 lines, you just close your mind to anything not exactly related to your problem.**
*Real-life example: the class hierarchy defined in those thousand-line sources closed itself into a diamond inheritance, and some methods were overridden in child classes by methods with exactly the same code. This was easily overlooked (who wants to explore/check a 20,000-line source file?), and when the original method was changed (bug correction), the effect was not as universal as expected.*
## Dependencies becoming circular?
I had this problem with templated code, but I saw similar problems with regular C++ and C code.
Breaking down your sources into 1 header per struct/class lets you:
* Speed up compilation because you can use symbol forward-declaration instead of including whole objects
* Have circular dependencies between classes (§) (i.e. class A has a pointer to B, and B has a pointer to A)
In source-controlled code, class dependencies could lead to regular moving of classes up and down the file, just to make the header compile. You don't want to study the evolution of such moves when comparing the same file in different versions.
**Having separate headers makes the code more modular, faster to compile, and makes it easier to study its evolution through different versions diffs**
*For my template program, I had to divide my headers into two files: The .HPP file containing the template class declaration/definition, and the .INL file containing the definitions of the said class methods.*
Putting all this code inside one and only one unique header would mean putting class definitions at the beginning of this file, and the method definitions at the end.
And then, if someone needed only a small part of the code, with the one-header-only solution, they still would have to pay for the slower compilation.
(§) Note that you can have circular dependencies between classes if you know which class owns which. This is a discussion about classes having knowledge of the existence of other classes, not the shared\_ptr circular-dependency antipattern.
## One last word: Headers should be self-sufficient
One thing, though, that must be respected by a solution of multiple headers and multiple sources.
**When you include one header, no matter which header, your source must compile cleanly.**
Each header should be self-sufficient. You're supposed to be developing code, not treasure-hunting by grepping your 10,000+ source-file project to find which header defines the symbol used in the 1,000-line header you need to include just because of *one* enum.
This means that each header either defines or forward-declares all the symbols it uses, or includes all the needed headers (and only the needed headers).
## Question about circular dependencies
[underscore-d](https://stackoverflow.com/users/2757035/underscore-d) asks:
> Can you explain how using separate headers makes any difference to circular dependencies? I don't think it does. We can trivially create a circular dependency even if both classes are fully declared in the same header, simply by forward-declaring one in advance before we declare a handle to it in the other. Everything else seems to be great points, but the idea that separate headers facilitate circular dependencies seems way off
>
> underscore\_d, Nov 13 at 23:20
Let's say you have 2 class templates, A and B.
Let's say the definition of class A (resp. B) has a pointer to B (resp. A). Let's also say the methods of class A (resp. B) actually call methods from B (resp. A).
You have a circular dependency both in the definition of the classes, and the implementations of their methods.
If A and B were normal classes, and A and B's methods were in .CPP files, there would be no problem: You would use a forward declaration, have a header for each class definitions, then each CPP would include both HPP.
But as you have templates, you actually have to reproduce that patterns above, but with headers only.
This means:
1. a definition header `A.def.hpp` and `B.def.hpp`
2. an implementation header `A.inl.hpp` and `B.inl.hpp`
3. for convenience, a "naive" header `A.hpp` and `B.hpp`
Each header will have the following traits:
1. In `A.def.hpp` (resp. `B.def.hpp`), you have a forward declaration of class B (resp. A), which will enable you to declare a pointer/reference to that class
2. `A.inl.hpp` (resp. `B.inl.hpp`) will include both `A.def.hpp` and `B.def.hpp`, which will enable methods from A (resp. B) to use the class B (resp. A).
3. `A.hpp` (resp. `B.hpp`) will directly include both `A.def.hpp` and `A.inl.hpp` (resp. `B.def.hpp` and `B.inl.hpp`)
4. Of course, all headers need to be self sufficient, and protected by header guards
The naive user will include `A.hpp` and/or `B.hpp`, thus ignoring the whole mess.
And having that organization means the library writer can solve the circular dependencies between A and B while keeping both classes in separate files, easy to navigate once you understand the scheme.
Please note that it was an edge case (two templates knowing each other). I expect most code to *not* need that trick.
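Collapsed into a single listing for brevity (the comments mark where each header file would begin), the scheme might look like the following sketch; the class contents are invented purely for illustration:

```cpp
// ---- A.def.hpp : definition of A, forward declaration of B --------
#ifndef A_DEF_HPP
#define A_DEF_HPP
template <typename T> class B;   // enough to declare a pointer to B

template <typename T>
class A {
public:
    B<T> *peer;
    A() : peer(0) {}
    T callPeer() const;          // body needs B's full definition
};
#endif

// ---- B.def.hpp : definition of B, forward declaration of A --------
#ifndef B_DEF_HPP
#define B_DEF_HPP
template <typename T> class A;

template <typename T>
class B {
public:
    A<T> *peer;
    T value;
    explicit B(T v) : peer(0), value(v) {}
    T give() const { return value; }
};
#endif

// ---- A.inl.hpp : includes A.def.hpp and B.def.hpp, then... --------
#ifndef A_INL_HPP
#define A_INL_HPP
template <typename T>
T A<T>::callPeer() const { return peer->give(); }  // B is complete here
#endif

// ---- A.hpp would then simply include A.def.hpp and A.inl.hpp ------
```

Each class declaration only needs the other's forward declaration, while the method bodies, placed in the .inl.hpp headers, see both full definitions.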
|
Multiple classes in a header file vs. a single header file per class
|
[
"",
"c++",
"performance",
"file-organization",
""
] |
I'd like to have a python program alert me when it has completed its task by making a beep noise. Currently, I use `import os` and then use a command line speech program to say "Process complete". I much rather it be a simple "bell."
I know that there's a function that can be used in *Cocoa* apps, `NSBeep`, but I don't think that has much anything to do with this.
I've also tried
```
print(\a)
```
but that didn't work.
I'm using a Mac, if you couldn't tell by my *Cocoa* comment, so that may help.
|
Have you tried :
```
import sys
sys.stdout.write('\a')
sys.stdout.flush()
```
That works for me here on Mac OS 10.5
Actually, I think your original attempt works also with a little modification:
```
print('\a')
```
(You just need the single quotes around the character sequence).
|
If you have PyObjC (the Python - Objective-C bridge) installed or are running on OS X 10.5's system python (which ships with PyObjC), you can do
```
from AppKit import NSBeep
NSBeep()
```
to play the system alert.
|
Python Sound ("Bell")
|
[
"",
"python",
"macos",
"audio",
"terminal",
""
] |
I want to parse a config file sorta thing, like so:
```
[KEY:Value]
[SUBKEY:SubValue]
```
Now I started with a `StreamReader`, converting lines into character arrays, when I figured there's gotta be a better way. So I ask you, humble reader, to help me.
One restriction is that it has to work in a Linux/Mono environment (1.2.6 to be exact). I don't have the latest 2.0 release (of Mono), so try to restrict language features to C# 2.0 or C# 1.0.
|
> I considered it, but I'm not going to use XML. I am going to be writing this stuff by hand, and hand editing XML makes my brain hurt. :')
Have you looked at [YAML](http://www.yaml.org/)?
You get the benefits of XML without all the pain and suffering. It's used extensively in the ruby community for things like config files, pre-prepared database data, etc
here's an example
```
customer:
name: Orion
age: 26
addresses:
- type: Work
number: 12
street: Bob Street
- type: Home
number: 15
street: Secret Road
```
There appears to be a [C# library here](http://yaml-net-parser.sourceforge.net/default.html), which I haven't used personally, but yaml is pretty simple, so "how hard can it be?" :-)
I'd say it's preferable to inventing your own ad-hoc format (and dealing with parser bugs)
|
I was looking at almost this exact problem the other day: [this article](http://trackerrealm.com/blogs/2007/04/tokenize-string-with-c-regular.html) on string tokenizing is exactly what you need. You'll want to define your tokens as something like:
```
@"(?<level>\s) | " +
@"(?<term>[^:\s]) | " +
@"(?<separator>:)"
```
The article does a pretty good job of explaining it. From there you just start eating up tokens as you see fit.
Protip: For an [LL(1) parser](http://en.wikipedia.org/wiki/LL_parser) (read: easy), tokens cannot share a prefix. If you have `abc` as a token, you cannot have `ace` as a token
Note: The article's missing the | characters in its examples, just throw them in.
|
Best method of Textfile Parsing in C#?
|
[
"",
"c#",
"fileparse",
""
] |
I've been looking for some good genetic programming examples for C#. Does anyone know of good online/book resources? I also wonder if there is a C# library out there for Evolutionary/Genetic programming.
|
After developing [my own Genetic Programming didactic application](http://code.google.com/p/evo-lisa-clone/), I found a complete Genetic Programming Framework called [AForge.NET Genetics](http://code.google.com/p/aforge/source/browse/#svn/trunk/Sources/Genetic). It's a part of the [Aforge.NET library](http://code.google.com/p/aforge/). It's licensed under LGPL.
|
MSDN had an article last year about genetic programming: [Genetic Algorithms: Survival of the Fittest with Windows Forms](http://msdn.microsoft.com/en-us/magazine/cc163934.aspx)
|
Genetic Programming in C#
|
[
"",
"c#",
"genetic-algorithm",
"genetic-programming",
"evolutionary-algorithm",
""
] |
I have a ListView control, and I'm trying to figure out the easiest/best way to disallow changing the selected row(s), without *hiding* the selected row(s).
I know there's a `HideSelection` property, but that only works when the `ListView` is still enabled (but not focused). I need the selection to be viewable even when the ListView is disabled.
How can I implement this?
|
You could also make the ListView owner-drawn. You then have complete control over how the items look, whether they are selected or not, and whether the ListView itself is enabled or not. The DrawListViewItemEventArgs provides a way to ask the ListView to draw individual parts of the item, so you only have to draw the bits you're interested in. For example, you can draw the background of the item but leave it up to the ListView to draw the text.
|
There are two options: change the colors the selected rows use when the control is disabled, or change all the other rows to simulate that they are disabled, except for the selected one. The first option is obviously the easiest, and the second obviously needs some extra safeguards.
I have actually done the first option before and it works quite well. You just have to remember to change the colors back to the defaults in case another row is selected later on in the process.
|
Disabling a ListView in C#, but still showing the current selection
|
[
"",
"c#",
".net",
"winforms",
"listview",
""
] |
Every time I create a new project I copy the last project's Ant file to the new one and make the appropriate changes (trying at the same time to make it more flexible for the next project). But since I didn't really think about it at the beginning, the file has started to look really ugly.
Do you have an Ant template that can be easily ported in a new project? Any tips/sites for making one?
Thank you.
|
An alternative to making a template is to evolve one by gradually generalising your current project's Ant script so that there are fewer changes to make the next time you copy it for use on a new project. There are several things you can do.
Use ${ant.project.name} in file names, so you only have to mention your application name in the project element. For example, if you generate myapp.jar:
```
<project name="myapp">
...
<target name="jar">
...
<jar jarfile="${ant.project.name}.jar" ...
```
Structure your source directory structure so that you can package your build by copying whole directories, rather than naming individual files. For example, if you are copying JAR files to a web application archive, do something like:
```
<copy todir="${war}/WEB-INF/lib" flatten="true">
<fileset dir="lib" includes="**/*.jar">
</copy>
```
Use properties files for machine-specific and project-specific build file properties.
```
<!-- Machine-specific property over-rides -->
<property file="/etc/ant/build.properties" />
<!-- Project-specific property over-rides -->
<property file="build.properties" />
<!-- Default property values, used if not specified in properties files -->
<property name="jboss.home" value="/usr/share/jboss" />
...
```
Note that Ant properties cannot be changed once set, so you override a value by defining a new value *before* the default value.
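As a rough starting point, a generalised skeleton following the tips above might look like this (the target and property names here are only suggestions, not a standard):

```xml
<?xml version="1.0"?>
<project name="myapp" default="jar" basedir=".">
    <!-- Machine- and project-specific overrides, loaded before defaults -->
    <property file="build.properties"/>
    <property name="src" value="src"/>
    <property name="build" value="build"/>

    <target name="init">
        <mkdir dir="${build}/classes"/>
    </target>

    <target name="compile" depends="init">
        <javac srcdir="${src}" destdir="${build}/classes"/>
    </target>

    <!-- ${ant.project.name} keeps the jar name out of the file body -->
    <target name="jar" depends="compile">
        <jar jarfile="${build}/${ant.project.name}.jar"
             basedir="${build}/classes"/>
    </target>

    <target name="clean">
        <delete dir="${build}"/>
    </target>
</project>
```

For a new project you would then only change the `name` attribute and the properties file, not the targets themselves.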
|
You can give <http://import-ant.sourceforge.net/> a try.
It is a set of build file snippets that can be used to create simple custom build files.
|
Best Apache Ant Template
|
[
"",
"java",
"ant",
""
] |
Suppose I have a stringbuilder in C# that does this:
```
StringBuilder sb = new StringBuilder();
string cat = "cat";
sb.Append("the ").Append(cat).Append(" in the hat");
string s = sb.ToString();
```
would that be as efficient or any more efficient as having:
```
string cat = "cat";
string s = String.Format("The {0} in the hat", cat);
```
If so, why?
**EDIT**
After some interesting answers, I realised I probably should have been a little clearer in what I was asking. I wasn't so much asking for which was quicker at concatenating a string, but which is quicker at ***injecting*** one string into another.
In both cases above I want to inject one or more strings into the middle of a predefined template string.
Sorry for the confusion
|
**NOTE:** This answer was written when .NET 2.0 was the current version. This may no longer apply to later versions.
`String.Format` uses a `StringBuilder` internally:
```
public static string Format(IFormatProvider provider, string format, params object[] args)
{
if ((format == null) || (args == null))
{
throw new ArgumentNullException((format == null) ? "format" : "args");
}
StringBuilder builder = new StringBuilder(format.Length + (args.Length * 8));
builder.AppendFormat(provider, format, args);
return builder.ToString();
}
```
The above code is a snippet from mscorlib, so the question becomes "is `StringBuilder.Append()` faster than `StringBuilder.AppendFormat()`"?
Without benchmarking I'd probably say that the code sample above would run more quickly using `.Append()`. But it's a guess, try benchmarking and/or profiling the two to get a proper comparison.
This chap, Jerry Dixon, did some benchmarking:
> <http://jdixon.dotnetdevelopersjournal.com/string_concatenation_stringbuilder_and_stringformat.htm>
**Updated:**
Sadly the link above has since died. However there's still a copy on the Way Back Machine:
> <http://web.archive.org/web/20090417100252/http://jdixon.dotnetdevelopersjournal.com/string_concatenation_stringbuilder_and_stringformat.htm>
At the end of the day it depends whether your string formatting is going to be called repetitively, i.e. you're doing some serious text processing over 100's of megabytes of text, or whether it's being called when a user clicks a button now and again. Unless you're doing some huge batch processing job I'd stick with String.Format, it aids code readability. If you suspect a perf bottleneck then stick a profiler on your code and see where it really is.
|
From the [MSDN documentation](http://msdn.microsoft.com/en-us/library/system.text.stringbuilder.aspx):
> The performance of a concatenation operation for a String or StringBuilder object depends on how often a memory allocation occurs. A String concatenation operation always allocates memory, whereas a StringBuilder concatenation operation only allocates memory if the StringBuilder object buffer is too small to accommodate the new data. Consequently, the String class is preferable for a concatenation operation if a fixed number of String objects are concatenated. In that case, the individual concatenation operations might even be combined into a single operation by the compiler. A StringBuilder object is preferable for a concatenation operation if an arbitrary number of strings are concatenated; for example, if a loop concatenates a random number of strings of user input.
|
Is String.Format as efficient as StringBuilder
|
[
"",
"c#",
"vb.net",
"performance",
"stringbuilder",
"string.format",
""
] |
I've been trying to understand how Ruby blocks work, and to do that I've been trying to implement them in C.
One easy way to implement closures is to pass a `void*` pointing at the enclosing stack frame into the closure/function, but Ruby blocks also seem to handle returns and break statements from the scope that uses the block.
```
loop do
break i if (i >= 4000)
i *= 2
end
```
I think one of the closures proposals for Java works like this also.
So, how would you implement Ruby-blocks/Java-closures in C?
|
The concept of closures requires the concept of contexts. C's context is based on the stack and the registers of the CPU, so to create a block/closure, you need to be able to manipulate the stack pointer in a correct (and reentrant) way, and store/restore registers as needed.
The way this is done by interpreters or virtual machines is to have a `context` structure or something similar, and not use the stack and registers directly. This structure keeps track of a stack and optionally some registers, if you're designing a register based VM. At least, that's the simplest way to do it (though slightly less performant than actually mapping things correctly).
|
I haven't actually implemented any of this, so take it with a sack of salt.
There are two parts to a closure: the data environment and the code environment. Like you said, you can probably pass a void\* to handle references to data. You could probably use setjmp and longjmp to implement the non-linear control flow jumps that the Ruby break requires.
If you want closures you should probably be programming in a language that actually supports them. :-)
UPDATE: Interesting things are happening in Clang. They've prototyped a closure for C. <http://lists.cs.uiuc.edu/pipermail/cfe-dev/2008-August/002670.html> might prove to be interesting reading.
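The two parts mentioned above can be sketched in C: the data environment as a `void*`, and the non-linear `break` as a `setjmp`/`longjmp` pair. All names and the `block_t` layout here are invented for illustration:

```c
#include <setjmp.h>

/* A "block" is a function pointer plus a captured environment. */
typedef struct {
    void (*fn)(void *env, jmp_buf brk);
    void *env;
} block_t;

/* Equivalent of Ruby's `loop do ... end`: runs the block forever,
 * until the block "breaks" by longjmp-ing out of the loop. */
static void loop_do(block_t b)
{
    jmp_buf brk;
    if (setjmp(brk) != 0)
        return;                 /* the block executed `break` */
    for (;;)
        b.fn(b.env, brk);
}

/* The body of: loop do; break if i >= 4000; i *= 2; end */
static void double_until_4000(void *env, jmp_buf brk)
{
    int *i = env;               /* the captured variable */
    if (*i >= 4000)
        longjmp(brk, 1);        /* break out of loop_do */
    *i *= 2;
}
```

Starting from `i = 1` and calling `loop_do` with `{ double_until_4000, &i }` leaves `i` at 4096. Note this only emulates `break`; a Ruby-style `return` from the defining method would need a second `jmp_buf` captured in the environment, and it stops working entirely once the defining stack frame has exited.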
|
Ruby blocks/Java closures in C
|
[
"",
"java",
"c",
"ruby",
""
] |
My question concerns C# and how to access static members ... Well, I don't really know how to explain it (which kind of is bad for a question, isn't it?). I will just give you some sample code:
```
class test<T>{
int method1(Obj Parameter1){
//in here I want to do something which I would explain as
T.TryParse(Parameter1);
//my problem is that it does not work ... I get an error.
//just to explain: if I declare test<int> (with type Integer)
//I want my sample code to call int.TryParse(). If it were String
//it should have been String.TryParse()
}
}
```
So thank you guys for your answers (by the way, the question is: how would I solve this problem without getting an error?). This is probably quite an easy question for you!
---
Edit: Thank you all for your answers!
Though I think the try - catch phrase is the most elegant, I know from my experience with vb that it can really be a bummer. I used it once and it took about 30 minutes to run a program, which later on only took 2 minutes to compute just because I avoided try - catch.
This is why I chose the switch statement as the best answer. It makes the code more complicated but on the other hand I imagine it to be relatively fast and relatively easy to read. (Though I still think there should be a more elegant way ... maybe in the next language I learn)
---
Though if you have some other suggestion I am still waiting (and willing to participate)
|
One more way to do it, this time some reflection in the mix:
```
static class Parser
{
public static bool TryParse<TType>( string str, out TType x )
{
// Get the type on that TryParse shall be called
Type objType = typeof( TType );
// Enumerate the methods of TType
foreach( MethodInfo mi in objType.GetMethods() )
{
if( mi.Name == "TryParse" )
{
// We found a TryParse method, check for the 2-parameter-signature
ParameterInfo[] pi = mi.GetParameters();
if( pi.Length == 2 ) // Find TryParse( String, TType )
{
// Build a parameter list for the call
object[] paramList = new object[2] { str, default( TType ) };
// Invoke the static method
object ret = objType.InvokeMember( "TryParse", BindingFlags.InvokeMethod, null, null, paramList );
// Get the output value from the parameter list
x = (TType)paramList[1];
return (bool)ret;
}
}
}
// Maybe we should throw an exception here, because we were unable to find the TryParse
// method; this is not just a unable-to-parse error.
x = default( TType );
return false;
}
}
```
The next step would be trying to implement
```
public static TRet CallStaticMethod<TRet>( object obj, string methodName, params object[] args );
```
With full parameter type matching etc.
|
The problem is that TryParse isn't defined on an interface or base class anywhere, so you can't make an assumption that the type passed into your class will have that function. Unless you can contrain T in some way, you'll run into this a lot.
[Constraints on Type Parameters](http://msdn.microsoft.com/en-us/library/d5x73970.aspx "C# Programming Guide (MSDN)")
|
Generics in c# & accessing the static members of T
|
[
"",
"c#",
"generics",
"static",
"methods",
"data-access",
""
] |
I'm using two different libraries in my project, and both of them supply a basic rectangle `struct`. The problem with this is that there seems to be no way to insert a conversion between the types, so I can't call a function in one library with the result from a function in the other. If I was the author of either of these, I could create conversions, from the outside, I can't.
library a:
```
typedef struct rectangle { sint16 x; sint16 y; uint16 w; uint16 h; } rectangle;
```
library b:
```
class Rect {
int x; int y; int width; int height;
/* ... */
};
```
Now, I can't make a converter `class`, because C++ will only look for a conversion in one step. This is probably a good thing, because there would be a lot of possibilities involving creating new objects of all kinds of types.
I can't make an operator that takes the `struct` from `a` and supplies an object of the `class` from `b`:
```
foo.cpp:123 error: ‘operator b::Rect(const rectangle&)’ must be a nonstatic member function
```
So, is there a sensible way around this?
## edit:
I should perhaps also point out that I'd really like some solution that makes working with the result seamless, since I don't expect to be that coder. (Though I agree, old-school, explicit, conversion would have been a good choice. The other branch, [`reinterpret_cast`](http://en.cppreference.com/w/cpp/language/reinterpret_cast) has the same problem..)
## edit2:
Actually, none of the suggestions really answer my actual question, [Konrad Rudolph](https://stackoverflow.com/users/1968/konrad-rudolph) seems to be correct. C++ actually can't do this. Sucks, but true. (If it makes any difference, I'm going to try subclassing as suggested by [CodingTheWheel](https://stackoverflow.com/users/90/codingthewheel).
|
If you can't modify the structures then you have no alternative to writing a manual conversion function because overloading conversion operators only works within the class body. There's no other way.
|
Create an intermediate shim type "`RectangleEx`", and define custom conversions to/from the two third-party rectangle types. Whenever you speak to either API, do so through the shim class.
Another way would be to derive a `class` from either `rectangle` or `Rect`, and add the conversions/constructors there.
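A minimal sketch of the shim approach (the two type definitions are stand-ins reproduced from the question; `RectangleEx` and its member names are illustrative):

```cpp
#include <cstdint>

// Stand-ins for the two third-party types, as given in the question.
typedef struct rectangle { int16_t x; int16_t y; uint16_t w; uint16_t h; } rectangle;
struct Rect { int x; int y; int width; int height; };

// Shim that constructs from either library's type and converts back to both.
// Since C++ applies at most one user-defined conversion per call, you still
// have to name the shim explicitly at each call site: b_func(RectangleEx(a_rect)).
class RectangleEx {
public:
    RectangleEx(const rectangle& r) : x(r.x), y(r.y), w(r.w), h(r.h) {}
    RectangleEx(const Rect& r)      : x(r.x), y(r.y), w(r.width), h(r.height) {}
    operator rectangle() const {
        rectangle r = { static_cast<int16_t>(x), static_cast<int16_t>(y),
                        static_cast<uint16_t>(w), static_cast<uint16_t>(h) };
        return r;
    }
    operator Rect() const {
        Rect r = { x, y, w, h };
        return r;
    }
private:
    int x, y, w, h;
};
```

Keeping the shim as the only place that knows both layouts means a future change to either library touches one class.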
|
How can I convert types in C++?
|
[
"",
"c++",
"oop",
"types",
""
] |
I am trying to implement an "out of proc" COM server written in C#. How do I do this?
I need the C# code to be "out of proc" from my main C++ application, because I cannot load the .NET runtime into my main process space
*WHY?:*
My C++ code is in a DLL that is loaded into many different customer EXE's, some of which use different versions of the .NET runtime. Since there can only be one runtime loaded into a single process, my best bet seems to be to put my C# code into another process.
|
You can create COM+ components using System.EnterpriseServices.ServicedComponent. Consequently, you'll be able to create out-of-proc and in-proc (client) component activation as well as all COM+ benefits of pooling, remoting, run as a windows service etc.
|
[Here](http://www.codeproject.com/KB/COM/BuildCOMServersInDotNet.aspx) we can read that it is possible, but the exe will be loaded as a library and not started in its own process like an exe. I don't know if that is a problem for you? It also contains some possible solutions if you do want to make it act like a real out-of-process COM server. But maybe using another way of inter-process communication is better, like .NET Remoting.
|
What do I need to do to implement an "out of proc" COM server in C#?
|
[
"",
"c#",
"com",
"interop",
""
] |
I am merging a CVS branch, and one of the larger changes is the replacement, wherever it occurs, of a Singleton pattern with abstract classes that have a static initialisation block and all static methods.
Is this something that's worth keeping, given that it will require merging a lot of conflicts? What sort of situation would make this refactoring worthwhile?
We are running this app under Weblogic 8.1 (so JDK 1.4.2)
---
sorry Thomas, let me clarify..
the HEAD version has the traditional singleton pattern (private constructor, getInstance() etc)
the branch version has no constructor, is a 'public abstract class' and modified all the methods on the object to be 'static'. The code that used to exist in the private constructor is moved into a static block.
Then all usages of the class are changed which causes multiple conflicts in the merge.
There are a few cases where this change was made.
|
From a strict runtime performance point of view, the difference is really negligible. The main difference between the two lies in the fact that the "static" lifecycle is linked to the classloader, whereas for the singleton it's a regular instance lifecycle. Usually it's better to stay away from the classloader business; you avoid some tricky problems, especially when you try to reload the web application.
|
I would use a singleton if it needed to store any state, and static classes otherwise. There's no point in instantiating something, even a single instance, unless it needs to store something.
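As a sketch of the two shapes being compared (class and member names here are illustrative, not from the branch in question):

```java
// Traditional singleton: a real instance, so it can hold state,
// implement interfaces, and be swapped out in tests.
class Counter {
    private static final Counter INSTANCE = new Counter();
    private int count;                 // state lives on the instance
    private Counter() {}               // private constructor
    public static Counter getInstance() { return INSTANCE; }
    public int next() { return ++count; }
}

// All-static variant: no instances at all; the lifecycle is tied to the
// classloader, and the old constructor body moves into a static block.
abstract class StaticCounter {
    private static int count;
    static {
        count = 0;                     // code that used to be in the private constructor
    }
    public static int next() { return ++count; }
}
```

Call sites change from `Counter.getInstance().next()` to `StaticCounter.next()`, which is exactly the kind of widespread edit that produces the merge conflicts described in the question.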
|
Java Singleton vs static - is there a real performance benefit?
|
[
"",
"java",
"design-patterns",
"singleton",
""
] |
I'm seeing strange errors when my C++ code has min() or max() calls. I'm using Visual C++ compilers.
|
Check if your code is including the **windows.h** header file and either your code or other third-party headers have their own **min()**/**max()** definitions. If yes, then prepend your **windows.h** inclusion with a definition of **NOMINMAX** like this:
```
#define NOMINMAX
#include <windows.h>
```
|
Another possibility could be from side effects. Most min/max macros will include the parameters multiple times and may not do what you expect. Errors and warnings could also be generated.
```
max(a,i++) expands as ((a) > (i++) ? (a) : (i++))
```
afterwards, `i` has been incremented either once or twice, depending on which branch of the conditional was evaluated.
The parentheses in the expansion are there to avoid precedence problems if you call the macro with expressions. Try expanding `max(a, b+c)`.
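Both problems can be demonstrated in a few lines; the `BAD_MAX` macro and helper names below are illustrative, and the parenthesised `(std::max)` call is a way to bypass a function-like macro without defining `NOMINMAX`:

```cpp
#include <algorithm>

// A typical function-like max macro, like the one in older windows.h.
#define BAD_MAX(a, b) ((a) > (b) ? (a) : (b))

// When the condition is false, the second (i++) is evaluated too, so i is
// incremented twice and the returned "max" is already stale.
int demo_side_effect() {
    int i = 5;
    int m = BAD_MAX(1, i++);   // expands to ((1) > (i++) ? (1) : (i++))
    return i * 100 + m;        // i == 7, m == 6 -> returns 706
}

// Wrapping the function name in parentheses suppresses function-like macro
// expansion, so the std:: template is called even if a max macro is defined.
int safe_max(int a, int b) {
    return (std::max)(a, b);
}
```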
|
Strange C++ errors with code that has min()/max() calls
|
[
"",
"c++",
"c",
""
] |
I have a SQL script that inserts data (via INSERT statements currently numbering in the thousands) One of the columns contains a unique identifier (though not an IDENTITY type, just a plain ol' int) that's actually unique across a few different tables.
I'd like to add a scalar function to my script that gets the next available ID (i.e. last used ID + 1) but I'm not sure this is possible because there doesn't seem to be a way to use a global or static variable from within a UDF, I can't use a temp table, and I can't update a permanent table from within a function.
Currently my script looks like this:
```
declare @v_baseID int
exec dbo.getNextID @v_baseID out --sproc to get the next available id
--Lots of these - where n is a hardcoded value
insert into tableOfStuff (someStuff, uniqueID) values ('stuff', @v_baseID + n )
exec dbo.UpdateNextID @v_baseID + lastUsedn --sproc to update the last used id
```
But I would like it to look like this:
```
--Lots of these
insert into tableOfStuff (someStuff, uniqueID) values ('stuff', getNextID() )
```
Hardcoding the offset is a pain in the arse, and is error prone. Packaging it up into a simple scalar function is very appealing, but I'm starting to think it can't be done that way since there doesn't seem to be a way to maintain the offset counter between calls. Is that right, or is there something I'm missing.
We're using SQL Server 2005 at the moment.
*edits for clarification:*
Two users hitting it won't happen. This is an upgrade script that will be run only once, and never concurrently.
The actual sproc isn't prefixed with sp\_, fixed the example code.
In normal usage, we do use an id table and a sproc to get IDs as needed, I was just looking for a cleaner way to do it in this script, which essentially just dumps a bunch of data into the db.
|
> I'm starting to think it can't be done that way since there doesn't seem to be a way to maintain the offset counter between calls. Is that right, or is there something I'm missing.
You aren't missing anything; SQL Server does not support global variables, and it doesn't support data modification within UDFs. And even if you wanted to do something as kludgy as using CONTEXT\_INFO (see <http://weblogs.sqlteam.com/mladenp/archive/2007/04/23/60185.aspx>), you can't set that from within a UDF anyway.
Is there a way you can get around the "hardcoding" of the offset by making that a variable and looping over the iteration of it, doing the inserts within that loop?
|
If you have 2 users hitting it at the same time they will get the same id. Why not use an id table with an identity column instead? Insert into that and use the generated value as the unique (guaranteed) id; this will also perform much faster.
> > sp\_getNextID
never ever prefix procs with sp\_; this has performance implications because the optimizer first checks the master DB to see if that proc exists there and then the local DB. Also, if MS decides to create an sp\_getNextID in a service pack, yours will never get executed
|
Can I maintain state between calls to a SQL Server UDF?
|
[
"",
"sql",
"sql-server",
"sql-server-2005",
""
] |
I'm unsure whether the following code would ensure all conditions given in Comparator's Javadoc.
```
class TotalOrder<T> implements Comparator<T> {
public int compare(T o1, T o2) {
if (o1 == o2 || equal(o1, o2)) return 0;
int h1 = System.identityHashCode(o1);
int h2 = System.identityHashCode(o2);
if (h1 != h2) {
return h1 < h2 ? -1 : 1;
}
// equals returned false but identity hash code was same, assume o1 == o2
return 0;
}
boolean equal(Object o1, Object o2) {
return o1 == null ? o2 == null : o1.equals(o2);
}
}
```
Will the code above impose a total ordering on all instances of any class, even if that class does not implement Comparable?
|
> Hey, look at what I found!
>
> <http://gafter.blogspot.com/2007/03/compact-object-comparator.html>
Oh yes, I forgot about the IdentityHashMap (Java 6 and above only). Just have to pay attention at releasing your comparator.
|
Hey, look at what I found!
<http://gafter.blogspot.com/2007/03/compact-object-comparator.html>
This is exactly what I was looking for.
|
Impose a total ordering on all instances of *any* class in Java
|
[
"",
"java",
"algorithm",
""
] |
I have read about partial methods in the latest [C# language specification](http://msdn.microsoft.com/en-us/vcsharp/aa336809.aspx), so I understand the principles, but I'm wondering how people are actually using them. Is there a particular design pattern that benefits from partial methods?
|
Partial methods have been introduced for similar reasons to why partial classes were in .Net 2.
A partial class is one that can be split across multiple files - the compiler combines them all into a single class when it compiles.
The advantage for this is that Visual Studio can provide a graphical designer for part of the class while coders work on the other.
The most common example is the Form designer. Developers don't want to be positioning buttons, input boxes, etc by hand most of the time.
* In .Net 1 it was auto-generated code in a `#region` block
* In .Net 2 these became separate designer classes - the form is still one class, it's just split into one file edited by the developers and one by the form designer
This makes maintaining both much easier. Merges are simpler and there's less risk of the VS form designer accidentally undoing coders' manual changes.
In .Net 3.5 Linq has been introduced. Linq has a DBML designer for building your data structures, and that auto-generates code.
The extra wrinkle here is that the generated code needs to provide methods that developers might want to fill in.
As developers will extend these classes (with extra partial files) they couldn't use abstract methods here.
The other issue is that most of the time these methods won't be called, and calling empty methods is a waste of time.
Empty methods [are not optimised out](https://stackoverflow.com/questions/11783/in-net-will-empty-method-calls-be-optimized-out).
So Linq generates empty partial methods. If you don't create your own partial to complete them the C# compiler will just optimise them out.
So that it can do this, partial methods always return void.
If you create a new Linq DBML file it will auto-generate a partial class, something like
```
[System.Data.Linq.Mapping.DatabaseAttribute(Name="MyDB")]
public partial class MyDataContext : System.Data.Linq.DataContext
{
...
partial void OnCreated();
partial void InsertMyTable(MyTable instance);
partial void UpdateMyTable(MyTable instance);
partial void DeleteMyTable(MyTable instance);
...
```
Then in your own partial file you can extend this:
```
public partial class MyDataContext
{
partial void OnCreated() {
//do something on data context creation
}
}
```
If you don't extend these methods they get optimised right out.
Partial methods can't be public - as then they'd have to be there for other classes to call. If you write your own code generators I can see them being useful, but otherwise they're only really useful for the VS designer.
The example I mentioned before is one possibility:
```
//this code will get optimised out if no body is implemented
partial void DoSomethingIfCompFlag();
#if COMPILER_FLAG
//this code won't exist if the flag is off
partial void DoSomethingIfCompFlag() {
//your code
}
#endif
```
Another potential use is if you had a large and complex class split across multiple files you might want partial references in the calling file. However I think in that case you should consider simplifying the class first.
|
Partial methods are very similar in concept to the GoF [Template Method](http://www.dofactory.com/Patterns/PatternTemplate.aspx) behavioural pattern ([Design Patterns](http://books.google.com/books?id=aQ1RAAAAMAAJ&q=design+patterns&dq=design+patterns&ei=pNO-SJqWEY_-jgH1pvHpDA&pgis=1), p325).
They allow the behaviour of an algorithm or operation to be defined in one place and implemented or changed elsewhere, enabling extensibility and customisation. I've started to use partial methods in C# 3.0 instead of template methods because I think the code is cleaner.
One nice feature is that unimplemented partial methods incur no runtime overhead as they're compiled away.
|
How are partial methods used in C# 3.0?
|
[
"",
"c#",
"design-patterns",
".net-3.5",
"partial-methods",
""
] |
I am trying to code TDD style in PHP and one of my biggest stumbling blocks (other than lack of a decent IDE) is that I have to make my own hacked together IoC container just to inject all my mock objects properly.
Has anyone used an IoC container in PHP? All I've been able to find is [PHP IOC on the ever-annoying phpclasses.org](http://www.phpclasses.org/browse/package/3382.html) and it seems to have almost no documentation and not much of a following.
|
I played with some DI Frameworks for PHP, but I haven't used one in production. Have some links:
1. [http://www.stubbles.net/](http://www.stubbles.net), which I think is the oldest one I tried
2. <http://php.xjconf.net/>
3. [FLOW3](http://flow3.typo3.org/) - I believe this one will become a very nice framework (it's beta right now)
You mentioned you would use it for TDD - so maybe have a look at [Dependency Injection for Unit Tests in PHP](http://techblog.tilllate.com/2007/08/21/using-dependency-injection-for-unit-tests/)
|
[Phemto](http://phemto.sourceforge.net/) is being developed again, and looks quite promising IMHO.
A few other similar projects, that you might want to look at:
[bucket](http://github.com/troelskn/bucket) (Disclaimer: I'm the principal author of this one)
The [php-port of picocontainer](http://fisheye.codehaus.org/browse/picocontainer/php/picocontainer/trunk/container) has been around for a long time. I don't think it's being actively developed any more, but on the other hand, I believe that it's quite stable. It's been a long time since I looked at it though.
~~A rather new project, I recently stumbled upon is [Crafty](http://phpcrafty.sourceforge.net/index.php). Not sure how many people uses it though.~~
~~I'm also watching [sphicy](http://www.beberlei.de/sphicy/), which looks interesting.~~
|
Inversion of Control Container for PHP?
|
[
"",
"php",
"frameworks",
""
] |
How can I build a loop in JavaScript?
|
**For loops**
```
for (i = startValue; i <= endValue; i++) {
// Before the loop: i is set to startValue
// After each iteration of the loop: i++ is executed
// The loop continues as long as i <= endValue is true
}
```
**For...in loops**
```
for (i in things) {
// If things is an array, i will usually contain the array keys *not advised*
// If things is an object, i will contain the member names
// Either way, access values using: things[i]
}
```
It is bad practice to use `for...in` loops to iterate over arrays. It goes against the [ECMA 262](http://www.ecma-international.org/publications/standards/Ecma-262.htm) standard and can cause problems when non-standard attributes or methods are added to the Array object, e.g. by [Prototype](http://www.prototypejs.org/api/array).
*(Thanks to [Chase Seibert](https://stackoverflow.com/users/7679/chase-seibert) for pointing this out in the comments)*
**While loops**
```
while (myCondition) {
// The loop will continue until myCondition is false
}
```
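**Do...while loops**

A related form worth adding: the body always runs once before the condition is first checked, unlike `while`:

```javascript
var attempts = 0;
do {
    attempts++;            // the body runs before the condition is checked,
} while (attempts < 3);    // so it always executes at least once
// attempts is now 3
```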
|
Here is an example of a for loop:
We have an array of items **nodes**.
```
for(var i = 0; i< nodes.length; i++){
var node = nodes[i];
alert(node);
}
```
|
How do I build a loop in JavaScript?
|
[
"",
"javascript",
"loops",
""
] |
Assuming you can't use LINQ for whatever reason, is it a better practice to place your queries in stored procedures, or is it just as good a practice to execute *ad hoc* queries against the database (say, SQL Server for argument's sake)?
|
In my experience writing mostly WinForms Client/Server apps these are the simple conclusions I've come to:
**Use Stored Procedures:**
1. For any complex data work. If you're going to be doing something truly requiring a cursor or temp tables it's usually fastest to do it within SQL Server.
2. When you need to lock down access to the data. If you don't give table access to users (or role or whatever) you can be sure that the only way to interact with the data is through the SP's you create.
**Use ad-hoc queries:**
1. For CRUD when you don't need to restrict data access (or are doing so in another manner).
2. For simple searches. Creating SP's for a bunch of search criteria is a pain and difficult to maintain. If you can generate a reasonably fast search query use that.
In most of my applications I've used both SP's and ad-hoc sql, though I find I'm using SP's less and less as they end up being code just like C#, only harder to version control, test, and maintain. I would recommend using ad-hoc sql unless you can find a specific reason not to.
|
I can't speak to anything other than SQL Server, but the performance argument is **not** significantly valid there unless you're on 6.5 or earlier. SQL Server has been caching ad-hoc execution plans for roughly a decade now.
|
Which is better: Ad hoc queries or stored procedures?
|
[
"",
"sql",
"stored-procedures",
""
] |
I want to show HTML content inside Flash. Is there some way to do this?
I am talking about full blown HTML (with JavaScript if possible).
|
[Here is a decent article](http://www.wdvl.com/Reviews/Graphics/Flash5/external.html) on how to accomplish that.
[@Flubba](https://stackoverflow.com/questions/22909/is-there-some-way-to-show-html-content-inside-flash#24396): I didn't say "*great*" article, I said "*decent*" - there is a big difference. Besides, no one else had answered and it had been around a while. I figured a "decent" answer was better than none. I am no Flash expert, so...
|
flashQuery supports HTML tags and CSS rules for Flash. It effectively turns Flash into a real browser. Here it is: <http://www.flashquery.org/>
|
Is there some way to show HTML content inside Flash?
|
[
"",
"javascript",
"html",
"flash",
"adobe",
""
] |
I have a console app that needs to display the state of items, but rather than having text scroll by like mad I'd rather see the current status keep showing up on the same lines. For the sake of example:
> `Running... nn% complete`
> `Buffer size: bbbb bytes`
should be the output, where 'nn' is the current percentage complete, and 'bbbb' is a buffer size, updated periodically on the same lines of the console.
The first approach I took simply printed the correct number of backspaces to the console before printing the new state, but this has an obnoxious flicker that I want to get rid of. I also want to stick to either standard library or MS-provided functionality (VC 8) so as not to introduce another dependency for this one simple need.
|
You can use [SetConsoleCursorPosition](http://msdn.microsoft.com/en-us/library/ms686025.aspx). You'll need to call [GetStdHandle](http://msdn.microsoft.com/en-us/library/ms683231.aspx) to get a handle to the output buffer.
|
Joseph, JP, and CodingTheWheel all provided valuable help.
For my simple case, the most straight-forward approach seemed to be based on [CodingTheWheel's answer](https://stackoverflow.com/questions/45286/how-can-i-overwrite-the-same-portion-of-the-console-in-a-windows-native-c-conso#45316):
```
// before entering update loop
HANDLE h = GetStdHandle(STD_OUTPUT_HANDLE);
CONSOLE_SCREEN_BUFFER_INFO bufferInfo;
GetConsoleScreenBufferInfo(h, &bufferInfo);
// update loop
while (updating)
{
// reset the cursor position to where it was each time
SetConsoleCursorPosition(h, bufferInfo.dwCursorPosition);
//...
// insert combinations of sprintf, printf, etc. here
//...
}
```
For more complicated problems, the full [console API](http://msdn.microsoft.com/en-us/library/ms682073.aspx) as provided by [JP's answer](https://stackoverflow.com/questions/45286/how-can-i-overwrite-the-same-portion-of-the-console-in-a-windows-native-c-conso#45317), in coordination with the examples provided via the [link](http://www.benryves.com/tutorials/?t=winconsole) from [Joseph's answer](https://stackoverflow.com/questions/45286/how-can-i-overwrite-the-same-portion-of-the-console-in-a-windows-native-c-conso#45302) may prove useful, but I found the work necessary to use `CHAR_INFO` too tedious for such a simple app.
|
How can I overwrite the same portion of the console in a Windows native C++ console app, without using a 3rd Party library?
|
[
"",
"c++",
"windows",
"console",
""
] |
I have the following arrays:
```
$artist = ["the roots", "michael jackson", "billy idol", "more", "and more", "and_YET_MORE"];
$count = [5, 3, 9, 1, 1, 3];
```
I want to generate a tag cloud that will have artists with a higher number in `$count` enclosed in `h6` tags and the lowest enclosed `h1` tags.
|
You will want to add a logarithmic function to it too (taken from Tagadelic, my Drupal module to create tag clouds: <http://drupal.org/project/tagadelic>):
```
db_query('SELECT COUNT(*) AS count, id, name FROM ... ORDER BY count DESC');
$steps = 6;
$tags = array();
$min = 1e9;
$max = -1e9;
while ($tag = db_fetch_object($result)) {
$tag->number_of_posts = $tag->count; #sets the amount of items a certain tag has attached to it
$tag->count = log($tag->count);
$min = min($min, $tag->count);
$max = max($max, $tag->count);
$tags[$tag->tid] = $tag;
}
// Note: we need to ensure the range is slightly too large to make sure even
// the largest element is rounded down.
$range = max(.01, $max - $min) * 1.0001;
foreach ($tags as $key => $value) {
$tags[$key]->weight = 1 + floor($steps * ($value->count - $min) / $range);
}
```
Then in your view or template:
```
foreach ($tags as $tag) {
$output .= "<h$tag->weight>$tag->name</h$tag->weight>"
}
```
|
Off the top of my head...
```
$artist = array("the roots","michael jackson","billy idol","more","and more","and_YET_MORE");
$count = array(5,3,9,1,1,3);
$highest = max($count);
for ($x = 0; $x < count($artist); $x++)
{
$normalized = $count[$x] / $highest;
$heading = ceil($normalized * 6); // 6 heading types
echo "<h".$heading.">".$artist[$x]."</h".$heading.">";
}
```
|
Create a tag cloud by wrapping array values in h1 through h6 tags based on a mapping array of weights
|
[
"",
"php",
"arrays",
"mapping",
"tag-cloud",
""
] |
I'm looking for a "safe" eval function, to implement spreadsheet-like calculations (using numpy/scipy).
The functionality to do this (the [rexec module](http://docs.python.org/lib/module-rexec.html)) has been removed from Python since 2.3 due to apparently unfixable security problems. There are several third-party hacks out there that purport to do this - the most thought-out solution that I have found is
[this Python Cookbook recipe](http://code.activestate.com/recipes/496746/), "safe\_eval".
Am I reasonably safe if I use this (or something similar), to protect from malicious code, or am I stuck with writing my own parser? Does anyone know of any better alternatives?
EDIT: I just discovered [RestrictedPython](http://pypi.python.org/pypi/RestrictedPython), which is part of Zope. Any opinions on this are welcome.
|
Depends on your definition of safe I suppose. A lot of the security depends on what you pass in and what you are allowed to pass in the context. For instance, if a file is passed in, I can open arbitrary files:
```
>>> names['f'] = open('foo', 'w+')
>>> safe_eval.safe_eval("baz = type(f)('baz', 'w+')", names)
>>> names['baz']
<open file 'baz', mode 'w+' at 0x413da0>
```
Furthermore, the environment is very restricted (you cannot pass in modules), thus, you can't simply pass in a module of utility functions like re or random.
On the other hand, you don't need to write your own parser, you could just write your own evaluator for the python ast:
```
>>> import compiler
>>> ast = compiler.parse("print 'Hello world!'")
```
That way, hopefully, you could implement safe imports. The other idea is to use Jython or IronPython and take advantage of Java/.Net sandboxing capabilities.
|
If you simply need to write down and read some data structure in Python, and don't need the actual capacity of executing custom code, this one is a better fit:
<http://code.activestate.com/recipes/364469-safe-eval/>
It guarantees that no code is executed; only static data structures are evaluated: strings, lists, tuples, dictionaries.
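For exactly that data-only use case, the standard library now covers it: `ast.literal_eval` (available since Python 2.6) evaluates only literal structures and raises `ValueError` for anything else:

```python
import ast

def safe_load(text):
    """Parse a string containing only Python literals (strings, numbers,
    tuples, lists, dicts, booleans, None). Any expression that could
    execute code raises ValueError instead of being evaluated."""
    return ast.literal_eval(text)
```

So `safe_load("{'a': [1, 2]}")` returns the dict, while `safe_load("__import__('os')")` raises `ValueError`.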
|
Is "safe_eval" really safe?
|
[
"",
"python",
"security",
""
] |
How do I list the symbols being exported from a .so file? If possible, I'd also like to know their source (e.g. if they are pulled in from a static library).
I'm using gcc 4.0.2, if that makes a difference.
|
The standard tool for listing symbols is `nm`; you can use it simply like this:
```
nm -gD yourLib.so
```
If you want to see symbols of a C++ library, add the "-C" option, which demangles the symbols (it's far more readable demangled).
```
nm -gDC yourLib.so
```
If your .so file is in elf format, you have two options:
Either `objdump` (`-C` is also useful for demangling C++):
```
$ objdump -TC libz.so
libz.so: file format elf64-x86-64
DYNAMIC SYMBOL TABLE:
0000000000002010 l d .init 0000000000000000 .init
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 free
0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 __errno_location
0000000000000000 w D *UND* 0000000000000000 _ITM_deregisterTMCloneTable
```
Or use `readelf`:
```
$ readelf -Ws libz.so
Symbol table '.dynsym' contains 112 entries:
Num: Value Size Type Bind Vis Ndx Name
0: 0000000000000000 0 NOTYPE LOCAL DEFAULT UND
1: 0000000000002010 0 SECTION LOCAL DEFAULT 10
2: 0000000000000000 0 FUNC GLOBAL DEFAULT UND free@GLIBC_2.2.5 (14)
3: 0000000000000000 0 FUNC GLOBAL DEFAULT UND __errno_location@GLIBC_2.2.5 (14)
4: 0000000000000000 0 NOTYPE WEAK DEFAULT UND _ITM_deregisterTMCloneTable
```
|
If your `.so` file is in elf format, you can use the readelf program to extract symbol information from the binary. This command will give you the symbol table:
```
readelf -Ws /usr/lib/libexample.so
```
You should only extract those that are defined in this `.so` file, not in the libraries referenced by it. The seventh column should contain a number in this case. You can extract it by using a simple regex:
```
readelf -Ws /usr/lib/libstdc++.so.6 | grep '^\([[:space:]]\+[^[:space:]]\+\)\{6\}[[:space:]]\+[[:digit:]]\+'
```
or, as proposed by [Caspin](https://stackoverflow.com/users/28817/caspin),:
```
readelf -Ws /usr/lib/libstdc++.so.6 | awk '{print $8}';
```
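To see the whole thing end-to-end, here's a quick experiment (assumes `gcc` and binutils are installed; the file and symbol names are made up):

```shell
# Build a tiny shared library with one exported and one file-local symbol.
cat > libdemo.c <<'EOF'
int answer(void) { return 42; }
static int hidden(void) { return 7; }
EOF
gcc -shared -fPIC -o libdemo.so libdemo.c

# -g: external symbols only, -D: dynamic symbol table.
# 'T' marks a symbol defined in the text (code) section.
nm -gD libdemo.so | grep ' T '
```

`answer` shows up with type `T`; `hidden`, being `static`, never reaches the dynamic symbol table at all.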
|
How do I list the symbols in a .so file
|
[
"",
"c++",
"c",
"gcc",
"symbols",
"name-mangling",
""
] |
How do you pass `$_POST` values to a page using `cURL`?
|
Should work fine.
```
$data = array('name' => 'Ross', 'php_master' => true);
// You can POST a file by prefixing with an @ (for <input type="file"> fields)
$data['file'] = '@/home/user/world.jpg';
$handle = curl_init($url);
curl_setopt($handle, CURLOPT_POST, true);
curl_setopt($handle, CURLOPT_POSTFIELDS, $data);
curl_exec($handle);
curl_close($handle);
```
We have two options here, `CURLOPT_POST` which turns HTTP POST on, and `CURLOPT_POSTFIELDS` which contains an array of our post data to submit. This can be used to submit data to `POST` `<form>`s.
---
It is important to note that `curl_setopt($handle, CURLOPT_POSTFIELDS, $data);` takes the $data in two formats, and that this determines how the post data will be encoded.
1. `$data` as an `array()`: The data will be sent as `multipart/form-data` which is not always accepted by the server.
```
$data = array('name' => 'Ross', 'php_master' => true);
curl_setopt($handle, CURLOPT_POSTFIELDS, $data);
```
2. `$data` as url encoded string: The data will be sent as `application/x-www-form-urlencoded`, which is the default encoding for submitted html form data.
```
$data = array('name' => 'Ross', 'php_master' => true);
curl_setopt($handle, CURLOPT_POSTFIELDS, http_build_query($data));
```
I hope this will help others save their time.
See:
* [`curl_init`](http://www.php.net/manual/en/function.curl-init.php)
* [`curl_setopt`](http://www.php.net/manual/en/function.curl-setopt.php)
|
[Ross has the right idea](https://stackoverflow.com/questions/28395/passing-post-values-with-curl#28411) for POSTing the usual parameter/value format to a url.
I recently ran into a situation where I needed to POST some XML as Content-Type "text/xml" without any parameter pairs so here's how you do that:
```
$xml = '<?xml version="1.0"?><stuff><child>foo</child><child>bar</child></stuff>';
$httpRequest = curl_init();
curl_setopt($httpRequest, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($httpRequest, CURLOPT_HTTPHEADER, array("Content-Type: text/xml"));
curl_setopt($httpRequest, CURLOPT_POST, 1);
curl_setopt($httpRequest, CURLOPT_HEADER, 1);
curl_setopt($httpRequest, CURLOPT_URL, $url);
curl_setopt($httpRequest, CURLOPT_POSTFIELDS, $xml);
$returnHeader = curl_exec($httpRequest);
curl_close($httpRequest);
```
In my case, I needed to parse some values out of the HTTP response header so you may not necessarily need to set `CURLOPT_RETURNTRANSFER` or `CURLOPT_HEADER`.
|
Passing $_POST values with cURL
|
[
"",
"php",
"post",
"curl",
""
] |
In my ASP.NET User Control I'm adding some JavaScript to the `window.onload` event:
```
if (!Page.ClientScript.IsStartupScriptRegistered(this.GetType(), onloadScriptName))
Page.ClientScript.RegisterStartupScript(this.GetType(), onloadScriptName,
"window.onload = function() {myFunction();};", true);
```
My problem is, if there is already something in the `onload` event, than this overwrites it. How would I go about allowing two user controls to each execute JavaScript in the `onload` event?
**Edit:** Thanks for the info on third party libraries. I'll keep them in mind.
|
Most of the "solutions" suggested are Microsoft-specific, or require bloated libraries. Here's one good way. This works with W3C-compliant browsers and with Microsoft IE.
```
if (window.addEventListener) // W3C standard
{
window.addEventListener('load', myFunction, false); // NB **not** 'onload'
}
else if (window.attachEvent) // Microsoft
{
window.attachEvent('onload', myFunction);
}
```
|
There still is an ugly solution (which is far inferior to using a framework or `addEventListener`/`attachEvent`) that is to save the current `onload` event:
```
function addOnLoad(fn)
{
var old = window.onload;
window.onload = function()
{
if (old) old();
fn();
};
}
addOnLoad(function()
{
// your code here
});
addOnLoad(function()
{
// your code here
});
addOnLoad(function()
{
// your code here
});
```
Note that frameworks like jQuery will provide a way to execute code when the DOM is ready and not when the page loads.
DOM being ready means that your HTML has loaded but not external components like images or stylesheets, allowing you to be called long before the load event fires.
|
Add multiple window.onload events
|
[
"",
"javascript",
"asp.net",
"events",
"listener",
""
] |
I have two applications written in Java that communicate with each other using XML messages over the network. I'm using a SAX parser at the receiving end to get the data back out of the messages. One of the requirements is to embed binary data in an XML message, but SAX doesn't like this. Does anyone know how to do this?
UPDATE: I got this working with the [Base64](http://commons.apache.org/codec/apidocs/org/apache/commons/codec/binary/Base64.html) class from the [apache commons codec library](http://commons.apache.org/codec/), in case anyone else is trying something similar.
|
You could encode the binary data using base64 and put it into a Base64 element; the below article is a pretty good one on the subject.
[Handling Binary Data in XML Documents](http://www.xml.com/pub/a/98/07/binary/binary.html)
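A minimal sketch of that approach, using `java.util.Base64` (available since Java 8) as a dependency-free stand-in for the Commons Codec class mentioned in the question; the element name is made up:

```java
import java.util.Base64;

class BinaryXml {
    // Encode arbitrary bytes as base64 text inside an element. The base64
    // alphabet contains no markup characters, so a SAX parser passes the
    // text content through untouched.
    static String wrap(byte[] data) {
        return "<data encoding=\"base64\">"
                + Base64.getEncoder().encodeToString(data)
                + "</data>";
    }

    // Decode the element's text content back to the original bytes.
    static byte[] unwrap(String element) {
        int start = element.indexOf('>') + 1;
        int end = element.lastIndexOf('<');
        return Base64.getDecoder().decode(element.substring(start, end));
    }
}
```

On the receiving end, the SAX `characters()` callback would hand you the base64 text to decode.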
|
XML is so versatile...
```
<DATA>
<BINARY>
<BIT index="0">0</BIT>
<BIT index="1">0</BIT>
<BIT index="2">1</BIT>
...
<BIT index="n">1</BIT>
</BINARY>
</DATA>
```
XML is like violence - If it doesn't solve your problem, you're not using enough of it.
EDIT:
BTW: Base64 + CDATA is probably the best solution
(EDIT2:
Whoever upmods me, please also upmod the real answer. We don't want any poor soul to come here and actually implement my method because it was the highest ranked on SO, right?)
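Joking aside, the Base64 approach from the accepted answer is straightforward. The encoding step looks like this (sketched in Python; the questioner's Java case uses the commons-codec `Base64` class in the same way):

```python
import base64

# Arbitrary bytes we want to ship inside an XML message.
binary_data = b"\x00\x01\x02\xff"

# Encode to ASCII text that any XML/SAX parser treats as plain character data.
encoded = base64.b64encode(binary_data).decode("ascii")
xml_message = "<message><payload encoding='base64'>%s</payload></message>" % encoded

# The receiving end decodes the character data back to the original bytes.
assert base64.b64decode(encoded) == binary_data
```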
|
How do you embed binary data in XML?
|
[
"",
"java",
"xml",
"binary",
"binary-data",
""
] |
In C#, `int` and `Int32` are the same thing, but I've read a number of times that `int` is preferred over `Int32` with no reason given. Is there a reason, and should I care?
|
[ECMA-334](https://www.ecma-international.org/publications/standards/Ecma-334.htm):2006 *C# Language Specification* (p18):
> Each of the predefined types is shorthand for a system-provided type. For example, the keyword `int` refers to the struct `System.Int32`. As a matter of style, use of the keyword is favoured over use of the complete system type name.
|
The two are indeed synonymous; `int` will be a little more familiar looking, `Int32` makes the 32-bitness more explicit to those reading your code. I would be inclined to use `int` where I just need 'an integer', `Int32` where the size is important (cryptographic code, structures) so future maintainers will know it's safe to enlarge an `int` if appropriate, but should take care changing `Int32`s in the same way.
The resulting code will be identical: the difference is purely one of readability or code appearance.
|
Should I use int or Int32
|
[
"",
"c#",
"variable-types",
""
] |
Does anyone have a technique for generating SQL table create (and data insert) commands programmatically from a CSV (or sheet in a .xls) file?
I've got a third party database system which I'd like to populate with data from a csv file (or sheet in a xls file) but the importer supplied can't create the table structure automatically as it does the import. My csv file has lots of tables with lots of columns so I'd like to automate the table creation process as well as the data importing if possible but I'm unsure about how to go about generating the create statement...
|
In SQL server it is as easy as
```
SELECT * INTO NewTableNameHere
FROM OPENROWSET( 'Microsoft.Jet.OLEDB.4.0',
'Excel 8.0;Database=C:\testing.xls','SELECT * FROM [Sheet1$]')
```
|
```
BULK
INSERT CSVTest
FROM 'c:\csvtest.txt'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
)
```
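Whatever the language, the core of the approach is the same: read the header row, infer a column type per field from the data, and emit the `CREATE TABLE` text. A minimal sketch (Python here purely for illustration; the type inference shown, INT vs VARCHAR, is deliberately naive and the same logic ports directly to .Net):

```python
import csv
import io

def create_table_sql(table, rows):
    """Emit a CREATE TABLE statement, guessing INT vs VARCHAR from the first data row."""
    header, first = rows[0], rows[1]
    cols = []
    for name, value in zip(header, first):
        col_type = "INT" if value.lstrip("-").isdigit() else "VARCHAR(255)"
        cols.append("%s %s" % (name, col_type))
    return "CREATE TABLE %s (%s);" % (table, ", ".join(cols))

sample = "id,name\n1,alice\n2,bob\n"
rows = list(csv.reader(io.StringIO(sample)))
print(create_table_sql("people", rows))
# CREATE TABLE people (id INT, name VARCHAR(255));
```

The INSERT statements can then be generated by walking the remaining rows the same way.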
|
CSV (or sheet in XLS) to SQL create (and insert) statements with .Net?
|
[
"",
".net",
"sql",
"csv",
"xls",
""
] |
I'd like to make a debug logging function with the same parameters as `printf`. But one that can be removed by the pre-processor during optimized builds.
For example:
```
Debug_Print("Warning: value %d > 3!\n", value);
```
I've looked at variadic macros but those aren't available on all platforms. `gcc` supports them, `msvc` does not.
|
I still do it the old way, by defining a macro (XTRACE, below) which correlates to either a no-op or a function call with a variable argument list. Internally, call vsnprintf so you can keep the printf syntax:
```
#include <stdio.h>
#include <stdarg.h>   // va_list, va_start, va_end
#include <windows.h>  // OutputDebugString
#include <tchar.h>    // LPCTSTR, TCHAR
void XTrace0(LPCTSTR lpszText)
{
::OutputDebugString(lpszText);
}
void XTrace(LPCTSTR lpszFormat, ...)
{
va_list args;
va_start(args, lpszFormat);
int nBuf;
TCHAR szBuffer[512]; // get rid of this hard-coded buffer
    nBuf = _vsnprintf(szBuffer, 511, lpszFormat, args);
    szBuffer[511] = 0; // _vsnprintf may not null-terminate on truncation
::OutputDebugString(szBuffer);
va_end(args);
}
```
Then a typical #ifdef switch:
```
#ifdef _DEBUG
#define XTRACE XTrace
#else
#define XTRACE
#endif
```
Well that can be cleaned up quite a bit but it's the basic idea.
|
This is how I do debug print outs in C++. Define 'dout' (debug out) like this:
```
#ifdef DEBUG
#define dout cout
#else
#define dout 0 && cout
#endif
```
In the code I use 'dout' just like 'cout'.
```
dout << "in foobar with x= " << x << " and y= " << y << '\n';
```
If the preprocessor replaces 'dout' with '0 && cout' note that << has higher precedence than && and short-circuit evaluation of && makes the whole line evaluate to 0. Since the 0 is not used the compiler generates no code at all for that line.
|
How do you create a debug only function that takes a variable argument list? Like printf()
|
[
"",
"c++",
"c",
"c-preprocessor",
""
] |
I would like to find out safe ways of implementing three dimensional arrays of integers in C++, using pointer arithmetic / dynamic memory allocation, or, alternatively using `STL` techniques such as vectors.
Essentially I want my integer array dimensions to look like:
```
[ x ][ y ][ z ]
```
x and y are in the range 20-6000
z is known and equals 4.
|
Have a look at the Boost [multi-dimensional array](http://www.boost.org/doc/libs/release/libs/multi_array) library. Here's an example (adapted from the Boost documentation):
```
#include "boost/multi_array.hpp"
int main() {
// Create a 3D array that is 20 x 30 x 4
int x = 20;
int y = 30;
int z = 4;
typedef boost::multi_array<int, 3> array_type;
typedef array_type::index index;
array_type my_array(boost::extents[x][y][z]);
// Assign values to the elements
int values = 0;
for (index i = 0; i != x; ++i) {
for (index j = 0; j != y; ++j) {
for (index k = 0; k != z; ++k) {
my_array[i][j][k] = values++;
}
}
}
}
```
|
Each pair of square brackets is a dereferencing operation (when applied to a pointer). As an example, the following pairs of lines of code are equivalent:
```
x = myArray[4];
x = *(myArray+4);
```
```
x = myArray[2][7];
x = *((*(myArray+2))+7);
```
To use your suggested syntax you are simply dereferencing the value returned from the first dereference.
```
int*** myArray = (some allocation method, keep reading);
//
// All in one line:
int value = myArray[x][y][z];
//
// Separated to multiple steps:
int** deref1 = myArray[x];
int* deref2 = deref1[y];
int value = deref2[z];
```
To go about allocating this array, you simply need to recognise that you don't actually have a three-dimensional array of integers. You have an array of arrays of arrays of integers.
```
// Start by allocating an array for array of arrays
int*** myArray = new int**[X_MAXIMUM];
// Allocate an array for each element of the first array
for(int x = 0; x < X_MAXIMUM; ++x)
{
myArray[x] = new int*[Y_MAXIMUM];
// Allocate an array of integers for each element of this array
for(int y = 0; y < Y_MAXIMUM; ++y)
{
myArray[x][y] = new int[Z_MAXIMUM];
// Specify an initial value (if desired)
for(int z = 0; z < Z_MAXIMUM; ++z)
{
myArray[x][y][z] = -1;
}
}
}
```
Deallocating this array follows a similar process to allocating it:
```
for(int x = 0; x < X_MAXIMUM; ++x)
{
for(int y = 0; y < Y_MAXIMUM; ++y)
{
delete[] myArray[x][y];
}
delete[] myArray[x];
}
delete[] myArray;
```
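One further option, not covered above, is to skip the pointer-chasing entirely: allocate one flat block of x\*y\*z integers and compute the index by hand. In C++ this would be a single `std::vector<int>`; the index arithmetic is sketched in Python below:

```python
X, Y, Z = 20, 30, 4
flat = [0] * (X * Y * Z)  # one contiguous block, like std::vector<int>(X*Y*Z)

def idx(x, y, z):
    # Row-major layout; in C++: v[(x*Y + y)*Z + z]
    return (x * Y + y) * Z + z

flat[idx(2, 7, 3)] = 42
assert flat[idx(2, 7, 3)] == 42
assert idx(X - 1, Y - 1, Z - 1) == X * Y * Z - 1  # last element maps to the last slot
```

This gives one allocation, one deallocation, and cache-friendly traversal, at the cost of writing the indexing yourself (which Boost's `multi_array` essentially does for you).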
|
Three dimensional arrays of integers in C++
|
[
"",
"c++",
"arrays",
"multidimensional-array",
""
] |
I've got a bunch of FoxPro (VFP9) DBF files on my Ubuntu system, is there a library to open these in Python? I only need to read them, and would preferably have access to the memo fields too.
**Update**: Thanks @cnu, I used Yusdi Santoso's [`dbf.py`](http://www.physics.ox.ac.uk/users/santoso/dbf.py.src) and it works nicely. One gotcha: The memo file name extension must be lower case, i.e. `.fpt`, not `.FPT` which was how the filename came over from Windows.
|
You can try this [recipe on Active State](http://code.activestate.com/recipes/362715/).
There is also a [DBFReader module](http://code.google.com/p/lino/source/browse/lino/utils/dbfreader.py) which you can try.
For support for [memo fields](http://www.physics.ox.ac.uk/users/santoso/dbf.py.src).
|
I prefer [dbfpy](http://sourceforge.net/projects/dbfpy/). It supports both reading and writing of `.DBF` files and can cope with most variations of the format. It's the only implementation I have found that could both read and write the legacy DBF files of some older systems I have worked with.
|
What's the easiest way to read a FoxPro DBF file from Python?
|
[
"",
"python",
"foxpro",
"dbf",
"visual-foxpro",
""
] |
Is it possible to actually make use of placement new in portable code when using it for arrays?
It appears that the pointer you get back from new[] is not always the same as the address you pass in (5.3.4, note 12 in the standard seems to confirm that this is correct), but I don't see how you can allocate a buffer for the array to go in if this is the case.
The following example shows the problem. Compiled with Visual Studio, this example results in memory corruption:
```
#include <new>
#include <stdio.h>
class A
{
public:
A() : data(0) {}
virtual ~A() {}
int data;
};
int main()
{
const int NUMELEMENTS=20;
char *pBuffer = new char[NUMELEMENTS*sizeof(A)];
A *pA = new(pBuffer) A[NUMELEMENTS];
// With VC++, pA will be four bytes higher than pBuffer
printf("Buffer address: %p, Array address: %p\n", (void*)pBuffer, (void*)pA);
// Debug runtime will assert here due to heap corruption
delete[] pBuffer;
return 0;
}
```
Looking at the memory, the compiler seems to be using the first four bytes of the buffer to store a count of the number of items in it. This means that because the buffer is only `sizeof(A)*NUMELEMENTS` big, the last element in the array is written into unallocated heap.
So the question is can you find out how much additional overhead your implementation wants in order to use placement new[] safely? Ideally, I need a technique that's portable between different compilers. Note that, at least in VC's case, the overhead seems to differ for different classes. For instance, if I remove the virtual destructor in the example, the address returned from new[] is the same as the address I pass in.
|
Personally I'd go with the option of not using placement new on the array and instead use placement new on each item in the array individually. For example:
```
int main(int argc, char* argv[])
{
const int NUMELEMENTS=20;
char *pBuffer = new char[NUMELEMENTS*sizeof(A)];
A *pA = (A*)pBuffer;
for(int i = 0; i < NUMELEMENTS; ++i)
{
new (pA + i) A(); // placement-construct in place; the returned pointer is pA + i
}
printf("Buffer address: %p, Array address: %p\n", (void*)pBuffer, (void*)pA);
// dont forget to destroy!
for(int i = 0; i < NUMELEMENTS; ++i)
{
pA[i].~A();
}
delete[] pBuffer;
return 0;
}
```
Regardless of the method you use, make sure you manually destroy each of those items in the array before you delete pBuffer, as you could end up with leaks ;)
*Note*: I haven't compiled this, but I think it should work (I'm on a machine that doesn't have a C++ compiler installed). It still illustrates the point :) Hope it helps in some way!
---
Edit:
The reason it needs to keep track of the number of elements is so that it can iterate through them when you call delete on the array and make sure the destructors are called on each of the objects. If it doesn't know how many there are it wouldn't be able to do this.
|
@Derek
5.3.4, section 12 talks about the array allocation overhead and, unless I'm misreading it, it seems to suggest to me that it is valid for the compiler to add it on placement new as well:
> This overhead may be applied in all array new-expressions, including those referencing the library function operator new[](std::size\_t, void\*) and other placement allocation functions. The amount of overhead may vary from one invocation of new to another.
That said, I think VC was the only compiler that gave me trouble with this, out of it, GCC, Codewarrior and ProDG. I'd have to check again to be sure, though.
|
Can placement new for arrays be used in a portable way?
|
[
"",
"c++",
"arrays",
"compiler-construction",
"portability",
"overhead",
""
] |
What is the easiest way to compare strings in Python, ignoring case?
Of course one can do `str1.lower() <= str2.lower()`, etc., but this creates two additional temporary strings (with the obvious alloc/g-c overheads).
I guess I'm looking for an equivalent to C's stricmp().
[Some more context requested, so I'll demonstrate with a trivial example:]
Suppose you want to sort a looong list of strings. You simply do theList.sort().
This is O(n \* log(n)) string comparisons and no memory management (since all
strings and list elements are some sort of smart pointers). You are happy.
Now, you want to do the same, but ignore the case (let's simplify and say
all strings are ascii, so locale issues can be ignored).
You can do theList.sort(key=lambda s: s.lower()), but then you cause two new
allocations per comparison, plus burden the garbage-collector with the duplicated
(lowered) strings.
Each such memory-management noise is orders-of-magnitude slower than simple string comparison.
Now, with an in-place stricmp()-like function, you do: theList.sort(cmp=stricmp)
and it is as fast and as memory-friendly as theList.sort(). You are happy again.
The problem is any Python-based case-insensitive comparison involves implicit string
duplications, so I was expecting to find a C-based comparisons (maybe in module string).
Could not find anything like that, hence the question here.
(Hope this clarifies the question).
|
In response to your clarification...
You could use [ctypes](http://docs.python.org/lib/ctypes-ctypes-tutorial.html) to call the C function `strcasecmp`. ctypes is included in Python 2.5. It provides the ability to call out to DLLs and shared libraries such as libc. Here is a quick example (Python on Linux; see link for Win32 help):
```
from ctypes import *
libc = CDLL("libc.so.6")  # see link above for Win32 help
libc.strcasecmp("THIS", "this")  # returns 0
libc.strcasecmp("THIS", "THAT")  # returns 8
```
You may also want to reference the [strcasecmp documentation](http://linux.die.net/man/3/strcasecmp).
Not really sure this is any faster or slower (have not tested), but it's a way to use a C function to do case insensitive string comparisons.
~~~~~~~~~~~~~~
[ActiveState Code - Recipe 194371: Case Insensitive Strings](http://code.activestate.com/recipes/194371/ "ActiveState Code")
is a recipe for creating a case insensitive string class. It might be a bit over kill for something quick, but could provide you with a common way of handling case insensitive strings if you plan on using them often.
|
Here is a benchmark showing that using [`str.lower`](http://docs.python.org/2/library/string.html?highlight=lower#string.lower) is faster than the accepted answer's proposed method (`libc.strcasecmp`):
```
#!/usr/bin/env python2.7
import random
import timeit
from ctypes import *
libc = CDLL('libc.dylib') # change to 'libc.so.6' on linux
with open('/usr/share/dict/words', 'r') as wordlist:
words = wordlist.read().splitlines()
random.shuffle(words)
print '%i words in list' % len(words)
setup = 'from __main__ import words, libc; gc.enable()'
stmts = [
('simple sort', 'sorted(words)'),
('sort with key=str.lower', 'sorted(words, key=str.lower)'),
('sort with cmp=libc.strcasecmp', 'sorted(words, cmp=libc.strcasecmp)'),
]
for (comment, stmt) in stmts:
t = timeit.Timer(stmt=stmt, setup=setup)
print '%s: %.2f msec/pass' % (comment, (1000*t.timeit(10)/10))
```
typical times on my machine:
```
235886 words in list
simple sort: 483.59 msec/pass
sort with key=str.lower: 1064.70 msec/pass
sort with cmp=libc.strcasecmp: 5487.86 msec/pass
```
So, the version with `str.lower` is not only the fastest by far, but also the most portable and pythonic of all the proposed solutions here.
I have not profiled memory usage, but the original poster has still not given a compelling reason to worry about it. Also, who says that a call into the libc module doesn't duplicate any strings?
NB: The `lower()` string method also has the advantage of being locale-dependent. Something you will probably not be getting right when writing your own "optimised" solution. Even so, due to bugs and missing features in Python, this kind of comparison may give you wrong results in a unicode context.
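A present-day footnote, not in the original answer: since Python 3.3 there is `str.casefold()`, which applies full Unicode case folding and is the preferred sort key for case-insensitive comparison in Python 3:

```python
# casefold() handles cases lower() misses, e.g. German 'ß' folds to 'ss'.
assert "Straße".casefold() == "strasse"
assert "Straße".casefold() == "STRASSE".casefold()

words = ["banana", "Apple", "cherry"]
# Case-insensitive sort, same pattern as key=str.lower above:
assert sorted(words, key=str.casefold) == ["Apple", "banana", "cherry"]
```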
|
Ignore case in Python strings
|
[
"",
"python",
"string",
"case-insensitive",
""
] |
I've got a JavaScript "object", built this way:
```
function foo()
{
this.length = 0;
}
foo.prototype.getLength = function()
{
return this.length;
}
...
```
I know how to emulate namespaces with singleton JavaScript objects, but what is the best way to "namespace" an object such as the one above that will be instanced?
I know that several JavaScript libraries have namespacing capabilities, but I'm using jQuery and would rather not add another library to the mix. I'd like to be able to provide my own namespacing scheme, perhaps by exploiting jQuery, for the JS objects of mine that need to be instanced.
Thanks
rp
|
Simple:
```
var MyNamespace = MyNamespace || {};
MyNamespace.foo = function() {
this.length = 0;
};
MyNamespace.foo.prototype.getLength = function() {
return this.length;
};
```
|
Javascript doesn't really have namespace or packages like other languages. Instead it has closures. If you have an application that consists of multiple functions, variables and objects, then you should put them inside a single global object. This will have the same effect as a namespace.
For example:
```
var namespace = {
    foo: function(){
        ...
    }
};
namespace.foo.prototype.getLength = function(){
    ...
};
```
You could also create a set of nested objects and simulate packages:
```
loadPackage = function(){
var path = arguments[0];
for(var i=1; i<arguments.length; i++){
if(!path[arguments[i]]){
path[arguments[i]] = {};
}
path = path[arguments[i]];
}
return path;
}
var mail = loadPackage(this, "com", "google", "mail");
mail.username = "gundersen";
mail.login = function(password){
    ...
};
this.com.google.mail.login("mySecretPassword");
```
|
How to provide namespaces in JavaScript with instanced objects
|
[
"",
"javascript",
"namespaces",
""
] |
I'm looking for a way to poll different servers and check that SQL server is up and running. I'm writing my code in C#. I don't particularly care about individual databases, just that SQL server is running and responsive.
Any ideas?
|
Well, the brute force solution is to attempt to initiate a connection with the database on each server. That will tell you whether it's running, though you could have timeout issues.
The more elegant (but more difficult... isn't that always the way?) solution would be to use WMI to connect to the remote machine and find out if the SQL server process is running.
|
`System.Data.Sql.SqlDataSourceEnumerator` will return all instances of SQL Server currently running.
[MSDN Link](http://msdn.microsoft.com/en-us/library/system.data.sql.sqldatasourceenumerator.aspx)
|
Detecting if SQL server is running
|
[
"",
"c#",
"sql-server",
""
] |
I'm using `int` as an example, but this applies to any value type in .Net
In .Net 1 the following would throw a compiler exception:
```
int i = SomeFunctionThatReturnsInt();
if( i == null ) //compiler exception here
```
Now (in .Net 2 or 3.5) that exception has gone.
I know why this is:
```
int? j = null; //nullable int
if( i == j ) //this shouldn't throw an exception
```
The problem is that because `int?` is nullable, `int` now has an implicit cast to `int?`. The syntax above is compiler magic. Really we're doing:
```
Nullable<int> j = null; //nullable int
//compiler is smart enough to do this
if( (Nullable<int>) i == j)
//and not this
if( i == (int) j)
```
So now, when we do `i == null` we get:
```
if( (Nullable<int>) i == null )
```
Given that C# is applying compiler logic to work this out anyway, why can't it be smart enough to skip the conversion when dealing with constants like `null`?
|
I don't think this is a compiler problem *per se*; an integer value is never null, but the idea of equating them isn't invalid; it's a valid function that always returns false. And the compiler knows; the code
```
bool oneIsNull = 1 == null;
```
compiles, but gives a compiler warning: `The result of the expression is always 'false' since a value of type 'int' is never equal to 'null' of type '<null>'`.
So if you want the compiler error back, go to the project properties and turn on 'treat warnings as errors' for this error, and you'll start seeing them as build-breaking problems again.
|
Odd ... compiling this with VS2008, targetting .NET 3.5:
```
static int F()
{
return 42;
}
static void Main(string[] args)
{
int i = F();
if (i == null)
{
}
}
```
I get a compiler warning
```
warning CS0472: The result of the expression is always 'false' since a value of type 'int' is never equal to 'null' of type 'int?'
```
And it generates the following IL ... which presumably the JIT will optimize away
```
L_0001: call int32 ConsoleApplication1.Program::F()
L_0006: stloc.0
L_0007: ldc.i4.0
L_0008: ldc.i4.0
L_0009: ceq
L_000b: stloc.1
L_000c: br.s L_000e
```
Can you post a code snippet?
|
.Net 2+: why does if( 1 == null ) no longer throw a compiler exception?
|
[
"",
"c#",
".net-3.5",
"compiler-construction",
".net-2.0",
""
] |
I'm considering dumping boost as a dependency... atm the only thing that I really need is `shared_ptr<>`, and I can get that from `std::tr1`, available in gcc suite 4.\*
|
AFAIK, all of the distros package V 4.+ nowadays.
|
These days, I believe most Linux distros do not ship with the development system by default. But I'm pretty sure g++ v4 is the 'standard' development C++ compiler if you install the C++ development environment at all. g++ v3 is usually just available as a special install. For openSUSE 11, gcc 4.3 is the current package installed when you pick the Base Development pattern.
|
Does every Linux distro ship with gcc/g++ 4.* these days?
|
[
"",
"c++",
"linux",
"gcc",
"distro",
""
] |
Using C#, I need a class called `User` that has a username, password, active flag, first name, last name, full name, etc.
There should be methods to *authenticate* and *save* a user. Do I just write a test for the methods? And do I even need to worry about testing the properties since they are .Net's getter and setters?
|
Many great responses to this are also on my question: "[Beginning TDD - Challenges? Solutions? Recommendations?](https://stackoverflow.com/questions/24965/beginning-tdd-challenges-solutions-recommendations)"
May I also recommend taking a look at my [blog post](http://cantgrokwontgrok.blogspot.com/2008/09/tdd-getting-started-with-test-driven.html) (which was partly inspired by my question), I have got some good feedback on that. Namely:
> **I Don’t Know Where to Start?**
>
> * Start afresh. Only think about writing tests when you are writing new
> code. This can be re-working of old
> code, or a completely new feature.
> * Start simple. Don’t go running off and trying to get your head round
> a testing framework as well as being
> TDD-esque. Debug.Assert works fine.
> Use it as a starting point. It doesn’t
> mess with your project or create
> dependencies.
> * Start positive. You are trying to improve your craft, feel good about
> it. I have seen plenty of developers
> out there that are happy to stagnate
> and not try new things to better
> themselves. You are doing the right
> thing, remember this and it will help
> stop you from giving up.
> * Start ready for a challenge. It is quite hard to start getting into
> testing. Expect a challenge, but
> remember – challenges can be overcome.
>
> **Only Test For What You Expect**
>
> I had real problems when I first
> started because I was constantly sat
> there trying to figure out every
> possible problem that could occur and
> then trying to test for it and fix.
> This is a quick way to a headache.
> Testing should be a real YAGNI
> process. If you know there is a
> problem, then write a test for it.
> Otherwise, don’t bother.
>
> **Only Test One Thing**
>
> Each test case should only ever test
> one thing. If you ever find yourself
> putting “and” in the test case name,
> you’re doing something wrong.
I hope this means we can move on from "getters and setters" :)
|
Test your code, not the language.
A unit test like:
```
Integer i = new Integer(7);
assert (i instanceof Integer);
```
is only useful if you are writing a compiler and there is a non-zero chance that your `instanceof` operator is not working.
Don't test stuff that you can rely on the language to enforce. In your case, I'd focus on your authenticate and save methods - and I'd write tests that made sure they could handle null values in any or all of those fields gracefully.
|
How do you know what to test when writing unit tests?
|
[
"",
"c#",
"unit-testing",
"tdd",
""
] |
I am trying to `INSERT INTO` a table using the input from another table. Although this is entirely feasible for many database engines, I always seem to struggle to remember the correct syntax for the `SQL` engine of the day ([MySQL](http://en.wikipedia.org/wiki/MySQL), [Oracle](http://en.wikipedia.org/wiki/Oracle_Database), [SQL Server](http://en.wikipedia.org/wiki/Microsoft_SQL_Server), [Informix](http://en.wikipedia.org/wiki/IBM_Informix), and [DB2](http://en.wikipedia.org/wiki/IBM_DB2)).
Is there a silver-bullet syntax coming from an SQL standard (for example, [SQL-92](http://en.wikipedia.org/wiki/SQL-92)) that would allow me to insert the values without worrying about the underlying database?
|
Try:
```
INSERT INTO table1 ( column1 )
SELECT col1
FROM table2
```
This is standard ANSI SQL and should work on any DBMS
It definitely works for:
* Oracle
* MS SQL Server
* MySQL
* Postgres
* SQLite v3
* Teradata
* DB2
* Sybase
* Vertica
* HSQLDB
* H2
* AWS RedShift
* SAP HANA
* Google Spanner
|
[Claude Houle's answer](https://stackoverflow.com/a/25971/282110): should work fine, and you can also have multiple columns and other data as well:
```
INSERT INTO table1 ( column1, column2, someInt, someVarChar )
SELECT table2.column1, table2.column2, 8, 'some string etc.'
FROM table2
WHERE table2.ID = 7;
```
I've only used this syntax with Access, SQL 2000/2005/Express, MySQL, and PostgreSQL, so those should be covered. It should also work with SQLite3.
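A quick way to sanity-check that the `INSERT ... SELECT` form is portable is to run it against an embedded engine. Here is a sketch using SQLite via Python's built-in `sqlite3` module:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table1 (column1 TEXT)")
con.execute("CREATE TABLE table2 (col1 TEXT)")
con.executemany("INSERT INTO table2 (col1) VALUES (?)", [("a",), ("b",)])

# The ANSI form from the answers, unchanged:
con.execute("INSERT INTO table1 (column1) SELECT col1 FROM table2")

rows = [r[0] for r in con.execute("SELECT column1 FROM table1 ORDER BY column1")]
assert rows == ["a", "b"]
```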
|
Insert into ... values ( SELECT ... FROM ... )
|
[
"",
"sql",
"database",
"syntax",
"database-agnostic",
"ansi-sql-92",
""
] |
I want to copy a file from A to B in C#. How do I do that?
|
The `File.Copy(path, destination)` method:
[MSDN Link](http://msdn.microsoft.com/en-us/library/system.io.file.copy.aspx)
|
Without any error handling code:
```
File.Copy(path, path2);
```
|
How to copy a file in C#
|
[
"",
"c#",
".net",
"file",
""
] |
What's the simplest-to-use techonlogy available to save an arbitrary Java object graph as an XML file (and to be able to rehydrate the objects later)?
|
The easiest way here is to serialize the object graph.
Java 1.4 has built-in support for serialization as XML (`java.beans.XMLEncoder`).
A solution I have used successfully is [XStream](http://x-stream.github.io/): it's a small library that will easily allow you to serialize and deserialize to and from XML.
The downside is that you have only very limited control over how the resulting XML is defined, which might not matter in your case.
|
Apache digester is fairly easy: <http://commons.apache.org/digester/>
JAXB is newer and comes with annotation goodness: <https://jaxb.dev.java.net>
|
Saving Java Object Graphs as XML file
|
[
"",
"java",
"xml",
""
] |
The RFC for a Java class is set of all methods that can be invoked in response to a message to an object of the class or by some method in the class.
RFC = M + R where
M = Number of methods in the class.
R = Total number of other methods directly invoked from the M.
Suppose C is the .class file and J is the .java file for which we need to calculate the RFC.
```
class J{
a(){}
b(){}
c(){
e1.e();
e1.f();
e1.g();
}
h(){
i.k();
i.j();
}
m(){}
n(){
i.o();
i.p();
i.p();
i.p();
}
}
```
here M=6
and R=9 (don't worry about calls inside a loop; each is considered a single call)
Calculating M is easy: load C using a classloader and use reflection to get the count of methods.
Calculating R is less direct: we need to count the number of method calls made from the class, first level only.
For calculating R I must use regex. The usual format would be as follows (calls that don't use `.` are not counted):
```
[variable_name].[method_name]([zero or more parameters]);
```
or
```
[variable_name].[method_name]([zero or more parameters])
```
without a semicolon when the call's return value directly becomes a parameter to another method,
or
```
[variable_name].[method_name]([zero or more parameters]).method2();
```
which counts as two method calls
What other patterns of the method call can you think of? Is there any other way other than using RegEx that can be used to calculate R.
---
**UPDATE:**
[@McDowell](https://stackoverflow.com/questions/19952/rfc-calculation-in-java-need-help-with-algorithm#19983 "@McDowell")
Looks like using BCEL I can simplify the whole process. Let me try it.
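A rough sketch of the regex counting (shown in Python purely to illustrate the pattern; it will naively match calls inside string literals and comments, which is one reason bytecode analysis with BCEL is more robust):

```python
import re

# Crude first-level call detector: identifier '.' identifier '('
CALL = re.compile(r"\b[A-Za-z_]\w*\s*\.\s*([A-Za-z_]\w*)\s*\(")

body = """
e1.e(); e1.f(); e1.g();
i.k(); i.j();
"""
assert CALL.findall(body) == ["e", "f", "g", "k", "j"]
# Chained calls like a.b().c() need a second pass: this pattern only
# catches the first call in the chain.
```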
|
You could use the [Byte Code Engineering Library](http://jakarta.apache.org/bcel/index.html) with binaries. You can use a [DescendingVisitor](http://jakarta.apache.org/bcel/apidocs/org/apache/bcel/classfile/DescendingVisitor.html) to visit a class' members and references. I've used it to [find class dependencies](http://illegalargumentexception.blogspot.com/2008/04/java-finding-binary-class-dependencies.html).
Alternatively, you could reuse some model of the source files. I'm pretty sure the Java editor in the [Eclipse JDT](http://www.eclipse.org/jdt/) is backed by some form of model.
|
You should find your answer in the [Java language specification](http://java.sun.com/docs/books/jls/third_edition/html/j3TOC.html).
You have forgot static method call, method call inside parameters...
|
Algorithm to perform RFC calculation in Java
|
[
"",
"java",
"regex",
"algorithm",
"reflection",
""
] |
I'm trying to install [Laconica](http://laconi.ca/), an open-source Microblogging application on my Windows development server using XAMPP as per the [instructions provided](http://laconi.ca/trac/wiki/InstallationWindows).
The website cannot find PEAR, and throws the below errors:
> Warning: require\_once(PEAR.php) [function.require-once]: failed to open stream: No such file or directory in C:\xampplite\htdocs\laconica\lib\common.php on line 31
>
> Fatal error: require\_once() [function.require]: Failed opening required 'PEAR.php' (include\_path='.;\xampplite\php\pear\PEAR') in C:\xampplite\htdocs\laconica\lib\common.php on line 31
1. PEAR is located in `C:\xampplite\php\pear`
2. `phpinfo()` shows me that the include path is `.;\xampplite\php\pear`
What am I doing wrong? Why isn't the PEAR folder being included?
|
You need to fix your `include_path` system variable to point to the correct location.
To fix it edit the `php.ini` file. In that file you will find a line that says, "`include_path = ...`". (You can find out what the location of php.ini by running `phpinfo()` on a page.) Fix the part of the line that says, "`\xampplite\php\pear\PEAR`" to read "`C:\xampplite\php\pear`". Make sure to leave the semi-colons before and/or after the line in place.
Restart PHP and you should be good to go. To restart PHP in IIS you can restart the application pool assigned to your site or, better yet, restart IIS all together.
|
If you are using the portable XAMPP installation and Windows 7, and, like me have the version after they removed the XAMPP shell from the control panel none of the suggested answers here will do you much good as the packages will not install.
The problem is with the config file. I found the correct settings after a lot of trial and error.
Simply pull up a command window in the \xampp\php directory and run
```
pear config-set doc_dir :\xampp\php\docs\PEAR
pear config-set cfg_dir :\xampp\php\cfg
pear config-set data_dir :\xampp\php\data\PEAR
pear config-set test_dir :\xampp\php\tests
pear config-set www_dir :\xampp\php\www
```
you will want to replace the ':' with the actual drive letter that your portable drive is running on at the moment. Unfortunately, this needs to be done any time this drive letter changes, but it did get the module I needed installed.
|
Getting PEAR to work on XAMPP (Apache/MySQL stack on Windows)
|
[
"",
"php",
"pear",
"laconica",
""
] |
What is the best way to manage a list of windows (keeping them in order) to be able to promote the next window to the top-level when the current top-level window is closed.
This is for a web application, so we're using jQuery Javascript.
We'd talked through a few simplistic solutions, such as using an array and just treating [0] index as the top-most window.
I'm wondering if there's any potentially more efficient or useful alternative to what we had brainstormed.
|
A stack if you want to just close the window on top.
A queue if you also need to open windows at the end.
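To make the stack idea concrete, here is a minimal sketch (in Python for brevity; in the browser this would be a plain JavaScript array using `push`/`pop`):

```python
window_stack = []  # index -1 is the current top-level window

def open_window(win):
    window_stack.append(win)

def close_top():
    """Close the top-level window and promote the next one."""
    window_stack.pop()
    return window_stack[-1] if window_stack else None

open_window("main")
open_window("dialog")
assert close_top() == "main"  # closing the dialog promotes "main"
assert close_top() is None    # no windows left
```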
|
I don't really know javascript, but couldn't you create a stack of windows?
|
Best way to keep an ordered list of windows (from most-recently created to oldest)?
|
[
"",
"javascript",
""
] |