| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I am a JavaScript newbie and I am experiencing the following problem:
I would write JS code in a separate file, include the file in the HTML code. JS code runs great. No problems. I would go ahead and make some changes to the JS code, click 'refresh' in the browser window and sometimes there is a problem. The changes I have made to the JS code have messed things up. The code doesn't work the way it's supposed to. So I start looking for the problem, but the code is perfectly fine. So I clear the browser's cache - still nothing. I undo the changes to the JS code, everything works. I put the new code back in - after a few 'refresh' clicks - all of a sudden it works. I am having this problem using Safari 4 and Firefox 2.0. I have not tried a different browser.
My question is - Do I have to disable browser caching using some JS technique or simply from the browser, or is there a different problem? Thank you for your time and help! | When you reload/refresh a web page, most of the time scripts are reloaded from cache. You can force the browser to reload the external script file(s) by holding down the shift key while clicking the refresh button.
If that doesn't work you might want to check if there is a proxy server sitting between you and the web page. If it is a local web page, the shift button should do the trick. | If you reference your JS file with a random key that would defeat caching:
eg:
```
var randomnumber = Math.floor(Math.random() * 10000);
var scriptfile = 'http://www.whatever.com/myjs.js?rnd=' + randomnumber;
```
Good for debugging if nothing else. | JavaScript and possible browser cache problem | [
"",
"javascript",
"html",
""
] |
I am interested to know what specifically Page.IsPostBack means. I am fully aware of its day-to-day use in a standard ASP.NET page, that it indicates that the user is
submitting data back to the server side. See [Page:IsPostBack Property](http://msdn.microsoft.com/en-us/library/system.web.ui.page.ispostback.aspx)
But given this HTML
```
<html>
<body>
<form method="post" action="default.aspx">
<input type="submit" value="submit" />
</form>
</body>
</html>
```
When clicking on the Submit button, the page's Page\_Load method is invoked, but Page.IsPostBack returns false. I don't want to add `runat=server`.
How do I tell the difference between the pages first load, and a Request caused by the client hitting submit?
**update**
I've added in `<input type="text" value="aa" name="ctrl" id="ctrl" />` so the Request.Form has an element, and Request.HTTPMethod is POST, but IsPostBack is still false? | One way to do this is to extend the ASP.NET Page class, "override" the IsPostBack property and let all your pages derive from the extended page.
```
public class MyPage : Page
{
public new bool IsPostBack
{
get
{
return
Request.Form.Keys.Count > 0 &&
Request.RequestType.Equals("POST", StringComparison.OrdinalIgnoreCase);
}
}
}
``` | Check the Request.Form collection to see if it is non-empty. Only a POST will have data in the Request.Form collection. Of course, if there is no form data then the request is indistinguishable from a GET.
As to the question in your title, IsPostBack is set to true when the request is a POST from a server-side form control. Making your form client-side only, defeats this. | What does IsPostBack actually mean? | [
"",
"c#",
"asp.net",
""
] |
There was a thread on this in comp.lang.javascript recently where
victory was announced but no code was posted:
On an HTML page how do you find the lower left corner coordinates of an element (image or button, say)
reliably across browsers and page styles? The method advocated in "Ajax in Action" (copy I have) doesn't seem to work in IE under some circumstances. To make the problem easier, let's assume we can set the global document style to be "traditional" or "transitional" or whatever.
Please provide code or a pointer to code (a complete function that works on all browsers) -- don't just say "that's easy" and blather about traversing the DOM -- if I want to read that kind of thing I'll go back to comp.lang.javascript. Please scold me if this is a repeat and point me to the solution -- I did try to find it. | In my experience, the only sure-fire way to get stuff like this to work is using [jQuery](http://docs.jquery.com/CSS) (don't be afraid, it's just an external script file you have to include). Then you can use a statement like
```
$('#element').position()
```
or
```
$('#element').offset()
```
to get the current coordinates, which works excellently across any and all browsers I've encountered so far. | I found this solution on the web... it totally solved my problem.
Please check this link for the origin.
<http://www.quirksmode.org/js/findpos.html>
```
/** This script finds the real position,
* so if you resize the page and run the script again,
* it points to the correct new position of the element.
*/
function findPos(obj) {
    var curleft = 0;
    var curtop = 0;
    if (obj.offsetParent) {
        do {
            curleft += obj.offsetLeft;
            curtop += obj.offsetTop;
        } while ((obj = obj.offsetParent)); // assignment, not comparison: walk up the chain
    }
    return {X: curleft, Y: curtop}; // returns 0,0 if the element has no offsetParent
}
```
Works Perfectly in Firefox, IE8, Opera (Hope in others too)
Thanks to those who share their knowledge...
Regards,
**ADynaMic** | how to find coordinates of an HTML button or image, cross browser? | [
"",
"javascript",
"html",
"dom",
""
] |
I'll be interviewing for a J2EE job using the Spring Framework next week. I've used Spring in my last couple of positions, but I probably want to brush up on it.
What should I keep in mind, and what web sites should I look at, to brush up? | I think an excellent way to brush up on Spring Framework skills is to cover the concepts given in DZone's RefCardz. They are concise PDF-format documents.
* [Spring Configuration RefCardz](http://refcardz.dzone.com/refcardz/spring-configuration)
* [Spring Annotations RefCards](http://refcardz.dzone.com/refcardz/spring-annotations)
* [Spring and Flex Configuration](http://refcardz.dzone.com/refcardz/flex-spring-integration)
Hope this helps!!
Peacefulfire | I wouldn't ask about the framework in itself, but about the cases in which it would be convenient to apply its features, like when to use LoadTimeWeaving aspects or whether to DI domain objects or not.
I see Spring as a tool for solving problems, and I'd like the other person to tell me how he would apply it, when, and most importantly in which cases he *wouldn't* use it. | I'm interviewing for a j2EE position using the Spring Framework; help me brush up | [
"",
"java",
"spring",
"jakarta-ee",
""
] |
```
create table check2(f1 varchar(20),f2 varchar(20));
```
creates a table with the default collation `latin1_general_ci`;
```
alter table check2 collate latin1_general_cs;
show full columns from check2;
```
shows the individual collation of the columns as 'latin1\_general\_ci'.
Then what is the effect of the alter table command? | To change the default character set and collation of a table *including those of existing columns* (note the **convert to** clause):
```
alter table <some_table> convert to character set utf8mb4 collate utf8mb4_unicode_ci;
```
Edited the answer, thanks to the prompting of some comments:
> Should avoid recommending utf8. It's almost never what you want, and often leads to unexpected messes. The utf8 character set is not fully compatible with UTF-8. The utf8mb4 character set is what you want if you want UTF-8. – Rich Remer Mar 28 '18 at 23:41
and
> That seems quite important, glad I read the comments and thanks @RichRemer . Nikki , I think you should edit that in your answer considering how many views this gets. See here <https://dev.mysql.com/doc/refman/8.0/en/charset-unicode-utf8.html> and here [What is the difference between utf8mb4 and utf8 charsets in MySQL?](https://stackoverflow.com/q/30074492/772035) – Paulpro Mar 12 at 17:46 | MySQL has 4 levels of collation: server, database, table, column.
If you change the collation of the server, database or table, you don't change the setting for each column, but you change the default collations.
E.g. if you change the default collation of a database, each new table you create in that database will use that collation, and if you change the default collation of a table, each column you create in that table will get that collation. | How to change the default collation of a table? | [
"",
"mysql",
"sql",
"collation",
""
] |
Does anyone know of a DOM inspector javascript library or plugin?
I want to use this code inside a website I am creating. I searched a lot but didn't find what I wanted, except this one: <http://slayeroffice.com/tools/modi/v2.0/modi_help.html>
**UPDATE:**
Seems that no one understood my question :( I want to find an example or plug-in which lets me implement a DOM inspector. I don't want a tool to inspect DOMs with; I want to write my own. | I found this one:
<http://userscripts.org/scripts/review/3006>
And this one also is fine:
[DOM Mouse-Over Element Selection and Isolation](http://snippets.dzone.com/posts/show/4513)
Which is very simple, with few lines of code, and gives me something to edit a little to get exactly what I wanted. | I am also looking for the same thing, and in addition to <http://www.quirksmode.org/js/findpos.html> and <http://slayeroffice.com/tools/modi/v2.0/modi_help.html> I found: <http://www.selectorgadget.com/> ( <https://github.com/iterationlabs/selectorgadget/> )
Also came across this <https://github.com/josscrowcroft/Simple-JavaScript-DOM-Inspector>
Unfortunately I haven't found anything based on jQuery. But "Javascript DOM Inspector" seems to be the right keywords to look for this kind of thing. | Does anyone know a DOM inspector javascript library or plugin? | [
"",
"javascript",
"jquery",
"html",
"dom",
""
] |
I need to set a breakpoint on a certain event, but I don't know where it is defined, because I've got a giant bunch of minimized JavaScript code, so I can't find it manually.
Is it possible to somehow set a breakpoint to for example the click event of an element with ID `registerButton`, or find somewhere which function is bound to that event?
I found the Firefox add-on **Javascript Deobfuscator**, which shows currently executed JavaScript, which is nice, but the code I need to debug is using **jQuery**, so there's loads of function calls even on the simplest event, so I can't use that either.
Is there any debugger made especially for **jQuery**?
Does anybody know some tool that turns *minified JavaScript* back into *formatted* code, like turning `function(){alert("aaa");v=3;}` back into
```
function() {
alert("aaa");
v = 3;
}
``` | Well, it might all be more trouble than it's worth, but it looks like you have three things to do:
1. De-minify the source. I like [this online tool](http://jsbeautifier.org/) for a quick and dirty. Just paste your code and click the button. Has never let me down, even on the most funky of JavaScript.
2. *All* jQuery event binders get routed to `"jQuery.event.add"` ([here's what it looks like in the unbuilt source](https://github.com/jquery/jquery/blob/master/src/event.js)), so you need to find that method and set a breakpoint there.
3. If you have reached this far, all you need to do is inspect the callstack at the breakpoint to see who called what. Note that since you're at an internal spot in the library you will need to check a few jumps out (since the code calling `"jQuery.event.add"` was most likely just other jQuery functions).
Note that 3) requires Firebug for FF3. If you are like me and prefer to debug with Firebug for FF2, you can use the age-old `arguments.callee.caller.toString()` method for inspecting the callstack, inserting as many `.caller`s as needed.
---
**Edit**: Also, see ["How to debug Javascript/jQuery event bindings with FireBug (or similar tool)"](https://stackoverflow.com/questions/570960/how-to-debug-javascript-jquery-event-bindings-with-firebug-or-similar-tool/571087#571087).
You may be able to get away with:
```
// inspect
var clickEvents = jQuery.data($('#foo').get(0), "events").click;
jQuery.each(clickEvents, function(key, value) {
alert(value) // alerts function body
})
```
The above trick will let you see *your* event handling code, and you can just start hunting it down in *your* source, as opposed to trying to set a breakpoint in jQuery's source. | First replace the minified jQuery (or any other source you use) with a formatted version. Another useful trick I found is using the profiler in Firebug. The profiler shows which functions are being executed and you can click on one and go there to set a breakpoint.
"",
"javascript",
"jquery",
"debugging",
"firebug",
""
] |
I'm using the WatiN testing tool and writing C#/.NET scripts. I have a scenario where I need to change the theme of my web page. To do this I click on an image button, which opens an Ajax popup with the image and an "Apply Theme" button below it; I then need to click on that button. How can I do this? Please suggest a solution. | So first click your button that throws up the popup, and `.WaitUntilExists()` for the button inside the popup.
```
IE.Button("ShowPopup").Click();
IE.Button("PopupButtonID").WaitUntilExists();
IE.Button("PopupButtonID").Click();
```
This may not work in the case the button on the popup exists but is hidden from view. In that case you could try the .WaitUntil() and specify an attribute to look for.
```
IE.Button("ButtonID").WaitUntil("display","")
``` | The Ajax pop-up itself shouldn't pose a problem if you handle the timing of the control loading asynchronously. If you are using the ajax control toolkit, you can solve it like this
```
int timeout = 20;
for (int i = 0; i < timeout; i++)
{
bool blocked = Convert.ToBoolean(ie.Eval("Sys.WebForms.PageRequestManager.getInstance().get_isInAsyncPostBack();"));
if (blocked)
{
System.Threading.Thread.Sleep(200);
}
else
{
break;
}
}
```
With the control visible you then should be able to access it normally.
Watin 1.1.4 added support for WaitUntil on controls as well, but I haven't used it personally.
```
// Wait until some textfield is enabled
textfield.WaitUntil("disable", false.ToString(), 10);
``` | Watin script for handling Ajax popup | [
"",
"c#",
".net",
"watin",
""
] |
I need to call [VirtualAllocEx](http://www.pinvoke.net/default.aspx/kernel32/VirtualAllocEx.html) and it returns IntPtr.
I call that function to get an empty address so I can write my codecave there(this is in another process).
How do I convert the result into UInt32,so I could call WriteProcessMemory() lately with that address? | You could just cast it with (uint)ptr I believe (If it won't cast nicely, try ptr.ToInt32() or ToInt64() first. At least I don't know of any issues with this approach, haven't used it -that- much though. Given UInt32 has larger range than Int32 and same as Int64 on non-negative side it should be good enough.
Although not sure how the Int32 behaves on 64 bit architectures. Badly I'd imagine as the reason for IntPtr is to provide platform independant way to store pointers. | When you call WriteProcessMemory, you should be passing an IntPtr for the address rather than a UInt32 (because WriteProcessMemory expects a pointer, not an integer). So you should be able to use the IntPtr returned by VirtualAllocEx directly without the need to convert it to a UInt32. | IntPtr to Uint32? C# | [
"",
"c#",
"intptr",
""
] |
I'm playing around with some code katas and trying to get a better understanding of java generics at the same time. I've got this little method that prints arrays like I like to see them and I have a couple of helper methods which accept an array of 'things' and an index and returns the array of the 'things' above or below the index (it's a binary search algorithm).
Two questions,
#1 Can I avoid the cast to T in splitBottom and splitTop? It doesn't feel right, or I'm going about this the wrong way (don't tell me to use Python or something .. ;) )
#2 Do I have to write separate methods to deal with primitive arrays or is there a better solution?
```
public class Util {
public static <T> void print(T[] array) {
System.out.print("{");
for (int i = 0; i < array.length; i++) {
System.out.print(array[i]);
if (i < array.length - 1) {
System.out.print(", ");
}
}
System.out.println("}");
}
public static <T> T[] splitTop(T[] array, int index) {
Object[] result = new Object[array.length - index - 1];
System.arraycopy(array, index + 1, result, 0, result.length);
return (T[]) result;
}
public static <T> T[] splitBottom(T[] array, int index) {
Object[] result = new Object[index];
System.arraycopy(array, 0, result, 0, index);
return (T[]) result;
}
public static void main(String[] args) {
Integer[] integerArray = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
print(integerArray);
print(splitBottom(integerArray, 3));
print(splitTop(integerArray, 3));
String[] stringArray = {"one", "two", "three", "four", "five", "six", "seven", "eight", "nine", "ten"};
print(stringArray);
print(splitBottom(stringArray, 3));
print(splitTop(stringArray, 3));
int[] intArray = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
// ???
}
}
``` | Generics don't handle primitives in a consistent fashion. This is because generics are not like C++ templates; they are just a compile-time addition to a single class.
When generics are compiled, you end up with Object[] in the above example as the implementing type. Since int[] and byte[] etc. do not extend Object[], you cannot use them interchangeably even if the code involved would be identical (again, generics are not templates).
The only class int[] and Object[] share is Object. You can write the above methods with Object as the type (see System.arraycopy, Array.getLength, Array.get, Array.set). | > 1 Can I avoid the cast to T in splitBottom and splitTop? It doesn't
> feel right, or I'm going about this
> the wrong way (don't tell me to use
> python or something .. ;) )
Not only can you not avoid it, but you shouldn't do it. In Java, different types of arrays are actually different runtime types. An array that was created as an `Object[]` cannot be assigned to a variable of type `AnythingElse[]`. The cast there will not fail immediately, because in generics the type T is erased, but later it will throw a ClassCastException when code tries to use it as the Something[] you promised, but it is not.
The solution is to either use the `Arrays.copyOf...` methods in Java 6 and later, or if you are using an earlier version of Java, use Reflection to create the correct type of array. For example,
```
T[] result = (T[]) Array.newInstance(array.getClass().getComponentType(), size);
```
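To make that concrete, here is a minimal, self-contained sketch (the class name and layout are mine, not from the original question) of `splitTop` rewritten with `Array.newInstance`, so the returned array has the same runtime component type as the input:

```java
import java.lang.reflect.Array;
import java.util.Arrays;

public class GenericSplit {
    // The result array is created with the SAME runtime component type as
    // the input, so the unchecked cast is actually safe at runtime.
    @SuppressWarnings("unchecked")
    public static <T> T[] splitTop(T[] array, int index) {
        T[] result = (T[]) Array.newInstance(
                array.getClass().getComponentType(), array.length - index - 1);
        System.arraycopy(array, index + 1, result, 0, result.length);
        return result;
    }

    public static void main(String[] args) {
        Integer[] ints = {1, 2, 3, 4, 5};
        Integer[] top = splitTop(ints, 2); // elements above index 2
        System.out.println(Arrays.toString(top));
        System.out.println(top.getClass().getComponentType());
    }
}
```

Unlike the `new Object[...]` version, `top` really is an `Integer[]` here, so assigning the result to an `Integer[]` variable later will not blow up with a ClassCastException.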
> 2 Do I have to write separate methods to deal with primitive arrays or is
> there a better solution?
It is probably best to write separate methods. In Java, arrays of primitive types are completely separate from arrays of reference types; and there is no nice way to work with both of them.
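One workaround for that "no nice way" is reflection; here is a hedged sketch (the class name is hypothetical) of a print method that accepts both `int[]` and reference arrays through a single `Object` parameter:

```java
import java.lang.reflect.Array;

public class ReflectivePrint {
    // Accepts any array -- int[], Integer[], String[], ... -- because the
    // only common supertype of primitive and reference arrays is Object.
    public static String print(Object array) {
        StringBuilder sb = new StringBuilder("{");
        int length = Array.getLength(array);
        for (int i = 0; i < length; i++) {
            sb.append(Array.get(array, i)); // boxes primitive elements automatically
            if (i < length - 1) sb.append(", ");
        }
        return sb.append("}").toString();
    }

    public static void main(String[] args) {
        System.out.println(print(new int[]{1, 2, 3}));
        System.out.println(print(new String[]{"one", "two"}));
    }
}
```

The price is static type safety: the compiler can no longer reject a non-array argument, and the failure moves to runtime.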
It is possible to use Reflection to deal with both at the same time. Reflection has `Array.get()` and `Array.set()` methods that will work on primitive arrays and reference arrays alike. However, you lose type safety by doing this as the only supertype of both primitive arrays and reference arrays is `Object`. | Can you pass an int array to a generic method in java? | [
"",
"java",
"generics",
""
] |
On my journey to learning MVVM I've established some basic understanding of WPF and the ViewModel pattern. I'm using the following abstraction when providing a list and am interested in a single selected item.
```
public ObservableCollection<OrderViewModel> Orders { get; private set; }
public ICollectionView OrdersView
{
get
{
if( _ordersView == null )
_ordersView = CollectionViewSource.GetDefaultView( Orders );
return _ordersView;
}
}
private ICollectionView _ordersView;
public OrderViewModel CurrentOrder
{
get { return OrdersView.CurrentItem as OrderViewModel; }
set { OrdersView.MoveCurrentTo( value ); }
}
```
I can then bind the OrdersView along with supporting sorting and filtering to a list in WPF:
```
<ListView ItemsSource="{Binding Path=OrdersView}"
IsSynchronizedWithCurrentItem="True">
```
This works really well for single selection views. But I'd like to also support multiple selections in the view and have the model bind to the list of selected items.
How would I bind the ListView.SelectedItems to a backer property on the ViewModel? | Add an `IsSelected` property to your child ViewModel (`OrderViewModel` in your case):
```
public bool IsSelected { get; set; }
```
Bind the selected property on the container to this (for ListBox in this case):
```
<ListBox.ItemContainerStyle>
<Style TargetType="{x:Type ListBoxItem}">
<Setter Property="IsSelected" Value="{Binding Mode=TwoWay, Path=IsSelected}"/>
</Style>
</ListBox.ItemContainerStyle>
```
`IsSelected` is updated to match the corresponding field on the container.
You can get the selected children in the view model by doing the following:
```
public IEnumerable<OrderViewModel> SelectedOrders
{
get { return Orders.Where(o => o.IsSelected); }
}
``` | I can assure you: `SelectedItems` is indeed bindable as a XAML `CommandParameter`
There is a simple solution to this common issue; to make it work you must follow **ALL** the following rules:
1. Following [Ed Ball's suggestion](https://stackoverflow.com/a/336258/378115), on your XAML command databinding, define the `CommandParameter` attribute **BEFORE** the `Command` attribute. *This is a very time-consuming bug.*

2. Make sure your `ICommand`'s `CanExecute` and `Execute` methods have a parameter of type `object`. This way you can prevent *silenced* cast exceptions that occur whenever the databinding's `CommandParameter` type does not match your `Command` method's parameter type:
```
private bool OnDeleteSelectedItemsCanExecute(object SelectedItems)
{
// Your code goes here
}
private void OnDeleteSelectedItemsExecute(object SelectedItems)
{
// Your code goes here
}
```
For example, you can either send a `ListView`/`ListBox`'s `SelectedItems` property to your `ICommand` methods or the `ListView`/`ListBox` itself. Great, isn't it?
I hope this prevents someone from spending the huge amount of time I did to figure out how to receive `SelectedItems` as a `CanExecute` parameter. | Managing multiple selections with MVVM | [
"",
"c#",
".net",
"data-binding",
"mvvm",
""
] |
I'm writing a WPF desktop application and have a view that needs to look like a calendar. I only need to show four weeks (fixed). Each week will have a list of items (probably represented with a ListView). I'm debating between two approaches:
1. Using layout controls - a Grid control where each "cell" contains a ListView
2. Using a ListView + GridView and then customizing the style to look more like a calendar
In my opinion, the layout controls would be easier to implement but something about it feels dirty. I would essentially be hard-coding 28 cells each containing a ListView. But again, it would be simple.
I'm thinking the ListView/GridView would be more elegant, but I still need to create a ListView for each cell and then I have to figure out how to style the control in such a way that it looks more like a calendar and less like a table.
Thoughts? | You could use a ListBox with a UniformGrid; that would take care of building the base layout, with a little triggering on the current month to change the style when the displayed day is not in the current month.
Then, each day can have a ListView or ListBox as part of its template, for displaying tasks/meetings/whatever.
The whole thing would bind to a collection of objects that represents the "days" and their content.
ListBox is usually a good bet for the base of an ItemsControl.
You should also have a look at how the Calendar control is built in WPF, it might help give you some ideas. | Perhaps, the following example can help you:
<http://www.codeproject.com/KB/WPF/WPFOutlookCalendar.aspx> | In WPF, what is the best way to create a calendar view similar to Outlook 2007? | [
"",
"c#",
".net",
"wpf",
"wpf-controls",
""
] |
Does anyone know how to get the current build configuration `$(Configuration)` in C# code? | **Update**
Egor's answer to this question ([here](https://stackoverflow.com/a/60545278/18797) in this answer list) is the correct answer.
~~You can't, not really.
What you can do is define some "Conditional Compilation Symbols", if you look at the "Build" page of you project settings, you can set these there, so you can write #if statements to test them.~~
A DEBUG symbol is automatically injected (by default, this can be switched off) for debug builds.
So you can write code like this
```
#if DEBUG
RunMyDEBUGRoutine();
#else
RunMyRELEASERoutine();
#endif
```
However, don't do this unless you've good reason. An application that works with different behavior between debug and release builds is no good to anyone. | There is [AssemblyConfigurationAttribute](https://learn.microsoft.com/en-us/dotnet/api/system.reflection.assemblyconfigurationattribute?f1url=https%3A%2F%2Fmsdn.microsoft.com%2Fquery%2Fdev16.query%3FappId%3DDev16IDEF1%26l%3DEN-US%26k%3Dk(System.Reflection.AssemblyConfigurationAttribute);k(DevLang-csharp)%26rd%3Dtrue&view=netframework-4.8) in .NET. You can use it in order to get name of build configuration
```
var assemblyConfigurationAttribute = typeof(CLASS_NAME).Assembly.GetCustomAttribute<AssemblyConfigurationAttribute>();
var buildConfigurationName = assemblyConfigurationAttribute?.Configuration;
``` | How to obtain build configuration at runtime? | [
"",
"c#",
"configuration",
""
] |
I'm having a problem with the call to SQLGetDiagRec. It works fine in ASCII mode, but in Unicode it causes our app to crash, and I just can't see why. All the documentation I've been able to find seems to indicate that it should handle the ASCII/Unicode switch internally. The code I'm using is:
```
void clImportODBCFileTask::get_sqlErrorInfo( const SQLSMALLINT _htype, const SQLHANDLE _hndle )
{
SQLTCHAR SqlState[6];
SQLTCHAR Msg[SQL_MAX_MESSAGE_LENGTH];
SQLINTEGER NativeError;
SQLSMALLINT i, MsgLen;
SQLRETURN nRet;
memset ( SqlState, 0, sizeof(SqlState) );
memset ( Msg, 0, sizeof(Msg) );
// Get the status records.
i = 1;
//JC - 2009/01/16 - Start fix for bug #26878
m_oszerrorInfo.Empty();
nRet = SQLGetDiagRec(_htype, _hndle, i, SqlState, &NativeError, Msg, sizeof(Msg), &MsgLen);
m_oszerrorInfo = Msg;
}
```
everything is alright until this function tries to return, then the app crashes. It never gets back to the line of code after the call to get\_sqlErrorInfo.
I know that's where the problem is because I've put diagnostics code in and it gets past the SQLGetDiagRec okay and finishes this function.
If i comment the SQLGetDiagRec line it works fine.
It always works fine on my development box whether or not it's running release or debug.
Any help on this problem would be greatly appreciated.
Thanks | Well, I found the correct answer, so I thought I would include it here for future reference. The documentation I saw was wrong: SQLGetDiagRec doesn't handle Unicode; I needed to use SQLGetDiagRecW. | The problem is probably in the `sizeof(Msg)`. It should be the number of characters:
```
sizeof(Msg) / sizeof(TCHAR)
``` | SQLGetDiagRec causes crash in Unicode release build | [
"",
"c++",
"unicode",
"odbc",
""
] |
I'm an experienced Java programmer who for the last two years has
programmed out of necessity in C# and JavaScript. With these two languages
I have used some interesting features like closures and anonymous functions (in effect, with C/C++ I had already used function pointers) and I've appreciated a lot how the code
has become clearer and my style more productive. The event management (event delegation pattern) is also clearer than that used by Java...
Now, in my opinion, it seems that Java is not as innovative as it was in the past... but
why???
C# is evolving (with a lot of new features), C++0x is evolving (it will support lambda expressions, closures and a lot of new features) and
I'm frustrated that after spending a lot of time with Java programming it is decaying without any good explanation, and JDK 7 will have nothing innovative in the language features (yes, it will optimize the GC, the compiler, etc.) but the language itself
will have few important evolutionary changes.
So, what will the future be? How can we still believe in Java? Gosling, where are you??? | I am probably not half as good as some of the programmers who have left their comments, but with my current level of intelligence this is what I think -
If a language makes programming easier / more expressive / more concise, then is it not a good thing? Is the evolution of languages not a good thing?
If C and C++ are excellent languages because they have been used for decades, then why did Java become so popular? I guess that's because Java helped in getting rid of some annoying problems and reduced maintenance costs. How many large-scale applications are now written in C++ and how many in Java?
I doubt whether the argument of not changing something is better than changing something for a good reason. | C has not changed much in years, yet it remains one of the most popular languages. I don't believe Java has to add syntactic sugar to remain relevant. Believe me, Java is here for a long time yet. Far better for Java would be reified generics.
You don't have to believe in Java; if you don't like it, choose another language, there are many. Java's survival will hinge on business interest, and whether it can achieve business goals. Not on whether it's cool or not. | Where is Java going? | [
"",
"java",
"programming-languages",
""
] |
How to get currently running application without using a system process? | It depends on what you look for. If you are interested in the assembly that is calling you,then you can use [GetCallingAssembly](http://msdn.microsoft.com/en-us/library/system.reflection.assembly.getcallingassembly.aspx). You could also use [GetExecutingAssembly](http://msdn.microsoft.com/en-us/library/system.reflection.assembly.getexecutingassembly.aspx). | ```
System.Diagnostics.Process.GetProcesses("MACHINENAME")
``` | C#: find current process | [
"",
"c#",
""
] |
When it comes to Java web development such as JSP, JSPX and others:
1. Which IDE do you prefer, Eclipse or NetBeans?
2. What are its advantages and disadvantages?
Which is preferred in terms of developing web applications such as websites, web services and more? I am considering NetBeans because it already bundles some features that allow you to create and test web applications. But is there a good reason to choose Eclipse WTP? | From a micro perspective, NetBeans is a more consistent product with certain parts more polished, such as the update manager. I am sure you will find everything you need in there.
Eclipse is sometimes a little less stable simply because there is still a lot of work going on and the plugin system is usable at best. Eclipse will be faster because it uses SWT, which creates the UI using native code (so it will look prettier as well).
From a macro perspective though, I'm sure you've heard on the news of the recent acquisition of Sun by Oracle. Well, let's just say I'm pretty sure NetBeans is pretty low on Oracle's priorities. On the other hand, Eclipse has Big Blue (IBM) backing it. So, in the long run, if you don't want to end up in a dead end, go for Eclipse. | I used both Eclipse and NetBeans. I like NetBeans more than Eclipse. From a Java editor point of view, both have excellent context-sensitive help and the usual goodies.
Eclipse sucks when it comes to setting up projects that other team members can open and use. We have a big project (around 600K lines of code) organized in many folders. Eclipse won't let you include source code that is outside the project root folder. Everything has to be below the project root folder. Usually you want to have individual projects and be able to establish dependencies among them. Once it builds, you would check them into your source control. The problem with Eclipse is that a project (i.e .classpath file) dependencies are saved in user's workspace folder. If you care to see this folder, you will find many files that read like *org.eclipse.\** etc. What it means is that you can't put those files in your source control. We have 20 step instruction sheet for someone to go through each time they start a fresh checkout from source control. We ended up not using its default project management stuff (i.e. classpath file etc). Rather we came up with an Ant build file and launch it from inside Eclipse. That is kludgy way. If you had to jump through these many hoops, the IDE basically failed.
I bet Eclipse's project management was designed by people who never used an IDE. Many IDEs let you have different configurations to run your code (Release, Debug, Release with JDK 1.5, etc.) and let you save them as part of your project file, so everyone in the team can use them without a big learning curve. You can create configurations in Eclipse, but you can't save them as part of your project file (i.e. they won't go into your source control). I work on half a dozen fresh checkouts in a span of 6 months, and I get tired of recreating them with each fresh checkout.
On the other hand, NetBeans works as expected. It doesn't have this project management nightmare.
I heard good things about IntelliJ. If you are starting fresh, go with NetBeans.
My 2 cents. | Eclipse Web Tools Platform (WTP) vs NetBeans - IDE for Java Web Development | [
"",
"java",
"eclipse",
"netbeans",
""
] |
I am using the Hibernate, Spring, and Struts frameworks for my application.
In my application, each table has a field called Version for tracking updates to records.
Whenever I update an existing record of my Country table that has version 0, it works fine and updates the version field to 1.
But whenever I try to update that version 1 record, it gives me the following error:
```
org.springframework.orm.hibernate3.HibernateOptimisticLockingFailureException: Object of class [com.sufalam.business.marketing.model.bean.Country] with identifier [3]: optimistic locking failed; nested exception is org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect): [com.company.business.marketing.model.bean.Country#3]
```
Is there any way to resolve it? | The version column of Hibernate allows you to implement [optimistic concurrency control](http://en.wikipedia.org/wiki/Optimistic_concurrency_control).
Every time an object is about to be updated, Hibernate checks whether the version value stored in the database is the same as the version value in the object. If the two are different, a StaleObjectStateException is thrown, meaning someone else has updated the object while the current session loaded, edited and stored it.
You have to make sure that the version value in your object is set to the correct value. Sometimes if you detach objects from the session and reattach them (merge), the version column value is not set correctly (e.g. in web applications when values are retrieved from forms). | "Row was updated or deleted by another transaction". The last time I received this, I was doing things spread across more than one org.hibernate.Session object. | Hibernate @Version Field error | [
"",
"java",
"hibernate",
""
] |
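The optimistic-locking check described in the accepted answer above can be sketched in plain, runnable Java. This is only an illustration of the version-check idea with invented class and method names — Hibernate itself does the equivalent in SQL with `UPDATE ... SET version = version + 1 WHERE id = ? AND version = ?`:

```java
import java.util.HashMap;
import java.util.Map;

/** Illustration only: mimics the version check Hibernate performs, not its real code. */
class VersionedTable {
    private final Map<Integer, Integer> versionById = new HashMap<>();

    void insert(int id) {
        versionById.put(id, 0); // new rows start at version 0
    }

    /** Succeeds only when the caller's version matches the stored one. */
    int update(int id, int expectedVersion) {
        Integer current = versionById.get(id);
        if (current == null || current != expectedVersion) {
            // Hibernate throws StaleObjectStateException in this situation
            throw new IllegalStateException("Row was updated or deleted by another transaction");
        }
        versionById.put(id, current + 1);
        return current + 1; // the caller must carry this new version into the next update
    }
}
```

The failure in the question is exactly the stale branch firing: the detached Country object still carries version 0 while the database row is already at version 1, so the second update must start from a freshly loaded (or correctly merged) entity.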
I need to store a multi-dimensional associative array of data in a flat file for caching purposes. I might occasionally come across the need to convert it to JSON for use in my web app but the vast majority of the time I will be using the array directly in PHP.
Would it be more efficient to store the array as JSON or as a PHP serialized array in this text file? I've looked around and it seems that in the newest versions of PHP (5.3), `json_decode` is actually faster than `unserialize`.
I'm currently leaning towards storing the array as JSON as I feel it's easier for a human to read if necessary, it can be used in both PHP and JavaScript with very little effort, and from what I've read, it might even be faster to decode (not sure about encoding, though).
Does anyone know of any pitfalls? Anyone have good benchmarks to show the performance benefits of either method? | Depends on your priorities.
If performance is your absolute driving characteristic, then by all means use the fastest one. Just make sure you have a full understanding of the differences before you make a choice:
* Unlike `serialize()`, you need to add an extra parameter to keep UTF-8 characters untouched: `json_encode($array, JSON_UNESCAPED_UNICODE)` (otherwise it converts UTF-8 characters to Unicode escape sequences).
* JSON will have no memory of what the object's original class was (they are always restored as instances of stdClass).
* You can't leverage `__sleep()` and `__wakeup()` with JSON
* By default, only public properties are serialized with JSON. (in `PHP>=5.4` you can implement [JsonSerializable](http://php.net/manual/en/class.jsonserializable.php) to change this behavior).
* JSON is more portable
And there's probably a few other differences I can't think of at the moment.
A simple speed test to compare the two:
```
<?php
ini_set('display_errors', 1);
error_reporting(E_ALL);
// Make a big, honkin test array
// You may need to adjust this depth to avoid memory limit errors
$testArray = fillArray(0, 5);
// Time json encoding
$start = microtime(true);
json_encode($testArray);
$jsonTime = microtime(true) - $start;
echo "JSON encoded in $jsonTime seconds\n";
// Time serialization
$start = microtime(true);
serialize($testArray);
$serializeTime = microtime(true) - $start;
echo "PHP serialized in $serializeTime seconds\n";
// Compare them
if ($jsonTime < $serializeTime) {
printf("json_encode() was roughly %01.2f%% faster than serialize()\n", ($serializeTime / $jsonTime - 1) * 100);
}
else if ($serializeTime < $jsonTime ) {
printf("serialize() was roughly %01.2f%% faster than json_encode()\n", ($jsonTime / $serializeTime - 1) * 100);
} else {
echo "Impossible!\n";
}
function fillArray( $depth, $max ) {
static $seed;
if (is_null($seed)) {
$seed = array('a', 2, 'c', 4, 'e', 6, 'g', 8, 'i', 10);
}
if ($depth < $max) {
$node = array();
foreach ($seed as $key) {
$node[$key] = fillArray($depth + 1, $max);
}
return $node;
}
return 'empty';
}
``` | **JSON** is simpler and faster than PHP's serialization format and should be used **unless**:
* You're storing deeply nested arrays:
[`json_decode()`](http://www.php.net/json_decode): "This function will return false if the JSON encoded data is deeper than 127 elements."
* You're storing objects that need to be unserialized as the correct class
* You're interacting with old PHP versions that don't support json\_decode | Preferred method to store PHP arrays (json_encode vs serialize) | [
"",
"php",
"performance",
"arrays",
"json",
"serialization",
""
] |
I may be asking this incorrectly, but can you (and if so, how can you) find fields on a class from within itself... for example...
```
public class HtmlPart {
public void Render() {
//this.GetType().GetCustomAttributes(typeof(OptionalAttribute), false);
}
}
public class HtmlForm {
private HtmlPart _FirstPart = new HtmlPart();
[Optional] //<-- how do I find that?
private HtmlPart _SecondPart = new HtmlPart();
}
```
Or maybe I'm just doing this incorrectly... How can I call a method and then check for attributes applied to itself?
**Also, for the sake of the question** - I'm just curious whether it is possible to find attribute information *without knowing/accessing the parent class!* | If I understand your question correctly, I think what you are trying to do is not possible...
In the `Render` method, you want to get a possible attribute applied to the object. The attribute belongs to the field `_SecondPart`, which belongs to the class `HtmlForm`.
For that to work you would have to pass the calling object to the `Render` method:
```
public class HtmlPart {
public void Render(object obj) {
FieldInfo[] infos = obj.GetType().GetFields(BindingFlags.NonPublic | BindingFlags.Public | BindingFlags.Instance);
foreach (var fi in infos)
{
if (fi.GetValue(obj) == this && fi.IsDefined(typeof(OptionalAttribute), true))
Console.WriteLine("Optional is Defined");
}
}
}
``` | Here's an example of how, given a single object, to find whether any public or private fields on that object have a specific attribute:
```
var type = typeof(MyObject);
foreach (var field in type.GetFields(BindingFlags.Public |
BindingFlags.NonPublic | BindingFlags.Instance))
{
if (field.IsDefined(typeof(ObsoleteAttribute), true))
{
Console.WriteLine(field.Name);
}
}
```
For the second part of your question, you can check whether an attribute is defined on the current method using:
```
MethodInfo.GetCurrentMethod().IsDefined(typeof(ObsoleteAttribute));
```
**Edit**
To answer your edit: yes, it is possible without knowing the actual type. The following function takes a Type parameter and returns all fields which have a given attribute. Someone somewhere is going to either know the Type you want to search, or have an instance of the type you want to search.
Without that, you'd have to do as Jon Skeet said, which is to enumerate over all objects in an assembly.
```
public List<FieldInfo> FindFields(Type type, Type attribute)
{
var fields = new List<FieldInfo>();
foreach (var field in type.GetFields(BindingFlags.Public |
BindingFlags.NonPublic |
BindingFlags.Instance))
{
if (field.IsDefined(attribute, true))
{
fields.Add(field);
}
}
return fields;
}
``` | C# Reflection : Finding Attributes on a Member Field | [
"",
"c#",
"reflection",
"attributes",
"field",
""
] |
I recently answered this question [how-to-call-user-defined-function-in-order-to-use-with-select-group-by-order-by](https://stackoverflow.com/questions/829089/how-to-call-user-defined-function-in-order-to-use-with-select-group-by-order-by/829106#829106)
My answer was to use an inline view to perform the function and then group on that.
In the comments, the asker has not understood my response and has asked for some sites/references to help explain it.
I've done a quick Google search and haven't found any great resources that explain in detail what an inline view is and where it is useful.
Does anyone have anything that can help to explain what an inline view is? | From [here](http://www.orafaq.com/wiki/Inline_view):
An inline view is a SELECT statement in the FROM clause of another SELECT statement. Inline views are commonly used to simplify complex queries by removing join operations and condensing several separate queries into a single query. | I think another term (possibly a SQL Server term) is 'derived table'
For instance, this article:
<http://www.mssqltips.com/tip.asp?tip=1042>
or
<http://www.sqlteam.com/article/using-derived-tables-to-calculate-aggregate-values> | T-SQL - What is an inline-view? | [
"",
"sql",
"t-sql",
""
] |
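A concrete query may make the quoted definition clearer. The table and column names below are invented for illustration:

```sql
-- The parenthesized SELECT in the FROM clause is the inline view
-- (SQL Server's docs call the same construct a derived table).
SELECT d.dept_id,
       d.avg_salary
FROM (SELECT dept_id, AVG(salary) AS avg_salary
      FROM employees
      GROUP BY dept_id) AS d   -- the alias "d" names the inline view
WHERE d.avg_salary > 50000;
```

This is the same shape as the answer the question links to: compute the expression once in the inner SELECT, then filter, group, or order by its alias in the outer one.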
I can name three advantages to using `double` (or `float`) instead of `decimal`:
1. Uses less memory.
2. Faster because floating point math operations are natively supported by processors.
3. Can represent a larger range of numbers.
But these advantages seem to apply only to calculation intensive operations, such as those found in modeling software. Of course, doubles should not be used when precision is required, such as financial calculations. So are there any practical reasons to ever choose `double` (or `float`) instead of `decimal` in "normal" applications?
Edited to add:
Thanks for all the great responses, I learned from them.
One further question: A few people made the point that doubles can more precisely represent real numbers. When declared, I would think that they usually represent them more accurately as well. But is it true that the accuracy may decrease (sometimes significantly) when floating point operations are performed? | I think you've summarised the advantages quite well. You are, however, missing one point. The [`decimal`](http://msdn.microsoft.com/en-us/library/system.decimal.aspx) type is only more accurate at representing *base 10* numbers (e.g. those used in currency/financial calculations). In general, the [`double`](http://msdn.microsoft.com/en-us/library/system.double.aspx) type is going to offer at least as great precision (someone correct me if I'm wrong) and definitely greater speed for arbitrary real numbers. The simple conclusion is: when considering which to use, always use `double` unless you need the `base 10` accuracy that `decimal` offers.
**Edit:**
Regarding your additional question about the decrease in accuracy of floating-point numbers after operations, this is a slightly more subtle issue. Indeed, precision (I use the term interchangeably for accuracy here) will steadily decrease after each operation is performed. This is due to two reasons:
1. the fact that certain numbers (most obviously decimals) can't be truly represented in floating point form
2. rounding errors occur, just as if you were doing the calculation by hand. It depends greatly on the context (how many operations you're performing) whether these errors are significant enough to warrant much thought however.
In all cases, if you want to compare two floating-point numbers that should in theory be equivalent (but were arrived at using different calculations), you need to allow a certain degree of tolerance (how much varies, but is typically very small).
For a more detailed overview of the particular cases where errors in accuracy can be introduced, see the Accuracy section of the [Wikipedia article](http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems). Finally, if you want a seriously in-depth (and mathematical) discussion of floating-point numbers/operations at machine level, try reading the oft-quoted article [*What Every Computer Scientist Should Know About Floating-Point Arithmetic*](https://docs.oracle.com/cd/E19957-01/800-7895/800-7895.pdf). | You seem spot on with the benefits of using a floating point type. I tend to design for decimals in all cases, and rely on a profiler to let me know if operations on decimals are causing bottlenecks or slow-downs. In those cases, I will "down cast" to double or float, but only do it internally, and carefully try to manage precision loss by limiting the number of significant digits in the mathematical operation being performed.
In general, if your value is transient (not reused), you're safe to use a floating point type. The real problem with floating point types is the following three scenarios.
1. You are aggregating floating point values (in which case the precision errors compound)
2. You build values based on the floating point value (for example in a recursive algorithm)
3. You are doing math with a very wide number of significant digits (for example, `123456789.1 * .000000000000000987654321`)
**EDIT**
According to the [reference documentation on C# decimals](http://msdn.microsoft.com/en-us/library/364x0z75(VS.80).aspx):
> The **decimal** keyword denotes a
> 128-bit data type. Compared to
> floating-point types, the decimal type
> has a greater precision and a smaller
> range, which makes it suitable for
> financial and monetary calculations.
So to clarify my above statement:
> I tend to design for decimals in all
> cases, and rely on a profiler to let
> me know if operations on decimal is
> causing bottlenecks or slow-downs.
I have only ever worked in industries where decimals are favorable. If you're working on physics or graphics engines, it's probably much more beneficial to design for a floating point type (float or double).
Decimal is not infinitely precise (it is impossible to represent infinite precision for non-integral values in a primitive data type), but it is far more precise than double:
* decimal = 28-29 significant digits
* double = 15-16 significant digits
* float = 7 significant digits
**EDIT 2**
In response to [Konrad Rudolph](https://stackoverflow.com/users/1968/konrad-rudolph)'s comment, item # 1 (above) is definitely correct. Aggregation of imprecision does indeed compound. See the below code for an example:
```
private const float THREE_FIFTHS = 3f / 5f;
private const int ONE_MILLION = 1000000;
public static void Main(string[] args)
{
Console.WriteLine("Three Fifths: {0}", THREE_FIFTHS.ToString("F10"));
float asSingle = 0f;
double asDouble = 0d;
decimal asDecimal = 0M;
for (int i = 0; i < ONE_MILLION; i++)
{
asSingle += THREE_FIFTHS;
asDouble += THREE_FIFTHS;
asDecimal += (decimal) THREE_FIFTHS;
}
Console.WriteLine("Six Hundred Thousand: {0:F10}", THREE_FIFTHS * ONE_MILLION);
Console.WriteLine("Single: {0}", asSingle.ToString("F10"));
Console.WriteLine("Double: {0}", asDouble.ToString("F10"));
Console.WriteLine("Decimal: {0}", asDecimal.ToString("F10"));
Console.ReadLine();
}
```
This outputs the following:
```
Three Fifths: 0.6000000000
Six Hundred Thousand: 600000.0000000000
Single: 599093.4000000000
Double: 599999.9999886850
Decimal: 600000.0000000000
```
As you can see, even though we are adding from the same source constant, the result of the double is less precise (although it will probably round correctly), and the float is far less precise, to the point where it has been reduced to only two significant digits. | When should I use double instead of decimal? | [
"",
"c#",
"types",
"floating-point",
"double",
"decimal",
""
] |
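The tolerance comparison the accepted answer mentions is language-agnostic; here is a minimal runnable sketch (written in Java purely so it can be executed here — in C# the check has the same shape, `Math.Abs(a - b) < eps` — and the tolerance constant is an arbitrary illustrative value):

```java
/** Why exact equality fails for binary floating point, and the usual tolerance fix. */
class FloatCompare {
    // Absolute tolerance; real code should pick one scaled to the magnitudes involved.
    static final double EPS = 1e-9;

    static boolean nearlyEqual(double a, double b) {
        return Math.abs(a - b) < EPS;
    }

    public static void main(String[] args) {
        double sum = 0.1 + 0.2; // neither operand is exactly representable in base 2
        System.out.println(sum == 0.3);            // false
        System.out.println(nearlyEqual(sum, 0.3)); // true
    }
}
```

The same representation error is what compounds in the aggregation scenarios described in the other answer.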
I have a situation where I'm refactoring old code, taking apart an old monster project and splitting it (for various reasons) into smaller sub projects. One project is going to end up containing mostly interfaces while their associated implementations are in another project, and I'm not sure about the best way of setting up the package structure.
Should I go for
> *org.company.interfaceproject.util.InterfaceClass* and
> *org.company.implementationproject.util.ImplementationClass*
or
> *org.company.project.util.InterfaceClass* and
> *org.company.project.util.ImplementationClass*
where the first option has the advantage of pointing out to which project the files belong, while the second one doesn't mix in the fact that the files are in different projects at all.
I guess there is no right and wrong here, but I'm curious if anybody has any opinions on the matter. | Yes, you just need to come up with a naming convention. Usually a combination of both has suited our company and avoids ambiguity. For example, say you had an interface:
```
org.company.service.UserService
```
Then, we would use the following for the implementation class that was wired by, or had, spring dependencies:
```
org.company.service.spring.UserServiceImpl
```
This then has the best of both viewpoints:
1. You have the classes cleanly in a separate package
2. Using this class name convention, it's clear that it's an implementation of `UserService`, and still distinguishable even when both packages are imported. | Both have merits. It ultimately depends on your intentions for the project. If your intent is to eventually create alternate implementations of the interfaces, it may make more sense to go with option 1. If this will be the only implementation of the interfaces, option 2 would be more reasonable. | How to sort associated classes from different projects | [
"",
"java",
"packaging",
""
] |
I was recently reading [this thread](https://stackoverflow.com/questions/233030/worst-php-practice-found-in-your-experience/233746), on some of the worst PHP practices.
In the second answer there is a mini discussion on the use of `extract()`, and I'm just wondering what all the fuss is about.
I personally use it to chop up a given array such as `$_GET` or `$_POST` where I then sanitize the variables later, as they have been conveniently named for me.
Is this bad practice? What is the risk here? What are your thoughts on the use of `extract()`? | I find that it is only bad practice in that it can lead to a number of variables that future maintainers (or you yourself in a few weeks) will have no idea where they came from. Consider this scenario:
```
extract($someArray); // could be $_POST or anything
/* snip a dozen or more lines */
echo $someVariable;
```
Where did `$someVariable` come from? How can anyone tell?
I don't see the problem in accessing the variables from within the array they started in, so you'd really need to present a good case *for* using `extract()` for me to think it's worth it. If you're really concerned about typing out some extra characters then just do this:
```
$a = $someLongNameOfTheVariableArrayIDidntWantToType;
$a['myVariable'];
```
I think the comments here on the security aspects of it are overblown somewhat. The function can take a second parameter that actually gives you fairly good control over the newly created variables, including not overwriting any existing variables (`EXTR_SKIP`), ONLY overwriting existing variables (so you can create a whitelist) (`EXTR_IF_EXISTS`), or adding prefixes to the variables (`EXTR_PREFIX_ALL`). | Come on now. People blame the tool instead of the user.
That's like talking against `unlink()` because you can delete files with it. `extract()` is a function like any other, use it wisely and responsibly. But don't claim it's bad per se, that's just ignorant. | What is so wrong with extract()? | [
"",
"php",
"arrays",
"extract",
""
] |
I have a singleton bean which needs, on each call of a function, to return a reference to a different (new) prototype bean. The only way that I can think of doing this is to programmatically retrieve a new prototype bean instance from the BeanFactory/ApplicationContext by invoking its getBean() method. A code sample follows...
Is there a better way to do this? Only via configuration, hopefully? (Personally, I doubt there is...)
```
<bean id="protoBean" scope="prototype"
class="com.blahblah.ProtoBean" />
<bean id="singletonBean"
class="com.blahblah.SingletonBean" />
public class ProtoBean {
....
}
public class SingletonBean {
private BeanFactory factory;
public ProtoBean dispense() {
return (ProtoBean) factory.getBean("protoBean");
}
....
}
``` | take a look at [Method Injection](http://static.springframework.org/spring/docs/2.5.x/reference/beans.html#beans-factory-method-injection) | From Spring 3.0, we can use `<aop:scoped-proxy>` for dependency injection of the proper scope. Behind the scenes, Spring injects proxied objects and is responsible for finding the right scope context, be it prototype, session, request, etc. See the official documentation [here](http://static.springsource.org/spring/docs/3.0.x/spring-framework-reference/html/beans.html#beans-factory-scopes-other-injection).
And to make life easier, Spring has also introduced the proxyMode attribute for @Scope, so we are not limited to XML declarations only. For example:
```
@Scope(value = "prototype", proxyMode = ScopedProxyMode.INTERFACES)
```
Make sure to document clearly that the injected bean is a proxy, to warn others that getClass() and casting may not yield the expected result. Also, make sure equals() and hashCode() in the proxied class use accessor methods rather than directly accessing class variables. | Spring prototype beans in combination with singleton beans and dependency injection. Is there an approach that is configuration only? | [
"",
"java",
"spring",
""
] |
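For reference, the method injection the accepted answer links to boils down to a `lookup-method` declaration; a minimal configuration sketch reusing the bean names from the question (everything else is illustrative, not a drop-in file):

```xml
<bean id="protoBean" scope="prototype" class="com.blahblah.ProtoBean"/>

<!-- Spring subclasses SingletonBean at runtime and overrides dispense()
     so that every call returns a fresh protoBean instance. -->
<bean id="singletonBean" class="com.blahblah.SingletonBean">
    <lookup-method name="dispense" bean="protoBean"/>
</bean>
```

`SingletonBean` then declares `public abstract ProtoBean dispense();` and no longer needs a `BeanFactory` reference at all.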
The Windows API function `CopyFile` has an argument `BOOL bFailIfExists` that allows you to control whether or not you want to overwrite the target file if it exists.
The `boost::filesystem` `copy_file` function has no such argument, and will fail if the target file exists. Is there an elegant way to use the boost copy\_file function and overwrite the target file? Or is it better to simply use the Windows API? My current target platform is Windows, but I prefer to use STL and boost where possible to keep my code platform independent.
Thank you. | There's a third **enum** argument to copy\_file, **boost::filesystem::copy\_option::overwrite\_if\_exists**
```
copy_file(source_path, destination_path, copy_option::overwrite_if_exists);
```
<https://www.boost.org/doc/libs/1_75_0/libs/filesystem/doc/reference.html> | Beware of boost::copy\_file with copy\_option::overwrite\_if\_exists!
If the destination file exists and it is smaller than the source, the function will only overwrite the first size(from\_file) bytes in the target file.
At least for me this was a caveat since I presumed copy\_option::overwrite\_if\_exists affects *files* and not *content* | how to perform boost::filesystem copy_file with overwrite | [
"",
"c++",
"windows",
"boost",
"boost-filesystem",
""
] |
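As an aside, boost::filesystem was later standardized as C++17's `std::filesystem`, which carries the same overwrite option; a runnable sketch using only the standard library (file names in any usage are illustrative — the Boost spelling is in the accepted answer above):

```cpp
#include <cassert>
#include <filesystem>
#include <fstream>
#include <string>

namespace fs = std::filesystem;

// Copy `from` over `to`, replacing `to` entirely if it already exists.
// Returns true when the contents were copied.
bool copy_overwrite(const fs::path& from, const fs::path& to) {
    return fs::copy_file(from, to, fs::copy_options::overwrite_existing);
}
```

Unlike the old Boost behaviour warned about in the other answer, this standard-library call replaces the whole target file even when the source is smaller.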
I've been playing with collections and threading and came across the nifty extension methods people have created to ease the use of ReaderWriterLockSlim by allowing the IDisposable pattern.
However, I believe I have come to realize that something in the implementation is a performance killer. I realize that extension methods are not supposed to really impact performance, so I am left assuming that something in the implementation is the cause... perhaps the number of Disposable structs created/collected?
Here's some test code:
```
using System;
using System.Collections.Generic;
using System.Threading;
using System.Diagnostics;
namespace LockPlay {
static class RWLSExtension {
struct Disposable : IDisposable {
readonly Action _action;
public Disposable(Action action) {
_action = action;
}
public void Dispose() {
_action();
}
} // end struct
public static IDisposable ReadLock(this ReaderWriterLockSlim rwls) {
rwls.EnterReadLock();
return new Disposable(rwls.ExitReadLock);
}
public static IDisposable UpgradableReadLock(this ReaderWriterLockSlim rwls) {
rwls.EnterUpgradeableReadLock();
return new Disposable(rwls.ExitUpgradeableReadLock);
}
public static IDisposable WriteLock(this ReaderWriterLockSlim rwls) {
rwls.EnterWriteLock();
return new Disposable(rwls.ExitWriteLock);
}
} // end class
class Program {
class MonitorList<T> : List<T>, IList<T> {
object _syncLock = new object();
public MonitorList(IEnumerable<T> collection) : base(collection) { }
T IList<T>.this[int index] {
get {
lock(_syncLock)
return base[index];
}
set {
lock(_syncLock)
base[index] = value;
}
}
} // end class
class RWLSList<T> : List<T>, IList<T> {
ReaderWriterLockSlim _rwls = new ReaderWriterLockSlim();
public RWLSList(IEnumerable<T> collection) : base(collection) { }
T IList<T>.this[int index] {
get {
try {
_rwls.EnterReadLock();
return base[index];
} finally {
_rwls.ExitReadLock();
}
}
set {
try {
_rwls.EnterWriteLock();
base[index] = value;
} finally {
_rwls.ExitWriteLock();
}
}
}
} // end class
class RWLSExtList<T> : List<T>, IList<T> {
ReaderWriterLockSlim _rwls = new ReaderWriterLockSlim();
public RWLSExtList(IEnumerable<T> collection) : base(collection) { }
T IList<T>.this[int index] {
get {
using(_rwls.ReadLock())
return base[index];
}
set {
using(_rwls.WriteLock())
base[index] = value;
}
}
} // end class
static void Main(string[] args) {
const int ITERATIONS = 100;
const int WORK = 10000;
const int WRITE_THREADS = 4;
const int READ_THREADS = WRITE_THREADS * 3;
// create data - first List is for comparison only... not thread safe
int[] copy = new int[WORK];
IList<int>[] l = { new List<int>(copy), new MonitorList<int>(copy), new RWLSList<int>(copy), new RWLSExtList<int>(copy) };
// test each list
Thread[] writeThreads = new Thread[WRITE_THREADS];
Thread[] readThreads = new Thread[READ_THREADS];
foreach(var list in l) {
Stopwatch sw = Stopwatch.StartNew();
for(int k=0; k < ITERATIONS; k++) {
for(int i = 0; i < writeThreads.Length; i++) {
writeThreads[i] = new Thread(p => {
IList<int> il = p as IList<int>;
int c = il.Count;
for(int j = 0; j < c; j++) {
il[j] = j;
}
});
writeThreads[i].Start(list);
}
for(int i = 0; i < readThreads.Length; i++) {
readThreads[i] = new Thread(p => {
IList<int> il = p as IList<int>;
int c = il.Count;
for(int j = 0; j < c; j++) {
int temp = il[j];
}
});
readThreads[i].Start(list);
}
for(int i = 0; i < readThreads.Length; i++)
readThreads[i].Join();
for(int i = 0; i < writeThreads.Length; i++)
writeThreads[i].Join();
};
sw.Stop();
Console.WriteLine("time: {0} class: {1}", sw.Elapsed, list.GetType());
}
Console.WriteLine("DONE");
Console.ReadLine();
}
} // end class
} // end namespace
```
Here's a typical result:
```
time: 00:00:03.0965242 class: System.Collections.Generic.List`1[System.Int32]
time: 00:00:11.9194573 class: LockPlay.Program+MonitorList`1[System.Int32]
time: 00:00:08.9510258 class: LockPlay.Program+RWLSList`1[System.Int32]
time: 00:00:16.9888435 class: LockPlay.Program+RWLSExtList`1[System.Int32]
DONE
```
As you can see, using the extensions actually makes the performance *WORSE* than just using `lock` (monitor). | Looks like it's the price of instantiating millions of structs and the extra invocations.
I would go as far as to say that the ReaderWriterLockSlim is being misused in this sample; a lock is good enough in this case, and the performance edge you get with the ReaderWriterLockSlim is negligible compared to the price of explaining these concepts to junior devs.
You get a **huge** advantage with reader-writer style locks when it takes a non-negligible amount of time to perform reads and writes. The boost will be biggest when you have a predominantly read-based system.
Try inserting a Thread.Sleep(1) while the locks are acquired to see how huge a difference it makes.
See this benchmark:
```
Time for Test.SynchronizedList`1[System.Int32] Time Elapsed 12310 ms
Time for Test.ReaderWriterLockedList`1[System.Int32] Time Elapsed 547 ms
Time for Test.ManualReaderWriterLockedList`1[System.Int32] Time Elapsed 566 ms
```
In my benchmarking I do not really notice much of a difference between the two styles; I would feel comfortable using it provided it had some finalizer protection in case people forget to dispose...
```
using System.Threading;
using System.Diagnostics;
using System.Collections.Generic;
using System;
using System.Linq;
namespace Test {
static class RWLSExtension {
struct Disposable : IDisposable {
readonly Action _action;
public Disposable(Action action) {
_action = action;
}
public void Dispose() {
_action();
}
}
public static IDisposable ReadLock(this ReaderWriterLockSlim rwls) {
rwls.EnterReadLock();
return new Disposable(rwls.ExitReadLock);
}
public static IDisposable UpgradableReadLock(this ReaderWriterLockSlim rwls) {
rwls.EnterUpgradeableReadLock();
return new Disposable(rwls.ExitUpgradeableReadLock);
}
public static IDisposable WriteLock(this ReaderWriterLockSlim rwls) {
rwls.EnterWriteLock();
return new Disposable(rwls.ExitWriteLock);
}
}
class SlowList<T> {
List<T> baseList = new List<T>();
public void AddRange(IEnumerable<T> items) {
baseList.AddRange(items);
}
public virtual T this[int index] {
get {
Thread.Sleep(1);
return baseList[index];
}
set {
baseList[index] = value;
Thread.Sleep(1);
}
}
}
class SynchronizedList<T> : SlowList<T> {
object sync = new object();
public override T this[int index] {
get {
lock (sync) {
return base[index];
}
}
set {
lock (sync) {
base[index] = value;
}
}
}
}
class ManualReaderWriterLockedList<T> : SlowList<T> {
ReaderWriterLockSlim slimLock = new ReaderWriterLockSlim();
public override T this[int index] {
get {
T item;
try {
slimLock.EnterReadLock();
item = base[index];
} finally {
slimLock.ExitReadLock();
}
return item;
}
set {
try {
slimLock.EnterWriteLock();
base[index] = value;
} finally {
slimLock.ExitWriteLock();
}
}
}
}
class ReaderWriterLockedList<T> : SlowList<T> {
ReaderWriterLockSlim slimLock = new ReaderWriterLockSlim();
public override T this[int index] {
get {
using (slimLock.ReadLock()) {
return base[index];
}
}
set {
using (slimLock.WriteLock()) {
base[index] = value;
}
}
}
}
class Program {
private static void Repeat(int times, int asyncThreads, Action action) {
if (asyncThreads > 0) {
var threads = new List<Thread>();
for (int i = 0; i < asyncThreads; i++) {
int iterations = times / asyncThreads;
if (i == 0) {
iterations += times % asyncThreads;
}
Thread thread = new Thread(new ThreadStart(() => Repeat(iterations, 0, action)));
thread.Start();
threads.Add(thread);
}
foreach (var thread in threads) {
thread.Join();
}
} else {
for (int i = 0; i < times; i++) {
action();
}
}
}
static void TimeAction(string description, Action func) {
var watch = new Stopwatch();
watch.Start();
func();
watch.Stop();
Console.Write(description);
Console.WriteLine(" Time Elapsed {0} ms", watch.ElapsedMilliseconds);
}
static void Main(string[] args) {
int threadCount = 40;
int iterations = 200;
int readToWriteRatio = 60;
var baseList = Enumerable.Range(0, 10000).ToList();
List<SlowList<int>> lists = new List<SlowList<int>>() {
new SynchronizedList<int>() ,
new ReaderWriterLockedList<int>(),
new ManualReaderWriterLockedList<int>()
};
foreach (var list in lists) {
list.AddRange(baseList);
}
foreach (var list in lists) {
TimeAction("Time for " + list.GetType().ToString(), () =>
{
Repeat(iterations, threadCount, () =>
{
list[100] = 99;
for (int i = 0; i < readToWriteRatio; i++) {
int ignore = list[i];
}
});
});
}
Console.WriteLine("DONE");
Console.ReadLine();
}
}
}
``` | The code appears to use a struct to avoid object creation overhead, but doesn't take the other necessary steps to keep this lightweight. I believe it boxes the return value from `ReadLock`, and if so negates the entire advantage of the struct. This should fix all the issues and perform just as well as not going through the `IDisposable` interface.
Edit: Benchmarks demanded. These results are normalized so that the *manual* method (calling `Enter`/`ExitReadLock` and `Enter`/`ExitWriteLock` inline with the protected code) has a time value of 1.00. **The original method is slow because it allocates objects on the heap that the manual method does not. I fixed this problem, and in release mode even the extension method call overhead goes away, leaving it exactly as fast as the manual method.**
Debug Build:
```
Manual: 1.00
Original Extensions: 1.62
My Extensions: 1.24
```
Release Build:
```
Manual: 1.00
Original Extensions: 1.51
My Extensions: 1.00
```
My code:
```
internal static class RWLSExtension
{
public static ReadLockHelper ReadLock(this ReaderWriterLockSlim readerWriterLock)
{
return new ReadLockHelper(readerWriterLock);
}
public static UpgradeableReadLockHelper UpgradableReadLock(this ReaderWriterLockSlim readerWriterLock)
{
return new UpgradeableReadLockHelper(readerWriterLock);
}
public static WriteLockHelper WriteLock(this ReaderWriterLockSlim readerWriterLock)
{
return new WriteLockHelper(readerWriterLock);
}
public struct ReadLockHelper : IDisposable
{
private readonly ReaderWriterLockSlim readerWriterLock;
public ReadLockHelper(ReaderWriterLockSlim readerWriterLock)
{
readerWriterLock.EnterReadLock();
this.readerWriterLock = readerWriterLock;
}
public void Dispose()
{
this.readerWriterLock.ExitReadLock();
}
}
public struct UpgradeableReadLockHelper : IDisposable
{
private readonly ReaderWriterLockSlim readerWriterLock;
public UpgradeableReadLockHelper(ReaderWriterLockSlim readerWriterLock)
{
readerWriterLock.EnterUpgradeableReadLock();
this.readerWriterLock = readerWriterLock;
}
public void Dispose()
{
this.readerWriterLock.ExitUpgradeableReadLock();
}
}
public struct WriteLockHelper : IDisposable
{
private readonly ReaderWriterLockSlim readerWriterLock;
public WriteLockHelper(ReaderWriterLockSlim readerWriterLock)
{
readerWriterLock.EnterWriteLock();
this.readerWriterLock = readerWriterLock;
}
public void Dispose()
{
this.readerWriterLock.ExitWriteLock();
}
}
}
``` | ReaderWriterLockSlim Extension Method Performance | [
"",
"c#",
"performance",
"multithreading",
"extension-methods",
"locking",
""
] |
We have a restriction that a class cannot act as a base-class for more than 7 classes.
Is there a way to enforce the above rule at compile-time?
I am aware of Andrew Koenig's Usable\_Lock technique to prevent a class from being inherited, but it would fail only when we try to instantiate the class. Can this not be done at the point of derivation itself?
The base-class is allowed to know who its children are. So I guess we can declare a combination of friend classes and encapsulate them to enforce this rule. Suppose we try something like this:
```
class AA {
friend class BB;
private:
AA() {}
~AA() {}
};
class BB : public AA {
};
class CC : public AA
{};
```
The derivation of class CC would generate a compiler warning about the inaccessible dtor. We can then flag such warnings as errors using compiler tweaks (like flagging all warnings as errors), but I would not like to rely on such techniques.
Another way, which to me looks rather clumsy, is:
```
class B;
class InheritanceRule{
class A {
public:
A() {}
~A() {}
};
friend class B;
};
class B {
public:
class C : public InheritanceRule::A
{};
};
class D : public InheritanceRule::A{};
```
The derivation of class D will be flagged as a compiler error, meaning all the classes to be derived should be derived inside class B. This will allow at least an inspection of the number of classes derived from class A, but would not prevent anyone from adding more.
Anyone here who has a way of doing it? Better still if the base-class need not know who its children are.
NOTE: The class which acts as a base-class can itself be instantiated (it is not abstract).
Thanks in advance,
EDIT-1: As per a comment from jon.h, a slight modification:
```
// create a template class without a body, so all uses of it fail
template < typename D>
class AllowedInheritance;
class Derived; // forward declaration
// but allow Derived by explicit specialization
template<>
class AllowedInheritance< Derived> {};
template<class T>
class Base : private AllowedInheritance<T> {};
// privately inherit Derived from that explicit specialization
class Derived : public Base<Derived> {};
// Do the same with class Fail Error
// it has no explicit specialization, so it causes a compiler error
class Fail : public Base<Fail> {}; // this is error
int main()
{
Derived d;
return 0;
}
``` | I'm tired as crap, can barely keep my eyes open, so there's probably a more elegant way to do this, and I'm certainly not endorsing the bizarre idea that a Base should have at most seven subclasses.
```
// create a template class without a body, so all uses of it fail
template < typename D, typename B> class AllowedInheritance;
class Base {};
class Derived; // forward declaration
// but allow Derived, Base by explicit specialization
template<> class AllowedInheritance< Derived, Base> {};
// privately inherit Derived from that explicit specialization
class Derived : public Base, private AllowedInheritance<Derived, Base> {};
// Do the same with class Compiler Error
// it has no explicit specialization, so it causes a compiler error
class CompileError: public Base,
private AllowedInheritance<CompileError, Base> { };
//error: invalid use of incomplete type
//‘struct AllowedInheritance<CompileError, Base>’
int main() {
Base b;
Derived d;
return 0;
}
```
Comment from jon.h:
> How does this stop for instance: class Fail : public Base { }; ? \
It doesn't. But then neither did the OP's original example.
[To the OP: your revision of my answer is pretty much a straight application of Coplien's "Curiously recurring template pattern"](http://en.wikipedia.org/wiki/Curiously_Recurring_Template_Pattern)
I'd considered that as well, but the problem with that is there's no inheritance relationship between a `derived1 : public base<derived1>` and a `derived2 : public base<derived2>`, because `base<derived1>` and `base<derived2>` are two completely unrelated classes.
If your only concern is inheritance of implementation, this is no problem, but if you want inheritance of interface, your solution breaks that.
I think there is a way to get both inheritance and a cleaner syntax; as I mentioned, I was pretty tired when I wrote my solution. If nothing else, making RealBase a base class of Base in your example is a quick fix.
There are probably a number of ways to clean this up. But I want to emphasize that I agree with markh44: even though my solution is cleaner, we're still cluttering the code in support of a rule that makes little sense. Just because this can be done, doesn't mean it should be.
If the base class in question is ten years old and too fragile to be inherited from, the real answer is to fix it. | Sorry, I don't know how to enforce any such limit using the compiler.
Personally I wouldn't bother trying to force the rule into the code itself - you are cluttering the code with stuff that has nothing to do with what the code is doing - it's not clean code.
Rather than jumping through hoops, I'd try to get that rule relaxed. Instead it should be a guideline that could be broken if necessary and in agreement with others in the team.
Of course, I lack the knowledge of exactly what you're doing so the rule *could* be appropriate, but in general it probably isn't.
Any programming "rule" that says you must never do x or you must always do y is almost always wrong! Notice the word "almost" in there.
Sometimes you might **need** more than 7 derived classes - what do you do then? Jump through more hoops. Also, why 7? Why not 6 or 8? It's just so arbitrary - another sign of a poor rule.
If you must do it, as JP says, static analysis is probably the better way. | Restrict inheritance to desired number of classes at compile-time | [
"",
"c++",
"inheritance",
"friend",
""
] |
I'm trying to nut out a high-level tech spec for a game I'm tinkering with as a personal project. It's a turn-based adventure game that's probably closest to [Archon](http://en.wikipedia.org/wiki/Archon_(computer_game)) in terms of what I'm trying to do.
What I'm having trouble with is conceptualising the best way to develop a combat system that I can implement simply at first, but that will allow expansion and complexity to be added in the future.
Specifically I'm having trouble trying to figure out how to handle combat special effects, that is, bonuses or negatives that may be applied or removed by an actor, an item or an environment.
* Do I have the actor handle all effects that are in play for/against them, or should the game itself check each weapon, armour, actor and location each time it tries to make a decisive roll?
* Are effects handled in individual objects or is there an 'effect' object or a bit of both?
I may well have not explained myself at all well here, and I'm more than happy to try and expand the question if my request is simply too broad and airy. But my initial thinking is that smarter people than me have spent the time and effort in figuring things like this out and frankly I don't want to taint the conversation with the cul-de-sac of my own stupidity too early.
The language in question is javascript, although at this point I don't imagine it makes a great difference. | What you're calling 'special effects' used to be called 'modifiers' but nowadays go by the term popular in MMOs as 'buffs'. Handling these is as easy or as difficult as you want it to be, given that you get to choose how much versatility you want to be able to bestow at each stage.
Fundamentally though, each aspect of the system typically stores a list of the modifiers that apply to it, and you can query them on demand. Typically there are only a handful of modifiers that apply to any one player at any given time so it's not a problem - take the player's statistics and any modifiers imparted by skills/spells/whatever, add on any modifiers imparted by worn equipment, then add anything imparted by the weapon in question. If you come up with a standard interface here (eg. sumModifiersTo(attributeID)) that is used by actors, items, locations, etc., then implementing this can be quick and easy.
Typically the 'effect' objects would be contained within the entity they pertain to: actors have a list of effects, and the items they wear or use have their own list of effects. Where effects are explicitly activated and/or time-limited, it's up to you where you want to store them - eg. if you have magical potions or other consumables, their effects will need to be appended to the Actor rather than the (presumably destroyed) item.
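Since the question is JavaScript, the per-entity modifier lists described above can be sketched in a few lines. This is purely illustrative — the names (`Effect`, `Actor`, `sumModifiersTo`) are invented for the example:

```javascript
// An effect contributes a bonus or penalty to one attribute and carries a
// tag (e.g. 'magic') so totals can later be filtered with a predicate.
class Effect {
  constructor(attribute, amount, tag = 'physical') {
    this.attribute = attribute;
    this.amount = amount;
    this.tag = tag;
  }
}

class Actor {
  constructor(baseStats) {
    this.baseStats = baseStats; // e.g. { attack: 10 }
    this.effects = [];          // effects applied directly to the actor
    this.items = [];            // each worn item carries its own effect list
  }

  // Total a stat on demand instead of mutating it in place, so removing
  // an effect can never leave the attribute "drifted".
  sumModifiersTo(attribute, predicate = () => true) {
    const all = this.effects.concat(...this.items.map(item => item.effects));
    return all
      .filter(e => e.attribute === attribute && predicate(e))
      .reduce((sum, e) => sum + e.amount, this.baseStats[attribute] || 0);
  }
}
```

A magic-only check then becomes a predicate, e.g. `actor.sumModifiersTo('defence', e => e.tag === 'magic')`.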
Don't be tempted to try and have the effects modify actor attributes in-place, as you quickly find that it's easy for the attributes to 'drift' if you don't ensure all additions and removals are done following the correct protocol. It also makes it much harder to bypass certain modifiers later. eg. Imagine a magical shield that only protects against other magic - you can pass some sort of predicate to your modifier totalling function that disregards certain types of effect to do this. | Take a look at the book, [Head First Design Patterns](https://rads.stackoverflow.com/amzn/click/com/0596007124), by Elisabeth Freeman. Specifically, read up on the Decorator and Factory patterns and the method of programming to interfaces, not implementations. I found that book to be hugely effective in illustrating some of the complex concepts that may get you going on this.
Hope this helps to point you in the right direction. | Handling combat effects in game development | [
"",
"javascript",
"design-patterns",
""
] |
I need today's date - and zero for everything else (" 05/06/08 00:00:00 ")
I've tried
```
Calendar calendar = Calendar.getInstance();
calendar.set(Calendar.HOUR, 0);
Date date1 = calendar.getTime();
System.out.println(date1);
```
Run: (This is seriously messed up)
If the hour on the computer is < 12:00 at noon : Sun Mar 08 00:44:39 IST 2009
If the hour on the computer is > 12:00 at noon : Sun Mar 08 12:46:53 IST 2009
So I gave this up.
All the Date's setters are deprecated (except the epoch time) - so I don't want to use them either
The only thing I could think of is
```
Calendar calendar = Calendar.getInstance();
SimpleDateFormat dateFormat = new SimpleDateFormat("dd/MM/yyyy");
String sDate = dateFormat.format(calendar.getTime());
Date today = dateFormat.parse(sDate);
```
But this is such lame code that I can't bring myself to write it.
Any other option?
Thanks! | My standard advice for Java date/time questions: don't use `java.util.{Calendar,Date}`. Use [Joda Time](http://joda-time.sourceforge.net/index.html). That way you can represent a date as a [date](http://joda-time.sourceforge.net/api-release/org/joda/time/LocalDate.html) (with no associated time zone), instead of a date/time. Or you could use a [`DateMidnight`](http://joda-time.sourceforge.net/api-release/org/joda/time/DateMidnight.html) if that's what you want to represent. (Be careful of combinations of time zone and date where there *is* no midnight though...)
What do you need to use the `Date` with? If you can get away with changing to use Joda throughout, that's great. Otherwise, you can use Joda to do what you want and then convert to milliseconds (and then to `java.util.Date`) when you really need to.
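Whatever API you end up with, the underlying operation is the same: zero out *every* time field (hour-of-day, minute, second, millisecond), not just the hour. As a language-neutral illustration of that idea, a JavaScript sketch:

```javascript
// Copy the date and clear all four time fields in local time; this is the
// analogue of setting HOUR_OF_DAY, MINUTE, SECOND and MILLISECOND to 0.
function startOfDay(d) {
  const copy = new Date(d.getTime());
  copy.setHours(0, 0, 0, 0); // hour-of-day, minutes, seconds, milliseconds
  return copy;
}
```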
(Michael's solution when using `Date`/`Calendar` is fine if you really want to stick within a broken API... but I can't overstate how much better Joda is...) | I use this:
```
public static Date startOfDay(Date date) {
Calendar dCal = Calendar.getInstance();
dCal.setTime(date);
dCal.set(Calendar.HOUR_OF_DAY, 0);
dCal.set(Calendar.MINUTE, 0);
dCal.set(Calendar.SECOND, 0);
dCal.set(Calendar.MILLISECOND, 0);
return dCal.getTime();
}
``` | Getting today's date in java - I've tried the regular ways | [
"",
"java",
"date",
""
] |
Consider the following simple DAG:
```
1->2->3->4
```
And a table, #bar, describing this (I'm using SQL Server 2005):
```
parent_id child_id
1 2
2 3
3 4
//... other edges, not connected to the subgraph above
```
Now imagine that I have some other arbitrary criteria that select the first and last edges, i.e. 1->2 and 3->4. I want to use these to find the rest of my graph.
I can write a recursive CTE as follows (I'm using terminology from [MSDN](http://msdn.microsoft.com/en-us/library/ms186243.aspx)):
```
with foo(parent_id,child_id) as (
-- anchor member that happens to select first and last edges:
select parent_id,child_id from #bar where parent_id in (1,3)
union all
-- recursive member:
select #bar.* from #bar
join foo on #bar.parent_id = foo.child_id
)
select parent_id,child_id from foo
```
However, this results in edge 3->4 being selected twice:
```
parent_id child_id
1 2
3 4
2 3
3 4 // 2nd appearance!
```
How can I prevent the query from recursing into subgraphs that have already been described? I could achieve this if, in my "recursive member" part of the query, I could reference *all data that has been retrieved by the recursive CTE so far* (and supply a predicate in the recursive member excluding nodes already visited). However, I think I can only access data that was returned by *the last iteration* of the recursive member.
This will not scale well when there is a lot of such repetition. Is there a way of preventing this unnecessary additional recursion?
Note that I could use "select distinct" in the last line of my statement to achieve the desired results, but this seems to be applied *after* all the (repeated) recursion is done, so I don't think this is an ideal solution.
*Edit* - hainstech suggests stopping recursion by adding a predicate to exclude recursing down paths that were explicitly in the starting set, i.e. recurse only `where foo.child_id not in (1,3)`. That works for the case above only because it is simple - all the repeated sections begin within the anchor set of nodes. It doesn't solve the general case where they may not be. e.g., consider adding edges 1->4 and 4->5 to the above set. Edge 4->5 will be captured twice, even with the suggested predicate. :( | The `CTE`s are recursive.
When your `CTE` has multiple initial conditions, each condition gets its own recursion stack, and there is no way to use information from one stack in another.
In your example, the recursion stacks will go as follows:
```
(1) - first IN condition
(1, 2)
(1, 2, 3)
(1, 2, 3, 4)
(1, 2, 3) - no more children
(1, 2) - no more children
(1) - no more children, going to second IN condition
(3) - second condition
(3, 4)
(3) - no more children, returning
```
As you can see, these recursion stack do not intersect.
You could probably record the visited values in a temporary table, `JOIN` each value with the temp table, and not follow a value if it's already found, but `SQL Server` does not support these things.
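Outside SQL, that visited-values idea is just an ordinary graph traversal with a seen-set. A sketch in JavaScript (used here as executable pseudocode; `reachableEdges` is an invented name), over the question's edges plus the `1->4` and `4->5` additions from the edit:

```javascript
// Edges of the DAG, including 1->4 and 4->5 from the question's edit.
const edges = [[1, 2], [2, 3], [3, 4], [1, 4], [4, 5]];

// Expand outward from the anchor parents, emitting each edge at most once.
// The 'seen' set plays the role of the temporary table described above.
function reachableEdges(anchorParents) {
  const seen = new Set();
  const result = [];
  const queue = [...anchorParents];
  while (queue.length > 0) {
    const parent = queue.shift();
    for (const [p, c] of edges) {
      if (p !== parent) continue;
      const key = p + '->' + c;
      if (seen.has(key)) continue; // already emitted: don't recurse again
      seen.add(key);
      result.push(key);
      queue.push(c);
    }
  }
  return result;
}
```

`reachableEdges([1, 3])` returns all five edges exactly once, with no `DISTINCT` step afterwards.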
So you just use `SELECT DISTINCT`. | This is the approach I used. It has been tested against several methods and was the most performant. It combines the temp table idea suggested by Quassnoi and the use of both distinct and a left join to eliminate redundant paths to the recursion. The level of the recursion is also included.
I left the failed CTE approach in the code so you could compare results.
If someone has a better idea, I'd love to know it.
```
create table #bar (unique_id int identity(10,10), parent_id int, child_id int)
insert #bar (parent_id, child_id)
SELECT 1,2 UNION ALL
SELECT 2,3 UNION ALL
SELECT 3,4 UNION ALL
SELECT 2,5 UNION ALL
SELECT 2,5 UNION ALL
SELECT 5,6
SET NOCOUNT ON
;with foo(unique_id, parent_id,child_id, ord, lvl) as (
-- anchor member that happens to select first and last edges:
select unique_id, parent_id, child_id, row_number() over(order by unique_id), 0
from #bar where parent_id in (1,3)
union all
-- recursive member:
select b.unique_id, b.parent_id, b.child_id, row_number() over(order by b.unique_id), foo.lvl+1
from #bar b
join foo on b.parent_id = foo.child_id
)
select unique_id, parent_id,child_id, ord, lvl from foo
/***********************************
Manual Recursion
***********************************/
Declare @lvl as int
Declare @rows as int
DECLARE @foo as Table(
unique_id int,
parent_id int,
child_id int,
ord int,
lvl int)
--Get anchor condition
INSERT @foo (unique_id, parent_id, child_id, ord, lvl)
select unique_id, parent_id, child_id, row_number() over(order by unique_id), 0
from #bar where parent_id in (1,3)
set @rows=@@ROWCOUNT
set @lvl=0
--Do recursion
WHILE @rows > 0
BEGIN
set @lvl = @lvl + 1
INSERT @foo (unique_id, parent_id, child_id, ord, lvl)
SELECT DISTINCT b.unique_id, b.parent_id, b.child_id, row_number() over(order by b.unique_id), @lvl
FROM #bar b
inner join @foo f on b.parent_id = f.child_id
--might be multiple paths to this recursion so eliminate duplicates
left join @foo dup on dup.unique_id = b.unique_id
WHERE f.lvl = @lvl-1 and dup.child_id is null
set @rows=@@ROWCOUNT
END
SELECT * from @foo
DROP TABLE #bar
``` | Prevent recursive CTE visiting nodes multiple times | [
"",
"sql",
"sql-server",
"recursion",
"graph",
"common-table-expression",
""
] |
I'm doing some work with strings, and I have a scenario where I need to determine if a string (usually a small one < 10 characters) contains repeated characters.
```
`ABCDE` // does not contain repeats
`AABCD` // does contain repeats, ie A is repeated
```
I can loop through the string.ToCharArray() and test each character against every other character in the char[], but I feel like I am missing something obvious.... maybe I just need coffee. Can anyone help?
**EDIT:**
The string will be sorted, so order is not important so ABCDA => AABCD
The frequency of repeats is also important, so I need to know if the repeat is pair or triplet etc. | If the string is short, then just looping and testing may well be the simplest and most efficient way. I mean you *could* create a hash set (in whatever platform you're using) and iterate through the characters, failing if the character is already in the set and adding it to the set otherwise - but that's only likely to provide any benefit when the strings are longer.
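For illustration, here is that set/dictionary walk in one possible platform (JavaScript); it also reports the repeat frequency the edit asks about (pair vs. triplet):

```javascript
// Returns a map of character -> count for every character that repeats,
// so a pair comes back as { A: 2 } and a triplet as { A: 3 }.
function repeatedChars(text) {
  const counts = {};
  for (const ch of text) {
    counts[ch] = (counts[ch] || 0) + 1;
  }
  const repeats = {};
  for (const ch in counts) {
    if (counts[ch] > 1) repeats[ch] = counts[ch];
  }
  return repeats;
}
```

`repeatedChars('AABCD')` yields `{ A: 2 }` and `repeatedChars('ABCDE')` yields `{}`; since it never relies on ordering, it works whether or not the string is sorted.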
EDIT: Now that we know it's sorted, [mquander's answer](https://stackoverflow.com/questions/829537/testing-for-repeated-characters-in-a-string/829552#829552) is the best one IMO. Here's an implementation:
```
public static bool IsSortedNoRepeats(string text)
{
if (text.Length == 0)
{
return true;
}
char current = text[0];
for (int i=1; i < text.Length; i++)
{
char next = text[i];
if (next <= current)
{
return false;
}
current = next;
}
return true;
}
```
A shorter alternative if you don't mind repeating the indexer use:
```
public static bool IsSortedNoRepeats(string text)
{
for (int i=1; i < text.Length; i++)
{
if (text[i] <= text[i-1])
{
return false;
}
}
return true;
}
```
EDIT: Okay, with the "frequency" side, I'll turn the problem round a bit. I'm still going to assume that the string is sorted, so what we want to know is the length of the longest run. When there are no repeats, the longest run length will be 0 (for an empty string) or 1 (for a non-empty string). Otherwise, it'll be 2 or more.
First a string-specific version:
```
public static int LongestRun(string text)
{
if (text.Length == 0)
{
return 0;
}
char current = text[0];
int currentRun = 1;
int bestRun = 0;
for (int i=1; i < text.Length; i++)
{
if (current != text[i])
{
bestRun = Math.Max(currentRun, bestRun);
currentRun = 0;
current = text[i];
}
currentRun++;
}
// It's possible that the final run is the best one
return Math.Max(currentRun, bestRun);
}
```
Now we can also do this as a general extension method on `IEnumerable<T>`:
```
public static int LongestRun<T>(this IEnumerable<T> source)
{
bool first = true;
T current = default(T);
int currentRun = 0;
int bestRun = 0;
foreach (T element in source)
{
if (first || !EqualityComparer<T>.Default.Equals(element, current))
{
first = false;
bestRun = Math.Max(currentRun, bestRun);
currentRun = 0;
current = element;
}
currentRun++;
}
// It's possible that the final run is the best one
return Math.Max(currentRun, bestRun);
}
```
Then you can call `"AABCD".LongestRun()` for example. | If the string is sorted, you could just remember each character in turn and check to make sure the next character is never identical to the last character.
Other than that, for strings under ten characters, just testing each character against all the rest is probably as fast or faster than most other things. A bit vector, as suggested by another commenter, may be faster (helps if you have a small set of legal characters.)
Bonus: here's a slick LINQ solution to implement Jon's functionality:
```
int longestRun =
s.Select((c, i) => s.Substring(i).TakeWhile(x => x == c).Count()).Max();
```
So, OK, it's not very fast! You got a problem with that?!
:-) | Testing for repeated characters in a string | [
"",
"c#",
"algorithm",
"string",
""
] |
So I have been neglecting to do any backups of my fogbugz database, and now the fogbugz ldf file is over 2 and a half gigs. That's been built up over the six months we've been using fogbugz.
I backed up the database, then I backed up and truncated the transaction log, yet the transaction log is still 2 and a half gigs. I did a shrink on the log file and it's still 2 and a half gigs. Nothing I do seems to shrink the file in size.
Is there any way to fix the problem? Or is the only way back at this point to detach the database, delete the log file and then reattach with a new one?
SOMETHING is wrong, though I don't think anyone will be able to tell you more than that without some additional information. For example, has this database ever been used for Transactional SQL Server replication? This can cause issues like this if a transaction hasn't been replicated to a subscriber.
In the interim, this should at least allow you to kill the log file:
1. Perform a full backup of your database. Don't skip this. Really.
2. Change the recovery model of your database to "Simple"
3. Open a query window and enter "checkpoint" and execute
4. Perform another backup of the database
5. Change the recovery model of your database back to "Full" (or whatever it was, if it wasn't already Simple)
6. Perform a final full backup of the database.
You should now be able to shrink the files (if performing the backup didn't do that for you).
Good luck! | After the steps mentioned in [Adam Robinson's answer](https://stackoverflow.com/a/829573), also run the below queries one by one:
1. `USE Database_Name`
2. `select name,recovery_model_desc from sys.databases`
3. `ALTER DATABASE Database_Name SET RECOVERY simple`
4. `DBCC SHRINKFILE (Database_Name_log , 1)` | How do I decrease the size of my sql server log file? | [
"",
"sql",
"sql-server",
""
] |
An example (that might not be real life, but makes my point):
```
public void StreamInfo(StreamReader p)
{
string info = string.Format(
"The supplied streamreader read : {0}\n at line {1}",
p.ReadLine(),
p.GetLinePosition()-1);
}
```
`GetLinePosition` here is an imaginary extension method of streamreader.
Is this possible?
Of course I could keep count myself but that's not the question. | It is extremely easy to provide a line-counting wrapper for any TextReader:
```
public class PositioningReader : TextReader {
private TextReader _inner;
public PositioningReader(TextReader inner) {
_inner = inner;
}
public override void Close() {
_inner.Close();
}
public override int Peek() {
return _inner.Peek();
}
public override int Read() {
var c = _inner.Read();
if (c >= 0)
AdvancePosition((Char)c);
return c;
}
private int _linePos = 0;
public int LinePos { get { return _linePos; } }
private int _charPos = 0;
public int CharPos { get { return _charPos; } }
private int _matched = 0;
private void AdvancePosition(Char c) {
if (Environment.NewLine[_matched] == c) {
_matched++;
if (_matched == Environment.NewLine.Length) {
_linePos++;
_charPos = 0;
_matched = 0;
}
}
else {
_matched = 0;
_charPos++;
}
}
}
```
Drawbacks (for the sake of brevity):
1. Does not check constructor argument for null
2. Does not recognize alternate ways to terminate the lines. Will be inconsistent with ReadLine() behavior when reading files separated by raw \r or \n.
3. Does not override "block"-level methods like Read(char[], int, int), ReadBlock, ReadLine, ReadToEnd. TextReader implementation works correctly since it routes everything else to Read(); however, better performance could be achieved by
* overriding those methods via routing calls to \_inner. instead of base.
* passing the characters read to the AdvancePosition. See the sample ReadBlock implementation:
---
```
public override int ReadBlock(char[] buffer, int index, int count) {
var readCount = _inner.ReadBlock(buffer, index, count);
for (int i = 0; i < readCount; i++)
AdvancePosition(buffer[index + i]);
return readCount;
}
``` | I came across this post while looking for a solution to a similar problem where I needed to seek the StreamReader to particular lines. I ended up creating two extension methods to get and set the position on a StreamReader. It doesn't actually provide a line number count, but in practice, I just grab the position before each `ReadLine()` and if the line is of interest, then I keep the start position for setting later to get back to the line like so:
```
var index = streamReader.GetPosition();
var line1 = streamReader.ReadLine();
streamReader.SetPosition(index);
var line2 = streamReader.ReadLine();
Assert.AreEqual(line1, line2);
```
and the important part:
```
public static class StreamReaderExtensions
{
readonly static FieldInfo charPosField = typeof(StreamReader).GetField("charPos", BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.DeclaredOnly);
readonly static FieldInfo byteLenField = typeof(StreamReader).GetField("byteLen", BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.DeclaredOnly);
readonly static FieldInfo charBufferField = typeof(StreamReader).GetField("charBuffer", BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.DeclaredOnly);
public static long GetPosition(this StreamReader reader)
{
// shift position back from BaseStream.Position by the number of bytes read
// into internal buffer.
int byteLen = (int)byteLenField.GetValue(reader);
var position = reader.BaseStream.Position - byteLen;
// if we have consumed chars from the buffer we need to calculate how many
// bytes they represent in the current encoding and add that to the position.
int charPos = (int)charPosField.GetValue(reader);
if (charPos > 0)
{
var charBuffer = (char[])charBufferField.GetValue(reader);
var encoding = reader.CurrentEncoding;
var bytesConsumed = encoding.GetBytes(charBuffer, 0, charPos).Length;
position += bytesConsumed;
}
return position;
}
public static void SetPosition(this StreamReader reader, long position)
{
reader.DiscardBufferedData();
reader.BaseStream.Seek(position, SeekOrigin.Begin);
}
}
```
This works quite well for me and, depending on your tolerance for using reflection, I think it is a fairly simple solution.
Caveats:
1. While I have done some simple testing using various Systems.Text.Encoding options, pretty much all of the data I consume with this are simple text files `(ASCII)`.
2. I only ever use the `StreamReader.ReadLine()` method and while a brief review of the source for StreamReader seems to indicate this will still work when using the other read methods, I have not really tested that scenario. | How to know position(linenumber) of a streamreader in a textfile? | [
"",
"c#",
"streamreader",
""
] |
I want to reset a file upload field when the user selects another option.
Is this possible via JavaScript? I'm suspecting that the file upload element is treated differently because it interacts with the user's file system, and maybe it's immutable.
Basically, what I want is something like (pseudo-code):
```
// Choose selecting existing file
$('#select-file').bind('focus', function() {
// Clear any files currently selected in #upload-file
$('#upload-file').val('');
}) ;
// Choose uploading new one - this works ok
$('#upload-file').bind('focus', function() {
// Clear any files currently selected in #select-file
$('#select-file').val('');
}) ;
```
NB: **This question and its answers span the period from 2009 to today.** Browsers and approaches have changed in that time, please select your solutions with this in mind :) | You can't set the input value in most browsers, but what you can do is create a new element, copy the attributes from the old element, and swap the two.
Given a form like:
```
<form>
<input id="fileInput" name="fileInput" type="file" />
</form>
```
The straight DOM way:
```
function clearFileInput(id)
{
var oldInput = document.getElementById(id);
var newInput = document.createElement("input");
newInput.type = "file";
newInput.id = oldInput.id;
newInput.name = oldInput.name;
newInput.className = oldInput.className;
newInput.style.cssText = oldInput.style.cssText;
// TODO: copy any other relevant attributes
oldInput.parentNode.replaceChild(newInput, oldInput);
}
clearFileInput("fileInput");
```
Simple DOM way. This may not work in older browsers that don't like file inputs:
```
oldInput.parentNode.replaceChild(oldInput.cloneNode(), oldInput);
```
The jQuery way:
```
$("#fileInput").replaceWith($("#fileInput").val('').clone(true));
// .val('') required for FF compatibility as per @nmit026
```
Resetting the whole form via jQuery: <https://stackoverflow.com/a/13351234/1091947> | Simply now in 2014 the input element having an id supports the function `val('')`.
For the input -
```
<input type="file" multiple="true" id="File1" name="choose-file" />
```
This js clears the input element -
```
$("#File1").val('');
``` | Clearing an HTML file upload field via JavaScript | [
"",
"javascript",
"jquery",
"html",
"file-upload",
""
] |
I divided my problem into a short and a long version for the people with little time at hand.
Short version:
I need some architecture for a system with provider and consumer plugins.
Providers should implement interface IProvider and consumers should implement IConsumer.
The executing application should only be aware of IProvider and IConsumer.
A consumer implementation can ask the executing assembly (by means of a ServiceProcessor) which providers implement InterfaceX and gets a List back.
These IProvider objects should be cast to InterfaceX (in the consumer) to be able to hook the consumer onto some events InterfaceX defines. This will fail because the executing assembly somehow doesn't know this InterfaceX type (the cast fails). A solution would be to include InterfaceX in some assembly that both the plugins and the executing assembly reference, but this would mean a recompile for every new provider/consumer pair and is highly undesirable.
Any suggestions?
Long version:
I'm developing some sort of generic service that will use plugins for achieving a higher level of re-usability. The service consists of some sort of Observer pattern implementation using Providers and Consumers. Both providers and Consumers should be plugins for the main application. Let me first explain how the service works by listing the projects I have in my solution.
Project A: A Windows Service project for hosting all plugins and basic functionality. A TestGUI Windows Forms project is used for easier debugging. An instance of the ServiceProcessor class from Project B is doing the plugin related stuff. The subfolders "Consumers" and "Providers" of this project contain subfolders, each of which holds a consumer or provider plugin assembly respectively.
Project B: A Class library holding the ServiceProcessor class (that does all plugin loading and dispatching between plugins, etc), IConsumer and IProvider.
Project C: A Class library, linked to project B, consisting of TestConsumer (implementing IConsumer) and TestProvider (implementing IProvider). An additional interface (ITest, itself derived from IProvider) is implemented by the TestProvider.
The goal here is that a Consumer plugin can ask the ServiceProcessor which Providers (implementing at least IProvider) it has. The returned IProvider objects should be cast to the other interface it implements (ITest) in the IConsumer implementation so that the consumer can hook event handlers to the ITest events.
When project A starts, the subfolders containing the consumer and provider plugins are loaded. Below are some problems I've encountered so far and tried to solve.
The interface ITest used to reside in Project C, since this only applies to methods and events TestProvider and TestConsumer are aware of. The general idea is to keep project A simple and unaware of what the plugins do with each other.
With ITest in project C, and code in the Initialize method of the TestConsumer that casts the IProvider to ITest (this would not fail within a single class library when an object implementing ITest is known as an IConsumer object), an invalid casting error would occur. This error can be solved by placing the ITest interface into project B that is referenced by project A as well. That is highly unwanted though, since we would need to recompile project A whenever a new interface is built.
I tried to put ITest in a single class library referenced by project C only, since only the provider and consumer need to be aware of this interface, but with no success: when loading the plugin the CLR states the referenced project could not be found. This could be solved by hooking on the AssemblyResolve event of the current AppDomain but somehow this seems unwanted as well. ITest went back to Project B again.
I tried to split project C into separate projects for the consumer and provider and loaded both assemblies, which in itself works well: both assemblies are resident in the Assemblies collection of the current AppDomain:
Assembly found: Datamex.Projects.Polaris.Testing.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=2813de212e2efcd3
Assembly found: Datamex.Projects.Polaris.Testing.Consumers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=ea5901de8cdcb258
Since the Consumer uses the Provider a reference was made from the Consumer to the Provider. Now the AssemblyResolve event fired again stating it needs the following file:
AssemblyName=Datamex.Projects.Polaris.Testing.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=2813de212e2efcd3
My questions:
Why is this? This file is already loaded right?
Why is the cast from IProvider to some interface I know it implements impossible? This is probably because the executing program itself doesn't know this interface, but can't this be loaded dynamically?
My ultimate goal:
Consumer plugins ask the ServiceProcessor which Providers it has that do implement Interface x. The providers can be casted to this interface x, without executing assembly being aware of interface x.
Somebody that can help?
Thanks in advance,
Erik | I just tried to recreate your solution as best as I can, and I have no such issues. (Warning, lots of code samples follow....)
First project is the application, this contains one class:
```
public class PluginLoader : ILoader
{
private List<Type> _providers = new List<Type>();
public PluginLoader()
{
LoadProviders();
LoadConsumers();
}
public IProvider RequestProvider(Type providerType)
{
foreach(Type t in _providers)
{
if (t.GetInterfaces().Contains(providerType))
{
return (IProvider)Activator.CreateInstance(t);
}
}
return null;
}
private void LoadProviders()
{
DirectoryInfo di = new DirectoryInfo(PluginSearchPath);
FileInfo[] assemblies = di.GetFiles("*.dll");
foreach (FileInfo assembly in assemblies)
{
Assembly a = Assembly.LoadFrom(assembly.FullName);
foreach (Type type in a.GetTypes())
{
if (type.GetInterfaces().Contains(typeof(IProvider)))
{
_providers.Add(type);
}
}
}
}
private void LoadConsumers()
{
DirectoryInfo di = new DirectoryInfo(PluginSearchPath);
FileInfo[] assemblies = di.GetFiles("*.dll");
foreach (FileInfo assembly in assemblies)
{
Assembly a = Assembly.LoadFrom(assembly.FullName);
foreach (Type type in a.GetTypes())
{
if (type.GetInterfaces().Contains(typeof(IConsumer)))
{
IConsumer consumer = (IConsumer)Activator.CreateInstance(type);
consumer.Initialize(this);
}
}
}
}
}
```
Obviously this can be tidied up enormously.
Next project is the shared library which contains the following three interfaces:
```
public interface ILoader
{
IProvider RequestProvider(Type providerType);
}
public interface IConsumer
{
void Initialize(ILoader loader);
}
public interface IProvider
{
}
```
Finally there is the plugin project with these classes:
```
public interface ITest : IProvider
{
}
public class TestConsumer : IConsumer
{
public void Initialize(ILoader loader)
{
ITest tester = (ITest)loader.RequestProvider(typeof (ITest));
}
}
public class TestProvider : ITest
{
}
```
Both the application and the plugin projects reference the shared project and the plugin dll is copied to the search directory for the application - but they don't reference one another.
When the PluginLoader is constructed it finds all the IProviders, then creates all the IConsumers and calls Initialize on them. Inside Initialize the consumer can request providers from the loader, and in the case of this code a TestProvider is constructed and returned. All of this works for me with no fancy control of the loading of assemblies. | It is still in development, but it sounds like a perfect use case for MEF (to be included in .NET 4) and used internally in VS2010.
> MEF presents a simple solution for the
> runtime extensibility problem. Until
> now, any application that wanted to
> support a plugin model needed to
> create its own infrastructure from
> scratch. Those plugins would often be
> application-specific and could not be
> reused across multiple
> implementations.
Previews are already available on <http://www.codeplex.com/MEF>
The [blog](http://blogs.msdn.com/gblock/) of Glen Block can also be useful. | C# Plugin Architecture with interfaces share between plugins | [
"",
"c#",
"plugins",
"interface",
"observer-pattern",
""
] |
In SQL Server, TOP may be used to return the first n number of rows in a query. For example,
```
SELECT TOP 100 * FROM users ORDER BY id
```
might be used to return the first 100 people that registered for a site. (This is not necessarily the best way, I am just using it as an example).
My question is - What is the equivalent to TOP in other databases, such as Oracle, MySQL, PostgreSQL, etc? If there is not an equivalent keyword, what workarounds can you recommend to achieve the same result? | To select first `100` rows:
`MySQL` and `PostgreSQL`:
```
SELECT *
FROM Table
ORDER BY
column
LIMIT 100
```
`Oracle`:
```
SELECT *
FROM (
SELECT t.*
FROM table
ORDER BY
column
)
WHERE rownum <= 100
```
Note that you need a subquery here. If you don't add a subquery, `ROWNUM` will select the first `100` rows in random order and then sort them by `column`.
To select rows between `100` and `300`:
`MySQL`:
```
SELECT *
FROM TABLE
ORDER BY
column
LIMIT 100, 200
```
`PostgreSQL`:
```
SELECT *
FROM Table
ORDER BY
column
OFFSET 100 LIMIT 200
```
`Oracle`:
```
SELECT *
FROM (
SELECT t.*, ROW_NUMBER() OVER (ORDER BY column) AS rn
FROM table
)
WHERE rn >= 100
AND rownum <= 200
```
Note that an attempt to simplify it with `ROWNUM BETWEEN 100 AND 200` (as opposed to `rn BETWEEN 100 AND 200` in the outer query) will return nothing in `Oracle`!
`RN BETWEEN 100 AND 200` will work in `Oracle` too but is less efficient.
See the article in my blog for performance details:
* [**Oracle: ROW\_NUMBER vs ROWNUM**](http://explainextended.com/2009/05/06/oracle-row_number-vs-rownum/) | For Postgres and MySQL it's the LIMIT keyword.
```
SELECT *
FROM users
ORDER BY id
LIMIT 100;
``` | Equivalents to SQL Server TOP | [
"",
"sql",
"database",
"keyword",
""
] |
I have a webpage that heavily uses Javascript (AJAX requests, Google Maps API, HTML building, etc) and the page brings my PC to its knees whenever opened in Internet Explorer. I'm looking for some tools to help me find out which functions are taking the most time to finish. I have some basic profiling javascript functions, but those don't help much considering I don't know what function specifically is being slow. The tool has to be for IE, as the page runs fine in other browsers. | You might actually want to check out the Developer tools that come with IE8. I know js debugging and profiling are part of it. | For JavaScript, XmlHttpRequest, DOM Access, Rendering Times and Network traffic for IE6, 7 & 8 you can use the free [dynaTrace AJAX Edition](http://ajax.dynatrace.com) | Measuring Javascript performance in IE | [
"",
"javascript",
"internet-explorer",
"profiling",
""
] |
I have a SimpleXMLElement object $child, and a SimpleXMLElement object $parent.
How can I add $child as a child of $parent? Is there any way of doing this without converting to DOM and back?
The addChild() method only seems to allow me to create a new, empty element, but that doesn't help when the element I want to add ($child) itself has children. I'm thinking I might need recursion here. | I know this isn't the most helpful answer, but especially since you're creating/modifying XML, I'd switch over to using the DOM functions. SimpleXML's good for accessing simple documents, but pretty poor at changing them.
If SimpleXML is treating you kindly in all other places and you want to stick with it, you still have the option of jumping over to the DOM functions temporarily to perform what you need to and then jump back again, using [`dom_import_simplexml()`](http://au.php.net/manual/en/function.dom-import-simplexml.php) and [`simplexml_import_dom()`](http://au.php.net/manual/en/function.simplexml-import-dom.php). I'm not sure how efficient this is, but it might help you out. | Unfortunately [`SimpleXMLElement`](http://php.net/SimpleXMLElement) does not offer anything to bring two elements together. As [@nickf wrote](https://stackoverflow.com/a/767358/367456), it's more fitting for reading than for manipulation. However, the sister extension `DOMDocument` is for editing and you can bring both together via [`dom_import_simplexml()`](http://php.net/dom_import_simplexml). And [@salathe shows in a related answer](https://stackoverflow.com/a/3418317/367456) how this works for specific SimpleXMLElements.
The following shows how this work with input checking and some more options. I do it with two examples. The first example is a function to insert an XML string:
```
/**
* Insert XML into a SimpleXMLElement
*
* @param SimpleXMLElement $parent
* @param string $xml
* @param bool $before
* @return bool XML string added
*/
function simplexml_import_xml(SimpleXMLElement $parent, $xml, $before = false)
{
$xml = (string)$xml;
// check if there is something to add
if ($nodata = !strlen($xml) or $parent[0] == NULL) {
return $nodata;
}
// add the XML
$node = dom_import_simplexml($parent);
$fragment = $node->ownerDocument->createDocumentFragment();
$fragment->appendXML($xml);
if ($before) {
return (bool)$node->parentNode->insertBefore($fragment, $node);
}
return (bool)$node->appendChild($fragment);
}
```
This exemplary function allows to append XML or insert it before a certain element, including the root element. After finding out if there is something to add, it makes use of *DOMDocument* functions and methods to insert the XML as a document fragment, it is also outlined in [How to import XML string in a PHP DOMDocument](https://stackoverflow.com/q/4081090/367456). The usage example:
```
$parent = new SimpleXMLElement('<parent/>');
// insert some XML
simplexml_import_xml($parent, "\n <test><this>now</this></test>\n");
// insert some XML before a certain element, here the first <test> element
// that was just added
simplexml_import_xml($parent->test, "<!-- leave a comment -->\n ", $before = true);
// you can place comments above the root element
simplexml_import_xml($parent, "<!-- this works, too -->", $before = true);
// but take care, you can produce invalid XML, too:
// simplexml_import_xml($parent, "<warn><but>take care!</but> you can produce invalid XML, too</warn>", $before = true);
echo $parent->asXML();
```
This gives the following output:
```
<?xml version="1.0"?>
<!-- this works, too -->
<parent>
<!-- leave a comment -->
<test><this>now</this></test>
</parent>
```
The second example is inserting a `SimpleXMLElement`. It makes use of the first function if needed. It basically checks if there is something to do at all and which kind of element is to be imported. If it is an attribute, it will just add it, if it is an element, it will be serialized into XML and then added to the parent element as XML:
```
/**
* Insert SimpleXMLElement into SimpleXMLElement
*
* @param SimpleXMLElement $parent
* @param SimpleXMLElement $child
* @param bool $before
* @return bool SimpleXMLElement added
*/
function simplexml_import_simplexml(SimpleXMLElement $parent, SimpleXMLElement $child, $before = false)
{
// check if there is something to add
if ($child[0] == NULL) {
return true;
}
// if it is a list of SimpleXMLElements default to the first one
$child = $child[0];
// insert attribute
if ($child->xpath('.') != array($child)) {
$parent[$child->getName()] = (string)$child;
return true;
}
$xml = $child->asXML();
// remove the XML declaration on document elements
if ($child->xpath('/*') == array($child)) {
$pos = strpos($xml, "\n");
$xml = substr($xml, $pos + 1);
}
return simplexml_import_xml($parent, $xml, $before);
}
```
This exemplary function does normalize list of elements and attributes like common in Simplexml. You might want to change it to insert multiple SimpleXMLElements at once, but as the usage example shows below, my example does not support that (see the attributes example):
```
// append the element itself to itself
simplexml_import_simplexml($parent, $parent);
// insert <this> before the first child element (<test>)
simplexml_import_simplexml($parent->children(), $parent->test->this, true);
// add an attribute to the document element
$test = new SimpleXMLElement('<test attribute="value" />');
simplexml_import_simplexml($parent, $test->attributes());
echo $parent->asXML();
```
This is a continuation of the first usage-example. Therefore the output now is:
```
<?xml version="1.0"?>
<!-- this works, too -->
<parent attribute="value">
<!-- leave a comment -->
<this>now</this><test><this>now</this></test>
<!-- this works, too -->
<parent>
<!-- leave a comment -->
<test><this>now</this></test>
</parent>
</parent>
```
I hope this is helpful. You can [find the code in a gist](https://gist.github.com/hakre/4761677) and as [online demo](http://eval.in/9568) / [PHP version overview](http://3v4l.org/1sI05). | In SimpleXML, how can I add an existing SimpleXMLElement as a child element? | [
"",
"php",
"oop",
"simplexml",
""
] |
I have a database job that runs occasionally at night and I need a Windows service to poll the database regularly to do some more work after the SQL job is done. I am looking for a solid example of how to write a scheduler that can fail gracefully without crashing the service.
[windows service vs scheduled task](https://stackoverflow.com/questions/390307/windows-service-vs-scheduled-task) | What's the best way to implement a scheduled job using a windows service | [
"",
"c#",
"windows-services",
"scheduling",
""
] |
Sometimes I need to learn the type of an expression while programming in C or C++. Sometimes there's a good IDE or existent documentation to help me, but sometimes not. I often feel such a construct could be useful:
```
void (*myFunc)(int);
printf("%s", nameoftype(myFunc)); //"void (*)(int)"
int i; unsigned int u;
printf("%s", nameoftype(i+u)); //"unsigned int"
```
This is especially true for C++; think accessors of const objects - do they return a const reference or a copy? Think dynamic casts and templated classes.
How can I do this? (i.e. learn the type of an expression)
I use GCC but as far as I know, it does not have such an extension. So I guess I'm curious as to how people solve this problem. (Both compile-time and runtime solutions welcome.) | What are you looking for? Automatic type inference or looking for the type so you can declare a variable correctly manually? (your own answers look like you want to have the second one). In this case, consider using [Geordi](https://github.com/Eelis/geordi):
```
<litb> make type pointer to function taking pointer to array of 10 int returning void
<geordi> void (*)(int (*)[10])
<litb> geordi: { int a = -1; unsigned int b = 0; cout << ETYPE(a + b), ETYPE_DESC(a + b), (a + b); }
<geordi> rvalue unsigned int, rvalue unsigned integer, 4294967295
<litb> geordi: << TYPE_DESC(void (*)(int (*)[10]))
<geordi> pointer to a function taking a pointer to an array of 10 integers and returning nothing
```
Automatic type inference is not currently possible without helper libraries like boost.typeof, which will use compiler extensions like `__typeof__` for GCC. Next C++ will get `auto` (with different semantics than current `auto`) and will be able to do that, together with `decltype` to get the type of an expression.
If you can live with getting out of local context, you can always create a function template like this:
```
template<typename T> void f(T t) { /* ... */ }
int main() { int a = -1; unsigned int b = 0; f(a + b); }
``` | Sometimes I just do:
```
int ***a = expression;
```
and look for the "<expression type> cannot be assigned to pointer-to^3 int" error. This seems to be the most portable workaround. | Determining the type of an expression | [
"",
"c++",
"c",
""
] |
I'm about to develop some sockets related stuff in C++ and would like the software to be as portable between Windows and Linux as possible right from the start (making it portable later is tricky.)
I've looked at different libraries, there is one for C++ from [alhem.net](http://www.alhem.net/Sockets/) and of course there is boost::asio. boost::asio looks very promising but would be a very big dependency for applications this small.
Is it even worth writing the stuff myself or should I just use a library? If I do it myself what would be the main pitfalls? | Winsocks aren't very compatible with Posix sockets:
* In Winsocks a socket is of type `SOCKET`. On Posix it's simply a file descriptor (`int`), on which you can perform normal `read()` and `write()` calls.
* They don't return errors the same way.
* They don't support some options on `recv()` and `send()`.
* You have to initialize and uninitialize the Winsock library with two special functions.
* I don't think you can close Windows sockets with `shutdown()` or `close()`. It's something like `closesocket()` instead.
There must be more differences, but that's what I can remember right now. If you want portability with Winsocks, you'll have a small library for closing a socket, printing an error message and so on.
I'd probably go with `boost::asio`, personally (I've never used it, though). | I've developed a few portable wrappers around sockets. Make sure you don't go down the crappy lane of no return that is constituted of WinSock2 events. Other than that, as I see it, the biggest differences are:
* to start networking in Windows, you need to call `::WSAStartup()`, to shut it down in Windows, run `::WSACleanup()`; in Linux do nothing,
* `close()` in Linux is `closesocket()` in Windows,
* default buffer sizes differ between both drivers and operating systems, so make sure to set them using `SO_RCVBUF` and `SO_SNDBUF`,
* SO\_REUSEADDR steals the address on Windows, allows frequent re-opening on Linux; you'd probably only want to use this flag in Linux,
* making a socket non-blocking uses `::ioctlsocket()` in Windows, `::fcntl()` in Linux,
* the header files are different, `<sys/socket.h>` and friends in Linux, `<WinSock.h>` in Windows,
* to go portable, the easiest way is probably to use `::select()` to wait for data to arrive,
* `fd_set`s are totally different on Windows/Linux; this is only relevant if you need to optimize initialization of `fd_set`s, such as when adding/removing arbitrary sockets,
* in Windows, any thread hanging on the socket is released with an error code when the socket is closed, in Linux the thread remains waiting. If the thread is blocking the socket with for instance `::recvfrom()`, you might consider using `::sendto()` to release the stalling thread under Linux.
Everything else I ever needed just worked out of the box. | Winsock 2 portability | [
"",
"c++",
"winsock",
"portability",
""
] |
I have the following telephone number 866-234-5678.
I have an asp textbox and I am applying the following mask:
```
<cc2:MaskedEditExtender ID="maskPhone"
runat="server"
ClearMaskOnLostFocus="false"
AutoComplete="false"
MaskType="None"
Mask="(999)-999-9999"
InputDirection="LeftToRight"
TargetControlID="txtPhone">
</cc2:MaskedEditExtender>
```
When I load a page with the textbox, the telephone number displays like the following:
(662)-345-678\_ | The mask states 4 digits for the last group. The underscore '\_' displayed is the PromptCharacter of the MaskedEditExtender. | When you set the .Text property in the page\_load code-behind, the value of the rendered `<INPUT>` is set, and then the mask is applied with Javascript after the page finishes rendering in the browser. Because the first character (the `8`) isn't part of the mask at that point, the javascript for the extender seems to overwrite it with the first parenthesis `(` of the mask. It's an odd behaviour but completely replicable.
If you change your code behind to
```
tbxPhone.Text = " 8662345678";
```
This seems to fix it as the padded space is the one that gets truncated but that's damn fugly. Probably best to log a ticket with the devs on CodePlex or take a stab at fixing the extender yourself if you feel up to it :) | Telephone number is not displaying correctly in ajax control toolkit mask? | [
"",
"c#",
"asp.net",
"ajaxcontroltoolkit",
""
] |
In the name of efficiency in game programming, some programmers do not trust several C++ features. One of my friends claims to understand how game industry works, and would come up with the following remarks:
* Do not use smart pointers. Nobody in games does.
* Exceptions should not be (and usually are not) used in game programming, for memory and speed reasons.
How true are these statements? C++ features have been designed with efficiency in mind. Is that efficiency not sufficient for game programming? For 97% of game programming?
The C-way-of-thinking still seems to have a good grasp on the game development community. Is this true?
I watched another video of a talk on multi-core programming at GDC 2009. His talk was almost exclusively oriented towards Cell Programming, where DMA transfer is needed before processing (simple pointer access won't work with the SPE of Cell). He discouraged the use of polymorphism as the pointer has to be "re-based" for DMA transfer. How sad. It is like going back to square one. I don't know if there is an elegant solution to program C++ polymorphism on the Cell. The topic of DMA transfer is esoteric and I do not have much background here.
I agree that C++ has also not been very nice to programmers who want a small language to hack with, and not read stacks of books. Templates have also scared the hell out of debugging. Do you agree that C++ is too much feared by the gaming community? | Look, most everything you hear *anyone* say about efficiency in programming is magical thinking and superstition. Smart pointers do have a performance cost; especially if you're doing a lot of fancy pointer manipulations in an inner loop, it could make a difference.
Maybe.
But when people *say* things like that, it's usually the result of someone who told them long ago that X was true, without anything but intuition behind it. Now, the Cell/polymorphism issue *sounds* plausible — and I bet it did to the first guy who said it. But I haven't verified it.
You'll hear the very same things said about C++ for operating systems: that it is too slow, that it does things you want to do well, badly.
None the less we built OS/400 (from v3r6 forward) entirely in C++, bare-metal on up, and got a code base that was fast, efficient, and small. It took some work; especially working from bare metal, there are some bootstrapping issues, use of placement new, that kind of thing.
C++ can be a problem just because it's too damn big: I'm rereading Stroustrup's wristbreaker right now, and it's pretty intimidating. But I don't think there's anything inherent that says you can't use C++ in an effective way in game programming. | The last game I worked on was Heavenly Sword on the PS3 and that was written in C++, even the cell code. Before that, I did some PS2 games and PC games and they were C++ as well. Non of the projects used smart pointers. Not because of any efficiency issues but because they were generally not needed. Games, especially console games, do not do dynamic memory allocation using the standard memory managers during normal play. If there are dynamic objects (missiles, enemies, etc) then they are usually pre-allocated and re-used as required. Each type of object would have an upper limit on the number of instances the game can cope with. These upper limits would be defined by the amount of processing required (too many and the game slows to a crawl) or the amount of RAM present (too much and you could start frequently paging to disk which would seriously degrade performance).
Games generally don't use exceptions because, well, games shouldn't have bugs and therefore not be capable of generating exceptions. This is especially true of console games where games are tested by the console manufacturer, although recent platforms like 360 and PS3 do appear to have a few games that can crash. To be honest, I've not read anything online about what the actual cost of having exceptions enabled is. If the cost is incurred only when an exception is thrown then there is no reason not to use them in games, but I don't know for sure and it's probably dependent on the compiler used. Generally, game programmers know when problems can occur that would be handled using an exception in a business application (things like IO and initialisation) and handle them without the use of exceptions (it is possible!).
But then, in the global scale, C++ is slowly decreasing as a language for game development. Flash and Java probably have a much bigger slice of market and they do have exceptions and smart pointers (in the form of managed objects).
As for the Cell pointer access, the problems arise when the code is being DMA'd into the Cell at an arbitrary base address. In this instance, any pointers in the code need to be 'fixed up' with the new base address; this includes v-tables, and you don't really want to do this for every object you load into the Cell. If the code is always loaded at a fixed address, then there is never a need to fix up the pointers. You lose a bit of flexibility though, as you're limiting where code can be stored. On a PC, the code never moves during execution so pointer fix-up at runtime is never needed.
I really don't think anyone 'distrusts' C++ features - not trusting the compiler is something else entirely and quite often new, esoteric architectures like the Cell tend to get robust C compilers before C++ ones because a C compiler is much easier to make than a C++ one. | C++ for Game Programming - Love or Distrust? | [
"",
"c++",
"performance",
""
] |
I'm new to C++ and am writing a multi-threaded app whereby different writers will be pushing objects onto a stack and readers pulling them off the stack (or at least pushing the pointer to an object)..
Are there any structures built-into C++ which can handle this without adding locking code etc.? If not, what about the Boost libraries?
EDIT:
Hi. Thanks for the initial great answers. I guess one reason I thought this could be built-in was that I was thinking purely in x86 space and thought that a PUSH/POP of pointers should be an atomic action at the instruction level.
I'm not sure if my initial hunch is true or not, but I guess this would not necessarily be true across all platforms. Though if running on x86, do you get atomic PUSHes and POPs to the stack and if so, does this essentially make it lock-free? | Yep: [Boost.Thread](http://www.boost.org/doc/html/thread.html) is great, and should fit your needs very well. (These days, many people say that you could almost count Boost as built-in functionality.)
There is still no class that you could use out-of-the-box, but once you have the synchronization primitives at hand, it really is quite simple to implement your own thread-safe wrapper around, for example, `std::stack`. It could look something like this (not implementing every method...):
```
template <typename T> class MyThreadSafeStack {
public:
void push(const T& item) {
boost::mutex::scoped_lock lock(m_mutex);
m_stack.push(item);
}
void pop() {
boost::mutex::scoped_lock lock(m_mutex);
m_stack.pop();
}
T top() const { // note that we shouldn't return a reference,
// because another thread might pop() this
// object in the meanwhile
boost::mutex::scoped_lock lock(m_mutex);
return m_stack.top();
}
private:
mutable boost::mutex m_mutex;
std::stack<T> m_stack;
};
```
---
If you are new to C++, please learn about [RAII](http://en.wikipedia.org/wiki/Resource_Acquisition_Is_Initialization). Relevant to this case, Boost.Thread has the "scoped lock" classes to make it difficult to shoot yourself in the leg by forgetting to release a lock.
If you ever find yourself writing code like this:
```
void doStuff() {
myLock.lock();
if (!condition) {
reportError();
myLock.unlock();
return;
}
try {
doStuffThatMayThrow();
}
catch (...) {
myLock.unlock();
throw; // rethrow as-is; "throw e;" would copy the exception and slice derived types
}
doMoreStuff();
myLock.unlock();
}
```
, then you should just say no, and go RAII instead (syntax not directly from Boost):
```
void doStuff() {
scoped_lock lock;
if (!condition) {
reportError();
return;
}
doStuffThatMayThrow();
doMoreStuff();
}
```
The point is that when the `scoped_lock` object goes out of scope, its destructor releases the resource -- in this case, the lock. This will always happen, no matter whether you exit the scope by throwing an exception, or by executing the odd `return` statement that your colleague sneakily added in the middle of your function, or simply by reaching the end of the function. | The current C++ standard doesn't address threading at all, so the answer to your first question is no. And in general, it is a bad idea to build locking into basic data structures, because they don't have sufficient information to perform it correctly and/or efficiently. Instead, the locking should be performed in the classes that use the data structures - in other words, in your own application classes. | Thread-safe C++ stack | [
"",
"c++",
"multithreading",
"boost",
"stack",
"thread-safety",
""
] |
What are the pro's and con's of maintaining a pool of frequently used objects and grab one from the pool instead of creating a new one. Something like string interning except that it will be possible for all class objects.
For example it can be considered to be good since it saves gc time and object creation time. On the other hand it can be a synchronization bottleneck if used from multiple threads, demands explicit deallocation and introduces possibility of memory leaks. By tying up memory that could be reclaimed, it places additional pressure on the garbage collector. | Unless the object is expensive to create, I wouldn't bother.
Benefits:
* Fewer objects created - if object creation is expensive, this can be significant. (The canonical example is probably database connections, where "creation" includes making a network connection to the server, providing authentication etc.)
Downsides:
* More complicated code
* Shared resource = locking; potential bottleneck
* Violates GC's expectations of object lifetimes (most objects will be short-lived)
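For illustration, a minimal sketch of the kind of pool being weighed here (all names are made up for this sketch; a real pool would bound its size and reset object state, and the `synchronized` methods are exactly the potential bottleneck noted above):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical minimal object pool; illustrative only.
class SimplePool<T> {
    private final Deque<T> free = new ArrayDeque<>();

    // Synchronized access: the shared-resource locking downside in action.
    synchronized T acquire() {
        return free.poll(); // null means the caller must create a new one
    }

    synchronized void release(T obj) {
        free.push(obj); // caller promises not to use obj afterwards
    }
}

public class PoolDemo {
    public static void main(String[] args) {
        SimplePool<StringBuilder> pool = new SimplePool<>();
        pool.release(new StringBuilder());
        StringBuilder sb = pool.acquire();
        System.out.println(sb != null);      // true: reused instance
        System.out.println(pool.acquire());  // null: pool is empty again
    }
}
```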
Do you have an actual problem you're trying to solve, or is this speculative? I wouldn't think about doing something like this unless you've got benchmarks/profile runs showing that there's a problem. | First law of optimization: don't do it. Second law: don't do it unless you actually have measured and know for a fact that you need to optimize and where.
Only if objects are really expensive to create, and if they can actually be reused (you can reset the state with only public operations to something that can be reused) it can be effective.
The two gains you mention are not really true: memory allocation in java is *free* (the cost was close to 10 cpu instructions, which is nothing). So reducing the creation of objects only saves you the time spent in the constructor. This can be a gain with really heavy objects that can be reused (database connections, threads) without changing: you reuse the *same* connection, the same thread.
GC time is not reduced. In fact it can be worse. With moving generational GCs (Java is, or was up to 1.5) the cost of a GC run is determined by the number of alive objects, not by the released memory. Alive objects will be moved to another space in memory (this is what makes memory allocation so fast: free memory is contiguous inside each GC block) a couple of times before being marked as *old* and moved into the older generation memory space.
Programming languages and support, as GC, were designed keeping in mind the common usage. If you steer away from the common usage in many cases you may end up with harder to read code that is less efficient. | Object Pooling in Java | [
"",
"java",
"memory-management",
"garbage-collection",
""
] |
I'm trying to select the first row where the cell value starts with the same keychar the user pressed. That's the part that is giving me trouble.
Here's how I'm handling the event (**updated with working solution**):
```
private void dataGridView1_KeyPress(object sender, KeyPressEventArgs e)
{
if (Char.IsLetter(e.KeyChar))
{
for (int i = 0; i < (dataGridView1.Rows.Count); i++)
{
if (dataGridView1.Rows[i].Cells["Name"].Value.ToString().StartsWith(e.KeyChar.ToString(), true, CultureInfo.InvariantCulture))
{
dataGridView1.Rows[i].Cells[0].Selected = true;
return; // stop looping
}
}
}
}
```
I'm sure it's something simple that I'm overlooking, but for the life of me can't figure out what it is.
## EDIT
Updated the code with solution applied | It might be a case issue: does the Value in Cells["Name"] start with a capital letter? Try using ToUpper or ToLower on both, or use the case-insensitive overload StartsWith(e.KeyChar.ToString(), true, CultureInfo.InvariantCulture). If you are trying to select the row itself, you'll want dataGridView1.Rows[i].Selected = true | ```
if (Char.IsLetterOrDigit(e.KeyChar))
{
foreach (DataGridViewRow dgvRow in myDgv.Rows)
{
if (dgvRow.Cells["ColumnName"].FormattedValue
.ToString().StartsWith(e.KeyChar.ToString(), true, CultureInfo.InvariantCulture))
{
dgvRow.Selected = true;
break;
}
}
}
```
If the DGV is set up to allow Multi-Select then you'd obviously want to deselect any existing selection. | Search datagridview on user keypress | [
"",
"c#",
".net",
"winforms",
"datagridview",
"keypress",
""
] |
I have a problem that seems easy on paper, but I'm having a hard time figuring out how best to write a single query.
I have a table
```
CREATE TABLE `profile_values` (
`fid` int(10) unsigned NOT NULL default '0',
`uid` int(10) unsigned NOT NULL default '0',
`value` text,
PRIMARY KEY (`uid`,`fid`),
KEY `fid` (`fid`)
)
```
The point of this table is to store profile information on a user.
E.g. a typical row would look like this:
```
fid | uid | value
__________________
1 | 77 | Mary
11 | 77 | Poppins
1 | 123 | Steve
11 | 123 | Davis
```
Note:
```
'fid' of '1' represents the first name
'fid' of '11' represents the last name
'uid' is a users id within the site.
```
What I am trying to achieve is to bring back all uid's that satisfy the condition of first name like 'S%' and last name like 'D%'.
The reason I need this is that I have an autocomplete search box for users on the site.
So if I type 'S' in the search box I will see a list of all users that begin with the letter 'S'; if I now type a whitespace and 'D' I should then see the list of users who match both conditions.
Does anyone have any idea how this can be accomplished?
Thanks in advance. | ```
select firstname.uid, firstname.value, lastname.value
from
profile_values firstname
inner join profile_values lastname
on firstname.uid = lastname.uid
WHERE
firstname.fid = 1
AND lastname.fid = 11
AND firstname.value like 'S%'
AND lastname.value like 'D%'
```
or, with a users table
```
select users.uid, firstname.value, lastname.value
from
users
inner join profile_values firstname
on firstname.uid = users.uid
AND firstname.fid = 1
inner join profile_values lastname
on lastname.uid = users.uid
AND lastname.fid = 11
WHERE
firstname.value like 'S%'
AND lastname.value like 'D%'
```
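The self-join can be sanity-checked quickly outside the original database; a purely illustrative Python/SQLite sketch using the question's sample rows (SQLite stands in for MySQL here, and everything about the harness is an assumption):

```python
import sqlite3

# Rebuild the question's sample data in an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE profile_values (fid INTEGER, uid INTEGER, value TEXT)")
conn.executemany("INSERT INTO profile_values VALUES (?, ?, ?)",
                 [(1, 77, "Mary"), (11, 77, "Poppins"),
                  (1, 123, "Steve"), (11, 123, "Davis")])

# Self-join: one alias for the first-name rows, one for the last-name rows.
rows = conn.execute("""
    SELECT firstname.uid, firstname.value, lastname.value
    FROM profile_values firstname
    INNER JOIN profile_values lastname ON firstname.uid = lastname.uid
    WHERE firstname.fid = 1 AND lastname.fid = 11
      AND firstname.value LIKE 'S%' AND lastname.value LIKE 'D%'
""").fetchall()
print(rows)  # [(123, 'Steve', 'Davis')]
```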
EDIT:
Forgot to mention: union performs badly on many DBMSs. If you need a union, there is often something wrong with your database model. Use any other alternative before using union. | The table you've shown looks like a classic example of the [inner-platform effect](http://en.wikipedia.org/wiki/Inner-Platform_Effect) anti-pattern.
So rather than answering your question in the short term (which has already been done), I'll answer that in the longer term your goal should probably be to change the database so that first name and last name are stored in their own columns. These can then be indexed so this type of search will be much faster, and they'll be much easier to work with as you can just use the column name directly.
The fact that you use numbers to refer to the type of data stored in the column indicates that there is a finite number of types of data actually in use, which means they could be broken out into separate columns (if space is a concern and many values are null then you may be able to use sparse columns).
Then your query would simply be something like:
```
select * from users where first_name like 's%' and last_name like 'd%'
``` | Problem with sql query. Should i be using union? | [
"",
"sql",
"mysql",
""
] |
I want to run a `tail -f logfile` command on a remote machine using python's paramiko module. I've been attempting it so far in the following fashion:
```
interface = paramiko.SSHClient()
#snip the connection setup portion
stdin, stdout, stderr = interface.exec_command("tail -f logfile")
#snip into threaded loop
print stdout.readline()
```
I'd like the command to run as long as necessary, but I have 2 problems:
1. How do I stop this cleanly? I thought of making a Channel and then using the `shutdown()` command on the channel when I'm through with it - but that seems messy. Is it possible to do something like sending `Ctrl-C` to the channel's stdin?
2. `readline()` blocks, and I could avoid threads if I had a non-blocking method of getting output- any thoughts? | 1) You can just close the client if you wish. The server on the other end will kill the tail process.
2) If you need to do this in a non-blocking way, you will have to use the channel object directly. You can then watch for both stdout and stderr with channel.recv\_ready() and channel.recv\_stderr\_ready(), or use select.select. | Instead of calling exec\_command on the client, get hold of the transport and generate your own channel. The [channel](http://www.lag.net/paramiko/docs/paramiko.Channel-class.html) can be used to execute a command, and you can use it in a select statement to find out when data can be read:
```
#!/usr/bin/env python
import paramiko
import select
client = paramiko.SSHClient()
client.load_system_host_keys()
client.connect('host.example.com')
transport = client.get_transport()
channel = transport.open_session()
channel.exec_command("tail -f /var/log/everything/current")
while True:
rl, wl, xl = select.select([channel],[],[],0.0)
if len(rl) > 0:
# Must be stdout
print channel.recv(1024)
```
The channel object can be read from and written to, connecting with stdout and stdin of the remote command. You can get at stderr by calling `channel.makefile_stderr(...)`.
I've set the timeout to `0.0` seconds because a non-blocking solution was requested. Depending on your needs, you might want to block with a non-zero timeout. | Long-running ssh commands in python paramiko module (and how to end them) | [
"",
"python",
"ssh",
"paramiko",
""
] |
Reading [msdn documentation](http://msdn.microsoft.com/en-us/library/xfhwa508.aspx) for dictionaries it says : "Public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe."
Does this mean that with a dictionary such as this:
```
static object syncObject = new object();
static Dictionary<string,MyObject> mydictionary= new Dictionary<string, MyObject>();
```
Is doing something like the code below unnecessary?
```
lock (syncObject)
{
context = new TDataContext();
mydictionary.Add("key", myObject);
}
``` | It's referring to static methods on the class itself. The statement is actually boilerplate documentation added to most classes. The Dictionary<> class doesn't actually expose any static methods.
You will have to serialize access to your dictionary class yourself using the lock keyword or perhaps even a ReaderWriterLock. Threading though is a pretty complex topic and the lock keyword isn't always appropriate. There's a great book by Joe Duffy "[Concurrent Programming on Windows](https://rads.stackoverflow.com/amzn/click/com/032143482X)" that goes into great depth about protecting shared application resources. | You are misinterpreting "public static **members** of this type" as "public static **instances** of this type". | Static dictionary in .Net Thread safety | [
"",
"c#",
"dictionary",
"thread-safety",
""
] |
How can I make a textbox in my winforms application that accepts new lines of text from anywhere in the application?
I have a main form that contains a textbox. I'd like to directly add text to the box from a method in another class.
**Update**
I tried this in my main form:
```
public void Output(String value)
{
if (txtOutput.Text.Length > 0)
{
txtOutput.AppendText(Environment.NewLine);
}
txtOutput.AppendText(value);
}
```
But I can't call Output from the other class. I'm new to C#, so perhaps I'm missing something obvious.
Regards, Miel.
PS Yes, I know this is bad design, but for now this seems to be the best way to do what I want. The textbox would function like a console. | You'll need to expose the `Text` property of the `TextBox` as a `string` property on your form. For example...
```
public string TextBoxText
{
get { return textBoxName.Text; }
set { textBoxName.Text = value; }
}
```
**Edit**
After reading the question edit, your problem is that you need a reference to a specific instance of the form whereever you're trying to execute that code. You can either pass around a reference (which is the better option), or you could use some smelly code and have a static property that refers to one instance of your form. Something like...
```
public partial class MyForm : Form
{
private static MyForm instance;
public static MyForm Instance
{
get { return instance; }
}
public MyForm() : base()
{
InitializeComponent();
// ....
instance = this;
}
}
```
Using this approach, you could call `MyForm.Instance.Output("test");` | In order to decouple a bit more you could inverse the control a bit:
```
// interface for exposing append method
public interface IAppend
{
void AppendText(string text);
}
// some class that can use the IAppend interface
public class SomeOtherClass
{
private IAppend _appendTarget = null;
public SomeOtherClass(IAppend appendTarget)
{
_appendTarget = appendTarget;
}
private void AppendText(string text)
{
if (_appendTarget != null)
{
_appendTarget.AppendText(text);
}
}
public void MethodThatWillWantToAppendText()
{
// do some stuff
this.AppendText("I will add this.");
}
}
// implementation of IAppend in the form
void IAppend.AppendText(string text)
{
textBox1.AppendText(text);
}
``` | access textbox from anywhere in application | [
"",
"c#",
"winforms",
"textbox",
""
] |
When I allocate a single object, this code works fine. When I try to add array syntax, it segfaults. Why is this? My goal here is to hide from the outside world the fact that class c is using b objects internally. I have posted the program to [codepad](http://codepad.org/DLzw3XQt) for you to play with.
```
#include <iostream>
using namespace std;
// file 1
class a
{
public:
virtual void m() { }
virtual ~a() { }
};
// file 2
class b : public a
{
int x;
public:
void m() { cout << "b!\n"; }
};
// file 3
class c : public a
{
a *s;
public:
// PROBLEMATIC SECTION
c() { s = new b[10]; } // s = new b;
void m() { for(int i = 0; i < 10; i++) s[i].m(); } // s->m();
~c() { delete[] s; } // delete s;
// END PROBLEMATIC SECTION
};
// file 4
int main(void)
{
c o;
o.m();
return 0;
}
``` | One problem is that the expression `s[i]` uses pointer arithmetic to compute the address of the desired object. Since `s` is defined as pointer to `a`, the result is correct for an array of `a`s and incorrect for an array of `b`s. The dynamic binding provided by inheritance only works for methods, nothing else (e.g., no virtual data members, no virtual `sizeof`). Thus when calling the method `s[i].m()` the `this` pointer gets set to what would be the `i`th `a` object in the array. But since in actuality the array is one of `b`s, it ends up (sometimes) pointing to somewhere in the middle of an object and you get a segfault (probably when the program tries to access the object's vtable). You might be able to rectify the problem by virtualizing and overloading `operator[]()`. (I Didn't think it through to see if it will actually work, though.)
Another problem is the `delete` in the destructor, for similar reasons. You might be able to virtualize and overload it too. (Again, just a random idea that popped into my head. Might not work.)
Of course, casting (as suggested by others) will work too. | Creating an array of 10 `b`'s with `new` and then assigning its address to an `a*` is just asking for trouble.
**Do not treat arrays polymorphically.**
For more information see [**ARR39-CPP. Do not treat arrays polymorphically**](https://www.securecoding.cert.org/confluence/display/cplusplus/ARR39-CPP.+Do+not+treat+arrays+polymorphically), at section [06. Arrays and the STL (ARR)](https://www.securecoding.cert.org/confluence/display/cplusplus/06.+Arrays+and+the+STL+%28ARR%29) of the [CERT C++ Secure Coding Standard](https://www.securecoding.cert.org/confluence/pages/viewpage.action?pageId=637). | C++ new[] into base class pointer crash on array access | [
"",
"c++",
"arrays",
"inheritance",
"polymorphism",
""
] |
Is it possible to add the "DeleteOnNull=true" on a custom class instead of modifying the DBML (generated) class directly?
For example, let's say this is a part of my generated dbml class:
```
[Table(Name="OrderDetails")]
public partial class OrderDetail :
INotifyPropertyChanging, INotifyPropertyChanged
{
// deleted for brevity
[Association(Name="Order_OrderDetail",
Storage="_Order", ThisKey="OrderId",
OtherKey="OrderId", IsForeignKey=true, DeleteOnNull=true)]
public Order Order
{
get { /* deleted */ }
set { /* deleted */ }
}
}
```
So is it possible to put the "DeleteOnNull=true" on a separate class? If it is, how? I have tried the following without any luck:
```
[MetadataType(typeof(OrderDetailMetadata))]
public partial class OrderDetail {
internal sealed class OrderDetailMetadata
{
[Association(DeleteOnNull = true)]
public object Order;
}
}
``` | Better late than never:
If you have been using the Designer to create your LTS config and entities, you can right-click on the DBML file, and choose "Open With...". Now select XML Editor and click OK.
Find your `Order_OrderDetail` association in this file, then add `DeleteOnNull="true"` to it. Save the file and let LTS re-generate your classes for you. Done! No more having your changes to the generated code file overwritten again! | Maybe you don't need the MetaData type. Will this work in your additional partial:
```
public partial class OrderDetail {
[Association(DeleteOnNull = true)]
public Order Order;
}
```
I know that by itself `Order` is not defined, but you shouldn't need to build like that. [This MSDN doc](http://msdn.microsoft.com/en-us/library/wa80x488.aspx) suggests that the above should be possible (if I'm reading it correctly). | DeleteOnNull (Association attribute) for Linq to SQL in custom class? | [
"",
"c#",
"linq",
"linq-to-sql",
""
] |
What's the difference between:
```
function bar()
{
for (x=0; x< 100; x++) {}
}
```
And
```
function bar()
{
var x;
for (x=0; x< 100; x++) {}
}
```
What if x isn't declared outside of that function, so it isn't a global variable? I always wonder this because I generally don't declare throwaway variables which are used only in a loop, but I wonder if this might break compatibility in a browser or such. | The first example will either add or modify the global variable x, which is generally to be avoided if not the desired outcome.
While your second example works as desired (no side effects) an alternative that looks better in my opinion would be
```
function bar()
{
for (var x=0; x< 100; x++) {}
}
``` | A variable is created at the time you declare/use it. If you omit the var keyword then the variable is created automatically in the global scope. So you produce a side effect. This is generally to be avoided.
Assume you use global variables and then you choose the name of a variable that some other piece of the software has already taken. This will lead to a situation where two pieces of code overwrite each other's values. This produces errors that are most of the time hard to debug. In your example you might be overwriting a global variable x that other software is using.
Using var is also faster. If you access a global variable it has to scan all scopes up to the global one for the variable name. By using var it is bound to your local scope.
It is good practice to always use var. Or better: it is always good to select the narrowest scope for your variables. Right now you have global and var. A var declaration is visible in the whole function no matter where you declare it. In JavaScript 1.7 there is a new keyword introduced: let. let narrows the scope even more. If you declare your for loop with
```
for (let x = 0; x < 100; x++) {}
```
then x is visible only inside the {} block. | Variable scope in Javascript for loop | [
"",
"javascript",
""
] |
Is it possible to add a method to an object class, and use it on all objects? | In Python attributes are implemented using a dictionary :
```
>>> t = test()
>>> t.__dict__["foo"] = "bla"
>>> t.foo
'bla'
```
But for "object", it uses a 'dictproxy' as an interface to prevent such assignement :
```
>>> object.__dict__["test"] = "test"
TypeError: 'dictproxy' object does not support item assignment
```
So no, you can't.
NB : you can't modify the metaclass Type directly neither. But as Python is very flexible, I am sure a Guru could find a way to achieve what you want. Any black wizard around here :-) ? | No, Python's internals take great care to make built-in types NOT mutable -- very different design choices from Ruby's. It's not possible to make object "monkeypatchable" without deeply messing with the C-coded internals and recompiling the Python runtime to make a very different version (this is for the classic CPython, but I believe exactly the same principle holds for other good implementations such as Jython and IronPython, just s/C/Java/ and S/C/C#/ respectively;-). | Object class override or modify | [
"",
"python",
""
] |
I'm trying to serialize a large collection of objects (20,000 objects within the collection). I'm doing this using the following code:
```
XmlSerializer xs = new XmlSerializer(deserialized.GetType());
StringWriter sw;
using (sw = new StringWriter())
{
xs.Serialize(sw, deserialized); // OutOfMemoryException here
}
string packet = sw.ToString();
return packet;
```
Is there a better way of doing this, or am I doing something blatantly wrong? | It looks like it *should* work, but CF does have unpredictable limitations.
Is xml a requirement? I can't remember trying it with 20k records, but another option might be to *try* using a different serializer - for example, [protobuf-net](http://code.google.com/p/protobuf-net/) works on CF2. I can't guarantee it'll work, but it might be worth a shot.
(in particular, I'm currently refactoring the code to try to work around some additional ["generics" limitations](http://marcgravell.blogspot.com/2009/03/compact-framework-woes-revisted.html) within CF - but unless you have a very complex object model this shouldn't affect you).
---
Example showing usage; note that this example also works OK for `XmlSerializer`, but protobuf-net uses only 20% of the space (or 10% of the space if you consider that characters are two bytes each in memory):
```
using System;
using System.Collections.Generic;
using System.IO;
using System.Xml.Serialization;
using ProtoBuf;
[Serializable, ProtoContract]
public class Department
{
[ProtoMember(1)]
public string Name { get; set; }
[ProtoMember(2)]
public List<Person> People { get; set; }
}
[Serializable, ProtoContract]
public class Person
{
[ProtoMember(1)]
public int Id { get; set; }
[ProtoMember(2)]
public string Name { get; set; }
[ProtoMember(3)]
public DateTime DateOfBirth { get; set; }
}
static class Program
{
[MTAThread]
static void Main()
{
Department dept = new Department { Name = "foo"};
dept.People = new List<Person>();
Random rand = new Random(123456);
for (int i = 0; i < 20000; i++)
{
Person person = new Person();
person.Id = rand.Next(50000);
person.DateOfBirth = DateTime.Today.AddDays(-rand.Next(2000));
person.Name = "fixed name";
dept.People.Add(person);
}
byte[] raw;
using (MemoryStream ms = new MemoryStream())
{
Serializer.Serialize(ms, dept);
raw = ms.ToArray(); // 473,399 bytes
}
XmlSerializer ser = new XmlSerializer(typeof(Department));
StringWriter sw = new StringWriter();
ser.Serialize(sw, dept);
string s = sw.ToString(); // 2,115,693 characters
}
}
```
Let me know if you want more help - I can talk about this subject all day ;-p
Note that it can work just from the standard xml attributes (`[XmlElement(Order=1)]`) - I've used the more specific `[ProtoMember(1)]` etc for clarity. This also allows fine-grained control of serialization (zigzag vs. two's complement, grouped vs. length-prefixed, etc). | Do you have any metrics on your application's memory consumption? I'm assuming you're running on WM, which means that each process' address space is limited to 32MB. With a large XML, it's possible that you've actually run out of memory. | C# Compact Framework - OutOfMemoryException with XmlSerializer.Serialize | [
"",
"c#",
"serialization",
"compact-framework",
"stringwriter",
""
] |
Recently I needed to add drag & drop functionality to a Silverlight application. Can anyone recommend a good drag & drop control? | I created a Drag/Drop controller that I think works really well. I have been using this technique for a while, and I have been very happy with it.
<http://houseofbilz.com/archive/2009/02/10/drag-and-drop-with-silverlight.aspx> | Here is a link to the best one I have found so far: <http://nickssoftwareblog.com/2008/10/07/silverlight-20-in-examples-part-drag-and-drop-inside-out/>
The code is available as a download from the blog post, although you have to rename it to a .zip: <http://nickssoftwareblog.files.wordpress.com/2008/10/genericdragdropzip.doc> | Drag and Drop control for Silverlight | [
"",
"c#",
"silverlight",
"drag-and-drop",
""
] |
I'm creating an application that I want to put into the cloud. This application has one main function.
It hosts socket CLIENT sessions on behalf of other users (think of Beejive IM for the iPhone, where it hosts IM sessions for clients to maintain state on those IM networks, allowing the client to connect/disconnect at will, without breaking the IM network connection).
The way I've planned it now, one 'worker instance' can likely only handle a finite number of client sessions (let's say 50,000 for argument's sake). Those sessions will be very long-lived worker tasks.
The issue I'm trying to get my head around is that I will sometimes need to perform tasks to specific client sessions (eg: If I need to disconnect a client session). With Azure, would I be able to queue up a smaller task that only the instance hosting that specific client session would be able to dequeue?
Right now I'm contemplating GoGrid as my provider, and I solve this issue by using Apache's Active Messaging Queue software. My web app enqueues 'disconnect' tasks that are assigned to a specific instance Id. Each client session is therefore assigned to a specific instance id. The instance then only dequeues 'disconnect' tasks that are assigned to it.
I'm wondering if it's feasible to do something similar on Azure, and how I would generally do it. I like the idea of not having to set up many different VMs to scale, but instead just deploying a single package. Also, it would be nice to make use of Azure's Queues instead of integrating a third party product such as Apache ActiveMQ, or even MSMQ. | I'd be very concerned about building a production application on Azure until the feature set, pricing, and licensing terms are finalized. For starters, you can't even do a cost comparison between it and e.g. GoGrid or EC2 or Mosso. So I don't see how it could possibly end up a front-runner. Also, we know that all of these systems will have glitches as they mature. Amazon's services are in much wider use than any of the others, and have been publicly available for many years. IMHO choosing Azure is a recipe for pain as they stabilize.
Have you considered Amazon's [Simple Queue Service](http://aws.amazon.com/sqs/) for queueing? | I think you can absolutely use Windows Azure for this. My recommendation would be to create a queue for each session you're tracking. Then enqueue the disconnect message (for example) on the queue for that session. The worker instance that's handling that connection should be the only one polling that queue, so it should handle performing the task on that connection. | Azure: Will it work for my App? | [
"",
"c#",
"sockets",
"azure",
"cloud",
"gogrid",
""
] |
We're using an old application that stores dates in C / Unix format. C time is basically the number of seconds since Jan 1st, 1970. The dates are stored as an integer in a SQL Server database. I am writing a view for a report that uses these dates.
So far, I'm converting from the UNIX time to a native datetime with:
```
DateAdd(s,3600+unix_time,'1/1/1970')
```
The 3600 is to convert from UTC to our local GMT+1 timezone. This is accurate in the winter, but in the summer it's one hour off due to daylight savings time.
Is there a built-in way to convert from UTC to local time in SQL Server? | Instead of 3600, you'll want to do DateDiff(s, getutcdate(), getdate())+unix\_time, which will give you the correct offset from the UTC time.
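For comparison, the same epoch arithmetic outside SQL Server, where offset-aware date libraries are available (a hypothetical Python sketch; the epoch value is made up for illustration):

```python
from datetime import datetime, timezone, timedelta

unix_time = 1234567890  # sample epoch value (seconds since 1970-01-01 UTC)

# UTC interpretation of the stored integer.
utc = datetime.fromtimestamp(unix_time, tz=timezone.utc)

# Fixed +1h offset: the same thing the original DateAdd(...)+3600 does,
# and it has the same flaw of ignoring daylight saving transitions.
fixed = datetime.fromtimestamp(unix_time, tz=timezone(timedelta(hours=1)))

print(utc.isoformat())    # 2009-02-13T23:31:30+00:00
print(fixed.isoformat())  # 2009-02-14T00:31:30+01:00
```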
Cheers,
Eric | Actually, the answer above neglects daylight savings time. If it's not currently daylight savings time a date within a daylight savings period will be an hour off and vice-versa. Also, lawmakers in many different countries love to make daylight savings more complicated than it needs to be.
Essentially, try to avoid this in SQL Server if at all possible. Client side date handling libraries can usually get it about right. If you need to work with timestamps already in the database, consider converting them to datetime columns or converting date values into timestamps in client code before using them as query parameters.
This is just one of those wheels that consistently gets invented the wrong way. | Daylight savings time in Sql Server | [
"",
"sql",
"sql-server",
"time",
""
] |
If we have a table called FollowUp that has columns [ ID(int), Value(Money) ]
and we have some rows in it, for example
ID --Value
1------70
2------100
3------150
8------200
20-----250
45-----280
**and we want to make one SQL query that gets each row's ID and Value along with the previous row's Value, so the data appears as follows**
ID --- Value ---Prev\_Value
1 ----- 70 ---------- 0
2 ----- 100 -------- 70
3 ----- 150 -------- 100
8 ----- 200 -------- 150
20 ---- 250 -------- 200
45 ---- 280 -------- 250
I made the following query, but I think it performs badly on huge amounts of data:
```
SELECT FollowUp.ID, FollowUp.Value,
(
SELECT F1.Value
FROM FollowUp as F1 where
F1.ID =
(
SELECT Max(F2.ID)
FROM FollowUp as F2 where F2.ID < FollowUp.ID
)
) AS Prev_Value
FROM FollowUp
```
So can anyone help me to get the best solution for such a problem? | This SQL should perform better than the one you have above, although these types of queries tend to be a little performance intensive... so anything you can put in them to limit the size of the dataset you are looking at will help tremendously. For example if you are looking at a specific date range, put that in.
```
SELECT followup.value,
( SELECT TOP 1 f1.VALUE
FROM followup as f1
WHERE f1.id<followup.id
ORDER BY f1.id DESC
) AS Prev_Value
FROM followup
```
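The correlated-subquery pattern can be sanity-checked outside SQL Server; a purely illustrative Python/SQLite sketch built from the question's sample rows (SQLite takes LIMIT where SQL Server takes TOP, and the harness is an assumption):

```python
import sqlite3

# Rebuild the question's sample data in memory.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE FollowUp (ID INTEGER PRIMARY KEY, Value REAL)")
conn.executemany("INSERT INTO FollowUp VALUES (?, ?)",
                 [(1, 70), (2, 100), (3, 150), (8, 200), (20, 250), (45, 280)])

# For each row, the subquery grabs the Value of the nearest lower ID;
# IFNULL supplies the 0 for the first row.
rows = conn.execute("""
    SELECT f.ID, f.Value,
           IFNULL((SELECT f1.Value FROM FollowUp f1
                   WHERE f1.ID < f.ID
                   ORDER BY f1.ID DESC LIMIT 1), 0) AS Prev_Value
    FROM FollowUp f ORDER BY f.ID
""").fetchall()
print(rows)  # (1, 70.0, 0), (2, 100.0, 70.0), ... (45, 280.0, 250.0)
```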
HTH | You can use the OVER statement to generate nicely increasing row numbers.
```
select
rownr = row_number() over (order by id)
, value
from your_table
```
With the numbers, you can easily look up the previous row:
```
with numbered_rows
as (
select
rownr = row_number() over (order by id)
, value
from your_table
)
select
cur.value
, IsNull(prev.value,0)
from numbered_rows cur
left join numbered_rows prev on cur.rownr = prev.rownr + 1
```
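If you want to sanity-check the expected output outside SQL Server, the same previous-row lookup can be reproduced with a correlated subquery (`TOP 1` becomes `LIMIT 1`, `IsNull` becomes `COALESCE`); a minimal Python + sqlite3 sketch, purely to verify the logic against the sample data from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE FollowUp (ID INTEGER PRIMARY KEY, Value INTEGER)")
conn.executemany("INSERT INTO FollowUp VALUES (?, ?)",
                 [(1, 70), (2, 100), (3, 150), (8, 200), (20, 250), (45, 280)])

rows = conn.execute("""
    SELECT cur.ID, cur.Value,
           COALESCE((SELECT prev.Value
                     FROM FollowUp AS prev
                     WHERE prev.ID < cur.ID
                     ORDER BY prev.ID DESC
                     LIMIT 1), 0) AS Prev_Value
    FROM FollowUp AS cur
    ORDER BY cur.ID
""").fetchall()

for row in rows:
    print(row)  # (1, 70, 0), (2, 100, 70), (3, 150, 100), ...
```

The output matches the table the question asks for, with 0 as the Prev_Value of the first row.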
Hope this is useful. | How to get a value from previous result row of a SELECT statement? | [
"",
"sql",
"ms-access",
""
] |
I am designing an intranet system for a medium-sized business. Should I keep a single log table for all modules, or make separate ones?
The audit log records all admin/staff activities (creating, updating, and deleting objects), and the log structure is universal for any kind of module.
Also, is it a good idea to pull reports based on the log records? My log table keeps the object type and object ID, so I can fetch data for any object, at any time, based on event, object name, and object ID.
What is the best approach for reporting in such cases? | Well, when you go to review your logs, which would you rather do, look in one place where you can see everything or have to check several different places, each of which just shows one piece of the system in isolation?
Keep in mind that, with a single table, it's trivial to filter out entries which are irrelevant or which the user does not have authorization to view. Combining several individual logs into a single, comprehensive view is a bit trickier to do, plus it has the additional disadvantage of, under most designs, requiring you to revisit the code which does the combining each time a new log table is added.
I definitely say a single log is preferable. The only situation I can think of where multiple segregated logs would be appropriate would be if security concerns were strong enough to require that log entries with different visibility must be physically segregated - and, in such a case, you'd probably be looking at separate log servers, not just separate tables. | See [log4php](http://www.vxr.it/log4php/). [log4j](http://logging.apache.org/log4j/1.2/manual.html) solved much of the logging problems by introducing log hierarchy and levels. I don't know how good log4php is, but it should be a starter. | Single centralized or separated log table for modules? | [
"",
"php",
"logging",
"crm",
""
] |
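To make the single-table idea concrete, here is a minimal sketch of a universal audit-log schema and one report query filtered per module. It is written in Python with sqlite3 purely for illustration; all table and column names are invented, not taken from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE audit_log (
        id          INTEGER PRIMARY KEY,
        module      TEXT,    -- which module produced the entry
        object_type TEXT,    -- e.g. 'invoice', 'user'
        object_id   INTEGER,
        action      TEXT,    -- 'create' / 'update' / 'delete'
        actor_id    INTEGER,
        logged_at   TEXT
    )
""")
conn.executemany(
    "INSERT INTO audit_log (module, object_type, object_id, action, actor_id, logged_at) "
    "VALUES (?, ?, ?, ?, ?, ?)",
    [("billing", "invoice", 7, "create", 1, "2009-05-01"),
     ("billing", "invoice", 7, "update", 2, "2009-05-02"),
     ("hr",      "user",    3, "delete", 1, "2009-05-02")])

# One comprehensive log, filtered per report -- no per-module tables to merge.
report = conn.execute(
    "SELECT action, COUNT(*) FROM audit_log WHERE module = ? "
    "GROUP BY action ORDER BY action",
    ("billing",)).fetchall()
print(report)  # [('create', 1), ('update', 1)]
```

Adding a new module requires no schema change and no change to the reporting queries, which is the main advantage the chosen answer points out.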
I'm currently developing a portlet for Liferay (using the Spring MVC framework). Now I just used the displaytag library for implementing paging on a list I'm displaying on the portlet.
My problem now is that I would need to detect whether the current request has been started by the paging control of the displaytag library. What I found is that when doing paging, a parameter is added in the URL that looks like "d-4157739-p=2" which indicates the current page that is shown. So I could do
```
int isPagingRequest = PortletRequestUtils.getIntParameter(request, "d-1332617-p", -1);
```
...and if isPagingRequest (which I could change to a boolean) has a value, then the request was initiated by the displaytag paging. This is, however, very bad coding, so I'd like to avoid it. Moreover, the number between the "d" and the "p" varies, which makes it really hard to detect.
Does anyone have a suggestion for how I can detect whether the current request was triggered by paging?
Thanks a lot | Displaytag provides a class "ParamEncoder" that (I think it's in its constructor) produces the checksums for you based off your table's name (the id or uid element - this must be set to produce valid checksums, i.e. the numbers between the "d" and the "-p"). Check it out. TableTagParameters includes the constants needed for the parameters as well - so with a combination of the two, you can retrieve the appropriate parameter key to look up in the request. | One option could be to add your own parameter to the value in the **requestURI** attribute. So for example, you could add this:
```
requestURI="mylistsource.action?ispage=true"
```
to the table tag, where mylistsource.action is your server action that generates the list in the first place.
On the server you can then look for the "ispage" parameter. | Displaytag library - detecting a paging request | [
"",
"java",
"jstl",
"portlet",
""
] |
I am currently storing normalized versions of strings in my SQL Server database in lower case. For example, in my Users table, I have a UserName and a LoweredUserName field. Depending on the context, I either use T-SQL's LOWER() function or C#'s String.ToLower() method to generate the lower case version of the user name to fill the LoweredUserName field. According to [Microsoft's guidelines](http://msdn.microsoft.com/en-us/library/ms973919.aspx) and [Visual Studio's code analysis rule CA1308](http://msdn.microsoft.com/en-us/library/bb386042.aspx), I should be using C#'s String.ToUpperInvariant() instead of ToLower(). According to Microsoft, this is both a performance and globalization issue: converting to upper case is safe, while converting to lower case can cause a loss of information (for example, [the Turkish 'I' problem](http://msdn.microsoft.com/en-us/library/system.string.toupperinvariant(VS.85).aspx)).
If I move to using ToUpperInvariant for string normalization, I will have to change my database schema as well, since my schema is based on [Microsoft's ASP.NET Membership](http://msdn.microsoft.com/en-us/library/ms229862(VS.80).aspx) framework (see [this related question](https://stackoverflow.com/questions/98908/what-is-the-point-of-the-lowered-columns-in-asp-net-membership-tables)), which normalizes strings to lower case.
Isn't Microsoft contradicting itself by telling us to use upper case normalization in C#, while it's own code in the Membership tables and procedures is using lower case normalization? Should I switch everything to upper case normalization, or just continue using lower case normalization? | To answer your first question, yes Microsoft is a bit inconsistent. To answer your second question, no do not switch anything until you have confirmed that this is causing a bottleneck in your application.
Think how much forward progress you can make on you project instead of wasting time switching everything. Your development time is much more valuable than the savings you would get from such a change.
Remember:
> Premature optimization is the root of all evil (or at least most of it) in programming. - [Donald Knuth](http://en.wikiquote.org/wiki/Donald_Knuth) | According to [CA1308](http://msdn.microsoft.com/en-us/library/bb386042.aspx), the reason to do this is that some characters cannot be roundtrip converted from upper to lower case. The important thing is that you always move in one direction, so if your standard is to always move to lower case then there is no reason to change it. | Normalization of Strings With String.ToUpperInvariant() | [
"",
"c#",
"asp.net",
"sql-server",
"asp.net-membership",
"code-analysis",
""
] |
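The roundtrip loss CA1308 warns about is easy to demonstrate with the Turkish dotless I. A quick illustration in Python (whose default case mapping is locale-independent, much like .NET's invariant culture; this is not C#, purely a character-level demo):

```python
# U+0131 LATIN SMALL LETTER DOTLESS I -- the character behind the "Turkish I" problem.
dotless_i = "\u0131"              # 'ı'

upper = dotless_i.upper()
print(upper)                      # 'I': uppercasing maps dotless ı onto plain I

# Lowercasing that result does NOT give the original character back:
roundtrip = upper.lower()
print(roundtrip)                  # 'i' -- the dotless/dotted distinction is lost
print(roundtrip == dotless_i)     # False
```

Once a lowercased value is stored, there is no way to tell whether an 'i' came from 'i' or from 'ı', which is exactly the information-loss argument behind normalizing to upper case.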
I have a C# project with some gif and mp3 files
How can I embed those files within my project?
(I don't want them to be visible to users) | You need to include them in the project as resources, and then access them later by reading from the DLL.
For gif files you can simply drop them on the resources (in the project->properties dialog) and then access them via
```
var img = Properties.Resources.GifName;
```
For mp3 files you will probably need to use embedded resources, and then read them out as a stream. To do this, drag the item to a folder in your project dedicated to these types of files. Right click on the file in the explorer and show the properties pane, and set the "build action" to "embedded resource".
You can then use code something like this (untested translation from vb, sorry), to get the thing back out as a stream. It's up to you to transform the stream into something your player can handle.
```
using System.Linq; // from System.Core. otherwise just translate linq to for-each
using System.IO;
using System.Reflection;
public Stream GetStream(string fileName) {
// assume we want a resource from the same assembly that called us
var ass = Assembly.GetCallingAssembly();
var fullName = GetResourceName(fileName, ass);
// ^^ should = MyCorp.FunnyApplication.Mp3Files.<filename>, or similar
return ass.GetManifestResourceStream(fullName);
}
// looks up a fully qualified resource name from just the file name. this is
// so you don't have to worry about any namespace issues/folder depth, etc.
public static string GetResourceName(string fileName, Assembly ass) {
var names = ass.GetManifestResourceNames().Where(n => n.EndsWith(fileName)).ToArray();
if (names.Length == 0) throw new Exception("No match found for: " + fileName);
if (names.Length > 1) throw new Exception("Multiple matches found.");
return names[0];
}
var mp3Stream = GetStream("startup-sound.mp3");
var mp3 = new MyMp3Class(mp3Stream); // some player-related class that uses the stream
```
Here are a few links to get you started
* [Microsoft Guide](http://support.microsoft.com/kb/319292)
* [Reading Resources from the DLL](http://www.csharper.net/blog/getting_an_embedded_resource_file_out_of_an_assembly.aspx) | Add them as embedded resources and they'll be compiled into the dll.
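The look-up-by-suffix idea in GetResourceName can be mimicked in other runtimes too; for comparison, a small Python sketch that treats an in-memory zip archive as the "assembly" and reads an embedded resource back out as a stream (the names and bytes here are invented for the demo):

```python
import io
import zipfile

# Build an in-memory "assembly" (here: a zip archive) with one embedded resource.
bundle = io.BytesIO()
with zipfile.ZipFile(bundle, "w") as zf:
    zf.writestr("Mp3Files/startup-sound.mp3", b"fake mp3 bytes")

def get_stream(archive: zipfile.ZipFile, file_name: str) -> io.BytesIO:
    # Match by suffix so callers need not know the folder/namespace prefix,
    # mirroring the GetResourceName/GetManifestResourceStream pair above.
    names = [n for n in archive.namelist() if n.endswith(file_name)]
    if len(names) != 1:
        raise LookupError(f"expected exactly one match, got {names}")
    return io.BytesIO(archive.read(names[0]))

with zipfile.ZipFile(bundle) as zf:
    data = get_stream(zf, "startup-sound.mp3").read()
print(data)  # b'fake mp3 bytes'
```

The failure modes are the same as in the C# version: zero matches or more than one match should raise rather than silently return the wrong resource.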
First add them to the project, by just dragging them to the solution explorer, then go to their properties to change their build action to embedded resources. | How to Hide gif or mp3 files in my project? | [
"",
"c#",
""
] |
If you have two threads invoking a static function at the same moment in time, is there a concurrency risk? And if that function uses a static member of the class, is there even a bigger problem?
* Are the two calls separated from each other? (Is the function effectively copied for the two threads?)
* Are they automatically queued?
For instance in next example, is there a risk?
```
private static int a = 5;
public static int Sum()
{
int b = 4;
a = 9;
int c = a + b;
return c;
}
```
And next example, is there a risk?
```
public static int Sum2()
{
int a = 5;
int b = 4;
int c = a + b;
return c;
}
```
**Update:** And indeed, if both functions are in the same class, what is the risk then?
thx, Lieven Cardoen | Yes, there is a concurrency risk when you modify a static variable in static methods.
The static functions themselves have distinct sets of local variables, but any static variables are shared.
In your specific samples you're not being exposed, but that's just because you're using constants (and assigning the same values to them). Change the code sample slightly and you'll be exposed.
**Edit:**
If you call *both* Sum1() AND Sum2() from different threads, you're in trouble: there's no way to guarantee the value of a in this statement: **int c = a + b;**
```
private static int a = 5;
public static int Sum1()
{
int b = 4;
a = 9;
int c = a + b;
return c;
}
public static int Sum2()
{
int b = 4;
int c = a + b;
return c;
}
```
You can also achieve concurrency problems with multiple invocations *of a single method* like this:
```
public static int Sum3(int currentA)
{
a = currentA;
int b = 4;
int c = a + b;
int d = a * b; // a may have changed here
return c + d;
}
```
The issue here is that the value of a may change mid-method due to other invocations changing it. | Yes, there is a risk. That's why you'll see in MSDN doc, it will often say "This class is threadsafe for static members" (or something like that). It means when MS wrote the code, they intentionally used synchronization primitives to make the static members threadsafe. This is common when writing libraries and frameworks, because it is easier to make static members threadsafe than instance members, because you don't know what the library user is going to want to do with instances. If they made instance members threadsafe for many of the library classes, they would put too many restrictions on you ... so often they let you handle it.
So you likewise need to make your static members threadsafe (or document that they aren't).
By the way, static **constructors** are threadsafe in a sense. The CLR will make sure they are called only once and will prevent 2 threads from getting into a static constructor.
EDIT: Marc pointed out in the comments an edge case in which static constructors are not threadsafe. If you use reflection to explicitly call a static constructor, apparently you can call it more than once. So I revise the statement as follows: as long as you are relying on the CLR to decide when to call your static constructor, then the CLR will prevent it from being called more than once, and it will also prevent the static ctor from being called re-entrantly. | Static Function Concurrency ASP.NET | [
"",
"c#",
"asp.net",
"concurrency",
"static",
""
] |
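Whatever the language, the standard remedy is to serialize access to the shared static state. A hedged Python sketch of the same pattern (module-level variables standing in for C# static members, `threading.Lock` standing in for C#'s `lock` statement):

```python
import threading

a = 5                      # shared "static" state, like the C# field above
lock = threading.Lock()

def sum_shared(current_a: int) -> int:
    global a
    # Without the lock, another thread could reassign `a` between statements,
    # so `c` and `d` might be computed from different values of `a`.
    with lock:
        a = current_a
        b = 4
        c = a + b          # 13 when current_a == 9
        d = a * b          # 36 when current_a == 9
        return c + d

results = []

def worker() -> None:
    results.append(sum_shared(9))

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)             # [49, 49, 49, 49, 49, 49, 49, 49]
```

With the lock held for the whole method body, every invocation sees one consistent value of the shared variable, which is exactly what the unguarded C# versions cannot guarantee.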
I'm using Visual Studio 2005, C# with Framework 2.0. I'd like to use auto complete, but would like the list to come from a table in my database.
Is there a way to databind the AutoCompleteSource?
<http://www.asp.net/AJAX/AjaxControlToolkit/Samples/AutoComplete/AutoComplete.aspx> | How do I link autocomplete on a textbox to a database table? C# 2.0 | [
"",
"c#",
"data-binding",
"textbox",
"autocomplete",
""
] |
I am currently developing an advertising system, which has been running just fine for a while now, apart from recently, when our views per day shot up from about 7k to 328k. Our server cannot take the pressure anymore - and knowing that I am not the best SQL guy around (hey, I can make it work, but not always in the best way), I am asking here for some optimization guidelines. I hope that some of you will be able to give rough ideas on how to improve this - I don't specifically need code, just to see the light :).
As it is at the moment, when an advert is supposed to be shown, a PHP script is called, which in turn calls a stored procedure. This stored procedure does several checks: it tests against our customer database to see if the person showing the advert (identified by a primary key ID) is an actual customer under the given locale (our system runs in several languages, which are all run as separate sites). Next, all the advert details are fetched (image location as a URL, height and width of the advert), and the last step calls a separate stored procedure to test whether the advert is allowed to be shown (has the campaign expired, either by date or by the number of adverts allowed to show?), whether the customer has access to it (we have two access systems running, a blacklist one and a whitelist one), and lastly what type of campaign we're running, whether the view is unique, and so forth.
The code consists of a couple of stored procedures that I will post in here.
--- procedure called from PHP
```
CREATE PROCEDURE [dbo].[ExecView]
(
@publisherId bigint,
@advertId bigint,
@localeId int,
@ip varchar(15),
@ipIsUnique bit,
@success bit OUTPUT,
@campaignId bigint OUTPUT,
@advert varchar(500) OUTPUT,
@advertWidth int OUTPUT,
@advertHeight int OUTPUT
)
AS
BEGIN
SET NOCOUNT ON;
DECLARE @unique bit
DECLARE @approved bit
DECLARE @publisherEarning money
DECLARE @advertiserCost money
DECLARE @originalStatus smallint
DECLARE @advertUrl varchar(500)
DECLARE @return int
SELECT @success = 1, @advert = NULL, @advertHeight = NULL, @advertWidth = NULL
--- Must be valid publisher, ie exist and actually be a publisher
IF dbo.IsValidPublisher(@publisherId, @localeId) = 0
BEGIN
SELECT @success = 0
RETURN 0
END
--- Must be a valid advert
EXEC @return = FetchAdvertDetails @advertId, @localeId, @advert OUTPUT, @advertUrl OUTPUT, @advertWidth OUTPUT, @advertHeight OUTPUT
IF @return = 0
BEGIN
SELECT @success = 0
RETURN 0
END
EXEC CanAddStatToAdvert 2, @advertId, @publisherId, @ip, @ipIsUnique, @success OUTPUT, @unique OUTPUT, @approved OUTPUT, @publisherEarning OUTPUT, @advertiserCost OUTPUT, @originalStatus OUTPUT, @campaignId OUTPUT
IF @success = 1
BEGIN
INSERT INTO dbo.Stat (AdvertId, [Date], Ip, [Type], PublisherEarning, AdvertiserCost, [Unique], Approved, PublisherCustomerId, OriginalStatus)
VALUES (@advertId, GETDATE(), @ip, 2, @publisherEarning, @advertiserCost, @unique, @approved, @publisherId, @originalStatus)
END
END
```
--- IsValidPublisher
```
CREATE FUNCTION [dbo].[IsValidPublisher]
(
@publisherId bigint,
@localeId int
)
RETURNS bit
AS
BEGIN
DECLARE @customerType smallint
DECLARE @result bit
SET @customerType = (SELECT [Type] FROM dbo.Customer
WHERE CustomerId = @publisherId AND Deleted = 0 AND IsApproved = 1 AND IsBlocked = 0 AND LocaleId = @localeId)
IF @customerType = 2
SET @result = 1
ELSE
SET @result = 0
RETURN @result
END
```
-- Fetch advert details
```
CREATE PROCEDURE [dbo].[FetchAdvertDetails]
(
@advertId bigint,
@localeId int,
@advert varchar(500) OUTPUT,
@advertUrl varchar(500) OUTPUT,
@advertWidth int OUTPUT,
@advertHeight int OUTPUT
)
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
SELECT @advert = T1.Advert, @advertUrl = T1.TargetUrl, @advertWidth = T1.Width, @advertHeight = T1.Height FROM Advert as T1
INNER JOIN Campaign AS T2 ON T1.CampaignId = T2.Id
WHERE T1.Id = @advertId AND T2.LocaleId = @localeId AND T2.Deleted = 0 AND T2.[Status] <> 1
IF @advert IS NULL
RETURN 0
ELSE
RETURN 1
END
```
--- CanAddStatToAdvert
```
CREATE PROCEDURE [dbo].[CanAddStatToAdvert]
@type smallint, --- Type of stat to add
@advertId bigint,
@publisherId bigint,
@ip varchar(15),
@ipIsUnique bit,
@success bit OUTPUT,
@unique bit OUTPUT,
@approved bit OUTPUT,
@publisherEarning money OUTPUT,
@advertiserCost money OUTPUT,
@originalStatus smallint OUTPUT,
@campaignId bigint OUTPUT
AS
BEGIN
SET NOCOUNT ON;
DECLARE @campaignLimit int
DECLARE @campaignStatus smallint
DECLARE @advertsLeft int
DECLARE @campaignType smallint
DECLARE @campaignModeration smallint
DECLARE @count int
SELECT @originalStatus = 0
SELECT @success = 1
SELECT @approved = 1
SELECT @unique = 1
SELECT @campaignId = CampaignId FROM dbo.Advert
WHERE Id = @advertId
IF @campaignId IS NULL
BEGIN
SELECT @success = 0
RETURN
END
SELECT @campaignLimit = Limit, @campaignStatus = [Status], @campaignType = [Type], @publisherEarning = PublisherEarning, @advertiserCost = AdvertiserCost, @campaignModeration = ModerationType FROM dbo.Campaign
WHERE Id = @campaignId
IF (@type <> 0 AND @type <> 2 AND @type <> @campaignType) OR ((@campaignType = 0 OR @campaignType = 2) AND (@type = 1)) -- if not a click or view type, then type must match the campaign (ie, only able to do leads on lead campaigns, no isales or etc), click and view campaigns however can do leads too
BEGIN
SELECT @success = 0
RETURN
END
-- Take advantage of the fact that the variable only gets touched if there is a record,
-- which is supposed to override the existing one, if there is one
SELECT @publisherEarning = Earning FROM dbo.MapCampaignPublisherEarning
WHERE CanpaignId = @campaignId AND PublisherId = @publisherId
IF @campaignStatus = 1
BEGIN
SELECT @success = 0
RETURN
END
IF NOT @campaignLimit IS NULL
BEGIN
SELECT @advertsLeft = AdvertsLeft FROM dbo.Campaign WHERE Id = @campaignId
IF @advertsLeft < 1
BEGIN
SELECT @success = 0
RETURN
END
END
IF @campaignModeration = 0 -- blacklist
BEGIN
SELECT @count = COUNT([Status]) FROM dbo.MapCampaignModeration WHERE CampaignId = @campaignId AND PublisherId = @publisherId AND [Status] = 3
IF @count > 0
BEGIN
SELECT @success = 0
RETURN
END
END
ELSE -- whitelist
BEGIN
SELECT @count = COUNT([Status]) FROM dbo.MapCampaignModeration WHERE CampaignId = @campaignId AND PublisherId = @publisherId AND [Status] = 2
IF @count < 1
BEGIN
SELECT @success = 0
RETURN
END
END
IF @ipIsUnique = 1
BEGIN
SELECT @unique = 1
END
ELSE
BEGIN
IF (SELECT COUNT(T1.Id) FROM dbo.Stat AS T1
INNER JOIN dbo.IQ_Advert AS T2
ON T1.AdvertId = T2.Id
WHERE T2.CampaignId = @campaignId
AND T1.[Type] = @type
AND T1.[Unique] = 1
AND T1.PublisherCustomerId = @publisherId
AND T1.Ip = @ip
AND DATEADD(SECOND, 86400, T1.[Date]) > GETDATE()
) = 0
SELECT @unique = 1
ELSE
BEGIN
SELECT @unique = 0, @originalStatus = 1 -- not unique, and set status to be ip conflict
END
END
IF @unique = 0 AND @type <> 0 AND @type <> 2
BEGIN
SELECT @unique = 1, @approved = 0
END
IF @originalStatus = 0
SELECT @originalStatus = 5
IF @approved = 0 OR @type <> @campaignType
BEGIN
SELECT @publisherEarning = 0, @advertiserCost = 0
END
END
```
I am thinking this needs more than just a couple of indexes thrown in to help it, but rather a total rethinking of how to handle it. I have heard that running this as a batch would help, but I am not sure how to implement that, and really not sure if I can implement it in such a way that I keep all these nice checks before the actual insert, or if I have to give up on some of them.
Anyhow, all help would be appreciated, if you need any of the table layouts, let me know :).
Thanks for taking the time to look at it :) | Make sure to reference tables with the ownership prefix. So instead of:
```
INNER JOIN Campaign AS T2 ON T1.CampaignId = T2.Id
```
Use
```
INNER JOIN dbo.Campaign AS T2 ON T1.CampaignId = T2.Id
```
That will allow the database to cache the execution plan.
Another possibility is to disable database locking, which has data integrity risks, but can significantly increase performance:
```
INNER JOIN dbo.Campaign AS T2 (nolock) ON T1.CampaignId = T2.Id
```
Run a sample query in Query Analyzer with "Show Execution Plan" turned on. This might give you a hint as to the slowest part of the query. | It seems like FetchAdvertDetails hits the same tables as the start of CanAddStatToAdvert (Advert and Campaign). If possible, I'd try to eliminate FetchAdvertDetails and roll its logic into CanAddStatToAdvert, so you don't have to hit Advert and Campaign the extra times. | Sql Optimization on advertising system | [
"",
"sql",
"optimization",
""
] |
I'm stuck trying to turn on a single pixel on a Windows Form.
```
graphics.DrawLine(Pens.Black, 50, 50, 51, 50); // draws two pixels
graphics.DrawLine(Pens.Black, 50, 50, 50, 50); // draws no pixels
```
The API really should have a method to set the color of one pixel, but I don't see one.
I am using C#. | This will set a single pixel:
```
e.Graphics.FillRectangle(aBrush, x, y, 1, 1);
``` | The `Graphics` object doesn't have this, since it's an abstraction and could be used to cover a vector graphics format. In that context, setting a single pixel wouldn't make sense. The `Bitmap` class does have `GetPixel()` and `SetPixel()`, but a `Graphics` object built on one does not. For your scenario, your option really seems like the only one, because there's no one-size-fits-all way to set a single pixel for a general graphics object (and you don't know EXACTLY what it is, as your control/form could be double-buffered, etc.)
Why do you need to set a single pixel? | Draw a single pixel on Windows Forms | [
"",
"c#",
".net",
"winforms",
"gdi+",
"pixel",
""
] |
Are there any good ways to objectively measure a query's performance in Oracle 10g? There's one particular query that I've been [tuning](https://stackoverflow.com/questions/827108/how-can-i-speed-up-rownumber-in-oracle) for a few days. I've gotten a version that seems to be running faster (at least based on my initial tests), but the EXPLAIN cost is roughly the same.
1. How likely is it that the EXPLAIN cost is missing something?
2. Are there any particular situations where the EXPLAIN cost is disproportionately different from the query's actual performance?
3. I used the first\_rows hint on this query. Does this have an impact? | > How likely is it that the EXPLAIN cost is missing something?
Very unlikely. In fact, it would be a level `1` bug :)
Actually, if your statistics have changed significantly since you ran the `EXPLAIN`, the actual query plan will differ. But as soon as the query is compiled, the plan will remain the same.
Note `EXPLAIN PLAN` may show you things that are *likely* to happen but may never happen in an actual query.
Like, if you run an `EXPLAIN PLAN` on a hierarchical query:
```
SELECT *
FROM table
START WITH
id = :startid
CONNECT BY
parent = PRIOR id
```
with indexes on both `id` and `parent`, you will see an extra `FULL TABLE SCAN` which most probably will not happen in real life.
Use `STORED OUTLINE`'s to store and reuse the plan no matter what.
> Are there any particular situations where the EXPLAIN cost is disproportionately different from the query's actual performance?
Yes, it happens very often on complicated queries.
The `CBO` (cost based optimizer) uses calculated statistics to evaluate query time and choose the optimal plan.
If you have lots of `JOIN`'s, subqueries and these kinds of things in your query, its algorithm cannot predict exactly which plan will be faster, especially when you hit memory limits.
Here's the particular situation you asked about: a `HASH JOIN`, for instance, will need several passes over the `probe table` if the hash table does not fit into the memory allowed by `pga_aggregate_target`, but as of `Oracle 10g`, I don't remember this ever being taken into account by the `CBO`.
That's why I hint *every* query I expect to run for more than `2` seconds in a worst case.
> I used the first\_rows hint on this query. Does this have an impact?
This hint will make the optimizer use a plan with a lower *response* time: it will return the first rows as soon as possible, even if the overall query time is larger.
Practically, it almost always means using `NESTED LOOP`'s instead of `HASH JOIN`'s.
`NESTED LOOP`'s have poorer overall performance on large datasets, but they return the first rows faster (since no hash table needs to be built).
As for the query from your [**original question**](https://stackoverflow.com/questions/827108/how-can-i-speed-up-rownumber-in-oracle), see my answer [**here**](https://stackoverflow.com/questions/827108/how-can-i-speed-up-rownumber-in-oracle/830005#830005). | **Q:** Are there any good ways to objectively measure a query's performance in Oracle 10g?
* Oracle tracing is the best way to measure performance. Execute the query and let Oracle instrument the execution. In the SQLPlus environment, it's very easy to use AUTOTRACE.
<http://asktom.oracle.com/tkyte/article1/autotrace.html> (article moved)
<http://tkyte.blogspot.com/2007/04/when-explanation-doesn-sound-quite.html>
<http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:5671636641855>
And enabling Oracle trace in other environments isn't that difficult.
**Q:** There's one particular query that I've been tuning for a few days. I've gotten a version that seems to be running faster (at least based on my initial tests), but the EXPLAIN cost is roughly the same.
* The actual execution of the statement is what needs to be measured. EXPLAIN PLAN does a decent job of predicting the optimizer plan, but it doesn't actually *measure* the performance.
**Q:**> 1 . How likely is it that the EXPLAIN cost is missing something?
* Not very likely, but I have seen cases where EXPLAIN PLAN comes up with a different plan than the optimizer.
**Q:**> 2 . Are there any particular situations where the EXPLAIN cost is disproportionately different from the query's actual performance?
* The short answer is that I've not observed any. But then again, there's not really a direct correlation between the EXPLAIN PLAN cost and the actual observed performance. It's possible for EXPLAIN PLAN to give a really high number for cost, but to have the actual query run in less than a second. EXPLAIN PLAN does not measure the actual performance of the query, for that you need Oracle trace.
**Q:**> 3 . I used the first\_rows hint on this query. Does this have an impact?
* Any hint (like `/*+ FIRST_ROWS */`) may influence which plan is selected by the optimizer.
---
The "cost" returned by the EXPLAIN PLAN is relative. It's an indication of performance, but not an accurate gauge of it. You can't translate a cost number into a number of disk operations or a number of CPU seconds or number of wait events.
Normally, we find that a statement with an EXPLAIN PLAN cost shown as 1 is going to run "very quickly", and a statement with an EXPLAIN PLAN cost on the order of five or six digits is going to take more time to run. But not always.
What the optimizer is doing is comparing a lot of possible execution plans (full table scan, using an index, nested loop join, etc.) The optimizer is assigning a number to each plan, then selecting the plan with the lowest number.
I have seen cases where the optimizer plan shown by EXPLAIN PLAN does *NOT* match the actual plan used when the statement is executed. I saw that a decade ago with Oracle8, particularly when the statement involved bind variables, rather than literals.
To get an actual cost for statement execution, turn on tracing for your statement.
The easiest way to do this is with SQLPlus AUTOTRACE.
```
[http://asktom.oracle.com/tkyte/article1/autotrace.html][4]
```
Outside the SQLPlus environment, you can turn on Oracle tracing:
```
alter session set timed_statistics = true;
alter session set tracefile_identifier = here_is_my_session;
alter session set events '10046 trace name context forever, level 12'
--alter session set events '10053 trace name context forever, level 1'
select /*-- your_statement_here --*/ ...
alter session set events '10046 trace name context off'
--alter session set events '10053 trace name context off'
```
This puts a trace file into the user\_dump\_dest directory on the server. The tracefile produced will have the statement plan AND all of the wait events. (The assigned tracefile identifier is included in the filename, and makes it easier to find your file in the udump directory)
```
select value from v$parameter where name like 'user_dump_dest'
```
If you don't have access to the tracefile, you're going to need to get help from the dba to get you access. (The dba can create a simple shell script that developers can run against a .trc file to run tkprof, and change the permissions on the trace file and on the tkprof output. You can also use the newer trcanlzr. There are Oracle metalink notes on both. | How accurate is Oracle's EXPLAIN PLAN? | [
"",
"sql",
"oracle",
"oracle10g",
"sql-execution-plan",
""
] |
Is there any way to create all instance properties dynamically? For example, I would like to be able to generate all attributes in the constructor and still be able to access them after the class is instantiated like this: `$object->property`. Note that I want to access the properties separately, and not using an array; here's an example of what I *don't* want:
```
class Thing {
public $properties;
function __construct(array $props=array()) {
$this->properties = $props;
}
}
$foo = new Thing(array('bar' => 'baz'));
# I don't want to have to do this:
$foo->properties['bar'];
# I want to do this:
//$foo->bar;
```
To be more specific, when I'm dealing with classes that have a large number of properties, I would like to be able to select all columns in a database (which represent the properties) and create instance properties from them. Each column value should be stored in a separate instance property. | Sort of. There are magic methods that allow you to hook your own code up to implement class behavior at runtime:
```
class foo {
public function __get($name) {
return('dynamic!');
}
public function __set($name, $value) {
$this->internalData[$name] = $value;
}
}
```
That's an example of dynamic getter and setter methods; it allows you to execute behavior whenever an object property is accessed. For example,
```
$obj = new foo();
print($obj->someProperty);
```
would print, in this case, "dynamic!" and you could also assign a value to an arbitrarily named property in which case the \_\_set() method is silently invoked. The \_\_call($name, $params) method does the same for object method calls. Very useful in special cases. But most of the time, you'll get by with:
```
class foo {
public function __construct() {
foreach(getSomeDataArray() as $k => $value)
$this->{$k} = $value;
}
}
```
...because mostly, all you need is to dump the content of an array into correspondingly named class fields once, or at least at very explicit points in the execution path. So, unless you really need dynamic behavior, use that last example to fill your objects with data.
> This is called overloading
> <http://php.net/manual/en/language.oop5.overloading.php> | It depends exactly what you want. Can you modify the **class** dynamically? Not really. But can you create **object** properties dynamically, as in one particular instance of that class? Yes.
```
class Test
{
public function __construct($x)
{
$this->{$x} = "dynamic";
}
}
$a = new Test("bar");
print $a->bar;
```
Outputs:
> dynamic
So an object property named "bar" was created dynamically in the constructor. | Can you create instance properties dynamically in PHP? | [
"",
"php",
"design-patterns",
"oop",
"class",
""
] |
I have a table that contains, among other things, about 30 columns of boolean flags that denote particular attributes. I'd like to return them, sorted by frequency, as a recordset along with their column names, like so:
```
Attribute Count
attrib9 43
attrib13 27
attrib19 21
etc.
```
My efforts thus far can achieve something similar, but I can only get the attributes in columns using conditional SUMs, like this:
```
SELECT SUM(IIF(a.attribIndex=-1,1,0)), SUM(IIF(a.attribWorkflow =-1,1,0))...
```
Plus, the query is already getting a bit unwieldy with all 30 SUM/IIFs and won't handle any changes in the number of attributes without manual intervention.
The first six characters of the attribute columns are the same (attrib) and unique in the table, is it possible to use wildcards in column names to pick up all the applicable columns?
Also, can I pivot the results to give me a sorted two-column recordset?
I'm using Access 2003 and the query will eventually be via ADODB from Excel. | This depends on whether or not you have the attribute names anywhere *in data*. If you do, then birdlips' answer will do the trick. However, if the names are only column names, you've got a bit more work to do--and I'm afraid you can't do it with simple SQL.
No, you can't use wildcards as column names in SQL. You'll need procedural code to do this (i.e., a VB module in Access; you could do it within a stored procedure if you were on SQL Server). Use this code to build the SQL.
It won't be pretty. I *think* you'll need to do it one attribute at a time: select a string whose value is that attribute name and the count-where-True, then either A) run that and store the result in a new row in a scratch table, or B) append all those selects together with "Union" between them before running the batch.
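As a hedged sketch of option B (in Python rather than Access VB, and with invented table and column names), the query-building step might look like this:

```python
# Hypothetical sketch: assemble one SELECT per attribute column and
# UNION them together. Table and column names are made up for illustration.
def build_attribute_count_sql(table, columns):
    selects = [
        "SELECT '{0}' AS Attribute, SUM(IIF({0}=-1,1,0)) AS Cnt FROM {1}".format(col, table)
        for col in columns
    ]
    return "\nUNION ALL\n".join(selects) + "\nORDER BY Cnt DESC"

print(build_attribute_count_sql("tblThings", ["attrib9", "attrib13"]))
```

In a real solution the column list would itself come from the table definition (e.g., filtered on the `attrib` prefix), so new attribute columns are picked up without manual intervention.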
My Access VB is more than a bit rusty, so I don't trust myself to give you anything like executable code.... | Just a simple count and group by should do it
```
Select attribute_name
, count(*)
from attribute_table
group by attribute_name
```
To answer your comment use Analytic Functions for that:
```
Select attribute_table.*
, count(*) over(partition by attribute_name) cnt
from attribute_table
``` | How best to sum multiple boolean values via SQL? | [
"",
"sql",
"ms-access",
"jet",
""
] |
I have a class with 2 constructors:
```
public class Lens
{
public Lens(string parameter1)
{
//blabla
}
public Lens(string parameter1, string parameter2)
{
// want to call constructor with 1 param here..
}
}
```
I want to call the first constructor from the 2nd one. Is this possible in C#? | Append `:this(required params)` at the end of the constructor to do **'constructor chaining'**
```
public Test( bool a, int b, string c )
: this( a, b )
{
this.m_C = c;
}
public Test( bool a, int b, float d )
: this( a, b )
{
this.m_D = d;
}
private Test( bool a, int b )
{
this.m_A = a;
this.m_B = b;
}
```
Source Courtesy of [csharp411.com](http://www.csharp411.com/constructor-chaining/) | Yes, you'd use the following
```
public class Lens
{
public Lens(string parameter1)
{
//blabla
}
public Lens(string parameter1, string parameter2) : this(parameter1)
{
}
}
``` | Calling constructor from other constructor in same class | [
"",
"c#",
"constructor",
""
] |
What text to HTML converter for PHP would you recommend?
One of the examples would be Markdown, which is used here at SO. User just types some text into the text-box with some natural formatting: enters at the end of line, empty line at the end of paragraph, asterisk delimited bold text, etc. And this syntax is converted to HTML tags.
The simplicity is the main feature we are looking for, there does not need to be a lot of possibilities but those basic that are there should be very intuitive (automatic URL conversion to link, emoticons, paragraphs).
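(For a sense of scale: the basic conversions described above, paragraphs, bold text, automatic links, are only a few lines in any language. A naive, purely illustrative sketch in Python, not a recommendation:)

```python
import re

# Naive illustration only: blank lines become paragraphs, *bold* becomes
# <strong>, bare URLs become links. Real converters handle escaping,
# nesting, and many edge cases that this sketch ignores.
def naive_text_to_html(text):
    html = re.sub(r"\*(.+?)\*", r"<strong>\1</strong>", text)
    html = re.sub(r"(https?://\S+)", r'<a href="\1">\1</a>', html)
    paragraphs = [p.strip() for p in html.split("\n\n") if p.strip()]
    return "\n".join("<p>{}</p>".format(p) for p in paragraphs)

print(naive_text_to_html("Hello *world*\n\nSee http://example.com"))
```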
A big plus would be if there is a WYSIWYG editor for it. Half-WYSIWYG just like here at SO would be even better.
Extra points would be if it would fit with Zend Framework well. | I will stick with my original idea of adopting Texy.
None of the products mentioned here actually beats it. I had problems with Texy's syntax, but it seems to be quite standard and is present in other products too.
It is very lightweight, supports very natural syntax, and has a great "half" WYSIWYG editor, [Texyla](http://code.google.com/p/texyla/) (the wiki is in Czech only). | Take your pick at <http://en.wikipedia.org/wiki/Lightweight_markup_language>.
As for Markdown, there's one PHP parser that I've been using called [PHP Markdown](http://michelf.com/projects/php-markdown/), and I especially like the [Extra extension](http://michelf.com/projects/php-markdown/extra/).
I have actually taken a stab at extending it with my own (undocumented) features. It's [available at GitHub](http://github.com/wolfie/php-markdown/tree/extra) (remember that it's the extra branch I've fixed, not the masteR), if you're interested. I've intended on making it a 'proper fork' for a while, but that's another, largely offtopic, story. | Text-to-HTML converter for PHP | [
"",
"php",
"syntax",
"wysiwyg",
""
] |
C# 4.0 introduces `dynamic` keyword, which will look up at run-time.
Does this mean we'll no longer need awkward reflection? If so, can you show an example? | We'll still have Reflection - using 'dynamic' against regular CLR objects will invoke a Reflection-based dispatcher.
So - we'll still have Reflection, but it'll be easier to do.
Here's an example:
```
// Via 'dynamic'
dynamic dx = GetSomeCLRObject();
dx.DoSomething();
dx.SomeMember = 2;
// Via Reflection
object x = GetSomeCLRObject();
Type xt = x.GetType();
MemberInfo DoSomethingMethod = xt.GetMethod("DoSomething");
DoSomethingMethod.Invoke(x, null);
PropertyInfo SomeMemberProperty = xt.GetProperty("SomeMember");
SomeMemberProperty.SetValue(x, 2);
```
I don't know about you, but I like the former. =)
In both these cases, I get no compile-time checking, no Intellisense, no IDE support - but the former case is much more expressive than the latter. | Dynamic dispatch is only one possible use of Reflection. There are many good reasons to interrogate a class for its structure, get information about that structure and visualize in some form or act on it in some way without ever dynamically accessing members. Reflection is here to stay. :)
If you want examples of the dynamic keyword, [here is a video from PDC](http://channel9.msdn.com/pdc2008/TL16/) of [the man himself](http://www.microsoft.com/presspass/exec/techfellow/hejlsberg/default.mspx) talking about it (and other stuff C# 4.0 related). | with dynamic, awkward reflection no more? | [
"",
"c#",
"dynamic",
"reflection",
"runtime",
""
] |
I have a string with symbols like this:
```
'
```
That's an apostrophe apparently.
I tried saxutils.unescape() without any luck and tried urllib.unquote()
How can I decode this? Thanks! | Check out [this question](https://stackoverflow.com/questions/275174/). What you're looking for is "html entity decoding". Typically, you'll find a function named something like "htmldecode" that will do what you want. Both Django and Cheetah provide such functions as does BeautifulSoup.
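(A hedged aside that the answers here predate: on Python 3, the standard library covers this directly via `html.unescape`, which handles named, decimal, and hex entities:)

```python
import html

# Python 3 standard library: decodes named, decimal, and hex entities.
print(html.unescape("&#x27;"))        # -> '
print(html.unescape("l&#39;eau"))     # -> l'eau
```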
The other answer will work just great if you don't want to use a library and all the entities are numeric. | Try this: (found it [here](http://github.com/sku/python-twitter-ircbot/blob/321d94e0e40d0acc92f5bf57d126b57369da70de/html_decode.py))
```
from htmlentitydefs import name2codepoint as n2cp
import re
def decode_htmlentities(string):
"""
Decode HTML entities–hex, decimal, or named–in a string
@see http://snippets.dzone.com/posts/show/4569
>>> u = u'E tu vivrai nel terrore - L'aldilà (1981)'
>>> print decode_htmlentities(u).encode('UTF-8')
E tu vivrai nel terrore - L'aldilà (1981)
>>> print decode_htmlentities("l'eau")
l'eau
>>> print decode_htmlentities("foo < bar")
foo < bar
"""
def substitute_entity(match):
ent = match.group(3)
if match.group(1) == "#":
# decoding by number
if match.group(2) == '':
# number is in decimal
return unichr(int(ent))
elif match.group(2) == 'x':
# number is in hex
return unichr(int('0x'+ent, 16))
else:
# they were using a name
cp = n2cp.get(ent)
if cp: return unichr(cp)
else: return match.group()
entity_re = re.compile(r'&(#?)(x?)(\w+);')
return entity_re.subn(substitute_entity, string)[0]
``` | How to unescape apostrophes and such in Python? | [
"",
"python",
"html",
"django",
"html-entities",
""
] |
I have been reviewing some code that looks like this:
```
class A; // defined somewhere else, has both default constructor and A(int _int) defined
class B
{
public:
B(); // empty
A a;
};
int main()
{
B* b;
b = new B();
b->a(myInt); // here, calling the A(int _int) constructor,
//but default constructor should already have been called
}
```
Does this work? Calling a specific constructor after the default has already been called? | That code does not call a's constructor. It calls `A::operator()(int)`.
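(The same trap exists in other languages; as a cross-language aside, Python's equivalent of C++'s operator() is `__call__`, which is likewise completely separate from the constructor:)

```python
# Illustration: obj(arg) invokes __call__, not the constructor (__init__).
class A:
    def __init__(self, n=0):
        self.n = n          # runs once, at construction
    def __call__(self, n):
        return "called with {}".format(n)

a = A()        # constructor runs here
print(a(42))   # -> called with 42   (__call__, not a second construction)
```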
But *if* you explicitly call a constructor on an object that has already been constructed, you're well into undefined behavior-land. It may seem to work in practice, but there is no guarantee that it'll do what you expect. | you can make another constructor in Class B
```
B(int _int) : a(_int) {}
```
In that case, when you write
```
b = new B(myInt);
```
the above code will not delay the construction of your class A member.
You don't need to call b->a(myInt). | Delayed constructor in C++ | [
"",
"c++",
"constructor",
"default",
""
] |
I am using MySQL and I have a table with an index that is used as a foreign key in many other tables. I want to change the data type of the index (from signed to unsigned integer) , what is the best way to do this?
I tried altering the data type on the index field, but that fails because it is being used as a foreign key for other tables. I tried altering the data type on one of the foreign keys, but that failed because it didn't match the data type of the index.
I suppose that I could manually drop all of the foreign key constraints, change the data types and add the constraints back, but this would be a lot of work because I have a lot of tables using this index as a foreign key. Is there a way to turn off foreign key constraints temporarily while making a change? Also, is there a way to get a list of all the fields referencing the index as a foreign key?
**Update:**
I tried modifying the one foreign key after turning off foreign key checks, but it doesn't seem to be turning off the checks:
```
SET foreign_key_checks = 0;
ALTER TABLE `escolaterrafir`.`t23_aluno` MODIFY COLUMN `a21_saida_id` INTEGER DEFAULT NULL;
```
Here's the error:
```
------------------------
LATEST FOREIGN KEY ERROR
------------------------
090506 11:57:34 Error in foreign key constraint of table escolaterrafir/t23_aluno:
there is no index in the table which would contain
the columns as the first columns, or the data types in the
table do not match to the ones in the referenced table
or one of the ON ... SET NULL columns is declared NOT NULL. Constraint:
,
CONSTRAINT FK_t23_aluno_8 FOREIGN KEY (a21_saida_id) REFERENCES t21_turma (A21_ID)
```
Definition of the index table:
```
DROP TABLE IF EXISTS `escolaterrafir`.`t21_turma`;
CREATE TABLE `escolaterrafir`.`t21_turma` (
`A21_ID` int(10) unsigned NOT NULL auto_increment,
...
) ENGINE=InnoDB AUTO_INCREMENT=51 DEFAULT CHARSET=latin1;
```
and the table that has the foreign key that points to it:
```
DROP TABLE IF EXISTS `escolaterrafir`.`t23_aluno`;
CREATE TABLE `escolaterrafir`.`t23_aluno` (
...
`a21_saida_id` int(10) unsigned default NULL,
...
KEY `Index_7` (`a23_id_pedagogica`),
...
CONSTRAINT `FK_t23_aluno_8` FOREIGN KEY (`a21_saida_id`) REFERENCES `t21_turma` (`A21_ID`)
) ENGINE=InnoDB AUTO_INCREMENT=387 DEFAULT CHARSET=latin1;
``` | To answer my own question, I could not discover a simpler way to do this. I ended up dropping all the foreign key constraints, changing the field types and then adding all the foreign key constraints back.
As R. Bemrose noted, using `SET foreign_key_checks = 0;` only helps when adding or changing data, but doesn't allow `ALTER TABLE` commands that would break foreign key constraints. | Here is my small contribution to this thread. Thanks to Daniel Schneller for inspiration and giving me a huge part of the solution!
```
set group_concat_max_len = 2048;
set @table_name = "YourTableName";
set @change = "bigint unsigned";
select distinct table_name,
column_name,
constraint_name,
referenced_table_name,
referenced_column_name,
CONCAT(
GROUP_CONCAT('ALTER TABLE ',table_name,' DROP FOREIGN KEY ',constraint_name SEPARATOR ';'),
';',
GROUP_CONCAT('ALTER TABLE `',table_name,'` CHANGE `',column_name,'` `',column_name,'` ',@change SEPARATOR ';'),
';',
CONCAT('ALTER TABLE `',@table_name,'` CHANGE `',referenced_column_name,'` `',referenced_column_name,'` ',@change),
';',
GROUP_CONCAT('ALTER TABLE `',table_name,'` ADD CONSTRAINT `',constraint_name,'` FOREIGN KEY(',column_name,') REFERENCES ',referenced_table_name,'(',referenced_column_name,')' SEPARATOR ';')
) as query
from INFORMATION_SCHEMA.key_column_usage
where referenced_table_name is not null
and referenced_column_name is not null
and referenced_table_name = @table_name
group by referenced_table_name
```
By setting @table\_name and @change you can generate a query. @table\_name should be a table name of the table with the primary key (it will look for the tables that uses that column as a foreign key) and change its type to @change.
I had to change a few tables like that, so that worked like a charm. I just had to change @table\_name and then perform a query. | MySQL change type of foreign key | [
"",
"sql",
"mysql",
"foreign-keys",
""
] |
Ok, so I have a function that takes the path to an image file and is supposed to return the binary data as a string. The caller then inserts the returned string into the inner text of an XML element.
Is this correct?:
```
string dataAsString = "";
using (StreamReader sr = new StreamReader(new FileStream(mergedFile, FileMode.Open, FileAccess.Read, FileShare.Read, 2048, FileOptions.DeleteOnClose)))
dataAsString = sr.ReadToEnd();
return dataAsString;
```
This returns something that looks like it might legitimately be binary data but if I cut and paste the contents out of the target file, paste it into a new test tif file and then attempt to open the tif image, it is unhappy with me... so I suspect something is wrong with the way I'm reading/writing the data.
Must I really do something like this?
```
using (BinaryReader br = new BinaryReader(new FileStream(mergedFile, FileMode.Open, FileAccess.Read, FileShare.Read, 1024, FileOptions.None)))
{
int blockReadSz = 2048;
int bytesToRead = (int)br.BaseStream.Length;
int bytesRead = 0;
byte[] data = new byte[bytesToRead];
while (bytesRead < bytesToRead)
bytesRead += br.Read(data, bytesRead, blockReadSz);
}
```
And if so, how do I get a string out of the byte[] once I'm done reading the file in?
Thanks for any help! :D | What you are asking for doesn't really make sense. You can't read binary data into a string.
You can easily read the data into a byte array without using a `BinaryReader`:
```
byte[] data = File.ReadAllBytes(mergedFile);
```
If you want to put the data in an XML document, you have to represent it as text somehow, for example using base64 encoding:
```
string text = Convert.ToBase64String(data);
```
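(The same round trip works in any language; a small Python sketch of base64-encoding arbitrary bytes so they are safe inside XML text:)

```python
import base64

# Arbitrary binary data -> base64 text (safe inside XML) -> same bytes back.
data = bytes([0x49, 0x49, 0x2A, 0x00, 0xFF])  # e.g. a little-endian TIFF header plus one byte
text = base64.b64encode(data).decode("ascii")
assert base64.b64decode(text) == data
print(text)
```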
You can't just decode the data into a string, as it's not a string that was encoded in the first place. Whatever encoding you use, the binary data can always contain byte sequences that don't represent anything that would be produced by encoding text. | The `StreamReader` is going to read the file as if it were text, which it isn't. So yes, you'll have to use the `BinaryReader` or `FileStream` to read your data. In fact the `BinaryReader` is redundant here. The `FileStream` will work on its own. Use `FileStream.Read`. Since your file appears short enough to put into a string, the following code should work for you:
```
using (FileStream fs = new FileStream(mergedFile, FileMode.Open, FileAccess.Read))
{
int length = (int) fs.Length;
byte[] data = new byte[length];
fs.Read(data, 0, length);
// convert to string here... TBD
}
```
Next you want to convert your binary data into a string. This depends on how you want to represent this string. If it's ASCII (and you want each byte to convert into a char), you would do something like this:
```
string myString = System.Text.Encoding.ASCII.GetString(data);
``` | Binary Data vs Character Data confusion | [
"",
"c#",
""
] |
I run iPlanet's Java application server, something in it is loading `commons-logging-1.0.4.jar`.
That's fine until one of my applications calls `AuthSSLProtocolSocketFactory` which is another apache library that also uses `commons-logging`.
I put the jar on the jvm classpath and get this error:
> `Invalid class loader hierarchy. You have more than one version of 'org.apache.commons.logging.Log' visible, which is not allowed.
> (Caused by org.apache.commons.logging.LogConfigurationException: Invalid class loader hierarchy....`
It seems that `commons-logger` doesn't like having two instances of itself loaded in different classloaders. I assume the application server has its own classloader that's loading it the first time (Although I can't find any appserver configuration that mentions it) so when my application goes to load it a second time it throws that exception.
I can't change the web server, and I can't change the apache library. Suggestions? | Are you putting commons logging in your classpath explicitly? You said jvm classpath, so I assume you are specifying it on the command line when you start iPlanet. That's not the recommended way to load jars in J2EE apps.
The easiest thing is to just let the Apache library use the commons logging jar that comes with iPlanet. Don't put commons-logging.jar in your WEB-INF/lib dir or in any classpath setting and the iPlanet one should be picked up automatically. | Take a look at [SLF4J](http://www.slf4j.org).
Additionally, <http://www.qos.ch/logging/classloader.jsp> will help. | How can I get around this invalid classloader hierarchy? | [
"",
"java",
"apache",
"logging",
"classloader",
""
] |
I've got a function which fills an array of type sbyte[], and I need to pass this array to another function which accepts a parameter of type byte[].
Can I convert it nicely and quickly, without copying all the data or using `unsafe` magic? | Yes, you can.
Since both `byte` and `sbyte` have the same binary representation there's no need to copy the data.
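(That "same binary representation" claim is easy to check from any language; for instance, a quick Python illustration using the struct module:)

```python
import struct

# The bit patterns are identical; only the interpretation changes.
signed = [-2, -1, 0, 1, 2]
raw = struct.pack("5b", *signed)            # pack as signed bytes
unsigned = list(struct.unpack("5B", raw))   # reread the same bytes unsigned
print(unsigned)  # -> [254, 255, 0, 1, 2]
```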
Just do a cast to Array, then cast it to `byte[]` and it'll be enough.
```
sbyte[] signed = { -2, -1, 0, 1, 2 };
byte[] unsigned = (byte[]) (Array)signed;
``` | You will have to copy the data (only reference-type arrays are covariant) - but we can try to do it efficiently; `Buffer.BlockCopy` seems to work:
```
sbyte[] signed = { -2, -1, 0, 1, 2 };
byte[] unsigned = new byte[signed.Length];
Buffer.BlockCopy(signed, 0, unsigned, 0, signed.Length);
```
If it was a reference-type, you can just cast the reference without duplicating the array:
```
Foo[] arr = { new Foo(), new Foo() };
IFoo[] iarr = (IFoo[])arr;
Console.WriteLine(ReferenceEquals(arr, iarr)); // true
``` | How to convert a sbyte[] to byte[] in C#? | [
"",
"c#",
"arrays",
""
] |
My goal is to ensure that an array allocated in java is allocated across contiguous physical memory. The issue that I've run into is that the pages an array is allocated across tend not to be contiguous in physical memory, unless I allocate a really large array.
My questions are:
* Why does a really large array ensure pages which are contiguous in physical memory?
* Is there any way to ensure an array is allocated across contiguous physical memory that doesn't involve making the array really large?
* How can I tell what page or physical address a Java object/array exists in, without measuring cache hits/cache misses?
I'm not looking for answers asking why I am doing this in java. I understand that C would "solve my problem", and that I'm going against the fundamental nature of java. Nevertheless I have a good reason for doing this.
The answers need not be guaranteed to work all the time. I am looking for answers that work most of the time. Extra points for creative, out-of-the-box answers that no reasonable Java programmer would ever write. It's OK to be platform specific(x86 32-bit 64-bit). | Given that the garbage collector moves objects around in (logical) memory, I think you are going to be out of luck.
About the best you could do is use [ByteBuffer.allocateDirect](http://java.sun.com/javase/6/docs/api/java/nio/ByteBuffer.html#allocateDirect(int)). That will (typically) not get moved around (logical) memory by the GC, but it may be moved in physical memory or even paged out to disc. If you want any better guarantees, you'll have to hit the OS.
Having said that, if you can set the page size to be as big as your heap, then all arrays will necessarily be physically contiguous (or swapped out). | No. Physically contiguous memory requires direct interaction with the OS. Most applications, the JVM included, only get virtually contiguous addresses. And a JVM cannot give you what it doesn't get from the OS.
Besides, why would you want it? If you're setting up DMA transfers, you probably are using techniques besides Java anyway.
**Bit of background:**
Physical memory in a modern PC is typically a flexible amount, on replaceable DIMM modules. Each byte of it has a physical address, so the Operating System during boot determines which physical addresses are available. It turns out applications are better off by not using these addresses directly. Instead, all modern CPUs (and their caches) use virtual addresses. There is a mapping table to physical addresses, but this need not be complete - swap to disk is enabled by the use of virtual addresses not mapped to physical addresses. Another level of flexibility is gained from having one table per process, with incomplete mappings. If process A has a virtual address that maps to physical address X, but process B doesn't, then there is no way that process B can write to physical address X, and we can consider that memory to be exclusive to process A. Obviously for this to be safe, the OS has to protect access to the mapping table, but all modern OSes do.
The mapping table works at the page level. A page, or contiguous subset of physical addresses, is mapped to a contiguous subset of virtual addresses. The tradeoff between overhead and granularity has resulted in 4KB pages being a common page size. But as each page has its own mapping, one cannot assume contiguity beyond that page size. In particular, when pages are evicted from physical memory, swapped to disk, and restored, it's quite possible that they end up at a new physical memory address. The program doesn't notice, as the virtual address does not change, only the OS-managed mapping table does. | Contigious Pages/Physical Memory in Java | [
"",
"java",
"memory-management",
"operating-system",
"x86",
"paging",
""
] |
I have created a ClickOnce Solution with VS2008.
My main project references another project, which references COM dlls as "links".
When I build my solution in VS, the dlls from the other projects are copied into my bin folder, but when I publish and launch the project these files are not present in my Local Settings\Apps\2.0... folder.
I know that I can add each dll of the other project as a reference of my main project but I'd like a cleaner solution ...
Is it possible ? | First add those files to your project directly.
Then go to Application properties -> Publish -> Application Files.
Select "show all files" if you do not see the files you need, and then set their publish status to "Include", NOT "Include (Auto)". This is important or they will not be added.
Please note that if you update the files, you will have to remove them, add them again, and set their publish status again. This is a small bug.
See a previous question of mine for more info:
[ClickOnce - Overwriting content files](https://stackoverflow.com/questions/261789/clickonce-overwriting-content-files) | You need to open the "Application Files" dialog in the Publish tab of your project. From there you can set the publish type (Include, Prerequisite, etc.) of each of your files.
If it's an unmanaged DLL, you'll need to add the actual .dll as a file to your project and mark its build action as "Data". You can then set the Publish Type of that file to Include. | ClickOnce & References | [
"",
"c#",
"reference",
"clickonce",
""
] |
I refactored my foreach loop from this before:
```
foreach (KeyValuePair<string, string> param in paramsList)
{
XmlElement mainNode = xmlDoc.CreateElement("parameter");
mainNode.SetAttribute("name", param.Key);
mainNode.SetAttribute("value", param.Value);
rootNode.AppendChild(mainNode);
}
```
to this, using LINQ:
```
XmlElement mainNode = xmlDoc.CreateElement("parameter");
var selected = paramsList.AsEnumerable().Select(param => param).ToList();
selected.ForEach(x => (mainNode.SetAttribute("name", x.Key)));
selected.ForEach(x => (mainNode.SetAttribute("value", x.Value)));
rootNode.AppendChild(mainNode);
```
However, I know the section below can still be refactored into a single loop, but I don't know how. Please enlighten me.
```
selected.ForEach(x => (mainNode.SetAttribute("name", x.Key)));
selected.ForEach(x => (mainNode.SetAttribute("value", x.Value)));
``` | I think you can achieve the same results with:
```
paramsList.ToList().ForEach( e => {
XmlElement mainNode = xmlDoc.CreateElement("parameter");
mainNode.SetAttribute("name", e.Key);
mainNode.SetAttribute("value", e.Value);
rootNode.AppendChild(mainNode);
});
```
but, in this case, I would choose a simple `foreach`:
```
foreach (var e in paramsList)
{
XmlElement mainNode = xmlDoc.CreateElement("parameter");
mainNode.SetAttribute("name", e.Key);
mainNode.SetAttribute("value", e.Value);
rootNode.AppendChild(mainNode);
}
``` | maybe something like this
```
selected.ForEach(x =>
{
mainNode.SetAttribute("name", x.Key);
mainNode.SetAttribute("value", x.Value);
});
``` | LINQ Refactoring | [
"",
"c#",
".net",
"linq",
"linq-to-xml",
""
] |
Maybe I am understanding it all wrong, but as I understand it, the best way to protect your connection string is to encrypt it. Now I have these questions.
**Question 1:**
Will encryption work in a shared hosting environment?
**Question 2:**
If I have an FTP service, can any user upload an ASPX file and retrieve the connection string through the ConfigurationManager namespace? | **Question 1:**
What are you protecting your connection string against?
If it's against hackers and such, if `web.config` is available for them, your system has already been compromised and nothing you can do will help you.
If it's against casual observers (i.e. the coders), even something as simple as base64 encoding will help.
**Question 2:**
You shouldn't allow your users to upload files to a directory where aspx files can be executed - that way the users can do *anything* they want. | 1: I can't see why not.
2: Potentially, although you should be taking measures to prevent this. I find the best approach is to use windows authentication and set the identity of the app-pool to use a dedicated, locked-down, windows account in IIS. This way your connection string will never contain a password. | How to solidly protect web.config | [
"",
"c#",
"asp.net",
""
] |
I would like to split a text into single words using PHP. Do you have any idea how to achieve this?
My approach:
```
function tokenizer($text) {
$text = trim(strtolower($text));
$punctuation = '/[^a-z0-9äöüß-]/';
$result = preg_split($punctuation, $text, -1, PREG_SPLIT_NO_EMPTY);
for ($i = 0; $i < count($result); $i++) {
$result[$i] = trim($result[$i]);
}
return $result; // contains the single words
}
$text = 'This is an example text, it contains commas and full-stops. Exclamation marks, too! Question marks? All punctuation marks you know.';
print_r(tokenizer($text));
```
Is this a good approach? Do you have any idea for improvement?
Thanks in advance! | Use the class \p{P} which matches any unicode punctuation character, combined with the \s whitespace class.
```
$result = preg_split('/((^\p{P}+)|(\p{P}*\s+\p{P}*)|(\p{P}+$))/', $text, -1, PREG_SPLIT_NO_EMPTY);
```
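(A rough cross-language note: Python's built-in re module lacks `\p{P}`, but the same idea can be approximated with an explicit ASCII punctuation class:)

```python
import re
import string

# Approximation of the \p{P}-based split above, ASCII punctuation only:
# split on whitespace plus any punctuation hugging it, and on leading or
# trailing punctuation runs, so "don't" keeps its inner apostrophe.
punct = re.escape(string.punctuation)
pattern = r"(?:^[{0}]+)|(?:[{0}]*\s+[{0}]*)|(?:[{0}]+$)".format(punct)
words = [w for w in re.split(pattern, "he said 'ouch!' don't stop.") if w]
print(words)  # -> ['he', 'said', 'ouch', "don't", 'stop']
```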
This will split on a group of one or more whitespace characters, but also suck in any surrounding punctuation characters. It also matches punctuation characters at the beginning or end of the string. This discriminates cases such as "don't" and "he said 'ouch!'" | Tokenize - [strtok](https://www.php.net/strtok).
```
<?php
$text = 'This is an example text, it contains commas and full stops. Exclamation marks, too! Question marks? All punctuation marks you know.';
$delim = ' \n\t,.!?:;';
$tok = strtok($text, $delim);
while ($tok !== false) {
echo "Word=$tok<br />";
$tok = strtok($delim);
}
?>
``` | Create array of words from a string of text | [
"",
"php",
"split",
"cpu-word",
""
] |
I have some c# code that uses the Microsoft Scripting Control to evaluate some expressions:
```
using MSScriptControl; // references msscript.ocx
ScriptControlClass sc = new ScriptControlClass();
sc.Language = "VBScript";
sc.AllowUI = true;
try
{
Console.WriteLine(sc.Eval(txtEx.Text).ToString());
}
catch (Exception ex)
{
Console.WriteLine(ex.Message);
}
```
(txtEx is a simple text field)
Numerical expressions: "6+4", "cos(34)", "abs(-99)", "round(1.234, 2)" etc. are fine
Boolean expressions: "true or false", "1=2" are fine
But how can I evaluate a simple 'if'? I have tried "if(true, 2, 3)", "iif(true, 2, 3)", "if (true) then 2 else 3" and "if (true) then 2 else 3 endif"
Can anybody please help me to evaluate simple conditional expressions? Any help much appreciated!
RH | Try wrapping your IF-expression in a function
```
Function Test
    If True Then
        Test = True
    Else
        Test = False
    End If
End Function
```
Add the function to the control and then use Run
```
Result = ScriptControl.Run("Test")
```
(the code above is not tested, but something along that way should work)
Check out this link for some more info
<http://support.microsoft.com/kb/184740> | ```
using MSScriptControl; // references msscript.ocx
ScriptControlClass sc = new ScriptControlClass();
sc.Language = "VBScript";
sc.AllowUI = true;
// VBScript engine doesn’t have IIf function
// Adding wraper IIF function to script control.
// This IIF function will work just as VB6 IIf function.
sc.AddCode(@"Function IIF(Expression,TruePart,FalsePart)
If Expression Then
IIF=TruePart
Else
IIF=FalsePart
End IF
End Function");
try
{
//define x,y variable with value
sc.AddCode(@"x=5
y=6");
//test our IIF
Console.WriteLine(sc.Eval("IIF(x>y,x,y)").ToString());
}
catch (Exception ex)
{
Console.WriteLine(ex.Message);
}
``` | Use Microsoft Scripting Control to evaluate 'If' expressions (via c#) | [
"",
"c#",
".net",
"vbscript",
"expression",
""
] |
Can someone point me to the implementation of the sizeof operator in C++, along with some description of its implementation?
sizeof is one of the operators that cannot be overloaded.
So does that mean we cannot change its default behavior? | `sizeof` is not a real operator in C++. It is merely special syntax which inserts a constant equal to the size of the argument. `sizeof` doesn't need or have any runtime support.
**Edit:** do you want to know how to determine the size of a class/structure looking at its definition? The rules for this are part of the [ABI](http://en.wikipedia.org/wiki/Application_binary_interface), and compilers merely implement them. Basically the rules consist of
1. size and alignment definitions for primitive types;
2. structure, size and alignment of the various pointers;
3. rules for packing fields in structures;
4. rules about virtual table-related stuff (more esoteric).
However, ABIs are platform- and often vendor-specific, i.e. on x86 and (say) IA64 the size of `A` below will be different because IA64 does not permit unaligned data access.
```
struct A
{
char i ;
int j ;
} ;
assert (sizeof (A) == 5) ; // x86, MSVC #pragma pack(1)
assert (sizeof (A) == 8) ; // x86, MSVC default
assert (sizeof (A) == 16) ; // IA64
``` | [**http://en.wikipedia.org/wiki/Sizeof**](http://en.wikipedia.org/wiki/Sizeof)
Basically, to quote [Bjarne Stroustrup's C++ FAQ](http://www.research.att.com/~bs/bs_faq2.html#overload-dot):
> Sizeof cannot be overloaded because built-in operations, such as incrementing a pointer into an array implicitly depends on it. Consider:
```
X a[10];
X* p = &a[3];
X* q = &a[3];
p++; // p points to a[4]
// thus the integer value of p must be
// sizeof(X) larger than the integer value of q
```
> Thus, sizeof(X) could not be given a
> new and different meaning by the
> programmer without violating basic
> language rules. | How is the sizeof operator implemented in c++? | [
"",
"c++",
"sizeof",
""
] |
Does anyone know how, on Ubuntu 8.04, with PHP 5 and MySQL 5.0, to enable/install the mysqli package/extensions?
Preferably I'd like to preserve the existing installations, but, if necessary, I'll reinstall from scratch.
I realise it's not, technically, programming-related but, I think (at a stretch, maybe) it's programming-enabling? *hopes*
Thanks for any help you're able to provide. | You might have to install php with the mysqli option: apt-get install php5-mysqli | In Ubuntu Hardy (8.04) mysqli is part of the php5-mysql package along with the standard mysql library and pdo - see <http://packages.ubuntu.com/hardy/php5-mysql> for more info.
```
sudo apt-get install php5-mysql
``` | mysqli for php on Ubuntu 8.04 LAMP stack | [
"",
"php",
"mysqli",
"ubuntu-8.04",
""
] |
How can I implement an *enumeration type* (spelled `enum` in some languages) in Python? What is the common practice to get this functionality? | ```
class Materials:
Shaded, Shiny, Transparent, Matte = range(4)
>>> print Materials.Matte
3
```
## Update: For Python 3.4+:
As of Python 3.4+, you can now use `Enum` (or `IntEnum` for enums with `int` values) from the [enum](https://docs.python.org/3/library/enum.html) module. Use [`enum.auto`](https://docs.python.org/3/library/enum.html#enum.auto) to increment the values up automatically:
```
import enum
class Materials(enum.IntEnum):
Shaded = 1
Shiny = enum.auto()
Transparent = 3
Matte = enum.auto()
print(Materials.Shiny == 2) # True
print(Materials.Matte == 4) # True
``` | I've seen this pattern several times:
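As a footnote, the `enum` module also offers a functional API, which is handy when the member names come from data rather than source code. A quick sketch:

```python
import enum

# Functional API: build the enum from a name and a sequence of member names.
# Values default to 1, 2, 3, ... in declaration order.
Materials = enum.Enum('Materials', ['Shaded', 'Shiny', 'Transparent', 'Matte'])

assert Materials.Shaded.value == 1
assert Materials['Matte'] is Materials.Matte   # lookup by name
assert Materials(3) is Materials.Transparent   # lookup by value
```

Members are singletons, so identity checks (`is`) work for comparisons.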
```
>>> class Enumeration(object):
def __init__(self, names): # or *names, with no .split()
for number, name in enumerate(names.split()):
setattr(self, name, number)
>>> foo = Enumeration("bar baz quux")
>>> foo.quux
2
```
You can also just use class members, though you'll have to supply your own numbering:
```
>>> class Foo(object):
bar = 0
baz = 1
quux = 2
>>> Foo.quux
2
```
If you're looking for something more robust (sparse values, enum-specific exception, etc.), [try this recipe](http://code.activestate.com/recipes/67107/). | What's the common practice for enums in Python? | [
"",
"python",
"enums",
""
] |
Currently I have tables like: `Pages, Groups, GroupPage, Users, UserGroup`. With pickled sets I can implement the same thing with only 3 tables: `Pages, Groups, Users`.
`set` seems a natural choice for implementing ACL, as group and permission related operations can be expressed very naturally with sets. If I store the allow/deny lists as pickled sets, it can eliminate a few intermediate tables for many-to-many relationships and allow permission editing without many database operations.
If human readability is important, I can always use json instead of cPickle for serialization and use `set` when manipulating the permission list in Python. It is highly unlikely that permissions will ever be edited directly using SQL. So is it a good design idea?
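For illustration, roughly what I have in mind for the JSON variant (just a sketch; the function names are made up):

```python
import json

def dump_permissions(perms):
    # set -> sorted list, so the stored column value is stable and human-readable
    return json.dumps(sorted(perms))

def load_permissions(raw):
    # stored list -> set, for natural allow/deny arithmetic in Python
    return set(json.loads(raw))

allow = {1, 2, 3}   # primary keys of allowed "resource" rows
deny = {3}
stored = dump_permissions(allow - deny)
assert load_permissions(stored) == {1, 2}
```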
We're using SQLAlchemy as ORM, so it's likely to be implemented with `PickleType` column. I'm not planning to store the whole pickled "resource" recordset, only the `set` object made out of "resource" primary key values. | If you're going to pickle sets, you should find a good object database (like [ZODB](http://wiki.zope.org/ZODB/FrontPage)). In a pure-relational world, your sets are stored as BLOBS, which works out well. Trying to pickle sets in an ORM situation may lead to confusing problems with the ORM mappings, since they mostly assume purely relational mappings without any BLOBs that must be decoded.
Sets and other first-class objects are really what belongs in a database. The ORM is a hack because some folks think relational databases are "better", so we hack in a mapping layer.
Go with an object database and you'll find that things are often much smoother.
---
**Edit**
SQLAlchemy has its own serializer.
<http://www.sqlalchemy.org/docs/05/reference/ext/serializer.html>
This is neither pickle or cPickle. However, because it needs to be extensible, it will behave like pickle. Which -- for your purposes -- will be as fast as you need. You won't be deserializing ACL's all the time. | You need to consider what it is that a DBMS provides you with, and which of those features you'll need to reimplement.
The issue of concurrency is a big one. There are a few race conditions to be considered (such as multiple writes taking place in different threads and processes and overwriting the new data), performance issues (write policy? What if your process crashes and you lose your data?), memory issues (how big are your permission sets? Will it all fit in RAM?).
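For example, here is a hypothetical lost-update sequence with two writers sharing one serialized set (a toy simulation, not real database code):

```python
import json

store = {"acl": json.dumps([1, 2])}  # toy stand-in for the database column

def read_acl():
    return set(json.loads(store["acl"]))

def write_acl(perms):
    store["acl"] = json.dumps(sorted(perms))

# Two writers read the same snapshot...
a = read_acl()
b = read_acl()
a.add(3)      # writer A grants permission 3
b.add(4)      # writer B grants permission 4
write_acl(a)
write_acl(b)  # B's write silently discards A's change
assert read_acl() == {1, 2, 4}  # permission 3 was lost
```

A row-per-permission schema avoids this because each grant is an independent INSERT.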
If you have enough memory and you don't have to worry about concurrency, then your solution might be a good one. Otherwise I'd stick with a database; it takes care of those problems for you, and lots of work has gone into databases to make sure that they always take your data from one consistent state to another.
"",
"python",
"set",
"acl",
"pickle",
""
] |
I have a standard Page with my ListView control on the page, and the Pager works; however, in order to move to the next list of items I am required to click on the pager link twice before it actually moves to the next set of items.
The code for the pager is:
```
<asp:ListView ID="lv_LostCard" runat="server" DataKeyNames="request_id" EnableViewState="false">
<LayoutTemplate>
<table width="550" border="1" class="table">
<asp:PlaceHolder ID="itemPlaceholder" runat="server" />
</table>
<asp:DataPager ID="lv_Books_Pager" runat="server" PageSize="10">
<Fields>
<asp:NextPreviousPagerField ShowFirstPageButton="false" ShowPreviousPageButton="true" ShowNextPageButton="false" />
<asp:NumericPagerField />
<asp:NextPreviousPagerField ShowFirstPageButton="false" ShowPreviousPageButton="false" ShowNextPageButton="true" ShowLastPageButton="false" />
</Fields>
</asp:DataPager>
</LayoutTemplate>
<ItemTemplate>
</ItemTemplate>
</asp:ListView>
```
and the code behind is:

```
protected void Page_Load(object sender, EventArgs e)
{
    if (!Page.IsPostBack)
    {
        getLostCardsList();
    }
}

protected void getLostCardsList()
{
    using (LostCardsManagementDataContext LostCard = new LostCardsManagementDataContext())
    {
        var getLostCardsList = from lc in LostCard.lostcard_request_cards
                               select lc;
        lv_LostCard.DataSource = getLostCardsList;
        lv_LostCard.DataBind();
    }
}
```
Can somebody tell me what is happening and how to fix it?
Thanks in advance | I have problems with listview sincerely.
I found a solution related to your question which seems there is no other way to fix that.You need to call OnPreRender method to rebind your source to listview.
```
protected void listview_PreRender(object sender, EventArgs e)
{
getLostCardsList();//your method for binding
}
```
Be advised: **PreRender events are called before your page is rendered**. More clearly, if your page has a postback, it will render again. That means you need to store your data in a server collection (i.e. Session).
```
private void listview_PagePropertiesChanged(object sender, System.EventArgs e)
{
listview.DataBind();
}
``` | Strange Pager behaviour in ListView | [
"",
"c#",
".net",
"asp.net",
""
] |
When should BOOL and bool be used in C++ and why?
I think using bool is cleaner and more portable because it's a built-in type. But BOOL is unavoidable when you interact with legacy code/C code, or when doing interop from .NET with C code/Windows API.
So my policy is:
Use bool inside C++.

Use BOOL when talking to the outside world, e.g., exported functions in a Windows DLL.
Is there a definitive explanation of when to use one over the other? | Matthew Wilson discusses `BOOL`, `bool`, and similar in section 13.4.2 of [*Imperfect C++*](http://my.safaribooksonline.com/0321228774/ch13lev1sec4). Mixing the two can be problematic, since they generally have different sizes (and so pointers and references aren't interchangeable), and since `bool` isn't guaranteed to have any particular size. Trying to use typedefs or conditional compilating to smooth over the differences between `BOOL` and `bool` or trying to allow for a single Boolean type to work in both C and C++ is even worse:
```
#if defined(__cplusplus) || \
defined(bool) /* for C compilation with C99 bool (macro) */
typedef bool bool_t;
#else
typedef BOOL bool_t;
#endif /* __cplusplus */
```
This approach means that a function's return type can differ depending on which language calls it; Wilson explains that he's seen more than one bug in his own code and others' that results from this. He concludes:
> The solution to this imperfection is, as it so often is, abstinence. I never use `bool` for anything that can possibly be accessed across multiple link units—dynamic/static libraries, supplied object files—which basically means not in functions or classes that appear outside of header files. The practical answer, such as it is, is to use a pseudo-Boolean type, which is the size of `int`.
In short, he would agree with your approach. | If BOOL is some sort of integral type, and it always is, and BOOL is defined so that it works right, the standard conversions will automatically get it right. You can't quite use them interchangeably, but you can get close.
Use BOOL at the interface, where you have to talk to the Win32 API or whatever. Use bool everywhere else. | When should BOOL and bool be used in C++? | [
"",
"c++",
"windows",
""
] |
I'm trying to understand what it is about the following code that **is** perfectly happy with loading a text file and displaying its contents, but **isn't** happy with loading a BitmapImage and displaying it on a timer.Elapsed event handler.
I understand it has to do with the UI thread.
But why is this not a problem for the textfile example?
First, the XAML:
```
<Window x:Class="WpfApplication7.Window1"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
Title="Window1" Height="300" Width="300">
<StackPanel Orientation="Vertical">
<TextBlock Text="{Binding Path=Message, UpdateSourceTrigger=PropertyChanged}" FontSize="20" Height="40" Width="300" Background="AliceBlue" />
<Image Source="{Binding Path=Image,UpdateSourceTrigger=PropertyChanged}" Height="100" Width="100"/>
</StackPanel>
</Window>
```
and the C#, which raises a PropertyChangedEventHandler on a timer:
```
using System;
using System.ComponentModel;
using System.Timers;
using System.Windows;
using System.IO;
using System.Windows.Threading;
using System.Windows.Media.Imaging;
```
and
```
namespace WpfApplication7
{
public partial class Window1 : Window, INotifyPropertyChanged
{
public BitmapImage Image { get; private set; }
public string Message { get; set; }
public event PropertyChangedEventHandler PropertyChanged = delegate { };
private Timer timer;
public Window1()
{
InitializeComponent();
this.DataContext = this;
this.timer = new Timer { Enabled = true, Interval = 100 };
this.timer.Elapsed += (s, e) =>
{
//---happy loading from text file. UI updates :)
this.Message = File.ReadAllText(@"c:\windows\win.ini").Substring(0, 20);
PropertyChanged(this, new PropertyChangedEventArgs("Message"));
//---not happy loading a BitmapImage. PropertyChanged unhappy :(
// (Don't make me have to: ! )
//Application.Current.Dispatcher.Invoke(
//DispatcherPriority.Send, new Action(delegate
//{
this.Image = new BitmapImage(new Uri(@"C:\WINDOWS\Web\Wallpaper\Ascent.jpg"));
//Edit --Ah hah, thanks Daniel !
// DependencyObject-> Freezable-> Animatable->
// ImageSource-> BitmapSource-> BitmapImage
this.Image.Freeze(); //<--- this will fix it, no need for Dispatcher
//Without Dispatcher or Freeze() ... right here:
//"The calling thread cannot access this object because a different thread owns it."
PropertyChanged(this, new PropertyChangedEventArgs("Image"));
//}));
};
}
}
}
```
I know I can fix this with a "Application.Current.Dispatcher.Invoke". So fixing it isn't the problem. Not understanding why I should have to is the problem :)
*Similar* questions
* [asynchronously loading a BitmapImage in C# using WPF](https://stackoverflow.com/questions/36748/asynchronously-loading-a-bitmapimage-in-c-using-wpf)
* [Making sure OnPropertyChanged() is called on UI thread in MVVM WPF app](https://stackoverflow.com/questions/590590/making-sure-onpropertychanged-is-called-on-ui-thread-in-mvvm-wpf-app) | I think the critical difference between the two scenarios is that BitmapImage is a dependency object, which means it has the concept of an "owning" thread (the thread that created the object). When your main UI thread tries to access the BitmapImage object created on (and owned by) another thread...boom!
Strings, on the other hand, do not have a concept of an "owning" thread. | For those of us (like me) who came here looking to see how to actually pass bitmaps across threads, check out
<https://stackoverflow.com/a/2672929/745913>
The answer is to `Freeze()` the bitmap first. | WPF Event: BitmapImage PropertyChanged: "Calling Thread Cannot access" | [
"",
"c#",
"wpf",
"xaml",
""
] |
I am studying computer science and we have to do a programming project which must be strongly related to XML and using XSD and XSLT or XQuery/XPath at least. Because I like C# I'd like to do it in this language, but I could use another if anyone has another idea.
My Idea is now to code some kind of appointment book. I imagine that all appointments for the week are shown as HTML and you can enter for each day appointment notes in the textarea for this day.
Now my question: How can I retrieve the data entered in the text areas? The application is an offline one, so I have no web server receiving a GET request containing the entered data. Is it possible to read the current HTML DOM from memory, with all its entered values, and then transform it into an XML format for persistent storage, from which it could be read in later?
Or is this idea totally stupid?
How else can I put all those XML technologies in one app? | If you want to show UI Generation from XSLT, the web page approach is easiest.
More impressive is generation of XAML from XSLT -> windows app (WPF).
Download [Visual Web Developer (FREE)](http://www.microsoft.com/express/vwd/#webInstall)
or
[Visual C#](http://www.microsoft.com/express/vcsharp/Default.aspx) | Why does it have to be Web based?
You can use those technologies in a Windows Application. | How can I read data from a textbox with no web server, then save it in an XML file? | [
"",
"c#",
"xml",
"dom",
"projects",
""
] |
I have 14 LINQ queries to resolve in one method. None of them have a base query that I could hang them from as subqueries and then store the results as properties of an anonymous type instance.
Rather than making 14 separate calls to the database, how can I ensure that they are all called in the same operation?
**UPDATE**
I ended up using a dodgy hack which did the trick. I know that there will ALWAYS be at least one user in the user table, so I ended up using:
```
var data = (from tmp in DataContext.Users
select new {
Property1 = (from...),
Property2 = (from...),
PropertyN = (from...),
}).First();
```
**UPDATE 2**
It has been implied in one of the answers that doing this may create MARS (Multiple Active Result Set) errors. What are these errors (I have never seen one), why do they occur, and is there anything wrong with the whole premise of this line of questioning? i.e. Am I flawed in my assertion that asking the database to return everything in one go is going to be quicker/more efficient than doing fourteen wholly separate LINQ queries?
**UPDATE 3**
I think my approach is redundant and from a pragmatic perspective, should be handled differently, either via a stored procedure or some sort of delayed/mixed approach. | You're still making 14 seperate call's to the database through each of your sub queries, your just doing it at a single instance when your data object is called.
Edit:
The MARS error occurs when you have multiple open data readers for a connection. By default with SQl 2005 this is turned off. in your query, while all the SQL is being passed in one hit your still returning 14 datareaders. The datareader takes that connection exclusivly unless you tell the connection to allow multiple active result sets (MARS).
To resolve this you either need to preload each of your sub queries into a list and run your subqueries off of that list, or you need to set the MutlipleActiveResultSet attribute of the connection string = true.
```
<add name="Name" connectionString="Data Source=<server>;Initial Catalog=<database>;Integrated Security=True;MultipleActiveResultSets=true" providerName="System.Data.SqlClient"/>
``` | I'm not completely sure about your queries but you could encapsulate the query logic in a stored procedure drag it with LINQ to SQL data context designer to use.
If you need the results independently and you really don't want the round trip to database server, I think you should fall back to `SqlDataReader` and use `NextResult` by calling a procedure that retrieves the whole data. | How do I resolve multiple linq queries in one operation without using a base query? | [
"",
"c#",
".net",
"linq",
"linq-to-sql",
""
] |
I need to write a stored procedure to update one of a set of similar columns. The columns are named 'UserField1', 'UserField2' etc. I was hoping to pass a parameter to the SPROC which would set the column to be updated. However, I can't seem to get the code correct. Here's a simplified example of what I tried (which gets me an 'Incorrect syntax' message):
```
create procedure UpdateUserField
(@UserFieldNumber int, @UserFieldNewValue int)
as
update MyTable set
case @UserFieldNumber
when 1 then UserField1
when 2 then UserField2
end
= @UserFieldNewValue
``` | What about using a number of IF's?
```
CREATE PROCEDURE UpdateUserField
(
@UserFieldNumber int,
@UserFieldNewValue int
) AS
IF @UserFieldNumber=1
BEGIN
UPDATE MyTable SET UserField1 = @UserFieldNewValue
END
IF @UserFieldNumber=2
BEGIN
UPDATE MyTable SET UserField2 = @UserFieldNewValue
END
```
Alternatively you can build dynamic SQL in an exec
```
CREATE PROCEDURE UpdateUserField
(
@UserFieldNumber int,
@UserFieldNewValue int
) AS
EXEC('UPDATE MyTable SET UserField' + CONVERT(varchar(10), @UserFieldNumber) + ' = ' + CONVERT(varchar(10), @UserFieldNewValue))
```
Beware of SQL injection if you do this, though. With ints you won't have a problem; with anything else you may need to consider the risks.
```
UPDATE MyTable
SET UserField1 = CASE WHEN @UserFieldNumber = 1
THEN @UserFieldNewValue ELSE UserField1 END,
UserField2 = CASE WHEN @UserFieldNumber = 2
THEN @UserFieldNewValue ELSE UserField2 END
``` | TSQL Parameterized SPROC question | [
"",
"sql",
"sql-server",
"sql-server-2005",
"t-sql",
""
] |
I created some simple code here just to play around with C#. It has 3 buttons and 1 panel. If you click on the 2nd or 3rd button the panel height changes. Is it also possible to change the color?

For example, if I click on the 2nd button, I would like it to turn yellow while the height changes as well, and the same with the 3rd button.
```
public partial class Form1 : Form
{
public int heightPanel;
public Form1()
{
InitializeComponent();
heightPanel = panel1.Height;
}
private void button1_Click(object sender, EventArgs e)
{
panel1.Height = heightPanel;
}
private void button2_Click(object sender, EventArgs e)
{
panel1.Height = this.Height/2;
}
private void button3_Click(object sender, EventArgs e)
{
panel1.Height = this.Height - 150;
}
}
```
I have an idea but I don't know where to put this. I think it would be something like this:
```
panel1.Height=this.BackColor.ToString();
```
Any inputs?
## Response to reply
Yes, I would like to retain the 3 colors if I click on any of the buttons. I'm not sure if it's possible. For example:
```
my button1 =pink
button2=yellow
button3=green
```
If I click on button 1 I'll see the pink color, and if I click on button 2 I'll see pink and yellow.
Is that possible? | All you'd need to do would be to set the `BackColor` on a new line. For instance...
```
private void button2_Click(object sender, EventArgs e)
{
panel1.Height = this.Height/2;
panel1.BackColor = Color.Yellow;
}
private void button3_Click(object sender, EventArgs e)
{
panel1.Height = this.Height - 150;
panel1.BackColor = Color.Yellow;
}
``` | Simple as:
```
panel1.BackColor = Color.Red;
``` | changing panel colors | [
"",
"c#",
""
] |
I'm working on a query which returns numeric values (currency). Some of the values are whole numbers and are being displayed as 3, 5, 2 etc whilst other numbers are coming up like 2.52, 4.50 etc.
How can I force Oracle to always show the decimal places?
Thanks | ```
TO_CHAR(pAmount, '9,999,999.99');
```
<http://www.techonthenet.com/oracle/functions/to_char.php>
<http://www.ss64.com/orasyntax/to_char.html> | To enhance the answers already given, you can use:
* `TO_CHAR(your_value,'fm999.99')` to prevent leading spaces
`____3.45` becomes `3.45` (`_` indicates whitespace)
* `TO_CHAR(your_value,'fm990.99')` to force values less than `1` to show a leading zero
`.52` becomes `0.52`
* `TO_CHAR(your_value,'fm990.00')` to force 2 decimal places, even if `0`
`6.3` becomes `6.30`
* `(TO_CHAR(your_value,'fm990.00')||'%')` to add a percentage sign
`18.6` becomes `18.60%`
source: <https://community.oracle.com/thread/968373?start=0&tstart=0> | Always show decimal places in SQL? | [
"",
"sql",
"oracle",
"formatting",
""
] |
Any recommendations for software to allow **users** to edit a workflow representing a business process? Ideally .NET, but any technology (winform / asp.net / wpf / etc) would do.
I would need to be able to initialize / investigate the layout through code to map it to my system - either via an object model or a file in a known format.
I've considered:
* Microsoft Automatic Graph Layout - a bit scrappy, to be honest
* ILOG Diagram for .NET - very impressive looking, but a bit pricey for my needs
* Workflow Foundation (editor hosting) - too WF specific (rather than BPM)
To clarify - [this is the sort of thing I'm trying to do](http://www.ilog.com/image.cfm?name=BPMNEditor.png) (this is the ILOG tool in action) - although it doesn't have to be quite as feature-rich...
(edit - I think I'm going to plough forward with [Netron](https://stackoverflow.com/questions/830124#979973)) | There are a couple of [XPDL](http://en.wikipedia.org/wiki/XML_Process_Definition_Language) tools with graphical editors such as [Bonita ProEd](http://bonita.objectweb.org):
[](https://i.stack.imgur.com/EdtLD.jpg)
(source: [theserverside.com](http://www.theserverside.com/tt/articles/content/BonitaPart3/images/figure7.jpg))
or [Enhydra Shark](http://www.together.at/prod/workflow/tws):
[](https://i.stack.imgur.com/9umAP.jpg)
Don't know how user friendly they might be.
There is also [YAWL](http://yawlfoundation.org/), which has some pretty friendly looking tools. | [Skelta](http://www.skelta.com/) is an interesting piece of software. Though we ended up not purchasing because of their odd licensing and pricing model.
I'm very interested to see what you finally settle on to implement this, as we are facing this same challenge, and so far we have settled on rehosting the WF designer. | Workflow editing software required - recommendation | [
"",
"c#",
".net",
"workflow",
"business-process-management",
""
] |
What is the best way to set up a connection with MySQL's JDBC driver?

And how do I execute a simple statement?
Thank you. | The basic boilerplate for MySQL/JDBC goes something like this:
**Get the connection**:
```
Class.forName("com.mysql.jdbc.Driver").newInstance();
Connection conn = DriverManager.getConnection("jdbc:mysql://hostname/databaseName");
```
**Execute the statement**:
```
Statement stmt = conn.createStatement();
ResultSet rs = stmt.executeQuery("SELECT * from tableName");
while (rs.next()) {
System.out.println(rs.getString(1));
}
```
**Close the statement and connection**:
```
rs.close();
stmt.close();
conn.close();
```
You just need to make sure you have the driver installed and/or in your CLASSPATH. | This is the twenty first century - use a JPA (ORM) implementation. But if you insist on going back to the metal (at the risk of down votes) -
There are many ways of getting a JDBC connection from some driver. Using reflection with a hardwired class name is the commonest and perhaps most brain damaged. If you're going to hardwire a class name, you might as well get the benefits of normal code (compiler catches typos, no extraneous exceptions to deal with, easier to read, explicit dependencies, better tool support, etc).
Also get in to the habit of clearing up resources safely.
So:
```
public static void main(String[] args) throws SQLException {
Driver driver = new com.mysql.jdbc.Driver();
Connection connection = driver.connect(
"jdbc:mysql://mydatabase",
new java.util.Properties() {{
put("user", "fred");
}}
);
try {
PreparedStatement statement = connection.prepareStatement(
"SELECT insideLeg FROM user WHERE name=?"
);
try {
statement.setString(1, "jim");
ResultSet results = statement.executeQuery();
try {
if (results.next()) {
System.out.println("= "+results.getLong(1));
} else {
System.out.println("Missing.");
}
} finally {
results.close();
}
} finally {
statement.close();
}
} finally {
connection.close();
}
}
```
What a mess! And it doesn't even use transactions yet. Yes, use an ORM. They're very respectable these days.
You won't need to do all that for every single statement. You don't want to go around instantiating drivers every time. In particular the execute-around idiom is useful.
"",
"java",
"jdbc",
""
] |