| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
Is there a way in .NET 2.0 (C#) to serialize objects, as you do with XmlSerializer, in a simple/customizable human-readable format that looks, for instance, like [PXSL](http://community.moertel.com/pxsl/) or JSON?
I know that XML is human-readable too; I'm looking for something with less annoying redundancy, something you can output to the console as a result for the user.
|
To Serialize into JSON in .NET you do as follows:
```
// Requires: using System.Collections; using System.IO; using System.Text;
// and using System.Runtime.Serialization.Json;
public static string ToJson(IEnumerable collection)
{
    DataContractJsonSerializer ser = new DataContractJsonSerializer(collection.GetType());
    using (MemoryStream m = new MemoryStream())
    {
        // WriteObject emits UTF-8 JSON straight into the stream, so no
        // intermediate XmlDictionaryWriter is needed.
        ser.WriteObject(m, collection);
        return Encoding.UTF8.GetString(m.ToArray());
    }
}
```
The collection's items need to have the "DataContract" attribute, and each member you wish to serialize into the JSON must have the "DataMember" attribute.
It's possible that this only works in .NET 3.5, but there is an equally simple version for 2.0 as well...
|
I found exhaustive documentation here:
<http://pietschsoft.com/post/2008/02/NET-35-JSON-Serialization-using-the-DataContractJsonSerializer.aspx>
with this useful class (it supports generics):
```
using System.IO;
using System.Text;
using System.Runtime.Serialization;
using System.Runtime.Serialization.Json;

public class JSONHelper
{
    public static string Serialize<T>(T obj)
    {
        DataContractJsonSerializer serializer = new DataContractJsonSerializer(typeof(T));
        using (MemoryStream ms = new MemoryStream())
        {
            serializer.WriteObject(ms, obj);
            return Encoding.UTF8.GetString(ms.ToArray());
        }
    }

    public static T Deserialize<T>(string json)
    {
        DataContractJsonSerializer serializer = new DataContractJsonSerializer(typeof(T));
        using (MemoryStream ms = new MemoryStream(Encoding.UTF8.GetBytes(json)))
        {
            return (T)serializer.ReadObject(ms);
        }
    }
}
```
|
Serialize in a human readable text format
|
[
"",
"c#",
".net",
"serialization",
".net-2.0",
""
] |
When debugging in Internet Explorer, I first get an alert box with extremely limited if not useless information (sorry IE) and choose to debug it. After selecting yes, I get another option *every time* to choose between 'New instance of Microsoft script debugger' and 'New instance of Visual Studio'. I'm fed up with having to click the yes button again after having clicked it once already on the alert box.
Update: I found that you can disable the Microsoft Script Debugger from within its own options: just disable the JIT debugger under Tools -> Options -> JIT. This stops it appearing in the menu, but now I get the dialog box asking me which debugger to choose, and it only displays Visual Studio - WHY? If there's only one option and you've already asked me if I want to debug, why ask again?!?! Bleh.
Can you tell I'm getting sick of clicking, "yes" twice? Lol.
|
Apparently the problem happens if you do not uninstall the old (crappy) Microsoft Script Debugger before you install the newer Microsoft Script Editor.
You would think that all you need to do is uninstall the old debugger; however, according to a blog post (which I can't recall at the moment), if you uninstall the Microsoft Script Debugger at this point, it will mess up the MSE and you will have to reinstall the MSE again. I am on a tight schedule now, so I don't have time to fix the problem properly - just living with it as best I can.
|
If all other solutions fail, you can try another route: using a macro language (à la AutoHotkey or AutoIt) to dismiss all prompts with one key...
|
How to stop IE asking which debugger to choose when **trying** to debug?
|
[
"",
"javascript",
"visual-studio",
"debugging",
"internet-explorer",
""
] |
I have a script that needs to extract data temporarily to do extra operations on it, but then doesn't need to store it any further after the script has run. I currently have the data in question in a series of temporary local tables (CREATE TABLE #table), which are then dropped as their use is completed. I was considering switching to physical tables, treated in the same way (CREATE TABLE table), if there would be an improvement in the speed of the script for it (or other advantages, maybe?).
...So, is there a difference in performance, between temporary tables and physical tables? From what I'm reading, temporary tables are just physical tables that only the session running the script can look at (cutting down on locking issues).
EDIT: I should point out that I'm talking about physical tables vs. temporary tables. There is a lot of info available about temporary tables vs. table variables, e.g. [<http://sqlnerd.blogspot.com/2005/09/temp-tables-vs-table-variables.html>](http://sqlnerd.blogspot.com/2005/09/temp-tables-vs-table-variables.html).
|
Temporary tables are a big NO in SQL Server.
* They provoke query-plan recompilations, which is costly.
* Creating and dropping the table are also costly operations that you are adding to your process.
* If a large amount of data is going into the temporary table, your operations will be slow owing to the lack of indexes. You CAN create indexes on temporary tables, but I would never recommend a temporary table for anything with a large number of records.
Your other approach, creating and then dropping regular tables, just creates the same overhead.
Another approach: use existing tables, augmenting the rows with an additional column to differentiate which rows pertain to each user/session. This removes the burden of creating/dropping the tables, but then you will need to be paranoid about the code that generates the differentiating value, AND you will have to develop a way to clean up the table for those cases where a session ended prematurely and there are leftovers (rows that were not removed at the end of processing).
I recommend you rethink your processing strategy. Some alternatives are as easy as using correlated queries, derived tables or table variables. Take a look at: <http://www.sql-server-performance.com/articles/per/temp_tables_vs_variables_p1.aspx>
---
**Edit:** Both the approach of creating and dropping regular tables and the approach of reusing a regular table augmented with an additional field will generate query-plan recompilations, because the amount of data changed triggers a reevaluation of table statistics. Again, your best approach is to find alternate ways to process your data.
|
What kind of data operations are you doing, and how much data are you working with?
I would stick with the temporary tables - for large data sets, I've always found it to be by far the best way to go, and not just in SQL Server. You could try a global temporary table (CREATE TABLE ##tablename), which lasts beyond just the scope of the create statement.
From SQL Server Books Online (2005):
> If you create the global temporary
> table ##employees, any user in the
> database can work with this table. If
> no other user works with this table
> after you create it, the table is
> deleted when you disconnect. If
> another user works with the table
> after you create it, SQL Server
> deletes it after you disconnect and
> after all other sessions are no longer
> actively using it.
|
What is the comparative speed of temporary tables to physical tables in SQL?
|
[
"",
"sql",
"sql-server",
"sql-server-2005",
"t-sql",
""
] |
I have a python script that I would like to add a "Shutdown when done" feature to.
I know I can use gksudo (when the user clicks on "shutdown when done") to ask the user for root privileges but how can I use those privileges at a later time (when the script is actually finished).
I have thought about chmod u+s on the shutdown command so I don't need a password but I really don't want to do that.
Any ideas how I can achieve this?
Thanks in advance,
Ashy.
|
gksudo should have a timeout; I believe it runs from the time you last executed a gksudo command.
So I think I'd just run `gksudo echo meh` or something every minute. That should reset the timer and keep you active until you reboot.
|
Instead of `chmod u+s`ing the shutdown command, allowing passwordless sudo access to that one command would be better.
As for allowing shutdown at the end of the script, I suppose you could run the entire script with sudo, then drop privileges to the initial user at the start of the script?
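A rough sketch of that approach in Python (hypothetical code: it assumes the script was launched via `sudo`, so the `SUDO_UID`/`SUDO_GID` environment variables are available, and that `shutdown` is on root's PATH):

```python
import os
import subprocess

def caller_ids(env=None):
    """Return (uid, gid) of the user who invoked sudo, or None if not under sudo."""
    env = os.environ if env is None else env
    try:
        return int(env["SUDO_UID"]), int(env["SUDO_GID"])
    except (KeyError, ValueError):
        return None

def run_then_shutdown(work):
    ids = caller_ids()
    if ids is None:
        raise SystemExit("please run this script with sudo")
    uid, gid = ids
    os.setegid(gid)   # drop the effective group first,
    os.seteuid(uid)   # then the effective user: the body runs unprivileged
    work()            # ... the long-running job ...
    os.seteuid(0)     # restore root for the single privileged action
    os.setegid(0)
    subprocess.call(["shutdown", "-h", "now"])
```

Since only the *effective* ids are changed with `seteuid`/`setegid`, root can be regained for the final call without re-prompting.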
|
How can I ask for root password but perform the action at a later time?
|
[
"",
"python",
"linux",
"ubuntu",
""
] |
Is there a way to have a MySQL SELECT statement return fully qualified column names like "table.field" without using AS for every single field?
like so:
```
SELECT *
FROM table1
LEFT JOIN table2 on table1.f_ID = table2.ID
```
and the result would be:
"table1.ID", "table1.name", "table2.ID", "table2.name", ...
|
Not really. You could write some dynamic SQL to accomplish this, but it wouldn't be simple. If you really want the dynamic SQL, let me know and I could try to whip something up.
|
If you are using PHP you can use PDO to get this result.
`$PDO->setAttribute(PDO::ATTR_FETCH_TABLE_NAMES, true);`
See [SQL Select \* from multiple tables](https://stackoverflow.com/questions/2523631/sql-select-from-multiple-tables) for more information.
|
have mysql select statement return fully qualified column names like table.field
|
[
"",
"sql",
"mysql",
""
] |
With SQLAlchemy, is there a way to know beforehand whether a relation would be lazy-loaded?
For example, given a lazy parent->children relation and an instance X of "parent", I'd like to know if "X.children" is already loaded, without triggering the query.
|
I think you could look at the child's `__dict__` attribute dictionary to check if the data is already there or not.
|
You can get a list of all unloaded properties (both relations and columns) from `sqlalchemy.orm.attributes.instance_state(obj).unloaded`.
See: [Completing object with its relations and avoiding unnecessary queries in sqlalchemy](https://stackoverflow.com/questions/5795492/completing-object-with-its-relations-and-avoiding-unnecessary-queries-in-sqlalch)
An easier way is to use `inspect()`, which gives the same results:
```
from sqlalchemy import inspect
from sqlalchemy.orm import lazyload
user = session.query(User).options(lazyload(User.articles)).first()
ins = inspect(user)
ins.unloaded # <- set or properties that are not yet loaded
```
|
How to find out if a lazy relation isn't loaded yet, with SQLAlchemy?
|
[
"",
"python",
"sqlalchemy",
""
] |
I'm trying to make a PHP script; I have the script finished, but it takes about 10 minutes to finish the process it is designed to do. This is not a problem; however, I presume I have to keep the page loaded all this time, which is annoying. Can I start the process and then come back 10 minutes later and just view the log file it has generated?
|
Well, you can use [`ignore_user_abort`](https://www.php.net/manual/en/function.ignore-user-abort.php)
so the script will continue to run after the browser disconnects (keep an eye on script duration; perhaps add "[set\_time\_limit](https://www.php.net/manual/en/function.set-time-limit.php)(0)").
But a warning here: you will not be able to stop a script that uses these two lines:
```
ignore_user_abort(true);
set_time_limit(0);
```
except by directly accessing the server and killing the process there! (Been there: wrote an endless loop calling itself over and over, brought the server to a screeching stop, got shouted at...)
|
Sounds like you should have a queue and an external script for processing the queue.
For example, your PHP script should put an entry into a database table and return right away. Then, a cron running every minute checks the queue and forks a process for each job.
The advantage here is that you don't lock an apache thread up for 10 minutes.
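The queue idea might be sketched like this (Python, with an in-memory SQLite table standing in for the real database; the `jobs` schema is made up for illustration):

```python
import sqlite3

def enqueue(db, payload):
    """What the web request does: record the job and return immediately."""
    db.execute("INSERT INTO jobs (payload, status) VALUES (?, 'pending')", (payload,))
    db.commit()

def work_one(db):
    """What the cron-launched worker does: claim and process one pending job."""
    row = db.execute(
        "SELECT id, payload FROM jobs WHERE status = 'pending' LIMIT 1").fetchone()
    if row is None:
        return None          # queue is empty
    job_id, payload = row
    # ... the slow 10-minute processing would happen here ...
    db.execute("UPDATE jobs SET status = 'done' WHERE id = ?", (job_id,))
    db.commit()
    return payload

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, payload TEXT, status TEXT)")
enqueue(db, "resize-images")
```

The web-facing script only ever calls `enqueue`, so the request returns immediately; the cron job calls `work_one` in a loop until it returns `None`.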
|
PHP Background Processes
|
[
"",
"php",
"background-process",
""
] |
I'm using event delegation to listen for events lower in the DOM, but it's not working for an onchange event on a select box. Does the onchange event propagate or bubble up the DOM?
Googling has failed in finding a conclusive answer.
|
[According to specification](http://www.w3.org/TR/DOM-Level-2-Events/events.html#Events-eventgroupings-htmlevents-h3), `change`, `submit`, `reset` should bubble and `focus` and `blur` should not bubble.
This behavior is implemented properly in all web browsers except IE < 9; in other words, `change`, `submit` and `reset` only bubble correctly in IE from version 9 on.
See <https://stackoverflow.com/a/4722246/227299> for a jQuery workaround on old IE versions
* <http://www.quirksmode.org/dom/events/change.html>
* <http://quirksmode.org/dom/events/submit.html>
|
In jQuery 1.4+ the change event bubbles in all browsers, including IE.
```
$('div.field_container').change(function() {
// code here runs in all browsers, including IE.
});
```
|
Does the onchange event propagate?
|
[
"",
"javascript",
"html",
"dom",
""
] |
I'm trying to work through the problems on [projecteuler.net](http://projecteuler.net) but I keep running into a couple of problems.
The first is a question of storing large quantities of elements in a `List<T>`. I keep getting OutOfMemoryExceptions when storing large quantities in the list.
Now I admit I might not be doing these things in the best way, but is there some way of defining how much memory the app can consume?
It usually crashes when I get to about 100,000,000 elements :S
Secondly, some of the questions require the addition of massive numbers. I use the `ulong` data type where I think the number is going to get super big, but I still manage to wrap past the largest supported integer and get into negative numbers.
Do you have any tips for working with incredibly large numbers?
|
Consider [System.Numerics.BigInteger](https://learn.microsoft.com/en-us/dotnet/api/system.numerics.biginteger).
|
You need to use a large number class that uses some basic math principals to split these operations up. This [implementation of a C# BigInteger library](http://www.codeproject.com/KB/cs/biginteger.aspx) on CodePoject seems to be the most promising. The article has some good explanations of how operations with massive numbers work, as well.
Also see:
[Big integers in C#](https://stackoverflow.com/questions/176775/big-integers-in-c)
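The wrap-around described in the question is just modular arithmetic on a fixed-width integer; Python's unbounded `int` makes it easy to illustrate what a 64-bit `ulong` does (a sketch, not .NET code):

```python
ULONG_MAX = 2**64 - 1  # largest value a 64-bit unsigned integer can hold

def ulong_add(a, b):
    """Addition as a 64-bit unsigned integer performs it: modulo 2**64."""
    return (a + b) % 2**64

# Python ints do not wrap, so the true sum is available for comparison...
assert ULONG_MAX + 1 == 18446744073709551616
# ...but the fixed-width addition silently wraps around to zero:
assert ulong_add(ULONG_MAX, 1) == 0
```

A big-number library simply keeps extending the representation instead of reducing modulo a fixed width.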
|
working with incredibly large numbers in .NET
|
[
"",
"c#",
".net",
"biginteger",
"largenumber",
""
] |
I have a two years of experience of programming in Visual C# and Visual C++. I would like to know some good online sources to start learning ASP.NET or anything else I should/need to learn before diving into ASP.NET. I found some online videos that are proving to be quite useful. Perhaps I would like to know about some open source projects where I can have a look at the ASP.NET code directly.
|
Sorry, but I'm going to have to suggest the immediately obvious first:
[Official Microsoft ASP .Net Site](http://www.asp.net)
There's a link at the top to both "Get Started" and "Learn", and I have found this site incredibly useful over the past year or so.
|
Speaking as a convert from WinForms to the Web, I offer the following tips
* Learn the ASP.NET Life-cycle
* Get to grips with the concepts of client vs server-side code; know how pages are served up etc
* Don't bite off too much too soon, there are A LOT of new things to learn, and it changes very quickly. But you don't need to be writing a full-blown AJAX web app straight away.
|
Good place to start learning ASP.NET
|
[
"",
"c#",
"asp.net",
""
] |
I have a menu of product brands that I want to split over 4 columns. So if I have 39 brands, then I want the maximum item count for each column to be 10 (with a single gap in the last column). Here's how I'm calculating the item count for a column (using C#):
```
int ItemCount = Convert.ToInt32(Math.Ceiling(Convert.ToDecimal(BrandCount) / 4m));
```
All that conversion seems really ugly to me. Is there a better way to do math on integers in C#?
|
You can cast:
```
int ItemCount = (int) Math.Ceiling( (decimal)BrandCount / 4m );
```
Also, because `int`/`decimal` results in a `decimal` you can remove one of the casts:
```
int ItemCount = (int) Math.Ceiling( BrandCount / 4m );
```
|
Why are you even using a decimal?
```
int ItemCount = (BrandCount+3)/4;
```
The `+3` makes sure you round up rather than down:
```
(37+3)/4 == 40/4 == 10
(38+3)/4 == 41/4 == 10
(39+3)/4 == 42/4 == 10
(40+3)/4 == 43/4 == 10
```
In general:
```
public uint DivUp(uint num, uint denom)
{
return (num + denom - 1) / denom;
}
```
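The `(num + denom - 1) / denom` trick is equivalent to `Math.Ceiling` for non-negative operands; here is a quick brute-force check of the identity (sketched in Python, where `//` is integer division, the same as C#'s `/` for non-negative ints):

```python
import math

def div_up(num, denom):
    # integer ceiling division, the same trick as the DivUp above
    return (num + denom - 1) // denom

# exhaustively compare against math.ceil over a small range
for num in range(0, 200):
    for denom in range(1, 20):
        assert div_up(num, denom) == math.ceil(num / denom)
```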
|
Integer math in c#
|
[
"",
"c#",
"math",
"casting",
"rounding",
""
] |
I'm trying to copy both an image from a file and text from a file to the clipboard. My intention is to then open a word document or an outlook email and paste both the text and the image in one standard paste command (CTRL-V for example). I can do both separately easily enough, but doing them both in one operation doesn't seem to work.
This is how I've got the two working as separate operations (only relevant code lines of course, with try/catch stripped out etc.):
Add Image to Clipboard:
...
```
Bitmap imageToAdd = new Bitmap(imageFilePath);
Clipboard.SetImage(imageToAdd);
```
...
Add Text to Clipboard:
...
```
StreamReader rdr = new StreamReader(textFilePath);
string text = rdr.ReadToEnd();
Clipboard.SetText(text);
```
...
I'm using C# and the .NET 2.0 framework, targeting Windows XP (and likely Vista in the near future).
TIA
|
I noticed that only one object can be passed in.
In that case, when the user presses the paste command, your code could execute two functions (or one function recursively) and each time fetch the data you want and paste it in.
So, look at looping or recursion.
|
You could use RTF, which can combine text and graphics. Note that you CAN have CF\_BITMAP and CF\_TEXT on the clipboard at the same time, but it's not useful: you'd get the text when you paste into Notepad, the bitmap when you paste into Paint, and most other apps would pick one or the other, never both. So it's merely academic. Kind of neat, in the way that transporter malfunctions on Star Trek were neat. But also messy.
|
How do I copy an image and text to the clipboard as one object?
|
[
"",
"c#",
".net",
"clipboard",
"copy-paste",
""
] |
Just starting to explore the 'wonders' of regex. Being someone who learns from trial and error, I'm really struggling because my trials are throwing up a disproportionate amount of errors... My experiments are in PHP using ereg().
Anyway. I work with first and last names separately, but for now I'm using the same regex for both. So far I have:
```
^[A-Z][a-zA-Z]+$
```
Any length string that starts with a capital and has only letters (capital or not) for the rest. But where I fall apart is dealing with the special situations that can pretty much occur anywhere.
* Hyphenated Names (Worthington-Smythe)
* Names with Apostrophes (D'Angelo)
* Names with Spaces (Van der Humpton) - capitals in the middle which may or may not be required is way beyond my interest at this stage.
* Joint Names (Ben & Jerry)
Maybe there's some other way a name can be that I'm not thinking of, but I suspect that if I can get my head around this, I can add to it. I'm pretty sure there will be instances where more than one of these situations comes up in one name.
So, I think the bottom line is to have my regex also accept a space, hyphens, ampersands and apostrophes - but not at the start or end of the name to be technically correct.
|
* Hyphenated Names (Worthington-Smythe)
Add a - into the second character class. The easiest way to do that is to add it at the start so that it can't possibly be interpreted as a range modifier (as in `a-z`).
```
^[A-Z][-a-zA-Z]+$
```
* Names with Apostrophes (D'Angelo)
A naive way of doing this would be as above, giving:
```
^[A-Z][-'a-zA-Z]+$
```
Don't forget you may need to escape it inside the string! A 'better' way, given your example might be:
```
^[A-Z]'?[-a-zA-Z]+$
```
Which will allow a possible single apostrophe in the second position.
* Names with Spaces (Van der Humpton) - capitals in the middle which may or may not be required is way beyond my interest at this stage.
Here I'd be tempted to just do our naive way again:
```
^[A-Z]'?[- a-zA-Z]+$
```
A potentially better way might be:
```
^[A-Z]'?[-a-zA-Z]+( [a-zA-Z]+)*$
```
Which looks for extra words at the end. This probably isn't a good idea if you're trying to match names in a body of extra text, but then again, the original wouldn't have done that well either.
* Joint Names (Ben & Jerry)
At this point you're not looking at single names anymore?
Anyway, as you can see, regexes have a habit of growing very quickly...
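The naive cumulative pattern built up above can be exercised quickly (a Python sketch; the same pattern syntax works with PHP's preg functions):

```python
import re

# space, hyphen and apostrophe allowed, but the name must start with a capital
NAME = re.compile(r"^[A-Z]'?[- a-zA-Z]+$")

samples = {
    "Worthington-Smythe": True,   # hyphenated
    "D'Angelo": True,             # apostrophe after the capital
    "Van der Humpton": True,      # internal spaces
    "Ben & Jerry": False,         # '&' is not in the class: not a single name
    "smith": False,               # no leading capital
}

for name, expected in samples.items():
    assert bool(NAME.match(name)) is expected, name
```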
|
This regex is perfect for me.
```
^([ \u00c0-\u01ffa-zA-Z'\-])+$
```
It works fine in php environments using preg\_match(), but doesn't work everywhere.
It matches `Jérémie O'Co-nor`; the `\u00c0-\u01ff` range covers accented Latin letters, so it should handle most Western European names (though not every script).
|
Regex for names
|
[
"",
"php",
"regex",
""
] |
What's the best way to remove a page frame automatically?
I've used this type of code before:
```
<script language="JavaScript">
setTimeout ("changePage()", 3000);
function changePage() {
if (self.parent.frames.length != 0)
self.parent.location="http://www.example.com";
}
</script>
```
|
Do you mean if someone has put a frame around your content? If so, you need the following anywhere in your HTML page to jump out of the frame:
```
<script type="text/javascript">
if (window.top.location != window.location) {
window.top.location = window.location;
}
</script>
```
|
Here's an alternative that's more generic in that it doesn't name the parent URL, nor use the separate function call:
```
// is the current page at the top of the browser window hierarchy?
if (top.location != self.location)
{
// it isn't, so force this page to be at
// the top of the hierarchy, in its own window
top.location = self.location
}
```
|
How do I automatically remove an HTML page frame?
|
[
"",
"javascript",
"html",
"frame",
""
] |
We are working with some legacy code that accesses a shared drive by the letter (f:\ for example). Using the UNC notation is not an option. Our Java wrapper app will run as a service, and as the first step, I would like to map the drive explicitly in the code. Has anyone done this?
|
Consider executing the DOS command that maps a network drive as in the following code:
```
String command = "c:\\windows\\system32\\net.exe use f: \\\\machine\\share /user:user password";
Process p = Runtime.getRuntime().exec(command);
...
```
See details on net use command:
```
The syntax of this command is:
NET USE
[devicename | *] [\\computername\sharename[\volume] [password | *]]
[/USER:[domainname\]username]
[/USER:[dotted domain name\]username]
[/USER:[username@dotted domain name]
[/SMARTCARD]
[/SAVECRED]
[[/DELETE] | [/PERSISTENT:{YES | NO}]]
NET USE {devicename | *} [password | *] /HOME
NET USE [/PERSISTENT:{YES | NO}]
```
|
You can use JCIFS
<http://jcifs.samba.org/src/docs/api/jcifs/smb/SmbFile.html>
or, if you want a higher-level API and support for other protocols like FTP, Zip and others:
<http://commons.apache.org/vfs/filesystems.html>
Both options are pure Java and cross platform.
|
How can I mount a windows drive in Java?
|
[
"",
"java",
"windows-services",
"unc",
""
] |
Suppose I have code like this:
```
<div class="notSelected">
<label>Name
<input type="text" name="name" id="name" />
</label>
<div class="description">
Tell us your name so we can pretend to be your friend
when sending you an email.
</div>
</div>
```
Now suppose I have something like this (it's just an example) for each element of a form.
I'd like to change the style from notSelected to Selected when:
* the user focuses on the input element
* the user moves the mouse over a notSelected div
When the focus changes, the Selected div should become notSelected again.
I'd like to do something like this to increase the text size of the selected div. It would also be nice to make other changes, so I'd prefer to toggle the class attribute.
What is the best way to do something like this in JavaScript? Is there a JavaScript framework that would help me here and make it easy to add effects like fading, etc.?
I downloaded MooTools, but from a quick read of the docs I did not see how to do this without giving each of the form's divs a specific ID; then again, it's the first time I've used it. I have no problem using any other framework, but if you suggest one, please also say what I should look for specifically.
|
I would recommend a look at jQuery for your task. It is quite easy to learn and produces nice effects quickly. For the effect you describe alone, though, pure JavaScript would also be enough.
Make your DIVs always have a class called "selectable". You can toggle other CSS classes later on. Create a CSS class named "selected" and give it the desired look.
```
<div class="selectable"> (=off) vs. <div class="selectable selected"> (=on)
```
Then add something like this to the scripts section of your document:
```
$(document).ready(function(){
// handle mouseover and mouseout of the parent div
$("div.selectable").mouseover(
function() {
$(this).addClass("selected").addClass("mouseIsOver");
}
).mouseout(
function() {
$(this).removeClass("selected").removeClass("mouseIsOver");
}
);
// handle focus and blur of the contained input element,
// unless it has already been selected by mouse move
$("div.selectable input").focus(
function() {
$(this).parents("div.selectable").not(".mouseIsOver").addClass("selected");
}
).blur(
function() {
$(this).parents("div.selectable").not(".mouseIsOver").removeClass("selected");
}
);
});
```
This is untested so there might be a glitch in it, but it will give you the general idea of where to start.
P.S.: Changing the text size on mouseover may not be the best of ideas. It rearranges the page layout, which is annoying for the user.
|
There's also a pure CSS solution to this problem. However, it doesn't work in MSIE 6. Technically, it works similar to Tomalek's solution but instead of using JavaScript to toggle the element's class, it uses CSS to toggle its style:
```
.selectable { /* basic styles … */ }
.selectable:hover { /* hover styles … */ }
.selectable:active { /* focus styles … */ }
```
|
How to change the style of an element/container when the focus is "inside" it?
|
[
"",
"javascript",
"javascript-framework",
""
] |
I am looking for a generic, bidirectional 1 to 1 Dictionary class in C# (2), ie. a `BiDictionaryOneToOne<T, S>` which is guaranteed to only contain one of each value and key (up to RefEquals anyway), and which can be searched using either key or value. Anyone know of one, or should I just implement it myself? I can't believe that I'm the first person to need this...
There is a BiDictionary in the answers to [this question](https://stackoverflow.com/questions/255341/getting-key-of-value-of-a-generic-dictionary), but it is not for unique elements (and also does not implement RemoveByFirst(T t) or RemoveBySecond(S s)).
Thanks!
|
OK, here is my attempt (building on Jon's - thanks), archived here and open for improvement:
```
/// <summary>
/// This is a dictionary guaranteed to have only one of each value and key.
/// It may be searched either by TFirst or by TSecond, giving a unique answer because it is 1 to 1.
/// </summary>
/// <typeparam name="TFirst">The type of the "key"</typeparam>
/// <typeparam name="TSecond">The type of the "value"</typeparam>
public class BiDictionaryOneToOne<TFirst, TSecond>
{
IDictionary<TFirst, TSecond> firstToSecond = new Dictionary<TFirst, TSecond>();
IDictionary<TSecond, TFirst> secondToFirst = new Dictionary<TSecond, TFirst>();
#region Exception throwing methods
/// <summary>
/// Tries to add the pair to the dictionary.
/// Throws an exception if either element is already in the dictionary
/// </summary>
/// <param name="first"></param>
/// <param name="second"></param>
public void Add(TFirst first, TSecond second)
{
if (firstToSecond.ContainsKey(first) || secondToFirst.ContainsKey(second))
throw new ArgumentException("Duplicate first or second");
firstToSecond.Add(first, second);
secondToFirst.Add(second, first);
}
/// <summary>
/// Find the TSecond corresponding to the TFirst first
/// Throws an exception if first is not in the dictionary.
/// </summary>
/// <param name="first">the key to search for</param>
/// <returns>the value corresponding to first</returns>
public TSecond GetByFirst(TFirst first)
{
TSecond second;
if (!firstToSecond.TryGetValue(first, out second))
throw new ArgumentException("first");
return second;
}
/// <summary>
/// Find the TFirst corresponding to the TSecond second.
/// Throws an exception if second is not in the dictionary.
/// </summary>
/// <param name="second">the key to search for</param>
/// <returns>the value corresponding to second</returns>
public TFirst GetBySecond(TSecond second)
{
TFirst first;
if (!secondToFirst.TryGetValue(second, out first))
throw new ArgumentException("second");
return first;
}
/// <summary>
/// Remove the record containing first.
/// If first is not in the dictionary, throws an Exception.
/// </summary>
/// <param name="first">the key of the record to delete</param>
public void RemoveByFirst(TFirst first)
{
TSecond second;
if (!firstToSecond.TryGetValue(first, out second))
throw new ArgumentException("first");
firstToSecond.Remove(first);
secondToFirst.Remove(second);
}
/// <summary>
/// Remove the record containing second.
/// If second is not in the dictionary, throws an Exception.
/// </summary>
/// <param name="second">the key of the record to delete</param>
public void RemoveBySecond(TSecond second)
{
TFirst first;
if (!secondToFirst.TryGetValue(second, out first))
throw new ArgumentException("second");
secondToFirst.Remove(second);
firstToSecond.Remove(first);
}
#endregion
#region Try methods
/// <summary>
/// Tries to add the pair to the dictionary.
/// Returns false if either element is already in the dictionary
/// </summary>
/// <param name="first"></param>
/// <param name="second"></param>
/// <returns>true if successfully added, false if either element is already in the dictionary</returns>
public Boolean TryAdd(TFirst first, TSecond second)
{
if (firstToSecond.ContainsKey(first) || secondToFirst.ContainsKey(second))
return false;
firstToSecond.Add(first, second);
secondToFirst.Add(second, first);
return true;
}
/// <summary>
/// Find the TSecond corresponding to the TFirst first.
/// Returns false if first is not in the dictionary.
/// </summary>
/// <param name="first">the key to search for</param>
/// <param name="second">the corresponding value</param>
/// <returns>true if first is in the dictionary, false otherwise</returns>
public Boolean TryGetByFirst(TFirst first, out TSecond second)
{
return firstToSecond.TryGetValue(first, out second);
}
/// <summary>
/// Find the TFirst corresponding to the TSecond second.
/// Returns false if second is not in the dictionary.
/// </summary>
/// <param name="second">the key to search for</param>
/// <param name="first">the corresponding value</param>
/// <returns>true if second is in the dictionary, false otherwise</returns>
public Boolean TryGetBySecond(TSecond second, out TFirst first)
{
return secondToFirst.TryGetValue(second, out first);
}
/// <summary>
/// Remove the record containing first, if there is one.
/// </summary>
/// <param name="first"></param>
/// <returns> If first is not in the dictionary, returns false, otherwise true</returns>
public Boolean TryRemoveByFirst(TFirst first)
{
TSecond second;
if (!firstToSecond.TryGetValue(first, out second))
return false;
firstToSecond.Remove(first);
secondToFirst.Remove(second);
return true;
}
/// <summary>
/// Remove the record containing second, if there is one.
/// </summary>
/// <param name="second"></param>
/// <returns> If second is not in the dictionary, returns false, otherwise true</returns>
public Boolean TryRemoveBySecond(TSecond second)
{
TFirst first;
if (!secondToFirst.TryGetValue(second, out first))
return false;
secondToFirst.Remove(second);
firstToSecond.Remove(first);
return true;
}
#endregion
/// <summary>
/// The number of pairs stored in the dictionary
/// </summary>
public Int32 Count
{
get { return firstToSecond.Count; }
}
/// <summary>
/// Removes all items from the dictionary.
/// </summary>
public void Clear()
{
firstToSecond.Clear();
secondToFirst.Clear();
}
}
```
|
A more complete implementation of bidirectional dictionary:
* **Supports almost all interfaces** of the original `Dictionary<TKey,TValue>` (except infrastructure interfaces):
+ `IDictionary<TKey, TValue>`
+ `IReadOnlyDictionary<TKey, TValue>`
+ `IDictionary`
+ `ICollection<KeyValuePair<TKey, TValue>>` (this one and below are the base interfaces of the ones above)
+ `ICollection`
+ `IReadOnlyCollection<KeyValuePair<TKey, TValue>>`
+ `IEnumerable<KeyValuePair<TKey, TValue>>`
+ `IEnumerable`
* **Serialization** using `SerializableAttribute`.
* **Debug view** using `DebuggerDisplayAttribute` (with Count info) and `DebuggerTypeProxyAttribute` (for displaying key-value pairs in watches).
* Reverse dictionary is available as the `IDictionary<TValue, TKey> Reverse` property and also implements all interfaces mentioned above. All operations on either dictionary modify both.
Usage:
```
var dic = new BiDictionary<int, string>();
dic.Add(1, "1");
dic[2] = "2";
dic.Reverse.Add("3", 3);
dic.Reverse["4"] = 4;
dic.Clear();
```
Code is available in my private framework on GitHub: [BiDictionary(TFirst,TSecond).cs](https://github.com/Athari/Alba.Framework/blob/master/Alba.Framework/Collections/Collections/BiDictionary(TFirst%2CTSecond).cs) ([permalink](https://github.com/Athari/Alba.Framework/blob/33cdaf77872d33608edc6b9a0f84f757a6bbcce2/Alba.Framework/Collections/Collections/BiDictionary(TFirst%2CTSecond).cs), [search](https://github.com/Athari/Alba.Framework/search?q=bidictionary&ref=cmdform)).
Copy:
```
[Serializable]
[DebuggerDisplay ("Count = {Count}"), DebuggerTypeProxy (typeof(DictionaryDebugView<,>))]
public class BiDictionary<TFirst, TSecond> : IDictionary<TFirst, TSecond>, IReadOnlyDictionary<TFirst, TSecond>, IDictionary
{
private readonly IDictionary<TFirst, TSecond> _firstToSecond = new Dictionary<TFirst, TSecond>();
[NonSerialized]
private readonly IDictionary<TSecond, TFirst> _secondToFirst = new Dictionary<TSecond, TFirst>();
[NonSerialized]
private readonly ReverseDictionary _reverseDictionary;
public BiDictionary ()
{
_reverseDictionary = new ReverseDictionary(this);
}
public IDictionary<TSecond, TFirst> Reverse
{
get { return _reverseDictionary; }
}
public int Count
{
get { return _firstToSecond.Count; }
}
object ICollection.SyncRoot
{
get { return ((ICollection)_firstToSecond).SyncRoot; }
}
bool ICollection.IsSynchronized
{
get { return ((ICollection)_firstToSecond).IsSynchronized; }
}
bool IDictionary.IsFixedSize
{
get { return ((IDictionary)_firstToSecond).IsFixedSize; }
}
public bool IsReadOnly
{
get { return _firstToSecond.IsReadOnly || _secondToFirst.IsReadOnly; }
}
public TSecond this [TFirst key]
{
get { return _firstToSecond[key]; }
set
{
_firstToSecond[key] = value;
_secondToFirst[value] = key;
}
}
object IDictionary.this [object key]
{
get { return ((IDictionary)_firstToSecond)[key]; }
set
{
((IDictionary)_firstToSecond)[key] = value;
((IDictionary)_secondToFirst)[value] = key;
}
}
public ICollection<TFirst> Keys
{
get { return _firstToSecond.Keys; }
}
ICollection IDictionary.Keys
{
get { return ((IDictionary)_firstToSecond).Keys; }
}
IEnumerable<TFirst> IReadOnlyDictionary<TFirst, TSecond>.Keys
{
get { return ((IReadOnlyDictionary<TFirst, TSecond>)_firstToSecond).Keys; }
}
public ICollection<TSecond> Values
{
get { return _firstToSecond.Values; }
}
ICollection IDictionary.Values
{
get { return ((IDictionary)_firstToSecond).Values; }
}
IEnumerable<TSecond> IReadOnlyDictionary<TFirst, TSecond>.Values
{
get { return ((IReadOnlyDictionary<TFirst, TSecond>)_firstToSecond).Values; }
}
public IEnumerator<KeyValuePair<TFirst, TSecond>> GetEnumerator ()
{
return _firstToSecond.GetEnumerator();
}
IEnumerator IEnumerable.GetEnumerator ()
{
return GetEnumerator();
}
IDictionaryEnumerator IDictionary.GetEnumerator ()
{
return ((IDictionary)_firstToSecond).GetEnumerator();
}
public void Add (TFirst key, TSecond value)
{
_firstToSecond.Add(key, value);
_secondToFirst.Add(value, key);
}
void IDictionary.Add (object key, object value)
{
((IDictionary)_firstToSecond).Add(key, value);
((IDictionary)_secondToFirst).Add(value, key);
}
public void Add (KeyValuePair<TFirst, TSecond> item)
{
_firstToSecond.Add(item);
_secondToFirst.Add(item.Reverse());
}
public bool ContainsKey (TFirst key)
{
return _firstToSecond.ContainsKey(key);
}
public bool Contains (KeyValuePair<TFirst, TSecond> item)
{
return _firstToSecond.Contains(item);
}
public bool TryGetValue (TFirst key, out TSecond value)
{
return _firstToSecond.TryGetValue(key, out value);
}
public bool Remove (TFirst key)
{
TSecond value;
if (_firstToSecond.TryGetValue(key, out value)) {
_firstToSecond.Remove(key);
_secondToFirst.Remove(value);
return true;
}
else
return false;
}
void IDictionary.Remove (object key)
{
var firstToSecond = (IDictionary)_firstToSecond;
if (!firstToSecond.Contains(key))
return;
var value = firstToSecond[key];
firstToSecond.Remove(key);
((IDictionary)_secondToFirst).Remove(value);
}
public bool Remove (KeyValuePair<TFirst, TSecond> item)
{
return _firstToSecond.Remove(item);
}
public bool Contains (object key)
{
return ((IDictionary)_firstToSecond).Contains(key);
}
public void Clear ()
{
_firstToSecond.Clear();
_secondToFirst.Clear();
}
public void CopyTo (KeyValuePair<TFirst, TSecond>[] array, int arrayIndex)
{
_firstToSecond.CopyTo(array, arrayIndex);
}
void ICollection.CopyTo (Array array, int index)
{
((IDictionary)_firstToSecond).CopyTo(array, index);
}
[OnDeserialized]
internal void OnDeserialized (StreamingContext context)
{
_secondToFirst.Clear();
foreach (var item in _firstToSecond)
_secondToFirst.Add(item.Value, item.Key);
}
private class ReverseDictionary : IDictionary<TSecond, TFirst>, IReadOnlyDictionary<TSecond, TFirst>, IDictionary
{
private readonly BiDictionary<TFirst, TSecond> _owner;
public ReverseDictionary (BiDictionary<TFirst, TSecond> owner)
{
_owner = owner;
}
public int Count
{
get { return _owner._secondToFirst.Count; }
}
object ICollection.SyncRoot
{
get { return ((ICollection)_owner._secondToFirst).SyncRoot; }
}
bool ICollection.IsSynchronized
{
get { return ((ICollection)_owner._secondToFirst).IsSynchronized; }
}
bool IDictionary.IsFixedSize
{
get { return ((IDictionary)_owner._secondToFirst).IsFixedSize; }
}
public bool IsReadOnly
{
get { return _owner._secondToFirst.IsReadOnly || _owner._firstToSecond.IsReadOnly; }
}
public TFirst this [TSecond key]
{
get { return _owner._secondToFirst[key]; }
set
{
_owner._secondToFirst[key] = value;
_owner._firstToSecond[value] = key;
}
}
object IDictionary.this [object key]
{
get { return ((IDictionary)_owner._secondToFirst)[key]; }
set
{
((IDictionary)_owner._secondToFirst)[key] = value;
((IDictionary)_owner._firstToSecond)[value] = key;
}
}
public ICollection<TSecond> Keys
{
get { return _owner._secondToFirst.Keys; }
}
ICollection IDictionary.Keys
{
get { return ((IDictionary)_owner._secondToFirst).Keys; }
}
IEnumerable<TSecond> IReadOnlyDictionary<TSecond, TFirst>.Keys
{
get { return ((IReadOnlyDictionary<TSecond, TFirst>)_owner._secondToFirst).Keys; }
}
public ICollection<TFirst> Values
{
get { return _owner._secondToFirst.Values; }
}
ICollection IDictionary.Values
{
get { return ((IDictionary)_owner._secondToFirst).Values; }
}
IEnumerable<TFirst> IReadOnlyDictionary<TSecond, TFirst>.Values
{
get { return ((IReadOnlyDictionary<TSecond, TFirst>)_owner._secondToFirst).Values; }
}
public IEnumerator<KeyValuePair<TSecond, TFirst>> GetEnumerator ()
{
return _owner._secondToFirst.GetEnumerator();
}
IEnumerator IEnumerable.GetEnumerator ()
{
return GetEnumerator();
}
IDictionaryEnumerator IDictionary.GetEnumerator ()
{
return ((IDictionary)_owner._secondToFirst).GetEnumerator();
}
public void Add (TSecond key, TFirst value)
{
_owner._secondToFirst.Add(key, value);
_owner._firstToSecond.Add(value, key);
}
void IDictionary.Add (object key, object value)
{
((IDictionary)_owner._secondToFirst).Add(key, value);
((IDictionary)_owner._firstToSecond).Add(value, key);
}
public void Add (KeyValuePair<TSecond, TFirst> item)
{
_owner._secondToFirst.Add(item);
_owner._firstToSecond.Add(item.Reverse());
}
public bool ContainsKey (TSecond key)
{
return _owner._secondToFirst.ContainsKey(key);
}
public bool Contains (KeyValuePair<TSecond, TFirst> item)
{
return _owner._secondToFirst.Contains(item);
}
public bool TryGetValue (TSecond key, out TFirst value)
{
return _owner._secondToFirst.TryGetValue(key, out value);
}
public bool Remove (TSecond key)
{
TFirst value;
if (_owner._secondToFirst.TryGetValue(key, out value)) {
_owner._secondToFirst.Remove(key);
_owner._firstToSecond.Remove(value);
return true;
}
else
return false;
}
void IDictionary.Remove (object key)
{
var firstToSecond = (IDictionary)_owner._secondToFirst;
if (!firstToSecond.Contains(key))
return;
var value = firstToSecond[key];
firstToSecond.Remove(key);
((IDictionary)_owner._firstToSecond).Remove(value);
}
public bool Remove (KeyValuePair<TSecond, TFirst> item)
{
return _owner._secondToFirst.Remove(item);
}
public bool Contains (object key)
{
return ((IDictionary)_owner._secondToFirst).Contains(key);
}
public void Clear ()
{
_owner._secondToFirst.Clear();
_owner._firstToSecond.Clear();
}
public void CopyTo (KeyValuePair<TSecond, TFirst>[] array, int arrayIndex)
{
_owner._secondToFirst.CopyTo(array, arrayIndex);
}
void ICollection.CopyTo (Array array, int index)
{
((IDictionary)_owner._secondToFirst).CopyTo(array, index);
}
}
}
internal class DictionaryDebugView<TKey, TValue>
{
private readonly IDictionary<TKey, TValue> _dictionary;
[DebuggerBrowsable (DebuggerBrowsableState.RootHidden)]
public KeyValuePair<TKey, TValue>[] Items
{
get
{
var array = new KeyValuePair<TKey, TValue>[_dictionary.Count];
_dictionary.CopyTo(array, 0);
return array;
}
}
public DictionaryDebugView (IDictionary<TKey, TValue> dictionary)
{
if (dictionary == null)
throw new ArgumentNullException("dictionary");
_dictionary = dictionary;
}
}
public static class KeyValuePairExts
{
public static KeyValuePair<TValue, TKey> Reverse<TKey, TValue> (this KeyValuePair<TKey, TValue> @this)
{
return new KeyValuePair<TValue, TKey>(@this.Value, @this.Key);
}
}
```
|
Bidirectional 1 to 1 Dictionary in C#
|
[
"",
"c#",
".net",
"collections",
""
] |
I frequently run into large, non-template classes in C++ where simple methods are defined directly in the class body in the header file instead of separately in the implementation file. For example:
```
class Foo {
int getBar() const { return bar; }
...
};
```
Why do this? It seems like there are disadvantages. The implementation is not as hidden as it should be, the code is less readable, and there would also be an increased burden on the compiler if the class's header file is included in many different places.
My guess is that people intend for these functions to be inlined in other modules, which could improve performance significantly. However, I've heard newer compilers can do inlining (and other interprocedural optimizations) at link-time across modules. How broad is the support for this kind of link-time optimization, and does it actually make these kind of definitions unnecessary? Are there any other good reasons for these definitions?
|
The C++ standard says that methods defined inside the class definition are `inline` by default. This yields obvious performance gains for simple functions such as getters and setters. Link-time cross-module optimization is harder, although some compilers can do it.
|
Often there's no reason other than it's just easier and saves time. It also saves a little clutter in the implementation file, while taking up the same number of lines in the header file. And being less readable is quite a stretch if it's limited to things like getters and setters.
|
Why are C++ methods sometimes defined inside classes?
|
[
"",
"c++",
"optimization",
"compiler-construction",
"inline-functions",
""
] |
Let's have the following class definition:
```
CThread::CThread ()
{
this->hThread = NULL;
this->hThreadId = 0;
this->hMainThread = ::GetCurrentThread ();
this->hMainThreadId = ::GetCurrentThreadId ();
this->Timeout = 2000; //milliseconds
}
CThread::~CThread ()
{
//waiting for the thread to terminate
if (this->hThread) {
if (::WaitForSingleObject (this->hThread, this->Timeout) == WAIT_TIMEOUT)
::TerminateThread (this->hThread, 1);
::CloseHandle (this->hThread);
}
}
//*********************************************************
//working method
//*********************************************************
unsigned long CThread::Process (void* parameter)
{
//a mechanism for terminating thread should be implemented
//not allowing the method to be run from the main thread
if (::GetCurrentThreadId () == this->hMainThreadId)
return 0;
else {
m_pMyPointer = new MyClass(...);
// my class successfully works here in another thread
return 0;
}
}
//*********************************************************
//creates the thread
//*********************************************************
bool CThread::CreateThread ()
{
if (!this->IsCreated ()) {
param* this_param = new param;
this_param->pThread = this;
this->hThread = ::CreateThread (NULL, 0, (unsigned long (__stdcall *)(void *))this->runProcess, (void *)(this_param), 0, &this->hThreadId);
return this->hThread ? true : false;
}
return false;
}
//*********************************************************
//creates the thread
//*********************************************************
int CThread::runProcess (void* Param)
{
CThread* thread;
thread = (CThread*)((param*)Param)->pThread;
delete ((param*)Param);
return thread->Process (0);
}
MyClass* CThread::getMyPointer() {
return m_pMyPointer;
}
```
In the main program, we have the following:
```
void main(void) {
CThread thread;
thread.CreateThread();
MyClass* myPointer = thread.getMyPointer();
myPointer->someMethod(); // CRASH, BOOM, BANG!!!!
}
```
At the moment `myPointer` is used (in the main thread) it crashes. I don't know how to get a pointer that points to memory allocated in another thread. Is this actually possible?
|
The memory space for your application is accessible to all threads. By default, any variable is visible to any thread regardless of context (the only exception would be variables declared `__declspec(thread)`).
You are getting a crash due to a race condition: the thread you just created hasn't started running yet at the point where you call `getMyPointer`. You need to add some kind of synchronization between the newly created thread and the originating thread. In other words, the originating thread has to wait until the new thread signals that it has created the object.
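Here is a minimal sketch of that synchronization idea, written with Python's `threading` module purely for brevity (the question's code is Win32 C++, where an event object plus `SetEvent`/`WaitForSingleObject` achieves the same thing; all names below are illustrative):

```python
import threading

class Worker:
    """Illustrative only: block callers until the worker thread has created the object."""
    def __init__(self):
        self.obj = None
        self._created = threading.Event()

    def _run(self):
        # Stand-in for "m_pMyPointer = new MyClass(...)" from the question.
        self.obj = {"name": "MyClass instance"}
        self._created.set()          # signal: the object now exists

    def start(self):
        threading.Thread(target=self._run).start()

    def get_obj(self):
        self._created.wait()         # block until the worker signals creation
        return self.obj

w = Worker()
w.start()
print(w.get_obj())                   # safe: never None
```

Without the `wait()`, the getter could run before the worker thread assigns the object, which is exactly the crash seen in the question.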
|
I'm trying to get my head around what you are trying to do. It looks overly complicated for something like a thread class. Would you mind posting the class definition as well?
Start by removing the C-style cast of the process-argument to CreateThread():
```
this->hThread = ::CreateThread (NULL, 0,&runProcess, (void *)(this_param), 0, &this->hThreadId);
```
If this doesn't compile you're doing something wrong! *Never ever* cast a function pointer! If the compiler complains you need to change your function, not try to cast away the errors! Really! You'll only make it worse for yourself! If you do it again *they*\* will come to your home and do ... Let's see how you like that! Seriously, don't do it again.
Btw, in Process() I think it would be more appropriate to do something like:
```
assert(::GetCurrentThreadId() == hThreadId);
```
But if you declare it private it should only be accessible by your CThread-class anyway and therefor it shouldn't be a problem. Asserts are good though!
\*It's not clear who *they* are but it's clear whatever *they* do it won't be pleasant!
|
How to get pointer from another thread?
|
[
"",
"c++",
"multithreading",
""
] |
Currently I have subversion set up so that when I make changes in Eclipse PDT, I can commit the changes and they will be saved in /home/administrator/*Project File*. This file has the /branches /tags and /trunk directories recommended by subversion. I have no problem properly uploading files to the repository, but do I have to "mv" the files in Linux to /var/www in order to make them live on the server? Is there an automatic process to do this that I am missing? I'm assuming we don't want to make the /var/www directory (which is live on the machine) the repository, but I don't know.
|
You can do an `svn export` into your www directory. That will give you a "clean" version of your repo, without the .svn directories.
```
cd /var/www
svn export /home/administrator/MyProject/trunk MyProject
```
---
Edit: adding in some good ideas from the comments...
Some options for when you want to update your exported copy:
* run `svn export --force /home/...../ MyProject` this will stop it complaining about overwriting the existing files. This method will mean that if you delete a file from your repository, it'll still remain in your www folder.
* change your SVN command to export into a new directory each time:
`svn export /home/..../ MyProject_20081105`
and then create a symlink to that folder:
`ln -s MyProject_20081105 MyProject`
Just delete and recreate the symlink each time you "release". In this case, the export directory doesn't need to be in the `www` folder at all.
|
You can simply check out a copy of the repository in the /var/www folder, and then run **svn update** on it whenever you need to (or switch it to a new branch/tag, etc). Thus you have one copy of the repository checked out on your local machine where you make changes and updates, and another copy on your webserver.
Using an SVN repository also gives you the ability to revert to earlier versions as well.
|
What is the best way to make files live using subversion on a production server?
|
[
"",
"php",
"linux",
"svn",
""
] |
As I mention in an earlier question, I'm refactoring a project I'm working on. Right now, everything depends on everything else. Everything is separated into namespaces I created early on, but I don't think my method of separtion was very good. I'm trying to eliminate cases where an object depends on another object in a different namespace that depends on the other object.
The way I'm doing this, is by partitioning my project (a game) into a few assemblies:
```
GameName.Engine
GameName.Rules
GameName.Content
GameName.Gui
```
The `GameName.Engine` assembly contains a bunch of interfaces, so other parts of the program don't need to depend on any particular implementation. For instance, I have a `GameName.Engine.ICarryable` interface that is primarily implemented by the `GameName.Content.Item` class (and its sub-classes). I also have an object to allow an `Actor` to pick up an `ICarryable`: `PickupAction`. Previously, it required an `Item`, but this exposes unnecessary methods and properties, when it really only needs the methods required to pick it up and carry it. That's why I've created the `ICarryable` interface.
Now that's all good, so on to my question. `GameName.Gui` should only depend on `GameName.Engine`, not on any particular implementation. Inside `GameName.Gui` I have a `MapView` object that displays a `Map` and any `IRenderable` objects on it.
`IRenderable` is basically just an interface that exposes an image and some strings describing the object. But, the MapView also needs the object to implement `ILocateable`, so it can see its location and know when it's changed via an event, `LocationChanged`, inside `ILocateable`.
These two interfaces are implemented by both `Item` and `Actor` objects, which, again, are defined in `GameName.Content`. Since it needs both interfaces, I have two choices:
1. Make `GameName.Gui` depend on `GameName.Content` and require an `Entity` (base-class of `Item` and `Actor`).
2. Make an interface inside `GameName.Engine` that looks like this:
```
interface ILocateableRenderable : ILocateable, IRenderable
{
}
```
And then make my `Actor` and `Item` objects implement that interface instead of the two individually.
Anyone have any suggestions on which method is best? Is it appropriate to create an interface with no methods or properties that only enforces implementing two other interfaces?
*Clarification: `MapView` works on a `Map`, which is composed of `Entity` objects. I don't want to expose the `Entity` objects to the `MapView`, it only needs to know their location (`ILocateable`) and how to render them (`IRenderable`).*
|
You seem to have two conflicting requirements:
> Inside GameName.Gui I
> have a **MapView object that displays** a
> Map and **any IRenderable** objects on it.
and
> But, the **MapView also needs the object
> to implement ILocateable**, so it can
> see its location and know when its
> changed via an event, LocationChanged,
> inside ILocateable.
So, if the MapView only needs IRenderable, then it should accept IRenderable and then check whether the class also implements ILocateable. In this case use its methods.
```
public void Whatever(IRenderable renderable)
{
if (renderable is ILocateable)
{
((ILocateable) renderable).LocationChanged += myEventHandler;
}
// Do normal stuff
}
```
On the other hand, if you always need it to be ILocateable and IRenderable, then you should really create a derived interface in one of two ways
Either
```
interface IMappable: IRenderable, ILocateable {}
public void Whatever(IMappable mappable)
{
mappable.LocationChanged += myEventHandler;
// Do normal stuff
}
```
or
```
interface IRenderable: ILocateable
{
// IRenderable interface
}
public void Whatever(IRenderable renderable)
{
renderable.LocationChanged += myEventHandler;
// Do normal stuff
}
```
depending on how your code is at the moment.
|
In all cases, the IRenderable objects would also have to implement ILocateable, so I'm doing what @Sklivvz suggested and making IRenderable implement ILocateable:
```
public interface IRenderable : ILocateable
{
// IRenderable interface
}
```
This pattern is used heavily in .NET. For example, the ICollection interface inherits from IEnumerable while adding additional methods. This is a direct analogy to what the above code would do.
Thanks @Sklivvz.
|
Combining two interfaces into one
|
[
"",
"c#",
".net",
"oop",
""
] |
What order of precedence are events handled in JavaScript?
Here are the events in alphabetical order...
1. onabort - Loading of an image is
interrupted
2. onblur - An element loses focus
3. onchange - The user changes the
content of a field
4. onclick - Mouse clicks an object
5. ondblclick - Mouse double-clicks an
object
6. onerror - An error occurs when
loading a document or an image
7. onfocus - An element gets focus
8. onkeydown - A keyboard key is
pressed
9. onkeypress - A keyboard key is
pressed or held down
10. onkeyup - A keyboard key is
released
11. onload - A page or an image is
finished loading
12. onmousedown - A mouse button is
pressed
13. onmousemove - The mouse is moved
14. onmouseout - The mouse is moved off
an element
15. onmouseover - The mouse is moved
over an element
16. onmouseup - A mouse button is
released
17. onreset - The reset button is
clicked
18. onresize - A window or frame is
resized
19. onselect - Text is selected
20. onsubmit - The submit button is
clicked
21. onunload - The user exits the page
In what order are they handled out of the event queue?
The precedence is not first-in-first-out (FIFO), or so I believe.
|
This was not, as far as I know, explicitly defined in the past. Different browsers are free to implement event ordering however they see fit. While most are close enough for all practical purposes, there have been and continue to be some odd edge cases where browsers differ somewhat (and, of course, the many more cases where certain browsers fail to send certain events *at all*).
That said, the [HTML 5 draft recommendation](http://www.whatwg.org/specs/web-apps/current-work/multipage/browsers.html) does make an attempt to specify how events will be queued and dispatched - [the event loop](http://www.whatwg.org/specs/web-apps/current-work/multipage/browsers.html#event-loops):
> To coordinate events, user
> interaction, scripts, rendering,
> networking, and so forth, user agents
> must use event loops as described in
> this section.
>
> There must be at least one event loop
> per user agent, and at most one event
> loop per unit of related
> similar-origin browsing contexts.
>
> An event loop has one or more task
> queues. A task queue is an ordered
> list of tasks [...]
> When a user agent is to queue a task,
> it must add the given task to one of
> the task queues of the relevant event
> loop. All the tasks from one
> particular task source must always be
> added to the same task queue, but
> tasks from different task sources may
> be placed in different task queues.
> [...]
>
> [...]a user agent could have one task queue
> for mouse and key events (the user
> interaction task source), and another
> for everything else. The user agent
> could then give keyboard and mouse
> events preference over other tasks
> three quarters of the time, keeping
> the interface responsive but not
> starving other task queues, and never
> processing events from any one task
> source out of order. [...]
Note that last bit: it is up to the browser implementation to determine which events will be grouped together and processed in order, as well as the priority given to any particular type of event. Therefore, there's little reason to expect all browsers to dispatch *all* events in a fixed order, now or in the future.
|
For anyone wanting to know the sequence in which related events get called, see below. So far I have only tested in Chrome.
1. mouseover
2. mousemove
3. mouseout
---
1. mousedown
2. change (on focused input)
3. blur (on focused element)
4. focus
5. mouseup
6. click
7. dblclick
---
1. keydown
2. keypress
3. keyup
|
DOM event precedence
|
[
"",
"javascript",
"event-handling",
"dom-events",
"eventqueue",
""
] |
I'm relatively new to the Python world, but this seems very straightforward.
Google is yelling at me that this code needs to be optimized:
```
class AddLinks(webapp.RequestHandler):
def post(self):
# Hash the textarea input to generate pseudo-unique value
hash = md5.new(self.request.get('links')).hexdigest()
        # Separate the input by line
allLinks = self.request.get('links').splitlines()
# For each line in the input, add to the database
for x in allLinks:
newGroup = LinkGrouping()
newGroup.reference = hash
newGroup.link = x
newGroup.put()
# testing vs live
#baseURL = 'http://localhost:8080'
baseURL = 'http://linkabyss.appspot.com'
# Build template parameters
template_values = {
'all_links': allLinks,
'base_url': baseURL,
'reference': hash,
}
# Output the template
path = os.path.join(os.path.dirname(__file__), 'addLinks.html')
self.response.out.write(template.render(path, template_values))
```
The dashboard is telling me that this is using a ton of CPU.
Where should I look for improvements?
|
The main overhead here is the multiple individual puts to the datastore. If you can, store the links as a single entity, as Andre suggests. You can always split the links into an array and store it in a ListProperty.
If you do need an entity for each link, try this:
```
# For each line in the input, add to the database
groups = []
for x in allLinks:
newGroup = LinkGrouping()
newGroup.reference = hash
newGroup.link = x
groups.append(newGroup)
db.put(groups)
```
It will reduce the datastore round trips to one, and it's those round trips that are really driving your high CPU usage.
|
Looks pretty tight to me.
I see one thing that may make a small improvement.
You're calling `self.request.get('links')` twice.
So adding:
```
unsplitlinks = self.request.get('links')
```
And referencing `unsplitlinks` could help.
Other than that the loop is the only area I see that would be a target for optimization.
Is it possible to prep the data and then add it to the db at once, instead of doing a db add per link? (I assume the .put() command adds the link to the database)
|
How can I optimize this Google App Engine code?
|
[
"",
"python",
"google-app-engine",
"optimization",
""
] |
I am about to write JUnit tests for an XML-parsing Java class that outputs directly to an OutputStream. For example, `xmlWriter.writeString("foo");` would produce something like `<aTag>foo</aTag>` to be written to the OutputStream held inside the XmlWriter instance. The question is how to test this behaviour. One solution would of course be to let the OutputStream be a FileOutputStream and then read the results by opening the written file, but that isn't very elegant.
|
Use a [ByteArrayOutputStream](http://java.sun.com/javase/6/docs/api/java/io/ByteArrayOutputStream.html) and then get the data out of that using [toByteArray()](http://java.sun.com/javase/6/docs/api/java/io/ByteArrayOutputStream.html#toByteArray()). This won't test *how* it writes to the stream (one byte at a time or as a big buffer) but usually you shouldn't care about that anyway.
|
If you can pass a Writer to XmlWriter, I would pass it a `StringWriter`. You can query the `StringWriter`'s contents using `toString()` on it.
If you have to pass an `OutputStream`, you can pass a `ByteArrayOutputStream` and you can also call `toString()` on it to get its contents as a String.
Then you can code something like:
```
public void testSomething()
{
Writer sw = new StringWriter();
XmlWriter xw = new XmlWriter(sw);
...
xw.writeString("foo");
...
assertEquals("...<aTag>foo</aTag>...", sw.toString());
}
```
|
Testing what's written to a Java OutputStream
|
[
"",
"java",
"junit",
"outputstream",
""
] |
Suppose I have a python object `x` and a string `s`, how do I set the attribute `s` on `x`? So:
```
>>> x = SomeObject()
>>> attr = 'myAttr'
>>> # magic goes here
>>> x.myAttr
'magic'
```
What's the magic? The goal of this, incidentally, is to cache calls to `x.__getattr__()`.
|
```
setattr(x, attr, 'magic')
```
For help on it:
```
>>> help(setattr)
Help on built-in function setattr in module __builtin__:
setattr(...)
setattr(object, name, value)
Set a named attribute on an object; setattr(x, 'y', v) is equivalent to
``x.y = v''.
```
However, you should note that you can't do that to a "pure" instance of `object`. But it is likely you have a simple subclass of `object` where it will work fine. I would strongly urge the O.P. never to make bare instances of `object` like that.
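As for the question's stated goal of caching `__getattr__()` calls, here is an illustrative sketch (the class names are made up, not from the question). Because `__getattr__` is only invoked when normal attribute lookup fails, storing the computed value with `setattr` means subsequent accesses bypass it entirely:

```python
class CachingProxy:
    """Illustrative: cache expensive attribute lookups on first access."""
    def __init__(self, target):
        self._target = target

    def __getattr__(self, name):
        # Only called when normal lookup fails, i.e. on the first access.
        value = getattr(self._target, name)   # the "expensive" computation
        setattr(self, name, value)            # cache it on the instance
        return value

class Expensive:
    lookups = 0
    @property
    def myAttr(self):
        Expensive.lookups += 1
        return 'magic'

proxy = CachingProxy(Expensive())
print(proxy.myAttr, proxy.myAttr)   # the property runs only once
print(Expensive.lookups)            # 1
```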
|
Usually, we define classes for this.
```
class XClass( object ):
def __init__( self ):
self.myAttr= None
x= XClass()
x.myAttr= 'magic'
x.myAttr
```
You can, to an extent, do this with the `setattr` and `getattr` built-in functions. Note, however, that they don't work on instances of `object` directly.
```
>>> a= object()
>>> setattr( a, 'hi', 'mom' )
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'object' object has no attribute 'hi'
```
They do, however, work on all kinds of simple classes.
```
class YClass( object ):
pass
y= YClass()
setattr( y, 'myAttr', 'magic' )
y.myAttr
```
|
How do you programmatically set an attribute?
|
[
"",
"python",
"attributes",
"object",
""
] |
I have multiple ordered lists. Unfortunately, the order of the items isn't a simple alpha or numeric comparison, otherwise this would be trivial. So what I have is something like:
```
List #1 List #2 List #3
groundhog groundhog easter
mothersday mayday mothersday
midsummer laborday halloween
christmas
```
And from this I can gather that groundhog < mothersday, but the relationship of groundhog and easter is unknown. I am guaranteed that the order of the items from list to list is self-consistent. (i.e. no matter which list it occurs in, easter is always before halloween)
But what I need is a new ordered list that represents each item in the other lists only once, that preserves all of the known relationships above:
```
groundhog
easter
mayday
mothersday
midsummer
laborday
halloween
christmas
```
However, the following list is also perfectly valid:
```
easter
groundhog
mothersday
mayday
midsummer
laborday
halloween
christmas
```
I'm looking for a fairly quick, general-purpose algorithm I can use to order N lists in this way. (Working C# code a plus, for sure, but not necessary.)
I have a solution that works, but it's O(N^2) and a dog with even modest data sets.
|
You may want to have a look at [topological sorting](http://en.wikipedia.org/wiki/Topological_sort). I think it applies quite well to your case.
|
I agree with @bdumitriu, you want topological sorting.
This type of sort assumes you have a *partial order* among your data items, which means that for *certain pairs* of items, you can compare them to see which one precedes the other. In this case, like you say, there are multiple ways to create a single list of the items that preserves all of the constraints.
Topological sort usually works by first creating a directed acyclic graph of your items, where each item becomes a vertex, and a directed edge from node X to node Y means item X precedes item Y in your input lists. (So you'd walk through your set of input sorted lists, and every time you encounter a new item, you'd make a vertex for it, and for every consecutive pair of items in each sorted list, you make a directed edge from the first item to the second. Note that you don't need to create directed edges from an item to *all* of the previous items in each input list; for example in your input List 1, you'd create edges `groundhog` -> `mothersday`, `mothersday` -> `midsummer`, and `midsummer` -> `christmas`.)
A topological sort will take time O(V+E), where V is the total number of items you are sorting (the number of vertices), and E is the total number of predecessor relationships from your input lists (the number of edges).
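To make the graph-building and O(V+E) sort above concrete, here is a sketch of Kahn's algorithm in Python (Python rather than the requested C#; the `sorted()` calls are just an arbitrary deterministic tie-break, since many valid orders exist):

```python
from collections import defaultdict, deque

# The three input lists from the question.
lists = [
    ["groundhog", "mothersday", "midsummer", "christmas"],
    ["groundhog", "mayday", "laborday"],
    ["easter", "mothersday", "halloween"],
]

# Build the DAG: one edge per consecutive pair in each input list.
succ = defaultdict(set)
indegree = defaultdict(int)
nodes = set()
for lst in lists:
    nodes.update(lst)
    for a, b in zip(lst, lst[1:]):
        if b not in succ[a]:
            succ[a].add(b)
            indegree[b] += 1

# Kahn's algorithm: repeatedly emit a node with no remaining predecessors.
queue = deque(sorted(n for n in nodes if indegree[n] == 0))
order = []
while queue:
    n = queue.popleft()
    order.append(n)
    for m in sorted(succ[n]):
        indegree[m] -= 1
        if indegree[m] == 0:
            queue.append(m)

print(order)
```

Every edge is examined once and every node dequeued once, which is where the O(V+E) bound comes from.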
--Phil
|
Multiple ordered lists boiled down to one list, where order is relative
|
[
"",
"c#",
"sorting",
"diff",
""
] |
I need to pack all my JS, but I still need to be able to edit it going into source control.
Is there a nice, easy plugin for CCNet or NAnt that will allow me to pack my JS and store the results in the same files on the way out to production?
Not really looking for file combining, just minifying each file.
|
Here is the best answer I have found. It calls the YUI Compressor and just uses plain old NAnt to do so, replacing the existing JS files with the minified ones.
<http://codeclimber.net.nz/archive/2007/08/22/How-to-use-YUI-JS-Compressor-inside-a-NAnt-build.aspx>
|
I use copy concatenation and the YUI Compressor in a post build script. Old-school batch file style.
The [compressor](http://www.julienlecomte.net/yuicompressor/README) works like a charm. My .NET website application uses the website deployment project and is continuously built by TeamCity. A bit hackish perhaps, but it works.
|
Best packing strategy for js during continuous integration?
|
[
"",
"javascript",
"deployment",
"continuous-integration",
"minify",
""
] |
Google hosts some popular JavaScript libraries at:
<http://code.google.com/apis/ajaxlibs/>
According to google:
> The most powerful way to load the libraries is by using google.load() ...
What are the real advantages of using
`google.load("jquery", "1.2.6")`
vs.
`<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.2.6/jquery.min.js"></script>`
?
|
Aside from the benefit of Google being able to bundle multiple files together on the request, there is no perk to using google.load. In fact, if you know all libraries that you want to use (say just jQuery 1.2.6), you're possibly making the user's browser perform one unneeded HTTP connection. Since the whole point of using Google's hosting is to reduce bandwidth consumption and response time, the best decision - if you're just using 1 library - is to call that library directly.
Also, if your site will be using any SSL certificates, you want to plan for this by calling the script via Google's HTTPS connection. There's no downside to calling an https script from an http page, but calling an http script from an https page will cause more obscure debugging problems than you would want to think about.
|
1. It allows you to [dynamically load](http://code.google.com/apis/ajax/documentation/#Dynamic) the libraries in your code, wherever you want.
2. Because it lets you switch directly to a new version of the library in the javascript, without forcing you to rebuild/change templates all across your site.
|
What are advantages of using google.load('jQuery', ...) vs direct inclusion of hosted script URL?
|
[
"",
"javascript",
"optimization",
""
] |
We've had an ongoing need here that I can't figure out how to address using the stock Maven 2 tools and documentation.
Some of our developers have some very long running JUnit tests (usually stress tests) that under no circumstances should be run as a regular part of the build process / nightly build.
Of course we can use the surefire plugin's exclusion mechanism and just punt them from the build, but ideally we'd love something that would allow the developer to run them at will through Maven 2.
|
Normally you would add a profile to your maven configuration that runs a different set of tests:
run this with mvn -Pintegrationtest install
```
<profile>
<id>integrationtest</id>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<configuration>
<argLine>-client -Xmx896m -XX:MaxPermSize=192m</argLine>
<forkMode>once</forkMode>
<includes>
<include>**/**/*Test.java</include>
<include>**/**/*IntTest.java</include>
</includes>
<excludes>
<exclude>**/**/*SeleniumTest.java</exclude>
</excludes>
</configuration>
</plugin>
</plugins>
</build>
<activation>
<property>
<name>integrationtest</name>
</property>
</activation>
</profile>
```
|
Adding to **krosenvold**'s answer, to ensure no unexpected behavior, make sure you also have a default profile, active by default, that *excludes* the integration or stress tests you want to run in your special profile.
```
<profile>
<id>normal</id>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<configuration>
<excludes>
<exclude>**/**/*IntTest.java</exclude>
</excludes>
</configuration>
</plugin>
</plugins>
</build>
<activation>
<activeByDefault>true</activeByDefault>
</activation>
</profile>
```
You will need to create a profile like this; simply listing the surefire plugin outside of a profile will override the profile should it be selected with:
```
mvn -P integrationtest clean install
```
|
Is there a way to separate long running (e.g. stress tests) out so they're not run by default in Maven 2?
|
[
"",
"java",
"maven-2",
"build-process",
"junit",
"build-automation",
""
] |
I am the tech intern for an online independent newspaper, and the writers on the staff are not tech-savvy. They don't quite understand how web pages work, and often they upload and include images straight from their digital cameras, or scanned from original media. These images become a burden when there are 10 images on the front page at 3.5 MB each.
We are trying to work out some sort of training method for teaching them how to resize and optimize the images they want to include in their articles, but like I said, they are not very tech savvy, and any method we attempt to employ may go way over their heads.
So, I wanted to know if it is outside of reason to attempt to resample and cache the images included in the articles using a PHP function and the GD library, in order to streamline the amount of data that has to be passed per article.
I think it's possible; I'm just trying to figure out whether it would be worth taking the time and effort to train the writers, or whether creating an automated process would be better.
|
You'd be better off doing the GD image processing during the upload process. GD can take up quite a bit of resources, so processing each image on every request would not be a preferable solution. If you can't do it during the upload process, you should cache all the resampled images and use those if/when available.
|
It's certainly possible, and I'd be very surprised if Joomla! doesn't already have modules that do just that.
|
PHP Replace Images with GD Library Resampled Images in Joomla
|
[
"",
"php",
"image",
"joomla",
"gd",
""
] |
I'd like to find a way to do a SQL query that will calculate the CIDR (bit-count) representation of a subnet mask stored in the database. So for example, I've got either 255.255.255.0 or its decimal value (4294967040) stored in the database. I'd like to do a select and get back the /24 representation via the query.
I've done things like the following to determine the last IP of a subnet, so I'm hoping to do something similar to determine the CIDR representation of a mask.
```
select concat(inet_ntoa(ip_addr),'-',
inet_ntoa(ip_addr+(POWER(2,32)-ip_mask-1))) range
from subnets
order by ip_addr
```
Preferably this would be a SQL statement that would work under mysql, postgres, oracle etc.
|
I think I have found the solution to my issue. Here is what I have done:
```
select CONCAT(INET_NTOA(ip_addr),'/',32-log2((4294967296-ip_mask))) net
from subnets
order by ip_addr
```
Basically I subtract my decimal mask from 2^32 (4294967296). I then take log2 of that difference to get the number of host bits. Then I simply subtract that from 32 (the total bits available) to get the prefix length.
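The same arithmetic is easy to sanity-check outside the database; a quick Python sketch using the mask value from the question:

```python
import math

ip_mask = 4294967040                         # decimal form of 255.255.255.0
host_bits = int(math.log2(2**32 - ip_mask))  # 2^32 - mask = 256 -> 8 host bits
prefix = 32 - host_bits
print('/%d' % prefix)                        # /24
```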
Hope that helps others.
Thanks
|
SQL queries don't have a procedural looping construct (notwithstanding procedural language), but you can compare one set of rows to another set of rows, which is kind of like a loop.
You only have 32 possible subnet masks. In cases like this, it makes sense to create a small table that stores these 32 masks and the associated CIDR number.
```
CREATE TABLE cidr (
bits INT UNSIGNED PRIMARY KEY,
mask INT UNSIGNED NOT NULL
);
INSERT INTO cidr (bits) VALUES
( 1), ( 2), ( 3), ( 4), ( 5), ( 6), ( 7), ( 8), ( 9), (10),
(11), (12), (13), (14), (15), (16), (17), (18), (19), (20),
(21), (22), (23), (24), (25), (26), (27), (28), (29), (30),
(31), (32);
UPDATE cidr SET mask = ((POWER(2,32)-1)<<(32-bits)) & (POWER(2,32)-1);
SELECT CONCAT(s.ip_addr, '/', c.bits)
FROM cidr c JOIN subnets s ON (c.mask = inet_aton(s.ip_mask));
```
|
Using SQL to determine cidr value of a subnet mask
|
[
"",
"sql",
"subnet",
"bitmask",
"cidr",
""
] |
I'm pulling email address records from a table in SQL Server 2005, and want to build a single string to use as the `@recipients` list with `sp_send_dbmail`. The table has a field called EmailAddress and there are 10 records in the table.
I'm doing this:
```
DECLARE @email VARCHAR(MAX)
SELECT
@email = ISNULL(@email + '; ', '') + EmailAddress
FROM
accounts
```
Now @email has a semicolon-delimited list of the 10 email addresses from the accounts table.
My questions is why/how does this work? Why doesn't @email only have the last email address in the table?
|
Because for each row you concatenate the current value of `@email` with the next result in `EmailAddress`. The assignment is evaluated once per row, in sequence, so the variable accumulates every address rather than holding only the last one.
|
Say you have 3 addresses:
```
a@b.c
b@b.c
c@b.c
```
For the first row, `@email` is `NULL`, so it becomes `"" + "a@b.c"`, so `"a@b.c"`.
For the second row, `@email` becomes `"a@b.c" + "; " + "b@b.c"`, so `"a@b.c; b@b.c"`.
For the last row, `@email` becomes `"a@b.c; b@b.c" + "; " + "c@b.c"`, so `"a@b.c; b@b.c; c@b.c"`.
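The per-row evaluation behaves like a fold over the rows; here is a Python sketch of the same logic, where `ISNULL(@email + '; ', '')` becomes the conditional inside the loop:

```python
emails = ['a@b.c', 'b@b.c', 'c@b.c']

email = None                    # @email starts out NULL
for address in emails:          # one assignment per row, in sequence
    # ISNULL(@email + '; ', ''): NULL + anything is NULL in T-SQL,
    # so the prefix is '' on the first row and '<list so far>; ' after.
    email = ('' if email is None else email + '; ') + address

print(email)  # a@b.c; b@b.c; c@b.c
```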
|
Why does this SQL script work as it does?
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
On an ASP.NET page, I have a GridView populated with the results of a LINQ query. I'm setting the DataSource in code, then calling DataBind on it. In the GridView's RowDataBound event, I'm selectively hiding links in some GridView fields based on the query results. (For instance, I hide the "Show Parent" link of the row in question has no parent row.)
This works fine initially. But on postback (when I *don't* call DataBind, but the GridView stays populated through ViewState), the data displays, but the RowDataBound event (obviously) doesn't fire, and my links don't get hidden.
What's the best way to get the links to be hidden after a postback?
|
Here's how I ended up solving this:
1. I created a serializable class with readonly properties: PK of a row, and a boolean for each link indicating whether it's enabled or not. We'll call it `LinkVisibility`.
2. I created a serializable class inheriting from KeyedCollection to hold instances of the class above.
3. I created a ViewState-backed property holding an instance of that collection.
4. In my Search procedure (populating the GridView), I clear the collection.
5. In RowDataBound, which initially shows/hides the links, I add a `LinkVisibility` instance to the collection for each row.
6. In Page.Load, when IsPostBack is true, I loop through the GridView rows. I look up the `LinkVisibility` for each one by PK in the collection (DataKeyNames is set in the GridView), and I set the links accordingly.
I don't know that this is the best way to do this, but it certainly does work, which is more than I can say for anything else I've tried.
|
The RowDataBound event only fires when the GridView is actually data-bound during the postback. The binding is skipped when you don't rebind, so the grid isn't re-generating the exact same data unnecessarily. Use the RowCreated event to manipulate the HTML instead - it fires on every postback regardless of whether the data has changed.
|
GridView RowDataBound doesn't fire on postback
|
[
"",
"c#",
"asp.net",
"vb.net",
"gridview",
"postback",
""
] |
Currently I have the following code:
```
String select = qry.substring("select ".length(), qry.indexOf(" from "));
String[] attrs = select.split(",");
```
which works for the most parts but fails if given the following:
```
qry = "select a,b,c,DATETOSTRING(date_attr_name,'mm/dd/yyyy') from tbl_a";
```
What I'm looking for is the regex to feed to String.split() which will handle that situation, and for that matter, any other special cases you might think of that I'm missing.
|
```
[^,]+\([^\)]+\)|[^,]+,
```
Should do it nicely provided you always add a final ',' to your select string:
```
a,b,c,DATETOSTRING(date_attr_name,'mm/dd/yyyy'),f,gg,dr(tt,t,),fff
```
would fail to capture the last 'fff' attribute, but:
```
a,b,c,DATETOSTRING(date_attr_name,'mm/dd/yyyy'),f,gg,dr(tt,t,),fff,
```
would capture it. So a little pre-processing would smooth things out.
**Caveat**: this does not take into account expressions nested within expressions
```
EXP(arg1, EXP2(ARG11,ARG22), ARG2)
```
Tell me if that can happen in the queries you have to process.
**Caveat bis**: since this needs a true regexp and not the simple separator expected by split(), you must use a Matcher, based on the pattern `[^,]+\([^\)]+\)|[^,]+,`, and iterate on Matcher.find() to fill the array of attributes `attrs`.
In short, with the split() function, there is no single simple separator that will do the trick.
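The same Matcher-style iteration can be sketched in Python with `re.finditer` (Python rather than Java, with the final ',' appended as suggested above; the trailing commas on plain attributes are then stripped):

```python
import re

qry = "select a,b,c,DATETOSTRING(date_attr_name,'mm/dd/yyyy') from tbl_a"
select = qry[len('select '):qry.index(' from ')] + ','   # add the final ','

# First alternative grabs func(args...); second grabs a plain attr up to ','.
pattern = re.compile(r"[^,]+\([^)]+\)|[^,]+,")
attrs = [m.group(0).rstrip(',') for m in pattern.finditer(select)]
print(attrs)
# ['a', 'b', 'c', "DATETOSTRING(date_attr_name,'mm/dd/yyyy')"]
```

As with the Java version, this does not handle nested function calls.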
|
Your answer in the form of a quote:
> Some people, when confronted with a
> problem, think "I know, I'll use
> regular expressions." Now they have
> two problems. — Jamie Zawinski
Your regex would have to take into account all possible functions, nested functions, nested strings, etc. Your solution probably isn't a regex, it is a lexer+parser.
|
java regex to split an attribute list from an sql query into a String[] of attrs
|
[
"",
"java",
"regex",
""
] |
[**According to spec**](http://www.w3.org/TR/REC-html40/interact/scripts.html), only the `BODY` and `FRAMESET` elements provide an "onload" event to attach to, but I would like to know when a dynamically-created DOM element has been added to the DOM in JavaScript.
The super-naive heuristics I am currently using, which work, are as follows:
* Traverse the **parentNode** property of the element back until I find the ultimate ancestor (i.e. parentNode.parentNode.parentNode.etc until parentNode is null)
* If the ultimate ancestor has a defined, non-null **body** property
+ assume the element in question is part of the dom
* else
+ repeat these steps again in 100 milliseconds
What I am after is either confirmation that what I am doing is sufficient (again, it is working in both IE7 and FF3) or a better solution that, for whatever reason, I have been completely oblivious to; perhaps other properties I should be checking, etc.
---
EDIT: I want a browser-agnostic way of doing this, I don't live in a one-browser world, unfortunately; that said, browser-specific information is appreciated, but please note which browser you know that it *does* work in. Thanks!
|
UPDATE: For anyone interested in it, here is the implementation I finally used:
```
function isInDOMTree(node) {
// If the farthest-back ancestor of our node has a "body"
// property (that node would be the document itself),
// we assume it is in the page's DOM tree.
return !!(findUltimateAncestor(node).body);
}
function findUltimateAncestor(node) {
// Walk up the DOM tree until we are at the top (parentNode
// will return null at that point).
// NOTE: this will return the same node that was passed in
// if it has no ancestors.
var ancestor = node;
while(ancestor.parentNode) {
ancestor = ancestor.parentNode;
}
return ancestor;
}
```
The reason I wanted this is to provide a way of synthesizing the `onload` event for DOM elements. Here is that function (although I am using something slightly different because I am using it in conjunction with [**MochiKit**](http://www.mochikit.com/)):
```
function executeOnLoad(node, func) {
// This function will check, every tenth of a second, to see if
// our element is a part of the DOM tree - as soon as we know
// that it is, we execute the provided function.
if(isInDOMTree(node)) {
func();
} else {
setTimeout(function() { executeOnLoad(node, func); }, 100);
}
}
```
For an example, this setup could be used as follows:
```
var mySpan = document.createElement("span");
mySpan.innerHTML = "Hello world!";
executeOnLoad(mySpan, function(node) {
alert('Added to DOM tree. ' + node.innerHTML);
});
// now, at some point later in code, this
// node would be appended to the document
document.body.appendChild(mySpan);
// sometime after this is executed, but no more than 100 ms after,
// the anonymous function I passed to executeOnLoad() would execute
```
Hope that is useful to someone.
NOTE: the reason I ended up with this solution rather than [**Darryl's answer**](https://stackoverflow.com/questions/220188/how-can-i-determine-if-a-dynamically-created-dom-element-has-been-added-to-the-do/220224#220224) was because the getElementById technique only works if you are within the same document; I have some iframes on a page and the pages communicate between each other in some complex ways - when I tried this, the problem was that it couldn't find the element because it was part of a different document than the code it was executing in.
|
The most straightforward answer is to make use of the [`Node.contains`](https://developer.mozilla.org/en-US/docs/DOM/Node.contains) method, supported by Chrome, Firefox (Gecko), Internet Explorer, Opera, and Safari. Here is an example:
```
var el = document.createElement("div");
console.log(document.body.contains(el)); // false
document.body.appendChild(el);
console.log(document.body.contains(el)); // true
document.body.removeChild(el);
console.log(document.body.contains(el)); // false
```
Ideally, we would use `document.contains(el)`, but that doesn't work in IE, so we use `document.body.contains(el)`.
Unfortunately, you still have to poll, but checking whether an element is in the document yet is very simple:
```
setTimeout(function test() {
if (document.body.contains(node)) {
func();
} else {
setTimeout(test, 50);
}
}, 50);
```
If you're okay with adding some CSS to your page, here's another clever technique that uses animations to detect node insertions: <http://www.backalleycoder.com/2012/04/25/i-want-a-damnodeinserted/>
|
How can I determine if a dynamically-created DOM element has been added to the DOM?
|
[
"",
"javascript",
"dom",
""
] |
The subject doesn't say much, because this is not easy to ask in one line.
I have to execute a few programs which I read from the registry. I have to read from a field where somebody saves the whole paths and arguments.
I've been using System.Diagnostics.ProcessStartInfo setting the name of the program and its arguments but I've found a wide variety of arguments which I have to parse to save the process executable file in one field and its arguments in the other.
Is there a way to just execute the whole string as is?
|
I have tackled this the same way as the poster above, using cmd.exe with process start info.
```
Process myProcess = new Process();
myProcess.StartInfo.FileName = "cmd.exe";
myProcess.StartInfo.Arguments = "/C " + cmd;
myProcess.StartInfo.WindowStyle = ProcessWindowStyle.Hidden;
myProcess.StartInfo.CreateNoWindow = true;
myProcess.Start();
myProcess.WaitForExit();
myProcess.Close();
```
cmd /c carries out the command, and then terminates.
WaitForExit blocks until the command completes; the overload that takes a timeout (in milliseconds) stops waiting after that long, though it does not kill the process.
|
There are several, in fact.
1. You can call cmd.exe with /C [your command line] as the arguments. This causes cmd.exe to process your command, and then quit.
2. You could write the command to a batch file and launch that.
And of course there's the approach you're taking now, namely parsing the command line.
|
How do you run a program you don't know where the arguments start?
|
[
"",
"c#",
".net",
"registry",
"system.diagnostics",
"processstartinfo",
""
] |
I'm looking for Ruby's ActiveRecord for PHP. Something so simple that I just define my fields, extend the base ORM class, and get ACID operations for free. I should get default getters and setters without writing any code, but overriding a default getter or setter should be as easy as declaring get$fieldName or set$fieldName functions with the behavior I want. Symfony makes you create about 5 files per object, and all defined objects always load as far as I can tell. What is a better alternative? Why is it better? Can you put simple examples in your answers, please?
Doctrine is another ORM I've looked at besides Symfony's. There, too, you need to create YAML files that describe your data structures. The database already defines this stuff. What will just read my table defs without having to generate and store config files everywhere?
|
Both CodeIgniter (<http://codeigniter.com/user_guide/database/active_record.html>) and its PHP5 only fork Kohana (<http://docs.kohanaphp.com/libraries/orm>) contain implementations of the ActiveRecord pattern.
|
I'm a big fan of [Doctrine](http://www.doctrine-project.org/) which is a full featured ORM that will be replacing Propel as Symfony's default ORM.
It's got your basic ORM stuff you'd expect along with a full featured query builder that I've found to be wonderful.
It comes with a full suite of command line tools to manage your databases. For example, you can create your schemas and fixtures in YAML, have Doctrine generate classes based on your Schema, create the database, create the schema based on the models, then populate the database with your fixtures all with a single `./doctrine build-all-reload`.
It also includes support for database migrations and [recently updated](http://www.doctrine-project.org/blog/new-to-migrations-in-1-1) the migrations to automatically diff and generate your migration models.
**As per your doctrine complaints, you can run a command `./doctrine generate-models-db` or `./doctrine generate-yaml-db` to automatically create models and yaml files respectively from your current database setup.**
Other niceties include "[Behaviors](http://www.doctrine-project.org/documentation/manual/1_0?one-page#behaviors)" which makes life much easier when implementing certain, well, behaviors in your schema. For example you can add the "Timestampable" behavior to your class file. Doctine automatically adds a 'created\_at' and 'updated\_at' column, populates them, and every `$object->save()` you run automatically updates the 'updated\_at' column. More complex behaviors include i18n, table versioning, and trees (though really only NestedSet).
Personally I've been extremely enamored with Doctrine and rave about it every chance I get.
|
What is the easiest to use ORM framework for PHP?
|
[
"",
"php",
"orm",
""
] |
How do I access a page's HTTP response headers via JavaScript?
Related to [**this question**](https://stackoverflow.com/questions/220149/how-do-i-access-the-http-request-header-fields-via-javascript), which was modified to ask about accessing two specific HTTP headers.
> **Related:**
> [How do I access the HTTP request header fields via JavaScript?](https://stackoverflow.com/questions/220149/how-do-i-access-the-http-request-header-fields-via-javascript)
|
Unfortunately, there isn't an API to give you the HTTP response headers for your initial page request. That was the original question posted here. It has been [repeatedly asked](https://stackoverflow.com/questions/12258705/how-can-i-read-the-current-headers-without-making-a-new-request-with-js), too, because some people would like to get the actual response headers of the original page request without issuing another one.
# For AJAX Requests:
If an HTTP request is made over AJAX, it is possible to get the response headers with the **`getAllResponseHeaders()`** method. It's part of the XMLHttpRequest API. To see how this can be applied, check out the *`fetchSimilarHeaders()`* function below. Note that this is a work-around to the problem that won't be reliable for some applications.
```
myXMLHttpRequest.getAllResponseHeaders();
```
* The API was specified in the following candidate recommendation for XMLHttpRequest: [XMLHttpRequest - W3C Candidate Recommendation 3 August 2010](http://www.w3.org/TR/XMLHttpRequest/#the-getresponseheader-method)
* Specifically, the `getAllResponseHeaders()` method was specified in the following section: [w3.org: `XMLHttpRequest`: the `getallresponseheaders()` method](http://www.w3.org/TR/XMLHttpRequest/#the-getallresponseheaders()-method)
* The MDN documentation is good, too: [developer.mozilla.org: `XMLHttpRequest`](https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest).
This will not give you information about the original page request's HTTP response headers, but it could be used to make educated guesses about what those headers were. More on that is described next.
# Getting header values from the Initial Page Request:
This question was first asked several years ago, asking specifically about how to get at the original HTTP response headers for the *current page* (i.e. the same page inside of which the javascript was running). This is quite a different question than simply getting the response headers for any HTTP request. For the initial page request, the headers aren't readily available to javascript. Whether the header values you need will be reliably and sufficiently consistent if you request the same page again via AJAX will depend on your particular application.
The following are a few suggestions for getting around that problem.
## 1. Requests on Resources which are largely static
If the response is largely static and the headers are not expected to change much between requests, you could make an AJAX request for the same page you're currently on and assume that they are the same values which were part of the page's HTTP response. This could allow you to access the headers you need using the nice XMLHttpRequest API described above.
```
function fetchSimilarHeaders (callback) {
var request = new XMLHttpRequest();
request.onreadystatechange = function () {
if (request.readyState === XMLHttpRequest.DONE) {
//
// The following headers may often be similar
// to those of the original page request...
//
if (callback && typeof callback === 'function') {
callback(request.getAllResponseHeaders());
}
}
};
//
// Re-request the same page (document.location)
// We hope to get the same or similar response headers to those which
// came with the current page, but we have no guarantee.
// Since we are only after the headers, a HEAD request may be sufficient.
//
request.open('HEAD', document.location, true);
request.send(null);
}
```
This approach will be problematic if you truly have to rely on the values being consistent between requests, since you can't fully guarantee that they are the same. It's going to depend on your specific application and whether you know that the value you need is something that won't be changing from one request to the next.
## 2. Make Inferences
There are **some BOM properties** (Browser Object Model) which the browser determines by looking at the headers. Some of these properties reflect HTTP headers directly (e.g. `navigator.userAgent` is set to the value of the HTTP `User-Agent` header field). By sniffing around the available properties you might be able to find what you need, or some clues to indicate what the HTTP response contained.
## 3. Stash them
If you control the server side, you can access any header you like as you construct the full response. Values could be passed to the client with the page, stashed in some markup or perhaps in an inlined JSON structure. If you wanted to have every HTTP request header available to your javascript, you could iterate through them on the server and send them back as hidden values in the markup. It's probably not ideal to send header values this way, but you could certainly do it for the specific value you need. This solution is arguably inefficient, too, but it would do the job if you needed it.
|
It's not possible to read the current headers. You could make another request to the same URL and read its headers, but there is no guarantee that the headers are exactly equal to the current.
---
Use the following JavaScript code to get all the HTTP headers by performing a `get` request:
```
var req = new XMLHttpRequest();
req.open('GET', document.location, true);
req.send(null);
req.onload = function() {
var headers = req.getAllResponseHeaders().toLowerCase();
console.log(headers);
};
```
|
Accessing the web page's HTTP Headers in JavaScript
|
[
"",
"javascript",
"http",
"http-headers",
""
] |
In the `IDisposable.Dispose` method is there a way to figure out if an exception is being thrown?
```
using (MyWrapper wrapper = new MyWrapper())
{
throw new Exception("Bad error.");
}
```
If an exception is thrown in the `using` statement I want to know about it when the `IDisposable` object is disposed.
|
**No**, there is no way to do this in the .Net framework, you cannot figure out the current-exception-which-is-being-thrown in a finally clause.
See this [post on my blog](http://samsaffron.com/blog/archive/2007/08/06/16.aspx), for a comparison with a similar pattern in Ruby, it highlights the gaps I think exist with the IDisposable pattern.
Ayende has a trick that will allow you to [detect an exception happened](http://ayende.com/Blog/archive/2007/06/20/Did-you-know-Find-out-if-an-exception-was-thrown.aspx), however, it will not tell you which exception it was.
|
You can extend `IDisposable` with method `Complete` and use pattern like that:
```
using (MyWrapper wrapper = new MyWrapper())
{
throw new Exception("Bad error.");
wrapper.Complete();
}
```
If an exception is thrown inside the `using` statement, `Complete` will not be called before `Dispose`, so `Dispose` can tell the two cases apart.
If you want to know which exact exception was thrown, subscribe to the `AppDomain.CurrentDomain.FirstChanceException` event and store the last thrown exception in a `ThreadLocal<Exception>` variable.
This pattern is implemented in the `TransactionScope` class.
|
Intercepting an exception inside IDisposable.Dispose
|
[
"",
"c#",
".net",
"idisposable",
""
] |
I'm trying to figure out a decent solution (especially from the SEO side) for embedding fonts in web pages. So far I have seen [the W3C solution](http://web.archive.org/web/20100208164146/http://www.spoono.com/html/tutorials/tutorial.php?id=19), which doesn't even work on Firefox, and [this pretty cool solution](http://web.archive.org/web/20130127125919/http://wiki.novemberborn.net/sifr/How+to+use). The second solution is for titles only. Is there a solution available for full text? I'm tired of the standard fonts for web pages.
Thanks!
|
**Things have changed** since this question was originally asked and answered. There's been a large amount of work done on getting cross-browser font embedding for body text to work using @font-face embedding.
Paul Irish put together [Bulletproof @font-face syntax](http://paulirish.com/2009/bulletproof-font-face-implementation-syntax/) combining attempts from multiple other people. If you actually go through the entire article (not just the top) it allows a single @font-face statement to cover IE, Firefox, Safari, Opera, Chrome and possibly others. Basically this can feed out OTF, EOT, SVG and WOFF in ways that don't break anything.
Snipped from his article:
```
@font-face {
font-family: 'Graublau Web';
src: url('GraublauWeb.eot');
src: local('Graublau Web Regular'), local('Graublau Web'),
url("GraublauWeb.woff") format("woff"),
url("GraublauWeb.otf") format("opentype"),
url("GraublauWeb.svg#grablau") format("svg");
}
```
Working from that base, [Font Squirrel](http://www.fontsquirrel.com/) put together a variety of useful tools including the [**@font-face Generator**](http://www.fontsquirrel.com/fontface/generator) which allows you to upload a TTF or OTF file and get auto-converted font files for the other types, along with pre-built CSS and a demo HTML page. Font Squirrel also has [Hundreds of @font-face kits](http://www.fontsquirrel.com/fontface).
Soma Design also put together the [FontFriend Bookmarklet](http://somadesign.ca/projects/fontfriend/), which redefines fonts on a page on the fly so you can try things out. It includes drag-and-drop @font-face support in FireFox 3.6+.
More recently, Google has started to provide the [Google Web Fonts](http://www.google.com/webfonts), an assortment of fonts available under an Open Source license and served from Google's servers.
**License Restrictions**
Finally, WebFonts.info has put together a nice wiki'd list of [Fonts available for @font-face embedding](https://web.archive.org/web/20140713192153/http://www.webfonts.info/fonts-available-font-face-embedding) based on licenses. It doesn't claim to be an exhaustive list, but fonts on it should be available (possibly with conditions such as an attribution in the CSS file) for embedding/linking. **It's important to read the licenses**, because there are some limitations that aren't pushed forward obviously on the font downloads.
|
Try [Facetype.js](https://gero3.github.io/facetype.js/ "Facetype.js"), you convert your .TTF font into a Javascript file. Full SEO compatible, supports FF, IE6 and Safari and degrades gracefully on other browsers.
|
How to embed fonts in HTML?
|
[
"",
"javascript",
"html",
"flash",
"fonts",
"embed",
""
] |
I am curious as to why an implicit cast fails in...
```
int? someValue = SomeCondition ? ResultOfSomeCalc() : null;
```
and why I have to perform an explicit cast instead
```
int? someValue = SomeCondition ? ResultofSomeCalc() : (int?)null;
```
It seems to me that the compiler has all the information it need to make an implicit casting decision, no?
|
The relevant section of the C# 3.0 spec is 7.13, the conditional operator:
The second and third operands of the ?: operator control the type of the conditional expression. Let X and Y be the types of the second and third operands. Then,
If X and Y are the same type, then this is the type of the conditional expression.
Otherwise, if an implicit conversion (§6.1) exists from X to Y, but not from Y to X, then Y is the type of the conditional expression.
Otherwise, if an implicit conversion (§6.1) exists from Y to X, but not from X to Y, then X is the type of the conditional expression.
Otherwise, no expression type can be determined, and a compile-time error occurs.
|
I also am annoyed that it can't infer the type based on the assignment, especially when it's a value type. There are reasons, though, when you get into object hierarchies.
**If "ResultOfSomeCalc()" returned an "int?", then this would work**. C# needs to figure out the type regardless of what is to the left of the assignment. So you are telling it that you'll return a null or an int - and the logic in the compiler doesn't exist to have it substitute a Nullable<int> as a common denominator.
Notice that these variants DO work, and it may help you understand:
```
object someValue = true ? new Nullable<int>(ResultOfSomeCalc()) : null;
object someValue = true ? (int?)ResultOfSomeCalc() : null;
```
Hope this helps.
|
In C# why can't a conditional operator implicitly cast to a nullable type
|
[
"",
"c#",
"conditional-operator",
"nullable",
""
] |
I need to be able to compare some month names I have in an array.
It would be nice if there were some direct way like:
```
Month.toInt("January") > Month.toInt("May")
```
My Google searching seems to suggest the only way is to write your own method, but this seems like a common enough problem that I would think it would have been already implemented in .Net, anyone done this before?
|
`DateTime.ParseExact(monthName, "MMMM", CultureInfo.CurrentCulture ).Month`
Although, for your purposes, you'll probably be better off just creating a `Dictionary<string, int>` mapping the month's name to its value.
|
You could do something like this:
```
Convert.ToDateTime(month + " 01, 1900").Month
```
|
How to parse a month name (string) to an integer for comparison in C#?
|
[
"",
"c#",
"parsing",
"integer",
"compare",
""
] |
Scenario:
An **event** is raised in class A that needs to be handled by a **method** in class B. (currently via a delegate)
The data that gets passed from the **event** to the **method** is currently wrapped in class C.
This obviously requires class B to be dependent on class C.
Are there any techniques/refactorings that I can perform in order to remove this dependency?
*e.g. unrolling the data back to simple primitive data types and passing them directly.*
|
unrolling to primitives would work, but be certain that you really do want to remove this dependency. It is perfectly valid for classes A and B to both depend on C if C is a bridge between them, or if C feeds both of them, etc.
unrolling to primitives removes a compilation dependency, but not a data dependency, and may actually be "denormalizing" the design by removing an entity (class C) which is logically required
|
I agree with Steven Lowe; The dependency probably is valid. The only alternative I can offer is to depend on an interface instead of an actual class, but it pretty much boils down to the same thing.
|
Techniques to remove dependencies?
|
[
"",
"c#",
"class",
"dependencies",
""
] |
This is just a general question - I was sitting and waiting for a bit of software to compile (we use Incredibuild here but can still take 10/15 mins) and it got me wondering, does anyone know how long it took to compile Windows XP or Vista?
I did some googling but didn't really find any useful information
|
OP is asking about ***Windows***:
> "There are no other software projects
> like this," [Lucovsky](http://en.wikipedia.org/wiki/Mark_Lucovsky) said, "but the
> one thing that's remained constant
> [over the years] is how long it takes
> to build [Windows]. ***No matter which
> generation of the product, it takes 12
> hours to compile and link the system.***"
> Even with the increase in processing
> horsepower over the years, Windows has
> grown to match, and the development
> process has become far more
> sophisticated, so that Microsoft does
> more code analysis as part of the
> daily build. "The CPUs in the build
> lab are pegged constantly for 12
> hours," he said. "We've adapted the
> process since Windows 2000. Now, we
> decompose the source [code] tree into
> independent source trees, and use a
> new build environment. It's a
> multi-machine environment that lets us
> turn the crank faster. But because of
> all the new code analysis, it still
> takes 12 hours."
[SOURCE](http://web.archive.org/web/20100712104930/http://www.winsupersite.com/reviews/winserver2k3_gold2.asp)
Also see Mark Lucovsky classic [presentation](http://www.usenix.org/events/usenix-win2000/invitedtalks/lucovsky_html/sld001.htm) on developing Windows NT/2000.
I don't work at Microsoft, so I don't know for sure...
|
Third-hand information I have is that it takes about a day to complete a Windows build. Which is more or less in line with attempting to build your favorite OSS Operating System from scratch.
Building a modern operating system is a complex and difficult task. The only reason it doesn't take longer is that companies like Microsoft have build environments set up to help automate integration testing. Thus they can build a system with less manual effort than is involved in most OSS builds.
If you'd like to get a feel for what it takes to build an operating system, might I recommend the free eBook: [Linux from Scratch](http://www.linuxfromscratch.org/)
For a more automated build, try [Gentoo](http://www.gentoo.org/). Both options should give you a better idea of the Operating System build process.
|
Operating System compile time
|
[
"",
"c++",
"c",
"operating-system",
"compilation",
"buildfarm",
""
] |
I'm trying to create a C++ class, with a templated superclass. The idea being, I can easily create lots of similar subclasses from a number of superclasses which have similar characteristics.
I have distilled the problematic code as follows:
`template_test.h`:
```
template<class BaseClass>
class Templated : public BaseClass
{
public:
Templated(int a);
virtual int Foo();
};
class Base
{
protected:
Base(int a);
public:
virtual int Foo() = 0;
protected:
int b;
};
```
`template_test.cpp`:
```
#include "template_test.h"
Base::Base(int a)
: b(a+1)
{
}
template<class BaseClass>
Templated<BaseClass>::Templated(int a)
: BaseClass(a)
{
}
template<class BaseClass>
int Templated<BaseClass>::Foo()
{
return this->b;
}
```
`main.cpp`:
```
#include "template_test.h"
int main()
{
Templated<Base> test(1);
return test.Foo();
}
```
When I build the code, I get linker errors, saying that the symbols `Templated<Base>::Templated(int)` and `Templated<Base>::Foo()` cannot be found.
A quick Google suggests that adding the following to `main.cpp` will solve the problem:
```
template<> Templated<Base>::Templated(int a);
template<> int Templated<Base>::Foo();
```
But this does not solve the problem. Adding the lines to `template_test.cpp` does not work either. (Though, interestingly, adding them to both gives 'multiply defined symbol' errors from the linker, so they must be doing something...)
*However*, putting all the code in one source file does solve the problem. While this would be ok for the noddy example above, the real application I'm looking at would become unmanageable very fast if I was forced to put the whole lot in one cpp file.
Does anyone know if what I'm doing is even possible? (How) can I solve my linker errors?
I would assume that I could make all the methods in `class Templated` inline and this would work, but this doesn't seem ideal either.
|
With templated classes, the definitions must be available for each translation unit that uses it. The definitions can go in a separate file, usually with `.inl` or `.tcc` extension; the header file `#include`s that file at the bottom. Thus, even though it's in a separate file, it's still `#include`d for each translation unit; it cannot be standalone.
So, for your example, rename `template_test.cpp` to `template_test.inl` (or `template_test.tcc`, or whatever), then have `#include "template_test.inl"` (or whatever) at the bottom of `template_test.h`, just before the `#endif` of the include guard.
Hope this helps!
|
The problem is that when your Templated file is compiled, the compiler doesn't know what types it will need to generate code for, so it doesn't.
Then when you link, main.cpp says it needs those functions, but they were never compiled into object files, so the linker can't find them.
The other answers show ways to solve this problem in a portable way, in essence putting the definitions of the templated member functions in a place that is visible from where you instantiate instances of that class -- either through explicit instantiation, or putting the implementations in a file that is #included from main.cpp.
You may also want to read your compiler's documentation to see how they recommends setting things up. I know the IBM XLC compiler has some different settings and options for how to set these up.
|
Templated superclass linking problem
|
[
"",
"c++",
"templates",
""
] |
What is the best OS for Java development? People from Sun push Solaris; yes, Solaris has some extra features built in (dTrace, possibilities for performance-tuning the JVM, etc.). Some friends of mine ported their application to Solaris, and they told me the performance was brilliant. Still, I'm not happy about switching my OS to Solaris.
What were your experiences?
|
Of the three I've used (Mac OS X, Linux, Windows), I consider Linux the best place to do Java development.
My primary personal machine is a Mac, and I've done quite a lot of Java development there and been happy with it. Unfortunately, however, Apple lags behind the official JDK releases and you're pretty much limited to the few versions they choose to provide.
My employer-provided machine is an old P4 crate from HP which I use mostly to keep my feet warm. The real work occurs "Oberon", on a 2.6 GHz quad-core running *Ubuntu 8.04* in 32-bit mode [1]. The two advantages I notice day-to-day compared with Windows are:
1. A powerful command line, which helps me automate the boring little stuff.
2. **Far** superior file system performance. (I'm currently using EXT3 because I'm becoming conservative in my old age. I used ReiserFS previously, which was even faster for the sorts of operations one typically performs on large workspaces checked out of subversion.)
You can get those advantages from a mac too, but Linux offers another nice bonus:
* Remote X11: Before my $EMPLOYER provided e-mail and calendar via web, I had to be on the Windows box to read my mail and see my meetings, so I used Cygwin's X11. This allowed my to run the stuff on Linux but display it on my windows desktop.
---
[1] I used to run Ubuntu in 64-bit mode, but I had no end of trouble. (Mixing 64-bit and 32-bit is something Mac OS X does *much* better.) 7.04 worked fine running 32-bit applications on the 64-bit kernel. 7.10 broke the `linux32` script and the ability to install new 32-bit applications though old ones continued to (mostly) run. 8.04 killed 32-bit java by making it impossible to connect to the network from a 32-bit JVM (no more updates for Eclipse). Running Eclipse 64-bit didn't work reliably. The then current version of oXygen would only run (grudgingly) under the IBM 64-bit VM which would work for about 10 minutes until it stopped getting keyboard events. I finally gave up in frustration and used my Mac for a few months until I had enough slack time to do a 32-bit install of 8.04 on the linux box. Now everything works again and I'm quite happy.
|
Develop on whatever you like. As a java programmer you might want to avoid Mac OS X, primarily because new features seem to have been significantly delayed, and also because you can find you've no longer got a machine that supports the new versions of Java. Having said that I imagine developing on Mac OS X must be very nice (command line interface, dtrace, nice OS).
I develop on windows with IntelliJ 7. It's ok, but needs some hefty hardware. I then deploy onto solaris/linux. Unless you're writing GUI's or integrating with C++ code, you should be fine choosing whatever takes your fancy.
|
Best OS for java development?
|
[
"",
"java",
"operating-system",
"jvm",
"solaris",
""
] |
The company I work for has recently been hit with many header injection and file upload exploits on the sites we host and while we have fixed the problem with respect to header injection attacks, we have yet to get the upload exploits under control.
I'm trying to set up a plug-and-play-type series of upload scripts to use in-house that a designer can copy into their site's structure, modify a few variables, and have a ready-to-go upload form on their site. We're looking to limit our exposure as much as possible (we've already shut down fopen and shell commands).
I've searched the site for the last hour and found many different answers dealing with specific methods that rely on outside sources. What do you all think is the best script-only solution that is specific enough to use as a reliable method of protection? Also, I'd like to keep the language limited to PHP or pseudo-code if possible.
**Edit:** I've found my answer (posted below) and, while it does make use of the shell command exec(), if you block script files from being uploaded (which this solution does very well), you won't run into any problems.
|
The best solution, IMHO, is to put the directory containing the uploaded files outside of the "web" environment and use a script to make them downloadable. In this way, even if somebody uploads a script it can not be executed by calling it from the browser and you don't have to check the type of the uploaded file.
|
1. **Allow only authorized users to upload a file.** You can add a captcha as well to hinder primitive bots.
2. First of all, **set the `MAX_FILE_SIZE` in your upload form**, and set the **maximum file `size` and `count` on the server** as well.
```
ini_set('post_max_size', '40M'); //or bigger by multiple files
ini_set('upload_max_filesize', '40M');
ini_set('max_file_uploads', 10);
```
Do size check by the uploaded files:
```
if ($fileInput['size'] > $sizeLimit)
; //handle size error here
```
3. You should **use `$_FILES` and [`move_uploaded_file()`](https://stackoverflow.com/questions/16276835/php-move-uploaded-file-why-is-it-important)** to put your uploaded files into the right directory, or if you want to process it, then check with `is_uploaded_file()`. (These functions exist to prevent *file name injections* caused by `register_globals`.)
```
$uploadStoragePath = '/file_storage';
$fileInput = $_FILES['image'];
if ($fileInput['error'] != UPLOAD_ERR_OK)
; //handle upload error here, see http://php.net/manual/en/features.file-upload.errors.php
//size check here
$temporaryName = $fileInput['tmp_name'];
$extension = pathinfo($fileInput['name'], PATHINFO_EXTENSION);
//mime check, chmod, etc. here
$name = bin2hex(mcrypt_create_iv(32, MCRYPT_DEV_URANDOM)); //true random id
move_uploaded_file($temporaryName, $uploadStoragePath.'/'.$name.'.'.$extension);
```
Always **generate a random id instead of using the original file name**.
4. Create a **new *subdomain*** for example <http://static.example.com> or at least a new directory outside of the `public_html`, for the uploaded files. This *subdomain* or directory **should not execute any file**. Set it in the server config, or set [in a **`.htaccess` file**](https://stackoverflow.com/questions/18932756/disable-all-cgi-php-perl-for-a-directory-using-htaccess) by the directory.
```
SetHandler none
SetHandler default-handler
Options -ExecCGI
php_flag engine off
```
Set it [**with `chmod()`**](https://stackoverflow.com/questions/828172/how-do-i-secure-a-web-servers-image-upload-directory) as well.
```
$noExecMode = 0644;
chmod($uploadedFile, $noExecMode);
```
Use `chmod()` on the newly uploaded files too and set it on the directory.
5. You should **check the *mime type*** sent by the hacker. You should create a *whitelist* of allowed *mime types*. **Allow images only** if any other format is not necessary. Any other format is a security threat. Images too, but at least we have tools to handle them...
*Corrupted content* -- for example *HTML* in an image file -- can cause *XSS* in browsers with a [*content sniffing* vulnerability](https://stackoverflow.com/questions/18337630/what-is-x-content-type-options-nosniff). When the corrupted content is *PHP* code, it can be combined with an *eval injection* vulnerability.
```
$userContent = '../uploads/malicious.jpg';
include('includes/'.$userContent);
```
Try to avoid this, for example use a `class autoloader` instead of including php files manually...
To handle the *javascript injection*, you first have to **turn off *xss* and *content sniffing* in the browsers**. *Content sniffing* problems are typical of older *msie* versions; I think the other browsers filter them pretty well. Anyway, you can mitigate these problems with a bunch of headers. (Not fully supported by every browser, but that's the best you can do on the client side.)
```
Strict-Transport-Security: max-age={your-max-age}
X-Content-Type-Options: nosniff
X-Frame-Options: deny
X-XSS-Protection: 1; mode=block
Content-Security-Policy: {your-security-policy}
```
You can check if a file is corrupted with [`Imagick identify`](http://www.php.net/manual/en/imagick.identifyimage.php), but that does not mean a complete protection.
```
try {
$uploadedImage = new Imagick($uploadedFile);
$attributes = $uploadedImage->identifyImage();
    $format = $uploadedImage->getImageFormat();
var_dump($attributes, $format);
} catch (ImagickException $exception) {
//handle damaged or corrupted images
}
```
If you want to serve **other *mime types*, you should always *force download*** for them; never *include* them in webpages, unless you really know what you are doing...
```
X-Download-Options: noopen
Content-Disposition: attachment; filename=untrustedfile.html
```
6. It is possible to have valid image files with code inside them, for example in *exif* data. So you have to **purge *exif* from images**, if its content is not important to you. You can do that with *`Imagick`* or *`GD`*, but both of them requires repacking of the file. You can find an *`exiftool`* as an alternative.
I think the simplest way to clear *exif* is loading images with *GD* and [saving them as *PNG* with the highest quality](https://stackoverflow.com/questions/18531457/php-clear-png-metadata-but-keep-similar-quality). The images won't lose quality, and the *exif* tag will be purged, because *GD* cannot handle it. Do this with images uploaded as *PNG* too...
If you want to extract the *exif* data, never use `preg_replace()` if the `pattern` or `replacement` comes from the user, because that will lead to an *eval injection*... Use `preg_replace_callback()` instead of the `e` (eval) regex modifier, if necessary. (A common mistake in copy-pasted code.)
*Exif* data can be a problem if your site has an *eval injection* vulnerability, for example if you use `include($userInput)` somewhere.
7. **Never ever use `include()` or `require()` on uploaded files**; serve them as static content, or use `file_get_contents()`, `readfile()`, or any other file-reading function if you want to control access.
It is rarely available, but I think the best approach is to use the `X-Sendfile` header with the [sendfile apache module](https://tn123.org/mod_xsendfile/). In headers, never use user input without validation or sanitization, because that will lead to *HTTP header injection*.
If you don't need *access control* (meaning only authorized users can see the uploaded files), then serve the files directly with your webserver. It is much faster...
8. **Use an antivir** to check the uploaded files, if you have one.
9. Always **use a combined protection**, not just a single approach. It will be harder to breach your defenses...
|
What is the most secure method for uploading a file?
|
[
"",
"php",
"security",
"file-upload",
""
] |
At my work everyone has sql snippets that they use to answer questions. Some are specific to a customer, while some are generic for a given database. I want to consolidate those queries into a library/repository that can be accessed by anyone on the team. The requirements would be:
1. Accessible
2. Searchable
3. Tagable (multiple tags allowed per sql)
4. Exportable (create a document containing all queries with certain tags)
I'm interested in what has been found to work in other team environments.
|
You could use a wiki.
You could get started with something as simple as [Tiddly wiki](http://www.tiddlywiki.com/).
|
A wiki is a great approach.
For database-specific or project-specific snippets it's also very useful to have links to where a similar construct occurs in the code. We use trac's wiki, which gives nice integration with our SVN for this.
|
How do you maintain a library of useful SQL in a team environment?
|
[
"",
"sql",
""
] |
Is there a good way to determine if a person has a popup blocker enabled? I need to maintain a web application that unfortunately has tons of popups throughout it and I need to check if the user has popup blockers enabled.
The only way I've found to do this is to open a window from javascript, check to see if it's open to determine if a blocker is enabled and then close it right away.
This is slightly annoying since users who do not have it enabled see a small flash on the screen as the window opens and closes right away.
Are there any other non-obtrusive methods for accomplishing this?
|
Read [Detect a popup blocker using Javascript](http://www.visitor-stats.com/articles/detect-popup-blocker.php):
Basically you check if the 'window.open' method returns a handle to a newly-opened window.
Looks like this:
```
var mine = window.open('','','width=1,height=1,left=0,top=0,scrollbars=no');
var popUpsBlocked = !mine;  // a falsy handle means a blocker intervened
if (mine)
    mine.close();  // only close the probe window if it actually opened
```
|
As others have said, you'll have to try it and see, but checking for the resulting window object being non-"falsy" isn't sufficient for all browsers.
Opera still returns a `Window` object when a popup is blocked, so you have to examine the object sufficiently to determine if it's a real window:
```
var popup = window.open(/* ... */);
var popupBlocked = (!popup || typeof popup.document.getElementById == "undefined");
```
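A sketch combining both answers' checks — the falsy-handle test plus the Opera-style object probe — wrapped in one helper (the function name is made up for illustration, and a browser-like environment with `window.open` is assumed):

```javascript
// Detect whether a popup was blocked, covering browsers that return
// null/undefined and those (like Opera) that return a stub Window.
function isPopupBlocked() {
    var popup = window.open('', '', 'width=1,height=1,left=0,top=0,scrollbars=no');
    var blocked = !popup ||
        typeof popup.document === 'undefined' ||
        typeof popup.document.getElementById === 'undefined';
    if (popup && typeof popup.close === 'function') {
        popup.close(); // close the probe window so the flash is brief
    }
    return blocked;
}
```

This still opens a tiny probe window, so the brief flash the question mentions is unavoidable with this approach.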
|
Popup detection before user logs in
|
[
"",
"javascript",
"browser",
"popup",
""
] |
If you have a Property that gets and sets to an instance variable then normally you always use the Property from outside that class to access it.
My question is should you also always do so within the class? I've always used the Property if there is one, even within the class, but would like to hear some arguments for and against as to which is the most correct and why.
Or is it just a matter of coding standards being used on the project?
|
One of the stronger arguments for accessing local (class-scope) variables through properties is that you add a level of abstraction in your class. If you change **any** logic concerning how that field is stored, then the rest of your code will be left unaffected.
For example you might change that from a local variable to a property of a child object, to a database call, to a webservice call, to a static property on a class and so on. When making the change it gives you a single point of change, the property, and you do not have to update the rest of your class since they all use the property.
Also, using the property enables you to apply business rules to the value of the property instead of having to enforce the same rule at each location where you'd directly access the field. Again, encapsulation.
With the introduction of automatic properties there's even less reason to explicitly have a local variable, unless you need to apply business rules on the get/set
|
It depends on whether you want to apply any logic implemented within the property setter, and so you really have to decide on a case by case basis.
When you go directly to the private field, you know that the field is being set to exactly what you say.
When you go through the Property, the value gets set according to the setter logic, so you get any business rules or validation you want over values assigned to that field.
It's pretty hard to come up with a rule about when doing either is 'correct'; about the only one I'd say I follow is that in constructor initialisation I'd pretty much never use the Property.
|
Should you access a variable within the same class via a Property?
|
[
"",
"c#",
"variables",
"properties",
""
] |
Could you recommend a lightweight SQL database which doesn't require installation on a client computer to work and could be accessed easily from .NET application? Only basic SQL capabilities are needed.
Now I am using Access database in simple projects and distribute .MDB and .EXE files together. Looking for any alternatives.
|
Depends on what you mean by lightweight. Easy on RAM? Or lighter db file? Or lighter connector to connect to db? Or fewer files over all? I'll give a comparison of what I know:
```
no of files cumulative size of files db size
Firebird 2.5 5 6.82 MB 250 KB
SqlServerCe 4 7 2.08 MB 64 KB
Sqlite 3.7.11.0 1 0.83 MB 15 KB
VistaDb 4.3.3.34 1 1.04 MB 48 KB
no of files - includes the .net connector and excludes the db file
```
The dbs are of 1 table with 2 columns and 2 rows. Take the db sizes with a pinch of salt, as the dbs could grow differently with further use. For instance, `SqlServerCe`, though initially at 64 KB, didn't grow at all after adding a few hundred records, while `VistaDb` grew easily from 48 to 72 to 140 KB. SQLite was the best in that regard; it started from the lowest and grew linearly.
A few anecdotes: I had better performance using SqlServerCe with the factory settings, which means it's the easiest to get kick-started without any configuration, while I found Firebird a little bit harder to get started with due to a lack of online materials. Firebird, from what I could read, has the widest standard SQL compliance. While VistaDb is written in fully managed C#, which means it can be merged with your application's assembly to have one single file, it seemed slowest to me. Of all of them, considering performance, ease and size, I chose SQLite. SqlServerCe would be my second choice.
In short, each has its pluses and minuses. Again, take my rant with a pinch of salt; it's just my personal experience.
|
Check [SQLite](http://www.sqlite.org/), it's a software library that implements a self-contained, serverless, zero-configuration, transactional SQL database engine.
It has many [wrappers](http://www.sqlite.org/cvstrac/wiki?p=SqliteWrappers) for .NET
|
Lightweight SQL database which doesn't require installation
|
[
"",
"sql",
".net",
"database",
"ms-access",
""
] |
I have a C# Windows application which runs a service. I would like to leverage PowerShell in order to offer a command line management interface for administering my running service.
From my point of view, I am trying to offer the same administrative interface a la Exchange 2007.
Do you have any suggestion or sample code on how to start/design the management cmdlets in order "connect" to the running service to query or send administrative commands?
How can I access service's internal runtime state from powershell command line? For example I would like to implement a cmdlet called Get-ConnectionsInfo to find out how many outbound connections my Windows service is using when the cmdlet is executed.
A practical example or web link to an example would be more than welcomed.
Thanks,
Robert
|
The key here is to start by writing an API that you can access from .NET; then you can easily wrap the calls to this API in a DLL that exposes classes that are PowerShell cmdlets. If you want someone to be able to administer your service remotely, then I think probably the best thing is to create either a web service or a WCF service that does this, and then you can wrap that in PowerShell. If you have a look on CodePlex you will find some examples of administering remote web-based services like GoGrid and SQL Data Services that will give you some sample code to get you started.
|
A solution would be for your Windows Service to expose an administrative interface through WCF. Your PowerShell commands would use this WCF service to pull out information and make the Windows Service perform actions.
|
Send administrative commands to my C# Windows Service using own PowerShell CmdLets
|
[
"",
"c#",
"powershell",
"powershell-cmdlet",
""
] |
I need to apply some xml templates to various streams of xml data (and files, on occasion) and there seem to be a large number of xml libraries for java out there -- enough that it's difficult to quickly determine which libraries are still active, how they differ from the other options that are also active, and what criteria should be considered when choosing one.
What libraries do you use for manipulating xml in java, and why is it better than the alternatives?
|
Saxon is the XSLT and XQuery processor -- <http://saxon.sourceforge.net/>. It is built by a well-known XSLT expert (who was on the XSLT spec committee and has authored books on the subject). There is an open source version and a commercial version.
The XSLT piece gets continuously improved.
The other XSLT tool in Java is, of course, Xalan.
For XML itself there are many libraries; notable (well tested over the years) ones:
1) JDK XML parser -- DOM, SAX, StAX
2) Xerces -- from Apache
3) XOM -- if DOM doesn't work for you
4) JDOM -- one of the earlier popular open source tools
5) JAXB -- built into JDK 6 now
6) Woodstox -- a nice XML processor (read/write) -- <http://woodstox.codehaus.org/>
|
No one has mentioned JAXP, the Java API for XML Processing. It comes right out of the box with the JDK, with default XML library implementations.
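For the template-applying use case in the question, a minimal JAXP XSLT transform needs no external libraries at all. The stylesheet and input below are made up for illustration; JAXP will pick up whichever transformer implementation is on the classpath (the JDK's built-in one by default, or Saxon/Xalan if present):

```java
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;
import java.io.StringReader;
import java.io.StringWriter;

public class XsltDemo {
    public static void main(String[] args) throws Exception {
        // A tiny stylesheet that turns <greeting>X</greeting> into plain text.
        String xslt =
            "<xsl:stylesheet version='1.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
          + "<xsl:output method='text'/>"
          + "<xsl:template match='/greeting'>Hello, <xsl:value-of select='.'/>!</xsl:template>"
          + "</xsl:stylesheet>";
        String xml = "<greeting>world</greeting>";

        // Compile the stylesheet and run the input through it.
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(xslt)));
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
        System.out.println(out.toString()); // Hello, world!
    }
}
```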
|
What xml/xslt library(ies) currently work well for java?
|
[
"",
"java",
"xml",
"xslt",
""
] |
So, I want to export all my contacts from Outlook as vcards. If I google that, I get a bunch of shareware programs, but I want something free that just works.
If I'm to code it myself, I guess I should use the Microsoft.Office.Interop.Outlook assembly. Does anyone already have code to convert ContactItems to vcards?
**Edit:** I solved it in a completely different way, see answer below, but I have marked dok1.myopenid.com's answer as accepted because it answers my original question.
|
I solved it in a non-programmatically way:
* Selected all contacts in Outlook
* Forwarded them as cards to myself
* Saved all the attachments (vcards) in a folder, `c:\temp`
* Opened a command prompt and typed the command `copy /a *.vcf c:\allcards.vcf` which concatenates all vcards into one
|
For what it's worth - I just came across this thread looking for the same export to individual .VCF files from Outlook. I have 2007 (don't know if that makes a difference), but I selected all contacts and dragged them to a new email message to be added as individual .VCF files. After they were all added, I clicked in the attachments section of the new email, hit CTRL-A to highlight all of them, then left-click-dragged the first (and therefore all of them) to the folder I wanted the individual .VCF files in. After a few minutes of Outlook "thinking" about my 400 contacts, they were all there!
|
Export all contacts as vcards from Outlook
|
[
"",
"c#",
".net",
"outlook",
"vcf-vcard",
""
] |
I have a page with many forms on it; there could be 1..200. None of these forms have buttons and they are built programmatically. I am using jQuery to submit all the forms that are checked.
```
function FakeName()
{
$("input:checked").parent("form").submit();
}
```
My forms look like:
```
<form name="FakeForm<%=i%>" action="javascript:void%200" onSubmit="processRow(<%=i%>)" method="post" style="margin:0px;">
<input type="checkbox" name="FakeNameCheck" value="FakeNameCheck"/>
<input type="hidden" name="FakeNum" value="<%= FakeNum%>"/>
<input type="hidden" name="FakeId" value="<%=FakeIdr%>"/>
<input type="hidden" name="FakeAmt" value="<%=FakeAmount%>"/>
<input type="hidden" name="FakeTrans" value="FakeTrans"/>
</form>
```
Note: action is set to "javascript:void%200" so that it posts to a fake page. I want to handle my own posting in processRow.
OnSubmit never gets called and therefore ProcessRow never gets called.
Obviously all the names of the functions and variables have been changed to protect their identity :D
How can I get a function in each form to fire when I call submit programmatically.
|
The `onsubmit` handler is deliberately not triggered when you programmatically submit the form. This is to avoid infinite recursion if an event handler would cause the event to be triggered again (and therefore the event handler to be called again).
However, of course you can call the `processRow()` function yourself in place of the `.submit()` call.
You're allowed to have inputs outside of forms. One school of thought is that a `<form>` shouldn't be a `<form>` if it's not intended to be submitted to the server via HTML.
|
Look up [dispatchEvent](https://developer.mozilla.org/index.php?title=En/DOM/Element.dispatchEvent) and its equivalent [fireEvent](http://msdn.microsoft.com/en-us/library/ms536423(VS.85).aspx). It's not the easiest thing in the world to use, but I think that's what you are looking for.
I'm surprised that there's no library that helps with this easily. Prototype (the one I've used the most) comes closest with a .fire() method on elements.
|
How to get function to fire when buttonless form is submitted
|
[
"",
"javascript",
""
] |
I have a class with the following member functions:
```
/// caller pid
virtual pid_t Pid() const = 0;
/// physical memory size in KB
virtual uint64_t Size() const = 0;
/// resident memory for this process
virtual uint64_t Rss() const = 0;
/// cpu used by this process
virtual double PercentCpu() const = 0;
/// memory used by this process
virtual double PercentMemory() const = 0;
/// number of threads in this process
virtual int32_t Lwps() const = 0;
```
This class' duty is to return process information about the caller. Physical memory size can easily be determined by a sysctl call, and pid is trivial, but the remaining calls have eluded me, aside from invoking popen on ps or top and parsing the output - which isn't acceptable. Any help would be greatly appreciated.
Requirements:
Compiles on g++ 4.0
No obj-c
OSX 10.5
|
Process info comes from `pidinfo`:
```
cristi:~ diciu$ grep proc_pidinfo /usr/include/libproc.h
int proc_pidinfo(int pid, int flavor, uint64_t arg, void *buffer, int buffersize);
```
cpu load comes from `host_statistics`:
```
cristi:~ diciu$ grep -r host_statistics /usr/include/
/usr/include/mach/host_info.h:/* host_statistics() */
/usr/include/mach/mach_host.defs:routine host_statistics(
/usr/include/mach/mach_host.h:/* Routine host_statistics */
/usr/include/mach/mach_host.h:kern_return_t host_statistics
```
For more details, check out sources for `top` and `lsof`, they are open source (you need to register as an Apple developer but that's free of charge):
<https://opensource.apple.com/source/top/top-111.20.1/libtop.c.auto.html>
**Later edit:** All these interfaces are version specific, so you need to take that into account when writing production code (libproc.h):
```
/*
* This header file contains private interfaces to obtain process information.
* These interfaces are subject to change in future releases.
*/
```
|
Since you say no Objective-C we'll rule out most of the MacOS frameworks.
You can get CPU time using getrusage(), which gives the total amount of User and System CPU time charged to your process. To get a CPU percentage you'd need to snapshot the getrusage values once per second (or however granular you want to be).
```
#include <stdio.h>
#include <sys/resource.h>

struct rusage r_usage;

if (getrusage(RUSAGE_SELF, &r_usage)) {
    /* ... error handling ... */
}

/* tv_usec holds microseconds, so zero-pad it to six digits */
printf("Total User CPU = %ld.%06ld\n",
       (long)r_usage.ru_utime.tv_sec,
       (long)r_usage.ru_utime.tv_usec);
printf("Total System CPU = %ld.%06ld\n",
       (long)r_usage.ru_stime.tv_sec,
       (long)r_usage.ru_stime.tv_usec);
```
There is an RSS field in the getrusage structure, but it appears to always be zero in MacOS X 10.5. [Michael Knight](http://miknight.blogspot.com/2005/11/resident-set-size-in-mac-os-x.html) wrote a blog post several years ago about how to determine the RSS.
|
Determine Process Info Programmatically in Darwin/OSX
|
[
"",
"c++",
"c",
"macos",
"operating-system",
"darwin",
""
] |
I understand that creating too many threads in an application isn't being what you might call a "good neighbour" to other running processes, since CPU and memory resources are consumed even if these threads are in an efficient sleeping state.
What I'm interested in is this: **How much memory (win32 platform) is being consumed by a sleeping thread?**
Theoretically, I'd assume somewhere in the region of 1 MB (since this is the default stack size), but I'm pretty sure it's less than this, though I'm not sure why.
Any help on this will be appreciated.
(*The reason I'm asking is that I'm considering introducing a thread-pool, and I'd like to understand how much memory I can save by creating a pool of 5 threads, compared to 20 manually created threads*)
|
I have a server application which is heavy in thread usage, it uses a configurable thread pool which is set up by the customer, and in at least one site it has 1000+ threads, and when started up it uses only 50 MB. The reason is that Windows *reserves* 1MB for the stack (it maps its address space), but it is not necessarily allocated in the physical memory, only a smaller part of it. If the stack grows more than that a page fault is generated and more physical memory is allocated. I don't know what the initial allocation is, but I would assume it's equal to the page granularity of the system (usually 64 KB). Of course, the thread would also use a little more memory for other things when created (TLS, TSS, etc), but my guess for the total would be about 200 KB. And bear in mind that any memory that is not frequently used would be unloaded by the virtual memory manager.
|
Adding to Fabio's comments:
Memory is your second concern, not your first. The purpose of a threadpool is usually to constrain the context switching overhead between threads that want to run concurrently, ideally to the number of CPU cores available.
A context switch is very expensive, often quoted at a few thousand to 10,000+ CPU cycles.
A little test on WinXP (32 bit) clocks in at about **15k private bytes per thread** (999 threads created). This is the initial committed stack size, plus any other data managed by the OS.
|
How much memory does a thread consume when first created?
|
[
"",
"c++",
"multithreading",
"winapi",
""
] |
Is there a way in C# or .NET in general to create an attribute on a method which triggers an event when the method is invoked? Ideally, I would be able to run custom actions before and after the invocation of the method.
I mean something like this:
```
[TriggersMyCustomAction()]
public void DoSomeStuff()
{
}
```
I am totally clueless how to do it or whether it is possible at all, but [System.Diagnostic.ConditionalAttribute](http://msdn.microsoft.com/en-us/library/system.diagnostics.conditionalattribute.aspx) might do a similar thing in the background. I am not sure though.
**EDIT**: I forgot to mention that due to the circumstances of my specific case, performance is not really an issue.
|
The only way I know how to do this is with [PostSharp](https://www.postsharp.net/). It post-processes your IL and can do things like what you asked for.
|
This concept is used in **[MVC](http://en.wikipedia.org/wiki/ASP.NET_MVC_Framework) web applications.**
The **.NET Framework 4.x** provides several attributes which trigger actions, e.g.: `ExceptionFilterAttribute` (handling exceptions), `AuthorizeAttribute` (handling authorization). Both are defined in `System.Web.Http.Filters`.
You could for instance define your own authorization attribute as follows:
```
public class myAuthorizationAttribute : AuthorizeAttribute
{
protected override bool IsAuthorized(HttpActionContext actionContext)
{
// do any stuff here
// it will be invoked when the decorated method is called
if (CheckAuthorization(actionContext))
return true; // authorized
else
return false; // not authorized
}
}
```
Then, in your **controller** class you decorate the methods which are supposed to use your authorization as follows:
```
[myAuthorization]
public HttpResponseMessage Post(string id)
{
// ... your code goes here
response = new HttpResponseMessage(HttpStatusCode.OK); // return OK status
return response;
}
```
Whenever the `Post` method is invoked, it will call the `IsAuthorized` method inside the `myAuthorization` Attribute *before* the code inside the `Post` method is executed.
If you return `false` in the `IsAuthorized` method, you signal that authorization is not granted and the execution of the method `Post` aborts.
---
To understand how this works, let's look into a different example: The **`ExceptionFilter`**, which allows filtering exceptions by using attributes, the usage is similar as shown above for the `AuthorizeAttribute` (you can find a more detailed description about its usage [here](https://learn.microsoft.com/en-us/aspnet/web-api/overview/error-handling/exception-handling)).
To use it, derive the `DivideByZeroExceptionFilter` class from the `ExceptionFilterAttribute` as shown [here](http://blog.karbyn.com/articles/handling-errors-in-web-api-using-exception-filters-and-exception-handlers/), and override the method `OnException`:
```
public class DivideByZeroExceptionFilter : ExceptionFilterAttribute
{
public override void OnException(HttpActionExecutedContext actionExecutedContext)
{
if (actionExecutedContext.Exception is DivideByZeroException)
{
actionExecutedContext.Response = new HttpResponseMessage() {
Content = new StringContent("A DIV error occured within the application.",
System.Text.Encoding.UTF8, "text/plain"),
StatusCode = System.Net.HttpStatusCode.InternalServerError
};
}
}
}
```
Then use the following demo code to trigger it:
```
[DivideByZeroExceptionFilter]
public void Delete(int id)
{
// Just for demonstration purpose, it
// causes the DivideByZeroExceptionFilter attribute to be triggered:
throw new DivideByZeroException();
// (normally, you would have some code here that might throw
// this exception if something goes wrong, and you want to make
// sure it aborts properly in this case)
}
```
Now that we know how it is used, we're mainly interested in the implementation. The following code is from the .NET Framework. It uses the interface `IExceptionFilter` internally as a contract:
```
namespace System.Web.Http.Filters
{
public interface IExceptionFilter : IFilter
{
// Executes an asynchronous exception filter.
// Returns: An asynchronous exception filter.
Task ExecuteExceptionFilterAsync(
HttpActionExecutedContext actionExecutedContext,
CancellationToken cancellationToken);
}
}
```
The `ExceptionFilterAttribute` itself is defined as follows:
```
namespace System.Web.Http.Filters
{
// Represents the attributes for the exception filter.
[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method,
Inherited = true, AllowMultiple = true)]
public abstract class ExceptionFilterAttribute : FilterAttribute,
IExceptionFilter, IFilter
{
// Raises the exception event.
// actionExecutedContext: The context for the action.
public virtual void OnException(
HttpActionExecutedContext actionExecutedContext)
{
}
// Asynchronously executes the exception filter.
// Returns: The result of the execution.
Task IExceptionFilter.ExecuteExceptionFilterAsync(
HttpActionExecutedContext actionExecutedContext,
CancellationToken cancellationToken)
{
if (actionExecutedContext == null)
{
throw Error.ArgumentNull("actionExecutedContext");
}
this.OnException(actionExecutedContext);
return TaskHelpers.Completed();
}
}
}
```
Inside `ExecuteExceptionFilterAsync`, the method `OnException` is called. Because you have overridden it as shown earlier, the error can now be handled by your own code.
---
There is also a commercial product available as mentioned in OwenP's answer, [PostSharp](http://www.postsharp.net/), which allows you to do that easily. [Here](http://doc.postsharp.net/method-interception#intercepting-method) is an example how you can do that with PostSharp. Note that there is an Express edition available which you can use for free even for commercial projects.
**PostSharp Example** (see the link above for full description):
```
public class CustomerService
{
[RetryOnException(MaxRetries = 5)]
public void Save(Customer customer)
{
// Database or web-service call.
}
}
```
Here the attribute specifies that the `Save` method is called up to 5 times if an exception occurs. The following code defines this custom attribute:
```
[PSerializable]
public class RetryOnExceptionAttribute : MethodInterceptionAspect
{
public RetryOnExceptionAttribute()
{
this.MaxRetries = 3;
}
public int MaxRetries { get; set; }
public override void OnInvoke(MethodInterceptionArgs args)
{
int retriesCounter = 0;
while (true)
{
try
{
args.Proceed();
return;
}
catch (Exception e)
{
retriesCounter++;
if (retriesCounter > this.MaxRetries) throw;
Console.WriteLine(
"Exception during attempt {0} of calling method {1}.{2}: {3}",
retriesCounter, args.Method.DeclaringType, args.Method.Name, e.Message);
}
}
}
}
```
|
C#: How to create an attribute on a method triggering an event when it is invoked?
|
[
"",
"c#",
".net",
"events",
"methods",
"attributes",
""
] |
I'm using jQuery to wire up some mouseover effects on elements that are inside an UpdatePanel. The events are bound in `$(document).ready` . For example:
```
$(function() {
$('div._Foo').bind("mouseover", function(e) {
// Do something exciting
});
});
```
Of course, this works fine the first time the page is loaded, but when the UpdatePanel does a partial page update, it's not run and the mouseover effects don't work any more inside the UpdatePanel.
What's the recommended approach for wiring stuff up in jQuery not only on the first page load, but every time an UpdatePanel fires a partial page update? Should I be using the ASP.NET ajax lifecycle instead of `$(document).ready`?
|
An UpdatePanel completely replaces the contents of the update panel on an update. This means that those events you subscribed to are no longer subscribed because there are new elements in that update panel.
What I've done to work around this is re-subscribe to the events I need after every update. I use `$(document).ready()` for the initial load, then use Microsoft's [`PageRequestManager`](https://learn.microsoft.com/en-us/previous-versions/bb311028%28v%3dvs.140%29) (available if you have an update panel on your page) to re-subscribe every update.
```
$(document).ready(function() {
// bind your jQuery events here initially
});
var prm = Sys.WebForms.PageRequestManager.getInstance();
prm.add_endRequest(function() {
// re-bind your jQuery events here
});
```
The `PageRequestManager` is a javascript object which is automatically available if an update panel is on the page. You shouldn't need to do anything other than the code above in order to use it as long as the UpdatePanel is on the page.
If you need more detailed control, this event passes arguments similar to how .NET events are passed arguments `(sender, eventArgs)` so you can see what raised the event and only re-bind if needed.
Here is the latest version of the documentation from Microsoft: [msdn.microsoft.com/.../bb383810.aspx](http://msdn.microsoft.com/en-us/library/bb383810.aspx)
---
A better option you may have, depending on your needs, is to use jQuery's [`.on()`](http://api.jquery.com/on/). Delegated handlers are more efficient than re-subscribing to DOM elements on every update. Read all of the documentation before you use this approach, however, since it may or may not meet your needs. There are a lot of jQuery plugins that would be unreasonable to refactor to use `.delegate()` or `.on()`, so in those cases you're better off re-subscribing.
|
```
<script type="text/javascript">
function BindEvents() {
    $(document).ready(function() {
        $(".tr-base").mouseover(function() {
            $(this).toggleClass("trHover");
        }).mouseout(function() {
            $(this).removeClass("trHover");
        });
    });
}
</script>
```
The area which is going to be updated.
```
<asp:UpdatePanel...
<ContentTemplate
<script type="text/javascript">
Sys.Application.add_load(BindEvents);
</script>
<!-- Stuff -->
</ContentTemplate>
</asp:UpdatePanel>
```
|
jQuery $(document).ready and UpdatePanels?
|
[
"",
"javascript",
"jquery",
"asp.net",
"asp.net-ajax",
"jquery-events",
""
] |
I'm reading *The C++ Programming Language.* In it Stroustrup states that `sizeof(char) == 1` and `1 <= sizeof(bool)`. The specifics depend on the implementation. Why would such a simple value as a boolean take the same space as a char?
|
In modern computer architectures, a byte is the smallest addressable unit of memory. Packing multiple bits into a byte requires extra bit-shift operations. At the compiler level, it's a trade-off of memory vs. speed requirements (and in high-performance software, those extra bit-shift operations can add up and slow down the application needlessly).
|
Because in C++ you can take the address of a boolean and most machines cannot address individual bits.
|
Why is a char and a bool the same size in c++?
|
[
"",
"c++",
"boolean",
""
] |
I have a simple application with the following code:
```
FileInfo[] files = (new DirectoryInfo(initialDirectory)).GetFiles();
List<Thread> threads = new List<Thread>(files.Length);
foreach (FileInfo f in files)
{
Thread t = new Thread(delegate()
{
Console.WriteLine(f.FullName);
});
threads.Add(t);
}
foreach (Thread t in threads)
t.Start();
```
Let's say the 'initialDirectory' directory contains 3 files. This application should then create 3 threads, with each thread printing one of the file names; however, instead each thread prints the name of the last file in the 'files' array.
Why is this? Why is the current file variable 'f' not getting captured by the anonymous method correctly?
|
The anonymous method keeps a **reference** to the variable in the enclosing block -- not the actual value of the variable.
By the time the methods are actually executed (when you start the threads) `f` has been assigned to point to the last value in the collection, so all 3 threads print that last value.
|
Here are some nice articles about anonymous methods in C# and the code that will be generated by compiler:
<http://blogs.msdn.com/oldnewthing/archive/2006/08/02/686456.aspx>
<http://blogs.msdn.com/oldnewthing/archive/2006/08/03/687529.aspx>
<http://blogs.msdn.com/oldnewthing/archive/2006/08/04/688527.aspx>
I think if you did:
```
foreach (FileInfo f in files)
{
FileInfo f2 = f; //variable declared inside the loop
Thread t = new Thread(delegate()
{
Console.WriteLine(f2.FullName);
});
threads.Add(t);
}
```
it would work the way you wanted it to.
|
C# 2.0 Threading Question (anonymous methods)
|
[
"",
"c#",
"multithreading",
".net-2.0",
"anonymous-methods",
""
] |
I am trying to make a copy of a database to a new database on the same server. The server is my local computer running SQL 2008 Express under Windows XP.
Doing this should be quite easy using the SMO.Transfer class and it almost works!
My code is as follows (somewhat simplified):
```
Server server = new Server("server");
Database sourceDatabase = server.Databases["source database"];
Database newDatbase = new Database(server, "new name");
newDatbase.Create();
Transfer transfer = new Transfer(sourceDatabase);
transfer.CopyAllObjects = true;
transfer.Options.WithDependencies = true;
transfer.DestinationDatabase = newDatbase.Name;
transfer.CopySchema = true;
transfer.CopyData = true;
StringCollection transferScript = transfer.ScriptTransfer();
using (SqlConnection conn = new SqlConnection(connectionString))
{
conn.Open();
using (SqlCommand switchDatabase = new SqlCommand("USE " + newDatbase.Name, conn))
{
switchDatabase.ExecuteNonQuery();
}
foreach (string scriptLine in transferScript)
{
using (SqlCommand scriptCmd = new SqlCommand(scriptLine, conn, transaction))
{
int res = scriptCmd.ExecuteNonQuery();
}
}
}
```
What I do here is first create a new database, then generate a copy script using the `Transfer` class, and finally run the script in the new database.
This works fine for copying the structure, but the `CopyData` option doesn't work!
Are there any undocumented limits to the `CopyData` option? The documentation only says that the option specifies whether data is copied.
I tried using the `TransferData()` method to copy the databse without using a script but then I get an exception that says "Failed to connect to server" with an inner exception that says "A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)"
I also tried to enable Named Pipes on the server, but that doesn't help.
Edit:
I found a solution that works by making a backup and then restoring it to a new database. It's quite clumsy though, and slower than it should be, so I'm still looking for a better solution.
|
Well, after contacting Microsoft Support I got it working properly, but it is slow and more or less useless. Doing a backup and then a restore is much faster, and I will be using that as long as the new copy should live on the same server as the original.
The working code is as follows:
```
ServerConnection conn = new ServerConnection("rune\\sql2008");
Server server = new Server(conn);
Database newdb = new Database(server, "new database");
newdb.Create();
Transfer transfer = new Transfer(server.Databases["source database"]);
transfer.CopyAllObjects = true;
transfer.CopyAllUsers = true;
transfer.Options.WithDependencies = true;
transfer.DestinationDatabase = newdb.Name;
transfer.DestinationServer = server.Name;
transfer.DestinationLoginSecure = true;
transfer.CopySchema = true;
transfer.CopyData = true;
transfer.Options.ContinueScriptingOnError = true;
transfer.TransferData();
```
The trick was to set the DestinationDatabase property. This must be set even if the target is that same as the source. In addition I had to connect to the server as a named instance instead of using the other connection options.
|
Try setting [**SetDefaultInitFields**](http://msdn.microsoft.com/en-us/library/ms210363.aspx) to true on the **Server** object.
I had the same issue with the SMO database object running slowly. I guess this is because SQL Server doesn't like to retrieve entire objects and collections at once, instead lazy-loading everything, causing a round-trip for each field, which for an entire database is pretty inefficient.
|
Using SMO to copy a database and data
|
[
"",
"c#",
".net",
"sql-server-2008",
"smo",
""
] |
I have a problem in some JavaScript that I am writing where the switch statement does not seem to be working as expected.
```
switch (msg.ResultType) {
case 0:
$('#txtConsole').val("Some Val 0");
break;
case 1:
$('#txtConsole').val("Some Val 1");
break;
case 2:
$('#txtConsole').text("Some Val 2");
break;
}
```
The ResultType is an integer value 0-2 and I can see that in FireBug. In all cases, the switch transfers control to the final break statement which means all the logic is completely skipped. What am I missing?
|
I'm sure that a switch uses === for comparison in ActionScript, and since JS and AS both follow the ECMAScript standard, I guess the same applies to JS. My guess is that the value is not actually a Number, but perhaps a String.
You could try using `parseInt(msg.ResultType)` in the switch, or use strings in the cases.
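The strict comparison is easy to demonstrate in plain JavaScript (hypothetical values standing in for `msg.ResultType`):

```javascript
function describe(resultType) {
  switch (resultType) {
    case 0:
      return "zero";
    case 1:
      return "one";
    default:
      return "no match";
  }
}

console.log(describe(1));                  // "one"
console.log(describe("1"));                // "no match" -- "1" !== 1
console.log(describe(parseInt("1", 10)));  // "one" again after converting
```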
|
I ran into a similar problem and the issue turned out to be that, whereas it was showing as an int value, the switch statement was reading it as a string variable. May not be the case here, but that is what happened to me.
|
JavaScript switch statement
|
[
"",
"javascript",
""
] |
I remember some rules from a time ago (pre-32bit Intel processors), when was quite frequent (at least for me) having to analyze the assembly output generated by C/C++ compilers (in my case, Borland/Turbo at that time) to find performance bottlenecks, and to safely mix assembly routines with C/C++ code. Things like using the SI register for the *this* pointer, AX being used for return values, which registers should be preserved when an assembly routine returns, etc.
Now I was wondering if there's some reference for the more popular C/C++ compilers (Visual C++, GCC, Intel...) and processors (Intel, ARM, ...), and if not, where to find the pieces to create one. Ideas?
|
You are asking about "application binary interface" (ABI) and calling conventions. These are typically set by operating systems and libraries, and enforced by compilers and linkers. Google for "ABI" or "calling convention." Some starting points from [Wikipedia](http://en.wikipedia.org/wiki/Calling_convention) and [Debian for ARM](http://wiki.debian.org/ArmEabiPort).
|
Agner Fog's "Calling Conventions" document summarizes, amongst other things, the Windows and Linux 64 and 32-bit ABIs: <http://www.agner.org/optimize/calling_conventions.pdf>. See Table 4 on p.10 for a summary of register usage.
One warning from personal experience: don't embed assumptions about the ABI in inline assembly. If you write a function in inline assembly that assumes return and/or parameter transfer in particular registers (e.g. eax, rdi, rsi), it will break if/when the function is inlined by the compiler.
|
Register allocation rules in code generated by major C/C++ compilers
|
[
"",
"c++",
"c",
"assembly",
"compiler-construction",
"cpu-registers",
""
] |
I was wondering about the best practices for indicating invalid argument combinations in Python. I've come across a few situations where you have a function like so:
```
def import_to_orm(name, save=False, recurse=False):
"""
:param name: Name of some external entity to import.
:param save: Save the ORM object before returning.
:param recurse: Attempt to import associated objects as well. Because you
need the original object to have a key to relate to, save must be
`True` for recurse to be `True`.
:raise BadValueError: If `recurse and not save`.
:return: The ORM object.
"""
pass
```
The only annoyance with this is that every package has its own, usually slightly differing `BadValueError`. I know that in Java there exists `java.lang.IllegalArgumentException` -- is it well understood that everybody will be creating their own `BadValueError`s in Python or is there another, preferred method?
|
I would just raise [ValueError](https://docs.python.org/3/library/exceptions.html#ValueError), unless you need a more specific exception..
```
def import_to_orm(name, save=False, recurse=False):
if recurse and not save:
raise ValueError("save must be True if recurse is True")
```
There's really no point in doing `class BadValueError(ValueError):pass` - your custom class is identical in use to [ValueError](https://docs.python.org/3/library/exceptions.html#ValueError), so why not use that?
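For completeness, a runnable sketch of that check and how a caller would handle it (the function body is stubbed out here):

```python
def import_to_orm(name, save=False, recurse=False):
    if recurse and not save:
        raise ValueError("save must be True if recurse is True")
    return name  # stand-in for the real ORM object

try:
    import_to_orm("widget", recurse=True)
except ValueError as e:
    print(e)  # save must be True if recurse is True

print(import_to_orm("widget", save=True, recurse=True))  # widget
```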
|
I would inherit from `ValueError`
```
class IllegalArgumentError(ValueError):
pass
```
It is sometimes better to create your own exceptions, but inherit from a built-in one, which is as close to what you want as possible.
If you need to catch that specific error, it is helpful to have a name.
|
Which exception should I raise on bad/illegal argument combinations in Python?
|
[
"",
"python",
"exception",
"arguments",
""
] |
How can I get the start and end positions of all matches using the `re` module? For example given the pattern `r'[a-z]'` and the string `'a1b2c3d4'` I'd want to get the positions where it finds each letter. Ideally, I'd like to get the text of the match back too.
|
```
import re
p = re.compile("[a-z]")
for m in p.finditer('a1b2c3d4'):
print(m.start(), m.group())
```
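The match object also carries `end()` and `span()`, so both the positions and the matched text asked for in the question come back in one pass:

```python
import re

# Collect (start, end, text) for every lowercase letter in the string.
matches = [(m.start(), m.end(), m.group())
           for m in re.finditer(r"[a-z]", "a1b2c3d4")]
print(matches)  # [(0, 1, 'a'), (2, 3, 'b'), (4, 5, 'c'), (6, 7, 'd')]
```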
|
Taken from
[*Regular Expression HOWTO*](http://docs.python.org/3.3/howto/regex.html)
> span() returns both start and end indexes in a single tuple. Since the
> match method only checks if the RE matches at the start of a string,
> start() will always be zero. However, the search method of RegexObject
> instances scans through the string, so the match may not start at zero
> in that case.
```
>>> p = re.compile('[a-z]+')
>>> print p.match('::: message')
None
>>> m = p.search('::: message') ; print m
<re.MatchObject instance at 80c9650>
>>> m.group()
'message'
>>> m.span()
(4, 11)
```
Combine that with:
In Python 2.2, the finditer() method is also available, returning a sequence of MatchObject instances as an iterator.
```
>>> p = re.compile( ... )
>>> iterator = p.finditer('12 drummers drumming, 11 ... 10 ...')
>>> iterator
<callable-iterator object at 0x401833ac>
>>> for match in iterator:
... print match.span()
...
(0, 2)
(22, 24)
(29, 31)
```
you should be able to do something on the order of
```
for match in re.finditer(r'[a-z]', 'a1b2c3d4'):
print match.span()
```
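Combining the two: `span()` (or `start()`/`end()`) gives the positions and `group()` gives the matched text, so for the exact example in the question (a Python 3 sketch):

```python
import re

# Collect (start, end, text) for every match of the pattern.
matches = [(m.start(), m.end(), m.group())
           for m in re.finditer(r'[a-z]', 'a1b2c3d4')]
print(matches)  # [(0, 1, 'a'), (2, 3, 'b'), (4, 5, 'c'), (6, 7, 'd')]
```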
|
Python Regex - How to Get Positions and Values of Matches
|
[
"",
"python",
"regex",
""
] |
I need to implement a SSO between a SharePoint site and a custom PHP-based site which resides on a different domain. I need to use the SharePoint user accounts to log in the PHP site. (I'll have a full control over the PHP source)
Is this possible? Any tips and tricks?
|
My assumption is you have full control over your php app/domain, but a different team is managing the sharepoint server. Also assume you can ask the sharepoint team to create a simple webpart for your SSO functionality.
If this is the case, you can ask the sharepoint team to create a webpart which has a link to your site on it. When the user clicks the link, the request is made back to the sharepoint server, the sharepoint server takes the user's logon name, encrypts it using your public key, and adds it to a url on your php site then sends this as a redirect back to the browser. So the location looks like this:
<https://your.php.domain/sso.php?logon=encrypted_users_logon_name&api_key=some_token>
Your sso.php script will verify the api\_key is a valid token from your sharepoint partner, and then decrypt the logon name of the user trying to get in. You can get more fancy, and have a callback on the sharepoint site to confirm the logon request is legitimate within some time window, or bake that into the encrypted logon name, but this is a barebones way to do it, assuming you trust requests coming from the sharepoint partner.
The sharepoint .net developers will probably be able to do any encryption you want, so pick an algorithm you can use on both php and .net sides and give them the key to use for encryption, and the format of the information to encrypt. something like n=logon\_name;expire=timestamp; then when you decrypt, if it is after the expire time then you deny the logon.
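The logon-plus-expiry scheme described above is language-agnostic. Here is a hedged sketch of the verification side in Python (the `n=...;expire=...` token format, the HMAC signature, and the shared key are all illustrative assumptions standing in for whatever algorithm you agree on with the SharePoint team, not a prescribed protocol):

```python
import hmac
import hashlib
import time

SHARED_KEY = b"key-agreed-with-the-sharepoint-team"  # assumption: pre-shared secret

def make_token(logon, now=None):
    # Illustrative format: n=logon;expire=timestamp, plus an HMAC signature.
    now = int(now if now is not None else time.time())
    payload = "n=%s;expire=%d" % (logon, now + 300)  # 5-minute window
    sig = hmac.new(SHARED_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return payload + ";sig=" + sig

def verify_token(token, now=None):
    now = int(now if now is not None else time.time())
    payload, _, sig = token.rpartition(";sig=")
    expected = hmac.new(SHARED_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None                      # tampered token or wrong key
    fields = dict(p.split("=", 1) for p in payload.split(";"))
    if now > int(fields["expire"]):
        return None                      # past the expire time: deny the logon
    return fields["n"]
```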
|
I don't know much about this area but hopefully this might help point you in the right direction.
**Investigate LDAP...**
You can set up PHP to use LDAP credentials. If your SharePoint site uses Active Directory, then you can expose this directory as an LDAP source and use that in the PHP application.
**Automated sign-in...**
Having the sign in happen automatically between each site is a very different matter. e.g. I'm logged into MOSS already, click on a link that goes to the PHP app and find that I'm already logged in there as well. For this you will need to investigate using something like Kerberos keys/authentication. It's a messy and difficult area.
|
SharePoint SSO with a PHP application on a different server?
|
[
"",
"php",
"sharepoint",
"single-sign-on",
""
] |
Here's a tricky one...
I have a webpage (called PageA) that has a header and then simply includes an iframe. Let's call the page within the iframe PageB. PageB simply has a bunch of thumbnails, but there are a lot, so you have to scroll down on PageA to view them all.
When I scroll down to the bottom of PageB and click on a thumbnail, it looks like it takes me to a blank page. What actually happens is that it brings up the image, but since the page that is just the image is much shorter, the scroll bar stays at the same location and doesn't adjust for it. I have to scroll up to the top of the page to view the picture.
Is there any way, when I click a link on a page that is within an iframe, to make the outer page's scroll bar go back up to the top?
thks,
ak
|
@mek after trying various methods, the best solution I've found is this:
In the outer page, define a scroller function:
```
<script type="text/javascript">
function gotop() {
scroll(0,0);
}
</script>
```
Then when you define the iframe, set an onload handler (which fires each time the iframe source loads i.e. whenever you navigate to a new page in the iframe)
```
<iframe id="myframe"
onload="try { gotop() } catch (e) {}"
src="http://yourframesource"
width="100%" height="999"
scrolling="auto" marginwidth="0" marginheight="0"
frameborder="0" vspace="0" hspace="0" >
</iframe>
```
The nice thing about this approach is it means you do not need to make any changes to the pages included in the iframe (and the iframe contents can happily be in another domain - no cross-site scripting issues).
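If you would rather not put the handler inline in the iframe markup, the same idea can be attached from script (a sketch; the scroll target is passed in as a parameter purely so the logic can be exercised outside a browser):

```javascript
function makeGotop(win) {
  // Returns an onload handler that scrolls the given window back to the top.
  return function () {
    try { win.scroll(0, 0); } catch (e) { /* ignore cross-frame errors */ }
  };
}

// In the page, attach it without an inline onload attribute:
// document.getElementById('myframe').onload = makeGotop(window);
```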
|
Javascript is your best bet. You can use the [scroll() method](http://www.java2s.com/Code/JavaScriptReference/Javascript-Methods/scrollSyntaxParametersandNote.htm) to scroll back up to the top of your IFRAME. Add a javascript handler in the body load so that each time you click a thumbnail, call a function that calls scroll() to scroll up.
|
asp.net IFrame scroll bar push to top . .
|
[
"",
"asp.net",
"javascript",
"html",
"iframe",
"scroll",
""
] |
I have a form that accepts a file upload in ASP.NET. I need to increase the max upload size above the 4 MB default.
I have found in certain places referencing the below code at [msdn](http://msdn.microsoft.com/en-us/library/system.web.configuration.httpruntimesection.maxrequestlength.aspx).
```
[ConfigurationPropertyAttribute("maxRequestLength", DefaultValue = )]
```
None of the references actually describe how to use it, and I have tried several things with no success. I only want to modify this attribute for certain pages that are asking for file upload.
Is this the correct route to take? And how do I use this?
|
This setting goes in your web.config file. It affects the entire application, though... I don't think you can set it per page.
```
<configuration>
<system.web>
<httpRuntime maxRequestLength="xxx" />
</system.web>
</configuration>
```
"xxx" is in KB. The default is 4096 (= 4 MB).
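That said, the setting can be scoped to a path with a `<location>` element, which gets close to per-page control (a sketch; the path and size here are illustrative):

```xml
<configuration>
  <location path="Upload.aspx">
    <system.web>
      <httpRuntime maxRequestLength="20480" /> <!-- 20 MB, in KB -->
    </system.web>
  </location>
</configuration>
```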
|
For IIS 7+, as well as adding the httpRuntime maxRequestLength setting you also need to add:
```
<system.webServer>
<security>
<requestFiltering>
<requestLimits maxAllowedContentLength="52428800" /> <!--50MB-->
</requestFiltering>
</security>
</system.webServer>
```
Or in IIS (7):
> * Select the website you want enable to accept large file uploads.
> * In the main window double click 'Request filtering'
> * Select "Edit Feature Settings"
> * Modify the "Maximum allowed content length (bytes)"
|
How to increase the max upload file size in ASP.NET?
|
[
"",
"c#",
".net",
"asp.net",
"file-upload",
""
] |
How would I go about creating a real-world form creation class that I can use to display a new form with as many fields of different types as I want (text inputs, drop-downs, and so on), all using OOP?
|
You definitely can. Consider a Form class which stores information about the form itself: the `method`, `action`, `enctype` attributes. Also throw in stuff like an optional heading and/or description text at the top. Of course you will also need an array of input elements. These could probably be put into their own class (though subclassing them for InputText, InputCheckbox, InputRadio may be a bit over the top). Here's a vague skeleton design:
```
class Form {
var $attributes, // array, with keys ['method' => 'post', 'action' => 'mypage.php'...]
$heading,
$description,
$inputs // array of FormInput elements
;
function render() {
$output = "<form " . /* insert attributes here */ ">"
. "<h1>" . $this->heading . "</h1>"
. "<p>" . $this->description . "</p>"
;
// wrap your inputs in whatever output style you prefer:
// ordered list, table, etc.
foreach ($this->inputs as $input) {
$output .= $input->render();
}
$output .= "</form>";
return $output;
}
}
```
The FormInput class would just need to store the basics, such as type, name, value, label. If you wanted to get tricky then you could apply validation rules which would then be converted to Javascript when rendering.
|
To be honest I wouldn't roll my own, considering there are a few mature form packages out there for PHP.
I use PEAR's HTML\_QuickForm package (<http://pear.php.net/manual/en/package.html.html-quickform.php>) for PHP4 sites.
For PHP5, I'd have a look into Zend\_Form (<http://framework.zend.com/manual/en/zend.form.html>).
For my quickform code, I use a helper class that lets me define forms using a config array. For example:
```
echo QuickFormHelper::renderFromConfig(array(
'name' => 'area_edit',
'elements' => array(
'area_id' => array('type' => 'hidden'),
'active' => array('type' => 'toggle'),
'site_name' => array('type' => 'text'),
'base_url' => array('type' => 'text'),
'email' => array('type' => 'text'),
'email_admin' => array('type' => 'text'),
'email_financial' => array('type' => 'text'),
'cron_enabled' => array('type' => 'toggle'),
'address' => array('type' => 'address'),
),
'groups' => array(
'Basic Details' => array('site_name', 'base_url'),
'Address Details' => array('address'),
'Misc Details' => array(), // SM: Display the rest with this heading.
),
'defaults' => $site,
'callback_on_success' => array(
'object' => $module,
'function' => 'saveSite',
),
));
```
Note that the above element types 'address' and 'toggle' are in fact multiple form fields (basically, meta-types). This is what I love about this helper class - I can define a standard group of fields with their rules (such as address, credit\_card, etc) and they can be used on lots of pages in a consistent fashion.
|
How to build a PHP form Dynamically with OOP?
|
[
"",
"php",
"oop",
"class",
"forms",
""
] |
(If anything here needs clarification/ more detail please let me know.)
I have an application (C#, 2.\* framework) that interfaces with a third-party webservice using SOAP. I used thinktecture's WSCF add-in against a supplied WSDL to create the client-side implementation. For reasons beyond my control the SOAP message exchange uses WSE2.0 for security (the thinktecture implementation had to be modified to include the WSE2.0 reference). In addition to the 'normal' data package I attach a stored X509 cert and a binary security token from a previous call to a different web service. We are using SSL encryption of some sort - I don't know the details.
All the necessary serialization/deserialization is contained in the web service client - meaning when control is returned to me after calling the client the entire XML string contained in the SOAP response is not available to me - just the deserialized components. Don't get me wrong - I think that's good because it means I don't have to do it myself.
However, in order for me to have something worth storing/archiving I am having to re-serialize the data at the root element. This seems like a waste of resources since my result was in the SOAP response.
**Now for my question:
How can I get access to a 'clear' version of the SOAP response so that I don't have to re-serialize everything for storage/archiving?**
Edit- My application is a 'formless' windows app running as a network service - triggered by a WebsphereMQ client trigger monitor. I don't *think* ASP.NET solutions will apply.
Edit - Since the consensus so far is that it doesn't matter whether my app is ASP.NET or not then I will give CodeMelt's (and by extension Chris's) solution a shot.
|
You can utilize SoapExtension from existing WSE2.0 framework to intercept the responses from the server.
```
public class MyClientSOAPExtension : SoapExtension
{
Stream oldStream;
Stream newStream;
// Save the Stream representing the SOAP request or SOAP response into
// a local memory buffer.
public override Stream ChainStream( Stream stream )
{
oldStream = stream;
newStream = new MemoryStream();
return newStream;
}
public override void ProcessMessage(SoapMessage message)
{
switch (message.Stage)
{
case SoapMessageStage.BeforeDeserialize:
// before the XML deserialized into object.
break;
case SoapMessageStage.AfterDeserialize:
break;
case SoapMessageStage.BeforeSerialize:
break;
case SoapMessageStage.AfterSerialize:
break;
default:
throw new Exception("Invalid stage...");
}
}
}
```
At the SoapMessageStage.BeforeDeserialize stage, you can read the data you want from oldStream (e.g. using an XmlReader). Then store that data somewhere for your own use, and also forward the old stream's data to newStream so the web service can use it in later stages, e.g. to deserialize the XML into objects.
[The sample of logging all the traffic for the web service from MSDN](http://msdn.microsoft.com/en-us/library/system.web.services.protocols.soapextension(VS.85).aspx)
|
Here is an example you can setup using Visual studio web reference to <http://footballpool.dataaccess.eu/data/info.wso?WSDL>
Basically, you must insert in the web service call chain an XmlReader spy that will reconstruct the raw XML.
I believe this way is somewhat simpler than using SoapExtensions.
This solution was inspired by <http://orbinary.com/blog/2010/01/getting-the-raw-soap-xml-sent-via-soaphttpclientprotocol/>
```
using System;
using System.Collections.Generic;
using System.Text;
using System.Net;
using System.IO;
using System.Reflection;
using System.Xml;
namespace ConsoleApplication1 {
public class XmlReaderSpy : XmlReader {
XmlReader _me;
public XmlReaderSpy(XmlReader parent) {
_me = parent;
}
/// <summary>
/// Extracted XML.
/// </summary>
public string Xml;
#region Abstract method that must be implemented
public override XmlNodeType NodeType {
get {
return _me.NodeType;
}
}
public override string LocalName {
get {
return _me.LocalName;
}
}
public override string NamespaceURI {
get {
return _me.NamespaceURI;
}
}
public override string Prefix {
get {
return _me.Prefix;
}
}
public override bool HasValue {
get { return _me.HasValue; }
}
public override string Value {
get { return _me.Value; }
}
public override int Depth {
get { return _me.Depth; }
}
public override string BaseURI {
get { return _me.BaseURI; }
}
public override bool IsEmptyElement {
get { return _me.IsEmptyElement; }
}
public override int AttributeCount {
get { return _me.AttributeCount; }
}
public override string GetAttribute(int i) {
return _me.GetAttribute(i);
}
public override string GetAttribute(string name) {
return _me.GetAttribute(name);
}
public override string GetAttribute(string name, string namespaceURI) {
return _me.GetAttribute(name, namespaceURI);
}
public override void MoveToAttribute(int i) {
_me.MoveToAttribute(i);
}
public override bool MoveToAttribute(string name) {
return _me.MoveToAttribute(name);
}
public override bool MoveToAttribute(string name, string ns) {
return _me.MoveToAttribute(name, ns);
}
public override bool MoveToFirstAttribute() {
return _me.MoveToFirstAttribute();
}
public override bool MoveToNextAttribute() {
return _me.MoveToNextAttribute();
}
public override bool MoveToElement() {
return _me.MoveToElement();
}
public override bool ReadAttributeValue() {
return _me.ReadAttributeValue();
}
public override bool Read() {
bool res = _me.Read();
Xml += StringView();
return res;
}
public override bool EOF {
get { return _me.EOF; }
}
public override void Close() {
_me.Close();
}
public override ReadState ReadState {
get { return _me.ReadState; }
}
public override XmlNameTable NameTable {
get { return _me.NameTable; }
}
public override string LookupNamespace(string prefix) {
return _me.LookupNamespace(prefix);
}
public override void ResolveEntity() {
_me.ResolveEntity();
}
#endregion
protected string StringView() {
string result = "";
if (_me.NodeType == XmlNodeType.Element) {
result = "<" + _me.Name;
if (_me.HasAttributes) {
_me.MoveToFirstAttribute();
do {
result += " " + _me.Name + "=\"" + _me.Value + "\"";
} while (_me.MoveToNextAttribute());
//Let's put cursor back to Element to avoid messing up reader state.
_me.MoveToElement();
}
if (_me.IsEmptyElement) {
result += "/";
}
result += ">";
}
if (_me.NodeType == XmlNodeType.EndElement) {
result = "</" + _me.Name + ">";
}
if (_me.NodeType == XmlNodeType.Text || _me.NodeType == XmlNodeType.Whitespace) {
result = _me.Value;
}
if (_me.NodeType == XmlNodeType.XmlDeclaration) {
result = "<?" + _me.Name + " " + _me.Value + "?>";
}
return result;
}
}
public class MyInfo : ConsoleApplication1.eu.dataaccess.footballpool.Info {
protected XmlReaderSpy _xmlReaderSpy;
public string Xml {
get {
if (_xmlReaderSpy != null) {
return _xmlReaderSpy.Xml;
}
else {
return "";
}
}
}
protected override XmlReader GetReaderForMessage(System.Web.Services.Protocols.SoapClientMessage message, int bufferSize) {
XmlReader rdr = base.GetReaderForMessage(message, bufferSize);
_xmlReaderSpy = new XmlReaderSpy((XmlReader)rdr);
return _xmlReaderSpy;
}
}
class Program {
static void Main(string[] args) {
MyInfo info = new MyInfo();
string[] rest = info.Cities();
System.Console.WriteLine("RAW Soap XML response :\n"+info.Xml);
System.Console.ReadLine();
}
}
}
```
|
How do I get access to SOAP response
|
[
"",
"c#",
"xml",
"web-services",
"soap",
"wse2.0",
""
] |
We're using Microsoft.Practices.CompositeUI.EventBroker to handle event subscription and publication in our application. The way that works is that you add an attribute to your event, specifying a topic name, like this:
```
[EventPublication("example", PublicationScope.Global)]
public event EventHandler Example;
```
then you add another attribute to your handler, with the same topic name, like this:
```
[EventSubscription("example", ThreadOption.Publisher)]
public void OnExample(object sender, EventArgs e)
{
...
}
```
Then you pass your objects to an EventInspector which matches everything up.
We need to debug this, so we're trying to create a debug class that subscribes to *all* the events. I can get a list of all the topic names... but only at runtime. So I need to be able to add attributes to a method at runtime, before we pass our debug object to the EventInspector.
How do I add attributes to a method at runtime?
|
What you are trying to achieve is quite complicated, so I will try to provide something just to get you started. This is what I think you would need to combine in order to achieve something:
1. Define an abstract class `AbstractEventDebugger`, with a method `Search` that searches all of the `event` members, and registers them with the EventInspector. Also, define a method `IdentifyEvent` that will allow you to identify the event that has called it (this depends on you - what parameters will have, etc.).
2. Define a `dynamic type` using `TypeBuilder` (as described [here](http://msdn.microsoft.com/en-us/library/system.reflection.emit.typebuilder.aspx)), that inherits from your class. This class would be the class of your `debugger` object.
3. Attach the Handlers to your class using `Reflection.Emit.MethodBuilder` (see [here](http://msdn.microsoft.com/en-us/library/system.reflection.emit.methodbuilder.aspx)), which will be calling the `IdentifyEvent` method from the parent class and,
4. `Reflection.Emit` the attributes on the handlers using `CustomAttributeBuilder` class (see [here](http://msdn.microsoft.com/en-us/library/system.reflection.emit.customattributebuilder.aspx)).
5. Create an instance of your `dynamic` class and send it to the EventInspector.
6. Fire it up `:)`
[Here](http://blogs.msdn.com/joelpob/archive/2004/03/31/105282.aspx) is a sample on how to create a method that calls something (Actually it's the classic "Hello world").
You will need to do a lot of tweaking in order to get it done well, but you will learn a lot about reflection.
Good luck!
|
Attributes are a compile-time feature (unless you are dealing with ComponentModel - but I suspect it is using reflection). As such, you cannot add attributes at runtime. It would be a similar question to "how do I add an extra method to a type at runtime?". In regular C# / .NET (pre-DLR), you can't.
|
How do I add attributes to a method at runtime?
|
[
"",
"c#",
"reflection",
"attributes",
"reflection.emit",
""
] |
Obviously (methinks), creating an index on a `BIT` column is unnecessary. However, if you had a column that you need to search in which every value is likely unique, like `BlogPost` or `StreetAddress` or something, then an index seems appropriate (again, methinks).
But what's the cutoff? What if you expect 10,000 rows and you'll have about 20 unique values among them. Should an index be created?
Thanks in advance.
|
The best answer to this is to profile your queries and see if the index improves your queries. The difficulty in answering this is that it is nearly impossible to generalize the behavior of the query optimizer.
That said, a rule-of-thumb is if your selectivity is 10% or less on a given query on a table, then you will most likely benefit from an index. So in your example, you *might* benefit from an index if your values are evenly distributed. However, considering that your table is small, your performance boost might be negligible.
This is not a hard and fast rule as there are a lot of factors that can change the 10% number, including the use of a clustered or other index types, size of the rows, if some columns not inline, query structure, etc.
Also keep in mind there is a significant performance penalty for inserting into a table with an index. If this table is frequently updated or appended, the speed boost from the index may be negated by the slower inserts and updates.
See the MSDN article on [Tablescan vs Index access](http://msdn.microsoft.com/en-us/library/aa224773(SQL.80).aspx).
Edit: As others have pointed out, your query may benefit from an index if you are performing aggregation queries, such as counting the number of times a particular value appears. You may also benefit if you frequently sort on a particular column.
|
James hit the nail on the head. I'll just add that even a bit column might benefit from an index depending on how you are using the table. For example, if you need to count the number of rows that have a 1 many times throughout the day, an index there could be useful. Indexes aren't always about finding a single record - they can also be used for aggregations.
|
When to create a new SQL Server Index?
|
[
"",
"sql",
"database",
"t-sql",
""
] |
I'm using MS SQL Server 2005. Is there a difference, to the SQL engine, between
```
SELECT * FROM MyTable;
```
and
```
SELECT ColA, ColB, ColC FROM MyTable;
```
When ColA, ColB, and ColC represent every column in the table?
If they are the same, is there a reason why you should use the 2nd one anyway? I have a project that's heavy on LINQ, and I'm not sure if the standard SELECT \* it generates is a bad practice, or if I should always use a .Select() on it to specify which cols I want.
EDIT: Changed "When ColA, ColB, and ColC are all the columns to the table?" to "When ColA, ColB, and ColC represent every column in the table?" for clarity.
|
Generally, it's better to be explicit, so `Select col1, col2 from Table` is better. The reason being that at some point, an extra column may be added to that table, and would cause unneeded data to be brought back from the query.
This isn't a hard and fast rule though.
|
1) The second one is more explicit about which columns are returned. The value of the 2nd one then is how much you value explicitly knowing which columns come back.
2) This involves potentially less data being returned when there are more columns than the ones explicitly used as well.
3) If you change the table by adding a new column, the first query changes and the second does not. If you have code like "for all columns returned do ..." then the results change if you use the first, but not the 2nd.
|
Is there a difference between Select * and Select [list each col]
|
[
"",
"sql",
"database",
"linq",
"linq-to-sql",
""
] |
How to 'group by' a query using an alias, for example:
```
select count(*), (select * from....) as alias_column
from table
group by alias_column
```
I get 'alias\_column' : INVALID\_IDENTIFIER error message. Why? How to group this query?
|
```
select
count(count_col),
alias_column
from
(
select
count_col,
(select value from....) as alias_column
from
table
) as inline
group by
alias_column
```
Grouping normally works if you repeat the respective expression in the GROUP BY clause. Just mentioning an alias is not possible, because the SELECT step is the last step in the execution of a query; grouping happens earlier, when alias names are not yet defined.
To GROUP BY the result of a sub-query, you will have to take a little detour and use a nested query, as indicated above.
|
Nest the query with the alias column:
```
select count(*), alias_column
from
( select empno, (select deptno from emp where emp.empno = e.empno) as alias_column
from emp e
)
group by alias_column;
```
|
Group by alias (Oracle)
|
[
"",
"sql",
"oracle",
""
] |
What's the best way to parse fragments of HTML in C#?
For context, I've inherited an application that uses a great deal of composite controls, which is fine, but a good deal of the controls are rendered using a long sequence of literal controls, which is fairly terrifying. I'm trying to get the application into unit tests, and I want to get these controls under tests that will find out if they're generating well formed HTML, and in a dream solution, validate that HTML.
|
If the HTML is XHTML compliant, you can use the built in System.Xml namespace.
|
Have a look at the [HTMLAgility](http://www.codeplex.com/htmlagilitypack) pack. It's very compatible with the .NET XmlDocument class, but it much more forgiving about HTML that's not clean/valid XHTML.
|
Parsing HTML Fragments
|
[
"",
"c#",
"asp.net",
"unit-testing",
"web-standards",
""
] |
Someone at work just asked for the reasoning behind having to wrap a wait inside a synchronized.
Honestly I can't see the reasoning. I understand what the javadocs say--that the thread needs to be the owner of the object's monitor, but why? What problems does it prevent? (And if it's actually necessary, why can't the wait method get the monitor itself?)
I'm looking for a fairly in-depth why or maybe a reference to an article. I couldn't find one in a quick google.
Oh, also, how does thread.sleep compare?
edit: Great set of answers--I really wish I could select more than one because they all helped me understand what was going on.
|
If the thread does not own the object's monitor when it calls Object.wait(), it will not be able to access the object to set up a notify listener until the monitor is released. Instead, it will be treated as a thread attempting to access a method on a synchronized object.
Or to put it another way, there is no difference between:
```
public void doStuffOnThisObject()
```
and the following method:
```
public void wait()
```
Both methods will be blocked until the object monitor is released. This is a feature in Java to prevent the state of an object from being updated by more than one thread. It simply has unintended consequences on the wait() method.
Presumably, the wait() method is not synchronized because that could create situations where the Thread has multiple locks on the object. (See [Java Language Specifications/Locking](http://java.sun.com/docs/books/jls/third_edition/html/memory.html#61803) for more info on this.) Multiple locks are a problem because the wait() method will only undo one lock. If the method were synchronized, it would guarantee that only the method's lock would be undone while still leaving a potential outer lock undone. This would create a deadlock condition in the code.
To answer your question on Thread.sleep(), Thread.sleep() does not guarantee that whatever condition you are waiting on has been met. Using Object.wait() and Object.notify() allows a programmer to manually implement blocking. The threads will unblock once a notify is sent that a condition has been met. e.g. A read from disk has finished and data can be processed by the thread. Thread.sleep() would require the programmer to poll if the condition has been met, then fall back to sleep if it has not.
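A minimal sketch of that blocking pattern (the class and flag names are illustrative): the waiter and the notifier both enter `synchronized` blocks on the same object, which is exactly the ownership requirement the question asks about.

```java
class WaitDemo {
    private final Object lock = new Object();
    private boolean done = false;

    void finish() {
        synchronized (lock) {          // must own the monitor to notify
            done = true;
            lock.notifyAll();
        }
    }

    void awaitDone() throws InterruptedException {
        synchronized (lock) {          // must own the monitor to wait
            while (!done) {            // loop guards against spurious wakeups
                lock.wait();           // releases the monitor while waiting
            }
        }
    }

    static boolean demo() {
        WaitDemo d = new WaitDemo();
        Thread t = new Thread(d::finish);
        t.start();
        try {
            d.awaitDone();             // blocks until finish() notifies
            t.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return d.done;
    }
}
```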
|
Lots of good answers here already. But just want to mention here that the other MUST DO when using wait() is to do it in a loop dependent on the condition you are waiting for in case you are seeing spurious wakeups, which in my experience do happen.
To wait for some other thread to change a condition to true and notify:
```
synchronized(o) {
while(! checkCondition()) {
o.wait();
}
}
```
Of course, these days, I'd recommend just using the new Condition object as it is clearer and has more features (like allowing multiple conditions per lock, being able to check wait queue length, more flexible schedule/interrupt, etc).
```
Lock lock = new ReentrantLock();
Condition condition = lock.newCondition();
lock.lock();
try {
while (! checkCondition()) {
condition.await();
}
} finally {
lock.unlock();
}
```
|
Can anyone explain thread monitors and wait?
|
[
"",
"java",
"multithreading",
"monitor",
""
] |
I am trying to write a query for SQL Server 2005 but I can't figure out how to do it. I have a table with the following fields:
**MessageID int
CategoryID int
Priority tinyint
MessageText NVARCHAR(MAX)**
I need a query that will return \* for each row that has the highest priority within a Category. For example, if I had the following data:
**MessageID, CategoryID, Priority, MessageText**
1, 100, 1, Error #1234 occurred
2, 100, 2, Error #243 occurred
3, 100, 3, Error #976 occurred
4, 200, 4, Error #194 occurred
5, 200, 1, Error #736 occurred
6, 300, 3, Error #54 occurred
7, 300, 2, Error #888 occurred
then the result would be:
**MessageID, CategoryID, Priority, MessageText**
3, 100, 3, Error #976 occurred
4, 200, 4, Error #194 occurred
6, 300, 3, Error #54 occurred
Notice that it returns one row per category, and that it is the row which had the highest priority for that Category.
Can anyone tell me how I can write this query?
|
Verified:
```
SELECT
highest_priority_messages.*
FROM
(
SELECT
m.MessageID
, m.CategoryID
, m.Priority
, m.MessageText
, Rank() OVER
(PARTITION BY m.CategoryID ORDER BY m.Priority DESC) AS p_rank
FROM [Message] m
GROUP BY
m.CategoryID
, m.Priority
, m.MessageID
, m.MessageText
) highest_priority_messages
WHERE
p_rank = 1
```
|
If you'd like to do it without all of the subqueries:
```
SELECT
MessageID,
CategoryID,
Priority,
MessageText
FROM
dbo.Messages M1
LEFT OUTER JOIN dbo.Messages M2 ON
M2.CategoryID = M1.CategoryID AND
M2.Priority > M1.Priority
WHERE
M2.MessageID IS NULL
```
You might have to adjust the query depending on how you want to handle ties. You didn't have any such examples, so I wasn't sure.
|
Semi-Tricky SQL Query
|
[
"",
"sql",
"sql-server",
""
] |
I read all over the place about how Spring encourages you to use interfaces in your code. I don't see it. There is no notion of interface in your spring xml configuration. What part of Spring actually encourages you to use interfaces (other than the docs)?
|
When you define an interface for your classes, it helps with dependency injection. Your Spring configuration files don't have anything about interfaces in them themselves -- you just put in the name of the class.
But if you want to inject another class that offers "equivalent" functionality, using an interface really helps.
For example, say you've got a class that analyzes a website's content, and you're injecting it with Spring. If the classes you're injecting it into know what the actual class is, then in order to change it out you'll have to change a whole lot of code to use a different concrete class. But if you created an `Analyzer` interface, you could just as easily inject your original `DefaultAnalyzer` as you could a mocked-up `DummyAnalyzer` or even another one that does essentially the same thing, like a `PageByPageAnalyzer` or anything else. In order to use one of those, you just have to change the classname you're injecting in your Spring config files, rather than go through your code changing classes around.
It took me about a project and a half before I really started to see the usefulness. Like most things (in enterprise languages) that end up being useful, it seems like a pointless addition of work at first, until your project starts to grow and then you discover how much time you saved by doing a little bit more work up front.
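To make this concrete, here is a minimal, Spring-free sketch of the `Analyzer` example from above. The `SiteChecker` consumer class is made up for illustration:

```java
// The consumer depends only on the interface, never a concrete class.
interface Analyzer {
    String analyze(String content);
}

class DefaultAnalyzer implements Analyzer {
    public String analyze(String content) { return "default:" + content; }
}

// A stand-in you might inject in tests instead.
class DummyAnalyzer implements Analyzer {
    public String analyze(String content) { return "dummy"; }
}

// Constructor injection: Spring (or a test) decides which Analyzer to pass.
class SiteChecker {
    private final Analyzer analyzer;
    SiteChecker(Analyzer analyzer) { this.analyzer = analyzer; }
    String check(String page) { return analyzer.analyze(page); }
}
```

Swapping `DefaultAnalyzer` for a `DummyAnalyzer` now only touches the wiring (the Spring XML), not `SiteChecker` itself.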
|
The [Dependency Inversion Principle](https://en.wikipedia.org/wiki/Dependency_inversion_principle) explains this well. In particular, figure 4.
> A. High level modules should not depend on low level modules. Both should depend upon abstractions.
>
> B. Abstraction should not depend upon details. Details should depend upon abstractions.
Translating the examples from the link above into java:
```
public class Copy {
private Keyboard keyboard = new Keyboard(); // concrete dependency
private Printer printer = new Printer(); // concrete dependency
public void copy() {
for (int c = keyboard.read(); c != Keyboard.EOF; c = keyboard.read()) {
printer.print(c);
}
}
}
```
Now with dependency inversion:
```
public class Copy {
private Reader reader; // any dependency satisfying the reader interface will work
private Writer writer; // any dependency satisfying the writer interface will work
public void copy() {
for (int c = reader.read(); c != Reader.EOF; c = reader.read()) {
writer.write(c);
}
}
public Copy(Reader reader, Writer writer) {
this.reader = reader;
this.writer = writer;
}
}
```
Now `Copy` supports more than just copying from a keyboard to a printer.
It is capable of copying from any `Reader` to any `Writer` without requiring any modifications to its code.
And now with Spring:
```
<bean id="copy" class="Copy">
<constructor-arg ref="reader" />
<constructor-arg ref="writer" />
</bean>
<bean id="reader" class="KeyboardReader" />
<bean id="writer" class="PrinterWriter" />
```
or perhaps:
```
<bean id="reader" class="RemoteDeviceReader" />
<bean id="writer" class="DatabaseWriter" />
```
|
spring and interfaces
|
[
"",
"java",
"spring",
""
] |
I know they're using a jQuery plugin, but I can't seem to find which one they used. In particular, what I'm looking for is autocomplete with exactly the same functionality as SO's autocomplete, where it will perform an AJAX command with each new word typed in and allow you to select one from a dropdown.
|
Note that the tag editor [has been completely re-written now](https://meta.stackexchange.com/questions/100669/feedback-wanted-improved-tag-editor), and no longer resembles the original, simple text box w/ suggestion drop-down that adorned the site for nearly three years.
If you're interested in the new form, see this Meta question: <https://meta.stackexchange.com/questions/102510/can-i-use-the-tag-textbox-script>
[Autocomplete](http://docs.jquery.com/UI/Autocomplete) is the plugin used originally, albeit with various tweaks and customizations made to it over the years.
|
You might also like this one:
* <http://code.google.com/p/jquery-autocomplete/>
Read the history here: <http://code.google.com/p/jquery-autocomplete/wiki/History>
|
How does StackOverflow's 'tags' textbox autocomplete work?
|
[
"",
"javascript",
"jquery",
"ajax",
"textbox",
"autocomplete",
""
] |
1. We currently just utilize soap webservices for all our communication but have been thinking about moving to WCF instead. What are the benefits of using it over an asmx service?
2. If we do go with a WCF service, can other languages still communicate with it? SOAP is standardized and all languages can interact with it.
3. Are there any really good examples of how to get started with WCF that show the benefits of it over soap?
**EDIT**
* I just found [this](https://stackoverflow.com/questions/216931/what-is-the-difference-between-an-aspnet-web-method-and-a-wcf-service) question which is quite helpful.
* The [Getting Started Tutorial](http://msdn.microsoft.com/en-us/library/ms734712.aspx) is great.
|
1. There's a bit of a learning curve with WCF, but once you learn it, it's no harder to implement than asmx web services. One advantage is you can easily switch protocols and serialization, from binary remoting all the way to [web protocols](http://msdn.microsoft.com/en-us/library/ms730294.aspx). It's also easy to host either in IIS or outside it.
2. Other languages can communicate with the web protocols. Binary, not so much...
3. I just dug into the [Getting Started Tutorial](http://msdn.microsoft.com/en-us/library/ms734712.aspx). It does a good job of showing the relative ease-of-use. From there, take a look at [Hosting](http://msdn.microsoft.com/en-us/library/ms730158.aspx) and more detailed [Features](http://msdn.microsoft.com/en-us/library/ms733103.aspx).
|
WCF is not a replacement for SOAP, and indeed, SOAP can be used as the serialization format between endpoints for communication. SOAP the standard also doesn't really define what goes in the message body, so in the case of WCF-to-WCF communication, a SOAP envelope is used, but the content is binary, so there's your primary advantage, at least in terms of performance. Another advantage is that the programming model of WCF is, or at least is intended to be, much more straightforward; especially since it sounds like you're really just using Web Services to communicate internally. In this case, most of the work would be up front, configuring endpoints (though consuming asmx WSDLs is obviously very easy in .NET).
I'm no expert in WCF, but what I do know has been from .NET user group presentations by Juval Lowy (who wrote the O'Reilly WCF book), [his site](http://www.idesign.net/idesign/DesktopDefault.aspx) has a decent amount of information in the way of screencasts and samples in the Resources section, in addition to the Microsoft sites (have you checked [Channel9](http://channel9.msdn.com)?).
---
|
What are the benefits of using WCF?
|
[
"",
"c#",
".net",
"wcf",
".net-3.5",
""
] |
In C++ I'd like to do something like:
```
int n = get_int_from_user();
char* matrix = new char[n][n];
matrix[0][0] = 'c';
//...
matrix[n][n] = 'a';
delete [][] matrix;
```
but of course this doesn't work. What is the best way to do something similar? I've seen some solutions to this but they seem pretty messy.
|
**The manual dynamic way:**
Let's say you want an array of width\*height, the most efficient way is to just use a single dimensional array:
```
char *matrix = new char[width*height];
```
To delete it:
```
delete[] matrix;
```
To access it:
```
char getArrayValue(char *matrix, int width, int row, int col)
{
    // row-major layout: row in [0, height), col in [0, width)
    return matrix[row * width + col];
}
```
To modify it:
```
void setArrayValue(char *matrix, int width, int row, int col, char val)
{
    matrix[row * width + col] = val;
}
```
---
**Boost Matrix:**
[Consider using boost::matrix](http://www.boost.org/doc/libs/1_36_0/libs/numeric/ublas/doc/matrix.htm) if you can have the dependency.
You could then tie into the [boost linear algebra](http://www.boost.org/doc/libs/1_36_0/libs/numeric/ublas/doc/index.htm) libraries.
Here is some [sample code of boost::matrix](http://www.boost.org/doc/libs/1_36_0/libs/numeric/ublas/doc/matrix.htm):
```
#include <boost/numeric/ublas/matrix.hpp>
using namespace boost::numeric::ublas;
matrix<char> m (3, 3);
for (unsigned i = 0; i < m.size1 (); ++ i)
for (unsigned j = 0; j < m.size2 (); ++ j)
m (i, j) = 3 * i + j;
```
---
**On the stack for some compilers:**
Some compilers actually allow you to create arrays on the stack with runtime-determined sizes. g++ is an example of such a compiler. You cannot do this in VC++, though.
So in g++ this is valid code:
```
int width = 10;
int height = 10;
int matrix[width][height];
```
Drew Hall mentioned that this C99 feature is called Variable Length Arrays (VLAs) and it can probably be turned on in any modern compiler.
|
I usually do something like this:
```
char *matrix = new char [width * height];
matrix[i * width + j] = 'c'; // same as matrix[i][j] = 'c';
delete [] matrix;
```
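If you'd rather avoid manual `new`/`delete` altogether, a thin wrapper over `std::vector` gives the same flat layout with automatic cleanup. This is just a sketch, not part of the original answer:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Row-major width*height matrix; the vector owns the storage (RAII),
// so there is no matching delete to forget.
class CharMatrix {
public:
    CharMatrix(int width, int height)
        : width_(width), data_(static_cast<std::size_t>(width) * height, '\0') {}

    char& at(int row, int col) { return data_[row * width_ + col]; }
    char at(int row, int col) const { return data_[row * width_ + col]; }

private:
    int width_;
    std::vector<char> data_;
};
```

Usage looks like `CharMatrix m(n, n); m.at(0, 0) = 'c';` and cleanup is automatic when `m` goes out of scope.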
|
Best way to represent a 2-D array in C++ with size determined at run time
|
[
"",
"c++",
"arrays",
"dynamic",
"matrix",
"multidimensional-array",
""
] |
I've been thinking of ways of providing syntactic sugar for a framework I have been working on. I want to deal with immutable objects exclusively.
### Say I have an immutable object and wish to create a modified version of it. Would, in your view, a non-instantiable class with a single static factory method break OO principles ?
> As an example using a String:
>
> ```
> public final class LOWERCASE {
>
> private LOWERCASE() {}
>
> public static String string( final String STRING ) {
>
> return STRING.toLowerCase();
> }
> }
> ```
>
> Therefore from this example I could write:
>
> ```
> String lowercaseString = LOWERCASE.string( targetString );
> ```
>
> Which I find very readable.
### Any provisos against such an approach?
|
I don't think it's a good idea to create one class per method. You could instead create a class containing only static methods, named e.g. `StringUtils`, and implement the methods there. This way you would call:
```
String lowerCaseString = StringUtils.lowercase( targetString );
```
This would also offer you IntelliSense help while you are typing. Otherwise your list of classes will grow too big; even for this simple example, you would need more than one `Lowercase` class to cater for cases where a `CultureInfo` must be taken into account.
I don't think this breaks OO principles in any way, or that it is bad design. In other languages, Ruby for example, you can add your methods directly to the String class. Methods that end with `!` denote that the original object is modified; all other methods return a modified copy. Ruby on Rails adds some methods to the String class, and there is some debate about whether this is a good technique or not. It is definitely handy, though.
|
Usually on immutable objects, I would have a method returning a modified version of the object. So if you have some immutable collection, it can have a sort() method, that returns a new collection that is sorted. However, in your String example this is not possible, since you cannot touch the String class.
Your approach is quite readable, and I think for edge cases like this, is perfectly fine. For immutable objects you write yourself, I'd have the method on the object itself.
[Eric Lippert's series on immutable objects in C#](http://blogs.msdn.com/ericlippert/archive/tags/Immutability/default.aspx) is quite good, by the way.
|
Java: Immutable to Immutable Conversions
|
[
"",
"java",
"static",
"frameworks",
"methods",
"factory",
""
] |
I have a class that defines a CallRate type. I need to add the ability to create multiple instances of my class by reading the data from a file.
I added a static method to my class CallRate that returns a `List<CallRate>`. Is it ok for a class to generate new instances of itself by calling one of its own constructors? It works, I just wonder if it's the proper thing to do.
```
List<CallRates> cr = CallRates.ProcessCallsFile(file);
```
|
It is perfectly fine for a static method to return object(s) of its own class.
For example, one of the .NET libraries does the same thing you did:
```
XmlReader reader = XmlReader.Create(filepathString);
```
|
Sure that's fine, even encouraged in some instances. There are several [design patterns that deal with object creation](http://en.wikipedia.org/wiki/Design_Patterns#Creational_patterns), and a few of them do just what you're describing.
|
Should a c# class generate instances of itself?
|
[
"",
"c#",
"class-design",
""
] |
Does the "for…in" loop in Javascript loop through the hashtables/elements in the order they are declared? Is there a browser which doesn't do it in order?
The object I wish to use will be declared *once* and will never be modified.
Suppose I have:
```
var myObject = { A: "Hello", B: "World" };
```
And I further use them in:
```
for (var item in myObject) alert(item + " : " + myObject[item]);
```
Can I expect 'A : "Hello"' to always come before 'B : "World"' in most decent browsers?
|
[Quoting John Resig](http://ejohn.org/blog/javascript-in-chrome/):
> Currently all major browsers loop over the properties of an object in the order in
> which they were defined. Chrome does this as well, except for a couple cases. [...]
> This behavior is explicitly left undefined by the ECMAScript specification.
> In ECMA-262, section 12.6.4:
>
> > The mechanics of enumerating the properties ... is implementation dependent.
>
> However, specification is quite different from implementation. All modern implementations
> of ECMAScript iterate through object properties in the order in which they were defined.
> Because of this the Chrome team has deemed this to be a bug and will be fixing it.
All browsers respect definition order [with the exception of Chrome](https://code.google.com/p/v8/issues/detail?id=164) and Opera, which do so only for non-numerical property names. In these two browsers the numerical properties are pulled in order, ahead of the first non-numerical property (this has to do with how they implement arrays). The order is the same for `Object.keys` as well.
This example should make it clear what happens:
```
var obj = {
"first":"first",
"2":"2",
"34":"34",
"1":"1",
"second":"second"
};
for (var i in obj) { console.log(i); };
// Order listed:
// "1"
// "2"
// "34"
// "first"
// "second"
```
The technicalities of this are less important than the fact that this may change at any time. Do not rely on things staying this way.
In short: **Use an array if order is important to you.**
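To make that advice concrete, a small sketch: keep the desired order in an array and index into the object through it (variable names reuse the question's example):

```javascript
// Plain-object key order is implementation-defined here, so carry the
// desired order explicitly in an array and look values up through it.
var myObject = { A: "Hello", B: "World" };
var order = ["A", "B"];

var lines = [];
for (var i = 0; i < order.length; i++) {
  lines.push(order[i] + " : " + myObject[order[i]]);
}
// lines is ["A : Hello", "B : World"] in every browser
```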
|
*Bumping this a year later...*
It is **2012** and the major browsers **still** differ:
```
function lineate(obj){
var arr = [], i;
for (i in obj) arr.push([i,obj[i]].join(':'));
console.log(arr);
}
var obj = { a:1, b:2, c:3, "123":'xyz' };
/* log1 */ lineate(obj);
obj.a = 4;
/* log2 */ lineate(obj);
delete obj.a;
obj.a = 4;
/* log3 */ lineate(obj);
```
[gist](https://gist.github.com/1551668) or [test in current browser](https://rawgit.com/dvdrtrgn/460db3c214ade506fe7123500b1b4c8c/raw/36aaeb687bc34280e8f1cb5410359271c629e9b6/domlineate.html)
Safari 5, Firefox 14
```
["a:1", "b:2", "c:3", "123:xyz"]
["a:4", "b:2", "c:3", "123:xyz"]
["b:2", "c:3", "123:xyz", "a:4"]
```
Chrome 21, Opera 12, Node 0.6, Firefox 27
```
["123:xyz", "a:1", "b:2", "c:3"]
["123:xyz", "a:4", "b:2", "c:3"]
["123:xyz", "b:2", "c:3", "a:4"]
```
IE9
```
[123:xyz,a:1,b:2,c:3]
[123:xyz,a:4,b:2,c:3]
[123:xyz,a:4,b:2,c:3]
```
|
Elements order in a "for (… in …)" loop
|
[
"",
"javascript",
"for-loop",
""
] |
I'm in the process of creating a class that stores metadata about a particular data source. The metadata is structured in a tree, very similar to how XML is structured. The metadata values can be integer, decimal, or string values.
I'm curious if there is a good way in C++ to store variant data for a situation like this. I'd like for the variant to use standard libraries, so I'm avoiding the COM, Ole, and SQL VARIANT types that are available.
My current solution looks something like this:
```
enum MetaValueType
{
MetaChar,
MetaString,
MetaShort,
MetaInt,
MetaFloat,
MetaDouble
};
union MetaUnion
{
char cValue;
short sValue;
int iValue;
float fValue;
double dValue;
};
class MetaValue
{
...
private:
MetaValueType ValueType;
std::string StringValue;
MetaUnion VariantValue;
};
```
The MetaValue class has various Get functions for obtaining the currently stored variant value, but it ends up making every query for a value a big block of if/else if statements to figure out which value I'm looking for.
I've also explored storing the value as only a string, and performing conversions to get different variant types out, but as far as I've seen this leads to a bunch of internal string parsing and error handling which isn't pretty, opens up a big old can of precision and data loss issues with floating point values, and still doesn't eliminate the query if/else if issue stated above.
Has anybody implemented or seen something that's cleaner to use for a C++ variant data type using standard libraries?
|
As of C++17, there’s [`std::variant`](http://en.cppreference.com/w/cpp/utility/variant).
If you can’t use that yet, you might want [Boost.Variant](http://www.boost.org/doc/libs/1_36_0/doc/html/variant.html). A similar, but distinct, type for modelling polymorphism is provided by [`std::any`](http://en.cppreference.com/w/cpp/utility/any) (and, pre-C++17, [Boost.Any](http://www.boost.org/doc/libs/1_36_0/doc/html/any.html)).
Just as an additional pointer, you can look for “[type erasure](https://stackoverflow.com/q/5450159/1968)”.
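For reference, here is a rough sketch of what a `std::variant`-based `MetaValue` could look like in C++17. The `describe` helper is made up; it just shows how `std::visit` replaces the big if/else chain:

```cpp
#include <cassert>
#include <string>
#include <type_traits>
#include <variant>

// One alternative per MetaValueType; the discriminator is stored for you.
using MetaValue = std::variant<char, short, int, float, double, std::string>;

// A single visitor handles every alternative; no manual type switch needed.
std::string describe(const MetaValue& v) {
    return std::visit([](const auto& value) -> std::string {
        if constexpr (std::is_same_v<std::decay_t<decltype(value)>, std::string>)
            return "string:" + value;
        else
            return "number:" + std::to_string(value);
    }, v);
}
```

The typed `Get` functions then become `std::get<int>(v)` or, for a non-throwing query, `std::get_if<int>(&v)`.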
|
While Konrad's answer (using an existing standardized solution) is certainly preferable to writing your own bug-prone version, the boost variant has some overheads, especially in copy construction and memory.
A common customized approach is the following modified Factory Pattern:
1. Create a Base interface for a generic object that also encapsulates the object type (either as an enum), or using 'typeid' (preferable).
2. Now implement the interface using a template `Derived` class.
3. Create a factory class with a templatized `create` function with the signature:
`template <typename _T> Base * Factory::create ();`
This internally creates a `Derived<_T>` object on the heap, and returns a dynamically cast pointer. Specialize this for each class you want implemented.
Finally, define a `Variant` wrapper that contains this `Base *` pointer and defines template get and set functions. Utility functions like `getType()`, `isEmpty()`, assignment and equality operators, etc can be appropriately implemented here.
Depending on the utility functions and the factory implementation, supported classes will need to support some basic functions like assignment or copy construction.
|
How to store variant data in C++
|
[
"",
"c++",
"variant",
""
] |
I'm writing an application that uses renaming rules to rename a list of files based on information given by the user. The files may be inconsistently named to begin with, or the filenames may be consistent. The user selects a list of files, and inputs information about the files (for MP3s, they would be Artist, Title, Album, etc). Using a rename rule (example below), the program uses the user-inputted information to rename the files accordingly.
However, if all or some the files are named consistently, I would like to allow the program to 'guess' the file information. That is the problem I'm having. What is the best way to do this?
Sample filenames:
```
Kraftwerk-Kraftwerk-01-RuckZuck.mp3
Kraftwerk-Autobahn-01-Autobahn.mp3
Kraftwerk-Computer World-03-Numbers.mp3
```
Rename Rule:
```
%Artist%-%Album%-%Track%-%Title%.mp3
```
The program should properly deduce the Artist, Track number, Title, and Album name.
Again, what's the best way to do this? I was thinking regular expressions, but I'm a bit confused.
|
Easiest would be to replace each `%Label%` with `(?<Label>.*?)`, and escape any other characters.
```
%Artist%-%Album%-%Track%-%Title%.mp3
```
becomes
```
(?<Artist>.*?)-(?<Album>.*?)-(?<Track>.*?)-(?<Title>.*?)\.mp3
```
You would then get each component into named capture groups.
```
Dictionary<string,string> match_filename(string rule, string filename) {
    Regex tag_re = new Regex(@"%(\w+)%");
    string pattern = tag_re.Replace(Regex.Escape(rule), @"(?<$1>.*?)");
    Regex filename_re = new Regex(pattern);
    Match match = filename_re.Match(filename);
    Dictionary<string,string> tokens =
        new Dictionary<string,string>();
    for (int counter = 1; counter < match.Groups.Count; counter++)
    {
        string group_name = filename_re.GroupNameFromNumber(counter);
        tokens.Add(group_name, match.Groups[counter].Value);
    }
    return tokens;
}
```
But if the user leaves out the delimiters, or if the delimiters can be contained within the fields, you could get some strange results. The pattern for `%Artist%%Album%` would become `(?<Artist>.*?)(?<Album>.*?)`, which is equivalent to `.*?.*?`. The pattern wouldn't know where to split.
This could be solved if you know the format of certain fields, such as the track-number. If you translate `%Track%` to `(?<Track>\d+)` instead, the pattern would know that any digits in the filename must be the `Track`.
|
Not the answer to the question you asked, but an [ID3 tag](http://en.wikipedia.org/wiki/ID3) reading library might be a better way to do this when you are using MP3s. A quick Google came up with: [C# ID3 Library](http://sourceforge.net/projects/csid3lib).
As for guessing which string positions hold the artist, album, and song title... the first thing I can think of is that if you have a good selection to work with, say several albums, you could first see which position repeats the most, which would be the artist, which repeats the second most (album) and which repeats the least (song title).
Otherwise, it seems like a difficult guess to make based solely on a few strings in the file name... could you ask the user to also input a matching expression for the file name that describes the order of the fields?
|
Pattern matching and placeholder values
|
[
"",
"c#",
"regex",
""
] |
I need to construct some rather simple SQL, I suppose, but as it's a rare event that I work with DBs these days I can't figure out the details.
I have a table 'posts' with the following columns:
> id, caption, text
and a table 'comments' with the following columns:
> id, name, text, post\_id
What would the (single) SQL statement look like which retrieves the captions of all posts which have one or more comments associated with it through the 'post\_id' key? The DBMS is MySQL if it has any relevance for the SQL query.
|
```
select p.caption, count(c.id)
from posts p join comments c on p.id = c.post_id
group by p.caption
having count (c.id) > 0
```
|
```
SELECT DISTINCT p.caption, p.id
FROM posts p,
comments c
WHERE c.post_ID = p.ID
```
I think using a join would be a lot faster than using the IN clause or a subquery.
|
SQL: Get all posts with any comments
|
[
"",
"sql",
"mysql",
""
] |
In C# I sometimes wish I could make special methods for certain "instantiations" of generic classes.
**UPDATE: The following code is just a dumb example of a more abstract problem - don't focus too much on time series, just the principles of "adding extra methods" for certain T**.
Example:
```
class Timeseries<T>
{
...
TimeSeries<T> Slice(...) { ... }
}
```
In the case where T is double, I would like some extra methods, like `Integrate()`, `Interpolate()` and so on, that only make sense for `double`, because I need to do arithmetic on them.
There are several ways to do this, but I cannot find one that I'm satisfied with.
**1. Inherit into a special class**
```
class TimeseriesDouble : Timeseries<double>
{
double Interpolate(...) { ... }
...
}
```
**cons:** `TimeseriesDouble.Slice()` will return a new `Timeseries<double>` object, now missing my special methods.
**2. External methods**
```
public static double Interpolate(Timeseries<double> ts, ...) { ... }
```
**cons:** Breaks with OO principles. And I don't want to put my methods away. Also, the methods might need private/protected state.
**3. Extension methods**
Same as 2, just with a nicer calling syntax.
**4. Common base class**
```
class TimeSeries_base { ... }
class TimeSeries<T> : TimeSeries_base { .. typesafe versions of methods .. }
class TimeSeriesDouble : TimeSeries_base { .. typesafe versions of methods .. }
```
**cons:** too much duplication of things from `TimeSeries_base` into the two subclasses. The base class might become just a place holder for utility functions for the sub classes.
**pro:** I can now do things like `List<TimeSeries_base>` dynamically.
**5. Just forget about a common class**
I.e., keep `Timeseries<T>` and `TimeseriesDouble` separate in the code.
**cons:** Then I don't get all the benefit of treating a `TimeseriesDouble` like a `TimeSeries<T>`, e.g. combining two timeseries with ZIP(A,B), where one happens to be of doubles.
---
**Any other ideas?**
Currently, I think I like the design (1) best.
|
You could always use the self-referential generics trick:
```
public class TimeSeries<T, U> where U : TimeSeries<T, U>
{
U Slice(...)
}
public class TimeSeriesDouble : TimeSeries<double, TimeSeriesDouble>
{
...
}
```
It can get a bit brain-bending, but it can work.
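For illustration, a minimal compilable sketch of the same self-referential pattern (written in Java here so it stands alone; the names are made up, and the C# version is structurally identical):

```java
// TS names the concrete subclass, so slice() can return the most-derived type.
abstract class Series<T, TS extends Series<T, TS>> {
    abstract TS slice();
}

class DoubleSeries extends Series<Double, DoubleSeries> {
    DoubleSeries slice() { return new DoubleSeries(); }

    // Extra method that exists only on the double-specialized series.
    double interpolate() { return 0.0; }
}
```

With this, `new DoubleSeries().slice().interpolate()` type-checks, because `slice()` keeps the specialized static type instead of decaying to the base.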
|
```
interface ITimeSeries<T> { ... }
abstract class TimeSeriesBase<TS> where TS : TimeSeriesBase<TS>
{ public TS Slice() { ... }
}
class TimeSeries<T>:TimeSeriesBase<TimeSeries<T>>,ITimeSeries<T> {}
class TimeSeriesDouble:TimeSeriesBase<TimeSeriesDouble>,ITimeSeries<double>
{ public double Interpolate() { ... }
}
```
|
Pattern for specialization of generic class in C#?
|
[
"",
"c#",
".net",
"design-patterns",
"generics",
""
] |
Suppose I have a SELECT statement that returns some set of results. Is there some way I can number my results in the following way:
> SELECT TOP 3 Name FROM PuppyNames ORDER BY NumberOfVotes
would give me...
> Fido
>
> Rover
>
> Freddy Krueger
...but I want...
> 1, Fido
>
> 2, Rover
>
> 3, Freddy Krueger
where of course the commas signify that the numbers are in their own column. [I am using SQL Server 2000.]
|
In Microsoft SQL Server 2005, you have the `ROW_NUMBER()` function which does exactly what you want.
If you are stuck with SQL Server 2000, the typical technique was to create a new temporary table to contain the result of your query, plus add an `IDENTITY` column and generate incremental values. See an article that talks about this technique here: <http://www.databasejournal.com/features/mssql/article.php/3572301/RowNumber-function-in-SQL-Server-2005.htm>
|
With SQL Server 2000 you need to use a correlated sub-query: each row's rank is the number of rows with a vote count at or below its own.
```
SELECT (
SELECT COUNT(*)
FROM PuppyNames b
WHERE b.NumberOfVotes <= a.NumberOfVotes
) AS Ranking
, a.Name
FROM PuppyNames a
ORDER BY a.NumberOfVotes
```
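A quick self-contained check of the correlated sub-query technique (Python's sqlite3 with made-up data, purely for illustration; the column name follows the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE PuppyNames (Name TEXT, NumberOfVotes INT)")
conn.executemany("INSERT INTO PuppyNames VALUES (?, ?)", [
    ("Fido", 10), ("Rover", 20), ("Freddy Krueger", 30),
])

# Each row's rank is the count of rows (itself included) with a vote count
# at or below its own.
rows = conn.execute("""
    SELECT (SELECT COUNT(*)
            FROM PuppyNames b
            WHERE b.NumberOfVotes <= a.NumberOfVotes) AS Ranking,
           a.Name
    FROM PuppyNames a
    ORDER BY a.NumberOfVotes
""").fetchall()
print(rows)  # [(1, 'Fido'), (2, 'Rover'), (3, 'Freddy Krueger')]
```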
|
SQL: Numbering the rows returned by a SELECT statement
|
[
"",
"sql",
"sql-server",
"sql-server-2000",
""
] |
I am on Vista 64-bit and I have a project built with the x86 configuration. All works fine. Now it is time to create tests. We have NUnit 2.4.8, but we are having a lot of problems.
The tests load through NUnit.exe (the GUI) when we select the .dll directly, but on execution we get a System.BadImageFormatException.
Searching Google I have found a few tricks about nunit.exe.config (changing to UTF-8, uncommenting the .NET version for startup), but none work.
Any idea?
**Update**
I have cleaned the solution and erased all bin folders. Now when I compile I clearly see that I only have /x86/ in the bin directory, and not the old /debug/ that was x64.
When I load the assembly in NUnit I get an exception (during loading): **System.IO.FileNotFoundException...**
Server stack trace:
at System.Reflection.Assembly.\_nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, Assembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection)
at System.Reflection.Assembly.InternalLoad(AssemblyName assemblyRef, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection)
at System.Reflection.Assembly.InternalLoad(String assemblyString, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection)
at System.Reflection.Assembly.Load(String assemblyString)
at NUnit.Core.Builders.TestAssemblyBuilder.Load(String path)
at NUnit.Core.Builders.TestAssemblyBuilder.Build(String assemblyName, Boolean autoSuites)
at NUnit.Core.Builders.TestAssemblyBuilder.Build(String assemblyName, String testName, Boolean autoSuites)
at NUnit.Core.TestSuiteBuilder.BuildSingleAssembly(TestPackage package)
at NUnit.Core.TestSuiteBuilder.Build(TestPackage package)
at NUnit.Core.SimpleTestRunner.Load(TestPackage package)
at NUnit.Core.ProxyTestRunner.Load(TestPackage package)
at NUnit.Core.ProxyTestRunner.Load(TestPackage package)
at NUnit.Core.RemoteTestRunner.Load(TestPackage package)
at System.Runtime.Remoting.Messaging.StackBuilderSink.\_PrivateProcessMessage(IntPtr md, Object[] args, Object server, Int32 methodPtr, Boolean fExecuteInContext, Object[]& outArgs)
at System.Runtime.Remoting.Messaging.StackBuilderSink.SyncProcessMessage(IMessage msg, Int32 methodPtr, Boolean fExecuteInContext)
Exception rethrown at [0]:
at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
at NUnit.Core.TestRunner.Load(TestPackage package)
at NUnit.Util.TestDomain.Load(TestPackage package)
at NUnit.Util.TestLoader.LoadTest(String testName)
**Update 2**
I am compiling with Any CPU that I have modified to be x86 instead of x64. The reason is for [debugging](https://stackoverflow.com/questions/206894/change-to-64-bits-not-allowed-when-trying-to-edit-in-debug-why); this has already been discussed at the previous link. I still have to confirm that NUnit is running in 64-bit mode, using CorFlags.exe.
|
OK, I found the solution on this [website](http://cloudnine.no/2008/07/msbuild-nunit-running-32-bit-unit-tests-on-64-bin-machine). You have to use \NUnit-2.4.8\bin\nunit-x86.exe instead of \NUnit-2.4.8\bin\nunit.exe... I didn't know that \bin\ contained two NUnit executables!
Thanks all
|
The NUnit host is likely running as a 64-bit process (you can confirm that by looking in Task Manager). If your assembly is x86-only, it won't be able to run in that process.
You can try running [corflags](http://msdn.microsoft.com/en-us/library/ms164699(VS.80).aspx) on the NUnit executable to force it to run as x86, using the /32bit+ flag.
|
Nunit.exe cannot work on Vista 64bits if x86 build
|
[
"",
"c#",
".net",
".net-2.0",
"nunit",
"64-bit",
""
] |
How can I create an instance of the following annotation (with all fields set to their default value).
```
@Retention( RetentionPolicy.RUNTIME )
public @interface Settings {
String a() default "AAA";
String b() default "BBB";
String c() default "CCC";
}
```
I tried `new Settings()`, but that does not seem to work...
|
You cannot create an instance directly, but you can at least get the default values:
```
Settings.class.getMethod("a").getDefaultValue()
Settings.class.getMethod("b").getDefaultValue()
Settings.class.getMethod("c").getDefaultValue()
```
And then a dynamic proxy can be used to return the default values. This is, as far as I can tell, also how Java itself handles annotations.
```
class Defaults implements InvocationHandler {
public static <A extends Annotation> A of(Class<A> annotation) {
return (A) Proxy.newProxyInstance(annotation.getClassLoader(),
new Class[] {annotation}, new Defaults());
}
public Object invoke(Object proxy, Method method, Object[] args)
throws Throwable {
return method.getDefaultValue();
}
}
Settings s = Defaults.of(Settings.class);
System.out.printf("%s\n%s\n%s\n", s.a(), s.b(), s.c());
```
|
To create an instance you need to create a class that implements:
* [`java.lang.annotation.Annotation`](https://docs.oracle.com/javase/8/docs/api/java/lang/annotation/Annotation.html)
* and the annotation you want to "simulate"
For example:
`public class MySettings implements Annotation, Settings`
But you need to pay special attention to the **correct** implementation of `equals` and `hashCode` according to the `Annotation` interface.
<http://download.oracle.com/javase/1.5.0/docs/api/java/lang/annotation/Annotation.html>
If you do not want to implement this again and again, have a look at the [javax.enterprise.util.AnnotationLiteral](http://docs.jboss.org/cdi/api/1.0/javax/enterprise/util/AnnotationLiteral.html) class, which is part of the CDI (Contexts and Dependency Injection) API.
[(@see code)](http://grepcode.com/file/repo1.maven.org/maven2/javax.enterprise/cdi-api/1.0/javax/enterprise/util/AnnotationLiteral.java)
To get the default values you can use the approach described by akuhn (formerly known as Adrian):
`Settings.class.getMethod("a").getDefaultValue()`
|
Create Annotation instance with defaults, in Java
|
[
"",
"java",
"annotations",
"instantiation",
""
] |
I have a interface that defines some methods with attributes. These attributes need to be accessed from the calling method, but the method I have does not pull the attributes from the interface. What am I missing?
```
public class SomeClass: ISomeInterface
{
MyAttribute GetAttribute()
{
StackTrace stackTrace = new StackTrace();
StackFrame stackFrame = stackTrace.GetFrame(1);
MethodBase methodBase = stackFrame.GetMethod();
object[] attributes = methodBase.GetCustomAttributes(typeof(MyAttribute), true);
if (attributes.Count() == 0)
throw new Exception("could not find MyAttribute defined for " + methodBase.Name);
return attributes[0] as MyAttribute;
}
void DoSomething()
{
MyAttribute ma = GetAttribute();
string s = ma.SomeProperty;
}
}
```
|
The methodBase will be the method on the class, not the interface. You will need to look for the same method on the interface. In C# this is a little simpler (since it must be like-named), but you would need to consider things like explicit implementation. If you have VB code it will be trickier, since VB method "Foo" can implement an interface method "Bar". To do this, you would need to investigate the interface map:
```
using System;
using System.ComponentModel;
using System.Diagnostics;
using System.Reflection;
interface IFoo
{
void AAA(); // just to push Bar to index 1
[Description("abc")]
void Bar();
}
class Foo : IFoo
{
public void AAA() { } // just to satisfy interface
static void Main()
{
IFoo foo = new Foo();
foo.Bar();
}
void IFoo.Bar()
{
GetAttribute();
}
void GetAttribute()
{ // simplified just to obtain the [Description]
StackTrace stackTrace = new StackTrace();
StackFrame stackFrame = stackTrace.GetFrame(1);
MethodBase classMethod = stackFrame.GetMethod();
InterfaceMapping map = GetType().GetInterfaceMap(typeof(IFoo));
int index = Array.IndexOf(map.TargetMethods, classMethod);
MethodBase iMethod = map.InterfaceMethods[index];
string desc = ((DescriptionAttribute)Attribute.GetCustomAttribute(iMethod, typeof(DescriptionAttribute))).Description;
}
}
```
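The same pitfall exists in Java: annotations on an interface method are not visible on the implementing class's method, so you must look them up on the interface explicitly. A minimal Java sketch of the idea (without the stack-trace machinery; names are illustrative):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

public class InterfaceAnnotationDemo {
    @Retention(RetentionPolicy.RUNTIME)
    @interface Description { String value(); }

    interface IFoo {
        @Description("abc")
        void bar();
    }

    static class Foo implements IFoo {
        public void bar() { }
    }

    public static void main(String[] args) throws Exception {
        // The class method does not inherit the interface's annotation...
        Method classMethod = Foo.class.getMethod("bar");
        System.out.println(classMethod.isAnnotationPresent(Description.class));
        // ...but the corresponding interface method carries it.
        Method ifaceMethod = IFoo.class.getMethod("bar");
        System.out.println(ifaceMethod.getAnnotation(Description.class).value());
    }
}
```

This mirrors why the C# code above has to walk the interface map instead of reading the attribute off the class method directly.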
|
Mark's method will work for non-generic interfaces, but it appears that I am dealing with some that use generics:
```
interface IFoo<T>
{
    T Bar();
}

class Foo<T> : IFoo<T>
{
    public T Bar() { return default(T); }
}
```
It appears that `T` is replaced with the actual class type in `map.TargetMethods`.
|
Attributes on an interface
|
[
"",
"c#",
"reflection",
"attributes",
"interface",
""
] |
I use eclipse to work on an application which was originally created independently of eclipse. As such, the application's directory structure is decidedly not eclipse-friendly.
I want to programmatically generate a project for the application. The `.project` and `.classpath` files are easy enough to figure out, and I've learned that projects are stored in the workspace under `<workspace>/.metadata/.plugins/org.eclipse.core.resources/.projects`
Unfortunately, some of the files under here (particularly `.location`) seem to be encoded in some kind of binary format. On a hunch I tried to deserialize it using `ObjectInputStream` - no dice. So it doesn't appear to be a serialized java object.
My question is: is there a way to generate these files automatically?
For the curious, the error I get trying to deserialize the `.location` file is the following:
`java.io.StreamCorruptedException: java.io.StreamCorruptedException: invalid stream header: 40B18B81`
**Update:** My goal here is to be able to replace the New Java Project wizard with a command-line script or program. The reason is the application in question is actually a very large J2EE/weblogic application, which I like to break down into a largish (nearly 20) collection of subprojects. Complicating matters, we use clearcase for SCM and create a new branch for every release. This means I need to recreate these projects for every development view (branch) I create. This happens often enough to automate.
|
You should be able to accomplish this by writing a small Eclipse plugin. You could even extend it out to being a "headless" RCP app, and pass in the command line arguments you need.
The barebones code to create a project is:
```
IProgressMonitor progressMonitor = new NullProgressMonitor();
IWorkspaceRoot root = ResourcesPlugin.getWorkspace().getRoot();
IProject project = root.getProject("DesiredProjectName");
project.create(progressMonitor);
project.open(progressMonitor);
```
Just take a look at the eclipse code for the Import Project wizard to give you a better idea of where to go with it.
|
Use [AntEclipse](http://ant-eclipse.sourceforge.net)
It can create eclipse projects from ant.
|
Programmatically generate an Eclipse project
|
[
"",
"java",
"eclipse",
""
] |
In C# which is more memory efficient: Option #1 or Option #2?
```
public void TestStringBuilder()
{
//potentially a collection with several hundred items:
string[] outputStrings = new string[] { "test1", "test2", "test3" };
//Option #1
StringBuilder formattedOutput = new StringBuilder();
foreach (string outputString in outputStrings)
{
formattedOutput.Append("prefix ");
formattedOutput.Append(outputString);
formattedOutput.Append(" postfix");
string output = formattedOutput.ToString();
ExistingOutputMethodThatOnlyTakesAString(output);
//Clear existing string to make ready for next iteration:
formattedOutput.Remove(0, output.Length);
}
//Option #2
foreach (string outputString in outputStrings)
{
StringBuilder formattedOutputInsideALoop = new StringBuilder();
formattedOutputInsideALoop.Append("prefix ");
formattedOutputInsideALoop.Append(outputString);
formattedOutputInsideALoop.Append(" postfix");
ExistingOutputMethodThatOnlyTakesAString(
formattedOutputInsideALoop.ToString());
}
}
private void ExistingOutputMethodThatOnlyTakesAString(string output)
{
//This method actually writes out to a file.
System.Console.WriteLine(output);
}
```
|
Several of the answers gently suggested that I get off my duff and figure out it myself so below are my results. I think that sentiment generally goes against the grain of this site but if you want something done right, you might as well do.... :)
I modified option #1 to take advantage of @Ty's suggestion to use `StringBuilder.Length = 0` instead of the `Remove` method. This made the code of the two options more similar. The two differences are now whether the constructor for the StringBuilder is inside or outside the loop, and that option #1 now uses the `Length` property to clear the StringBuilder. Both options were set to run over an outputStrings array with 100,000 elements to make the garbage collector do some work.
A couple of answers offered hints to look at the various PerfMon counters & such and use the results to pick an option. I did some research and ended up using the built-in Performance Explorer of the Visual Studio Team System Developer edition that I have at work. I found the second blog entry of a multipart series that explained how to set it up [here](http://blogs.msdn.com/ianhu/archive/2005/02/11/371418.aspx). Basically, you wire up a unit test to point at the code you want to profile, go through a wizard & some configurations, and launch the unit test profiling. I enabled the .NET object allocation & lifetime metrics. The results of the profiling were difficult to format for this answer so I placed them at the end. If you copy and paste the text into Excel and massage it a bit, it'll be readable.
Option #1 is the most memory efficient because it makes the garbage collector do a little less work, and it allocates half as many StringBuilder instances (and half the bytes) compared to Option #2. For everyday coding, picking option #2 is perfectly fine.
If you're still reading, I asked this question because Option #2 will make the memory leak detectors of an experienced C/C++ developer go ballistic. A huge memory leak would occur there if the StringBuilder instance were not released before being reassigned. Of course, we C# developers don't worry about such things (until they jump up and bite us). Thanks to all!!
---
```
ClassName Instances TotalBytesAllocated Gen0_InstancesCollected Gen0BytesCollected Gen1InstancesCollected Gen1BytesCollected
=======Option #1
System.Text.StringBuilder 100,001 2,000,020 100,016 2,000,320 2 40
System.String 301,020 32,587,168 201,147 11,165,268 3 246
System.Char[] 200,000 8,977,780 200,022 8,979,678 2 90
System.String[] 1 400,016 26 1,512 0 0
System.Int32 100,000 1,200,000 100,061 1,200,732 2 24
System.Object[] 100,000 2,000,000 100,070 2,004,092 2 40
======Option #2
System.Text.StringBuilder 200,000 4,000,000 200,011 4,000,220 4 80
System.String 401,018 37,587,036 301,127 16,164,318 3 214
System.Char[] 200,000 9,377,780 200,024 9,379,768 0 0
System.String[] 1 400,016 20 1,208 0 0
System.Int32 100,000 1,200,000 100,051 1,200,612 1 12
System.Object[] 100,000 2,000,000 100,058 2,003,004 1 20
```
|
Option 2 should (I believe) actually outperform option 1. The act of calling `Remove` "forces" the StringBuilder to take a copy of the string it's already returned. The string is actually mutable within StringBuilder, and StringBuilder doesn't take a copy unless it needs to. With option 1 it copies before basically clearing the array out - with option 2 no copy is required.
The only downside of option 2 is that if the string ends up being long, there will be multiple copies made while appending - whereas option 1 keeps the original size of buffer. If this is going to be the case, however, specify an initial capacity to avoid the extra copying. (In your sample code, the string will end up being bigger than the default 16 characters - initializing it with a capacity of, say, 32 will reduce the extra strings required.)
Aside from the performance, however, option 2 is just cleaner.
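The copy-on-`ToString` behavior discussed above is specific to .NET's StringBuilder, but the capacity advice carries over to Java's StringBuilder as well. A minimal Java sketch of the reuse-with-capacity pattern (clearing via `setLength(0)`, the analog of `Length = 0`):

```java
public class SbDemo {
    public static void main(String[] args) {
        String[] outputs = { "test1", "test2", "test3" };

        // One builder, pre-sized past the default 16 chars to avoid
        // regrowing, and cleared at the top of each iteration.
        StringBuilder sb = new StringBuilder(32);
        for (String s : outputs) {
            sb.setLength(0);
            sb.append("prefix ").append(s).append(" postfix");
            System.out.println(sb);
        }
    }
}
```

The trade-off between reusing one builder and constructing a fresh one per iteration is the same shape in both runtimes, even if the internal copying details differ.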
|
Is using StringBuilder Remove method more memory efficient than creating a new StringBuilder in loop?
|
[
"",
"c#",
"memory-leaks",
"garbage-collection",
"stringbuilder",
""
] |
I am trying to convert a date with individual parts such as 12, 1, 2007 into a datetime in SQL Server 2005. I have tried the following:
```
CAST(DATEPART(year, DATE)+'-'+ DATEPART(month, DATE) +'-'+ DATEPART(day, DATE) AS DATETIME)
```
but this results in the wrong date. What is the correct way to turn the three date values into a proper datetime format?
|
Assuming `y, m, d` are all `int`, how about:
```
CAST(CAST(y AS varchar) + '-' + CAST(m AS varchar) + '-' + CAST(d AS varchar) AS DATETIME)
```
Please see [my other answer](https://stackoverflow.com/a/10142966/18255) for SQL Server 2012 and above
|
Try this:
```
Declare @DayOfMonth TinyInt Set @DayOfMonth = 13
Declare @Month TinyInt Set @Month = 6
Declare @Year Integer Set @Year = 2006
-- ------------------------------------
Select DateAdd(day, @DayOfMonth - 1,
DateAdd(month, @Month - 1,
DateAdd(Year, @Year-1900, 0)))
```
It works as well, and has the added benefit of not doing any string conversions, so it's pure arithmetic processing (very fast) and not dependent on any date format.
This capitalizes on the fact that SQL Server's internal representation of datetime and smalldatetime values is a two-part value: the first part is an integer representing the number of days since 1 Jan 1900, and the second part is a decimal fraction representing the fractional portion of one day (the time). So the integer value 0 (zero) always translates directly into midnight on 1 Jan 1900.
or, thanks to suggestion from @brinary,
```
Select DateAdd(yy, @Year-1900,
DateAdd(m, @Month - 1, @DayOfMonth - 1))
```
Edited October 2014: as noted by @Cade Roux, SQL 2012 now has a built-in function:
`DATEFROMPARTS(year, month, day)`
that does the same thing.
Edited 3 Oct 2016 (thanks to @bambams for noticing this, and @brinary for fixing it): the last solution, proposed by @brinary, does not appear to work for leap years unless the year addition is performed first:
```
select dateadd(month, @Month - 1,
dateadd(year, @Year-1900, @DayOfMonth - 1));
```
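The year-then-month-then-day offset arithmetic described above can be illustrated outside T-SQL. A hypothetical Java sketch of the same scheme, starting from the 1900-01-01 epoch (this is an illustration of the arithmetic, not part of the SQL answer):

```java
import java.time.LocalDate;

public class DateFromPartsDemo {
    // Mimics the T-SQL DateAdd chain: start from the 1900-01-01 epoch
    // and apply the year, month, and day offsets in that order.
    static LocalDate fromParts(int year, int month, int day) {
        return LocalDate.of(1900, 1, 1)
                .plusYears(year - 1900)
                .plusMonths(month - 1)
                .plusDays(day - 1);
    }

    public static void main(String[] args) {
        System.out.println(fromParts(2006, 6, 13));
        // Leap-year case: applying the year offset first keeps Feb 29 valid.
        System.out.println(fromParts(2008, 2, 29));
    }
}
```

Applying the offsets in a different order (days before years) is exactly where the leap-year bug noted above creeps in.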
|
Create a date from day month and year with T-SQL
|
[
"",
"sql",
"sql-server",
"sql-server-2005",
"t-sql",
""
] |
My problem is that all the textboxes in my formview are getting cleared when I hit the submit button.
I currently have a page with a small section that has an update panel around it. This small section adds an address to my database. To the left of this form there is a gridview that is tied into the formview, so if I click on an item in the gridview its contents fill the address section (formview) with the correct data.
When I hit add the data gets validated in the c# code behind, and if all the info is correct the address is inserted. **If there is an error the entire form is cleared** and the error message(label) is displayed.
I have already done this many times in other pages, but none have had the gridview tied to the formview, and they have all worked. I tried removing the gridview and the form still erases itself.
Is there some reason .NET thinks it should clear the form here, when in other cases it doesn't? If so, what are those cases, or what general tips should I try to solve this?
|
Is the ViewState enabled?
|
In the Page\_Load, are you using `if (!Page.IsPostBack) { ... }` so that nothing gets re-bound on a postback?
|
Formview Being Cleared
|
[
"",
"c#",
"asp.net",
"gridview",
"formview",
""
] |
I'd like to write Python scripts that drive Visual Studio 2008 and Visual C++ 2008. All the examples I've found so far use `win32com.client.Dispatch`. This works fine for Excel 2007 and Word 2007 but fails for Visual Studio 2008:
```
import win32com.client
app1 = win32com.client.Dispatch( 'Excel.Application' ) # ok
app2 = win32com.client.Dispatch( 'Word.Application' ) # ok
app3 = win32com.client.Dispatch( 'MSDev.Application' ) # error
```
Any ideas? Does Visual Studio 2008 use a different string to identify itself? Is the above method outdated?
|
I don't know if this will help you with 2008, but with Visual Studio 2005 and win32com I'm able to do this:
```
>>> import win32com.client
>>> b = win32com.client.Dispatch('VisualStudio.DTE')
>>> b
<COMObject VisualStudio.DTE>
>>> b.name
u'Microsoft Visual Studio'
>>> b.Version
u'8.0'
```
Unfortunately I don't have 2008 to test with though.
|
Depending on what exactly you're trying to do, [AutoIt](http://www.autoitscript.com/autoit3/index.shtml) may meet your needs. In fact, I'm sure it will do anything you need it to do.
Taken from my [other post](https://stackoverflow.com/questions/151846/get-other-running-processes-window-sizes-in-python#155587) about how to use AutoIt with Python:
```
import win32com.client
oAutoItX = win32com.client.Dispatch( "AutoItX3.Control" )
oAutoItX.Opt("WinTitleMatchMode", 2) #Match text anywhere in a window title
width = oAutoItX.WinGetClientSizeWidth("Firefox")
height = oAutoItX.WinGetClientSizeHeight("Firefox")
print width, height
```
You can of course use any of the [AutoItX functions](http://www.autoitscript.com/autoit3/docs/functions.htm) (note that that link goes to the AutoIt function reference, the com version of AutoIt - AutoItX has a subset of that list...the documentation is included in the download) in this way. I don't know what you're wanting to do, so I can't point you towards the appropriate functions, but this should get you started.
|
How to script Visual Studio 2008 from Python?
|
[
"",
"python",
"visual-studio",
"visual-studio-2008",
"visual-c++",
""
] |
I have a property on a domain object that is declared in a many-to-one element. The basic syntax of this property looks like this:
```
<many-to-one name="propertyName" class="propertyClass" fetch="select" not-found="ignore" lazy="proxy" />
```
Now, the idea is to have Hibernate NOT eagerly fetch this property. It may be null, so `not-found="ignore"` is set.
But, Hibernate, upon loading the class containing this association, takes it upon itself to load the actual class (not even a proxy) instance when the parent class is loaded. Since some properties are over 1MB in size, they take up a lot of the heap space.
If, however, not-found is set to exception (or defaulted to exception), the parent classes which have this property do load a proxy!
How can I stop hibernate from not loading a proxy, while still allowing this property to be null?
I found `lazy="no-proxy"`, but the documentation talks about some sort of bytecode modification and doesn't go into any details. Can someone help me out?
If it matters, it is the Java version of Hibernate, and it is at least version 3 (I can look up the actual version if it helps, but it is Hibernate 3+ for now).
I didn't specify earlier, but the Java version is 1.4. So, Java annotations aren't supported.
|
If the other end of the association can be *null*, I believe Hibernate must query for the association end in order to determine whether it should use a proxy or not (there is no need for a proxy if the other end is *null*). I can't find the reference to this right now, but I remember reading it somewhere.
In order to provide lazy-loading of **fields** the documentation refers to bytecode enhancements on fields at buildtime: [Using lazy property fetching](http://docs.jboss.org/hibernate/orm/3.6/reference/en-US/html/performance.html#performance-fetching-lazyproperties). Here is an excerpt:
> Hibernate3 supports the lazy fetching
> of individual properties. This
> optimization technique is also known
> as fetch groups. Please note that this
> is mostly a marketing feature, as in
> practice, optimizing row reads is much
> more important than optimization of
> column reads. However, only loading
> some properties of a class might be
> useful in extreme cases, when legacy
> tables have hundreds of columns and
> the data model can not be improved.
>
> Lazy property loading requires
> buildtime bytecode instrumentation! If
> your persistent classes are not
> enhanced, Hibernate will silently
> ignore lazy property settings and fall
> back to immediate fetching.
|
> I found lazy=no-proxy, but the
> documentation talks about some sort of
> bytecode modification and doesn't go
> into any details. Can someone help me
> out?
I'll assume you're using ANT to build your project.
```
<property name="src" value="/your/src/directory"/><!-- path of the source files -->
<property name="libs" value="/your/libs/directory"/><!-- path of your libraries -->
<property name="destination" value="/your/build/directory"/><!-- path of your build directory -->
<fileset id="applibs" dir="${libs}">
<include name="hibernate3.jar" />
<!-- include any other libraries you'll need here -->
</fileset>
<target name="compile">
<javac srcdir="${src}" destdir="${destination}" debug="yes">
<classpath>
<fileset refid="applibs"/>
</classpath>
</javac>
</target>
<target name="instrument" depends="compile">
<taskdef name="instrument" classname="org.hibernate.tool.instrument.javassist.InstrumentTask">
<classpath>
<fileset refid="applibs"/>
</classpath>
</taskdef>
<instrument verbose="true">
<fileset dir="${destination}">
<!-- substitute the package where you keep your domain objs -->
<include name="/com/mycompany/domainobjects/*.class"/>
</fileset>
</instrument>
</target>
```
|
How to stop Hibernate from eagerly fetching many-to-one associated object
|
[
"",
"java",
"hibernate",
""
] |
I have a T4 template that generates classes from an xml file.
How can I add a dependency between the xml file and the template file so that when the xml file is modified the template is rerun automatically without choosing "Run custom tool" from the context menu?
|
I don't believe T4 supports automatic template transformation based on an external dependency. I agree with Marc - if you only have one external file, you could create a custom "custom tool" for your XML file or simply use [ttxgen](http://code.msdn.microsoft.com/TTxGen). However, I don't think this approach scales up to a scenario where t4 template depends on more than one file. You may need to create a Visual Studio package to handle that.
|
How long does the tool take to execute? One lazy option might be to simply edit the csproj such that it *always* runs the tool during build (presumably via [`<Exec ... />`](http://msdn.microsoft.com/en-us/library/x8zx72cd.aspx) or a custom `targets` file) - of course, this depends on it being quick to execute.
Another way would be to write a shim that works as the "Custom Tool" in VS, and simply calls the existing exe (or whatever) with the right args. Not trivial, but doable ([see here](http://www.codeproject.com/KB/cs/VsMultipleFileGenerator.aspx?display=Print)) - I believe this then supposedly plays fairly nicely with change detection. It is actually on my list of things to do for a current project, so I'll find out soon enough...
|
How to add a dependency to a arbitrary file to a T4 template?
|
[
"",
"c#",
".net",
"t4",
""
] |