| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I have a `mysql_query` result that I am looping through multiple times in different parts of my code, each time using `mysql_data_seek( $result, 0 )` to reset to the beginning of the result.
I am using `mysql_fetch_array` on those results, and would like to remove a few specific rows from `$result`. Basically the equivalent of `unset( $result[$row] )` if it were a normal array. Is there any way to do this?
Sample code:
```
$result = mysql_query( $sql );
$num_rows = mysql_num_rows( $result );
if( $num_rows ){
for( $a=0; $a < $num_rows; $a++ ){
$row = mysql_fetch_array( $result );
if( my_check_function( $row['test'] ) ){
// do stuff
} else {
// remove this row from $result
}
}
}
mysql_data_seek( $result, 0 );
```
I know I can simply do `unset( $row[$a] )` to remove that specific row, but after the data seek, when I loop through the results the next time, I end up with the same original result rows.
Any help would be appreciated.
ps - Not sure why the \_'s were removed in my top text and changed to italics, I tried to fix it but it ended up being bold.. :) | The best option is to re-write your queries so you don't have to remove any records after running a query against your database. That, or insert your records into another array, and merely skip over the ones you don't want.
```
$survivors = array();
while ($row = mysql_fetch_array($result)) { // while we have records
if (do_something($row)) // if this is a good record
$survivors[] = $row; // save it for later
}
print_r($survivors); // who survived the cut?
``` | ```
$result = mysql_query( $sql );
$num_rows = mysql_num_rows( $result );
$new_array = array();
if( $num_rows ){
for( $a=0; $a < $num_rows; $a++ ){
$row = mysql_fetch_array( $result );
if( my_check_function( $row['test'] ) ){
// do stuff
// populate new array with only validated data
$new_array[$a] = $row;
} else {
// remove this row from $result
// do not remove anything.
}
}
}
```
PS: Are you sure you can't filter out the unnecessary rows within your SQL query? | How do I remove a specific row result obtained through mysql_query? | [
"",
"php",
"mysql",
""
] |
This is a WinForm written in C#.
Let's say I'm generating a randomly named text file in my selected directory. When the button is clicked the first time, I write the data contained in the textboxes into that text file. If the user wants to do the same thing with different data in the textboxes, then the click on the button should write the new data into the text file without losing the old data. It's like keeping logs; is this possible?
My code is like:
```
private readonly Random setere = new Random();
private const string chars = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";
private string RandomString()
{
char[] buffer = new char[5];
for (int i = 0; i < 5; i++)
{
buffer[i] = chars[setere.Next(chars.Length)];
}
return new string(buffer);
}
private void button1_Click(object sender, EventArgs e)
{
DialogResult dia = MessageBox.Show("Wanna continue?", "Question", MessageBoxButtons.YesNo, MessageBoxIcon.Question);
if (dia == DialogResult.Yes)
{
StreamWriter wFile = new StreamWriter("C:\\Users\\Ece\\Documents\\Testings\\" + RandomString() + ".txt");
wFile.WriteLine("Name Surname:" + text1.Text + text2.Text);
wFile.WriteLine("Other:" + text3.Text + text4.Text);
wFile.WriteLine("Money:" + textBox1.Text + " TL.");
wFile.WriteLine("*************************************");
wFile.Close();
}
else
{
return;
}
}
``` | Have a look at using something like this:
```
StreamWriter fw = new StreamWriter(@"C:\Logs\MyFile.txt",true);
fw.WriteLine("Some Message" + Environment.NewLine);
fw.Flush();
fw.Close();
```
Hope that helps. See [MSDN StreamWriter](http://msdn.microsoft.com/en-us/library/system.io.streamwriter.aspx) for more information
Updated: Removed old example
Also if you are trying to create a unique file you can use Path.GetRandomFileName()
Again from the MSDN Books:
> The GetRandomFileName method returns a
> cryptographically strong, random
> string that can be used as either a
> folder name or a file name.
**UPDATED**: *Added a Logger class example below*
Add a new class to your project and add the following lines (this is 3.0 type syntax so you may have to adjust if creating a 2.0 version)
```
using System;
using System.IO;
namespace LogProvider
{
//
// Example Logger Class
//
public class Logging
{
public static string LogDir { get; set; }
public static string LogFile { get; set; }
private static readonly Random setere = new Random();
private const string chars = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";
public Logging() {
LogDir = null;
LogFile = null;
}
public static string RandomFileName()
{
char[] buffer = new char[5];
for (int i = 0; i < 5; i++)
{
buffer[i] = chars[setere.Next(chars.Length)];
}
return new string(buffer);
}
public static void AddLog(String msg)
{
String tstamp = Convert.ToString(DateTime.Now.Day) + "/" +
Convert.ToString(DateTime.Now.Month) + "/" +
Convert.ToString(DateTime.Now.Year) + " " +
Convert.ToString(DateTime.Now.Hour) + ":" +
Convert.ToString(DateTime.Now.Minute) + ":" +
Convert.ToString(DateTime.Now.Second);
if(LogDir == null || LogFile == null)
{
throw new ArgumentException("Null arguments supplied");
}
String logFile = LogDir + "\\" + LogFile;
String rmsg = tstamp + "," + msg;
StreamWriter sw = new StreamWriter(logFile, true);
sw.WriteLine(rmsg);
sw.Flush();
sw.Close();
}
}
}
```
Add this to your forms onload event
```
LogProvider.Logging.LogDir = "C:\\Users\\Ece\\Documents\\Testings";
LogProvider.Logging.LogFile = LogProvider.Logging.RandomFileName();
```
Now adjust your button click event to be like the following:
```
DialogResult dia = MessageBox.Show("Wanna continue?", "Question", MessageBoxButtons.YesNo, MessageBoxIcon.Question);
if (dia == DialogResult.Yes)
{
StringBuilder logMsg = new StringBuilder();
logMsg.Append("Name Surname:" + text1.Text + text2.Text + Environment.NewLine);
logMsg.Append("Other:" + text3.Text + text4.Text + Environment.NewLine);
logMsg.Append("Money:" + textBox1.Text + " TL." + Environment.NewLine);
logMsg.Append("*************************************" + Environment.NewLine);
LogProvider.Logging.AddLog(logMsg.ToString());
} else
{
return;
}
```
Now you should only create one file for the entire time that application is running and will log to that one file every time you click your button. | You can append to the text in the file.
See
[File.AppendText](http://msdn.microsoft.com/en-us/library/system.io.file.appendtext%28VS.71%29.aspx)
```
using (StreamWriter sw = File.AppendText(pathofFile))
{
sw.WriteLine("This");
sw.WriteLine("is Extra");
sw.WriteLine("Text");
}
```
where pathofFile is the path to the file to append to. | How to keep logs in C#? | [
"",
"c#",
"logging",
""
] |
I have a generator object returned by a function with multiple yields. Preparation for calling this generator is a rather time-consuming operation. That is why I want to reuse the generator several times.
```
y = FunctionWithYield()
for x in y: print(x)
#here must be something to reset 'y'
for x in y: print(x)
```
Of course, I've considered copying the content into a simple list. Is there a way to reset my generator?
---
**See also:** [How to look ahead one element (peek) in a Python generator?](https://stackoverflow.com/questions/2425270) | Another option is to use the [`itertools.tee()`](https://docs.python.org/library/itertools.html#itertools.tee) function to create a second version of your generator:
```
import itertools
y = FunctionWithYield()
y, y_backup = itertools.tee(y)
for x in y:
print(x)
for x in y_backup:
print(x)
```
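The two copies returned by `tee` are independent and can be consumed at different rates; a small sketch with a stand-in generator (the function name is a placeholder for the question's `FunctionWithYield`):

```python
import itertools

def function_with_yield():  # stand-in for the question's FunctionWithYield
    for i in range(5):
        yield i

a, b = itertools.tee(function_with_yield())
print(next(a), next(a))  # 0 1 -- advance one copy...
print(list(b))           # [0, 1, 2, 3, 4] -- ...the other is unaffected
```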
This could be beneficial from a memory usage point of view if the original iteration might not process all the items. | Generators can't be rewound. You have the following options:
1. Run the generator function again, restarting the generation:
```
y = FunctionWithYield()
for x in y: print(x)
y = FunctionWithYield()
for x in y: print(x)
```
2. Store the generator results in a data structure in memory or on disk which you can iterate over again:
```
y = list(FunctionWithYield())
for x in y: print(x)
# can iterate again:
for x in y: print(x)
```
The downside of option **1** is that it computes the values again. If that's CPU-intensive, you end up calculating twice. On the other hand, the downside of **2** is the storage. The entire list of values will be stored in memory. If there are too many values, that can be impractical.
So you have the classic *memory vs. processing tradeoff*. I can't imagine a way of rewinding the generator without either storing the values or calculating them again.
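If recomputing is acceptable, option 1 can be packaged behind `__iter__` so call sites just iterate without recreating the generator themselves. A minimal sketch (the class name and the stand-in generator are mine, not from the question):

```python
class ReusableIterable:
    """Wrap a generator function so every iteration restarts it (option 1)."""

    def __init__(self, func, *args, **kwargs):
        self.func = func
        self.args = args
        self.kwargs = kwargs

    def __iter__(self):
        # A fresh generator is created per iteration: values are recomputed
        # each time (CPU cost) but never stored (no memory cost).
        return self.func(*self.args, **self.kwargs)


def function_with_yield():  # stand-in for the question's FunctionWithYield
    for i in range(3):
        yield i


y = ReusableIterable(function_with_yield)
print(list(y))  # [0, 1, 2]
print(list(y))  # [0, 1, 2] -- iterating again restarts the generator
```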
You could also use `tee` as suggested by other answers, however that would still store the entire list in memory in your case, so it would be the same results and similar performance to option 2. | Resetting generator object in Python | [
"",
"python",
"generator",
"yield",
""
] |
I have read [an article](http://www.codinghorror.com/blog/archives/001015.html) about various shuffle algorithms over at [Coding Horror](http://www.codinghorror.com/). I have seen that somewhere people have done this to shuffle a list:
```
var r = new Random();
var shuffled = ordered.OrderBy(x => r.Next());
```
Is this a good shuffle algorithm? How does it work exactly? Is it an acceptable way of doing this? | It's not a way of shuffling that I like, mostly on the grounds that it's O(n log n) for no good reason when it's easy to implement an O(n) shuffle. The code in the question "works" by basically giving a random (hopefully unique!) number to each element, then ordering the elements according to that number.
I prefer Durstenfeld's variant of the [Fisher-Yates shuffle](http://en.wikipedia.org/wiki/Fisher-Yates_shuffle#The_modern_algorithm) which swaps elements.
Implementing a simple `Shuffle` extension method would basically consist of calling `ToList` or `ToArray` on the input then using an existing implementation of Fisher-Yates. (Pass in the `Random` as a parameter to make life generally nicer.) There are plenty of implementations around... I've probably got one in an answer somewhere.
The nice thing about such an extension method is that it would then be very clear to the reader what you're actually trying to do.
EDIT: Here's a simple implementation (no error checking!):
```
public static IEnumerable<T> Shuffle<T>(this IEnumerable<T> source, Random rng)
{
T[] elements = source.ToArray();
// Note i > 0 to avoid final pointless iteration
for (int i = elements.Length-1; i > 0; i--)
{
// Swap element "i" with a random earlier element (or itself)
int swapIndex = rng.Next(i + 1);
T tmp = elements[i];
elements[i] = elements[swapIndex];
elements[swapIndex] = tmp;
}
// Lazily yield (avoiding aliasing issues etc)
foreach (T element in elements)
{
yield return element;
}
}
```
EDIT: Comments on performance below reminded me that we can actually return the elements as we shuffle them:
```
public static IEnumerable<T> Shuffle<T>(this IEnumerable<T> source, Random rng)
{
T[] elements = source.ToArray();
for (int i = elements.Length - 1; i >= 0; i--)
{
// Swap element "i" with a random earlier element (or itself)
// ... except we don't really need to swap it fully, as we can
// return it immediately, and afterwards it's irrelevant.
int swapIndex = rng.Next(i + 1);
yield return elements[swapIndex];
elements[swapIndex] = elements[i];
}
}
```
This will now only do as much work as it needs to.
Note that in both cases, you need to be careful about the instance of `Random` you use as:
* Creating two instances of `Random` at roughly the same time will yield the same sequence of random numbers (when used in the same way)
* `Random` isn't thread-safe.
I have [an article on `Random`](http://csharpindepth.com/Articles/Chapter12/Random.aspx) which goes into more detail on these issues and provides solutions. | This is based on Jon Skeet's [answer](https://stackoverflow.com/questions/1287567/c-is-using-random-and-orderby-a-good-shuffle-algorithm/1287572#1287572).
In that answer, the array is shuffled, then returned using `yield`. The net result is that the array is kept in memory for the duration of foreach, as well as objects necessary for iteration, and yet the cost is all at the beginning - the yield is basically an empty loop.
This algorithm is used a lot in games, where the first three items are picked, and the others will only be needed later if at all. My suggestion is to `yield` the numbers as soon as they are swapped. This will reduce the start-up cost, while keeping the iteration cost at O(1) (basically 5 operations per iteration). The total cost would remain the same, but the shuffling itself would be quicker. In cases where this is called as `collection.Shuffle().ToArray()` it will theoretically make no difference, but in the aforementioned use cases it will speed start-up. Also, this would make the algorithm useful for cases where you only need a few unique items. For example, if you need to pull out three cards from a deck of 52, you can call `deck.Shuffle().Take(3)` and only three swaps will take place (although the entire array would have to be copied first).
```
public static IEnumerable<T> Shuffle<T>(this IEnumerable<T> source, Random rng)
{
T[] elements = source.ToArray();
// Note i > 0 to avoid final pointless iteration
for (int i = elements.Length - 1; i > 0; i--)
{
// Swap element "i" with a random earlier element (or itself)
int swapIndex = rng.Next(i + 1);
yield return elements[swapIndex];
elements[swapIndex] = elements[i];
// we don't actually perform the swap, we can forget about the
// swapped element because we already returned it.
}
// there is one item remaining that was not returned - we return it now
yield return elements[0];
}
``` | Is using Random and OrderBy a good shuffle algorithm? | [
"",
"c#",
"algorithm",
"shuffle",
""
] |
I just took a brief look at PowerShell (I knew it as the Monad shell). My ignorant eyes see it more or less as a hybrid between regular bash and Python. I would consider such an integration between the two environments very cool on Linux and OS X, so I was wondering if it already exists (IPython is not really the same), and if not, why? | I've only dabbled in Powershell, but what distinguishes it for me is the ability to pipe actual objects in the shell. In that respect, the closest I've found is actually using the IPython shell with `ipipe`:
* [Using ipipe](http://wiki.ipython.org/Using_ipipe)
* [Adding support for ipipe](http://wiki.ipython.org/Cookbook/Adding_support_for_ipipe)
Following the recipes shown on that page and cooking up my own extensions, I don't often leave the IPython shell for bash. YMMV. | I think Hotwire is basically what you're thinking of:
<http://code.google.com/p/hotwire-shell/wiki/GettingStarted0700>
It's a shell-type environment where you can access the outputs as Python objects.
It doesn't have all PowerShell's handy hooks into various Windows system information, though. For that, you may want to literally integrate Python with PowerShell; that's described in [IronPython In Action](http://www.ironpythoninaction.com/). | A python based PowerShell? | [
"",
"python",
"bash",
"powershell",
""
] |
I have the following SQL query and so far it works the way it should and gets the top 40 tag ids that I have stored in the tagmap table.
```
SELECT TOP 40
tbrm_TagMap.TagID,
Count(*)
FROM tbrm_TagMap
GROUP BY tbrm_TagMap.TagID
ORDER BY COUNT(tbrm_TagMap.TagID) DESC
```
I also want to join to the Tags table which contains the actual name of each TagID. Each attempt I make comes back with an error. How can I achieve this? I am using SQL 2008. | ```
SELECT *
FROM (
SELECT TOP 40
tbrm_TagMap.TagID, COUNT(*) AS cnt
FROM tbrm_TagMap
GROUP BY
tbrm_TagMap.TagID
ORDER BY
COUNT(*) DESC
) q
JOIN Tags
ON Tags.id = q.TagID
ORDER BY
cnt DESC
``` | My guess is that when you were joining `tags`, you weren't including it in the `group by` clause, which will always through an error in SQL Server. Every column not aggregated but returned needs to be in the `group by`.
Try something like this:
```
SELECT TOP 40
tbrm_TagMap.TagID,
t.Tag,
Count(*)
FROM
tbrm_TagMap
INNER JOIN tags t ON
tbrm_TagMap.TagID = t.TagID
GROUP BY
tbrm_TagMap.TagID,
t.Tag
ORDER BY 3 DESC
``` | How do I join this sql query to another table? | [
"",
"sql",
"sql-server-2008",
"stored-procedures",
""
] |
A little regex help please.
Why are these different?
```
Regex.Replace("(999) 555-0000 /x ext123", "/x.*|[^0-9]", String.Empty)
"9995550000"
Regex.Replace("(999) 555-0000 /x ext123", "[^0-9]|/x.*", String.Empty)
"9995550000123"
```
I thought the pipe operator did not care about order... or maybe there is something else that can explain this? | I think you've got the wrong idea about alternation (i.e., the pipe). In a pure DFA regex implementation, it's true that alternation favors the longest match no matter how the alternatives are ordered. In other words, the whole regex, whether it contains alternation or not, always returns the earliest and longest possible match--the "leftmost-longest" rule.
However, the regex implementations in most of today's popular programming languages, including .NET, are what [Friedl](https://rads.stackoverflow.com/amzn/click/com/0596528124) calls *Traditional NFA* engines. One of the most important differences between them and DFA engines is that alternation is **not** greedy; it attempts the alternatives in the order they're listed and stops as soon as one of them matches. The only thing that will cause it to change its mind is if the match fails at a later point in the regex, forcing it to backtrack into the alternation.
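The same ordered, non-greedy alternation can be observed in Python's `re` module (also a backtracking engine), using the question's input; this is just an illustrative sketch for comparison, not .NET code:

```python
import re

phone = "(999) 555-0000 /x ext123"

# '/x.*' is tried first at each position, so it wins at the '/' and
# consumes the whole extension.
print(re.sub(r"/x.*|[^0-9]", "", phone))  # 9995550000

# '[^0-9]' is tried first, so it consumes the '/' before '/x.*' ever
# gets a chance to match; the digits of the extension survive.
print(re.sub(r"[^0-9]|/x.*", "", phone))  # 9995550000123
```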
Note that if you change the `[^0-9]` to `[^0-9]+` in both regexes you'll get the same result from both--but not the one you want. (I'm assuming the `/x.*` alternative is supposed to match--and remove--the rest of the string, including the extension number.) I'd suggest something like this:
```
"[^0-9/]+|/x.*$"
```
That way, neither alternative can even *start* to match what the other one matches. Not only will that prevent the kind of confusion you're experiencing, it avoids potential performance bottlenecks. One of the *other* major differences between DFAs and NFAs is that badly-written NFAs are prone to serious (even [catastrophic](http://www.regular-expressions.info/catastrophic.html)) performance problems, and sloppy alternations are one of the easiest ways to trigger them. | If I took a wild guess, I'd say that it's running the first part of the expression first, and then the second part. So, what's happening in the second case is that it's removing all the non-numeric parts, which means that the second part will never match, leaving you with the extension intact.
Since it has to try some part of the expression first (it can't try both at the same time), I'd say this is a fairly natural assumption, though I can see why you might get caught out... Definitely an interesting gotcha, though.
**EDIT:** To address the wording: as Ben rightly pointed out, the engine attempts to match the expression starting at each character in the string. So, what happens in the second case is:
* There is no `"^"` anchor, so we try at the start of each substring:
* For `"(999) 555-0000 /x ext123"`, `"("` matches `[^0-9]`, so replace that with nothing (remove it).
* For `"999) 555-0000 /x ext123"`, the `"999"` part doesn't match `[^0-9]`, nor does it match `/x.*`, so we keep trying from the `")"`, which matches `[^0-9]`, so we remove it.
* And so on. When it gets to the `"/"`, the same thing happens, it matches `[^0-9]` and is removed, meaning the second part of the regex can never, ever match.
In the first case, what happens is the following:
* Again, no `"^"` anchor, so we try for all substrings:
* For `"(999) 555-0000 /x ext123"`, `"("` does not match `/x.*`, but it does match `[^0-9]`, so replace that with nothing (remove it).
* For `"999) 555-0000 /x ext123"`, the `"999"` part doesn't match `/x.*`, nor does it match `[^0-9]`, so we keep trying from the `")"`, which doesn't match `/x.*`, but which matches `[^0-9]`, so we remove it.
* When we hit the `"/x"`, this time `/x.*` *does* match, it matches `"/x ext123"`, and the rest of the string is removed, leaving us with nothing to continue with. | Regular Expressions and The Pipe Operator | [
"",
"c#",
"regex",
""
] |
I want to create GUI applications with C++ on Windows. I have downloaded Qt, and it works well, but it has so much stuff in it and so many header files that I really don't use. It is a nice framework, but it has more than just GUI.
Are there any lighter GUI libraries out there for Windows C++ that is "just GUI"? | FLTK, if you are serious about lightweight.
<http://www.fltk.org/>
edit:
Blurb from the website:
FLTK is designed to be small and modular enough to be statically linked, but works fine as a shared library. FLTK also includes an excellent UI builder called FLUID that can be used to create applications in minutes.
I'll add that its *mature* and *stable*, too. | Even if wxWidgets is named here already:
wxWidgets!
It's a great and valuable framework (API, class library, whatever you may call it).
BUT: You can divide the functionality of this library into many small parts (base, core, gui, internet, xml) and use them when necessary.
If you really want to make GOOD GUI applications, you have to use a GOOD API. wxWidgets is absolutely free (Qt is not), only needs a small overhead in binary form (linked as a DLL or object file, it is about 2 MB), and offers everything you ever need to program great applications...
And wxWidgets is much lighter than Qt... and even better... :)
Try it... | Lightweight C++ Gui Library | [
"",
"c++",
"user-interface",
"frameworks",
""
] |
How can I do this?
I want a user to click a button, and then a small window pops up that lets my end user navigate to X folder. Then I need to save the location of the folder to a string variable.
Any help, wise sages of StackOverflow? | ```
using (FolderBrowserDialog dlg = new FolderBrowserDialog())
{
if (dlg.ShowDialog(this) == DialogResult.OK)
{
string s = dlg.SelectedPath;
}
}
```
(remove `this` if you aren't already in a winform) | If you're using Winforms, you can use a [FolderBrowserDialog control](http://msdn.microsoft.com/en-us/library/system.windows.forms.folderbrowserdialog.aspx). The path the user selects will be in the SelectedPath property. | User navigates to folder, I save that folder location to a string | [
"",
"c#",
"filedialog",
""
] |
I have a textbox that contains a string that must be bound only when the user presses the button. In XAML:
```
<Button Command="{Binding Path=PingCommand}" Click="Button_Click">Go</Button>
<TextBox x:Name="txtUrl" Text="{Binding Path=Url,UpdateSourceTrigger=Explicit, Mode=OneWay}" />
```
In the code-behind:
```
private void Button_Click(object sender, RoutedEventArgs e)
{
BindingExpression be = this.txtUrl.GetBindingExpression(TextBox.TextProperty);
be.UpdateTarget();
}
```
"be" is always NULL. Why?
**Update:**
Alright, here is an update after a lot of trying.
If I set the Mode to OneWay with an Explicit update, I get a NullReferenceException on the "be" object returned by GetBindingExpression.
If I leave the Mode unset (the default, TwoWay) with an Explicit update, the binding gets the value (string.Empty) and erases everything in the textbox every time.
If I set the Mode to OneWay with PropertyChanged, nothing is raised by the bound property when I press keys in the textbox, and once I click the button I get a NullReferenceException on the "be" object.
If I leave the Mode unset (the default, TwoWay) with PropertyChanged, the property is raised every time I press a key (GOOD), but I do not want the property to change every time the user presses a key... only once the user presses the Button. | Alright, after thinking a little more, I noticed that:
```
BindingExpression be = this.txtUrl.GetBindingExpression(TextBox.TextProperty);
be.UpdateTarget();
```
This has something illogical about it, because I do not want to update the target but the source. I simply changed be.UpdateTarget() to be.UpdateSource(), and everything worked with this XAML:
```
<TextBox x:Name="txtUrl" Text="{Binding Path=Url, UpdateSourceTrigger=Explicit}">...
```
Thank you to everybody who helped me in the process of solving this problem. I have added +1 to everybody! Thanks | It should work; maybe you are calling functions that delete the binding before the lines you provided in the event handler? Assigning a value to the Text property of the TextBox will remove the binding. | Binding when a button is pressed (Explicit binding) has always BindingExpression at null? | [
"",
"c#",
".net",
"wpf",
"binding",
""
] |
I am looking for any way to have Emacs format a Python buffer by hitting a few keys. By format, I mean:
1. Replace tabs with 4 spaces
2. Wrap all long lines correctly at 79 chars. This includes wrapping & concatenating long strings, wrapping long comments, wrapping lists, function headers, etc.
3. Unrelated, but when I hit enter, it'd be nice if the cursor was tabbed in automatically.
In general I'd like to just format everything according to PEP 8.
I've looked for a pretty printer / code beautifier / code formatter for Python to run the buffer through, but can't find an open source one.
My .emacs is [here](http://dpaste.com/hold/81704/).
For those who are going to answer "You don't need a formatter for Python, it's beautiful by the nature of the language," I say to you that this is not correct. In real software systems, comments should be auto-wrapped for you, strings are going to be longer than 79 characters, tab levels run 3+ deep. Please just help me solve my issue directly without some philosophical discussion about the merits of formatting Python source. | To change tabs into spaces and fill comments at the same time, you can use this command:
```
(defun my-format-python-text ()
"untabify and wrap python comments"
(interactive)
(untabify (point-min) (point-max))
(goto-char (point-min))
(while (re-search-forward comment-start nil t)
(call-interactively 'fill-paragraph)
(forward-line 1)))
```
Which you can bind to the key of your choice, presumably like so:
```
(eval-after-load "python"
'(progn
(define-key python-mode-map (kbd "RET") 'newline-and-indent)
(define-key python-mode-map (kbd "<f4>") 'my-format-python-text)))
```
Note the setting of the `RET` key to automatically indent.
If you wanted to all tabs with spaces with just built-in commands, this is a possible sequence:
```
C-x h ;; mark-whole-buffer
M-x untabify ;; tabs->spaces
```
To get the fill column and tab width to be what you want, add to your .emacs:
```
(setq fill-column 79)
(setq-default tab-width 4)
```
Arguably, the tab-width should be set to 8, depending on how other folks have indented their code in your environment (8 being a default that some other editors have). If that's the case, you could just set it to 4 in the `'python-mode-hook`. It kind of depends on your environment. | About your point 3:
> Unrelated, but when I hit enter, it'd
> be nice if the cursor was tabbed in
> automatically.
My emacs Python mode does this by default, apparently. It's simply called [python-mode](https://launchpad.net/python-mode/)... | Is there any way to format a complete python buffer in emacs with a key press? | [
"",
"python",
"emacs",
"formatter",
""
] |
An inline function is just a request to the compiler to insert the complete body of the function at every place in the code where that function is called.
But how does the compiler decide whether it should inline a call or not? What algorithm/mechanism does it use to decide?
Thanks,
Naveen | Some common aspects:
* Compiler option (debug builds usually don't inline, and most compilers have options to override the inline declaration to try to inline all, or none)
* suitable calling convention (e.g. varargs functions usually aren't inlined)
* suitable for inlining: depends on size of the function, call frequency of the function, gains through inlining, and optimization settings (speed vs. code size). Often, tiny functions have the most benefits, but a huge function may be inlined if it is called just once
* inline call depth and recursion settings
The 3rd is probably the core of your question, but that's really "compiler-specific heuristics" - you need to check the compiler docs, but usually they won't give many guarantees. MSDN has some (limited) information for MSVC.
Beyond trivialities (e.g. simple getters and very primitive functions), inlining *as such* isn't very helpful anymore. The cost of the call instruction has gone down, and branch prediction has greatly improved.
The great opportunity for inlining is removing code paths that the compiler knows won't be taken - as an extreme example:
```
inline int Foo(bool refresh = false)
{
if (refresh)
{
// ...extensive code to update m_foo
}
return m_foo;
}
```
A good compiler would inline `Foo(false)`, but not `Foo(true)`.
With Link Time Code Generation, `Foo` could reside in a .cpp (without a `inline` declararion), and `Foo(false)` would still be inlined, so again inline has only marginal effects here.
---
To summarize: There are few scenarios where you should attempt to take manual control of inlining by placing (or omitting) inline statements. | All I know about inline functions (and a lot of other c++ stuff) is [here](http://www.parashift.com/c++-faq-lite/inline-functions.html).
Also, if you're focusing on the heuristics of each compiler to decide wether or not inlie a function, that's implementation dependant and you should look at each compiler's documentation. Keep in mind that the heuristic could also change depending on the level of optimitation. | Inline Function (When to insert)? | [
"",
"c++",
""
] |
I have a very simple form along with two database tables.
In this form is a ComboBox, which reads the first table `tblProjects`. It displays a "Project Name" to the user and when selected, filters a DataGridView, which reads its data from the second table: `tblData`.
`tblData` does not contain "Project Name" but instead a Guid that both tables share. Each project has a unique Guid, ie 10 projects = 10 Guids.
So naturally, when the table is filtered, it displays the data from that project, however "Project Name" is obviously not one of the values available in that DataGridView, as again, it reads from `tblData`.
Is it possible to replace the Guid that is displayed within that DataGridView with the corresponding "Project Name"? | It's possible to add data columns from other DataTables to a DataView / DataTable which is bound to a DataGrid, but building a JOIN at the SQL / LINQ level would be the better solution. | I am not sure how you are getting your data back, but you should be joining to that other table and making the project name part of the result set.
If you can provide more information on how you are retrieving the data it would make this easier to answer. | C# - DataGridView - have one column read from another database table? | [
"",
"c#",
"datagridview",
""
] |
In a div, I have some checkboxes. When I push a button, I'd like to get the names of all the checked checkboxes. Could you tell me how to do this?
```
<div id="MyDiv">
....
<td><%= Html.CheckBox("need_" + item.Id.ToString())%></td>
...
</div>
```
Thanks, | ```
$(document).ready(function() {
$('#someButton').click(function() {
var names = [];
$('#MyDiv input:checked').each(function() {
names.push(this.name);
});
// now names contains all of the names of checked checkboxes
// do something with it
});
});
``` | Since nobody has mentioned this..
If all you want is an array of values, an easier alternative would be to use the [**`.map()`**](http://api.jquery.com/map/) method. Just remember to call `.get()` to convert the jQuery object to an array:
[**Example Here**](http://jsfiddle.net/08oLtpcz/)
```
var names = $('.parent input:checked').map(function () {
return this.name;
}).get();
console.log(names);
```
```
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<div class="parent">
<input type="checkbox" name="name1" />
<input type="checkbox" name="name2" />
<input type="checkbox" name="name3" checked="checked" />
<input type="checkbox" name="name4" checked="checked" />
<input type="checkbox" name="name5" />
</div>
```
Pure JavaScript:
[**Example Here**](http://jsfiddle.net/3pmxh1fq/)
```
var elements = document.querySelectorAll('.parent input:checked');
var names = Array.prototype.map.call(elements, function(el, i) {
return el.name;
});
console.log(names);
```
```
<div class="parent">
<input type="checkbox" name="name1" />
<input type="checkbox" name="name2" />
<input type="checkbox" name="name3" checked="checked" />
<input type="checkbox" name="name4" checked="checked" />
<input type="checkbox" name="name5" />
</div>
``` | Get checkbox list values with jQuery | [
"",
"javascript",
"jquery",
""
] |
My standalone smallish C# project requires a moderate number (ca 100) of (XML) files which are required to provide domain-specific values at runtime. They are not required to be visible to the users. However I shall need to add to them or update them occasionally which I am prepared to do manually (i.e. I don't envisage a specific tool, especially as they may be created outside the system).
I would wish them to be relocatable (i.e. to use relative filenames). What options should I consider for organizing them and what would be the calls required to open and read them?
The project is essentially standalone (not related to web services, databases, or other third-party applications). It is organised into a small number of namespaces and all the logic for the files can be confined to a single namespace.
=========
I am sorry for being unclear. I will try again. In a Java application it is possible to include resource files which are read relative to the classpath, not to the final \*.exe. I believe there is a way of doing a similar thing in C#.
=========
I believe I should be using something related to RESX. See (RESX files and xml data <https://stackoverflow.com/posts/1205872/edit>). I can put strings in a resx file, but this is tedious and error-prone, and I would prefer to copy them into the appropriate location.
I am sorry to be unclear, but I am not quite sure how to ask the question.
=========
The question appears to be very close to ([C# equivalent of getClassLoader().getResourceAsStream(...)](https://stackoverflow.com/questions/474055/c-equivalent-of-getclassloader-getresourceasstream)). I would like to be able to add the files in VisualStudio - my question is where do I put them and how do I indicate they are resources? | There is a detailed answer on <http://www.attilan.com/2006/08/accessing_embedded_resources_u.php>. It appears that the file has to be specified as an EmbeddedResource. I have not yet got this to work but it is what I want. | If you put them in a subfolder relative to your executable, say `.\Config` you would be able to access them with `File.ReadAllText(@"Config\filename.xml")`.
If you have an ASP.NET application you could put them inside the special `App_Data` folder and access them with `File.ReadAllText(Server.MapPath("~/App_Data/filename.xml"))` | How should I organize project-specific read-only files in c# | [
"",
"c#",
"file",
"organization",
""
] |
We've got an ajax request that takes approx. 30 seconds and then sends the user to a different page. During that time, we of course show an ajaxy spinner indicator, but the browser can also "appear" stuck because the browser client isn't actually working or showing its own loading message.
Is there an easy way to tell all major browsers to look busy with a JS command?
Thanks,
Chad | Do you need to use AJAX in this situation? Could you instead post/put to another page whose whole purpose is to process the request and once finished redirect to the destination page?
You could still use some JS to pop the spinner, and since you're posting to another page, the brower will display its "native busy indicator". The browser should never show the middle page, once the request has been processed the response gets redirected to the destination. | You could set the CSS `cursor` property of `body` to 'wait'. With prototype this would be like:
```
$(document.body).setStyle({cursor: 'wait'});
```
I *believe* this is the jQuery code, someone please correct me if I'm wrong as I am not a jQuery expert:
```
$("body").css("cursor", "wait");
```
This will make the entire page show an hourglass mouse cursor on Windows and a spinning watch cursor on Mac OS. | Browser "Busy State" with Ajax | [
"",
"javascript",
"ajax",
"browser",
""
] |
I have a C++ class in which many of its member functions have a common set of operations. Putting these common operations in a separate function is important for avoiding redundancy, but where should I place this function ideally? Making it a member function of the class is not a good idea since it makes no sense as a member function of the class, and putting it as a lone function in a header file also doesn't seem to be a nice option.
Any suggestion regarding this rather design question? | If the "set of operations" can be encapsulated in a function that is not inherently tied to the class in question then it probably should be a free function (perhaps in an appropriate namespace).
If it's somehow tied to the class but doesn't require a class instance it should probably be a `static` member function, probably a `private` function if it doesn't form part of the class interface. | Make it a free function in an anonymous namespace in the cpp file that defines the functions that use it:
```
namespace {
int myHelperFunction(int size, Bar &target) {
...
}
}
int Foo::doTarget(Bar &target) {
return myHelperFunction(this->size, target);
}
template <typename IT>
int Foo::doTargets(IT first, IT last, int size) {
size += this->size;
int total = 0;
while (first != last) {
total += myHelperFunction(size, *first);
++first;
}
return total;
}
```
or whatever.
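An alternative, if the helper should stay tied to the class (for instance because several translation units define members that use it), is a private static member function. A minimal sketch, with illustrative names not taken from the question:

```cpp
// Sketch of the private-static-member variant; Foo, helper, and size_
// are made-up names for illustration only.
class Foo {
public:
    explicit Foo(int size) : size_(size) {}
    int doTarget(int x) const { return helper(size_, x); }
    int doOther(int x) const { return helper(size_ * 2, x); }
private:
    // needs no instance state, so it can be static; keeping it
    // private keeps the shared code out of the class interface
    static int helper(int size, int x) { return size + x; }
    int size_;
};
```

Being `static`, it needs no instance, and being `private`, it never becomes part of the class interface.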
This is assuming a simple setup where your member functions are declared in one header file, and defined in one translation unit. If it's more complicated, you could either make it a private static member function of the class, and define it in one of the translation units containing member function definitions (or add a new one), or else just give it its own header since you're decomposing things a long way into files already. | Placement of a method in a Class | [
"",
"c++",
""
] |
I'm trying to set the HTML of an Iframe at runtime, from code behind.
In my aspx page i have:
```
<asp:Button ID="btnChange" runat="server" Text="Change iframe content"
onclick="btnChange_Click" />
<br />
<iframe id="myIframe" runat="server" />
```
in the code behind:
```
protected void btnChange_Click(object sender, EventArgs e)
{
myIframe.InnerHtml = "<h1>Contents Changed</h1>";
}
```
When I run this, it posts back but doesn't change the myIframe contents at all...
What am I doing wrong?
---
I need to do this because I'm implementing 3D Secure into my checkout process..
basically:
1) customer enters credit card details
2) form is submitted; it checks with the payment gateway whether 3D Secure is required. If so, a URL is generated for the bank's secure location to enter information
3) I create a POST request to this URL that contains a long security token and a few other bits of information. I get hold of the HTML returned from this POST request, and need to display it in an iFrame.
Heres what the documentation says to do:
```
<html>
<head>
<title>Please Authenticate</title>
</head>
<body onload="OnLoadEvent();">
<form name="downloadForm" action="https://mybank.com/vbyv/verify" method="POST">
<input type="hidden" name="PaReq" value="AAABBBBCCCCHHHHHH=">
<input type="hidden" name="TermUrl" value="https:// www. MyWidgits.Com/next.cgi">
<input type="hidden" name="MD" value="200304012012a">
</form>
<script language="Javascript"> <!-- function OnLoadEvent(){ document.downloadForm.target = "ACSframe"; document.downloadForm.submit(); } //--> </script>
<!-- MERCHANT TO FILL IN THEIR OWN BRANDING HERE -->
<iframe src="blank.htm" name="ACSframe" width="390" height="450" frameborder="0">
</iframe>
<!-- MERCHANT TO FILL IN THEIR OWN BRANDING HERE -->
</body>
</html>
``` | You can try this:
```
protected void btnChange_Click(object sender, EventArgs e)
{
myIframe.Attributes["src"] = "pathtofilewith.html"
}
```
or maybe this will work too:
```
protected void btnChange_Click(object sender, EventArgs e)
{
myIframe.Attributes["innerHTML"] = "htmlgoeshere"
}
``` | There's no innerHTML attribute for an iFrame. However, since HTML 5.0, there's a new **srcdoc** attribute. <http://www.w3schools.com/tags/tag_iframe.asp>
> Value: *HTML\_code*
>
> Description: Specifies the HTML content of the page to show in the < iframe >
Which you could use like this:
```
protected void btnChange_Click(object sender, EventArgs e)
{
myIframe.Attributes["srcdoc"] = "<h1>Contents Changed</h1>";
}
``` | Changing an IFrames InnerHtml from codebehind | [
"",
"c#",
"asp.net",
"iframe",
"3d-secure",
""
] |
I have a form that generates the following markup if there is one or more errors on submit:
```
<ul class="memError">
<li>Error 1.</li>
<li>Error 2.</li>
</ul>
```
I want to set this element as a modal window that appears after submit, to be closed with a click. I have jquery, but I can't find the right event to trigger the modal window. Here's the script I'm using, adapted from [an example I found here](https://stackoverflow.com/questions/1068586/jquery-lightbox-modal-windows-for-a-pop-up-web-screen):
```
<script type="text/javascript">
//<![CDATA[
$(document).ready(function() {
$('.memError').load(function() {
//Get the screen height and width
var maskHeight = $(document).height();
var maskWidth = $(window).width();
//Set heigth and width to mask to fill up the whole screen
$('#mask').css({
'width': maskWidth,
'height': maskHeight
});
//transition effect
$('#mask').fadeIn(1000);
$('#mask').fadeTo("slow", 0.8);
//Get the window height and width
var winH = $(window).height();
var winW = $(window).width();
//Set the popup window to center
$(id).css('top', winH / 2 - $(id).height() / 2);
$(id).css('left', winW / 2 - $(id).width() / 2);
//transition effect
$(id).fadeIn(2000);
});
//if close button is clicked
$('.memError').click(function(e) {
$('#mask').hide();
$('.memError').hide();
});
});
//]]>
</script>
```
I've set styles for `#mask` and `.memError` pretty much identical to the [example](https://stackoverflow.com/questions/1068586/jquery-lightbox-modal-windows-for-a-pop-up-web-screen), but I can't get anything to appear when I load the `ul.memError`. I've tried other [events](http://docs.jquery.com/Events) trying to muddle through, but I don't yet have the grasp of javascript needed for this.
Can anyone point me in the right direction? | If this is a plain old form submit, then just check `$('.memError').length > 0` on document.ready; if it's true, then do the rest. You don't need to add a load event handler, as the ul is already loaded. If it's an ajax submit, then you should be using the success event of the [jquery forms plugin](http://malsup.com/jquery/form/) | I played around with live events and such but I just can't seem to find a way to trigger it. You could use a bit of inline code when you insert the new items to trigger a custom event.
Not sure if that would fit with your implementation or not. | Jquery: How do I trigger a modal window just from the existence of an element | [
"",
"javascript",
"jquery",
"modal-dialog",
""
] |
I have a question.
I was recently writing some javascript code in Dreamweaver, and in its code completion, every time I dealt with an array of elements it would give me array.item(n) rather than array[n].
What is the difference between those two?
Some example code:
```
function hideAllSubMenu(){
var submenu = document.getElementsByTagName("div");
for(var i = 0; i < submenu.length; i++)
{
if(submenu.item(i).className == "submenu_wrap")
submenu.item(i).style.display = "none";
}
}
```
However, it can be written in such a manner as well:
```
function hideAllSubMenu(){
var submenu = document.getElementsByTagName("div");
for(var i = 0; i < submenu.length; i++)
{
        if(submenu[i].className == "submenu_wrap")
            submenu[i].style.display = "none";
}
}
```
It is somewhat confusing for somebody like me who is taking his first steps in learning core javascript.
Can somebody explain to me what the difference is? | The *item* method is available on some node lists of the ***DOM***.
The [] is the array accessor.
So you can use *item* when manipulating the DOM, but not on a normal array.
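The distinction is easy to demonstrate outside the browser; a NodeList-style object (sketched below as a plain object, since no DOM is needed) supports both accessors, while a normal array only supports brackets:

```javascript
// minimal stand-in for a NodeList: indexed properties plus an item() method
var listLike = {
  0: "a",
  1: "b",
  length: 2,
  item: function (i) { return this[i]; }
};

console.log(listLike.item(1) === listLike[1]); // true, same element either way
console.log(typeof [].item); // "undefined", plain arrays have no item()
```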
Generally I would suggest always using [], as it just works in all cases. | I think you meant `submenu[i]` in the second example; item(n) and [n] are synonymous | simple question about difference between array.item(n) and array[n]? | [
"",
"javascript",
""
] |
Here is what I have done so far, but the length function isn't working.
```
import string
def main():
print " This program reads from a file and then prints out the"
print " line with the longest length the line ,or with the highest sum"
print " of ASCII values , or the line with the greatest number of words"
infile = open("30075165.txt","r")
for line in infile:
print line
infile.close()
def length():
maxlength = 0
infile = open("30075165.txt","r")
for line in infile:
linelength = lengthofline
if linelength > maxlength:
#If linelength is greater than maxlength value the new value is linelength
maxlength = linelength
linelength = line
print ,maxlinetext
infile.close()
``` | For Python 2.5 to 2.7.12
```
print max(open(your_filename, 'r'), key=len)
```
For Python 3 and up
```
print(max(open(your_filename, 'r'), key=len))
``` | ```
large_line = ''
large_line_len = 0
filename = r"C:\tmp\TestFile.txt"
with open(filename, 'r') as f:
for line in f:
if len(line) > large_line_len:
large_line_len = len(line)
large_line = line
print large_line
```
output:
```
This Should Be Largest Line
```
And as a function:
```
def get_longest_line(filename):
large_line = ''
large_line_len = 0
with open(filename, 'r') as f:
for line in f:
if len(line) > large_line_len:
large_line_len = len(line)
large_line = line
return large_line
print get_longest_line(r"C:\tmp\TestFile.txt")
```
Here is another way; you would need to wrap this in a try/except for various problems (an empty file, etc).
```
def get_longest_line(filename):
mydict = {}
for line in open(filename, 'r'):
mydict[len(line)] = line
return mydict[sorted(mydict)[-1]]
```
You also need to decide what happens when you have two 'winning' lines of equal length. Pick first or last? The former function will return the first, the latter will return the last.
File contains
```
Small Line
Small Line
Another Small Line
This Should Be Largest Line
Small Line
```
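Coming back to the tie question: with the dict approach, a first-wins rule falls out of `setdefault`, which only stores a length the first time it is seen. A sketch (the demo file content here is made up):

```python
import os
import tempfile

def get_longest_line_first_wins(filename):
    # setdefault keeps the FIRST line seen for each length,
    # so ties resolve to the earliest line in the file
    lengths = {}
    for line in open(filename, 'r'):
        lengths.setdefault(len(line), line)
    return lengths[max(lengths)] if lengths else ''

# throwaway demo file with two equally long "winning" lines
fd, path = tempfile.mkstemp()
with os.fdopen(fd, 'w') as f:
    f.write("first long line!\nshorter line\nfirst long line?\n")

winner = get_longest_line_first_wins(path)
os.remove(path)
print(winner.strip())  # first long line!
```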
## Update
The comment in your original post:
```
print " This program reads from a file and then prints out the"
print " line with the longest length the line ,or with the highest sum"
print " of ASCII values , or the line with the greatest number of words"
```
Makes me think you are going to scan the file for length of lines, then for ascii sum, then
for number of words. It would probably be better to read the file once and then extract what data you need from the findings.
```
def get_file_data(filename):
def ascii_sum(line):
return sum([ord(x) for x in line])
def word_count(line):
return len(line.split(None))
filedata = [(line, len(line), ascii_sum(line), word_count(line))
for line in open(filename, 'r')]
return filedata
```
This function will return a list of each line of the file in the format: `line, line_length, line_ascii_sum, line_word_count`
This can be used as so:
```
afile = r"C:\Tmp\TestFile.txt"
for line, line_len, ascii_sum, word_count in get_file_data(afile):
print 'Line: %s, Len: %d, Sum: %d, WordCount: %d' % (
line.strip(), line_len, ascii_sum, word_count)
```
to output:
```
Line: Small Line, Len: 11, Sum: 939, WordCount: 2
Line: Small Line, Len: 11, Sum: 939, WordCount: 2
Line: Another Small Line, Len: 19, Sum: 1692, WordCount: 3
Line: This Should Be Largest Line, Len: 28, Sum: 2450, WordCount: 5
Line: Small Line, Len: 11, Sum: 939, WordCount: 2
```
You can mix this with Steef's solution like so:
```
>>> afile = r"C:\Tmp\TestFile.txt"
>>> file_data = get_file_data(afile)
>>> max(file_data, key=lambda line: line[1]) # Longest Line
('This Should Be Largest Line\n', 28, 2450, 5)
>>> max(file_data, key=lambda line: line[2]) # Largest ASCII sum
('This Should Be Largest Line\n', 28, 2450, 5)
>>> max(file_data, key=lambda line: line[3]) # Most Words
('This Should Be Largest Line\n', 28, 2450, 5)
``` | How to open a file and find the longest length of a line and then print it out | [
"",
"python",
""
] |
I have a data set that is generated by a Zip Code range search:
```
$zips:
key -> value
11967 -> 0.5
11951 -> 1.3
```
The key is the Zip Code (Which I need to query the Database for), and the value is the miles from the user entered zip code. I need to take the key (Zip Code) and search the database, **preferably** using a MySQL query similar to my current one:
```
$getlistings = mysql_query("SELECT * FROM stores WHERE zip IN ($zips)");
```
The other alternative is to change the array somehow in my code. I tried looking in code for where the array is generated originally but I couldn't find it. Any help would be greatly appreciated!! Thanks :) | [`array_keys`](http://docs.php.net/array_keys) should be what you're looking for.
```
$zip = array_keys($zips); # gives you simple array(11967, 11951);
implode(', ', $zip); # results in: '11967, 11951'
``` | You could convert the array keys to a SQL-compatible string. For example:
```
'11967', '11951'
```
and then use the string in the query.
Since the SQL query doesn't know what a PHP array is, and there's no good way (that I know of) to extract just the keys and surround them in quotes, this may be your best bet.
EDIT: As Ionut G. Stan wrote (and gave an example for), using the implode and *array\_map* functions will get you there. However, I *believe* the solution provided will only work if your column definition is numeric. Character columns would require that elements be surrounded by apostrophes in the IN clause. | Using Array "Keys" In a MySQL WHERE Clause | [
"",
"php",
"mysql",
"arrays",
""
] |
I would like to ask which IDE I should use for developing applications for Google App Engine with Python.
Is [Eclipse](http://code.google.com/intl/it-IT/appengine/articles/eclipse.html) suitable or is there any other development environment better?
Please give me some advice!
Thank you! | Eclipse with the [PyDev](http://pydev.sourceforge.net/) plugin is very nice. Recent versions even go out of their way to support App Engine, with builtin support for uploading your project, etc without having to use the command line scripts.
See the [Pydev blog](http://pydev.blogspot.com/) for more documentation on the App Engine integration. | I think the answers you are looking for are here
[Best opensource IDE for building applications on Google App Engine?](https://stackoverflow.com/questions/495579/best-opensource-ide-for-building-applications-on-google-app-engine) | Which development environment should I use for developing Google App Engine with Python? | [
"",
"python",
"google-app-engine",
"ide",
""
] |
Is there any way to determine which version of Firebird SQL is running? Using SQL or code (Delphi, C++). | If you want to find it via SQL you can use [get\_context](http://www.firebirdsql.org/refdocs/langrefupd20-get-context.html) to find the engine version it with the following:
```
SELECT rdb$get_context('SYSTEM', 'ENGINE_VERSION')
as version from rdb$database;
```
you can read more about it here [firebird faq](http://www.firebirdfaq.org/faq223/), but it requires Firebird 2.1 I believe. | Two things you can do:
* Use the Services API to query the server version, the call is [`isc_service_query()`](http://www.ibphoenix.com/main.nfs?a=ibphoenix&s=1247418314:3809&page=ibp_60_api_iscsq_fs) with the `isc_info_svc_server_version` parameter. Your preferred Delphi component set should surface a method to wrap this API.
For C++ there is for example [IBPP](http://www.ibpp.org) which has `IBPP::Service::GetVersion()` to return the version string.
What you get back with these is the same string that is shown in the control panel applet.
* If you need to check whether certain features are available it may be enough (or even better) to execute statements against the system tables to check whether a given system relation or some field in that relation is available. If the ODS of the database is from an older version some features may not be supported, even though the server version is recent enough.
The ODS version can also be queried via the API, use the `isc_database_info()` call. | Ways to determine the version of Firebird SQL? | [
"",
"c++",
"delphi",
"firebird",
""
] |
I want to provide my visitors the ability to see images in high quality. Is there any way I can detect the window size?
Or better yet, the viewport size of the browser with JavaScript? See green area here:
[](https://i.stack.imgur.com/zYrB7.jpg) | ## **Cross-browser** [`@media (width)`](http://dev.w3.org/csswg/mediaqueries/#width) and [`@media (height)`](http://dev.w3.org/csswg/mediaqueries/#height) values
```
let vw = Math.max(document.documentElement.clientWidth || 0, window.innerWidth || 0)
let vh = Math.max(document.documentElement.clientHeight || 0, window.innerHeight || 0)
```
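Those two lookups can be wrapped in a small helper. Passing `window` and `document` in as parameters is an assumption made here so the function can also be exercised with stand-in objects:

```javascript
function viewportSize(win, doc) {
  var de = (doc && doc.documentElement) || {};
  return {
    width: Math.max(de.clientWidth || 0, (win && win.innerWidth) || 0),
    height: Math.max(de.clientHeight || 0, (win && win.innerHeight) || 0)
  };
}

// stand-in objects mimic a 1280x720 browser with a 15px vertical scrollbar
var size = viewportSize(
  { innerWidth: 1280, innerHeight: 720 },
  { documentElement: { clientWidth: 1265, clientHeight: 720 } }
);
console.log(size.width, size.height); // 1280 720
```

In a browser this would be called as `viewportSize(window, document)`.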
## [`window.innerWidth`](https://developer.mozilla.org/en-US/docs/Web/API/Window/innerWidth) and [`window.innerHeight`](https://developer.mozilla.org/en-US/docs/Web/API/Window/innerHeight)
* gets [CSS viewport](http://www.w3.org/TR/CSS2/visuren.html#viewport) `@media (width)` and `@media (height)` which include scrollbars
* `initial-scale` and zoom [variations](https://github.com/ryanve/verge/issues/13) may cause mobile values to **wrongly** scale down to what PPK calls the [visual viewport](http://www.quirksmode.org/mobile/viewports2.html) and be smaller than the `@media` values
* zoom may cause values to be 1px off due to native rounding
* `undefined` in IE8-
## [`document.documentElement.clientWidth`](https://developer.mozilla.org/en-US/docs/Web/API/CSS_Object_Model/Determining_the_dimensions_of_elements#What.27s_the_size_of_the_displayed_content.3F) and `.clientHeight`
* equals CSS viewport width **minus** scrollbar width
* matches `@media (width)` and `@media (height)` when there is **no** scrollbar
* [same as](https://github.com/jquery/jquery/blob/1.9.1/src/dimensions.js#L12-L17) `jQuery(window).width()` which [jQuery](https://api.jquery.com/width/) *calls* the browser viewport
* [available cross-browser](http://www.quirksmode.org/mobile/tableViewport.html)
* [inaccurate if doctype is missing](https://github.com/ryanve/verge/issues/22#issuecomment-341944009)
---
## Resources
* [Live outputs for various dimensions](http://ryanve.com/lab/dimensions/)
* [**verge**](http://github.com/ryanve/verge) uses cross-browser viewport techniques
* [**actual**](http://github.com/ryanve/actual) uses `matchMedia` to obtain precise dimensions in any unit | [jQuery dimension functions](http://api.jquery.com/category/dimensions/)
`$(window).width()` and `$(window).height()` | How to get the browser viewport dimensions? | [
"",
"javascript",
"cross-browser",
"viewport",
""
] |
Below is a quick example of what I am doing. Basically, I have multiple interfaces that can be implemented by 1 class or by separate classes, so I store each one in the application. My only question is about the variables myInterface, yourInterface, and ourInterface. **Do they reference the same object or are there 3 different objects?**
```
interface IMyInterface
{
void MyFunction();
}
interface IYourInterface
{
void YourFunction();
}
interface IOurInterface
{
void OurFunction();
}
public class MainImplementation : IMyInterface, IYourInterface, IOurInterface
{
public void MyFunction() { }
public void YourFunction() { }
public void OurFunction() { }
}
private static IMyInterface myInterface;
private static IYourInterface yourInterface;
private static IOurInterface ourInterface;
static void Main(string[] args)
{
myInterface = new MainImplementation() as IMyInterface;
yourInterface = myInterface as IYourInterface;
ourInterface = myInterface as IOurInterface;
}
```
Bonus: Is there a better way to do this? | They all reference the same object. So changes to them in the form of:
```
ourInterface.X = ...
```
Will be reflected in 'all views'.
Effectively what you are doing with your casting (and I presume you meant your last one to be 'as IOurInterface') is giving a different 'view' of the data. In this case, each interface opens up one function each. | They reference the same instance. There *is* only one instance. | Implementation of multiple interfaces and object instances in .Net | [
"",
"c#",
".net",
"memory-management",
""
] |
I was trying to come up with obscure test cases for an alternative open-source JVM I am helping with ([Avian](http://oss.readytalk.com)) when I came across an interesting bit of code, and I was surprised that it didn't compile:
```
public class Test {
public static int test1() {
int a;
try {
a = 1;
return a; // this is fine
} finally {
return a; // uninitialized value error here
}
}
public static void main(String[] args) {
int a = test1();
}
}
```
The most obvious code path (the only one that I see) is to execute a = 1, "attempt" to return a (the first time), then execute the finally, which *actually* returns a. However, javac complains that "a" might not have been initialized:
```
Test.java:8: variable a might not have been initialized
return a;
^
```
The only thing I can think of that might cause / allow a different code path is if an obscure runtime exception were to occur after the start of the try but before the value 1 is assigned to a - something akin to an OutOfMemoryError or a StackOverflowException, but I can't think of any case where these could possibly occur at this place in the code.
Can anyone more familiar with the specifics of the Java standard shed some light on this? Is this just a case where the compiler is being conservative - and therefore refusing to compile what would otherwise be valid code - or is something stranger going on here? | It may seem counter intuitive that an exception could occur on the a=1 line, but a JVM error could occur. Thus, leaving the variable a uninitialized. So, the compiler error makes complete sense. This is that *obscure* runtime error that you mentioned. However, I would argue that an OutOfMemoryError is far from obscure and should be at least thought about by developers. Furthermore, remember that the state that sets up the OutOfMemoryError could happen in another thread and the one action that pushes the amount of heap memory used past the limit is the assignment of the variable a.
Anyways, since you are looking at compiler design, I'm also assuming that you already know how silly it is to return values in a finally block. | The Java Language Specification requires that a variable is assigned before it is used. The `JLS` defines specific rules for that known as "Definite Assignment" rules. All Java compilers need to adhere to them.
[`JLS 16.2.15`](http://java.sun.com/docs/books/jls/third%5Fedition/html/defAssign.html#16.2.15):
> V is definitely assigned before the finally block iff V is definitely assigned before the try statement.
In another words, when considering the finally statement, the try and catch block statements within a `try-catch-finally` statement assignments are not considered.
Needless to say, that specification is being very conservative here, but they would rather have the specification be simple, if a bit limited (believe me, the rules are already complicated), than lenient but hard to understand and reason about.
Compilers have to follow these Definite Assignment rules, so all compilers issue the same errors. Compilers aren't permitted to perform any extra analysis than the `JLS` specifies to suppress any error. | Java uninitialized variable with finally curiosity | [
"",
"java",
"variables",
"finally",
"initialization",
""
] |
Is there a tricky way to show the entire stack trace (function + line) for an exception in C++, much like in Java and C#?
Can we do something with macros to accomplish that for windows and linux-like platforms? | On Windows it can be done using the Windows DbgHelp API, but to get it exactly right requires lots of experimenting and twiddling. See <http://msdn.microsoft.com/en-us/library/ms679267(VS.85).aspx> for a start. I have no idea how to implement it for other platforms. | Not without either platform specific knowledge or addition of code in each function. | How to implement a stacktrace in C++ (from throwing to catch site)? | [
"",
"c++",
"exception",
"stack-trace",
""
] |
I get this error when I try to use one of the py2exe samples with py2exe.
```
File "setup.py", line 22, in ?
import py2exe
ImportError: no module named py2exe
```
I've installed py2exe with the installer, and I use python 2.6. I have downloaded the correct installer from the site (The python 2.6 one.)
My path is set to C:\Python26 and I can run normal python scripts from within the command prompt.
Any idea what to do?
Thanks.
Edit: I had python 3.1 installed first but removed it afterwards. Could that be the problem? | Sounds like something has installed Python 2.4.3 behind your back, and set that to be the default.
Short term, try running your script explicitly with Python 2.6 like this:
```
c:\Python26\python.exe setup.py ...
```
Long term, you need to check your system PATH (which it sounds like you've already done) and your file associations, like this:
```
C:\Users\rjh>assoc .py
.py=Python.File
C:\Users\rjh>ftype Python.File
Python.File="C:\Python26\python.exe" "%1" %*
```
Simply removing Python 2.4.3 might be a mistake, as presumably something on your system is relying on it. Changing the PATH and file associations to point to Python 2.6 *probably* won't break whatever thing that is, but I couldn't guarantee it. | Seems like you need to download the proper [py2exe](http://sourceforge.net/projects/py2exe/) distribution.
Check whether your `c:\Python26\Lib\site-packages\` contains a `py2exe` folder. | ImportError: no module named py2exe | [
"",
"python",
"py2exe",
""
] |
I am developing a website using VS 2008 (C#). My current mission is to develop a module that should perform the following tasks:
* Every 15 minutes a process need to communicate with the database to find out whether a new user is added to the "User" table in the database through registration
* If it finds an new entry, it should add that entry to an xml file (say `NewUsers18Jan2009.xml`).
In order to achieve this, which of the following one is most appropriate?
1. Threads
2. Windows Service
3. Other
Are there any samples available to demonstrate this? | Separate this task from your website. Everything website does goes through webserver. Put the logic into class library (so you can use it in the future if you will need to ad on-demand checking), and use this class in console application. Use Windows “Scheduled task” feature and set this console app to run every 15 minutes. This is far better solution than running scheduled task via IIS. | It doesn't sound like there's any UI part to your task. If that's the case, use either a windows service, or a scheduled application. I would go with a service, because it's easier to control remotely.
I fail to see a connection to a web site here... | ASP.NET- How to monitor Database Tables Periodically? | [
"",
"c#",
"asp.net",
"background",
"scheduling",
""
] |
**Update:**
I finally figured out that "keypress" has better compatibility than "keydown" or "keyup" on the Linux platform. I just changed "keyup"/"keydown" to "keypress", and everything went well.
I don't know what the reason is, but it is the solution for me. Thanks to all who responded to my question.
--
I have some code that needs to detect a key press event (I have to know when the user presses Enter) with jQuery, and here is the code in JavaScript:
```
j.input.bind("keyup", function (l) {
if (document.selection) {
g._ieCacheSelection = document.selection.createRange()
}
}).bind("keydown", function(l) {
//console.log(l.keyCode);
if (l.keyCode == 13) {
if(l.ctrlKey) {
g.insertCursorPos("\n");
return true;
} else {
var k = d(this),
n = k.val();
if(k.attr('intervalTime')) {
//alert('can not send');
k.css('color','red').val('Dont send too many messages').attr('disabled','disabled').css('color','red');
setTimeout(function(){k.css('color','').val(n).attr('disabled','').focus()},1000);
return
}
if(g_debug_num[parseInt(h.buddyInfo.id)]==undefined) {
g_debug_num[parseInt(h.buddyInfo.id)]=1;
}
if (d.trim(n)) {
var m = {
to: h.buddyInfo.id,
from: h.myInfo.id,
//stype: "msg",
body: (g_debug_num[parseInt(h.buddyInfo.id)]++)+" : "+n,
timestamp: (new Date()).getTime()
};
//g.addHistory(m);
k.val("");
g.trigger("sendMessage", m);
l.preventDefault();
g.sendStatuses("");
k.attr('intervalTime',100);
setTimeout(function(){k.removeAttr('intervalTime')},1000);
return
}
return
}
}
```
It works fine on Windows but on Linux, it fails to catch the Enter event sometimes. Can someone help?
**Updated:**
It seems fine if I only type English, but I have to use an input method to type Chinese. Could that be the problem? (Is jQuery unable to detect Enter if I use a Chinese input method?) | Try this
```
<html xmlns="http://www.w3.org/1999/xhtml">
<head id="Head1" >
<title></title>
<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js"></script>
</head>
<body>
<div>
<input id="TestTextBox" type="text" />
</div>
</body>
<script type="text/javascript">
$(function()
{
var testTextBox = $('#TestTextBox');
var code =null;
testTextBox.keypress(function(e)
{
code= (e.keyCode ? e.keyCode : e.which);
if (code == 13)
{
alert('Enter key was pressed.');
e.preventDefault(); // only suppress the default action for Enter; otherwise normal typing is blocked
}
});
});
</script>
</html>
``` | Use `if (l.keyCode == 10 || l.keyCode == 13)` instead of `if (l.keyCode == 13)`...
Under Windows, a new line consists of a `Carriage Return` (13) followed by a `Line Feed` (10).
Under \*nix, a new line consists of a `Line Feed` (10) only.
Under Mac, a new line consists of a `Carriage Return` (13) only. | Detect key event (Enter) with JQuery in Javascript (on Linux platform) | [
"",
"javascript",
"jquery",
"html",
""
] |
Ok, I'm new at C++. I got Bjarne's book, and I'm trying to follow the calculator code.
However, the compiler is spitting out an error about this section:
```
token_value get_token()
{
char ch;
do { // skip whitespace except '\n'
if(!std::cin.get(ch)) return curr_tok = END;
} while (ch!='\n' && isspace(ch));
switch (ch) {
case ';':
case '\n':
std::cin >> WS; // skip whitespace
return curr_tok=PRINT;
case '*':
case '/':
case '+':
case '-':
case '(':
case ')':
case '=':
return curr_tok=ch;
case '0': case '1': case '2': case '3': case '4': case '5':
case '6': case '7': case '8': case '9': case '.':
std::cin.putback(ch);
std::cin >> number_value;
return curr_tok=NUMBER;
default: // NAME, NAME=, or error
if (isalpha(ch)) {
char* p = name_string;
*p++ = ch;
while (std::cin.get(ch) && isalnum(ch)) *p++ = ch;
std::cin.putback(ch);
*p = 0;
return curr_tok=NAME;
}
error("bad token");
return curr_tok=PRINT;
}
}
```
The error it's spitting out is this:
```
calc.cpp:42: error: invalid conversion from ‘char’ to ‘token_value’
```
`token_value` is an enum that looks like:
```
enum token_value {
NAME, NUMBER, END,
PLUS='+', MINUS='-', MUL='*', DIV='/',
PRINT=';', ASSIGN='=', LP='(', RP=')'
};
token_value curr_tok;
```
My question is, how do I convert `ch` (from `cin`) to the associated enum value? | You can't implicitly cast from `char` to an `enum` - you have to do it explicitly:
```
return curr_tok = static_cast<token_value> (ch);
```
But be careful! If none of your `enum` values match your `char`, then it'll be hard to use the result :) | Note that the solutions given (i.e. telling you to use a `static_cast`) work correctly only because when the enum symbols were defined, the symbols (e.g. `PLUS`) were defined to have a physical/numeric value which happens to be equal to the underlying character value (e.g. `'+'`).
Another way (without using a cast) would be to use the switch/case statements to specify explicitly the enum value returned for each character value, e.g.:
```
case '*':
return curr_tok=MUL;
case '/':
return curr_tok=DIV;
``` | C++ enum from char | [
"",
"c++",
"enums",
"char",
""
] |
Does anyone know what compression to use in Java for creating KMZ files that have images stored within them? I tried using standard Java compression (and various modes, BEST\_COMPRESSION, DEFAULT\_COMPRESSION, etc), but my compressed files always come out slightly different from a working KMZ and don't load in Google Earth. It seems to be my PNG images in particular (the actual KML file seems to compress the same way).
Has anyone successfully created a kmz archive that links to local images (and gets stored in the files directory) from outside of google earth?
thanks
Jeff | The key to understanding this is the answer from @fraser, which is supported by this snippet from KML Developer Support:
> The only supported compression method is ZIP (PKZIP-compatible), so
> neither gzip nor bzip would work. KMZ files compressed with this
> method are fully supported by the API.
>
> *[KMZ in Google Earth API & KML Compression in a Unix environment](https://groups.google.com/forum/#!topic/google-earth-browser-plugin/Lsd7DbRUerg)*
Apache Commons has an archive handling library which would be handy for this: <http://commons.apache.org/proper/commons-vfs/filesystems.html> | KMZ is simply a zip file with a KML file and assets. For example, the `london_eye.kmz` kmz file contains:
```
$ unzip -l london_eye.kmz
Archive: london_eye.kmz
Length Date Time Name
-------- ---- ---- ----
451823 09-27-07 08:47 doc.kml
0 09-26-07 07:39 files/
1796 12-31-79 00:00 files/Blue_Tile.JPG
186227 12-31-79 00:00 files/Legs.dae
3960 12-31-79 00:00 files/Olive.JPG
1662074 12-31-79 00:00 files/Wheel.dae
65993 12-31-79 00:00 files/Wooden_Fence.jpg
7598 12-31-79 00:00 files/a0.gif
7596 12-31-79 00:00 files/a1.gif
7556 12-31-79 00:00 files/a10.gif
7569 12-31-79 00:00 files/a11.gif
7615 12-31-79 00:00 files/a12.gif
7587 12-31-79 00:00 files/a13.gif
7565 12-31-79 00:00 files/a14.gif
7603 12-31-79 00:00 files/a15.gif
7599 12-31-79 00:00 files/a16.gif
7581 12-31-79 00:00 files/a17.gif
7606 12-31-79 00:00 files/a18.gif
7613 12-31-79 00:00 files/a19.gif
7607 12-31-79 00:00 files/a2.gif
7592 12-31-79 00:00 files/a3.gif
7615 12-31-79 00:00 files/a4.gif
7618 12-31-79 00:00 files/a5.gif
7618 12-31-79 00:00 files/a6.gif
7578 12-31-79 00:00 files/a7.gif
7609 12-31-79 00:00 files/a8.gif
7603 12-31-79 00:00 files/a9.gif
57185 12-31-79 00:00 files/capsule.dae
310590 12-31-79 00:00 files/groundoverlay.jpg
224927 12-31-79 00:00 files/mechanism.dae
160728 12-31-79 00:00 files/shadowoverlay.jpg
33044 12-31-79 00:00 files/shed.dae
-------- -------
3310275 32 files
```
You can build this with java.util.zip, or even with `jar` if you want.
As far as the images go, they should not be compressed, since they already contain compressed data. You don't get any significant savings. | kmz compression for google earth images with java | [
"",
"java",
"compression",
"kml",
"google-earth",
"kmz",
""
] |
I am designing an OO object model with a plan to use NHibernate as the data access layer.
I'd like to know the best OO design when dealing with two entities that have a many to many relationship with each other (especially for easy integration with NHibernate).
The objects:
User - a single User can be related to multiple Subjects
Subject - a single Subject can be related to multiple Users
In SQL, this relationship is straightforward using a many-to-many table:
```
tblUser
userID
tblSubject
subjectID
tblUserSubject
userSubjectID
userID
subjectID
```
So, how should the *pure* objects be created? Should each object contain a collection of the other? Example:
```
class User
{
public int userID {get; set;}
public List<Subject> subjects {get; set;}
}
class Subject
{
public int subjectID {get; set;}
public List<User> users {get; set;}
}
```
Is there a better way to model this so that NHibernate can easily persist the relationships? | Found an article which you may find helpful.
<http://codebetter.com/blogs/peter.van.ooijen/archive/2008/05/29/nhibernate-many-to-many-collections-or-mapping-is-not-one-table-one-class.aspx>
To summarize, your solution looks very similar. For completeness, here is what the NHibernate mapping file might look like for your User object, using an NHibernate mapping collection (`<bag>`) for your lists of Users and Subjects.
```
<?xml version="1.0" encoding="utf-8" ?>
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="YourAssemblyName" namespace="YourNamespace">
<class name ="User" table="Users" proxy="User">
<id name="Id" type="Int32" column="idUser">
<generator class="identity"></generator>
</id>
<many-to-one name="CreatedBy" class="User" column="idUser"></many-to-one>
<bag name="subjects" table="tblUserSubject" lazy="false" >
<key column="idUser"></key>
<many-to-many class="Subject" column="idSubject"></many-to-many>
</bag>
</class>
</hibernate-mapping>
```
subjects is a bag which uses the table tblUserSubject. The key to this linking table is idUser (assuming that is what your identity column name is). The many to many class is the Subject class we just mapped. | I don't have an answer as far as what design will best jive with NHibernate, but I wanted to comment because this reminds me a lot of Rob Conery's discussion on Hanselminutes about Domain Driven Design.
My visceral reaction is that something isn't right if your *pure* User contains Subjects at the same time your Subjects contain Users. I'd want to narrow it down to one or the other. Interestingly, Conery was discussing his storefront app and the debate over whether a product has categories or does a category have products. It turned out that *neither* was the case - it was actually a tangential relationship that would best be handled by a service in the application. (At least, this was the best DDD way of implementing it because it's how his customer related the entities.)
So, DDD aside, I wonder if it would help you take a good hard look at what the true *pure* relationship is between Users and Subjects, and whether one even contains the other at all. Since I kinda don't like the idea of each of them containing the other, I would probably consider which child collection is used more often. When retrieving a User, do you often use its Subjects? Conversely, while a Subject may have users related to it, do you often use this collection when operating on a Subject object? If not, maybe the collection doesn't directly belong to it.
Again, I can't attest to what design would work best with NHibernate, but I think this is something worth considering. | OO question about many to many relationships (planning for NHibernate) | [
"",
"c#",
"nhibernate",
"oop",
""
] |
I am trying to improve my C++ by creating a program that will take a large quantity of numbers between 1 and 10^6. The buckets that store the numbers in each pass are held in an array of nodes (where node is a struct I created containing a value and a next-node attribute).
After sorting the numbers into buckets according to the least significant value, I have the end of one bucket point to the beginning of another bucket (so that I can quickly get the numbers being stored without disrupting the order). My code has no errors (either compile or runtime), but I've hit a wall regarding how I am going to solve the remaining 6 iterations (since I know the range of numbers).
The problem that I'm having is that initially the numbers were supplied to the radixSort function in the form of a int array. After the first iteration of the sorting, the numbers are now stored in the array of structs. Is there any way that I could rework my code so that I have just one for loop for the 7 iterations, or will I need one for loop that will run once, and another loop below it that will run 6 times before returning the completely sorted list?
```
#include <iostream>
#include <math.h>
using namespace std;
struct node
{
int value;
node *next;
};
//The 10 buckets to store the intermediary results of every sort
node *bucket[10];
//This serves as the array of pointers to the front of every linked list
node *ptr[10];
//This serves as the array of pointer to the end of every linked list
node *end[10];
node *linkedpointer;
node *item;
node *temp;
void append(int value, int n)
{
node *temp;
item=new node;
item->value=value;
item->next=NULL;
end[n]=item;
if(bucket[n]->next==NULL)
{
cout << "Bucket " << n << " is empty" <<endl;
bucket[n]->next=item;
ptr[n]=item;
}
else
{
cout << "Bucket " << n << " is not empty" <<endl;
temp=bucket[n];
while(temp->next!=NULL){
temp=temp->next;
}
temp->next=item;
}
}
bool isBucketEmpty(int n){
if(bucket[n]->next!=NULL)
return false;
else
return true;
}
//print the contents of all buckets in order
void printBucket(){
temp=bucket[0]->next;
int i=0;
while(i<10){
if(temp==NULL){
i++;
temp=bucket[i]->next;
}
else break;
}
linkedpointer=temp;
while(temp!=NULL){
cout << temp->value <<endl;
temp=temp->next;
}
}
void radixSort(int *list, int length){
int i,j,k,l;
int x;
for(i=0;i<10;i++){
bucket[i]=new node;
ptr[i]=new node;
ptr[i]->next=NULL;
end[i]=new node;
}
linkedpointer=new node;
//Perform radix sort
for(i=0;i<1;i++){
for(j=0;j<length;j++){
x=(int)(*(list+j)/pow(10,i))%10;
append(*(list+j),x);
printBucket();
}//End of insertion loop
k=0,l=1;
//Linking loop: Link end of one linked list to the front of another
for(j=0;j<9;j++){
if(isBucketEmpty(k))
k++;
if(isBucketEmpty(l) && l!=9)
l++;
if(!isBucketEmpty(k) && !isBucketEmpty(l)){
end[k]->next=ptr[l];
k++;
if(l!=9) l++;
}
}//End of linking for loop
cout << "Print results" <<endl;
printBucket();
for(j=0;j<10;j++)
bucket[i]->next=NULL;
cout << "End of iteration" <<endl;
}//End of radix sort loop
}
int main(){
int testcases,i,input;
cin >> testcases;
int list[testcases];
int *ptr=&list[0];
for(i=0;i<testcases;i++){
cin>>list[i];
}
radixSort(ptr,testcases);
return 0;
}
``` | I think you're severely overcomplicating your solution. You can implement radix using the single array received in the input, with the buckets in each step represented by an array of indices that mark the starting index of each bucket in the input array.
In fact, you could even do it recursively:
```
// Sort 'size' number of integers starting at 'input' according to the 'digit'th digit
// For the parameter 'digit', 0 denotes the least significant digit and increases as significance does
void radixSort(int* input, int size, int digit)
{
if (size == 0)
return;
int buckets[10]; // assuming decimal numbers
// Sort the array in place while keeping track of bucket starting indices.
// If bucket[i] is meant to be empty (no numbers with i at the specified digit),
// then let bucket[i+1] = bucket[i]
for (int i = 0; i < 10; ++i)
{
radixSort(input + buckets[i], buckets[i+1] - buckets[i], digit+1);
}
}
```
Of course `buckets[i+1] - buckets[i]` will cause a buffer overflow when `i` is 9, but I omitted the extra check for readability's sake; I trust you know how to handle that.
With that, you just have to call `radixSort(list, testcases, 0)` (using the array and element count from your `main`) and your array should be sorted. | To speed up the process with better memory management, create a matrix for the counts that get converted into indices by making a single pass over the array. Allocate a second temp array the same size as the original array, and radix sort between the two arrays until the array is sorted. If an odd number of radix sort passes is performed, then the temp array will need to be copied back to the original array at the end.
To further speed up the process, use base 256 instead of base 10 for the radix sort. This only takes 1 scan pass to create the matrix and 4 radix sort passes to do the sort. Example code:
```
typedef unsigned int uint32_t;
uint32_t * RadixSort(uint32_t * a, size_t count)
{
size_t mIndex[4][256] = {0}; // count / index matrix
uint32_t * b = new uint32_t [count]; // allocate temp array
size_t i,j,m,n;
uint32_t u;
for(i = 0; i < count; i++){ // generate histograms
u = a[i];
for(j = 0; j < 4; j++){
mIndex[j][(size_t)(u & 0xff)]++;
u >>= 8;
}
}
for(j = 0; j < 4; j++){ // convert to indices
m = 0;
for(i = 0; i < 256; i++){
n = mIndex[j][i];
mIndex[j][i] = m;
m += n;
}
}
for(j = 0; j < 4; j++){ // radix sort
for(i = 0; i < count; i++){ // sort by current lsb
u = a[i];
m = (size_t)(u>>(j<<3))&0xff;
b[mIndex[j][m]++] = u;
}
std::swap(a, b); // swap ptrs
}
delete[] b;
return(a);
}
``` | Radix Sort implemented in C++ | [
"",
"c++",
"algorithm",
"sorting",
"radix-sort",
""
] |
They say that building a session factory in NHibernate is expensive and that it should only happen once. I use a singleton approach for this, creating the factory the first time a session is requested.
My question: would there ever be a time when you should close the session factory? If so, when would one do this? | This is what I do in Java with Hibernate:
```
public class HibernateUtil
{
private static final SessionFactory sessionFactory;
static
{
try
{
// Create the SessionFactory from hibernate.cfg.xml
sessionFactory = new Configuration().configure().buildSessionFactory();
}
catch (Throwable ex)
{
// Make sure you log the exception, as it might be swallowed
System.err.println("Initial SessionFactory creation failed." + ex);
throw new ExceptionInInitializerError(ex);
}
}
public static SessionFactory getSessionFactory()
{
return sessionFactory;
}
}
```
You can free your SessionFactory when you don't need it anymore, I guess, but honestly I've never closed my session factory. | To AZ: This is referring to the SessionFactory, not the session. Though I wouldn't say there should only be one instance of SessionFactory; there should be one per unique configuration. For instance, if a single app is connecting to 2 different databases, then you need 2 different SessionFactory instances. | NHibernate SessionFactory | [
"",
"c#",
"nhibernate",
""
] |
I have been trying to create a decorator that can be used with both functions and methods in Python. This on its own is not that hard, but when creating a decorator that takes arguments, it seems to be.
```
class methods(object):
def __init__(self, *_methods):
self.methods = _methods
def __call__(self, func):
def inner(request, *args, **kwargs):
print request
return func(request, *args, **kwargs)
return inner
def __get__(self, obj, type=None):
if obj is None:
return self
new_func = self.func.__get__(obj, type)
return self.__class__(new_func)
```
The above code wraps the function/method correctly, but in the case of a method, the `request` argument is the instance it is operating on, not the first non-self argument.
Is there a way to tell whether the decorator is being applied to a function or a method, and deal with each case accordingly? | To expand on the `__get__` approach: this can be generalized into a decorator decorator.
```
class _MethodDecoratorAdaptor(object):
def __init__(self, decorator, func):
self.decorator = decorator
self.func = func
def __call__(self, *args, **kwargs):
return self.decorator(self.func)(*args, **kwargs)
def __get__(self, instance, owner):
return self.decorator(self.func.__get__(instance, owner))
def auto_adapt_to_methods(decorator):
"""Allows you to use the same decorator on methods and functions,
hiding the self argument from the decorator."""
def adapt(func):
return _MethodDecoratorAdaptor(decorator, func)
return adapt
```
In this way you can just make your decorator automatically adapt to the conditions it is used in.
```
def allowed(*allowed_methods):
@auto_adapt_to_methods
def wrapper(func):
def wrapped(request):
if request not in allowed_methods:
raise ValueError("Invalid method %s" % request)
return func(request)
return wrapped
return wrapper
```
Notice that the wrapper function is called on all function calls, so don't do anything expensive there.
Usage of the decorator:
```
class Foo(object):
@allowed('GET', 'POST')
def do(self, request):
print "Request %s on %s" % (request, self)
@allowed('GET')
def do(request):
print "Plain request %s" % request
Foo().do('GET') # Works
Foo().do('POST') # Raises
``` | The decorator is always applied to a function object -- have the decorator `print` the type of its argument and you'll be able to confirm that; and it should generally return a function object, too (which is already a decorator with the proper `__get__`!-) although there are exceptions to the latter.
I.e, in the code:
```
class X(object):
@deco
def f(self): pass
```
`deco(f)` is called within the class body, and, while you're still there, `f` is a function, not an instance of a method type. (The method is manufactured and returned in `f`'s `__get__` when later `f` is accessed as an attribute of `X` or an instance thereof).
Maybe you can better explain one toy use you'd want for your decorator, so we can be of more help...?
**Edit**: this goes for decorators with arguments, too, i.e.
```
class X(object):
@deco(23)
def f(self): pass
```
then it's `deco(23)(f)` that's called in the class body, `f` is still a function object when passed as the argument to whatever callable `deco(23)` returns, and that callable should still return a function object (generally -- with exceptions;-). | Using the same decorator (with arguments) with functions and methods | [
"",
"python",
"function",
"methods",
"arguments",
"decorator",
""
] |
Is it possible to have multiple drop down lists in asp.net mvc?
What I'm trying to do is have a drop down list, say with many colours, Red, Green, Blue, Black etc. Then next to it a text box which the user can enter a number.
However, there should then be a small + sign next to it so that another drop down list appears underneath, allowing the user to select another colour. The number of times they can add drop down boxes should ideally be unlimited. Is this possible? I know I can put hidden drop down lists underneath and then enable/show them when the user clicks the + button, but this would mean only a limited number of drop down lists!
Thanks | This has nothing to do with ASP.NET MVC; it is pure JavaScript. Use jQuery to add drop-down lists dynamically. | Sure, you can add as many pull-down menus to your page as you like (and it sounds like you are comfortable with the scripting to do so). Just make sure to add a unique name to each of those menus so you can access the values in your controller.
"",
"javascript",
""
] |
I can't figure out what is up with this.
I have a Scene class that has a vector of Entities and allows you to add and get Entities from the scene:
```
class Scene {
private:
// -- PRIVATE DATA ------
vector<Entity> entityList;
public:
// -- STRUCTORS ---------
Scene();
// -- PUBLIC METHODS ----
void addEntity(Entity); // Add entity to list
Entity getEntity(int); // Get entity from list
int entityCount();
};
```
My Entity class is as follows (output is for testing):
```
class Entity {
public:
virtual void draw() { cout << "No" << endl; };
};
```
And then I have a Polygon class that inherits from Entity:
```
class Polygon: public Entity
{
private:
// -- PRIVATE DATA ------
vector<Point2D> vertexList; // List of vertices
public:
// -- STRUCTORS ---------
Polygon() {}; // Default constructor
Polygon(vector<Point2D>); // Declare polygon by points
// -- PUBLIC METHODS ----
int vertexCount(); // Return number of vertices
void addVertex(Point2D); // Add vertex
void draw() { cout << "Yes" << endl; }; // Draw polygon
// -- ACCESSORS ---------
Point2D getVertex(int); // Return vertex
};
```
As you can see, it has a draw() method that should override the draw() method it inherits from the Entity class.
But it doesn't. When using the following code:
```
scene->getEntity(0).draw();
```
where entity 0 is a Polygon (or at least should be), it prints "No" from the parent method (as though it's not a Polygon, just an Entity). In fact, it doesn't seem to let me call any methods unique to Polygon without getting:
'*some method name*' : is not a member of 'Entity'
So any idea what's up?
Thanks for the help.
**UPDATE:**
So I've implemented the code given in the first answer, but I'm not sure how to add my polygon to the list. Something like this?
```
const tr1::shared_ptr<Entity>& poly = new Polygon;
poly->addVertex(Point2D(100,100));
poly->addVertex(Point2D(100,200));
poly->addVertex(Point2D(200,200));
poly->addVertex(Point2D(200,100));
scene->addEntity(poly);
```
I'm just not used to this shared\_ptr business. | I think that you need to post your calling code, but essentially the problem is this.
You have a concrete class `Polygon` deriving from another concrete class `Entity`. Your addEntity and getEntity functions take and return an `Entity` *by value*, so if you try to pass in or retrieve a `Polygon`, you will copy only the `Entity` part of that object (slicing it) and the information about the derived part of the object will be lost.
In addition you have a `vector` of `Entity`, which is a vector of base class objects, so you have no way of storing anything other than the base type of object.
If you need to have a collection of a mixed type of objects, but all derived from `Entity`, you may need to use dynamically created objects and some sort of smart pointer such as a `tr1::shared_ptr` or a `boost::shared_ptr`.
E.g.
```
class Scene {
private:
// -- PRIVATE DATA ------
vector< std::tr1::shared_ptr<Entity> > entityList;
public:
// -- STRUCTORS ---------
Scene();
// -- PUBLIC METHODS ----
void addEntity( const std::tr1::shared_ptr<Entity>& ); // Add entity to list
const std::tr1::shared_ptr<Entity> getEntity(int); // Get entity from list
int entityCount();
};
```
**Edit**
Your updated calling code is essentially correct, although using a local const reference to a shared pointer is a bit obscure.
I'd probably go with something like:
```
std::tr1::shared_ptr<Polygon> poly( new Polygon );
poly->addVertex(Point2D(100,100));
poly->addVertex(Point2D(100,200));
poly->addVertex(Point2D(200,200));
poly->addVertex(Point2D(200,100));
scene->addEntity(poly);
``` | chollida's comment is correct: you're pushing an object of type Polygon into a memory location meant for types Entity, and running into what is called *slicing*. The extra 'Polygon' info gets sliced away and all you have left is the Entity.
You should store pointers (or references, if possible) to base classes in these situations. | C++ Overriding Methods | [
"",
"c++",
"class",
"inheritance",
"methods",
"overriding",
""
] |
An interesting problem I've been pondering the past few days is how to copy one integer's bits into another integer at a given position in the destination integer. So, for example, given the destination integer `0xdeadbeef` and the source integer `0xabcd`, the idea would be to get a result of `0xabcdbeef` (given a destination position of 16 bits) or `0xdeabcdef` (given a destination position of 8 bits).
With the arbitrary limitation of avoiding conditionals or loops (allowing myself to use just mathematical/bitwise operations), I developed the following function (C++)
```
int setbits(int destination, int source, int at, int numbits)
{
int ones = ((1<<(numbits))-1)<<at;
return (ones|destination)^((~source<<at)&ones);
}
```
where `at` is the place where the source bits should be copied into the destination number (0-31) and `numbits` is the number of bits being copied from `source` (1-32). As far as I can tell, this algorithm works for all values except for `at` = 0 and `numbits` = 32 (the case when the entire destination integer is being overwritten by the source integer) due to the fact that 1<<32 results in 1 (since the shift wraps around) as opposed to 0.
My questions are:
1. How is this normally done? Are there any particularly notable algorithms used (by notable, I'm asking if there are any particularly efficient tricks that can be used to do this)?
2. Does my algorithm work as well as I think it does (that is, works for all values except at = 0 and numbits = 32)?
3. Related to 1), is there any way to do this only using mathematical/bitwise operators? The algorithm for all values is trivial using conditions or loops, so I'm not interested in that.
Algorithm design is usually a weak point for me, so I have no idea whether or not my algorithm is 'as good as it gets' when only using mathematical/bitwise operations. Thanks | I don't think it's the case that 1<<32 wraps (otherwise, why doesn't 2<<31 also wrap?); instead, I think that internally a modulus of 32 is applied to the second operand, so that 1<<32 is actually equivalent to 1<<0. Also, consider changing the parameter types from "int" to "unsigned int". To get the value of "ones" without running into the "1<<32" problem, you can do this:
```
unsigned int ones = (0xffffffff >> (32-numbits)) << at;
```
I don't believe there are any "standard" methods for this kind of operation. I'm sure there are other ways of using bitwise operators in different ways to achieve the same outcome, but your algorithm is as good as any.
Having said that, though, maintainability and documentation are also important. Your function would benefit from the algorithm being documented with a comment, especially to explain how you use the bitwise XOR -- which is clever, but not easy to understand at first glance. | I don't think it can be done more efficiently unless you write assembler.
You can improve the readability and solve your overflow problem by changing a few little things:
```
int setbits2(int destination, int source, int at, int numbits)
{
// int mask = ((1LL<<numbits)-1)<<at; // 1st approach
int mask = ((~0u)>>(sizeof(int)*8-numbits))<<at; // 2nd approach
return (destination&~mask)|((source<<at)&mask);
}
```
More efficient assembler version (VC++):
```
// 3rd approach
#define INT_SIZE 32
int setbits3(int destination, int source, int at, int numbits)
{ __asm {
mov ecx, INT_SIZE
sub ecx, numbits
or eax, -1
shr eax, cl
mov ecx, at
shl eax, cl // mask == eax
mov ebx, eax
not eax
and eax, destination
mov edx, source
shl edx, cl
and edx, ebx
or eax, edx
}}
```
* 1st approach: Slower on a 32-bit architecture
* 2nd approach: (~0u) and (sizeof(int)\*8) are calculated at compile time, so they cost nothing at runtime.
* 3rd approach: You save 3 ops *(memory accesses)* by writing it in assembler, but you will need to write ifdefs if you want to make it portable.
"",
"c++",
"algorithm",
"bit-manipulation",
""
] |
I have just had to use LINQ to SQL on an SQL Server 2000 database and I have noticed that it does not include all the "Extensibility Method Definitions" actions, why is this? | What exactly is missing? What are you seeing (or not)?
In particular, LINQ-to-SQL's strategy for the database (i.e. how to do paging etc on SQL2000 vs SQL2005 etc) is chosen at *runtime* based on the connection and the specific server (so it updates automatically when you install SQL Server 2008).
The code generation is based purely on the dbml, which doesn't really care about the server version (it is just xml - take a look).
If you are missing some `partial` methods, I wonder if you haven't accidentally detached your `partial class`es from the dbml-generated ones, perhaps by changing the namespace or their names. | Extensibility methods are only generated for tables/entities with primary keys ([source](https://social.msdn.microsoft.com/Forums/en-US/a5402b43-ed4c-4f74-82f3-b9135e71e5f9/extensibility-methods-missing?forum=linqtosql)).
Note that this also applies to SQL Views.
After dropping an entity onto the Linq to SQL designer you can nominate a column as the primary key in the Properties window. Regenerate the designer.cs file and the extensibility methods will now be created. | LINQ to SQL - Extensibility method definitions missing | [
"c#",
".net",
"linq-to-sql"
]
I'm trying to paint a simple bar chart via `C#` but I've never experimented with the Graphics and Drawing namespaces. I thought of generating a "start" and "end" graphic and then repeating an image somehow (to show a "length") but I have no idea how to do this.
I'd be really happy if you can point me in the right direction and/or if you have sample code to do this. | I've got to agree with Eros. There are lots of very good graphing libraries to accomplish what you want. The best I've come across:
* [Microsoft Chart Controls](http://code.msdn.microsoft.com/mschart) - There's even a Visual Studio [plugin](http://www.microsoft.com/downloads/details.aspx?familyid=1D69CE13-E1E5-4315-825C-F14D33A303E9&displaylang=en) and a good [tutorial](http://www.mikesdotnetting.com/Article/90/Microsoft-Chart-Controls-For-ASP.NET).
* [Flot](http://code.google.com/p/flot/) - This one is jQuery based, so good for web apps.
* [Google Chart](http://code.google.com/apis/chart/) - Simple API and even an [ASP.Net Control](http://www.codeplex.com/GoogleChartNet) | Alex, here is a very simple example to get you started. To test the code, just add a panel control to your form and create a paint event handler for it. (Double-clicking on the panel in the designer should do it by default.) Then replace the handler code with the code below.
The code draws five bars of arbitrary length across the panel and the bar widths and heights are related to the panel widths and heights. The code is arbitrary but is a good and simple way to introduce .Net drawing capabilities.
```
void Panel1Paint(object sender, PaintEventArgs e)
{
Graphics g = e.Graphics;
int objCount = 5;
for (int n=0; n<objCount; n++)
{
g.FillRectangle(Brushes.AliceBlue, 0, n*(panel1.Height/objCount),
panel1.Width/(n+1), panel1.Height/objCount);
g.DrawRectangle(new Pen(Color.Black), 0, n*(panel1.Height/objCount),
panel1.Width/(n+1), panel1.Height/objCount);
g.DrawString(n.ToString(), new Font("Arial", 10f), Brushes.Black,
2, 2+n*(panel1.Height/objCount));
}
}
``` | C#: Paint own Bar Chart | [
"c#",
".net",
"graphics",
"drawing"
]
How to get the CPU, RAM and Disk drive usage of the system in C# code? | Please search SO; there are several similar questions:
* [How to get the CPU Usage C#](https://stackoverflow.com/questions/278071/how-to-get-the-cpu-usage-c)
* [C#: List all processes and their current memory & CPU consumption?](https://stackoverflow.com/questions/1248129/c-list-all-processes-and-their-current-memory-cpu-consumption)
* [How do I retrieve disk information in C#?](https://stackoverflow.com/questions/412632/how-to-get-disk-information-by-using-c) | Here is a solution which will output disk usage, the total disk percent being used at the time that Timer99 is polled:
```
using System;
using System.Diagnostics;
using System.Windows;
using System.Windows.Threading; // required for DispatcherTimer
namespace diskpercent
{
public partial class MainWindow : Window
{
DispatcherTimer Timer99 = new DispatcherTimer();
public MainWindow()
{
InitializeComponent();
Timer99.Tick += Timer99_Tick; // don't freeze the ui
Timer99.Interval = new TimeSpan(0, 0, 0, 0, 1024);
Timer99.IsEnabled = true;
}
public PerformanceCounter myCounter =
new PerformanceCounter("PhysicalDisk", "% Disk Time", "_Total");
public Int32 j = 0;
public void Timer99_Tick(System.Object sender, System.EventArgs e)
{
//Console.Clear();
j = Convert.ToInt32(myCounter.NextValue());
//Console.WriteLine(j);
textblock1.Text = j.ToString();
}
}
}
```
and here is a list of common performance counters:
```
PerformanceCounter("Processor", "% Processor Time", "_Total");
PerformanceCounter("Processor", "% Privileged Time", "_Total");
PerformanceCounter("Processor", "% Interrupt Time", "_Total");
PerformanceCounter("Processor", "% DPC Time", "_Total");
PerformanceCounter("Memory", "Available MBytes", null);
PerformanceCounter("Memory", "Committed Bytes", null);
PerformanceCounter("Memory", "Commit Limit", null);
PerformanceCounter("Memory", "% Committed Bytes In Use", null);
PerformanceCounter("Memory", "Pool Paged Bytes", null);
PerformanceCounter("Memory", "Pool Nonpaged Bytes", null);
PerformanceCounter("Memory", "Cache Bytes", null);
PerformanceCounter("Paging File", "% Usage", "_Total");
PerformanceCounter("PhysicalDisk", "Avg. Disk Queue Length", "_Total");
PerformanceCounter("PhysicalDisk", "Disk Read Bytes/sec", "_Total");
PerformanceCounter("PhysicalDisk", "Disk Write Bytes/sec", "_Total");
PerformanceCounter("PhysicalDisk", "Avg. Disk sec/Read", "_Total");
PerformanceCounter("PhysicalDisk", "Avg. Disk sec/Write", "_Total");
PerformanceCounter("PhysicalDisk", "% Disk Time", "_Total");
PerformanceCounter("Process", "Handle Count", "_Total");
PerformanceCounter("Process", "Thread Count", "_Total");
PerformanceCounter("System", "Context Switches/sec", null);
PerformanceCounter("System", "System Calls/sec", null);
PerformanceCounter("System", "Processor Queue Length", null);
``` | Get current CPU, RAM and Disk drive usage in C# | [
"c#",
"wpf",
"cpu-usage",
"diskspace"
]
If I load an HTML from file:// and in the HTML there's an AJAX 'GET' request to a relative URL, it means the URL is pointing to a file and the file is loaded.
Can I have a similar behavior for 'PUT'? Meaning overwrite the file's content with the AJAX data?
I need this so I can easily debug scripts without the need to setup an HTTP server to host them and reply to the requested URLs.
Thank you | As far as I know, the HTML/script model in current browsers does not allow changing local resources (sort of a sandbox - think of the mess that would occur if an HTML page could write to a local file). The file:// is just a way to tell the browser a resource it's looking for is local to the machine it's running on, so it won't need to utilize the network. So you can read local resources from script - not change them.
Why not just run a local web server? You'll need it anyway eventually. | I don't think so. Where would the data post to and what would process it? | Can I do a 'PUT' ajax call to a local file similarly to how 'GET' works? | [
"javascript",
"ajax",
"http-put",
"file-uri",
"local-files"
]
I was debating with some colleagues about what happens when you throw an exception in a dynamically allocated class. I know that `malloc` gets called, and then the constructor of the class. The constructor never returns, so what happens to the `malloc`?
Consider the following example:
```
class B
{
public:
B()
{
cout << "B::B()" << endl;
throw "B::exception";
}
~B()
{
cout << "B::~B()" << endl;
}
};
int main()
{
B *o = 0;
try
{
o = new B;
}
catch(const char *)
{
cout << "ouch!" << endl;
}
}
```
What happens to the malloced memory `o`, does it leak? Does the CRT catch the exception of the constructor and deallocate the memory?
Cheers!
Rich | A call to
```
new B();
```
resolves in two things:
* allocating with an operator new() (either the global one or a class specific one, potentially a placement one with the syntax `new (xxx) B()`)
* calling the constructor.
If the constructor throws, the corresponding operator delete is called. The case where the corresponding operator new is a placement one is the only case where a placement operator delete is called without the syntax ::operator delete(). `delete x;` or `delete[] x;` don't call the placement delete operators, and there is no syntax similar to placement new to call them.
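This sequence can be observed directly by instrumenting the class's allocation functions; a minimal sketch (the counter and function names are mine):

```cpp
#include <cstddef>
#include <cstdlib>

static int news = 0, deletes = 0, dtors = 0;

struct B {
    B() { throw 1; }                 // constructor always fails
    ~B() { ++dtors; }                // never runs for the failed object
    static void* operator new(std::size_t n) { ++news; return std::malloc(n); }
    static void operator delete(void* p) { ++deletes; std::free(p); }
};

// Returns true iff the matching operator delete ran after the ctor threw,
// while the destructor did not run.
bool ctor_throw_releases_memory() {
    try {
        B* p = new B;                // operator new, then B::B(), which throws
        (void)p;
    } catch (int) {}
    return news == 1 && deletes == 1 && dtors == 0;
}
```

So nothing leaks, but no destructor is invoked for the object whose constructor failed.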
Note that while the destructor of B will **not** be called, already constructed subobjects (members of B and base classes of B) will be destructed before the call to operator delete. The only destructor which isn't called is B's own. | When an exception is thrown from the constructor, the memory allocated by new is released, but the destructor of class B is not called. | Does the memory get released when I throw an exception? | [
"c++",
"exception",
"constructor",
"handler"
]
Following this thread.
[Streaming large files in a java servlet](https://stackoverflow.com/questions/55709/streaming-large-files-in-a-java-servlet).
Is it possible to find the total Internet bandwidth available on the current machine through Java?
What I am trying to do: while streaming large files through the servlet, reduce the BUFFER\_SIZE of the stream for each request, based on the number of parallel requests and the total bandwidth. Make sense?
Is there any pure Java way (without JNI)? | Maybe you can time how long the app needs to send one packet (the buffer), and if that takes longer than x milliseconds, make your buffer smaller. You can use other values for the original `bufferSize` and `if (stop - start > 700)`.
This is based on the thread you noticed:
```
ServletOutputStream out = response.getOutputStream();
InputStream in = [ code to get source input stream ];
String mimeType = [ code to get mimetype of data to be served ];
int bufferSize = 1024 * 4;
byte[] bytes = new byte[bufferSize];
int bytesRead;
response.setContentType(mimeType);
while ((bytesRead = in.read(bytes)) != -1) {
long start = System.currentTimeMillis();
out.write(bytes, 0, bytesRead);
long stop = System.currentTimeMillis();
if (stop - start > 700)
{
bufferSize /= 2;
bytes = new byte[bufferSize];
}
}
// do the following in a finally block:
in.close();
out.close();
``` | The only way to find available bandwidth is to monitor / measure it. On windows you have access to Net.exe and can get the throughput on each NIC. | Finding server internet bandwidth thru java for streaming | [
"java",
"servlets",
"streaming"
]
A lot of JavaScript libraries have user interface widgets.
Usually they are created *after* the page is loaded. The user sees a page re-flow and parts of the page shift around or change appearance.
How can I reduce the amount of reflowing that occurs when a page loads, especially if I’m using several widgets?
Here’s some examples. These are really basic so the reflow isn’t so bad, but it’s there, especially with a cleared cache and a slow Internet connection:
* [YUI 3](http://developer.yahoo.com/yui/3/examples/node-menunav/node-menunav-2_source.html): The main paragraph shifts down after the menu nav element loads.
* [BBC Glow](http://www.bbc.co.uk/glow/docs/1.5/furtherinfo/widgets/carousel/): The logos load as a vertical list and then change to a Carousel control.
* [YUI 2](http://developer.yahoo.com/yui/examples/datatable/dt_enhanced_clean.html): The table appears plain and then suddenly changes appearance.
* [jQuery DataTables Plugin](http://www.datatables.net/examples/example_zero_config.html): The whole table appears, plain, before being restyled and paginated. | The problem when you use most 3rd party scripts is they will do all of the dirty/busy work for you in the JavaScript code of adding the classes or reformatting the HTML.
The only way to get around the redraw of the page is either to do what the others suggest (hiding and showing), or to have the HTML for the controls already in place, with the CSS classes applied and the HTML properly marked up.
When I code widgets, I build that functionality in. I tell the developers that they can use the minimalist mark-up if they do not care about the flash of content, or they can code their backend to emit the specific HTML mark-up the widget needs.
I do not think the scripts you listed have the ability to have the code marked up already, so you might be stuck living with the flash of content or doing the hiding. [The hiding may even screw up the scripts that rely on positioning data when rendering!] | You can:
a) Hide visibility of an element (e.g. applying "hidden" value to its "visibility") that constitutes a widget until it's being fully initialized.
b) Initialize widget earlier than window's "load" event happens, such as on "DOMContentLoaded" event in supporting clients. This might minimize any flickering. | How can I reduce page reflows when using JavaScript widgets? | [
"javascript",
"jquery",
"user-interface",
"yui",
"bbc-glow"
]
I have a SELECT statement similar to the one below which returns several counts in one query.
```
SELECT invalidCount = (SELECT COUNT(*) FROM <...a...> WHERE <...b...>),
unknownCount = (SELECT COUNT(*) FROM <...c...> WHERE <...d...>),
totalCount = (SELECT COUNT(*) FROM <...e...> WHERE <...f...>)
```
This works fine but I wanted to add two percentage columns to the SELECT:
```
invalidCount * 100 / totalCount AS PercentageInvalid,
unknownCount * 100 / totalCount AS UnknownPercentage
```
How do I modify my SELECT statement to handle this? | You can use a subquery in the `from` clause:
```
select
s.invalidCount,
s.unknownCount,
s.totalCount,
invalidCount * 100 / s.totalCount as PercentageInvalid,
unknownCount * 100 / s.totalCount as PercentageUnknown
from
(select invalidCount = (SELECT COUNT(*) FROM <...a...> WHERE <...b...>),
unknownCount = (SELECT COUNT(*) FROM <...c...> WHERE <...d...>),
totalCount = (SELECT COUNT(*) FROM <...e...> WHERE <...f...>)) s
``` | ```
SELECT invalidCount,
unknownCount,
totalCount,
invalidCount * 100 / totalCount AS PercentageInvalid,
unknownCount * 100 / totalCount AS UnknownPercentage
FROM
(
SELECT invalidCount = (SELECT COUNT(*) FROM <...a...> WHERE <...b...>),
unknownCount = (SELECT COUNT(*) FROM <...c...> WHERE <...d...>),
totalCount = (SELECT COUNT(*) FROM <...e...> WHERE <...f...>)
) AS t -- a derived table needs an alias
``` | Adding percentages to multiple counts in one SQL SELECT Query | [
"sql",
"sql-server",
"sql-server-2005"
]
I have a table which would contain information about a certain month, and one column in that row would have mysql row id's for another table in it to grab multiple information from
Is there a more efficient way to get the information than exploding the IDs and doing separate SQL queries on each... here is an example:
```
Row ID | Name | Other Sources
1 Test 1,2,7
```
the `Other Sources` has the id's of the rows from the other table which are like so
```
Row ID | Name | Information | Link
1 John | No info yet? | http://blah.com
2 Liam | No info yet? | http://blah.com
7 Steve| No info yet? | http://blah.com
```
and overall the information returned wold be like the below
```
Hi this page is called test... here is a list of our sources
- John (No info yet?) find it here at http://blah.com
- Liam (No info yet?) find it here at http://blah.com
- Steve (No info yet?) find it here at http://blah.com
```
I would do this... I would explode the other sources by "," and then do a separate SQL query for each; I am sure there could be a better way? | Looks like a classic many-to-many relationship. You have pages and sources - each page can have many sources and each source could be the source for many pages?
Fortunately this is very much a solved problem in relational database design. You would use a 3rd table to relate the two together:
```
Pages (PageID, Name)
Sources (SourceID, Name, Information, Link)
PageSources (PageID, SourceID)
```
The key for the "PageSources" table would be both PageID and SourceID.
Then, To get all the sources for a page for example, you would use this SQL:
```
SELECT s.*
FROM Sources s INNER JOIN PageSources ps ON s.SourceID = ps.SourceID
AND ps.PageID = 1;
``` | Not easily with your table structure. If you had another table like:
```
ID Source
1 1
1 2
1 7
```
Then `join` is your friend. With things the way they are, you'll have to do some nasty splitting on comma-separated values in the "Other Sources" field. | Get details from another mysql table | [
"php",
"mysql"
]
I am using a serial port to communicate with a remote diagnostics device.
The length of the response from the remote device varies depending upon the command but is known ahead of time. So, currently I send the command and wait for the required number of response bytes to be received.
I subscribe to the 'SerialPort.DataReceived' event whenever I'm not actively soliciting data. The handler for this event simply dumps any 'unsolicited' received data to a log (unsolicited data is typically only received if the remote device restarts unexpectedly, etc).
In some cases I want to send commands at a rate of about 60Hz.
My question is whether it's best to unsubscribe/subscribe to the 'SerialPort.DataReceived' event every time I call my 'SendCommand' method to actively solicit data, or should I leave the event subscription alone and just toggle a boolean 'TransferInProgress' flag that the DataReceived handler can use to ignore incoming data when I'm actively soliciting it?
Here's the current implementation:
```
public virtual bool SendCommand(byte[] command, ref byte[] response) {
try {
TransferInProgress = true;
OnTransferStarted();
// temporarily unsubscribe since we're actively soliciting data
_port.DataReceived -=
new SerialDataReceivedEventHandler(SerialPort_DataReceived);
_port.DiscardInBuffer();
_port.Write(command, 0, command.Length);
OnCommandSent(command);
// read the requested number of response bytes
int responseBytesRead = 0;
while (responseBytesRead < response.Length) {
responseBytesRead +=
_port.Read(response, responseBytesRead, (response.Length - responseBytesRead));
}
OnCommandResponseReceived(response);
return true;
}
catch (Exception ex) {
OnCommandSendFailed(ex.Message);
return false;
}
finally {
_port.DataReceived +=
new SerialDataReceivedEventHandler(SerialPort_DataReceived);
OnTransferComplete();
TransferInProgress = false;
}
}
```
-Trevor | Have you thought about handling all of your data reception in one place? You could treat the commands you send as fire and forget, parsing the data received for the responses. If the responses do not have an identifying header and the ONLY way you know how to parse them is by knowing which command you sent and the length of the response, then you could keep track of the commands sent in a queue. The way that would work, is that in your Data Received handler you would check the queue of commands you're waiting on a response for, and then parse the data received like you do now.
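Language aside, the queue idea can be sketched like this (all the names here are hypothetical, and the "bytes" are plain chars for illustration): each sent command records how many response bytes it expects, and the single receive handler slices the incoming stream against the front of the queue, so solicited and unsolicited data are separated in one place.

```cpp
#include <cstddef>
#include <queue>
#include <string>
#include <vector>

struct Pending { std::string name; std::size_t expected; };

class ResponseMatcher {
public:
    void command_sent(std::string name, std::size_t expected_bytes) {
        pending_.push({std::move(name), expected_bytes});
    }
    // Called from the (single) data-received handler. Returns the names of
    // commands whose responses just completed; bytes left over when nothing
    // solicited is outstanding are treated as unsolicited.
    std::vector<std::string> on_data(const std::string& bytes,
                                     std::string& unsolicited) {
        std::vector<std::string> completed;
        buffer_ += bytes;
        while (!pending_.empty() && buffer_.size() >= pending_.front().expected) {
            buffer_.erase(0, pending_.front().expected);  // consume the response
            completed.push_back(pending_.front().name);
            pending_.pop();
        }
        if (pending_.empty() && !buffer_.empty()) {
            unsolicited += buffer_;   // no command is waiting for these bytes
            buffer_.clear();
        }
        return completed;
    }
private:
    std::queue<Pending> pending_;
    std::string buffer_;
};
```

With this shape there is no need to subscribe and unsubscribe at all: the handler stays attached and the queue decides how each byte is interpreted.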
Long story short, I would recommend handling all incoming data in one place. | My opinion, if I'm understanding correctly, would be to simply handle all your received data in the DataReceived handler, or you have one other option.
If the data received between actual requests isn't much, you could just read the buffer and log it before transmitting your request. The serial driver's receive buffer may be enough to store a small amount of data. Then send the request and read in just the response. This will probably be the easier method and simpler code.
"c#",
"serial-port"
]
I am a bit confused, there are so many functions in PHP, and some using this, some using that. Some people use: `htmlspecialchars()`, `htmlentities()`, `strip_tags()` etc
Which is the correct one and what do you guys usually use?
Is this correct (advise me a better one, if any):
```
$var = mysql_real_escape_string(htmlentities($_POST['username']));
```
This line can prevent MySQL injection and XSS attack??
**Btw, is there any other things I need to pay attention besides XSS attack and MySQL injection?**
**EDIT**
To conclude:
If I want to insert string to the database, I do not need to use `htmlentities`, just use the `mysql_real_escape_string`. When displaying the data, use `htmlentities()`, is that what you all mean??
Summarize:
* `mysql_real_escape_string` used when insert into database
* `htmlentities()` used when outputting data into webpage
* `htmlspecialchars()` used when?
* `strip_tags()` used when?
* `addslashes()` used when?
Can somebody fill in the question mark? | > * `mysql_real_escape_string` used when insert into database
> * `htmlentities()` used when outputting data into webpage
> * `htmlspecialchars()` used when?
> * `strip_tags()` used when?
> * `addslashes()` used when?
### htmlspecialchars() used when?
`htmlspecialchars` is roughly the same as `htmlentities`. The difference: which characters get encoded.
Both encode the markup metacharacters like `<`, `>`, `&` and so on that are used for opening tags etc. `htmlentities` also encodes chars from other languages like umlauts, euro symbols and such. If your website is UTF-8, use `htmlspecialchars()`, otherwise use `htmlentities()`.
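For readers outside PHP, the core of `htmlspecialchars()` is just a character-for-entity substitution; a minimal C++ sketch of the same idea (ENT_QUOTES-style; the function name is mine):

```cpp
#include <string>

// The five HTML metacharacters are replaced by entities; everything else,
// including multibyte/umlaut characters, passes through untouched
// (htmlentities() would additionally encode those).
std::string html_special_chars(const std::string& in) {
    std::string out;
    for (char c : in) {
        switch (c) {
            case '&':  out += "&amp;";  break;
            case '<':  out += "&lt;";   break;
            case '>':  out += "&gt;";   break;
            case '"':  out += "&quot;"; break;
            case '\'': out += "&#039;"; break;
            default:   out += c;
        }
    }
    return out;
}
```

Run on the malicious forum post below, the `<a href=...>` markup comes out as inert text instead of a live link.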
### strip\_tags() used when?
`htmlspecialchars` / `entities` encode the special chars, so they're *displayed but not interpreted*. `strip_tags` REMOVES them.
In practice, it depends on what you need to do.
An example: you've coded a forum, and give users a text field so they can post stuff. Malicious ones just try:
```
pictures of <a href="javascript:void(window.setInterval(function () {window.open('http://evil.example');}, 1000));">kittens</a> here
```
If you don't do anything, the link will be displayed and a victim that clicks on the link gets lots of pop-ups.
If you htmlentity/htmlspecialchar your output, the text will be there as-is. If you strip\_tag it, it simply removes the tags and displays it:
```
pictures of kittens here
```
Sometimes you may want a mixture that leaves some tags in there, like `<b>` (`strip_tags` can leave certain tags in place). This is unsafe too, so it's better to use a full-blown anti-XSS library.
### addslashes
To quote an [old version of the PHP manual](https://web.archive.org/web/20100525131537/http://php.net/manual/en/function.addslashes.php):
> Returns a string with backslashes before characters that need to be quoted in database queries etc. These characters are single quote ('), double quote ("), backslash () and NUL (the **NULL** byte).
>
> An example use of **addslashes()** is when you're entering data into a database. For example, to insert the name *O'reilly* into a database, you will need to escape it. It's highly recommended to use a DBMS-specific escape function (e.g. mysqli\_real\_escape\_string() for MySQL or pg\_escape\_string() for PostgreSQL), but if the DBMS you're using doesn't have an escape function and the DBMS uses \ to escape special chars, you can use this function.
The [current version](http://php.net/addslashes) is worded differently. | I thought of this quick checklist:
* **Always use HTTPS, without HTTPS your site is totally unencrypted**. And no, client-side encrypting things and sending them won't work, think about it. **Invalid HTTPS certificates also make you vulnerable to a [MITM](https://en.wikipedia.org/wiki/Man-in-the-middle_attack) attack**. Just use Let's Encrypt if you can't afford a certificate.
* **Always use [`htmlspecialchars()`](http://php.net/htmlspecialchars) on any output from your PHP code, that is, or contains a user input**. Most templating engines help you do that easily.
* Use HTTP-only flag in your `php.ini` to prevent scripts from accessing your cookies
* Prevent session-related problems
+ **Never expose user's `PHPSESSID` (session ID) outside the cookie**, if anybody gets to know a Session ID of somebody else, they can simply use it to login to their account
+ Be very careful with the `Remember me` function, show a little warning maybe.
+ Refresh session ID when the user signs in (or whatever appropriate)
+ Timeout inactive sessions
* *Never* trust a cookie, it can be changed, removed, modified, and created by a script/user at any moment
* Prevent SQL-related problems
+ **Always use prepared statements**. Prepared statements causes the user input to be passed separately and prevents [SQL Injection](https://en.wikipedia.org/wiki/SQL_injection)
+ Make your code throw an exception when a query fails. Sometimes your SQL server might be down for some reason; libraries like `PDO` ignore that error by default and just log a warning. This causes the variables you get from the DB to be null, which, depending on your code, may cause a security issue.
+ Some libraries like `PDO` *emulate* prepared statements. Turn that off.
+ Use `UTF-8` encoding in your databases, it allows you to store virtually any character and avoid encoding-related attacks
+ **Never concatenate anything to your query**. Things like `$myquery = "INSERT INTO mydb.mytable (title) VALUES(" . $user_input . ")"` pretty much mean you have a huge security risk of an SQL injection.
* Store uploaded files in random, extension-less filenames. If a user uploads a file with `.php` file extension then whenever your code loads that file it executes it, and enables the user to execute some backend code
* Make sure you're not vulnerable to a [CSRF attack](https://en.wikipedia.org/wiki/Cross-site_request_forgery).
* Always update your PHP copy to ensure the latest security patches and performance improvements | How to prevent code injection attacks in PHP? | [
"php",
"code-injection"
]
I have a custom thread pool class, that creates some threads that each wait on their own event (signal). When a new job is added to the thread pool, it wakes the first free thread so that it executes the job.
The problem is the following: I have around 1000 loops of around 10'000 iterations each to do. These loops must be executed sequentially, but I have 4 CPUs available. What I try to do is to split the 10'000-iteration loops into 4 2'500-iteration loops, i.e. one per thread. But I have to wait for the 4 small loops to finish before going to the next "big" iteration. This means that I can't bundle the jobs.
My problem is that using the thread pool and 4 threads is much slower than doing the jobs sequentially (having one loop executed by a separate thread is much slower than executing it directly in the main thread sequentially).
I'm on Windows, so I create events with `CreateEvent()` and then wait on one of them using `WaitForMultipleObjects(2, handles, false, INFINITE)` until the main thread calls `SetEvent()`.
It appears that this whole event thing (along with the synchronization between the threads using critical sections) is pretty expensive !
My question is : is it normal that using events takes "a lot of" time ? If so, is there another mechanism that I could use and that would be less time-expensive ?
Here is some code to illustrate (some relevant parts copied from my thread pool class):
```
// thread function
unsigned __stdcall ThreadPool::threadFunction(void* params) {
// some housekeeping
HANDLE signals[2];
signals[0] = waitSignal;
signals[1] = endSignal;
do {
// wait for one of the signals
waitResult = WaitForMultipleObjects(2, signals, false, INFINITE);
// try to get the next job parameters;
if (tp->getNextJob(threadId, data)) {
// execute job
void* output = jobFunc(data.params);
// tell thread pool that we're done and collect output
tp->collectOutput(data.ID, output);
}
tp->threadDone(threadId);
}
while (waitResult - WAIT_OBJECT_0 == 0);
// if we reach this point, endSignal was sent, so we are done !
return 0;
}
// create all threads
for (int i = 0; i < nbThreads; ++i) {
threadData data;
unsigned int threadId = 0;
char eventName[20];
sprintf_s(eventName, 20, "WaitSignal_%d", i);
data.handle = (HANDLE) _beginthreadex(NULL, 0, ThreadPool::threadFunction,
this, CREATE_SUSPENDED, &threadId);
data.threadId = threadId;
data.busy = false;
data.waitSignal = CreateEvent(NULL, true, false, eventName);
this->threads[threadId] = data;
// start thread
ResumeThread(data.handle);
}
// add job
void ThreadPool::addJob(int jobId, void* params) {
// housekeeping
EnterCriticalSection(&(this->mutex));
// first, insert parameters in the list
this->jobs.push_back(job);
// then, find the first free thread and wake it
for (it = this->threads.begin(); it != this->threads.end(); ++it) {
thread = (threadData) it->second;
if (!thread.busy) {
this->threads[thread.threadId].busy = true;
++(this->nbActiveThreads);
// wake thread such that it gets the next params and runs them
SetEvent(thread.waitSignal);
break;
}
}
LeaveCriticalSection(&(this->mutex));
}
``` | If you are just parallelizing loops and using VS 2008, I'd suggest looking at OpenMP. If you're using Visual Studio 2010 beta 1, I'd suggest looking at the [parallel pattern library](http://msdn.microsoft.com/en-us/library/dd492418(VS.100).aspx), particularly the ["parallel for" / "parallel for each"
APIs](http://msdn.microsoft.com/en-us/library/dd728073(VS.100).aspx) or the ["task group"](http://msdn.microsoft.com/en-us/library/dd728066(VS.100).aspx) class, because these will likely do what you're attempting to do, only with less code.
Regarding your question about performance, here it really depends. You'll need to look at how much work you're scheduling during your iterations and what the costs are. WaitForMultipleObjects can be quite expensive if you hit it a lot and your work is small, which is why I suggest using an implementation already built. You also need to ensure that you aren't running in debug mode, under a debugger, and that the tasks themselves aren't blocking on a lock, I/O or memory allocation, and you aren't hitting false sharing. Each of these has the potential to destroy scalability.
I'd suggest looking at this under a profiler like [xperf](http://msdn.microsoft.com/en-us/performance/default.aspx), the F1 profiler in Visual Studio 2010 beta 1 (it has 2 new concurrency modes which help see contention), or Intel's VTune.
You could also share the code that you're running in the tasks, so folks could get a better idea of what you're doing, because the answer I always get with performance issues is first "it depends" and second, "have you profiled it."
Good Luck
-Rick | Yes, `WaitForMultipleObjects` is pretty expensive. If your jobs are small, the synchronization overhead will start to overwhelm the cost of actually doing the job, as you're seeing.
One way to fix this is bundle multiple jobs into one: if you get a "small" job (however you evaluate such things), store it someplace until you have enough small jobs together to make one reasonably-sized job. Then send all of them to a worker thread for processing.
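A portable sketch of that bundling idea, using standard C++ threads rather than Win32 events (the function name and batch sizes here are arbitrary): the producer signals the worker once per batch instead of once per tiny job, so a single wakeup amortises the synchronisation cost over many work items.

```cpp
#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Sums the integers 1..items, delivered to one worker in batches.
long long run_batched(int items, std::size_t batch_size) {
    std::mutex m;
    std::condition_variable cv;
    std::queue<std::vector<int>> jobs;   // each entry is a *batch* of work items
    bool done = false;
    long long total = 0;

    std::thread worker([&] {
        for (;;) {
            std::vector<int> batch;
            {
                std::unique_lock<std::mutex> lk(m);
                cv.wait(lk, [&] { return done || !jobs.empty(); });
                if (jobs.empty()) return;        // done and fully drained
                batch = std::move(jobs.front());
                jobs.pop();
            }
            long long sum = 0;
            for (int v : batch) sum += v;        // one wakeup pays for many items
            std::lock_guard<std::mutex> lk(m);
            total += sum;
        }
    });

    std::vector<int> batch;
    for (int i = 1; i <= items; ++i) {
        batch.push_back(i);
        if (batch.size() == batch_size || i == items) {
            { std::lock_guard<std::mutex> lk(m); jobs.push(std::move(batch)); }
            batch.clear();
            cv.notify_one();                     // one signal per batch, not per item
        }
    }
    { std::lock_guard<std::mutex> lk(m); done = true; }
    cv.notify_one();
    worker.join();
    return total;
}
```

The same trick applies directly to the Win32 version: push a vector of job parameters per SetEvent call instead of one job per call.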
Alternately, instead of using signaling you could use a multiple-reader single-writer queue to store your jobs. In this model, each worker thread tries to grab jobs off the queue. When it finds one, it does the job; if it doesn't, it sleeps for a short period, then wakes up and tries again. This will lower your per-task overhead, but your threads will take up CPU even when there's no work to be done. It all depends on the exact nature of the problem. | Overhead due to use of Events | [
"c++",
"multithreading",
"synchronization",
"events",
"overhead"
]
I'm quite frustrated. I want to be able to insert into my database names with single quotes - for example, O'Connor.
So, when inserting into the DB, I do:
```
$lname = mysql_real_escape_string($_POST['lname']);
```
And then I insert $lname into the DB.
When it's in the DB, it appears as O\'Connor.
So, if I were to recall that last name in my web application, I will have to use:
```
$lname = stripslashes($r["lname"]);
```
This all seems to work fine. However, I have a search function which will search for last names and display the results. When I search, I have to search for O\'Connor in order to get any results.
You see, after I search, the textbox automatically stores the value of what was just searched for (using sessions). So my code is this:
```
$search = mysql_real_escape_string($_GET['search']);
$_SESSION['search'] = $search;
```
Like I said before, when I search, I have to use "O\'Connor", and then after I search, the value in the textbox becomes "O\\\\'Connor"
It's been frustrating trying to figure this out. Does anyone know what I'm doing wrong? Thanks!
EDIT:
Here is my php5.ini file, regarding magic quotes:
```
; Magic quotes
;
; Magic quotes for incoming GET/POST/Cookie data.
magic_quotes_gpc = On
; Magic quotes for runtime-generated data, e.g. data from SQL, from exec(), etc.
magic_quotes_runtime = Off
; Use Sybase-style magic quotes (escape ' with '' instead of \').
magic_quotes_sybase = Off
```
However, my site is hosted on GoDaddy, and I do not have permissions to edit the file :( | It sounds like [Magic Quotes](https://www.php.net/magic_quotes) are enabled in your PHP configuration.
To check if it's actually enabled:
```
echo get_magic_quotes_gpc();
```
[To disable](https://www.php.net/manual/en/security.magicquotes.disabling.php), edit your php.ini file:
```
; Magic quotes
;
; Magic quotes for incoming GET/POST/Cookie data.
magic_quotes_gpc = Off
; Magic quotes for runtime-generated data, e.g. data from SQL, from exec(), etc.
magic_quotes_runtime = Off
; Use Sybase-style magic quotes (escape ' with '' instead of \').
magic_quotes_sybase = Off
```
Or add this line to your .htaccess:
```
php_flag magic_quotes_gpc Off
``` | Little edit to the fixinput function to check if your installation of PHP does indeed have real escape string (older versions don't):
```
function fixinput($value){
if (get_magic_quotes_gpc()){
$value = stripslashes($value);
}
if (function_exists('mysql_real_escape_string')) {
return mysql_real_escape_string($value);
}
else {
return mysql_escape_string($value);
}
}
``` | mysql_real_escape_string and single quote | [
"",
"php",
"mysql",
"escaping",
"mysql-real-escape-string",
""
] |
Can someone point me in the right direction for printing barcode labels using Java? I can use the barbecue library (<http://barbecue.sourceforge.net/>) to generate the barcodes as images, but I need a way to put the image (and a human-readable caption) into an Avery document template for printing. | The iText library supports pretty much every kind of barcode imaginable. You can generate PDFs and either save them or print them internally. | I suggest using the barcode4j library instead of barbecue for 2 reasons:
1. Barbecue barcode objects are unnecessarily coupled to Java UI components (e.g. Barcode class extends JComponent). This creates unnecessary dependencies if the Java UI is not being used, e.g. for batch or command line based applications. They should have used aggregation rather than inheritance if they wanted to use their barcode classes with the Java UI.
2. Barcode4J looks like it is currently supported - version 2.0 released and copyright date is 2012
Then you have the problem of translating the barcode into a format that your printer understands. For this I suggest openlabelprint.org *(which I wrote!)* - it's another open source project that uses barcode4j and provides:
* facilities to define a label layout using SVG (Scalable Vector Graphics - an open w3c standard) and
* rasterization to a bitmap of the SVG from barcode4j (and the surrounding label layout in SVG) (openlabelprint applies the excellent Apache SVG Batik Java libraries for rasterization as well as for other SVG tasks)
* printing of the bitmap on Zebra printers using their ZPL low level language. openlabelprint has a built in utility to convert png bitmaps to ZPL and send this to the Zebra printer via the standard Java printer system. Also openlabelprint provides APIs to extend it for other printer languages though ZPL is supported by some non-Zebra brands | Java print barcode labels | [
"",
"java",
"templates",
"printing",
"label",
"barcode",
""
] |
As the title states, what's the best way to compare two DateTimes without hours and seconds?
I don't want to convert my dates to string, and a solution like
> `DateTime.Now.Year == last.Year &&
> DateTime.Now.Month == last.Month &&
> DateTime.Now.Day == last.Day`
is pretty damn ugly.
Edit: Oh my god, how silly I feel. Of course you can use the Date property.
"Another question": What is the best way to compare DateTime's with Year, Month, Day, Hour, Minute, but not seconds?
Thanks | Is this what you're looking for?
```
DateTime.Now.Date
``` | Take a look at the [Date](http://msdn.microsoft.com/en-us/library/system.datetime.date.aspx) property, then you can compare dt1.Date with dt2.Date. | Compare two datetimes - without hour and second | [
"",
"c#",
""
] |
I have a C++ console app that has been doing just fine and, upon a clean rebuild, started throwing compiler errors. Obviously I've redefined or omitted something, but I'm not sure what.
```
------ Rebuild All started: Project: alpineProbe, Configuration: Release Win32 ------
Deleting intermediate and output files for project 'abc', configuration 'Release|Win32'
Compiling...
wmiTest.cpp
C:\Program Files\Microsoft Visual Studio 9.0\VC\include\excpt.h(60) : error C2065: '_$notnull' : undeclared identifier
C:\Program Files\Microsoft Visual Studio 9.0\VC\include\excpt.h(60) : error C3861: '_Pre1_impl_': identifier not found
C:\Program Files\Microsoft Visual Studio 9.0\VC\include\excpt.h(60) : error C2146: syntax error : missing ')' before identifier '_Deref_pre2_impl_'
C:\Program Files\Microsoft Visual Studio 9.0\VC\include\excpt.h(60) : warning C4229: anachronism used : modifiers on data are ignored
C:\Program Files\Microsoft Visual Studio 9.0\VC\include\excpt.h(64) : error C2059: syntax error : ')'
C:\Program Files\Microsoft Visual Studio 9.0\VC\include\ctype.h(94) : error C2144: syntax error : 'int' should be preceded by ';'
C:\Program Files\Microsoft Visual Studio 9.0\VC\include\ctype.h(94) : error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
```
I'm sure it's something obvious, but I don't see it. One other thing, I reloaded the project from a backup copy.
Edit:
Using /showIncludes as suggested by Michael Burr gives the following:
```
1>Note: including file: c:\development\alpineaccess\final\Tokenizer.h
1>Note: including file: c:\development\alpineaccess\final\testFunctions.h
1>Note: including file: c:\development\alpineaccess\final\curl/curl.h
1>Note: including file: c:\development\alpineaccess\final\curl\curlver.h
1>Note: including file: C:\Program Files\Microsoft Visual Studio 9.0\VC\include\stdio.h
1>Note: including file: C:\Program Files\Microsoft Visual Studio 9.0\VC\include\crtdefs.h
1>Note: including file: C:\Program Files\Microsoft Visual Studio 9.0\VC\include\sal.h
1>Note: including file: c:\program files\microsoft visual studio 9.0\vc\include\codeanalysis\sourceannotations.h
1>Note: including file: C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\windows.h
1>Note: including file: C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\sdkddkver.h
1>Note: including file: C:\Program Files\Microsoft Visual Studio 9.0\VC\include\excpt.h
1>Note: including file: C:\Program Files\Microsoft Visual Studio 9.0\VC\include\crtdefs.h
```
Note that there's a circular reference via sourceannotations.h. I see what the problem is, but have no idea how to fix it. | Found it. Thanks to [Michael Burr](https://stackoverflow.com/users/12711/michael-burr) and his suggestion to use /showIncludes.
The problem was that the file \CodeAnalysis\sourceannotations.h in the C++ installation had gotten clobbered. Reinstalling should fix it.
Thanks for the help | [JaredPar's answer](https://stackoverflow.com/questions/1268604/vs-2008-623-complier-errors/1268626#1268626) has something to do with it, but you shouldn't have to include `sal.h` yourself - something's causing the wrong `sal.h` to be picked up (or another wrong header). `<sal.h>` should be included by `<crtdefs.h>` which is included by the standard headers, but clearly the right one isn't being picked up for some reason.
Try using the "`/showIncludes`" option ("Configuration Properties/C/C++/Advanced/Show Includes" in the IDE's project options) to see what headers are being included from where. | vs 2008 623 compiler errors | [
"",
"c++",
"visual-studio",
""
] |
Are there any bignum libraries that are good for Windows use? I looked at GMP, but unfortunately it does not look like it can be compiled on Windows...(I'm going to be doing some custom RSA and hashing routines)
Thanks. | People provide [pre-compiled binaries](ftp://deltatrinity.dyndns.org/gmp-4.3.1_DLL_SharedLibs/) for gmp on Windows; there are also [instructions](http://cs.nyu.edu/exact/core/gmp/) for compiling it yourself. Another option would be the [bignum library of OpenSSL](http://www.openssl.org/docs/crypto/bn.html). | Another option is [MPIR](http://mpir.org "MPIR"). MPIR is a fork of GMP that specifically support compilation on Windows. | Bignum libraries for windows? | [
"",
"c++",
"c",
"windows",
"encryption",
""
] |
When I get the DateTime.Hour property, I always get the 24-hour time (so 6 PM gives me 18).
How do I get the "12-hour" time, so 6 PM just gives me 6?
I can obviously do the check myself, but I assume there is a built-in function for this. | How about:
```
DateTime.Hour % 12
```
That will give 0-11 of course... do you want 1-12? If so:
```
((DateTime.Hour + 11) % 12) + 1
```
I don't *think* there's anything simpler built in... | DateTime.Now.ToString("hh"); --> Using this you will get "06" for 18h. | how to get the 12 hour date from DateTime | [
"",
"c#",
"datetime",
""
] |
It's a fairly general question, but I'd like to know what you use to determine the primary key of a table. Examples supplied with your reasoning are highly desired.
*I noticed that many programmers add an ID column and use it as the primary key. I think it is flawed from a design point of view, as the ID in that case HAS NOTHING TO DO WITH THE TABLE.* | My thought process in determining a primary key goes like this.
"One record in this table will represent ...?"
"For distinct value of Col X, Col Y, Col Z.. there should only be one row in the table", What are Cols X Y and Z ?"
**The CAR\_MODEL table.**
Hmm, this table will store information about different types of cars. Should the MANUFACTURER\_NAME be the key? No, I can have many rows identifying different car models from the same manufacturer. Hmm, should the MANUFACTURER\_NAME and MODEL\_NAME together be the key? No, I want to have different rows with the same MANUFACTURER\_NAME and MODEL\_NAME but different release years in the table at the same time. OK, what about "MANUFACTURER\_NAME", "MODEL\_NAME" and "RELEASE\_YEAR"?
Is it possible for me to have two rows with the same MANUFACTURER\_NAME, MODEL\_NAME and RELEASE\_YEAR at the same time? Hmmm, no. That wouldn't make sense; they would be the same car model, and I only want one record per car model. Great, that's the key.
One record in this table will represent a particular model from a particular year from a particular manufacturer. I decide this when I create the table; that's why I created the table. If you can't describe what's going into the table in terms that help identify the key, you don't really understand why you are creating it.
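Not part of the original answer, but the reasoning above can be sketched as DDL (column names and types are illustrative):

```sql
-- Natural composite key: exactly one row per manufacturer/model/year.
CREATE TABLE car_model (
    manufacturer_name VARCHAR(100) NOT NULL,
    model_name        VARCHAR(100) NOT NULL,
    release_year      INT          NOT NULL,
    door_count        INT,
    PRIMARY KEY (manufacturer_name, model_name, release_year)
);
```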
**Horrible Changes Over Time!!! (surrogate keys, Natural Key, Slowly changing dimensions)**
Ah but the information I am storing about a particular Car Model (from a particular Manufacturer and Release Year) may change. Initially I was told that it had two doors, now I find it has four, I want to have this correct information in my table but not lose the old record as people have reported off it and I need to be able to reproduce their old results.
Ok, I will add a new column "MODEL\_ID" and make it the primary key of the table, so I can store multiple records with the same model name, manufacturer name and release year. I will also add a valid\_from and valid\_to timestamp.
This can work well, and indeed with my changes the Primary Key of the table is now MODEL\_ID, a surrogate key. But the Natural Key, the Business Key, the key 'at any point in time', is still Model\_Name, Manufacturer\_Name and Release\_Year, and I can't lose sight of that.
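A DDL sketch of that versioned layout (names and types are illustrative, not from the original answer): the surrogate `model_id` becomes the primary key, while the natural key plus `valid_from` stays unique, so each business key has one row per version:

```sql
-- Surrogate primary key plus a history-friendly unique constraint.
CREATE TABLE car_model (
    model_id          INT          NOT NULL,
    manufacturer_name VARCHAR(100) NOT NULL,
    model_name        VARCHAR(100) NOT NULL,
    release_year      INT          NOT NULL,
    door_count        INT,
    valid_from        TIMESTAMP    NOT NULL,
    valid_to          TIMESTAMP,            -- NULL marks the current row
    PRIMARY KEY (model_id),
    UNIQUE (manufacturer_name, model_name, release_year, valid_from)
);
```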
***Note on Surrogate Keys*:**
A surrogate key is unique for each row, by definition! A surrogate key makes it easier to manipulate data sometimes, especially data that changes over time. But a surrogate key doesn't in any way replace a Natural Primary Key, you still need to know what the 'grain' of the table is.
If we said that every person in Australia will be assigned a Stack\_Overflow\_User\_id what would we do when Jeff and Joel started giving Stack\_Overflow\_User\_Id's to dogs and cats and multiple IDs to the same people ??
We would say, "hey Jeff and Joel, only give out 1 ID per First\_Name, Last\_Name, Date\_of\_Birth and Place\_of\_Birth!". \*
We need to know the natural key or we can give anything a surrogate key!
(\* what about people where all these are the same? Don't we need a passport number or some sort of surrogate? In practice a surrogate is nice and clean, but where did it originate? Originally it came from a natural key.) | The role of a primary key is to uniquely identify each row in your table. If no column or set of columns matches this requirement, a column containing a unique id is often added as a primary key.
I do not agree with your comment about programmers adding an id that has nothing to do with table data. When you need to link data across several tables, a concise id is easier to use than a compound key. | How do you determine what should be a primary key? | [
"",
"sql",
"mysql",
"database-design",
""
] |
I have data that has been stored using binary serialization for the following class:
```
[Serializable]
public abstract class BaseBusinessObject
{
private NameValueCollection _fieldErrors = new NameValueCollection();
protected virtual NameValueCollection FieldErrors
{
get { return _fieldErrors; }
set { _fieldErrors = value; }
}
...
}
```
At some point, the class was changed to this:
```
[Serializable]
public abstract class BaseBusinessObject
{
private Dictionary<string, string> _fieldErrors = new Dictionary<string, string>();
protected virtual Dictionary<string, string> FieldErrors
{
get { return _fieldErrors; }
set { _fieldErrors = value; }
}
...
}
```
This is causing issues deserializing old data.
My first thought was to implement `ISerializable`, but this class has numerous properties as well as hundreds of inheriting classes that I would have to implement this for as well.
I would like to either change the old data to match the current structure during deserialization or have a clean way of upgrading the old data. | Add the new \_`fieldErrors` under a different name, say `_fieldErrors2`, and make it [`[OptionalField]`](http://msdn.microsoft.com/en-us/library/system.runtime.serialization.optionalfieldattribute.aspx). Then implement an [`[OnDeserialized]`](http://msdn.microsoft.com/en-us/library/system.runtime.serialization.ondeserializedattribute.aspx) method that copies the data from `_fieldErrors` to `_fieldErrors2` (if present), and clears \_fieldErrors. | If the data is only used internally, my first thought would be to write some simple throw-away code to de-serialize your binary data using the old "NameValueCollection", map it to a Dictionary and re-serialize it. Even if it'll take a few days to process all the data, it doesn't seem worth it to implement a patch on your new code to support the old data.
Even if it's not only used internally, an importer seems like the simplest way to go. | How do I deserialize old data for a type that has changed? | [
"",
"c#",
"serialization",
"binary-serialization",
""
] |
I have a csv string like this "1,2,3" and want to be able to remove a desired value from it.
For example if I want to remove the value: 2, the output string should be the following:
"1,3"
I'm using the following code, but it seems to be ineffective.
```
var values = selectedvalues.split(",");
if (values.length > 0) {
for (var i = 0; i < values.length; i++) {
if (values[i] == value) {
index = i;
break;
}
}
if (index != -1) {
selectedvalues = selectedvalues.substring(0, index + 1) + selectedvalues.substring(index + 3);
}
}
else {
selectedvalues = "";
}
``` | ```
var removeValue = function(list, value, separator) {
separator = separator || ",";
var values = list.split(separator);
for(var i = 0 ; i < values.length ; i++) {
if(values[i] == value) {
values.splice(i, 1);
return values.join(separator);
}
}
return list;
}
```
If the value you're looking for is found, it's removed, and a new comma-delimited list is returned. If it is not found, the old list is returned.
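A more compact variant (not part of the original answer) uses `Array.prototype.filter`; note that unlike the loop above, it removes *every* occurrence of the value rather than just the first:

```javascript
// Remove every occurrence of value from a separator-delimited list.
function removeAllValues(list, value, separator) {
  separator = separator || ",";
  return list
    .split(separator)
    .filter(function (item) { return item !== value; })
    .join(separator);
}
```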
Thanks to [Grant Wagner](https://stackoverflow.com/users/9254/grant-wagner) for pointing out my code mistake and enhancement!
John Resig (jQuery, Mozilla) has a neat article about [JavaScript Array Remove](http://ejohn.org/blog/javascript-array-remove/) which you might find useful. | ```
function removeValue(list, value) {
return list.replace(new RegExp(",?" + value + ",?"), function(match) {
var first_comma = match.charAt(0) === ',',
second_comma;
if (first_comma &&
(second_comma = match.charAt(match.length - 1) === ',')) {
return ',';
}
return '';
});
};
alert(removeValue('1,2,3', '1')); // 2,3
alert(removeValue('1,2,3', '2')); // 1,3
alert(removeValue('1,2,3', '3')); // 1,2
``` | remove value from comma separated values string | [
"",
"javascript",
"csv",
""
] |
I have data model classes that contain private fields which are meant to be read-only (via a getter function). These fields are set by my JPA persistence provider (eclipselink) during normal operation, using the contents of the database. For unit tests, I want to set them to fake values from a mockup of the persistence layer. How can I do that? How does eclipselink set these values, anyway?
Simplified example:
```
@Entity
class MyEntity
{
@Id
private Integer _ix;
public Integer ixGet()
{
return this._ix;
}
}
``` | Can you just mock the Entity itself, providing your own implementations of the getters?
You could create an anonymous extension in your mock persistence layer:
```
MyEntity x = new MyEntity() {
public Integer ixGet() { return new Integer(88); }
};
``` | You need to use the Reflection API. Use Class.getDeclaredField() to get the field, then call setAccessible(true) on that field so that you may write to it, even though it is private, and finally call set() on it to write a new value.
For example:
```
public class A {
private int i;
}
```
You want to set the field 'i' to 3, even though it is private:
```
void forceSetInt(Object o, String fieldName, int value) throws NoSuchFieldException, IllegalAccessException {
Class<?> clazz = o.getClass();
Field field = clazz.getDeclaredField(fieldName);
field.setAccessible(true);
field.set(o, value);
}
```
There are a number of exceptions that you will need to handle. | How can I access private class members in Java? | [
"",
"java",
"unit-testing",
"jpa",
"mocking",
"private",
""
] |
I'm building a DLL class library - I want to make it usable by as many people as possible. Which version of the .NET Framework and which C# version should I use? Is it possible to produce a backwards-compatible DLL or different DLLs for different versions? Or does Windows automatically update the .NET framework so I should just use the latest version? Any guidance appreciated! | We target multiple runtime versions concurrently (.NET 1.1, .NET 2.0, and .NET 3.5) for some products.
We handle this in several ways:
* Separate Solution and Project files for each of .NET 1.1, 2.0, and 3.5 SP1, but referencing the same source files.
eg:
```
\ProductFoo_1_1.sln (.NET 1.1 solution, VS 2003)
\ProductFoo_2_0.sln (.NET 2.0 solution, VS 2008)
\ProductFoo_3_5.sln (.NET 3.5 solution, VS 2008)
\FooLibrary\FooLibrary_1_1.csproj (.NET 1.1 Project, VS 2003)
\FooLibrary\FooLibrary_2_0.csproj (.NET 2.0 Project, VS 2008)
\FooLibrary\FooLibrary_3_5.csproj (.NET 3.5 Project, VS 2008)
\FooLibrary\FooClass.cs (shared amongst all Projects)
\FooLibrary\FooHelpers_1_1.cs (only referenced by the .NET 1.1 project)
\FooService\FooService_3.5.csproj (.NET 3.5 Project, VS 2008)
\FooService\FooService.cs
```
* Defining `NET_X_X` symbols in each of the solutions
* For .NET Framework specific code, we use preprocessor instructions such as this:
```
public void SomeMethod(int param)
{
#if NET_1_1
// Need to use Helper to Get Foo under .NET 1.1
Foo foo = Helper.GetFooByParam(param);
#elif NET_2_0 || NET_3_5
// .NET 2.0 and above can use preferred method.
var foo = new Foo { Prop = param };
foo.LoadByParam();
#endif
foo.Bar();
}
#if NET_3_5
// A method that is only available under .NET 3.5
public int[] GetWithFilter(Func<int, bool> filter)
{
// some code here
}
#endif
```
For clarification, the above lines starting with # are preprocessor commands. When you compile a solution, the C# Compiler (csc) pre-processes the source files.
If you have an `#if` directive, csc will evaluate it to determine whether that symbol is defined - and if so, include the lines within that segment when compiling the project.
It's a way to mark up code to compile in certain conditions - we also use it to include more intensive debugging information in specific verbose debug builds, like so:
```
#if DEBUG_VERBOSE
Logging.Log("Web service Called with parameters: param = " + param);
Logging.Log("Web service Response: " + response);
Logging.Log("Current Cache Size (bytes): " + cache.TotalBytes);
// etc.
#endif
```
* We then have NAnt scripts which automate the production of a release for each .NET version.
We happen to control all this through TeamCity, but we can trigger the NAnt scripts manually too.
It does make things more complicated, so we only tend to do it where we need to maintain a legacy .NET 1.1 or 2.0 instance (eg where a customer can't/won't upgrade).
I imagine that when .NET 4.0 rolls around, we'll do the same thing and just add a NET\_4\_0 symbol. | Personally, I'd target .NET 2.0. This means, among other things:
* No extension methods (there is a workaround though)
* No linq
* you CAN use lambda expressions
* you CAN use the 'var' keyword
The thing is, you can use C# 3.x language features (the so-called syntactic sugar), but you can't use libraries that target .NET 3.5 (System.Core, to name one, which includes extension methods and LINQ).
I wouldn't try to support C# 1.x, as it's quite different from C# 2.x and higher. Besides, I expect most people who would use your library are people building new things, who wouldn't in their right minds use C# 1.x ;-) | What .NET Framework and C# version should I target with my class library? | [
"",
"c#",
".net",
"dll",
"build",
""
] |
I've just come into the world of EasyMock. I'd like to ask: does EasyMock only create mock objects for interfaces?
So, in my effort to understand, I wrote a class to generate a unique voucher in Java. I obviously can't know which value it will generate to use in an assert, so how do I make sure the generated voucher is of type Long?
Here is the function:
```
public static Long generateID(int length) {
logger.info("Calling generateID with specify length");
Long result = null;
if (length > 0) {
StringBuffer id = new StringBuffer(length);
for (int i = 0; i < length; i++) {
id.append(NUMS[(int)Math.floor(Math.random() * 20)]);
}
result = Long.parseLong(id.toString());
}
return result;
}
```
Here is the test class:
```
@Before
public void setUp() {
mockgenerator = createMock(VGenerator.class);
}
/**
* Test of generateID method, of class VGenerator.
*/
@Test
public void testGenerateID() {
Long exp = (long)1;
int length = 15;
expect(mockgenerator.generateID(length)).equals(Long.class);
replay(mockgenerator);
long res = mockgenerator.generatedID(length);
assertEquals(exp.class, res.class);
}
```
Well, this might look horrible to you, but I'm still confused about how to do this.
Thanks for helping. | If it is absolutely crucial that the return type is `Long`, and you want to make sure that future changes don't inadvertently change this, then you don't need EasyMock. Just do this:
```
@Test
public void testGenerateIDReturnsLong() throws NoSuchMethodException
{
    // generateID(int) is declared to return Long
    Method method =
        VGenerator.class.getDeclaredMethod("generateID", int.class);
    assertEquals(Long.class, method.getReturnType());
}
```
Currently you are generating a mock implementation of `VGenerator`, and then you test the mock. This is **not** useful. The point of unit testing is to test a real implementation. So now you might be wondering what mocks are good for?
As an example, imagine that `VGenerator` needs to use a random number generator internally, and you provide this in the constructor (which is called "dependency injection"):
```
public class VGenerator
{
private final RandomNumberGenerator rng;
// constructor
public VGenerator(RandomNumberGenerator rng)
{
this.rng = rng;
}
public long generateID(int length)
{
double randomNumber = this.rng.getRandomNumber();
// ... use random number in calculation somehow ...
return id;
}
}
```
When implementing `VGenerator`, you are not really interested in testing the random number generator. What you are interested in is how `VGenerator` calls the random number generator, and how it uses the results to produce output. What you want is to take full control of the random number generator for the purpose of testing, so you create a mock of it:
```
@Test
public void testGenerateId()
{
RandomNumberGenerator mockRNG = createMock(RandomNumberGenerator.class);
expect(mockRNG.getRandomNumber()).andReturn(0.123);
replay(mockRNG);
VGenerator vgenerator = new VGenerator(mockRNG);
long id = vgenerator.generateID(1);
assertEquals(5, id); // e.g. given random number .123, result should be 5
verify(mockRNG);
}
``` | I believe you misunderstand how EasyMock is used:
Calling expect tells the mock object that when you are replaying it, this call should be called. Appending .andReturn() tells the mock object to return whatever you put in there, in my example a Long value of 1.
**The point of EasyMock is that you do not need to implement the mocked interface to test the classes that use it.** By mocking you can isolate a class from the classes it depends on and only test the contained code of the class you are currently testing.
```
interface VGenerator {
public Long generateID(int in);
}
@Before
public void setUp() {
mockgenerator = createMock(VGenerator.class);
}
@Test
public void testGenerateID() {
int length = 15;
expect(mockgenerator.generateID(length)).andReturn(new Long(1));
replay(mockgenerator);
myMethodToBeTested();
verify(mockgenerator);
}
public void myMethodToBeTested(){
//do stuff
long res = mockgenerator.generateID(length);
//do stuff
}
```
If I misunderstood your question and it was really, does easymock only mock interfaces? then the answer is Yes, Easymock only mocks interfaces. Read the documentation for more help [Easymock](http://easymock.org/EasyMock2_2_Documentation.html) | verify object type with easymock | [
"",
"java",
"unit-testing",
"junit",
"easymock",
""
] |
I want to test the JavaScript code in our web app. We are using the jQuery library and .Net MVC. I've looked at [these answers](https://stackoverflow.com/questions/tagged/unit-testing+jquery), and [jqUnit](http://code.google.com/p/jqunit/) looked nice, so I tried it, just to realize that I'll have to pretty much recreate all markup for every page I want to test.
Am I missing something? Are there any alternative approaches for testing JS/jQuery code? Our markup is at times complex, and changes drastically as a result of AJAX calls.
The best I can think of is trying to stick our app in an iframe on a test page. Haven't tried it yet. | If you're testing JavaScript functions/objects, might I suggest YUI's testing component.
<http://developer.yahoo.com/yui/yuitest/>
Very similar setup to JUnit, but it only requires that you include a few test JavaScript files. It would be pretty easy to include this in your page while in a test mode only. | You could just create the structure you want to test in memory without appending it to the document. This won't work with all the tests (for example, CSS properties won't affect items not inserted into the document), but in some cases this may be worth a try.
First it's very fast, and second you don't spoil your document with test items you need to remove after you complete the tests.
Here's that idea in a very simple example.
```
test('My tests', function () {
var testSubj = $('<div>This <b>is</b> my <span>test subject</span></div>');
ok(testSubj.children('span').text() == 'test subject', 'Span is correct');
});
``` | Testing JavaScript code without recreating markup? | [
"",
"javascript",
"jquery",
"unit-testing",
"markup",
""
] |
I have an interface as a contract. Now I'd like to define all classes implementing the interface as serializable (XML).
What I'd like to do is something like the following:
```
public void DoSomethingWithElement( string element )
{
IElement e = DeserializeElement(element);
}
```
```
void SerializeElement(IElement e)
{
    XmlSerializer xmlFormat = new XmlSerializer(typeof(IElement));
    Stream fStream = new MemoryStream();
    xmlFormat.Serialize(fStream, e);
    string element = ASCIIEncoding.Default.GetString(fStream.GetBuffer());
    DoSomethingWithElement(element);
}
```
Is there a simple way to do this? Maybe using attributes?
If not, what do you propose:
Not using an interface but maybe an abstract class?
Implementing ISerializable? | An abstract base class with the [Serializable] attribute is a very good choice, provided it suits your needs. All classes that inherit from the base class will also be marked as serializable.
Keep in mind that the Serializable attribute just says "this class can be serialized" whereas implementing ISerializable (or IXmlSerializable) means "this class wants manual control over its serialization". ISerializable isn't required for simple serialization.
**Update:** you could also consider writing a custom FxCop rule to generate warnings if a class implements your interface without being marked serializable.
If your abstract class *replaces* the interface then it will cover all grounds, but if you keep your interface separate then there is still the potential for a class to implement the interface rather than inherit from the base class--so having a way to detect these classes is still good. | Because interfaces support inheritance, you should just be able to say:
```
interface IElement : ISerializable
{
// IElement specific items
}
```
That will ensure any class that implements `IElement` also implements `ISerializable`. | on Interface or similar | [
"",
"c#",
"interface",
"serialization",
""
] |
Is there a rigid guideline as to when one should preferably use boost::shared\_ptr over a normal pointer (T\*), and vice versa? | My general rule is, when memory gets passed around a lot and it's difficult to say what owns that memory, shared pointers should be used. (Note that this may also indicate a poor design, so think about things before you just go to shared pointers.) If you're using shared pointers in one place, you should try to use them everywhere. If you don't, you'll have to be very careful about how you pass around pointers to avoid double frees.
If your usage of memory is simple and it's obvious what owns memory, then just use normal pointers.
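As a concrete illustration of the shared-ownership semantics (sketched with `std::shared_ptr`, which behaves like `boost::shared_ptr` here; the `Widget` type is illustrative): the pointee stays alive as long as any owner does, and `use_count()` reports how many owners exist:

```cpp
#include <memory>

struct Widget { int value = 42; };

// Observes the reference count while a second shared owner exists.
long useCountWithTwoOwners(const std::shared_ptr<Widget>& a) {
    std::shared_ptr<Widget> b = a;  // copying creates a second owner
    return b.use_count();           // both a and b own the Widget here
}
```

With a raw `T*` the second "owner" would be just another alias, and it would be up to you to decide which one is allowed to `delete`.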
Typically the bigger your project is, the more benefit you'll get out of shared pointers. There aren't rigid rules about this and there shouldn't be. As with many development decisions, there are trade offs and you have to do what's best for you. | > A simple guideline that nearly
> eliminates the possibility of memory
> leaks is: always use a named smart
> pointer variable to hold the result of
> **new**.
— [`boost::shared_ptr` documentation](http://www.boost.org/doc/libs/1_39_0/libs/smart_ptr/shared_ptr.htm) | Usage guidelines: shared versus normal pointers | [
"",
"c++",
""
] |
How would I get the hours from a DATETIME format variable?
Example: 2009-08-17 13:00:00
And I just need to get '13'. | In PHP:
```
list($date, $time) = explode(' ', '2009-08-17 13:00:00');
list($hour, $min, $sec) = explode(':', $time);
```
$hour should contain 13. | ```
$hour = date("H", strtotime('2009-08-17 13:00:00'));
``` | extracting hour value from a DATETIME format | [
"",
"php",
"mysql",
"datetime",
""
] |
```
ax.plot_date((dates, dates), (highs, lows), '-')
```
I'm currently using this command to plot financial highs and lows using [Matplotlib](http://en.wikipedia.org/wiki/Matplotlib). It works great, but how do I remove the blank spaces in the x-axis left by days without market data, such as weekends and holidays?
I have lists of dates, highs, lows, closes and opens. I can't find any examples of creating a graph with an x-axis that show dates but doesn't enforce a constant scale. | I think you need to "artificially synthesize" the exact form of plot you want by using `xticks` to set the tick labels to the strings representing the dates (of course placing the ticks at equispaced intervals even though the dates you're representing aren't equispaced) and then using a plain `plot`. | One of the advertised features of [scikits.timeseries](http://pytseries.sourceforge.net/) is "Create time series plots with intelligently spaced axis labels".
You can see some example plots [here](http://pytseries.sourceforge.net/lib.plotting.examples.html). In the first example (shown below) the 'business' frequency is used for the data, which automatically excludes holidays and weekends and the like. It also masks missing data points, which you see as gaps in this plot, rather than linearly interpolating them.
[](https://i.stack.imgur.com/vuHr6.png) | How to skip empty dates (weekends) in a financial plot | [
"",
"python",
"matplotlib",
"graph",
"time-series",
"finance",
""
] |
I was always using JDBC in JavaSE in a single-threaded environment. But now I need to use a connection pool and let many threads interact with the database (MSSQL and Oracle), and I am having a hard time making it work, as it seems I am lacking some fundamental understanding of the API.
AFAIK, after connecting and logging in, a `Connection` represents a physical TCP/IP connection to the database. It creates `Statement`(s) that can be seen as SQL interaction(s) with the database over the `Connection`.
* Where do transactions and rollbacks come in? At the `Connection` or the `Statement` level?
* Is it safe for one `Connection` to create N statements and hand them to different threads, letting each thread own the use of its `Statement`?
If not, then after configuring the pool with something like this:
```
OracleDataSource ods = new OracleDataSource();
ods.setURL("jdbc:oracle:thin:@tnsentryname");
ods.setUser("u");
ods.setPassword("p");
```
* BTW, where do I set the connection pool size?
* Is this what I would do in each thread in order to use the connection correctly?
// thread run method
```
Connection conn = ods.getConnection();
Statement stmt = conn.createStatement();
ResultSet rs = stmt.executeQuery("the sql");
// do what I need to do with rs
rs.close();
int updateStatus = stmt.executeUpdate("the update");
stmt.close();
conn.close();
```
// end of thread run method
* If any physical `Connection` of the pool somehow crashes or disconnects, will the pool automatically try to reconnect and inject the new connection into the pool, so that subsequent pool.getConnection() calls just get a healthy connection?
Thanks a lot and forgive my bad english please. | Connection pools decorate Connection and Statement instances with their own wrapper implementations. When you call close on a connection you are actually just releasing it back to the pool. When you call close on a prepared statement you are actually just releasing it back to the connection's statement cache. When you prepare a statement you might just be fetching a cached statement instance from the connection. All this is hidden from view so that you don't have to worry about it.
When a connection is given to a client it is no longer available for any other client to use until the connection is released back to the pool. You generally just fetch connections when you need them and then return them as soon as you are finished with them. Because the connections are being held open in the pool there is little overhead in fetching and releasing connections.
You should use a connection from the pool just as you would a single JBDC connection and follow best-practices regarding the closing of resources so that you do not leak any connections or statements. See the try/catch/finally examples in some of the other answers.
Pools can manage the connection resources and test them before handing them out to clients to ensure that they aren't stale. Also a pool will create and destroy connections as needed. | If you've mastered JDBC with single-threading, going to multi-threading and connection pools shouldn't be a big deal. All you need to do differently is: 1. When you need a connection, get it from the pool instead of directly. 2. Each thread should get its own connections.
To clarify point 2: If you get a connection and then pass it to multiple threads, you could have two threads trying to execute queries against the same connection at the same time. Java will throw exceptions on this. You can only have one active Statement per Connection and one active query (i.e. ResultSet) per Statement. If two threads are both holding the same Connection object, they are likely to promptly violate this rule.
One other caveat: With Connection pooling, be very very careful to always close your connections when you're done. The pool manager has no definitive way to know when you're done with a connection, so if you fail to close one, it's going to sit out there dangling for a long time, possibly forever depending on the pool manager. I always always always follow every "getConnection" with a try block, and close the connection in the finally block. Then I KNOW that I've closed it before the function exits.
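That get/try/finally discipline can be sketched as follows; the JDBC URL is a placeholder (no driver is registered for it), which conveniently shows that the `finally` cleanup runs even when the query fails:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class SafeQuery {
    // Sketch of the close-in-finally discipline described above. The JDBC URL
    // is a placeholder: no driver is registered for it, so getConnection throws,
    // and the finally block still releases whatever was opened.
    static String run(String url) {
        Connection conn = null;
        Statement stmt = null;
        ResultSet rs = null;
        try {
            conn = DriverManager.getConnection(url); // pool.getConnection() in pooled code
            stmt = conn.createStatement();
            rs = stmt.executeQuery("SELECT 1");
            return "ok";
        } catch (SQLException e) {
            return "failed-but-cleaned-up";
        } finally {
            // Close in reverse order of acquisition; never let close() mask the real error.
            try { if (rs != null) rs.close(); } catch (SQLException ignored) { }
            try { if (stmt != null) stmt.close(); } catch (SQLException ignored) { }
            try { if (conn != null) conn.close(); } catch (SQLException ignored) { } // with a pool, this releases the connection
        }
    }

    public static void main(String[] args) {
        System.out.println(run("jdbc:example://placeholder")); // failed-but-cleaned-up
    }
}
```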
Besides that, everything should be the same as what you're used to. | JDBC fundamental concepts, Pooling and Threading | [
"",
"java",
"oracle",
"multithreading",
"jdbc",
"connection-pooling",
""
] |
Let's say I have the following code which update a field of a `struct` using reflection. Since the struct instance is copied into the `DynamicUpdate` method, [it needs to be boxed to an object before being passed](https://stackoverflow.com/questions/448158/reflection-on-structure-differs-from-class-but-only-in-code/450453#450453).
```
struct Person
{
public int id;
}
class Test
{
static void Main()
{
object person = RuntimeHelpers.GetObjectValue(new Person());
DynamicUpdate(person);
Console.WriteLine(((Person)person).id); // print 10
}
private static void DynamicUpdate(object o)
{
FieldInfo field = typeof(Person).GetField("id");
field.SetValue(o, 10);
}
}
```
The code works fine. Now, let's say I don't want to use reflection because it's slow. Instead, I want to generate some CIL directly modifying the `id` field and convert that CIL into a reusable delegate (say, using the Dynamic Method feature). Specifically, I want to replace the above code with something like this:
```
static void Main()
{
var action = CreateSetIdDelegate(typeof(Person));
object person = RuntimeHelpers.GetObjectValue(new Person());
action(person, 10);
Console.WriteLine(((Person)person).id); // print 10
}
private static Action<object, object> CreateSetIdDelegate(Type t)
{
// build dynamic method and return delegate
}
```
---
**My question:** is there any way to implement `CreateSetIdDelegate` excepts from using one of the following techniques?
1. Generate CIL that invokes the setter via reflection (as in the first code segment in this post). This makes no sense, given the requirement is to get rid of reflection, but it's a possible implementation so I mention it.
2. Instead of using `Action<object, object>`, use a custom delegate whose signature is `public delegate void Setter(ref object target, object value)`.
3. Instead of using `Action<object, object>`, use `Action<object[], object>` with the 1st element of the array being the target object.
The reason I don't like 2 & 3 is that I don't want different delegates for the setter of an object and the setter of a struct (and I don't want to make the set-object-field delegate more complicated than necessary, e.g. `Action<object, object>`). I reckon that the implementation of `CreateSetIdDelegate` would generate different CIL depending on whether the target type is a struct or an object, but I want it to return the same delegate offering the same API to the user. | **EDIT again**: This works for structs now.
There's a gorgeous way to do it in C# 4, but you'll have to write your own `ILGenerator` emit code for anything before that. They added an `ExpressionType.Assign` to the .NET Framework 4.
This works in C# 4 (tested):
```
public delegate void ByRefStructAction(ref SomeType instance, object value);
private static ByRefStructAction BuildSetter(FieldInfo field)
{
ParameterExpression instance = Expression.Parameter(typeof(SomeType).MakeByRefType(), "instance");
ParameterExpression value = Expression.Parameter(typeof(object), "value");
Expression<ByRefStructAction> expr =
Expression.Lambda<ByRefStructAction>(
Expression.Assign(
Expression.Field(instance, field),
Expression.Convert(value, field.FieldType)),
instance,
value);
return expr.Compile();
}
```
Edit: Here was my test code.
```
public struct SomeType
{
public int member;
}
[TestMethod]
public void TestIL()
{
FieldInfo field = typeof(SomeType).GetField("member");
var setter = BuildSetter(field);
SomeType instance = new SomeType();
int value = 12;
setter(ref instance, value);
Assert.AreEqual(value, instance.member);
}
``` | I ran into a similar issue, and it took me most of a weekend, but I finally figured it out after a lot of searching, reading, and disassembling C# test projects. And this version only requires .NET 2, not 4.
```
public delegate void SetterDelegate(ref object target, object value);
private static Type[] ParamTypes = new Type[]
{
typeof(object).MakeByRefType(), typeof(object)
};
private static SetterDelegate CreateSetMethod(MemberInfo memberInfo)
{
Type ParamType;
if (memberInfo is PropertyInfo)
ParamType = ((PropertyInfo)memberInfo).PropertyType;
else if (memberInfo is FieldInfo)
ParamType = ((FieldInfo)memberInfo).FieldType;
else
throw new Exception("Can only create set methods for properties and fields.");
DynamicMethod setter = new DynamicMethod(
"",
typeof(void),
ParamTypes,
memberInfo.ReflectedType.Module,
true);
ILGenerator generator = setter.GetILGenerator();
generator.Emit(OpCodes.Ldarg_0);
generator.Emit(OpCodes.Ldind_Ref);
if (memberInfo.DeclaringType.IsValueType)
{
#if UNSAFE_IL
generator.Emit(OpCodes.Unbox, memberInfo.DeclaringType);
#else
generator.DeclareLocal(memberInfo.DeclaringType.MakeByRefType());
generator.Emit(OpCodes.Unbox, memberInfo.DeclaringType);
generator.Emit(OpCodes.Stloc_0);
generator.Emit(OpCodes.Ldloc_0);
#endif // UNSAFE_IL
}
generator.Emit(OpCodes.Ldarg_1);
if (ParamType.IsValueType)
generator.Emit(OpCodes.Unbox_Any, ParamType);
if (memberInfo is PropertyInfo)
generator.Emit(OpCodes.Callvirt, ((PropertyInfo)memberInfo).GetSetMethod());
else if (memberInfo is FieldInfo)
generator.Emit(OpCodes.Stfld, (FieldInfo)memberInfo);
if (memberInfo.DeclaringType.IsValueType)
{
#if !UNSAFE_IL
generator.Emit(OpCodes.Ldarg_0);
generator.Emit(OpCodes.Ldloc_0);
generator.Emit(OpCodes.Ldobj, memberInfo.DeclaringType);
generator.Emit(OpCodes.Box, memberInfo.DeclaringType);
generator.Emit(OpCodes.Stind_Ref);
#endif // UNSAFE_IL
}
generator.Emit(OpCodes.Ret);
return (SetterDelegate)setter.CreateDelegate(typeof(SetterDelegate));
}
```
Note the "#if UNSAFE\_IL" stuff in there. I actually came up with 2 ways to do it, but the first one is really... hackish. To quote from Ecma-335, the standards document for IL:
"Unlike box, which is required to make a copy of a value type for use in the object, unbox is not required to copy the value type from the object. Typically it simply computes the address of the value type that is already present inside of the boxed object."
So if you want to play dangerously, you can use OpCodes.Unbox to change your object handle into a pointer to your structure, which can then be used as the first parameter of a Stfld or Callvirt. Doing it this way actually ends up modifying the struct in place, and you don't even need to pass your target object by ref.
However, note that the standard doesn't guarantee that Unbox will give you a pointer to the boxed version. Particularly, it suggests that Nullable<> can cause Unbox to create a copy. Anyway, if that happens, you'll probably get a silent failure, where it sets the field or property value on a local copy which is then immediately discarded.
So the safe way to do it is pass your object by ref, store the address in a local variable, make the modification, and then rebox the result and put it back in your ByRef object parameter.
I did some rough timings, calling each version 10,000,000 times, with 2 different structures:
Structure with 1 field:
.46 s "Unsafe" delegate
.70 s "Safe" delegate
4.5 s FieldInfo.SetValue
Structure with 4 fields:
.46 s "Unsafe" delegate
.88 s "Safe" delegate
4.5 s FieldInfo.SetValue
Notice that the boxing makes the the "Safe" version speed decrease with structure size, whereas the other two methods are unaffected by structure size. I guess at some point the boxing cost would overrun the reflection cost. But I wouldn't trust the "Unsafe" version in any important capacity. | Generate dynamic method to set a field of a struct instead of using reflection | [
"",
"c#",
".net",
"reflection",
"reflection.emit",
"dynamic-method",
""
] |
I would like to do something like the below
```
public interface IFormatter<TData, TFormat>
{
TFormat Format(TData data);
}
public abstract class BaseFormatter<TData> : IFormatter<TData, XElement>
{
public abstract XElement Format(TData data);
}
```
However, when I do the above I get an error about "The type or method has 2 generic parameters but only 1 was provided ...". I'll try and tackle it another way but I'm curious as to why this cannot be done?
Note that while this compiles in a single assembly, I have since noticed that the error message is actually generated by an assembly that is using this piece of code (a test assembly). This is where the error message noted above is generated. | Just in case anyone has this issue:
It turns out that the problem was related to the private field accessor in an MSTest unit testing project in the same solution. When I removed the accessor, the project compiled. | Is that the *exact* code you have? If it is, then you are missing the keyword **`class`**
Aside from that, this should compile just fine:
```
public interface IFormatter<TData, TFormat>
{
TFormat Format(TData data);
}
public abstract class BaseFormatter<TData> : IFormatter<TData, XElement>
{
// blah blah
public XElement Format(TData data)
{
throw new NotImplementedException();
}
}
``` | Implementing an abstract class based on some generic parameters in C# | [
"",
"c#",
"generics",
""
] |
Why doesn't the STL string class have an overloaded char\* operator built-in? Is there any specific reason for them to avoid it?
If there was one, then using the string class with C functions would become much more convenient.
I would like to know your views. | Following is the quote from Josuttis STL book:
> However, there is no automatic type
> conversion from a string object to a
> C-string. This is for safety reasons
> to prevent unintended type conversions
> that result in strange behavior (type
> char\* often has strange behavior) and
> ambiguities (for example, in an
> expression that combines a string and
> a C-string it would be possible to
> convert string into char\* and vice
> versa). Instead, there are several
> ways to create or write/copy in a
> C-string. In particular, c\_str() is
> provided to generate the value of a
> string as a C-string (as a character
> array that has '\0' as its last
> character). | You should always avoid cast operators, as they tend to introduce ambiguities into your code that can only be resolved with further casts, or, worse, code that compiles but doesn't do what you expect. A char\*() operator would have lots of problems. For example:
```
string s = "hello";
strcpy( s, "some more text" );
```
would compile without a warning, but clobber the string.
A const version would be possible, but as strings must (possibly) be copied in order to implement it, it would have an undesirable hidden cost. The explicit c\_str() function means you must always state that you really intend to use a const char \*. | operator char* in STL string class | [
"",
"c++",
"stl",
""
] |
I have some large HEX values that I want to display as regular numbers, I was using hexdec() to convert to float, and I found a function on PHP.net to convert that to decimal, but it seems to hit a ceiling, e.g.:
```
$h = 'D5CE3E462533364B';
$f = hexdec($h);
echo $f .' = '. Exp_to_dec($f);
```
Output: 1.5406319846274E+19 = 15406319846274000000
Result from calc.exe = 15406319846273791563
Is there another method to convert large hex values? | As said on the [hexdec manual page](http://php.net/hexdec):
> The function can now convert values
> that are too big for the platform's
> integer type, it will return the value
> as float instead in that case.
If you want to get some kind of big integer (not float), you'll need it stored inside a string. This might be possible using [BC Math](http://php.net/manual/en/book.bc.php) functions.
For instance, if you look in the comments of the hexdec manual page, you'll find [this note](http://fr.php.net/manual/en/function.hexdec.php#90309)
If you adapt that function a bit, to avoid a notice, you'll get:
```
function bchexdec($hex)
{
$dec = 0;
$len = strlen($hex);
for ($i = 1; $i <= $len; $i++) {
$dec = bcadd($dec, bcmul(strval(hexdec($hex[$i - 1])), bcpow('16', strval($len - $i))));
}
return $dec;
}
```
*(This function has been copied from the note I linked to; and only a bit adapted by me)*
And using it on your number:
```
$h = 'D5CE3E462533364B';
$f = bchexdec($h);
var_dump($f);
```
The output will be:
```
string '15406319846273791563' (length=20)
```
So, not the kind of big float you had; and it seems OK with what you are expecting:
> Result from calc.exe =
> 15406319846273791563
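The digit-by-digit scheme itself is language-agnostic. Purely for illustration, here is the same accumulation in Python, whose integers are arbitrary-precision, so the result can be checked exactly:

```python
def hex_to_dec_string(h):
    # Same accumulation bchexdec performs with bcadd/bcmul:
    # running total = total * 16 + value of the next hex digit
    dec = 0
    for ch in h:
        dec = dec * 16 + int(ch, 16)
    return str(dec)

print(hex_to_dec_string("D5CE3E462533364B"))  # 15406319846273791563
```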
Hope this helps ;-)
*And, yes, user notes on the PHP documentation are sometimes a real gold mine ;-)* | hexdec() switches from int to float when the result is too large to be represented as an int. If you want arbitrarily long values, you're probably going to have to roll your own conversion function to change the hex string to a [GMP integer](http://www.php.net/manual/en/book.gmp.php).
```
function gmp_hexdec($n) {
$gmp = gmp_init(0);
$mult = gmp_init(1);
for ($i=strlen($n)-1;$i>=0;$i--,$mult=gmp_mul($mult, 16)) {
$gmp = gmp_add($gmp, gmp_mul($mult, hexdec($n[$i])));
}
return $gmp;
}
print gmp_strval(gmp_hexdec("D5CE3E462533364B"));
Output: 15406319846273791563
``` | Large hex values with PHP hexdec | [
"",
"php",
"hex",
""
] |
I'm a new Java developer and would like to find out if it's possible to write Java code on Windows using Eclipse while the code is actually compiled and run on Linux. I need to write a Java program for a Linux (Ubuntu) environment, but I'm used to Windows for development.
I'm thinking of using Samba for file sharing, so I can use Windows to access the source code that resides on Ubuntu. But when I compile the code, it actually uses the Windows JVM instead of the Ubuntu JVM; will that matter? Also, when I need to include external JAR libraries that reside on Ubuntu, the path is actually something like E:\home\java\project\abc.jar, where the E: drive is a mapped network drive from Samba. So when I run the program on Ubuntu, there won't be any E: drive anymore.
So I'm a bit confused and wonder if this is possible at all. Any help will be really appreciated. | You should use [Eclipse](http://www.eclipse.org/) as your development IDE and add dependent JARs as relative paths, not full directory links. You can compile on Windows and run on Linux; it does not matter. | Because Java is platform-independent, it won't matter where you compile vs. where you run. So you can compile under Windows and run on Linux with no problem.
You'll need copies of your libraries on the Linux box *as well* as the Windows box. It doesn't matter where these are, but they need to be referenced via the environment variable `CLASSPATH`.
So on Windows, your `CLASSPATH` looks like:
```
CLASSPATH=d:\jars\abc.jar;d:\jars\def.jar
```
and on Unix/Linux it will look like:
```
CLASSPATH=/home/user/lib/abc.jar:/home/user/lib/def.jar
```
(note the change between colon and semicolon).
Given the above, running
```
java MyApp
```
will work on both platforms. Setting the CLASSPATH globally may affect different Java instances, so you may want to set it in a shell-script invoking each program. Or you can specify the `CLASSPATH` on the command line e.g.
```
java -cp /home/user/lib/abc.jar:/home/user/lib/def.jar MyApp
``` | How do I write Java code in Windows and compile and run in Linux | [
"",
"java",
""
] |
For those of you that use the underscore prefix for private class members in C# (i.e. private int \_count;), what do you use for private constants? I'm thinking the following but am curious as to what others are doing.
```
private const int _MaxCount = 100;
``` | Well, private is private, so chose the convention you like best. I personally use PascalCasing, e.g:
```
private const int SomeConstant = 42;
```
---
This is what [MSDN has to say about it](http://msdn.microsoft.com/en-us/library/ms229012.aspx):
The naming guidelines for fields apply to static public and protected fields. You should not define public or protected instance fields:
* Do use Pascal casing in field names.
* Do name fields with nouns or noun phrases.
* Do not use a prefix for field names. For example, do not use g\_ or s\_ to distinguish static versus non-static fields. | I'm using:
```
private const int MAX_COUNT = 42;
```
I do not use *PascalCasing* because that's my standard for properties.
I do not use *camelCasing* because that's my standard for local variables.
I do not use \_camelCasing because that's my standard for private fields.
I do not use \_PascalCasing because IMO it's hard to distinguish it from \_camelCasing. | What should be the standard for private class constants in C#? | [
"",
"c#",
"coding-style",
""
] |
I'm developing a web app that will access and work with large amounts of data in a MySQL database, something like a dictionary/thesaurus. I need to test the performance of the DB as its size increases, so I know how slow each request will be in the future.
Any ideas? Like are there specific tools to check DB performance for a particular query, etc? | You can use [Maatkit](http://www.maatkit.org/)'s [query profiler](http://www.maatkit.org/doc/mk-query-profiler.html) to measure impact of data amount on MySQL performances.
And [generatedata.com](http://www.generatedata.com/) to generate the data you need to test your app.
You can also test your application's responsiveness using HTTP testing tools like:
* Apache's bundled 'ab' tool (Apache Bench)
* [JMeter](http://jakarta.apache.org/jmeter/)
* [Selenium](http://seleniumhq.org/) | Do you know what, specifically, you're testing? Measuring "performance" is almost always useless, unless you know exactly what it is you want.
For example, are you looking for low latency on query result retrieval? Perhaps high throughput on data retrieval? Perhaps you care more about fast insertions into the database, and less about fast query results? Perhaps you care about different things on different tables (in fact, that's almost always the case).
My advice will probably be ignored, but I'll say it anyway:
**Don't** optimise before you know what you want.
**Don't** optimise as you write the code.
When you do get around to optimising your database, make sure you optimise for the right things. Use realistic data - if you're testing dictionary-sized hunks of text, don't test with binary data (for example).
Anyway, I realise you were probably looking for a more technical answer, but hey... | How to measure database performance? | [
"",
"php",
"mysql",
"database",
"performance",
"firebug",
""
] |
How can we debug JavaScript with IE 8?
JavaScript debugging with Visual Studio doesn't work after an update to IE 8. | I discovered today that we can now debug JavaScript with the Developer Tools integrated into IE 8.
* Click **▼ Tools** on the toolbar, to the right of the tabs.
* Select **Developer Tools**. The Developer Tools dialogue should open.
* Click the **Script** tab in the dialogue.
* Click the **Start Debugging** button.
You can use watches and breakpoints, see the call stack, etc., much like the debuggers in other browsers.
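For instance, a script can pause itself for the debugger with a `debugger;` statement (the function and data here are made up; the statement is a no-op when no debugger is attached):

```javascript
function total(prices) {
    let sum = 0;
    for (const p of prices) {
        // With the IE8 Script tab (or any attached debugger), execution
        // pauses here so you can inspect `p` and `sum` on each iteration.
        debugger;
        sum += p;
    }
    return sum;
}

console.log(total([1, 2, 3])); // 6
```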
You can also use the statement `debugger;` in your JavaScript code to set a breakpoint. | You can get more information about IE8 Developer Toolbar debugging at [Debugging JScript](http://msdn.microsoft.com/en-us/library/dd565628.aspx#dbugjscript) or [Debugging Script with the Developer Tools](http://msdn.microsoft.com/en-us/library/dd565625.aspx). | How to debug Javascript with IE 8 | [
"",
"javascript",
"internet-explorer-8",
"ie-developer-tools",
""
] |
I have a file called `tester.py`, located on `/project`.
`/project` has a subdirectory called `lib`, with a file called `BoxTime.py`:
```
/project/tester.py
/project/lib/BoxTime.py
```
I want to import `BoxTime` from `tester`. I have tried this:
```
import lib.BoxTime
```
Which resulted:
```
Traceback (most recent call last):
File "./tester.py", line 3, in <module>
import lib.BoxTime
ImportError: No module named lib.BoxTime
```
Any ideas how to import `BoxTime` from the subdirectory?
**EDIT**
The `__init__.py` was the problem, but don't forget to refer to `BoxTime` as `lib.BoxTime`, or use:
```
import lib.BoxTime as BT
...
BT.bt_function()
``` | Take a look at the **[Packages documentation (Section 6.4)](https://docs.python.org/3/tutorial/modules.html#packages)**.
In short, you need to put a blank file named
```
__init__.py
```
in the `lib` directory. | * Create a subdirectory named `lib`.
* Create an empty file named `lib\__init__.py`.
* In `lib\BoxTime.py`, write a function `foo()` like this:
```
def foo():
print "foo!"
```
* In your client code in the directory above `lib`, write:
```
from lib import BoxTime
BoxTime.foo()
```
* Run your client code. You will get:
```
foo!
```
---
Much later -- in linux, it would look like this:
```
% cd ~/tmp
% mkdir lib
% touch lib/__init__.py
% cat > lib/BoxTime.py << EOF
heredoc> def foo():
heredoc> print "foo!"
heredoc> EOF
% tree lib
lib
├── BoxTime.py
└── __init__.py
0 directories, 2 files
% python
Python 2.7.6 (default, Mar 22 2014, 22:59:56)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from lib import BoxTime
>>> BoxTime.foo()
foo!
``` | Import a file from a subdirectory? | [
"",
"python",
"python-import",
""
] |
I have a textarea like the one I'm using to type this message, and I would like to prepend a '>' to every line. The problem I am having is that I don't know where the lines break. What I want to do is emulate an email reply: when someone replies, they see '>' prepended to every line. Is there a function for this?
Thanks. | If it's for an email message, you could take a string and use the [wordwrap](http://php.net/wordwrap) function to break at about 75 chars:
```
$reply='> '.wordwrap($original, 75, "\n> ");
```
Because you can supply your own break string, you can include the `>` right there!
If the original is already broken into lines, then simply replace existing line break:
```
$reply='> '.str_replace("\n", "\n> ", $original);
``` | A little better for email quoting is:
```
$text = $_POST['text'];
$text = str_replace("\r\n", "\n", $text);
$text = str_replace("\r", "\n", $text);
$text = str_replace("\n", "\n> ", $text);
$text = wordwrap($text, 75, "\n> ");
$text = str_replace("\n> >", "\n>>", $text);
$text = ($text[0] == '>' ? '>' : '> ') . $text;
$text = htmlspecialchars($text);
```
This adds the always-useful space between the '>' and the text, while not bloating it by adding spaces between multiple quote levels.
Even better is if you can make it not wordwrap lines that are already quoted in the original message. Better still, implement format=flowed. But either of these'll take considerably more code. | Is there a way to place a '>' sign at the beginning of a line using PHP? | [
"",
"php",
""
] |
I am trying to automate logging into Photobucket for API use for a project that requires automated photo downloading using stored credentials.
The API generates a URL to use for logging in, and using Firebug I can see what requests and responses are being sent/received.
My question is, how can I use HttpWebRequest and HttpWebResponse to mimic what happens in the browser in C#?
Would it be possible to use a web browser component inside a C# app, populate the username and password fields and submit the login? | I've done this kind of thing before, and ended up with a nice toolkit for writing these types of applications. I've used this toolkit to handle non-trivial back-n-forth web requests, so it's entirely possible, and not extremely difficult.
I found out quickly that doing the `HttpWebRequest`/`HttpWebResponse` from scratch really was lower-level than I wanted to be dealing with. My tools are based entirely around the [HtmlAgilityPack](http://www.codeplex.com/htmlagilitypack) by Simon Mourier. It's an excellent toolset. It does a lot of the heavy lifting for you, and makes parsing of the fetched HTML *really* easy. If you can rock XPath queries, the HtmlAgilityPack is where you want to start. It handles poorly formed HTML quite well too!
You still need a good tool to help debug. Besides what you have in your debugger, being able to inspect the http/https traffic as it goes back-n-forth across the wire is priceless. Since your code is going to be making these requests, not your browser, FireBug isn't going to be of much help debugging your code. There are all sorts of packet-sniffing tools, but for HTTP/HTTPS debugging, I don't think you can beat the ease of use and power of [Fiddler 2](http://www.fiddler2.com/fiddler2/). The newest version even comes with a plugin for firefox to quickly divert requests through fiddler and back. Because it can also act as a seamless HTTPS proxy, you can inspect your HTTPS traffic as well.
Give 'em a try, I'm sure they'll be two indispensable tools in your hacking.
**Update:** Added the below code example. This is pulled from a not-much-larger "Session" class that logs into a website and keeps a hold of the related cookies for you. I chose this because it does more than a simple 'please fetch that web page for me' example, plus it has a line or two of XPath querying against the final destination page.
```
public bool Connect() {
if (string.IsNullOrEmpty(_Username)) { base.ThrowHelper(new SessionException("Username not specified.")); }
if (string.IsNullOrEmpty(_Password)) { base.ThrowHelper(new SessionException("Password not specified.")); }
_Cookies = new CookieContainer();
HtmlWeb webFetcher = new HtmlWeb();
webFetcher.UsingCache = false;
webFetcher.UseCookies = true;
HtmlWeb.PreRequestHandler justSetCookies = delegate(HttpWebRequest webRequest) {
SetRequestHeaders(webRequest, false);
return true;
};
HtmlWeb.PreRequestHandler postLoginInformation = delegate(HttpWebRequest webRequest) {
SetRequestHeaders(webRequest, false);
// before we let webGrabber get the response from the server, we must POST the login form's data
// This posted form data is *VERY* specific to the web site in question, and it must be exactly right,
// and exactly what the remote server is expecting, otherwise it will not work!
//
// You need to use an HTTP proxy/debugger such as Fiddler in order to adequately inspect the
// posted form data.
ASCIIEncoding encoding = new ASCIIEncoding();
string postDataString = string.Format("edit%5Bname%5D={0}&edit%5Bpass%5D={1}&edit%5Bform_id%5D=user_login&op=Log+in", _Username, _Password);
byte[] postData = encoding.GetBytes(postDataString);
webRequest.ContentType = "application/x-www-form-urlencoded";
webRequest.ContentLength = postData.Length;
webRequest.Referer = Util.MakeUrlCore("/user"); // builds a proper-for-this-website referer string
using (Stream postStream = webRequest.GetRequestStream()) {
postStream.Write(postData, 0, postData.Length);
postStream.Close();
}
return true;
};
string loginUrl = Util.GetUrlCore(ProjectUrl.Login);
bool atEndOfRedirects = false;
string method = "POST";
webFetcher.PreRequest = postLoginInformation;
// this is trimmed...this was trimmed in order to handle one of those 'interesting'
// login processes...
webFetcher.PostResponse = delegate(HttpWebRequest webRequest, HttpWebResponse response) {
if (response.StatusCode == HttpStatusCode.Found) {
// the login process is forwarding us on...update the URL to move to...
loginUrl = response.Headers["Location"] as String;
method = "GET";
webFetcher.PreRequest = justSetCookies; // we only need to post cookies now, not all the login info
} else {
atEndOfRedirects = true;
}
foreach (Cookie cookie in response.Cookies) {
// *snip*
}
};
// Real work starts here:
HtmlDocument retrievedDocument = null;
while (!atEndOfRedirects) {
retrievedDocument = webFetcher.Load(loginUrl, method);
}
// ok, we're fully logged in. Check the returned HTML to see if we're sitting at an error page, or
// if we're successfully logged in.
if (retrievedDocument != null) {
HtmlNode errorNode = retrievedDocument.DocumentNode.SelectSingleNode("//div[contains(@class, 'error')]");
if (errorNode != null) { return false; }
}
return true;
}
public void SetRequestHeaders(HttpWebRequest webRequest) { SetRequestHeaders(webRequest, true); }
public void SetRequestHeaders(HttpWebRequest webRequest, bool allowAutoRedirect) {
try {
webRequest.AllowAutoRedirect = allowAutoRedirect;
webRequest.CookieContainer = _Cookies;
// the rest of this stuff is just to try and make our request *look* like FireFox.
webRequest.UserAgent = @"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.3) Gecko/20070309 Firefox/2.0.0.3";
webRequest.Accept = @"text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5";
webRequest.KeepAlive = true;
webRequest.Headers.Add(@"Accept-Language: en-us,en;q=0.5");
//webRequest.Headers.Add(@"Accept-Encoding: gzip,deflate");
}
catch (Exception ex) { base.ThrowHelper(ex); }
}
``` | Here is how I solved it:
```
public partial class Form1 : Form {
private string LoginUrl = "/apilogin/login";
private string authorizeUrl = "/apilogin/authorize";
private string doneUrl = "/apilogin/done";
public Form1() {
InitializeComponent();
this.Load += new EventHandler(Form1_Load);
}
void Form1_Load(object sender, EventArgs e) {
PhotobucketNet.Photobucket pb = new Photobucket("pubkey","privatekey");
string url = pb.GenerateUserLoginUrl();
webBrowser1.Url = new Uri(url);
webBrowser1.DocumentCompleted += new WebBrowserDocumentCompletedEventHandler(webBrowser1_DocumentCompleted);
}
void webBrowser1_DocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e) {
if (e.Url.AbsolutePath.StartsWith(LoginUrl))
{
webBrowser1.Document.GetElementById("usernameemail").SetAttribute("Value","some username");
webBrowser1.Document.GetElementById("password").SetAttribute("Value","some password");
webBrowser1.Document.GetElementById("login").InvokeMember("click");
}
if (e.Url.AbsolutePath.StartsWith(authorizeUrl))
{
webBrowser1.Document.GetElementById("allow").InvokeMember("click");
}
if (e.Url.AbsolutePath.StartsWith(doneUrl))
{
string token = webBrowser1.Document.GetElementById("oauth_token").GetAttribute("value");
}
}
}
```
The token captured in the last if block is what is needed to continue using the API. This method works fine for me since the code that needs it will, of course, be running on Windows, so I have no problem spawning a process to load this separate app and extract the token.
"",
"c#",
"browser",
"login-automation",
""
] |
I need to customize the SSL handshaking when calling a JAX-WS API on top of Axis2.
I find no reference at all on how to do this. With Metro I can set a custom SSLSocketFactory, but that uses a non-standard API.
How do I get access to the Axis engine so that I can reconfigure it before sending a SOAP request? | I gave up on Axis2 and WebSphere SOAP. It took less time to just implement my own JAX-WS that supports everything I need. Too bad. | You probably have a stub class that extends `org.apache.axis2.client.Stub`. You can set its transport properties:
```
YourStubClass stub = new YourStubClass();
stub.initStub(endpointUrl);
stub._getServiceClient().getOptions().setProperty(HTTPConstants.CACHED_HTTP_CLIENT, soapHttpClient);
```
Where `endpointUrl` is a String containing the endpoint URL and `soapHttpClient` is an instance of Apache's HTTP Client (`org.apache.commons.httpclient.HttpClient`).
When you create your HttpClient object, you can customize your SSL handshaking. | AXIS2 and JAX-WS how can I change the SSL handshaking? | [
"",
"java",
"web-services",
"client",
"jax-ws",
"apache-axis",
""
] |
UPDATE
Finally managed to work it out! Thanks for all the help from everyone. If you spot any potential errors or scope for improvement in my query please let me know.
```
SELECT *
FROM TBL_CAMPAIGNS C
INNER JOIN TBL_MEMBERS M
ON C.campaign_MemberId = M.members_Id
INNER JOIN TBL_CAMPAIGNS_CHARITIES CC
ON C.campaign_Key = CC.camchar_CampaignID
INNER JOIN TBL_CHARITIES CH
ON CC.camchar_CharityID = CH.cha_Key
LEFT OUTER JOIN (
select recip_Chosen, count(recip_CampaignId) as ChosenCount
from TBL_CAMPAIGNRECIPIENTS
WHERE recip_CampaignId = @campaign
group by recip_Chosen
) CRC
on CH.cha_Key = CRC.recip_Chosen
WHERE C.campaign_Key = @campaign
```
Thanks!!!
///////////////////
After some really useful advice I decided to implement orbMan's suggestion as follows:
```
SELECT *
FROM TBL_CAMPAIGNS C
INNER JOIN TBL_MEMBERS M
ON C.campaign_MemberId = M.members_Id
INNER JOIN TBL_CAMPAIGNS_CHARITIES CC
ON C.campaign_Key = CC.camchar_CampaignID
INNER JOIN TBL_CHARITIES CH
ON CC.camchar_CharityID = CH.cha_Key
WHERE C.campaign_Key = @campaign
```
This returns 1 row for each charity associated with a given campaign (as associated via TBL\_Campaigns\_Charities). However, I also have another table (TBL\_CAMPAIGNRECIPIENTS CR) which details each person invited to take part in the campaign. On visiting the campaign page they can select one of the charities linked to the campaign.
Now I need to know how many people have chosen each of the associated charities (CR.recip\_Chosen). Their details aren't important. I just need to know how many people have selected each of the associated charities.
So something like:
```
COUNT CH.cha_Key, FROM CR WHERE CR.recip_Chosen = CH.cha_Key
```
but integrated into the statement above.
Thanks in advance.
ORIGINAL POST BELOW:
/ / / / / / / / / / / / / / / / / / /
Hi,
I need to gain data from across 3 tables. The first two are straightforward and are currently grabbed as:
```
SELECT * FROM TBL_CAMPAIGNS C
JOIN TBL_MEMBERS M
ON C.campaign_MemberId = M.members_Id
WHERE C.campaign_Key = @campaign
```
The table 'TBL\_CAMPAIGNS' contains various columns, five of which hold an int. This int refers to the key of the 3rd table 'TBL\_CHARITIES'. How do I return the data of the third table in combination with the above?
I've created the following so far:
```
SELECT * FROM TBL_CAMPAIGNS C
JOIN TBL_MEMBERS M
ON C.campaign_MemberId = M.members_Id
JOIN TBL_CHARITIES CH
ON CH.cha_Key = C.campaign_Char1
WHERE C.campaign_Key = @campaign
```
But, as you can tell, that only returns C.campaign\_Char1. What about C.campaign\_Char2, C.campaign\_Char3, C.campaign\_Char4, and C.campaign\_Char5?
I did try this:
```
SELECT * FROM TBL_CAMPAIGNS C
JOIN TBL_MEMBERS M
ON C.campaign_MemberId = M.members_Id
JOIN TBL_CHARITIES CH
ON CH.cha_Key = C.campaign_Char1
AND CH.cha_Key = C.campaign_Char2
AND CH.cha_Key = C.campaign_Char3
.......
WHERE C.campaign_Key = @campaign
```
But, of course, this doesn't work!
Any suggestions / help?
Thanks in advance. | This is a denormalized design and that is why you are having difficulty querying it. It would be easier if (instead of columns campaign\_Char1 through 5) you had a many-to-many table between TBL\_CAMPAIGNS and TBL\_CHARITIES. E.g., TBL\_CAMPAIGNS\_CHARITIES. This would contain a Campaign ID and a CharityID.
Then your query would be:
```
SELECT *
FROM TBL_CAMPAIGNS C
INNER JOIN TBL_MEMBERS M
ON C.campaign_MemberId = M.members_Id
INNER JOIN TBL_CAMPAIGNS_CHARITIES CC
ON C.campaign_Key = CC.CampaignID
INNER JOIN TBL_CHARITIES CH
ON CC.CharityID = CH.cha_Key
WHERE C.campaign_Key = @campaign
```
**Update:**
```
SELECT *
FROM TBL_CAMPAIGNS C
INNER JOIN TBL_MEMBERS M
ON C.campaign_MemberId = M.members_Id
INNER JOIN TBL_CAMPAIGNS_CHARITIES CC
ON C.campaign_Key = CC.camchar_CampaignID
INNER JOIN TBL_CHARITIES CH
ON CC.camchar_CharityID = CH.cha_Key
LEFT OUTER JOIN (
select recip_Chosen, count(*) as ChosenCount
from TBL_CAMPAIGNRECIPIENTS
group by recip_Chosen
) CRC
on CH.cha_Key = CRC.recip_Chosen
WHERE C.campaign_Key = @campaign
``` | First thought is that you'll have to join the TBL\_CHARITIES table again for each reference you want to give.
```
SELECT * FROM TBL_CAMPAIGNS C
JOIN TBL_MEMBERS M
ON C.campaign_MemberId = M.members_Id
JOIN TBL_CHARITIES CH1
ON CH1.cha_Key = C.campaign_Char1
JOIN TBL_CHARITIES CH2
ON CH2.cha_Key = C.campaign_Char2
JOIN TBL_CHARITIES CH3
ON CH3.cha_Key = C.campaign_Char3
JOIN TBL_CHARITIES CH4
ON CH4.cha_Key = C.campaign_Char4
JOIN TBL_CHARITIES CH5
ON CH5.cha_Key = C.campaign_Char5
WHERE C.campaign_Key = @campaign
```
I'm sure someone has a better solution though. | SQL Statement help required | [
"",
"asp.net",
"sql",
"sql-server-2005",
"t-sql",
"stored-procedures",
""
] |
I have a string that looks like this:
'p10005c4'
I need to extract the productId 10005 and the colorId 4 from that string into these variables:
$productId
$colorId
productId is > 10000.
colorId > 1.
How can I do this cleanly using regex? | This should be possible with the following regex:
```
/p(\d+)c(\d+)/
```
This basically means, match any string that consists of a p, followed by one or more digits, then followed by a c, then followed by one or more digits.
The parentheses indicate capture groups, and since you want to capture the two ids, they're surrounded by them.
To use this for your purposes, you'd do something like the following:
```
$str = 'p10005c4';
$matches = array();
preg_match('/p(\d+)c(\d+)/', $str, $matches);
$productId = $matches[1];
$colorId = $matches[2];
```
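As a quick sanity check, the same pattern (it is PCRE-compatible) can be exercised outside PHP, for example in Python, which makes the two capture groups easy to verify interactively:

```python
import re

# Same pattern as above: 'p', digits (group 1), 'c', digits (group 2).
ID_PATTERN = re.compile(r'p(\d+)c(\d+)')

def extract_ids(s):
    # search() scans the subject string, mirroring preg_match's behaviour
    m = ID_PATTERN.search(s)
    if m is None:
        return None
    return m.group(1), m.group(2)

print(extract_ids('p10005c4'))    # ('10005', '4')
print(extract_ids('no ids here'))  # None
```

Note that `search` mirrors `preg_match`, which also scans the whole subject string rather than anchoring at the start.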
For more information on getting started with regular expressions, you might want to take a look at [Regular-Expressions.info](http://www.regular-expressions.info/ "Regular-Expressions.info"). | If your string is always `pXXXcXX`, I would suggest you forego the regular expressions, and just use a couple of string functions.
```
list($productId,$colorId) = explode('c',substr($string,1),2);
``` | Extract two values from string using regex | [
"",
"php",
"regex",
""
] |
How would I sort the following CSV file using PHP? I would like to sort by last name. Do I use regex somehow to get the first letter in the last name? Any help is appreciated
Here is an excerpt of my CSV file - with a ";" delimiter between names and addresses
```
John C. Buckley, M.D.;123 Main Street, Bethesda MD 20816
Steven P. Wood;345 Elm Street, Rockville, MD 20808
Richard E. Barr-Fernandez;234 Pine Street, Takoma Park MD 20820
Charles Andrew Powell; 678 Oak Street, Gaithersburg MD 20800
Mark David Horowitz, III; 987 Wall Street, Silver Spring MD 20856
``` | Here is my attempt. I'm not sure how robust the regex is to extract the surname though.
```
<?php
$handle = fopen('c:/csv.txt', 'r') or die('cannot read file');
$surnames = array();
$rows = array();
//build array of surnames, and array of rows
while (false != ( $row = fgetcsv($handle, 0, ';') )) {
//extract surname from the first column
//this should match the last word before the comma, if there is a comma
preg_match('~([^\s]+)(?:,.*)?$~', $row[0], $m);
$surnames[] = $m[1];
$rows[] = $row;
}
fclose($handle);
//sort array of rows by surname using our array of surnames
array_multisort($surnames, $rows);
print_r($rows);
//you could write $rows back to a file here if you wanted.
```
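To check how robust the surname pattern actually is, it helps to run it over the sample rows; the same regex behaves identically in Python (used here only because it is quick to test):

```python
import re

def surname_key(name):
    # Same idea as the PHP pattern '~([^\s]+)(?:,.*)?$~':
    # capture the last word that precedes an optional ", suffix" at the end.
    m = re.search(r'([^\s]+)(?:,.*)?$', name)
    return m.group(1)

samples = [
    'John C. Buckley, M.D.',
    'Richard E. Barr-Fernandez',
    'Mark David Horowitz, III',
]
print([surname_key(s) for s in samples])  # ['Buckley', 'Barr-Fernandez', 'Horowitz']
```

It copes with hyphenated surnames and trailing suffixes, which is exactly the worry raised at the top of this answer.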
**Edit**
I just realised that you don't really need to strip off the people's suffixes, because this probably won't really affect sorting. You could just split the first column on space and take the last one (as karim79 suggested). This might break though if the person has more suffixes with spaces, so I'll leave my answer intact. | Well, because it is a CSV
```
$lines = file($file);
foreach($lines as $line)
{
$parts = explode(";", $line);
$last = explode(" ", $parts[0]);
$last = end($last);
$array[$last] = $parts;
}
ksort($array);
// ..... write it back to file
``` | Sort a CSV file by last name with PHP | [
"",
"php",
"arrays",
"csv",
"sorting",
""
] |
```
model.RegisterContext(typeof(NorthwindDataContext), new ContextConfiguration()
{
ScaffoldAllTables = true,
MetadataProviderFactory = (type => new DefaultTypeDescriptionProvider(type, new AssociatedMetadataTypeTypeDescriptionProvider(type)))
});
```
In particular the MetadataProviderFactory Line... I can't quite seem to figure out how it should look in VB... | ```
MetadataProviderFactory = Function(type) New DefaultTypeDescriptionProvider(
    type, New AssociatedMetadataTypeTypeDescriptionProvider(type))
``` | ```
MetadataProviderFactory = Function (type) New DefaultTypeDescriptionProvider(
type, New AssociatedMetadataTypeTypeDescriptionProvider(type))
``` | Convert below C# dynamic data context registration code to VB? | [
"",
"c#",
"asp.net",
"vb.net",
"asp.net-dynamic-data",
""
] |
I'm creating a checkbox list to handle some preferences as follows...
```
<ul>
<%foreach (var item in ViewData["preferences"] as IEnumerable<MvcA.webservice.SearchablePreference>)
{
var feature = new StringBuilder();
feature.Append("<li>");
feature.Append("<label><input id=\"" + item.ElementId + "\" name=\"fpreferences\" type=\"checkbox\" />" + item.ElementDesc + "</label>");
feature.Append("</li>");
Response.Write(feature);
}
%>
</ul>
```
The data handed to the ViewData is of type SearchablePreference[], and the list displays fine.
The question is: how would I repopulate the selected boxes if the page has to return back to itself (i.e. failed validation)?
In WebForms it's handled automatically by the ViewState; with the other input elements I'm simply passing the sent data back to the page via ViewData.
Thanks | Use [Html.Checkbox](http://msdn.microsoft.com/en-us/library/system.web.mvc.html.inputextensions.checkbox.aspx) instead. | You have to read off all of the values, then add them to the ViewData of the view you come back to, writing them into the checkboxes in the view display (preferably with Html.Checkbox). To my knowledge, Html.Checkbox does not automatically manage viewstate for you.
Assuming I have a standard CRUD object, I very often have something like this in the Edit view (I use the same view for edit and new):
```
<input (other input attribute here) value="<%=(myObj == null ? "" : myObj.AppropriateProperty)%>" />
``` | Checkboxes in ASP.net MVC (c#) | [
"",
"c#",
"asp.net-mvc",
".net-3.5",
""
] |
OK. I've read things here and there about SQL Server heaps, but nothing too definitive to really guide me. I am going to try to measure performance, but was hoping for some guidance on what I should be looking into. This is SQL Server 2008 Enterprise. Here are the tables:
**Jobs**
* JobID (PK, GUID, externally generated)
* StartDate (datetime2)
* AccountId
* Several more accounting fields, mainly decimals and bigints
**JobSteps**
* JobStepID (PK, GUID, externally generated)
* JobID FK
* StartDate
* Several more accounting fields, mainly decimals and bigints
**Usage:** Lots of inserts (hundreds/sec), usually 1 JobStep per Job. Estimate perhaps 100-200M rows per month. No updates at all, and the only deletes are from archiving data older than 3 months.
Do ~10 queries/sec against the data. Some join JobSteps to Jobs, some just look at Jobs. Almost all queries will range on StartDate, most of them include AccountId and some of the other accounting fields (we have indexes on them). Queries are pretty simple - the largest part of the execution plans is the join for JobSteps.
The priority is the insert performance. Some lag (5 minutes or so) is tolerable for data to appear in the queries, so replicating to other servers and running queries off them is certainly allowable.
Lookup based on the GUIDs is very rare, apart from joining JobSteps to Jobs.
**Current Setup**: No clustered index. The only one that seems like a candidate is StartDate. But it isn't perfectly increasing. Jobs can be inserted anywhere in a 3-hour window after their StartDate. That could mean a million rows are inserted in an order that is not final.
Data size for a 1 Job + 1 JobStepId, with my current indexes, is about 500 bytes.
**Questions**:
* Is this a good use of a heap?
* What's the effect of clustering on StartDate, when it's pretty much non-sequential for ~2 hours/1 million rows? My guess is the constant re-ordering would kill insert perf.
* Should I just add bigint PKs just to have smaller, always increasing keys? (I'd still need the guids for lookups.)
I read [GUIDs as PRIMARY KEYs and/or the clustering key](http://www.sqlskills.com/BLOGS/KIMBERLY/post/GUIDs-as-PRIMARY-KEYs-andor-the-clustering-key.aspx), and it seemed to suggest that even inventing a key will save considerable space on other indexes. Also some resources suggest that heaps have some sort of perf issues in general, but I'm not sure if that still applies in SQL 2008.
And again, yes, I'm going to try to perf test and measure. I'm just trying to get some guidance or links to other articles so I can make a more informed decision on what paths to consider. | Yes, heaps have issues. Your data will logically fragment all over the show and cannot be defragmented simply.
Imagine throwing your entire telephone directory into a bucket and then trying to find "Bob Smith". Or using a conventional telephone directory with a clustered index on lastname, firstname.
The overhead of maintaining the index is trivial.
StartDate, unless unique, is not a good choice. A clustered index requires internal uniqueness for the non-clustered indexes. If not declared unique, SQL Server will add a 4 byte "uniquifier".
Yes, I'd use int or bigint to make it easier. As for GUIDs: see the questions at the right hand side of the screen.
Edit:
Note: PK and clustered index are 2 separate issues, even if SQL Server by default will make the PK clustered. | Heap fragmentation isn't necessarily the end of the world. It sounds like you'll rarely be scanning the data, so that's less of a concern.
Your non-clustered indexes are the things that will impact your performance. Each one will need to store the address of the row in the underlying table (either a heap or a clustered index). Ideally, your queries never have to use the underlying table itself, because it stores all the information needed in the ideal way (including all columns, so that it's a covering index).
And yes, Kimberly Tripp's stuff is the best around for indexes.
Rob | Uniqueidentifier PK: Is a SQL Server heap the right choice? | [
"",
"sql",
"sql-server",
"performance",
"sql-server-2008",
"indexing",
""
] |
I have an action I call from an anchor thusly, `Site/Controller/Action/ID` where `ID` is an `int`.
Later on I need to redirect to this same Action from a Controller.
Is there a clever way to do this? Currently I'm stashing `ID` in TempData, but when you
hit F5 to refresh the page again after going back, the TempData is gone and the page crashes. | You can pass the id as part of the routeValues parameter of the RedirectToAction() method.
```
return RedirectToAction("Action", new { id = 99 });
```
This will cause a redirect to Site/Controller/Action/99. No need for temp or any kind of view data. | [Kurt's answer](https://stackoverflow.com/a/1257632/282110) should be right, from my research, but when I tried it I had to do this to get it to actually work for me:
```
return RedirectToAction( "Main", new RouteValueDictionary(
new { controller = controllerName, action = "Main", Id = Id } ) );
```
If I didn't specify the controller and the action in the `RouteValueDictionary` it didn't work.
Also when coded like this, the first parameter (Action) seems to be ignored. So if you just specify the controller in the Dict, and expect the first parameter to specify the Action, it does not work either.
If you are coming along later, try Kurt's answer first, and if you still have issues try this one. | RedirectToAction with parameter | [
"",
"c#",
"asp.net-mvc",
"controller",
"redirecttoaction",
""
] |
Let's say each row of our table contains information about various vehicles for sale. And each user can store several images of each vehicle.
What might be a good approach to storing these images?
**Duplicate:** [User Images: Database or filesystem storage?](https://stackoverflow.com/questions/585224/user-images-database-vs-filesystem-storage)
**Duplicate:** [Storing images in database: Yea or nay?](https://stackoverflow.com/questions/3748/storing-images-in-db-yea-or-nay)
**Duplicate:** [Should I store my images in the database or folders?](https://stackoverflow.com/questions/713243/should-i-store-my-images-in-the-database-or-folders)
**Duplicate:** [Would you store binary data in database or folders?](https://stackoverflow.com/questions/662488/would-you-store-binary-data-in-database-or-in-file-system)
**Duplicate:** [Store pictures as files or or the database for a web app?](https://stackoverflow.com/questions/561447/store-pictures-as-files-or-in-the-database-for-a-web-app)
**Duplicate:** [Storing a small number of images: blob or fs?](https://stackoverflow.com/questions/325126/storing-a-small-number-of-images-blob-or-fs)
**Duplicate:** [store image in filesystem or database?](https://stackoverflow.com/questions/766048/store-image-in-database-or-in-a-system-file) | Sounds like homework. You'd want a one-to-many relationship from the `vehicles` table to the `pictures` table. The `pictures` table would contain a BLOB column with the images. Or it could contain a VARCHAR column with the filenames of the images as stored on disk. | I would prefer that you store the pictures in folders, so retrieving them will be easy. If you want to store them in the database, you can use the BLOB datatype. | MySQL - storing images in a database? | [
"",
"php",
"mysql",
""
] |
I snagged this sub off the web to recursively search all the files, including subdirectories.
If I point this sub at a large area (i.e. My Documents or C:) I get an error:
> The CLR has been unable to transition
> from COM context 0x1f6c48 to COM
> context 0x1f6db8 for 60 seconds. The
> thread that owns the destination
> context/apartment is most likely
> either doing a non pumping wait or
> processing a very long running
> operation without pumping Windows
> messages. This situation generally has
> a negative performance impact and may
> even lead to the application becoming
> non responsive or memory usage
> accumulating continually over time. To
> avoid this problem, all single
> threaded apartment (STA) threads
> should use pumping wait primitives
> (such as CoWaitForMultipleHandles) and
> routinely pump messages during long
> running operations.
Here is the code
(I believe it's because the sub is calling itself.)
```
void DirSearch(string sDir)
{
    try
    {
        foreach (string d in Directory.GetDirectories(sDir))
        {
            foreach (string f in Directory.GetFiles(d))
            {
                string hash = GetMD5HashFromFile(f);
                Dic_Files.Add(f, hash);
            }
            DirSearch(d);
        }
    }
    catch (System.Exception excpt)
    {
        Console.WriteLine(excpt.Message);
    }
}
``` | This is a debugging problem.
According to <http://social.msdn.microsoft.com/forums/en-US/vsdebug/thread/ed6db6c8-3cdc-4a23-ab0a-2f9b32470d35/>:
What you are seeing is one of the "Managed Debugging Assistants" (MDAs), and it can be disabled via Debug -> Exceptions: expand the MDA node and uncheck the box against contextswitchdeadlock. | The GetFiles method has an override that allows for recursive searching. I would try using that and see if your problem goes away...
```
void DirSearch(string sDir)
{
    try
    {
        var files = System.IO.Directory.GetFiles(sDir, "*.*", SearchOption.AllDirectories);
        foreach (string f in files)
        {
            string hash = GetMD5HashFromFile(f);
            Dic_Files.Add(f, hash);
        }
    }
    catch (System.Exception excpt)
    {
        Console.WriteLine(excpt.Message);
    }
}
``` | problem with searching subdirs | [
"",
"c#",
"multithreading",
"loops",
""
] |
A quick Google didn't turn up a definitive answer. If `URI` does do this, is there a widespread third-party library that won't do this? I just need some parsing, and I can't have unpredictable delays while the constructor attempts to resolve a URI?
Thanks. | From the [URI javadoc](http://java.sun.com/javase/6/docs/api/java/net/URI.html):
> **No lookup of the host, if any, is performed, and no scheme-dependent stream handler is constructed.** Equality, hashing, and comparison are defined strictly in terms of the character content of the instance. In other words, a URI instance is little more than a structured string that supports the syntactic, scheme-independent operations of comparison, normalization, resolution, and relativization. | Read this thread - <http://www.symphonious.net/2007/03/29/javaneturl-or-javaneturi/> | I know that java.net.URL tries to resolve the URL; does java.net.URI do the same? | [
"",
"java",
"url",
"uri",
""
] |
I have the following need (in python):
* generate all possible tuples of length 12 (could be more) containing either 0, 1 or 2 (basically, a ternary number with 12 digits)
* filter these tuples according to specific criteria, culling those not good, and keeping the ones I need.
As I had to deal with small lengths until now, the functional approach was neat and simple: a recursive function generates all possible tuples, then I cull them with a filter function. Now that I have a larger set, the generation step is taking too much time, much longer than needed as most of the paths in the solution tree will be culled later on, so I could skip their creation.
I have two solutions to solve this:
1. derecurse the generation into a loop, and apply the filter criteria on each new 12-digits entity
2. integrate the filtering in the recursive algorithm, so to prevent it stepping into paths that are already doomed.
My preference goes to 1 (seems easier) but I would like to hear your opinion, in particular with an eye towards how a functional programming style deals with such cases. | How about
```
import itertools
results = []
for x in itertools.product(range(3), repeat=12):
    if myfilter(x):
        results.append(x)
```
where `myfilter` does the selection. Here, for example, only allowing result with 10 or more 1's,
```
def myfilter(x): # example filter, only take lists with 10 or more 1s
    return x.count(1)>=10
```
That is, my suggestion is your option 1. For some cases it may be slower because (depending on your criteria) you may generate many lists that you don't need, but it's much more general and very easy to code.
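If your criteria can reject a partial tuple, option 2 is also straightforward: prune inside a recursive generator so that doomed branches are never expanded. A sketch in modern Python (here `prefix_ok` is a stand-in predicate, not from the question; it must return True for every prefix of a tuple you want to keep):

```python
def prune_gen(length, prefix=(), prefix_ok=lambda p: True):
    """Yield ternary tuples of the given length, skipping any branch
    whose prefix already fails the predicate."""
    if not prefix_ok(prefix):
        return  # the whole subtree is culled without ever being generated
    if len(prefix) == length:
        yield prefix
        return
    for digit in (0, 1, 2):
        yield from prune_gen(length, prefix + (digit,), prefix_ok)

# Example criterion: no two consecutive 2s anywhere in the tuple.
def no_double_2(p):
    return (2, 2) not in zip(p, p[1:])

results = list(prune_gen(4, prefix_ok=no_double_2))
print(len(results))  # 60 of the 81 length-4 tuples survive
```

This only pays off when the filter decomposes over prefixes; if a tuple can only be judged once complete, the `itertools.product` loop above is the simpler choice.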
Edit: This approach also has a one-liner form, as suggested in the comments by hughdbrown:
```
results = [x for x in itertools.product(range(3), repeat=12) if myfilter(x)]
``` | `itertools` has functionality for dealing with this. However, here is a (hardcoded) way of handling with a generator:
```
T = (0,1,2)
GEN = ((a,b,c,d,e,f,g,h,i,j,k,l) for a in T for b in T for c in T for d in T for e in T for f in T for g in T for h in T for i in T for j in T for k in T for l in T)
for VAL in GEN:
    # Filter VAL
    print VAL
``` | Recursive generation + filtering. Better non-recursive? | [
"",
"python",
"functional-programming",
"recursion",
""
] |
I am trying to replace postbacks with JavaScript client-side processing whenever I can in my ASP.NET C# 3.5 code so as to improve the user experience. I am not using any JavaScript frameworks. I am using VS2008.
I have a usercontrol .ascx file with a dropdown list and button which is used in three places on my main screen.
I had been doing a postback on OnSelectedIndexChanged for the dropdown list.
I don’t need to postback every time though, so I tried putting in a javascript function in the usercontrol to check the item selected and only do a postback if it starts with ‘-‘.
However the name of the elements I need to $get in my javascript function depends on the name of the usercontrol instance which calls it, and I need to get that name at runtime, for example: uc1\_ddlLocations and uc1\_btnPostBack.
Here is the code:
```
<script type="text/javascript" language="javascript">
function ddlChange(this) {
    var loc = $get("GetTheUserControlNameAndInsertHere_ddlLocations");
    if (loc.value.substring(0,1)=='-' )
    {
        var btn = $get("GetTheUserControlNameAndInsertHere_btnPostBack ");
        btn.click();
    }
}
</script>
```
How do I do this? Can someone suggest a better way to do what I am trying to do? | You could determine the UserControl name by parsing it out of the ddl name. Please keep in mind that I didn't debug the parsing logic; it might be off slightly :)
```
<script type="text/javascript" language="javascript">
function ddlChange(ddl)
{
    if (ddl.value.substring(0,1)=='-' )
    {
        var prefixEndLoc = ddl.id.lastIndexOf("_");
        var prefix = ddl.id.substring(0,prefixEndLoc);
        var btn = $get(prefix + "_btnPostBack ");
        btn.click();
    }
}
</script>
``` | Use the ClientID of your control and give it to your JavaScript. | How do I use Javascript to get the name of the UserControl it is called from? | [
"",
"asp.net",
"javascript",
""
] |
In a Silverlight + RIA app, I am retrieving a date from an MS SQL 2008 server; when it reaches the client side it is automatically converted into UTC time.
How can I prevent RIA or SL from doing that?
Regards
MK | A well-written application will store dates in UTC format, and convert them into the appropriate culture or locale format, depending on the regional settings on the client. Therefore, I'm not actually sure what your question is. | How are you storing the data in your database (is it a SQL 2008 db? No such thing as a SQL 09)? If you are storing it as a varchar, and you're entering it with the timezone notation, like so:
"2009-10-03T12:00:00Z"
Then you're actually storing the date as a UTC time, and the Silverlight app reads it this way; this can cause date conversion errors. I believe you can run into the same problem using the new datetimeoffset datatype in SQL Server, but I haven't tested it. | Preventing DateTime changes | [
"",
"c#",
"sql-server",
"silverlight",
""
] |
I have two arrays which I'm trying to merge based on their dates. Here is what the arrays look like:
```
$a[0][0] = '11/15/08';
$a[0][1] = '50';
$a[1][0] = '11/20/08';
$a[1][1] = '75';
$a[2][0] = '01/04/09';
$a[2][1] = '23';
$a[3][0] = '01/09/09';
$a[3][1] = '92';
```
and
```
$b[0][0] = '01/04/09';
$b[0][1] = '30';
$b[1][0] = '01/05/09';
$b[1][1] = '54';
$b[2][0] = '01/08/09';
$b[2][1] = '89';
$b[3][0] = '01/09/09';
$b[3][1] = '62';
```
At the end of the merge I'm hoping for:
```
$n[0][0] = '11/15/08';
$n[0][1] = '50';
$n[0][2] = '0';
$n[1][0] = '11/20/08';
$n[1][1] = '75';
$n[1][2] = '0';
$n[2][0] = '01/04/09';
$n[2][1] = '23';
$n[2][2] = '30';
$n[3][0] = '01/05/09';
$n[3][1] = '0';
$n[3][2] = '54';
$n[4][0] = '01/08/09';
$n[4][1] = '0';
$n[4][2] = '89';
$n[5][0] = '01/09/09';
$n[5][1] = '92';
$n[5][2] = '62';
```
Is this possible? | I would first sort the array into one which has timestamps as its keys and arrays of numbers as its values. This will then be easy to sort. For example:
```
$result = array();
foreach (array_merge($a, $b) as $item) {
$date = $item[0];
$dateParts = explode('/', $date);
$timestamp = mktime(0,0,0,$dateParts[0], $dateParts[1], $dateParts[2]);
if (!isset($result[$timestamp])) {
$result[$timestamp] = array();
}
$result[$timestamp][] = $item[1];
}
```
Because the keys are date timestamps, we can easily sort the array by date:
```
ksort($result);
```
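Incidentally, the same key-by-date idea, including zero-filling the side that has no value for a date (as in the desired `$n` output), can be sketched compactly; here is a Python version, with the tuple layout assumed from the question's arrays:

```python
from collections import defaultdict
from datetime import datetime

a = [('11/15/08', '50'), ('11/20/08', '75'), ('01/04/09', '23'), ('01/09/09', '92')]
b = [('01/04/09', '30'), ('01/05/09', '54'), ('01/08/09', '89'), ('01/09/09', '62')]

merged = defaultdict(lambda: ['0', '0'])  # '0' placeholders for missing sides
for col, rows in ((0, a), (1, b)):
    for date, value in rows:
        merged[date][col] = value

# Sort on the parsed date, not on the raw string.
result = [[d] + merged[d]
          for d in sorted(merged, key=lambda d: datetime.strptime(d, '%m/%d/%y'))]
print(result[0])  # ['11/15/08', '50', '0']
```

Sorting on the parsed date rather than the raw string is what keeps '11/15/08' ahead of '01/04/09'.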
Then we can convert the array into the format you want:
```
$result2 = array();
foreach ($result as $timestamp => $item) {
$tmp = array();
$tmp[] = date('m/d/y', $timestamp);
$tmp = array_merge($tmp, $item);
$result2[] = $tmp;
}
print_r($result2);
``` | The easiest way is to use the date as the index:
```
#Initialization
$n = array();
$n[$date] = array();
$n[$date][0] = "";
$n[$date][1] = "";
#and then
$n["10/01/09"][0] = "10";
$n["10/01/09"][1] = "50";
$n["11/02/09"][1] = "70";
$n["01/05/09"][0] = "90";
```
---EDIT
You can look at functions like "array_merge" and "array_combine", but in this instance it can be resolved like this. | Php Merging Arrays on Date | [
"",
"php",
"function",
"sorting",
"merge",
""
] |
I'm getting started with Boost::Test driven development (in C++), and I'm retrofitting one of my older projects with Unit Tests. My question is -- where do I add the unit test code? The syntax for the tests themselves seems really simple according to Boost::Test's documentation, but I'm confused as to how I tell the compiler to generate the executable with my unit tests. Ideally, I'd use a precompiled header and the header-only version of the boost::test library.
Do I just create a new project for tests and add all my existing source files to it?
Billy3
The way I've added Boost unit tests to existing solutions was to create new projects and put the test code in those projects. You don't need to worry about creating a main() function or setting up the tests. Boost takes care of all that for you.
Here is a [project](http://code.google.com/p/dynamic-cpp/source/browse/#svn/trunk/tests) I put on Google Code that uses Boost for its unit tests.
You can put your tests in the same project, but mark the files with tests as Excluded from Build for the Release and Debug configurations and create a new project configuration for unit tests. Here is an [article](http://blog.yastrebkov.com/2010/07/boost-test-setup-and-usage.html) about using Boost Test in Visual Studio.
Visual Studio and Boost::Test
[
"",
"c++",
"visual-c++",
"boost-test",
""
]
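A minimal Boost.Test source file, per the accepted answer above, needs no hand-written main(): with the header-only variant, defining BOOST_TEST_MODULE before the include generates it. This sketch assumes Boost is on the include path; the module and test names are placeholders:

```cpp
// Defining BOOST_TEST_MODULE makes the header generate main() for us.
#define BOOST_TEST_MODULE MyProjectTests
#include <boost/test/included/unit_test.hpp>  // header-only variant

// A placeholder test case; replace with checks against your own code.
BOOST_AUTO_TEST_CASE(sanity_check)
{
    BOOST_CHECK_EQUAL(2 + 2, 4);
}
```

Compiling this file in a separate test project (alongside your existing source files or a library build of them) produces a self-contained test-runner executable.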
I want to stop the browser request, when user clicks on any button from UI, like stop button on browser. I want to do it through javascript.
You can try a few things; I looked at a [forum here](http://forums.devshed.com/html-programming-1/does-window-stop-work-in-ie-1311.html).
The following is from that thread:
In Netscape, `window.stop()` seems to work (in the same way as the Stop button on the browser I guess). However, this does not work in IE.
> I don't think you can stop the processing in IE, but you might try one
> of the following:
>
> Event.cancelBubble this is IE only and stops EVENT propagation.
> However, once the event has occurred (onSubmit, onClick or whatever
> you used to start the download), I'm not sure this will stop it.
>
> Event.reason IE only. Reason holds the value of the code specifying
> the status of the data transfer. 0=successful, 1=aborted, 2=error. I
> don't remember if this is readonly. If it is not, perhaps you can
> assign a value of 1 to abort the transfer.
>
> Event.returnValue IE only. I'll quote this one. 'If returnValue is
> set, its value takes precedence over the value actually received by an
> event handler. Set this property to false to cancel the default action
> of the source element on which the event occurred.'
>
> Play with these a bit. I don't see anything else that might work. If
> they don't do anything to stop the process, it probably can't be done.
I found a way to do this after a lot of research - use
document.execCommand("Stop");
This works in IE.
As Amit Doshi's and Slaks' code suggests, you can do that. But I find it more efficient to use a try/catch.
There is only one alternative to `window.stop()` so trying it first with a fallback (for Internet Explorer) is the way to go to support all browsers:
```
try {
window.stop();
} catch (exception) {
document.execCommand('Stop');
}
```
I want to stop the browser request, when user clicks on any button from UI, like stop button on browser
[
"",
"javascript",
""
]
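The two browser-specific calls above can also be combined with feature detection instead of try/catch. A small sketch (the helper name `stopLoading` is illustrative, not from either answer):

```javascript
// Stop the current page load, preferring the standard API.
function stopLoading() {
  if (typeof window.stop === "function") {
    window.stop();                  // Netscape/Firefox and modern browsers
  } else if (document.execCommand) {
    document.execCommand("Stop");   // older Internet Explorer
  }
}
```

Wire this up to the UI button's click handler to mimic the browser's Stop button.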
Say I have a big file in one directory and an older version of it in another. Is it somehow possible to update the older version with only the changes in the new one? So that I would copy only a small fraction from one place to another. Would have to work for both text and binary files.
Not sure if this is possible though, but curious as to whether it is.
I don't see any gain from such a thing, because to see the difference between two files you must read both files in full and compare them. I don't think this will be faster than just copying them.
Sure! There are plenty of diff implementations.
* <http://www.mathertel.de/Diff/>
* <http://www.alexandre-gomes.com/?p=177>
Just adapt them to your needs.
C#: How to copy only the differences in a file from one directory to another
[
"",
"c#",
"file-io",
"diff",
""
]
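Neither answer above shows code, so here is a rough sketch of the underlying idea (shown in Python for brevity; it translates directly to C# with FileStream): compare the two files in fixed-size blocks and rewrite only the blocks that changed. As the first answer notes, both files are still read in full; only the writes are reduced. The function name `patch_file` and the block size are arbitrary choices.

```python
def patch_file(old_path, new_path, block_size=4096):
    """Overwrite only the blocks of old_path that differ from new_path."""
    with open(new_path, "rb") as new, open(old_path, "r+b") as old:
        offset = 0
        while True:
            new_block = new.read(block_size)
            old.seek(offset)
            old_block = old.read(block_size)
            if not new_block:
                # New file ended: drop any trailing bytes in the old copy.
                old.truncate(offset)
                break
            if new_block != old_block:
                old.seek(offset)
                old.write(new_block)  # rewrite only the changed block
            offset += len(new_block)
```

On a local disk this rarely beats a plain copy (the read cost dominates); the approach pays off mainly when writes are expensive, e.g. on flash media or a slow destination.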
we have a view in our database which has an ORDER BY in it.
Now, I realize views generally don't order, because different people may use it for different things, and want it differently ordered. This view however is used for a **VERY SPECIFIC** use-case which demands a certain order. (It is team standings for a soccer league.)
The database is Sql Server 2008 Express, v.10.0.1763.0 on a Windows Server 2003 R2 box.
The view is defined as such:
```
CREATE VIEW season.CurrentStandingsOrdered
AS
SELECT TOP 100 PERCENT *, season.GetRanking(TEAMID) RANKING
FROM season.CurrentStandings
ORDER BY
GENDER, TEAMYEAR, CODE, POINTS DESC,
FORFEITS, GOALS_AGAINST, GOALS_FOR DESC,
DIFFERENTIAL, RANKING
```
It returns:
```
GENDER, TEAMYEAR, CODE, TEAMID, CLUB, NAME,
WINS, LOSSES, TIES, GOALS_FOR, GOALS_AGAINST,
DIFFERENTIAL, POINTS, FORFEITS, RANKING
```
Now, when I run a **SELECT** against the view, it orders the results by **GENDER, TEAMYEAR, CODE, TEAMID**. Notice that it is ordering by **TEAMID** instead of **POINTS** as the order by clause specifies.
However, if I copy the SQL statement and run it exactly as is in a new query window, it orders correctly as specified by the **ORDER BY** clause. | The order of rows returned by a view with an `ORDER BY` clause is never guaranteed. If you need a specific row order, you must specify where you select from the view.
See the note at the top of [this Books Online](http://msdn.microsoft.com/en-us/library/ms188385%28v=sql.105%29.aspx) entry.
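In other words, repeat the ORDER BY at query time against the view; for example, using the column list from the view definition above:

```sql
SELECT *
FROM season.CurrentStandingsOrdered
ORDER BY
    GENDER, TEAMYEAR, CODE, POINTS DESC,
    FORFEITS, GOALS_AGAINST, GOALS_FOR DESC,
    DIFFERENTIAL, RANKING;
```

Only an ORDER BY on the outermost SELECT guarantees row order; the one inside the view exists solely to legalize the TOP clause.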
Try TOP 2000000000 instead.
Now, I'll try and find a reference... I was at a seminar presented by Itzak Ben-Gan who mentioned it
Found some...
[Kimberly L. Tripp](http://www.sqlskills.com/blogs/Kimberly/post/SQL-2000-v-2005-Using-Top-100-Percent-in-a-View.aspx)
["TOP 100 Percent ORDER BY Considered Harmful"](http://blogs.msdn.com/queryoptteam/archive/2006/03/24/560396.aspx)
> In this particular case, the optimizer
> recognizes that TOP 100 PERCENT
> qualifies all rows and does not need
> to be computed at all. | ORDER BY in a Sql Server 2008 view | [
"",
"sql",
"t-sql",
"sql-server-2008",
"view",
"sql-order-by",
""
]