Prompt stringlengths 10 31k | Chosen stringlengths 3 29.4k | Rejected stringlengths 3 51.1k | Title stringlengths 9 150 | Tags listlengths 3 7 |
|---|---|---|---|---|
I've been talking with friends and some completely agree that templates in C++ should be used, others disagree entirely.
Some of the good things are:
* They are safer to use (type safety).
* They are a good way of doing generalizations for APIs.
What other good things can you tell me about C++ templates?
What bad things can you tell me about C++ templates?
Edit: One of the reasons I'm asking this is that I am studying for an exam and at the moment I am covering the topic of C++ templates. So I am trying to understand a bit more about them. | Templates are a very powerful mechanism which can simplify many things. However, using them properly requires much time and experience in order to decide when their usage is appropriate.
For me the most important advantages are:
* reducing the repetition of code (generic containers, algorithms)
* reducing the repetition of code at a more advanced level (MPL and Fusion)
* static polymorphism (=performance) and other compile time calculations
* policy based design (flexibility, reusability, easier changes, etc)
* increasing safety at no cost (i.e. dimension analysis via Boost Units, static assertions, concept checks)
* functional programming (Phoenix), lazy evaluation, expression templates (we can create Domain-specific embedded languages in C++, we have great Proto library, we have Blitz++)
* other less spectacular tools and tricks used in everyday life:
+ STL and the algorithms (what's the difference between `for` and `for_each`)
+ bind, lambda (or Phoenix) ( write clearer code, simplify things)
+ Boost Function (makes writing callbacks easier)
+ tuples (how to generically hash a tuple? Use Fusion for example...)
+ TBB (`parallel_for` and other STL like algorithms and containers)
* Can you imagine C++ without templates? Yes I can; in the early days you couldn't use them because of compiler limitations.
* Would you write in C++ without templates? No, as I would lose many of the advantages mentioned above.
Downsides:
* Compilation time (for example throw in Spirit, Phoenix, MPL and some Fusion and you can go for a coffee)
* People **who can** use and understand templates are not that common (and these people are useful)
* People **who think that they can** use and understand templates are quite common (and these people are dangerous, as they can make a hell out of your code. However most of them after some education/mentoring will join the group mentioned in the previous point)
* template `export` support (lack of)
* error messages could be less cryptic (after some learning you can find what you need, but still...)
I highly recommend the following books:
* [C++ Templates: The Complete Guide by David Vandevoorde and Nicolai Josuttis](https://rads.stackoverflow.com/amzn/click/com/0201734842) (thorough introduction to the subject of templates)
* [Modern C++ Design: Generic Programming and Design Patterns Applied by Andrei Alexandrescu](https://rads.stackoverflow.com/amzn/click/com/0201704315) (a less known way of using templates to simplify your code, make development easier and result in code robust to changes)
* [C++ Template Metaprogramming by David Abrahams and Aleksey Gurtovoy](https://rads.stackoverflow.com/amzn/click/com/0321227255) (again - a different way of using the templates)
* [More C++ Idioms](http://en.wikibooks.org/wiki/More_C%2B%2B_Idioms) from Wikibooks presents some nice ideas. | On the positive side, C++ templates:
* Allow for generalization of type
* Decrease the amount of redundant code you need to type
* Help to build type-safe code
* Are evaluated at compile-time
* Can increase performance (as an alternative to polymorphism)
* Help to build very powerful libraries
On the negative side:
* Can get complicated quickly if one isn't careful
* Most compilers give cryptic error messages
* It can be difficult to use/debug highly templated code
* Have at least one syntactic quirk ( the >> operator can interfere with templates)
* Help make C++ very difficult to parse
All in all, careful consideration should be used as to when to use templates. | What are the good and bad points of C++ templates? | [
"",
"c++",
"templates",
""
] |
I am using the jQuery disable on submit plug-in but I have a problem. If I disable the submit buttons they don't get passed back to the server so I can't tell which button was pressed. Is this normal? Is there anything I can do about it?
I really don't want to retool my web site so that I have to set a variable on form submission to tell which button was pressed.
Any ideas? | Here's a workaround I just found in [a jQuery forum](http://groups.google.com/group/jquery-en/browse_thread/thread/428f1d3c5f459506?pli=1):
```
<script type="text/javascript">
$(document).ready(function() {
$("#sendSearch").click(function() {
$('#loadingDiv').show();
$('#sendSearch').attr("disabled", "disabled");
// these two lines are the workaround
this.form.submit();
return true;
});
});
</script>
``` | [How to Disable the Submit Button of a Web Form](http://blog.josh420.com/archives/2008/02/how-to-disable-the-submit-button-ofweb-form.aspx)
This method hides the button instead of disabling it, and programmatically inserts a disabled <button> tag to make it *appear* just like the submit button was disabled. Works great. | Problem with disabling submit buttons on form submit | [
"",
"javascript",
"html",
"jquery",
"forms",
"jquery-events",
""
] |
How would you refactor something like this?
```
protected void btnAdd_Click(object sender, EventArgs e)
{
try
{
string username = txtUsername.Text.Trim().ToLower();
string password = txtPassword.Text.Trim().ToLower();
string email = txtEmail.Text.Trim().ToLower();
string status = ddlStatus.SelectedValue.Trim();
IUser user = UserFactory.getInstance().createUser(username, password, email,status);
if (user.save())
{
jsMsgBox("Successfully added new user");
Response.Redirect(ConfigurationManager.AppSettings["AdminLink"], true);
}
else
{
jsMsgBox("An error was encountered while trying to add a new user.");
}
}
catch (Exception ex)
{
jsMsgBox("An Error was encountered while trying to add a new user.");
lblInfo.Text = ex.Message;
lblInfo.Visible = true;
}
}
protected void btnUpdate_Click(object sender, EventArgs e)
{
try
{
string username = txtUsername.Text.Trim().ToLower();
string password = txtPassword.Text.Trim().ToLower();
string email = txtEmail.Text.Trim().ToLower();
int userPK = Int32.Parse(txtUserPK.Text.ToString());
string status = ddlStatus.SelectedValue.Trim();
IUser user = UserFactory.getInstance().createUser(userPK, username, password, email,status);
if (user.save())
{
jsMsgBox("Successfully updated selected users information.");
Response.Redirect(ConfigurationManager.AppSettings["AdminLink"], true);
}
else
{
jsMsgBox("An error was encountered while trying to update the selected users information.");
}
}
catch (Exception ex)
{
jsMsgBox("An Error was encountered while trying to update the selected users information.");
lblInfo.Text = ex.Message;
lblInfo.Visible = true;
}
}
```
Take care | For a start:
```
protected string cleaned(TextBox raw) {
return raw.Text.Trim().ToLower();
}
protected void attempt_to_save(IUser user, string task) {
if (user.save()) {
jsMsgBox("Successfully finished "+task);
Response.Redirect(ConfigurationManager.AppSettings["AdminLink"], true);
} else {
jsMsgBox("An error was encountered while "+task);
}
}
protected void btnAdd_Click(object sender, EventArgs e)
{
try
{
IUser user = UserFactory.getInstance().createUser(
cleaned(txtUsername),
cleaned(txtPassword),
cleaned(txtEmail),
ddlStatus.SelectedValue.Trim()
);
attempt_to_save(user,"adding a new user.");
}
catch (Exception ex)
{
jsMsgBox("An Error was encountered while trying to add a new user.");
lblInfo.Text = ex.Message;
lblInfo.Visible = true;
}
}
protected void btnUpdate_Click(object sender, EventArgs e)
{
try
{
IUser user = UserFactory.getInstance().createUser(
Int32.Parse(txtUserPK.Text),
cleaned(txtUsername),
cleaned(txtPassword),
cleaned(txtEmail),
ddlStatus.SelectedValue.Trim()
);
attempt_to_save(user,"updating the selected users information.");
}
catch (Exception ex)
{
jsMsgBox("An Error was encountered while trying to update the selected users information.");
lblInfo.Text = ex.Message;
lblInfo.Visible = true;
}
}
```
Note that it was necessary to reword some of the messages slightly. | Try This
First create a user info object
```
class UserInfo
{
public string username {get;set;}
public string password {get;set;}
public string email {get;set;}
public string status {get;set;}
}
```
then refactor your code like this
```
protected void btnAdd_Click(object sender, EventArgs e)
{
UserInfo myUser = GetUserInfo();
try
{
IUser user = UserFactory.getInstance().createUser(myUser);
if (user.save())
{
jsMsgBox("Successfully added new user");
Response.Redirect(ConfigurationManager.AppSettings["AdminLink"], true);
}
else
{
jsMsgBox("An error was encountered while trying to add a new user.");
}
}
catch (Exception ex)
{
jsMsgBox("An Error was encountered while trying to add a new user.");
lblInfo.Text = ex.Message;
lblInfo.Visible = true;
}
}
protected void btnUpdate_Click(object sender, EventArgs e)
{
UserInfo myUser = GetUserInfo();
int userPK = Int32.Parse(txtUserPK.Text.ToString());
try
{
IUser user = UserFactory.getInstance().createUser(userPK,myUser);
if (user.save())
{
jsMsgBox("Successfully updated selected users information.");
Response.Redirect(ConfigurationManager.AppSettings["AdminLink"], true);
}
else
{
jsMsgBox("An error was encountered while trying to update the selected users information.");
}
}
catch (Exception ex)
{
jsMsgBox("An Error was encountered while trying to update the selected users information.");
lblInfo.Text = ex.Message;
lblInfo.Visible = true;
}
}
private UserInfo GetUserInfo()
{
UserInfo myUser = new UserInfo();
myUser.username = txtUsername.Text.Trim().ToLower();
myUser.password = txtPassword.Text.Trim().ToLower();
myUser.email = txtEmail.Text.Trim().ToLower();
myUser.status = ddlStatus.SelectedValue.Trim();
return myUser;
}
``` | How would you refactor this to make it nicer? | [
"",
"c#",
"asp.net",
"refactoring",
""
] |
I recently discovered that I could use `sp_help` to get a table definition and have been hooked on it since then. Before my discovery, I had to open up the Object Explorer in SQL Management Studio, manually search for the table name, right click on the table and select Design. That was a lot of effort!
What other system stored procedures do you all use that you simply can't live without? | `Alt` + `F1` is a good [shortcut](http://www.kodyaz.com/articles/sql-query-window-short-cuts.aspx) key for `sp_help`.
`sp_helptext` is another goodie for getting stored procedure text. | All of these undocumented ones
```
xp_getnetname
xp_fileexist
xp_dirtree
xp_subdirs
sp_who2
xp_getfiledetails
xp_fixeddrives
Sp_tempdbspace
xp_enumdsn
xp_enumerrorlogs
sp_MSforeachtable
sp_MSforeachDB
```
See here: [Undocumented stored procedures](http://wiki.lessthandot.com/index.php/SQL_Server_Programming_Hacks_-_100%2B_List#Undocumented_but_handy)
And now since SQl Server 2005 all the Dynamic Management Views like [sys.dm\_db\_index\_usage\_stats](http://wiki.lessthandot.com/index.php/Use_the_sys.dm_db_index_usage_stats_dmv_to_check_if_indexes_are_being_used) | Useful system stored procedures in SQL Server | [
"",
"sql",
"sql-server",
"sql-server-2005",
"stored-procedures",
""
] |
I have a GridView that lists a bunch of items and one of the columns has a link that displays a modal (AjaxToolkit ModalPopupExtender). Let's call that link "Show". In that modal, I have a asp:button for saving the data entered in that modal. Let's call that button "Save"
So when the user clicks on a "Show" link in a certain row, I'd like to write some javascript that sets something in the "Save" button, so that in my code-behind, I can handle "Save".Command and use the CommandEventArgs parameter to get the value.
Is this possible, or do I just need to use a hidden input tag and set its value? | Well, after continuing the research, it looks like it cannot be done. The CommandArgument property might reside in the ViewState, but for this case, it is completely server side and cannot be changed using javascript. | Not a direct answer to your question, but another possible way of solving the problem:
Place a `HiddenField` control on the page. In your code-behind, before displaying the modal popup, set the value of that control to the ID of the row that was clicked (or the row number, or some identifying value). Then in the code-behind of your Save button, you can just read the value of the `HiddenField`. | Is there a way to set a asp.net button's CommandArgument in javascript? | [
"",
"asp.net",
"javascript",
""
] |
In Java, given a `java.net.URL` or a `String` in the form of `http://www.example.com/some/path/to/a/file.xml` , what is the easiest way to get the file name, minus the extension? So, in this example, I'm looking for something that returns `"file"`.
I can think of several ways to do this, but I'm looking for something that's easy to read and short. | Instead of reinventing the wheel, how about using Apache [commons-io](http://commons.apache.org/proper/commons-io/):
```
import org.apache.commons.io.FilenameUtils;
public class FilenameUtilTest {
public static void main(String[] args) throws Exception {
URL url = new URL("http://www.example.com/some/path/to/a/file.xml?foo=bar#test");
System.out.println(FilenameUtils.getBaseName(url.getPath())); // -> file
System.out.println(FilenameUtils.getExtension(url.getPath())); // -> xml
System.out.println(FilenameUtils.getName(url.getPath())); // -> file.xml
}
}
``` | ```
String fileName = url.substring( url.lastIndexOf('/')+1, url.length() );
String fileNameWithoutExtn = fileName.substring(0, fileName.lastIndexOf('.'));
``` | Get file name from URL | [
"",
"java",
"file",
"parsing",
"url",
"filenames",
""
] |
I've used Wordpress and Joomla to build a couple of small websites, and done some hacking about to get them running exactly as I want. But both of these, and probably many other PHP CMSs, are subject to a constant barrage of security fixes. I don't have to time to test the fixes, make sure my customizations are still working, and roll them out before anyone attacks the site, then do the same thing again a month later - I'll never get anything else done with that kind of overhead.
So my question is: Is there a (preferably PHP) content management system that somehow successfully avoids the constant barrage of security updates and resulting testing/sysadmin work? So I can just work on it when I have time, not keep racing to patch the latest attacks?
Bonus points for having a sane plugin model to make it easier to code against. More bonus points if it provides an easy method to import data from Joomla and/or wordpress.
Thanks
EDIT: As rightly pointed out, avoiding updates entirely is not a sensible goal. Rather, I want to minimize the pain of updates. So what I'm really looking for is:
* Easy to adapt and theme in a way that is guaranteed not to break during updates
* Simple update process | there is no cms (no software, for that matter) so secure you never have to update. developers make mistakes, and new exploits appear. so every cms *should be* "subject to a constant barrage of security fixes". if it is not, you should ask yourself about the security policy of the project and the security of your site. see [The Open Security Model, Drupal and ExpressionEngine on Security](http://www.lullabot.com/articles/drupal-and-expressionengine-security-models) for a related read.
so unless you don't care about the security of your site, you are asking the wrong question. i think it should actually be: is there a cms that is customizable *without modifying core files* so that security updates don't break my customizations? or: how can i customize a cms so that security updates don't break my customizations? security updates usually don't break a (even customized) site - unless the customizations are done the wrong way.
my answer to that new question would be [Drupal](http://drupal.org/) (including bonus points). | The last versions of WordPress (2.7 branch) have auto update for core and plugins making it really easy to upgrade when a fix is available. The api is also awesome - I've done quite a few WordPress based sites and rarely (if at all) needed to hack the core.
As long as you customize through plugins or themes, and use auto update when a new version is available, you shouldn't have any problem at all. | Stable PHP CMS for hacking against | [
"",
"php",
"content-management-system",
""
] |
How does IronPython stack up to the default Windows implementation of Python from python.org? If I am learning Python, will I be learning a subtly different language with IronPython, and what libraries would I be doing without?
Are there, alternatively, any pros to IronPython (not including .NET IL compiled classes) that would make it more attractive an option? | There are a number of important differences:
1. Interoperability with other .NET languages. You can use other .NET libraries from an IronPython application, or use IronPython from a C# application, for example. This interoperability is increasing, with a movement toward greater support for dynamic types in .NET 4.0. For a lot of detail on this, see [these](https://web.archive.org/web/20100821162439/http://channel9.msdn.com:80/pdc2008/TL10/) [two](https://web.archive.org/web/20100813034546/http://channel9.msdn.com/pdc2008/TL16/) presentations at PDC 2008.
2. Better concurrency/multi-core support, due to lack of a GIL. (Note that the GIL doesn't inhibit threading on a single-core machine---it only limits performance on multi-core machines.)
3. Limited ability to consume Python C extensions. The [Ironclad](https://web.archive.org/web/20160109050215/https://code.google.com/p/ironclad/) project is making significant strides toward improving this---they've nearly gotten [Numpy](https://numpy.org/) working!
4. Less cross-platform support; basically, you've got the CLR and [Mono](https://www.mono-project.com/Main_Page/). Mono is impressive, though, and runs on many platforms---and they've got an implementation of Silverlight, called [Moonlight](https://www.mono-project.com/Moonlight/).
5. Reports of improved performance, although I have not looked into this carefully.
6. Feature lag: since CPython is the reference Python implementation, it has the "latest and greatest" Python features, whereas IronPython necessarily lags behind. Many people do not find this to be a problem. | There are some subtle differences in how you write your code, but the biggest difference is in the libraries you have available.
With IronPython, you have all the .Net libraries available, but at the expense of some of the "normal" python libraries that haven't been ported to the .Net VM I think.
Basically, you should expect the syntax and the idioms to be the same, but a script written for IronPython won't run if you try giving it to the "regular" Python interpreter. The other way around is probably more likely, but there too you will find differences I think. | Python or IronPython | [
"",
"python",
"ironpython",
"cpython",
""
] |
When is it appropriate to use AJAX?
What are the pros and cons of using AJAX?
In response to my last question: some people seemed very adamant that I should only use AJAX if the situation was appropriate:
[Should I add AJAX logic to my PHP classes/scripts?](https://stackoverflow.com/questions/549280/should-i-add-ajax-logic-to-my-php-classes-scripts)
In response to Chad Birch's answer:
Yes, I'm referring to when developing a "standard" site that would employ AJAX for its benefits, and wouldn't be crippled by its application. Using AJAX in a way that would kill search rankings would not be acceptable. So if "keeping the site intact" requires more work, than that would be a "con". | It's a pretty large subject, but you should be using AJAX to enhance the user experience, without making the site totally dependent on it. Remember that search engines and some other visitors won't be able to execute the AJAX, so if you rely on it to load your content, that will not work in your favor.
For example, you might think that it would be nice to have users visit your blog, and then have the page dynamically load the newest article(s) with AJAX once they're already there. However, when Google tries to index your blog, it's just going to get the blank site.
A good search term to find resources related to this subject is "progressive enhancement". There's plenty of good stuff out there, spend some time following the links around. Here's one to start you off:
[<http://www.alistapart.com/articles/progressiveenhancementwithjavascript/>](http://www.alistapart.com/articles/progressiveenhancementwithjavascript/) | When you are only updating part of a page or perhaps performing an action that doesn't update the page at all AJAX can be a very good tool. It's much more lightweight than an entire page refresh for something like this. Conversely, if your entire page reloads or you change to a different view, you really should just link (or post) to the new page rather than download it via AJAX and replace the entire contents.
One downside to using AJAX is that it requires javascript to be working OR you to construct your view in such a way that the UI still works without it. This is more complicated than doing it just via normal links/posts. | When is it appropriate to use AJAX? | [
"",
"php",
"ajax",
"webforms",
""
] |
We've run into an interesting situation that needs solving, and my searches have turned up nil. I therefore appeal to the SO community for help.
The issue is this: we have a need to programmatically access a shared file that is not in our domain, and is not within a trusted external domain via remote file sharing / UNC. Naturally, we need to supply credentials to the remote machine.
Typically, one solves this problem in one of a few ways:
1. Map the file share as a drive and supply the credentials at that time. This is typically done using the `NET USE` command or the Win32 functions that duplicate `NET USE`.
2. Access the file with a UNC path as if the remote computer were on the domain and ensure that the account under which the program runs is duplicated (including password) on the remote machine as a local user. Basically leverage the fact that Windows will automatically supply the current user's credentials when the user attempts to access a shared file.
3. Don't use remote file sharing. Use FTP (or some other means) to transfer the file, work on it locally, then transfer it back.
For various and sundry reasons, our security / network architects have rejected the first two approaches. The second approach is obviously a security hole; if the remote computer is compromised, the local computer is now at risk. The first approach is unsatisfactory because the newly mounted drive is a shared resource available to other programs on the local computer during file access by the program. Even though it's quite possible to make this temporary, it's still a hole in their opinion.
They're open to the third option, but the remote network admins insist on SFTP rather than FTPS, and FtpWebRequest only supports FTPS. SFTP **is** the more firewall-friendly option and there are a couple libraries I could use for that approach, but I'd prefer to reduce my dependencies if I can.
I've searched MSDN for either a managed or a win32 means of using remote file sharing, but I have failed to come up with anything useful.
And so I ask: Is there another way? Did I miss a super-secret win32 function that does what I want? Or must I pursue some variant of option 3? | The way to solve your problem is to use a Win32 API called [WNetUseConnection](http://msdn.microsoft.com/en-us/library/aa385482(VS.85).aspx).
**Use this function to connect to a UNC path with authentication, NOT to map a drive**.
This will allow you to connect to a remote machine, even if it is not on the same domain, and even if it has a different username and password.
Once you have used WNetUseConnection you will be able to access the file via a UNC path as if you were on the same domain. The best way is probably through the administrative built in shares.
Example: \\computername\c$\program files\Folder\file.txt
Here is some sample C# code that uses WNetUseConnection.
Note, for the NetResource, you should pass null for the lpLocalName and lpProvider. The dwType should be RESOURCETYPE\_DISK. The lpRemoteName should be \\ComputerName.
```
using System;
using System.Runtime.InteropServices ;
using System.Threading;
namespace ExtremeMirror
{
public class PinvokeWindowsNetworking
{
#region Consts
const int RESOURCE_CONNECTED = 0x00000001;
const int RESOURCE_GLOBALNET = 0x00000002;
const int RESOURCE_REMEMBERED = 0x00000003;
const int RESOURCETYPE_ANY = 0x00000000;
const int RESOURCETYPE_DISK = 0x00000001;
const int RESOURCETYPE_PRINT = 0x00000002;
const int RESOURCEDISPLAYTYPE_GENERIC = 0x00000000;
const int RESOURCEDISPLAYTYPE_DOMAIN = 0x00000001;
const int RESOURCEDISPLAYTYPE_SERVER = 0x00000002;
const int RESOURCEDISPLAYTYPE_SHARE = 0x00000003;
const int RESOURCEDISPLAYTYPE_FILE = 0x00000004;
const int RESOURCEDISPLAYTYPE_GROUP = 0x00000005;
const int RESOURCEUSAGE_CONNECTABLE = 0x00000001;
const int RESOURCEUSAGE_CONTAINER = 0x00000002;
const int CONNECT_INTERACTIVE = 0x00000008;
const int CONNECT_PROMPT = 0x00000010;
const int CONNECT_REDIRECT = 0x00000080;
const int CONNECT_UPDATE_PROFILE = 0x00000001;
const int CONNECT_COMMANDLINE = 0x00000800;
const int CONNECT_CMD_SAVECRED = 0x00001000;
const int CONNECT_LOCALDRIVE = 0x00000100;
#endregion
#region Errors
const int NO_ERROR = 0;
const int ERROR_ACCESS_DENIED = 5;
const int ERROR_ALREADY_ASSIGNED = 85;
const int ERROR_BAD_DEVICE = 1200;
const int ERROR_BAD_NET_NAME = 67;
const int ERROR_BAD_PROVIDER = 1204;
const int ERROR_CANCELLED = 1223;
const int ERROR_EXTENDED_ERROR = 1208;
const int ERROR_INVALID_ADDRESS = 487;
const int ERROR_INVALID_PARAMETER = 87;
const int ERROR_INVALID_PASSWORD = 1216;
const int ERROR_MORE_DATA = 234;
const int ERROR_NO_MORE_ITEMS = 259;
const int ERROR_NO_NET_OR_BAD_PATH = 1203;
const int ERROR_NO_NETWORK = 1222;
const int ERROR_BAD_PROFILE = 1206;
const int ERROR_CANNOT_OPEN_PROFILE = 1205;
const int ERROR_DEVICE_IN_USE = 2404;
const int ERROR_NOT_CONNECTED = 2250;
const int ERROR_OPEN_FILES = 2401;
private struct ErrorClass
{
public int num;
public string message;
public ErrorClass(int num, string message)
{
this.num = num;
this.message = message;
}
}
// Created with excel formula:
// ="new ErrorClass("&A1&", """&PROPER(SUBSTITUTE(MID(A1,7,LEN(A1)-6), "_", " "))&"""), "
private static ErrorClass[] ERROR_LIST = new ErrorClass[] {
new ErrorClass(ERROR_ACCESS_DENIED, "Error: Access Denied"),
new ErrorClass(ERROR_ALREADY_ASSIGNED, "Error: Already Assigned"),
new ErrorClass(ERROR_BAD_DEVICE, "Error: Bad Device"),
new ErrorClass(ERROR_BAD_NET_NAME, "Error: Bad Net Name"),
new ErrorClass(ERROR_BAD_PROVIDER, "Error: Bad Provider"),
new ErrorClass(ERROR_CANCELLED, "Error: Cancelled"),
new ErrorClass(ERROR_EXTENDED_ERROR, "Error: Extended Error"),
new ErrorClass(ERROR_INVALID_ADDRESS, "Error: Invalid Address"),
new ErrorClass(ERROR_INVALID_PARAMETER, "Error: Invalid Parameter"),
new ErrorClass(ERROR_INVALID_PASSWORD, "Error: Invalid Password"),
new ErrorClass(ERROR_MORE_DATA, "Error: More Data"),
new ErrorClass(ERROR_NO_MORE_ITEMS, "Error: No More Items"),
new ErrorClass(ERROR_NO_NET_OR_BAD_PATH, "Error: No Net Or Bad Path"),
new ErrorClass(ERROR_NO_NETWORK, "Error: No Network"),
new ErrorClass(ERROR_BAD_PROFILE, "Error: Bad Profile"),
new ErrorClass(ERROR_CANNOT_OPEN_PROFILE, "Error: Cannot Open Profile"),
new ErrorClass(ERROR_DEVICE_IN_USE, "Error: Device In Use"),
new ErrorClass(ERROR_EXTENDED_ERROR, "Error: Extended Error"),
new ErrorClass(ERROR_NOT_CONNECTED, "Error: Not Connected"),
new ErrorClass(ERROR_OPEN_FILES, "Error: Open Files"),
};
private static string getErrorForNumber(int errNum)
{
foreach (ErrorClass er in ERROR_LIST)
{
if (er.num == errNum) return er.message;
}
return "Error: Unknown, " + errNum;
}
#endregion
[DllImport("Mpr.dll")] private static extern int WNetUseConnection(
IntPtr hwndOwner,
NETRESOURCE lpNetResource,
string lpPassword,
string lpUserID,
int dwFlags,
string lpAccessName,
string lpBufferSize,
string lpResult
);
[DllImport("Mpr.dll")] private static extern int WNetCancelConnection2(
string lpName,
int dwFlags,
bool fForce
);
[StructLayout(LayoutKind.Sequential)] private class NETRESOURCE
{
public int dwScope = 0;
public int dwType = 0;
public int dwDisplayType = 0;
public int dwUsage = 0;
public string lpLocalName = "";
public string lpRemoteName = "";
public string lpComment = "";
public string lpProvider = "";
}
public static string connectToRemote(string remoteUNC, string username, string password)
{
return connectToRemote(remoteUNC, username, password, false);
}
public static string connectToRemote(string remoteUNC, string username, string password, bool promptUser)
{
NETRESOURCE nr = new NETRESOURCE();
nr.dwType = RESOURCETYPE_DISK;
nr.lpRemoteName = remoteUNC;
// nr.lpLocalName = "F:";
int ret;
if (promptUser)
ret = WNetUseConnection(IntPtr.Zero, nr, "", "", CONNECT_INTERACTIVE | CONNECT_PROMPT, null, null, null);
else
ret = WNetUseConnection(IntPtr.Zero, nr, password, username, 0, null, null, null);
if (ret == NO_ERROR) return null;
return getErrorForNumber(ret);
}
public static string disconnectRemote(string remoteUNC)
{
int ret = WNetCancelConnection2(remoteUNC, CONNECT_UPDATE_PROFILE, false);
if (ret == NO_ERROR) return null;
return getErrorForNumber(ret);
}
}
}
``` | For people looking for a quick solution, you can use the `NetworkShareAccesser` I wrote recently (based on [this answer](https://stackoverflow.com/a/684040/808723) (thanks so much!)):
**Usage:**
```
using (NetworkShareAccesser.Access(REMOTE_COMPUTER_NAME, DOMAIN, USER_NAME, PASSWORD))
{
File.Copy(@"C:\Some\File\To\copy.txt", @"\\REMOTE-COMPUTER\My\Shared\Target\file.txt");
}
```
**WARNING:** Please make absolutely sure, that `Dispose` of the `NetworkShareAccesser` is called (even if your app crashes!), otherwise an open connection will remain on Windows. You can see all open connections by opening the `cmd` prompt and enter `net use`.
**The Code:**
```
/// <summary>
/// Provides access to a network share.
/// </summary>
public class NetworkShareAccesser : IDisposable
{
private string _remoteUncName;
private string _remoteComputerName;
public string RemoteComputerName
{
get
{
return this._remoteComputerName;
}
set
{
this._remoteComputerName = value;
this._remoteUncName = @"\\" + this._remoteComputerName;
}
}
public string UserName
{
get;
set;
}
public string Password
{
get;
set;
}
#region Consts
private const int RESOURCE_CONNECTED = 0x00000001;
private const int RESOURCE_GLOBALNET = 0x00000002;
private const int RESOURCE_REMEMBERED = 0x00000003;
private const int RESOURCETYPE_ANY = 0x00000000;
private const int RESOURCETYPE_DISK = 0x00000001;
private const int RESOURCETYPE_PRINT = 0x00000002;
private const int RESOURCEDISPLAYTYPE_GENERIC = 0x00000000;
private const int RESOURCEDISPLAYTYPE_DOMAIN = 0x00000001;
private const int RESOURCEDISPLAYTYPE_SERVER = 0x00000002;
private const int RESOURCEDISPLAYTYPE_SHARE = 0x00000003;
private const int RESOURCEDISPLAYTYPE_FILE = 0x00000004;
private const int RESOURCEDISPLAYTYPE_GROUP = 0x00000005;
private const int RESOURCEUSAGE_CONNECTABLE = 0x00000001;
private const int RESOURCEUSAGE_CONTAINER = 0x00000002;
private const int CONNECT_INTERACTIVE = 0x00000008;
private const int CONNECT_PROMPT = 0x00000010;
private const int CONNECT_REDIRECT = 0x00000080;
private const int CONNECT_UPDATE_PROFILE = 0x00000001;
private const int CONNECT_COMMANDLINE = 0x00000800;
private const int CONNECT_CMD_SAVECRED = 0x00001000;
private const int CONNECT_LOCALDRIVE = 0x00000100;
#endregion
#region Errors
private const int NO_ERROR = 0;
private const int ERROR_ACCESS_DENIED = 5;
private const int ERROR_ALREADY_ASSIGNED = 85;
private const int ERROR_BAD_DEVICE = 1200;
private const int ERROR_BAD_NET_NAME = 67;
private const int ERROR_BAD_PROVIDER = 1204;
private const int ERROR_CANCELLED = 1223;
private const int ERROR_EXTENDED_ERROR = 1208;
private const int ERROR_INVALID_ADDRESS = 487;
private const int ERROR_INVALID_PARAMETER = 87;
private const int ERROR_INVALID_PASSWORD = 1216;
private const int ERROR_MORE_DATA = 234;
private const int ERROR_NO_MORE_ITEMS = 259;
private const int ERROR_NO_NET_OR_BAD_PATH = 1203;
private const int ERROR_NO_NETWORK = 1222;
private const int ERROR_BAD_PROFILE = 1206;
private const int ERROR_CANNOT_OPEN_PROFILE = 1205;
private const int ERROR_DEVICE_IN_USE = 2404;
private const int ERROR_NOT_CONNECTED = 2250;
private const int ERROR_OPEN_FILES = 2401;
#endregion
#region PInvoke Signatures
[DllImport("Mpr.dll")]
private static extern int WNetUseConnection(
IntPtr hwndOwner,
NETRESOURCE lpNetResource,
string lpPassword,
string lpUserID,
int dwFlags,
string lpAccessName,
string lpBufferSize,
string lpResult
);
[DllImport("Mpr.dll")]
private static extern int WNetCancelConnection2(
string lpName,
int dwFlags,
bool fForce
);
[StructLayout(LayoutKind.Sequential)]
private class NETRESOURCE
{
public int dwScope = 0;
public int dwType = 0;
public int dwDisplayType = 0;
public int dwUsage = 0;
public string lpLocalName = "";
public string lpRemoteName = "";
public string lpComment = "";
public string lpProvider = "";
}
#endregion
/// <summary>
/// Creates a NetworkShareAccesser for the given computer name. The user will be prompted to enter credentials
/// </summary>
/// <param name="remoteComputerName"></param>
/// <returns></returns>
public static NetworkShareAccesser Access(string remoteComputerName)
{
return new NetworkShareAccesser(remoteComputerName);
}
/// <summary>
/// Creates a NetworkShareAccesser for the given computer name using the given domain/computer name, username and password
/// </summary>
/// <param name="remoteComputerName"></param>
/// <param name="domainOrComuterName"></param>
/// <param name="userName"></param>
/// <param name="password"></param>
public static NetworkShareAccesser Access(string remoteComputerName, string domainOrComputerName, string userName, string password)
{
    return new NetworkShareAccesser(remoteComputerName,
                                    domainOrComputerName + @"\" + userName,
                                    password);
}
/// <summary>
/// Creates a NetworkShareAccesser for the given computer name using the given username (format: domainOrComputername\Username) and password
/// </summary>
/// <param name="remoteComputerName"></param>
/// <param name="userName"></param>
/// <param name="password"></param>
public static NetworkShareAccesser Access(string remoteComputerName, string userName, string password)
{
return new NetworkShareAccesser(remoteComputerName,
userName,
password);
}
private NetworkShareAccesser(string remoteComputerName)
{
RemoteComputerName = remoteComputerName;
this.ConnectToShare(this._remoteUncName, null, null, true);
}
private NetworkShareAccesser(string remoteComputerName, string userName, string password)
{
RemoteComputerName = remoteComputerName;
UserName = userName;
Password = password;
this.ConnectToShare(this._remoteUncName, this.UserName, this.Password, false);
}
private void ConnectToShare(string remoteUnc, string username, string password, bool promptUser)
{
NETRESOURCE nr = new NETRESOURCE
{
dwType = RESOURCETYPE_DISK,
lpRemoteName = remoteUnc
};
int result;
if (promptUser)
{
result = WNetUseConnection(IntPtr.Zero, nr, "", "", CONNECT_INTERACTIVE | CONNECT_PROMPT, null, null, null);
}
else
{
result = WNetUseConnection(IntPtr.Zero, nr, password, username, 0, null, null, null);
}
if (result != NO_ERROR)
{
throw new Win32Exception(result);
}
}
private void DisconnectFromShare(string remoteUnc)
{
int result = WNetCancelConnection2(remoteUnc, CONNECT_UPDATE_PROFILE, false);
if (result != NO_ERROR)
{
throw new Win32Exception(result);
}
}
/// <summary>
/// Performs application-defined tasks associated with freeing, releasing, or resetting unmanaged resources.
/// </summary>
/// <filterpriority>2</filterpriority>
public void Dispose()
{
this.DisconnectFromShare(this._remoteUncName);
}
}
```
| Accessing a Shared File (UNC) From a Remote, Non-Trusted Domain With Credentials | [
"",
"c#",
"windows",
"unc",
"file-sharing",
""
] |
I have a set of macros that I have turned into an add-in in Excel. The macros allow me to interact with another program that has what are called Microsoft Automation Objects that provide some control over what the other program does. For example, I have a filter tool in the add-in that filters the list provided by the other program to match a list in the Excel workbook. This is slow, though. I might have fifty thousand lines in the other program and want to filter out all of the lines that don't match a list of three thousand lines in Excel. This type of matching takes about 30-40 minutes. I have begun wondering if there is a way to do this with Python instead, since I suspect the matching process could be done in seconds.
Edited:
Thanks - based on the suggestion to look at Hammond's book I found a number of resources. However, though I am still exploring, it looks like many of these are old. For example, Hammond's book was published in 2000, which means the writing was finished almost a decade ago. Correction: I just found the package called PyWin32 with a 2/2009 build.
This should get me started. Thanks | Mark Hammond and Andy Robinson have written [the book](http://shop.oreilly.com/product/9781565926219.do) on accessing Windows COM objects from Python.
[Here](http://books.google.com/books?id=ns1WMyLVnRMC&pg=PA198&lpg=PA198&dq=python+microsoft+automation+objects&source=bl&ots=NVoi1KaePn&sig=U7PW8ttWlZumpqiLrzWSBL8IxSU&hl=en&ei=FyPBSYWUA4mMsAPSptgv&sa=X&oi=book_result&resnum=1&ct=result#PPA198,M1) is an example using Excel. | You will probably need the win32com package.
This is a sample example I found at <http://www.markcarter.me.uk/computing/python/excel.html>, which shows how to use COM with Excel. This might be a good start.
```
# this example starts Excel, creates a new workbook,
# puts some text in the first and second cell
# closes the workbook without saving the changes
# and closes Excel. This happens really fast, so
# you may want to comment out some lines and add them
# back in one at a time ... or do the commands interactively
from win32com.client import Dispatch
xlApp = Dispatch("Excel.Application")
xlApp.Visible = 1
xlApp.Workbooks.Add()
xlApp.ActiveSheet.Cells(1,1).Value = 'Python Rules!'
xlApp.ActiveWorkbook.ActiveSheet.Cells(1,2).Value = 'Python Rules 2!'
xlApp.ActiveWorkbook.Close(SaveChanges=0) # see note 1
xlApp.Quit()
xlApp.Visible = 0 # see note 2
del xlApp
# raw_input("press Enter ...")
```
| Accessing Microsoft Automation Objects from Python | [
"",
"python",
"object",
"automation",
""
] |
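The speed-up anticipated in the question above is realistic, because the matching step is a set-membership problem once both lists are in Python. A minimal pure-Python sketch of just the filtering (all names are illustrative; reading the real rows would go through win32com as described in the answers):

```python
# Illustrative stand-ins for the 50,000 lines from the other program
# and the 3,000-line keep-list from the Excel workbook.
keep = {"item-%d" % i for i in range(3000)}
rows = ["item-%d" % i for i in range(50000)]

# Set membership is O(1) per row, so all 50,000 checks finish in
# milliseconds rather than the 30-40 minutes of cell-by-cell matching.
matched = [row for row in rows if row in keep]
print(len(matched))  # 3000
```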
I need to remove all characters from a string that aren't in the set `a-z A-Z 0-9` and are not spaces.
Does anyone have a function to do this? | Sounds like you almost knew what you wanted to do already, you basically defined it as a regex.
```
preg_replace("/[^A-Za-z0-9 ]/", '', $string);
```
| For Unicode characters, it is:
```
preg_replace("/[^[:alnum:][:space:]]/u", '', $string);
```
| How to remove non-alphanumeric characters? | [
"",
"php",
"regex",
"string",
""
] |
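For comparison, the same two substitutions from the answers above can be sketched with Python's `re` module (a rough equivalent, not a drop-in replacement for the PHP code):

```python
import re

s = "Hello, world! #123"

# ASCII version, mirroring the [^A-Za-z0-9 ] character class:
ascii_clean = re.sub(r"[^A-Za-z0-9 ]", "", s)
print(ascii_clean)  # Hello world 123

# Unicode-aware version, keeping word characters and whitespace
# (Python 3 patterns match Unicode by default):
unicode_clean = re.sub(r"[^\w\s]", "", s)
print(unicode_clean)  # Hello world 123
```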
I need a Javascript sample which reads contents from a url and creates a file in the client with same contents when some button is clicked. I also understand that Javascript cannot be allowed to access the local file system (Unless you use ActiveX objects). I do not want to rely on ActiveX objects (since my client is not IE always).
So here is what I did. I used the standard XmlHttpRequest made a request and got my bytes. Now I thought I could kinda stream this contents to the user, first by opening a standard dialog box (the box that pops up when you attempt download something from internet with options like Open/Save/Cancel) and then asking the user to save it somewhere.
I know how to do the read part, can someone show some javascript/html sample on "How to stream open a confirm dialog box to the user and stream some contents?"
PS: Not too sure if this could be done in javascript at all but with Javascript you never know what is possible and what is not :) | Rather than using Javascript to stream the content, which is not possible for obvious security reasons, you need to point the browser at a URL that will return a '`Content-Type`' header of '`application/octet-stream`'. In most cases this will force the browser to initiate a 'save as' operation and ask the user what to do with it.
I believe it is possible to do this using an iframe in the same page, such that the user will not have to navigate away from the page or open a new tab/window. | If I understand what your question is, then you want to use JavaScript to write data like a server-side script would (PHP, Python, Java, etc..) but in browser?
If so, then what you're asking isn't possible with in-browser JavaScript.
However if for some reason you wanted to do this with server-side JavaScript, then yes the "streaming" part is possible. | How can javascript stream some contents to the user? | [
"",
"javascript",
"stream",
""
] |
Consider this:
```
public class interface Person : IPerson
{
int ID { get; protected set; }
string FirstName { get; set; }
string LastName { get; set; }
string FullName { get { return FirstName + " " + LastName; } }
}
```
And this:
```
public class StubPerson : IPerson
{
int ID { get { return 0; } protected set { } }
    string FirstName { get { return "Test"; } set { } }
    string LastName { get { return "User"; } set { } }
string FullName { get { return FirstName + " " + LastName; } }
}
```
Usage:
```
IPerson iperson = new Person();
```
Or:
```
IPerson ipersonStub = new StubPerson();
```
Or:
```
IPerson ipersonMock = mocks.CreateMock<IPerson>();
```
So in effect we are declaring the **IPerson** interface and the **Person** class at the same time:
```
public class interface Person : IPerson
```
**Do you think it would be useful to have this kind of support in .NET/C#?**
**Edit:**
Due to mass confusion I think I need to clarify the proposed purpose:
Without this feature you would have to write:
```
interface IPerson
{
int ID { get; }
string FirstName { get; set; }
string LastName { get; set; }
string FullName { get; }
}
```
as well as this:
```
public class Person : IPerson
{
int ID { get; protected set; }
string FirstName { get; set; }
string LastName { get; set; }
string FullName { get { return FirstName + " " + LastName; } }
}
```
**I'm not proposing any semantic change at all.** | I considered [the same sort of thing a while ago](https://codeblog.jonskeet.uk/2007/02/28/wacky-ideas-2-class-interfaces/), particularly for use in the case where you only have one production implementation of an interface, but you want to mock it out for testing. At the moment it ends up being a bit like the .c/.h files of yore.
I suspect in the end that the benefits of it are outweighed by the extra complexity both in the language and then reading the code afterwards. I'd still be interested in seeing it explored more thoroughly though. Even then, there are other things way higher on my priority list - better support for immutability being at the top :) | Let me see if I understand what you're asking:
Why can't we declare an interface:
```
interface IPerson
{
string Name {get;set;}
int ID {get;set;}
}
```
And classes which implement that interface will inherit its properties without having to re-declare them:
```
class Person : IPerson { }
//person now has properties Name and ID
```
The reason you can't do this is even though the text of your interface code and your class code are very similar, they *mean very* different things. The interface simply says "implementor will have a string Name with getter and setter". It is the class which says "return private field when getter for name is invoked." Even if you use the auto-property shortcut to let the compiler implement that logic, it is still *logic*, which belongs in the class. Just because:
```
string Name {get;set;}
```
*looks* the same in an interface and in a class, it does not mean even remotely the same thing.
It would be very dangerous for the compiler to implement arbitrary logic to fulfill your contracts for you, instead of complaining at compile time that you haven't implemented them. It could introduce bugs very difficult to track down. Having compilers fall back to default behavior when no behavior is defined is a very, very bad idea. | Do you think "auto interface implementation" would be useful in .NET / C# | [
"",
"c#",
".net",
"interface",
"automatic-properties",
"language-enhancement",
""
] |
I have a vector of myObjects in global scope.
```
std::vector<myObject>
```
A method is passed a pointer to one of the elements in the vector.
Can this method increment the pointer, to get to the next element,
```
myObject* pmObj;
++pmObj; // the next element ??
```
or should it be passed an `std::vector<myObject>::iterator` and increment that instead?
Assume for now that the vector will not get changed in the meantime. | Yes - the standard guarantees in a technical correction that the storage for a vector is contiguous, so incrementing pointers into a vector will work. | Yes, this will work as expected since `std::vector` is mandated to use contiguous storage by the Standard. I would suggest passing in a pair of iterators if you are working with a range of objects. This is pretty much the standard idiom as employed by the STL. This will make your code a little safer as well since you have an explicit endpoint for iteration instead of relying on a count or something like that. | Using a pointer to an object stored in a vector... c++ | [
"",
"c++",
"pointers",
"iterator",
"stdvector",
""
] |
How do you set up a JTable to be able to drag a row to a different index in the table? For example, if I have 5 rows and I want to drag the 4th row to the 2nd position? | Check out the [drag and drop](http://java.sun.com/docs/books/tutorial/uiswing/dnd/index.html) section of the Java Tutorial. There are some examples on how to implement this for `JTable`.
```
table.setDragEnabled(true);
table.setDropMode(DropMode.INSERT_ROWS);
table.setTransferHandler(new TableRowTransferHandler(table));
```
Your TableModel should implement the following to allow for re-ordering:
```
public interface Reorderable {
public void reorder(int fromIndex, int toIndex);
}
```
This TransferHandler class handles the drag & drop, and calls reorder() on your TableModel when the gesture is completed.
```
/**
* Handles drag & drop row reordering
*/
public class TableRowTransferHandler extends TransferHandler {
private final DataFlavor localObjectFlavor = new ActivationDataFlavor(Integer.class, "application/x-java-Integer;class=java.lang.Integer", "Integer Row Index");
private JTable table = null;
public TableRowTransferHandler(JTable table) {
this.table = table;
}
@Override
protected Transferable createTransferable(JComponent c) {
assert (c == table);
return new DataHandler(new Integer(table.getSelectedRow()), localObjectFlavor.getMimeType());
}
@Override
public boolean canImport(TransferHandler.TransferSupport info) {
boolean b = info.getComponent() == table && info.isDrop() && info.isDataFlavorSupported(localObjectFlavor);
table.setCursor(b ? DragSource.DefaultMoveDrop : DragSource.DefaultMoveNoDrop);
return b;
}
@Override
public int getSourceActions(JComponent c) {
return TransferHandler.COPY_OR_MOVE;
}
@Override
public boolean importData(TransferHandler.TransferSupport info) {
JTable target = (JTable) info.getComponent();
JTable.DropLocation dl = (JTable.DropLocation) info.getDropLocation();
int index = dl.getRow();
int max = table.getModel().getRowCount();
if (index < 0 || index > max)
index = max;
target.setCursor(Cursor.getPredefinedCursor(Cursor.DEFAULT_CURSOR));
try {
Integer rowFrom = (Integer) info.getTransferable().getTransferData(localObjectFlavor);
if (rowFrom != -1 && rowFrom != index) {
((Reorderable)table.getModel()).reorder(rowFrom, index);
if (index > rowFrom)
index--;
target.getSelectionModel().addSelectionInterval(index, index);
return true;
}
} catch (Exception e) {
e.printStackTrace();
}
return false;
}
@Override
protected void exportDone(JComponent c, Transferable t, int act) {
if ((act == TransferHandler.MOVE) || (act == TransferHandler.NONE)) {
table.setCursor(Cursor.getPredefinedCursor(Cursor.DEFAULT_CURSOR));
}
}
}
```
| How do I drag and drop a row in a JTable? | [
"",
"java",
"swing",
"drag-and-drop",
"jtable",
"mouse",
""
] |
When you do stuff like:
```
for (int i = 0; i < collection.Count; ++i )
```
is collection.Count called on every iteration?
Would the result change if the Count property dynamically gets the count on call? | Yes, Count will be evaluated on every single pass. The reason is that it's possible for the collection to be modified during the execution of a loop. Given the loop structure, the variable i should represent a valid index into the collection during an iteration. If the check were not done on every loop, this would not be provably true. Example case:
```
for ( int i = 0; i < collection.Count; i++ ) {
collection.Clear();
}
```
The one exception to this rule is looping over an array where the constraint is the Length.
```
for ( int i = 0; i < someArray.Length; i++ ) {
// Code
}
```
The CLR JIT will special case this type of loop, in certain circumstances, since the length of an array can't change. In those cases, bounds checking will only occur once.
Reference: <http://blogs.msdn.com/brada/archive/2005/04/23/411321.aspx> | Count would be evaluated on every pass. If you continued to add to the collection and the iterator never caught up, you would have an endless loop.
```
class Program
{
static void Main(string[] args)
{
List<int> intCollection = new List<int>();
for(int i=-1;i < intCollection.Count;i++)
{
intCollection.Add(i + 1);
}
}
}
```
This eventually will get an out of memory exception. | Is the condition in a for loop evaluated each iteration? | [
"",
"c#",
".net",
"loops",
""
] |
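The per-iteration evaluation described above is easy to observe directly in any language with computed properties. A small Python sketch (a `while` loop stands in for the C# `for`, since Python's `for` iterates differently; the class is purely illustrative):

```python
class Collection:
    """Stand-in for a collection whose Count is computed on each access."""
    def __init__(self, items):
        self._items = list(items)
        self.count_calls = 0

    @property
    def count(self):
        self.count_calls += 1
        return len(self._items)

c = Collection([10, 20, 30])
i = 0
while i < c.count:  # condition checked before every pass, like the C# for loop
    i += 1

print(c.count_calls)  # 4: three checks that pass, plus the final failing one
```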
Hello, I have a chunk of memory (allocated with malloc()) that contains bits (bit literals). I'd like to read it as an array of char, or, better, print out the ASCII value of 8 consecutive bits of the memory.
I have allocated the memory as char \*, but I've not been able to take characters out in a better way than evaluating each bit, adding the value to a char and shifting the value of the char left, in a loop. I was looking for a faster solution.
Thank you
What I've wrote for now is this:
for allocation:
```
char * bits = (char*) malloc(1);
```
for writing to mem:
```
ifstream cleartext;
cleartext.open(sometext);
while(cleartext.good())
{
c = cleartext.get();
for(int j = 0; j < 8; j++)
{ //set(index) and reset(index) set or reset the bit at bits[i]
(c & 0x80) ? (set(index)):(reset(index));//(*ptr++ = '1'):(*ptr++='0');
c = c << 1;
}..
}..
```
and until now I've not been able to get character back, I only get the bits printed out using:
```
printf("%s\n" bits);
```
An example of what I'm trying to do is:
input.txt contains the string "AAAB"
My program would have to write "AAAB" as "01000001010000010100000101000010" to memory
(it's the ASCII values in bit of AAAB that are 65656566 in bits)
Then I would like it to have a function to rewrite the content of the memory to a file.
So if memory contains again "01000001010000010100000101000010" it would write to the output file "AAAB". |
```
int numBytes = 512;
char *pChar = (char *)malloc(numBytes);
for( int i = 0; i < numBytes; i++ ){
pChar[i] = '8';
}
```
Since this is C++, you can also use "new":
```
int numBytes = 512;
char *pChar = new char[numBytes];
for( int i = 0; i < numBytes; i++ ){
pChar[i] = '8';
}
```
| If you want to visit every *bit* in the memory chunk, it looks like you need `std::bitset`.
```
char* pChunk = static_cast<char*>( malloc( n ) );
// read in pChunk data
// iterate over all the bits.
for( int i = 0; i != n; ++i ){
    std::bitset<8>& bits = *reinterpret_cast< std::bitset<8>* >( pChunk + i );
    for( int iBit = 0; iBit != 8; ++iBit ) {
        std::cout << bits[iBit];
    }
}
```
| Howto read chunk of memory as char in c++ | [
"",
"c++",
"malloc",
"char",
""
] |
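The "AAAB" round trip from the question above can be checked with a few lines of Python, which is a handy way to validate a C++ implementation against known-good output (helper names are illustrative):

```python
def to_bits(text):
    # Each character becomes its 8-bit ASCII value, e.g. 'A' (65) -> "01000001".
    return "".join(format(ord(ch), "08b") for ch in text)

def from_bits(bits):
    # Consume the string 8 bits at a time and turn each chunk back into a char.
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

encoded = to_bits("AAAB")
print(encoded)             # 01000001010000010100000101000010
print(from_bits(encoded))  # AAAB
```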
Very new to PHP5 and still have some problems. I figured out how to SELECT with prepared statements; now I'm trying to insert/update. My code is as follows:
```
function input_lab_results($name, $image, $descrip) {
$query = "INSERT INTO pat_table (pat_name, pat_image, pat_descrip, pat_doctor, pat_resident, pat_create, pat_modify) VALUES (?, ?, ?, ?, ?, ?, ?)";
if($stmt = $this->conn->prepare($query)){
$stmt->bind_param('sssiidd', $name, $image, $descrip, 0, 0, date("Ymd"), date("Ymd"));
$stmt->execute();
die();
} else{
die($this->conn->error);
}
}
```
The error I am getting is
> Fatal error: Cannot pass parameter 4 by reference html/classes/mySQL.php on line 43
Any help/references would be appreciated. Thank you.
---
Everyone is mentioning using PDO; do you guys have any good tutorials or examples of this, please? | I changed it to
```
function input_lab_results($name, $image, $descrip) {
$query = "INSERT INTO pat_table (pat_name, pat_image, pat_descrip, pat_doctor, pat_resident, pat_create, pat_modify) VALUES (?, ?, ?, 0, 0, ?, ?)";
if($stmt = $this->conn->prepare($query)){
$stmt->bind_param('sssdd', $name, $image, $descrip, date("Ymd"), date("Ymd"));
$stmt->execute();
die();
} else{
die($this->conn->error);
}
}
```
Basically, where the query was reading the `?` for the integers I changed it to `0`, so I didn't bind it. | it's not `$descrip` that's the problem; it's the 0s (parameters 4 & 5). the solution is to pass variables rather than integers:
```
$query = "INSERT INTO pat_table (pat_name, pat_image, pat_descrip, pat_doctor, pat_resident, pat_create, pat_modify) VALUES (?, ?, ?, ?, ?, ?, ?)";
$pat_doctor = 0;
$pat_resident = 0;
if($stmt = $this->conn->prepare($query)){
    $stmt->bind_param('sssiidd', $name, $image, $descrip, $pat_doctor, $pat_resident, date("Ymd"), date("Ymd"));
```
evidently `mysqli_bind_param` wants its arguments as references, so it looks for where they're stored in memory rather than copying their values. this makes sense as some of the things you'd like to bind to the sql statement, such as that image, are probably big enough that you would rather not have excess copies running around. literals, string or otherwise, are not accessible by reference. see: <http://us.php.net/references>
i do not suggest hard coding the 0s into the sql statement, as it unnecessarily confuses your code.
let me suggest PDO, by the way. its syntax is much saner. | Having problems inserting and updating tables in a database with prepared statements | [
"",
"mysql",
"mysqli",
"php",
""
] |
I've been browsing revision 1.38.0 of the Boost libraries, in an attempt to decide if there are enough jewels there to justify negotiating my company's external software approval process. In the course of writing test programs and reading the documents, I've reached a couple conclusions
* of course, not everything in Boost will ever be of use in my engineering group
* more importantly, some of these libraries seem more polished than others
In fact, some libraries seem a bit toy-like to me.
There are a number of fairly accessible libraries that I can see putting to use after only a short period of investigation, such as *boost::variant* (I really like the *visitor* component and the fact that the compiler barfs if a visitor lacks an operator for one of the variant types). I'd use *boost::shared\_ptr* except for the fact that our group already has a set of smart pointer types.
So based on the broad experience of Stack Overflow users, which Boost libraries
* have high quality?
* are more than toys?
* are there any which have a high entry barrier but which are well worth learning?
Note that this is a somewhat different question than that posed in [Boost considered harmful?](https://stackoverflow.com/questions/569198/boost-considered-harmful)
P.S. - Has one of the answers (from litb) been deleted? I can't see it here, and only an excerpt on my user page... | I use quite frequently (and it makes my life simpler):
* smart pointers (`shared_ptr`, `scoped_ptr`, `weak_ptr`, interprocess `unique_ptr`):
+ `scoped_ptr` for basic RAII (without shared ownership and ownership transfer), at no cost.
+ `shared_ptr` for more complex operations - when shared ownership is needed. However there is some cost.
+ `unique_ptr` - there is active work at boost on unifying various approaches (present at Boost) to `unique_ptr` with move emulation.
+ They are really simple to use (header only), easy to learn and very well tested (well, except maybe the `unique_ptr`)
* Boost Thread - actively developed (threads are now movable) library for working with threads. Hides the complexity of thread implementation on a given platform.
* Boost MPL and Fusion - these are more difficult to explain. For long time I didn't use compile time power, but after some reading and learning it turned out that some of my code can be nicely simplified. Still, beware of the compilation time...
* Boost Asio
+ Contrary to the first impression (at least some time ago) it is not only the networking library. It provides asynchronous I/O model that can be used for virtually anything.
* Boost Format (powerful output formatting, but very heavy)
* Boost Spirit2x (Karma and Qi used both for parsing and generating output based on a given grammar). Really powerful, can create a parser without resorting to external tools. Yet the compilation time might be a problem. Also version 2x is being actively developed and the documentation is rather scarce (the spirit-devel mailing list is very helpful though)
* Boost Bind, Function and Lambda to make your life easier and Boost Phoenix - just to experiment
* lexical\_cast (something similar might be born soon as boost::string)
* Regex/Xpressive - regular expressions
* Type traits and concept checks - once again to make your life easier
* Math:
+ various random number generators
+ various statistical distributions
+ ublas - for using LAPACK/BLAS bindings in C++ like way
+ some mathematical functions, normally not available in C++
+ some tools for controlling the conversions between numeric types
+ interval arithmetics
* Boost Iterator (specialized adaptors for iterators and facade for creating your own)
* Boost Unit Testing framework
And still there are some parts that I'd barely touched in Boost. Probably I also forgot to mention few obvious ones.
Remember to use right tools (hammers) for right problems (nails). Remember to keep the solutions simple. Remember about the cost of received functionality (for example `shared_ptr` or `boost::format` runtime overhead or MPL/Fusion/Spirit/Phoenix compile time costs and executable sizes). But experiment and learn - it's where the fun is.
And when it comes to convincing the management to use the new libraries - you don't have to start with all the libraries. Start with the simple things (probably the ones that have a long and stable Boost history, broad compiler support, are planned for inclusion in TR2/C++1x, etc) and simple examples that show the benefits. | I found `boost` to be an uncontested **must-have** when designing cross-platform (e.g. \*nix and win32) multi-threaded apps (`boost::thread`, `boost::interprocess`.) This alone has been justification enough in at least one instance for adopting `boost` as part of my employers' projects.
The rest (containers, generic programming and meta-programming, memory) followed as freebies. | What are the Best Components of Boost? | [
"",
"c++",
"boost",
"utility",
""
] |
How do you port C++ programs with makefile made from GNU C++ in Linux to Visual C++? | One thing I can suggest is to use CMake. If you implement your build system with CMake to auto-generate the makefiles for GCC on Linux, it takes only minor modifications to auto-generate projects and solutions for VC++.
Of course, this means learning a whole new build tool, so it may not be for you. It's only a suggestion. | I don't know about an easy way to simply convert from one to another, but..
Assuming you use only ANSI C/C++ features, usually you don't need to convert the makefile, just look which .c/.cpp files are in it and add them to the VS project; you'll also have to check about compiler options and defined macros, to put them inside the VS project. I've done this to compile libs like expat, freetype, agg and others, without problems. | Port GNU C++ programs to Visual C++ | [
"",
"c++",
"visual-studio",
"visual-c++",
"g++",
"port",
""
] |
My program adds a shortcut to the screen. I get the icon on the screen fine, but when I tap it, I get:
```
03-01 20:00:29.410: ERROR/AndroidRuntime(796): java.lang.SecurityException: Permission Denial: starting Intent { data=http://www.example.com/ flags=0x14000000 comp={com.isaacwaller.example/com.isaacwaller.example.ExampleCut} } from ProcessRecord{435c7398 796:android.process.acore/10005} (pid=796, uid=10005) requires null
```
Do you know the problem? Thanks,
Isaac | Figured it out, added this under `<activity>` tag of activity:
```
<intent-filter>
<action android:name="android.intent.action.MAIN"></action>
</intent-filter>
```
| I had something like this happen when I had accidentally duplicated the activity tag for one of my activities in my manifest. I had something like this in my application section.
```
<activity android:name=".ConventionHome" android:label="@string/app_name">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
<activity android:name="ConventionHome"></activity>
```
When I removed the second activity tag, things started working normally. | Android homescreen shortcut permission error | [
"",
"java",
"android",
"shortcut",
"permissions",
"homescreen",
""
] |
I'm currently using the following code to scan files that have been uploaded as part of an application form:
```
$safe_path = escapeshellarg($dir . $file);
$command = '/usr/bin/clamscan --stdout ' . $safe_path;
$out = '';
$int = -1;
exec($command, $out, $int);
if ($int == 0) {
// all good;
} else {
// VIRUS!;
}
```
It works, but is slow. Anyone got any suggestions that would a). speed things up and b). improve the script generally (for instance, I'm not entirely clear on the benefits of exec() vs system(), etc)?
If the speed can't be improved then I'd ideally like to display some kind of interim "Please be patient your files are being scanned" message, but am not sure how to go about that either.
EDIT: Sorry, should have said the scan needs to be done at the time, as the application in question won't be accepted without valid (i.e. virus-free) files. | If you don't need to display the results to the user instantly, you could add the file to a database table for scanning later.
Then, you could fork a new process to scan and update the results in the table. You have a good example here: <http://robert.accettura.com/blog/2006/09/14/asynchronous-processing-with-php/>.
If you absolutely need to display the results within the same request, then you could do it exactly as I said before but outputting a temp page requesting the results via AJAX; once the scan is over, redirect the user to the results page.
If you don't want to use JavaScript, then a simple meta refresh tag would do the trick. | Use clamdscan instead of clamscan. Clamdscan uses the built-in daemon that's running all the time and doesn't have to load the virus tables each time (as clamscan does). | Scan PHP uploads for viruses | [
"",
"php",
"upload",
"virus",
""
] |
I have a situation where I need to know the current color of an alternating row in a TemplateField of a GridView.
UPDATED:
How do I retrieve this Color value in a `<%# ??? %>` expression?
(Or a workaround where I get the row number.) | Create this function in your codebehind page (or in a `<script runat="server">` section in your .aspx page):
```
protected string GetColor(object container)
{
int ordinal = 0;
try
{
ordinal = int.Parse(DataBinder.Eval(container, "DataItemIndex").ToString());
}
catch (Exception)
{
ordinal = int.Parse(DataBinder.Eval(container, "ItemIndex").ToString());
}
return (ordinal % 2) == 0 ? "Row" : "Alternate Row";
}
```
Then in your markup, you'd call it like so:
```
<%# GetColor(Container) %>
```
(Note the uppercase "Container"). | To get the colors from inside of a <% %> tag in the template field itself, you could use this code...
```
<asp:TemplateField>
<ItemTemplate>
<%# ((GridViewRow)Container).RowState == DataControlRowState.Alternate ? ((GridView)((GridViewRow)Container).Parent.Parent).AlternatingRowStyle.BackColor : ((GridView)((GridViewRow)Container).Parent.Parent).RowStyle.BackColor%>
</ItemTemplate>
</asp:TemplateField>
```
You can also do this in the RowDataBound event of the GridView. In the RowDataBound event handler, you can query e.Row.RowState to find out what type of row you are on. Values include DataControlRowState.Alternate and DataControlRowState.Normal. You can use the sender to grab the color based on that row type...
```
protected void MyGridView_RowDataBound(object sender, GridViewRowEventArgs e)
{
// set first cell in the row to color just for demonstration purpose.
if (e.Row.RowType == DataControlRowType.DataRow && e.Row.RowState == DataControlRowState.Alternate)
{
e.Row.Cells[0].Text = ((GridView)sender).AlternatingRowStyle.BackColor.ToString();
}
}
``` | Get current row color in TemplateField? | [
"",
"c#",
"asp.net",
"templates",
"gridview",
""
] |
A very common challenge for technical architects is how to divide the application into assemblies and namespaces.
* Assemblies can be partitioned according to deployment, performance, and security boundaries.
* Namespaces can be partitioned according to logical application boundaries.
Also: namespaces can span multiple assemblies.
I had a bad experience in a project once where we partitioned assemblies according to logical units of the application. This decision ended up producing solution files with 30 or 40 projects! The master solution file's load time was approx. 5 minutes!!! This resulted in a great waste of time, pff...
The opposite scenario was to hold all code in 1 assembly and partition when it is really needed.
Do you have additional tips or best-practices regarding this issue? | I split code into separate assemblies only when I need to reuse it for two different applications (pretty much). So I start with everything in one project, and when the need to reuse code becomes obvious I create a new assembly and move the code (sometimes it's obvious from the very beginning, e.g. when you need to have a web app and WinForms doing the same thing).
Re: namespaces, I prefer to have them quite well partitioned within an assembly, so it is clear where each class belongs and what it should be used for. | You can partition classes with namespaces and use folders if you want to **group** source files together for easier maintenance. If you have security requirements and certain assemblies need to go through special processing, such as obfuscation for example, then you may need to move those into a separate project.
Re-usability is also a factor that you may need to consider when thinking about whether a logical unit needs to get its own project since you may need this project in another solution as well. | Strategy tips on partitioning assemblies and namespaces | [
"",
"c#",
".net",
""
] |
I want to show a random record from the database. I would like to be able to show X number of random records if I choose. Therefore I need to select the top X records from a randomly selected list of IDs
(There will never be more than 500 records involved to choose from, unless the earth dramatically increases in size. Currently there are 66 possibles.)
This function works, but how can I make it better?
```
/***************************************************/
/* RandomSite */
//****************/
// Returns an array of random site IDs or NULL
/***************************************************/
function RandomSite($intNumberofSites = 1) {
$arrOutput = NULL;
//open the database
GetDatabaseConnection('dev');
//inefficient
//$strSQL = "SELECT id FROM site_info WHERE major <> 0 ORDER BY RAND() LIMIT ".$intNumberofSites.";";
//Not wonderfully random
//$strSQL = "SELECT id FROM site_info WHERE major <> 0 AND id >= (SELECT FLOOR( COUNT(*) * RAND()) FROM site_info ) ORDER BY id LIMIT ".$intNumberofSites.";";
//Manual selection from available pool of candidates ?? Can I do this better ??
$strSQL = "SELECT id FROM site_info WHERE major <> 0;";
if (is_numeric($intNumberofSites))
{
//execute my query
$result = @mysql_query($strSQL);
$i = 0;
//create an array I can work with ?? Can I do this better ??
while ($row = mysql_fetch_array($result, MYSQL_NUM))
{
$arrResult[$i++] = $row[0];
}
//mix them up
shuffle($arrResult);
//take the first X number of results ?? Can I do this better ??
for ($i=0;$i<$intNumberofSites;$i++)
{
$arrOutput[$i] = $arrResult[$i];
}
}
return $arrOutput;
}
```
**UPDATE QUESTION:**
I know about the ORDER BY RAND(), I just don't want to use it because there are rumors it isn't the best at scaling and performance. I am being overly critical of my code. What I have works, ORDER BY RAND() works, but can I make it better?
**MORE UPDATE**
There are holes in the IDs. There is not a ton of churn, but any churn that happens needs to be approved by our team, and therefore could be handled by dumping any caching.
Thanks for the replies! | Why not use the RAND() function in an ORDER BY clause in your database query? Then you don't have to get into randomizing, etc., in code...
Something like (I don't know if this is legal)
```
Select *
from site_info
Order by Rand()
LIMIT N
```
where N is the number of records you want...
**EDIT**
Have you profiled your code vs. the query solution? I think you're just pre-optimizing here. | If you don't want to select with ORDER BY RAND():
Instead of shuffling, use [array\_rand](http://no.php.net/manual/en/function.array-rand.php) on the result:
```
$randKeys = (array) array_rand($arrResult, $intNumberofSites);
$arrOutput = array_intersect_key($arrResult, array_flip($randKeys));
```
edit: note that array\_rand() returns an array of random keys (a single key when only one is requested), not key => value pairs | Select variable number of random records from MySQL | [
"",
"php",
"mysql",
"random",
""
] |
I have the code below (I've included what I believe are all relevant sections):
```
private String readCommand = "SELECT LEVEL FROM USERS WHERE VAL_1 = ? AND VAL_@ = ?;";
public bool read(string id)
{
level = -1;
MySqlCommand m = new MySqlCommand(readCommand);
m.Parameters.Add(new MySqlParameter("", val1));
m.Parameters.Add(new MySqlParameter("", val2));
MySqlDataReader r = m.ExecuteReader();
if (r.HasRows)
level = Convert.ToInt32(r.GetValue(0).ToString());
r.Close();
return true;
}
```
When I run this, I get an IndexOutOfBoundsException on adding the first parameter. What have I done wrong? | Try this instead:
```
private String readCommand =
"SELECT LEVEL FROM USERS WHERE VAL_1 = @param_val_1 AND VAL_2 = @param_val_2;";
public bool read(string id)
{
level = -1;
    MySqlCommand m = new MySqlCommand(readCommand); // note: a MySqlConnection must still be assigned before executing
m.Parameters.AddWithValue("@param_val_1", val1);
m.Parameters.AddWithValue("@param_val_2", val2);
level = Convert.ToInt32(m.ExecuteScalar());
return true;
}
``` | ```
protected void Login1_Authenticate(object sender, AuthenticateEventArgs e)
{
MySqlConnection con = new MySqlConnection("server=localhost;User Id=root;database=result;password=1234");
con.Open();
MySqlCommand cmd = new MySqlCommand("Select * from users where username=?username and password=?password", con);
cmd.Parameters.Add(new MySqlParameter("username", this.Login1.UserName));
cmd.Parameters.Add(new MySqlParameter("password", this.Login1.Password));
MySqlDataReader dr = cmd.ExecuteReader();
if (dr.HasRows ==true)
{
e.Authenticated = true;
}
}
``` | Parameterized Query for MySQL with C# | [
"",
"c#",
"mysql",
""
] |
As seen in [This SO question on getting icons](https://stackoverflow.com/questions/524137/get-icons-for-common-file-types) for common file types, it's quite possible for a windows program to get the icons for a registered file type using the C++ Shell API. These icons may or may not exist on disk - for example, we wanted to make our own custom file browser and want to display the system-associated icon with the file.
Is there a native C# way to get the icons for various file types (and if so, how) or must it be done through PInvoke with shell API?
And as a follow up, if there is a native .NET way of doing it, is there a cross-platform way of doing it? | Take a look at: <http://mvolo.com/display-pretty-file-icons-in-your-aspnet-applications-with-iconhandler/>
It's not the cleanest solution but it works. Otherwise, try to get your hands on a library of icons keyed by MIME type or file extension. | One of my old open source projects includes an [Icon class](http://win32iam.svn.sourceforge.net/viewvc/win32iam/trunk/BlackFox.UninstallInformations/IconHandler.cs?view=markup) that does exactly that, so feel free to rip it; given its age, I put this file in the public domain. Anyway, it's just PInvoke for the most part.
To get an icon you use, for example:
```
Icon zipIcon = BlackFox.Win32.Icons.IconFromExtension(".zip", SystemIconSize.Small);
```
Full sample:
```
using System;
using System.Windows.Forms;
using BlackFox.Win32;
using System.Drawing;
class Program
{
static void Main(string[] args)
{
PictureBox pict = new PictureBox();
pict.Image = Icons.IconFromExtension(".zip", Icons.SystemIconSize.Large).ToBitmap();
pict.Dock = DockStyle.Fill;
pict.SizeMode = PictureBoxSizeMode.CenterImage;
Form form = new Form();
form.Controls.Add(pict);
Application.Run(form);
}
}
```
The library:
```
using System;
using System.Drawing;
using System.Runtime.InteropServices;
using Microsoft.Win32;
using System.Reflection;
using System.Collections.Generic;
namespace BlackFox.Win32
{
public static class Icons
{
#region Custom exceptions class
public class IconNotFoundException : Exception
{
public IconNotFoundException(string fileName, int index)
: base(string.Format("Icon with Id = {0} wasn't found in file {1}", index, fileName))
{
}
}
public class UnableToExtractIconsException : Exception
{
public UnableToExtractIconsException(string fileName, int firstIconIndex, int iconCount)
            : base(string.Format("Tried to extract {2} icons starting from the one with id {1} from the \"{0}\" file but failed", fileName, firstIconIndex, iconCount))
{
}
}
#endregion
#region DllImports
/// <summary>
/// Contains information about a file object.
/// </summary>
struct SHFILEINFO
{
/// <summary>
/// Handle to the icon that represents the file. You are responsible for
/// destroying this handle with DestroyIcon when you no longer need it.
/// </summary>
public IntPtr hIcon;
/// <summary>
/// Index of the icon image within the system image list.
/// </summary>
public IntPtr iIcon;
/// <summary>
/// Array of values that indicates the attributes of the file object.
/// For information about these values, see the IShellFolder::GetAttributesOf
/// method.
/// </summary>
public uint dwAttributes;
/// <summary>
/// String that contains the name of the file as it appears in the Microsoft
/// Windows Shell, or the path and file name of the file that contains the
/// icon representing the file.
/// </summary>
[MarshalAs(UnmanagedType.ByValTStr, SizeConst = 260)]
public string szDisplayName;
/// <summary>
/// String that describes the type of file.
/// </summary>
[MarshalAs(UnmanagedType.ByValTStr, SizeConst = 80)]
public string szTypeName;
};
[Flags]
enum FileInfoFlags : int
{
/// <summary>
/// Retrieve the handle to the icon that represents the file and the index
/// of the icon within the system image list. The handle is copied to the
/// hIcon member of the structure specified by psfi, and the index is copied
/// to the iIcon member.
/// </summary>
SHGFI_ICON = 0x000000100,
/// <summary>
/// Indicates that the function should not attempt to access the file
/// specified by pszPath. Rather, it should act as if the file specified by
/// pszPath exists with the file attributes passed in dwFileAttributes.
/// </summary>
SHGFI_USEFILEATTRIBUTES = 0x000000010
}
/// <summary>
/// Creates an array of handles to large or small icons extracted from
/// the specified executable file, dynamic-link library (DLL), or icon
/// file.
/// </summary>
/// <param name="lpszFile">
/// Name of an executable file, DLL, or icon file from which icons will
/// be extracted.
/// </param>
/// <param name="nIconIndex">
/// <para>
/// Specifies the zero-based index of the first icon to extract. For
/// example, if this value is zero, the function extracts the first
/// icon in the specified file.
/// </para>
/// <para>
    /// If this value is -1 and <paramref name="phiconLarge"/> and
/// <paramref name="phiconSmall"/> are both NULL, the function returns
/// the total number of icons in the specified file. If the file is an
/// executable file or DLL, the return value is the number of
/// RT_GROUP_ICON resources. If the file is an .ico file, the return
/// value is 1.
/// </para>
/// <para>
/// Windows 95/98/Me, Windows NT 4.0 and later: If this value is a
/// negative number and either <paramref name="phiconLarge"/> or
/// <paramref name="phiconSmall"/> is not NULL, the function begins by
/// extracting the icon whose resource identifier is equal to the
/// absolute value of <paramref name="nIconIndex"/>. For example, use -3
/// to extract the icon whose resource identifier is 3.
/// </para>
/// </param>
/// <param name="phIconLarge">
/// An array of icon handles that receives handles to the large icons
/// extracted from the file. If this parameter is NULL, no large icons
/// are extracted from the file.
/// </param>
/// <param name="phIconSmall">
/// An array of icon handles that receives handles to the small icons
/// extracted from the file. If this parameter is NULL, no small icons
/// are extracted from the file.
/// </param>
/// <param name="nIcons">
/// Specifies the number of icons to extract from the file.
/// </param>
/// <returns>
/// If the <paramref name="nIconIndex"/> parameter is -1, the
/// <paramref name="phIconLarge"/> parameter is NULL, and the
/// <paramref name="phiconSmall"/> parameter is NULL, then the return
/// value is the number of icons contained in the specified file.
/// Otherwise, the return value is the number of icons successfully
/// extracted from the file.
/// </returns>
[DllImport("Shell32", CharSet = CharSet.Auto)]
extern static int ExtractIconEx(
[MarshalAs(UnmanagedType.LPTStr)]
string lpszFile,
int nIconIndex,
IntPtr[] phIconLarge,
IntPtr[] phIconSmall,
int nIcons);
[DllImport("Shell32", CharSet = CharSet.Auto)]
extern static IntPtr SHGetFileInfo(
string pszPath,
int dwFileAttributes,
out SHFILEINFO psfi,
int cbFileInfo,
FileInfoFlags uFlags);
#endregion
/// <summary>
    /// Two constants extracted from the FileInfoFlags, the only ones that are
    /// meaningful to the user of this class.
/// </summary>
public enum SystemIconSize : int
{
Large = 0x000000000,
Small = 0x000000001
}
/// <summary>
/// Get the number of icons in the specified file.
/// </summary>
/// <param name="fileName">Full path of the file to look for.</param>
/// <returns></returns>
static int GetIconsCountInFile(string fileName)
{
return ExtractIconEx(fileName, -1, null, null, 0);
}
#region ExtractIcon-like functions
public static void ExtractEx(string fileName, List<Icon> largeIcons,
List<Icon> smallIcons, int firstIconIndex, int iconCount)
{
/*
* Memory allocations
*/
IntPtr[] smallIconsPtrs = null;
IntPtr[] largeIconsPtrs = null;
if (smallIcons != null)
{
smallIconsPtrs = new IntPtr[iconCount];
}
if (largeIcons != null)
{
largeIconsPtrs = new IntPtr[iconCount];
}
/*
* Call to native Win32 API
*/
int apiResult = ExtractIconEx(fileName, firstIconIndex, largeIconsPtrs, smallIconsPtrs, iconCount);
if (apiResult != iconCount)
{
throw new UnableToExtractIconsException(fileName, firstIconIndex, iconCount);
}
/*
* Fill lists
*/
if (smallIcons != null)
{
smallIcons.Clear();
foreach (IntPtr actualIconPtr in smallIconsPtrs)
{
smallIcons.Add(Icon.FromHandle(actualIconPtr));
}
}
if (largeIcons != null)
{
largeIcons.Clear();
foreach (IntPtr actualIconPtr in largeIconsPtrs)
{
largeIcons.Add(Icon.FromHandle(actualIconPtr));
}
}
}
public static List<Icon> ExtractEx(string fileName, SystemIconSize size,
int firstIconIndex, int iconCount)
{
List<Icon> iconList = new List<Icon>();
switch (size)
{
case SystemIconSize.Large:
ExtractEx(fileName, iconList, null, firstIconIndex, iconCount);
break;
case SystemIconSize.Small:
ExtractEx(fileName, null, iconList, firstIconIndex, iconCount);
break;
default:
throw new ArgumentOutOfRangeException("size");
}
return iconList;
}
public static void Extract(string fileName, List<Icon> largeIcons, List<Icon> smallIcons)
{
int iconCount = GetIconsCountInFile(fileName);
ExtractEx(fileName, largeIcons, smallIcons, 0, iconCount);
}
public static List<Icon> Extract(string fileName, SystemIconSize size)
{
int iconCount = GetIconsCountInFile(fileName);
return ExtractEx(fileName, size, 0, iconCount);
}
public static Icon ExtractOne(string fileName, int index, SystemIconSize size)
{
try
{
List<Icon> iconList = ExtractEx(fileName, size, index, 1);
return iconList[0];
}
catch (UnableToExtractIconsException)
{
throw new IconNotFoundException(fileName, index);
}
}
public static void ExtractOne(string fileName, int index,
out Icon largeIcon, out Icon smallIcon)
{
List<Icon> smallIconList = new List<Icon>();
List<Icon> largeIconList = new List<Icon>();
try
{
ExtractEx(fileName, largeIconList, smallIconList, index, 1);
largeIcon = largeIconList[0];
smallIcon = smallIconList[0];
}
catch (UnableToExtractIconsException)
{
throw new IconNotFoundException(fileName, index);
}
}
#endregion
    //this will look through the registry
    //to find if the extension has an icon.
public static Icon IconFromExtension(string extension,
SystemIconSize size)
{
// Add the '.' to the extension if needed
if (extension[0] != '.') extension = '.' + extension;
//opens the registry for the wanted key.
RegistryKey Root = Registry.ClassesRoot;
RegistryKey ExtensionKey = Root.OpenSubKey(extension);
ExtensionKey.GetValueNames();
RegistryKey ApplicationKey =
Root.OpenSubKey(ExtensionKey.GetValue("").ToString());
//gets the name of the file that have the icon.
string IconLocation =
ApplicationKey.OpenSubKey("DefaultIcon").GetValue("").ToString();
        string[] IconPath = IconLocation.Split(',');
        // guard against locations without an explicit icon index (no comma after the path)
        string IconIndex = (IconPath.Length > 1 && IconPath[1] != null) ? IconPath[1] : "0";
        IntPtr[] Large = new IntPtr[1], Small = new IntPtr[1];
        //extracts the icon from the file.
        ExtractIconEx(IconPath[0],
            Convert.ToInt16(IconIndex), Large, Small, 1);
return size == SystemIconSize.Large ?
Icon.FromHandle(Large[0]) : Icon.FromHandle(Small[0]);
}
public static Icon IconFromExtensionShell(string extension, SystemIconSize size)
{
        //add '.' if necessary
if (extension[0] != '.') extension = '.' + extension;
//temp struct for getting file shell info
SHFILEINFO fileInfo = new SHFILEINFO();
SHGetFileInfo(
extension,
0,
out fileInfo,
Marshal.SizeOf(fileInfo),
FileInfoFlags.SHGFI_ICON | FileInfoFlags.SHGFI_USEFILEATTRIBUTES | (FileInfoFlags)size);
return Icon.FromHandle(fileInfo.hIcon);
}
public static Icon IconFromResource(string resourceName)
{
Assembly assembly = Assembly.GetCallingAssembly();
return new Icon(assembly.GetManifestResourceStream(resourceName));
}
/// <summary>
/// Parse strings in registry who contains the name of the icon and
/// the index of the icon an return both parts.
/// </summary>
/// <param name="regString">The full string in the form "path,index" as found in registry.</param>
/// <param name="fileName">The "path" part of the string.</param>
/// <param name="index">The "index" part of the string.</param>
public static void ExtractInformationsFromRegistryString(
string regString, out string fileName, out int index)
{
if (regString == null)
{
throw new ArgumentNullException("regString");
}
if (regString.Length == 0)
{
throw new ArgumentException("The string should not be empty.", "regString");
}
index = 0;
string[] strArr = regString.Replace("\"", "").Split(',');
fileName = strArr[0].Trim();
if (strArr.Length > 1)
{
int.TryParse(strArr[1].Trim(), out index);
}
}
public static Icon ExtractFromRegistryString(string regString, SystemIconSize size)
{
string fileName;
int index;
ExtractInformationsFromRegistryString(regString, out fileName, out index);
return ExtractOne(fileName, index, size);
}
}
}
``` | How do I get common file type icons in C#? | [
"",
"c#",
".net",
"windows",
"icons",
"file-type",
""
] |
I am building a POP3 mailbox in PHP.
I have the following **files**:
* server\_access.php (fetch mails from the POP3 server)
* data\_access.php (which fetches/writes mails to local DB)
* mime\_parser.php (parses MIME content)
* core.php (uses above files and stores parsed mail as an assoc array called $inbox)
Now, I have the **pages** mailbox.php to show the inbox and showmail.php to display each mail. The user's credentials are stored in a .ini file and used as necessary. The thing is, I do a require\_once('core.php') in both mailbox.php and in showmail.php
I am able to display the inbox (i.e. $inbox has values); however, if I select to read a mail (pop-up window of showmail.php), $inbox is an empty array.
$inbox is defined as a static array in core.php | Static data is only static within the context of a class, meaning a static data member in a class is shared by all instances of that class.
What you seem to be talking about is data persisting across multiple HTTP requests. Static data won't do that for you. That's what $\_SESSION data is for.
To put it another way: once a script finishes servicing the current request, it completely dies. All data it had is completely cleaned up. The new request starts fresh.
Session data persists until PHP decides to clean it up or you manually destroy it. Typically, all you have to do to use session data is put this in your script:
**Script 1: mailbox.php**
```
session_start();
$_SESSION['mailbox'] = array( /* messages */ );
```
**Script 2: showmail.php**
```
session_start();
$mailbox = $_SESSION['mailbox'];
```
One thing to note: if your script is long-running, try to put a session\_commit() in as soon as possible, because session access blocks in PHP, meaning if another script tries to session\_start() for the same user it will block until the first script finishes executing or releases the session. | PHP sessions need a place to store session data between requests. In your case it is a temp\php\session\ folder in your home directory. Either create that folder or change session.save\_path in php.ini to point to a valid directory.
"",
"php",
"static-variables",
""
] |
What are some tools/practices used to measure performance with Silverlight?
I am interested in the performance costs of rendering certain xaml objects as well as algorithms I have written. I was about to start writing my own classes for this, but I thought I would ask here first.
Thank you in advance. | Have a look at Silverlight Spy 2. There are Events, Network, and Performance tabs that might give you some of the information you're looking for.
[Silverlight Spy 2](http://silverlightspy.com/silverlightspy/download-silverlight-spy/) | [dotTrace](http://www.jetbrains.com/profiler/) by JetBrains is a good tool for measuring performance of ASP.NET applications. Not sure how much it will help you with Silverlight, but it will at least analyze your back-end code. | How do you measure the performance of your Silverlight applications? | [
"",
"c#",
"silverlight",
"performance",
"profiler",
""
] |
I am looking for a .NET library that is able to decode data from a [PDF-417 barcode](http://en.wikipedia.org/wiki/PDF417) that is embedded either in an image file or PDF. At this point, I have only been able to find a [Java version](http://turbulence.org/Works/swipe/barcode.html) and a [C version](http://sourceforge.net/projects/pdf417decode/).
Ideally, this library would be both open-source and free of charge, but I doubt such a decoder exists.
I am open to trying out demos of existing products that you may have had experience with - which leads me to the question - have you had any experience reading PDF-417 barcodes embedded in images or PDFs using .NET, and which of the available products would you recommend to do so? | We use components (not free) from [IDAutomation](http://www.idautomation.com) for PDF417. They are very good. We use them for encoding, as opposed to reading and decoding.
Haven't used this component of theirs, but have a look: it is C#, and you can obtain the source code, but again, not free.
<http://www.idautomation.com/barcode-recognition/> | The [ClearImage Barcode Recognition SDK for .NET](http://www.inliteresearch.com/barcode-recognition/) is probably the easiest way to decode PDF 417 and many other barcodes. I use it in many projects... although it is not free
```
var bitmap = WpfImageHelper.ConvertToBitmap(_BarcodeCam.BitmapSource);
_ImageEditor.Bitmap = bitmap;
_ImageEditor.AutoDeskew();
_ImageEditor.AdvancedBinarize();
var reader = new BarcodeReader();
reader.Horizontal = true;
reader.Vertical = true;
reader.Pdf417 = true;
//_ImageEditor.Bitmap.Save("c:\\barcodeimage.jpg", System.Drawing.Imaging.ImageFormat.Jpeg);
var barcodes = reader.Read(_ImageEditor.Bitmap);
if (barcodes.Count() > 0)
{
    // at least one barcode was decoded; process the results here
}
``` | Reading and decoding PDF-417 barcodes stored in an image or PDF file from within a .NET application | [
"",
"c#",
".net",
"barcode",
"decode",
""
] |
I see this flag a lot in the makefiles. What does it mean and when should it be used? | Optimization level 2.
From the GCC man page:
> [-O1](https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html#index-O1) Optimize. Optimizing compilation takes somewhat more time, and a lot
> more memory for a large function.
>
> [-O2](https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html#index-O2) Optimize even more. GCC performs nearly all supported optimizations
> that do not involve a space-speed
> tradeoff. The compiler does not
> perform loop unrolling or function
> inlining when you specify -O2. As
> compared to -O, this option increases
> both compilation time and the
> performance of the generated code.
>
> [-O3](https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html#index-O3) Optimize yet more. -O3 turns on all optimizations specified by -O2 and
> also turns on the -finline-functions,
> -funswitch-loops, -fpredictive-commoning, -fgcse-after-reload and -ftree-vectorize options.
>
> [-O0](https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html#index-O0) Reduce compilation time and make debugging produce the expected
> results. This is the default.
>
> [-Os](https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html#index-Os) Optimize for size. -Os enables all -O2 optimizations that do not typically increase code size. It also
> performs further optimizations
> designed to reduce code size. | Optimization [level 2](https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html#index-O2). The maximum is [3](https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html#index-O3).
See: *[Options That Control Optimization](http://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html)*
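To see the flag in action, here is a minimal command-line sketch (assuming gcc is installed; the file paths are just for illustration):

```shell
# Build the same tiny program without and with optimization.
cat > /tmp/sum.c <<'EOF'
#include <stdio.h>
int main(void) {
    long long s = 0;
    for (long long i = 1; i <= 1000000; i++) s += i;
    printf("%lld\n", s);
    return 0;
}
EOF
gcc -O0 -o /tmp/sum_o0 /tmp/sum.c   # default: no optimization, best for debugging
gcc -O2 -o /tmp/sum_o2 /tmp/sum.c   # -O2: same observable behavior, faster code
/tmp/sum_o2                         # prints 500000500000 either way
```

At -O2 the compiler is typically allowed to fold the whole loop into a constant; the observable output stays the same, which is the point of the "no space-speed tradeoff" guarantee quoted above.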
Note that a few years ago, [-O3](https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html#index-O3) could cause some glitches by excessively "optimizing" the code. AFAIK, that's no longer true with modern versions of GCC. But out of inertia, [-O2](https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html#index-O2) is considered "the maximum safe". | Meaning of gcc -O2 | [
"",
"c++",
"c",
"gcc",
""
] |
So I have these checkboxes:
```
<input type="checkbox" name="type" value="4" />
<input type="checkbox" name="type" value="3" />
<input type="checkbox" name="type" value="1" />
<input type="checkbox" name="type" value="5" />
```
And so on. There are about 6 of them and they are hand-coded (i.e. not fetched from a db), so they are likely to remain the same for a while.
My question is how I can get them all in an array (in JavaScript), so I can use them while making an AJAX `$.post` request using jQuery.
Any thoughts?
Edit: I would only want the selected checkboxes to be added to the array | Formatted:
```
$("input:checkbox[name=type]:checked").each(function(){
yourArray.push($(this).val());
});
```
Hopefully, it will work. | **Pure JS**
For those who don't want to use jQuery
```
var array = []
var checkboxes = document.querySelectorAll('input[type=checkbox]:checked')
for (var i = 0; i < checkboxes.length; i++) {
array.push(checkboxes[i].value)
}
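// The same logic factored into a reusable helper (the function name here is
// just illustrative, not part of any library). It accepts any array-like of
// objects with `checked` and `value`, e.g. the NodeList returned above,
// which also makes it easy to test without a DOM.
function checkedValues(boxes) {
  return Array.from(boxes)
    .filter(function (box) { return box.checked })
    .map(function (box) { return box.value })
}
// e.g. var values = checkedValues(document.querySelectorAll('input[name=type]'))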
``` | Getting all selected checkboxes in an array | [
"",
"javascript",
"jquery",
"ajax",
"dhtml",
""
] |
Recently, I took it upon myself to try and learn OO programming. It has been about 3 months since I started, but I think I might be *missing the point* because I seem to prefer static methods (which seem 'easier' to me).
Example
Here is what a typical DB query looks like in my code.
```
$bindings = array(':name'=>$articleName);
Db::query('SELECT id, name, title, image, content FROM ' . CONFIG_MYSQL_TABLE_PREFIX . 'articles WHERE name = :name LIMIT 1', $bindings);
```
And here is how I resize/crop/cache images
```
$image = Img::thumbnail($imagePath, 200);
$imgHtml = '<img alt="' . $this->getTitle() . '" src="' . '' . $image['src'] . '" width="' . $image['width'] . '" height="' . $image['height'] . '" />';
```
Both of the static methods utilise a singleton pattern: the first one creates one PDO object and the second one creates one instance of an ImageResize class I found on Google Code.
Should these be 2 objects if I really wanted to call it object oriented programming? i.e.
```
$db = new Db();
$image = new Image($src, $width, $height);
```
For everytime I use them? I've read singletons are also a bad idea unless they're being used for logging to a file. But isn't a singleton good for one DB connection being opened when needed and closed only after it's been used and finished with?
My question is, am I still stuck in the procedural mindset, and if so, is what I'm doing considered bad practise? How can I immerse myself in the correct OO thought patterns?
## Update
Thanks for the answers. I do find the original methods I'm doing are *easier* as I have to type less code and let the static methods worry about little implementation things.
I will look into another language to get a solid grasp of OO, which language though will be another question itself. | Well, imho PHP is a bad example for this because [PHP is not object-oriented](http://michaelkimsal.com/blog/php-is-not-object-oriented/). Yes it has objects. Yes they support inheritance and all those OO principles. It supports objects. There's a difference.
I say this because PHP doesn't, by default, retain any state between requests. Every single HTTP request will completely recreate the PHP environment from scratch (which is reasonably cheap), meaning there is no static data persisted between requests. You might say "what about session data?" (and maybe append an "a ha!") but that isn't persistent data in a PHP sense either. It's (typically) stored in the filesystem and keyed by a cookie the client sends.
Why do I mention these two things?
Because the "global" scope is not like the global scope in C, Java, C++ or these other languages, where it tends to persist between requests. PHP is more like the CGI programming model from the 90s (which is no coincidence because that's where it originated).
So your objects aren't truly global: they are simply visible to all parts of the code servicing the current request.
To me, that's nowhere near as bad. In fact, I often find it quite acceptable. Sometimes it's even necessary (e.g. in a callback to preg\_replace\_callback if you want to send information back to the caller or pass state to the callback without doing eval()/create\_function() hacks).
And the point about PHP not being object-*oriented* is because even in PHP 5 OO features are still somewhat "tacked on", meaning you could quite happily code away and code well in PHP without ever using them. This is different to, say, Java where you have to create a class even if all you do is write a bunch of static methods in it.
So if you want to learn OO, honestly I wouldn't do it in PHP. PHP is good for a lot of things but it's designed to have an HTTP request lifecycle (yes I know you can run it from the command line but that's not what the vast majority of users do) and it's quite good at the job it's designed for. | The best way to grasp object-oriented programming is to think of objects passing messages to each other, not objects calling functions. I had this "eureka" moment when I learned Smalltalk.
There are principles that apply to OOP, such as the "tell, don't ask" principle and others. Use your favourite search engine to look for those principles.
An abundance of static methods is, in my opinion, a sign of being stuck in a procedural mindset. Sure, there are scenarios where they really make sense, but if your code has more static methods than instance methods I'd say you're not doing things the OO way. | Have I missed the point of object oriented programming? | [
"",
"php",
"oop",
""
] |
Hi there, I want to read a file that lies on the server.
I get the path to the file from a parameter:
```
<PARAM name=fileToRead value="http://someserver.de/file.txt">
```
When I now start the applet, the following error occurs:
Caused by: java.lang.IllegalArgumentException: URI scheme is not "file"
Can someone give me a hint?
```
BufferedReader file;
String strFile = new String(getParameter("fileToRead"));
URL url = new URL(strFile);
URI uri = url.toURI();
try {
File theFile = new File(uri);
file = new BufferedReader(new FileReader(new File(uri)));
String input = "";
while ((input = file.readLine()) != null) {
words.add(input);
}
} catch (IOException ex) {
Logger.getLogger(Hedgeman.class.getName()).log(Level.SEVERE, null, ex);
}
``` | You are trying to open, as a file, something which doesn't use the file:// URI scheme, as the error suggests.
If you want to use a URL, I suggest you just use url.openStream() which should be simpler. | ```
File theFile = new File(uri);
```
is not the correct method. You are accessing a URL, not a File.
Your code should look like this:
```
try
{
URL url = new URL(strFile);
InputStream in = url.openStream();
(... read file...)
in.close();
} catch(IOException err)
{
(... process error...)
}
``` | read file in an applet | [
"",
"java",
"file",
"applet",
""
] |
I'm currently brushing up on LINQ and am trying to comprehend the difference between the `let` keyword and the `into` keyword. So far the `let` keyword seems better than the `into` keyword, as far as my understanding goes.
The `into` keyword essentially allows one to continue a query after a projection. (Just want to state explicitly that I'm not referring to the one for group join.)
Given an array of names it allows one to do the following:
```
var intoQuery =
from n in names
select Regex.Replace(n, "[aeiou]", "")
into noVowel
where noVowel.Length > 2
select noVowel;
```
It takes the result of the select and places it into the `noVowel` variable which then allows one to introduce additional `where`, `orderby`, and `select` clauses. Once the `noVowel` variable is created, the `n` variable is no longer available.
The `let` keyword, on the other hand, uses temp anonymous types to allow you to reuse more than one variable at a time.
You can do the following:
```
var letQuery =
from n in names
let noVowel = Regex.Replace(n, "[aeiou]", "")
where noVowel.Length > 2
select noVowel;
```
Both the `noVowel` and `n` variables are available for use (even though I haven't used it in this case).
While I can see the difference, I can't quite understand why one would want to use the `into` keyword over the `let` keyword unless one explicitly wanted to make sure that previous variables were not able to be used in later parts of the query.
So, is there a good reason why both keywords exist? | Yes, because they're doing different things, as you've said.
`select ... into` effectively isolates the whole of one query and lets you use it as the input to a new query. Personally I *usually* prefer to do this via two variables:
```
var tmp = from n in names
select Regex.Replace(n, "[aeiou]", "");
var noVowels = from noVowel in tmp
where noVowel.Length > 2
select noVowel;
```
(Admittedly in this case I would do it with dot notation in two lines, but ignoring that...)
Often you don't *want* the whole baggage of the earlier part of the query - which is when you use `select ... into` or split the query in two as per the above example. Not only does that mean the earlier parts of the query can't be used when they shouldn't be, it simplifies what's going on - and of course it means there's potentially less copying going on at each step.
On the other hand, when you *do* want to keep the rest of the context, `let` makes more sense. | The primary difference is that `let` injects the variable into the current context/scope, whereas `into` creates a new context/scope. | Is linq's let keyword better than its into keyword? | [
"",
"c#",
"linq",
""
] |
Do you know where I can find a high-level explanation of the [Lucene Similarity Class](http://lucene.apache.org/java/2_2_0/api/org/apache/lucene/search/Similarity.html) algorithm? I would like to understand it without having to decipher all the math and terms involved with searching and indexing. | Lucene's built-in `Similarity` is a fairly standard ["Inverse Document Frequency"](http://en.wikipedia.org/wiki/Tf%E2%80%93idf) scoring algorithm. The Wikipedia article is brief, but covers the basics. The book [*Lucene in Action*](http://www.amazon.com/gp/search?field-isbn=978-1932394283) breaks down the Lucene formula in more detail; it doesn't mirror the current Lucene formula perfectly, but all of the main concepts are explained.
Primarily, the score varies with the number of times a term occurs in the current document (the *term frequency*), and *inversely* with the number of times the term occurs in all documents (the *document frequency*). The other factors in the formula are secondary, adjusting the score in an attempt to make scores from different queries fairly comparable to each other. | Think of each document and search term as a vector whose coordinates represent some measure of how important each word in the entire corpus of documents is to that particular document or search term. Similarity tells you the distance between two different vectors.
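To make that concrete, here is a toy sketch of the vector idea (illustrative only: plain tf-idf weights plus cosine similarity, not Lucene's actual formula; all names and data are made up):

```python
import math

# Illustrative tf-idf + cosine similarity sketch -- not Lucene's exact formula.
docs = [
    ["lucene", "search", "index"],
    ["lucene", "query", "score"],
    ["cooking", "recipe", "pasta"],
]

def tf_idf(term, doc, docs):
    tf = doc.count(term)                       # term frequency in this document
    df = sum(1 for d in docs if term in d)     # document frequency in the corpus
    idf = math.log(len(docs) / df) + 1         # inverse document frequency
    return tf * idf

def vector(doc, docs, vocab):
    # One coordinate per vocabulary term: how important that term is to this doc.
    return [tf_idf(t, doc, docs) for t in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

vocab = sorted({t for d in docs for t in d})
v0, v1, v2 = (vector(d, docs, vocab) for d in docs)
# The two "lucene" documents score closer to each other than to the cooking one.
print(cosine(v0, v1) > cosine(v0, v2))   # True
```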
Say your corpus is normalized to ignore some terms, then a document consisting only of those terms would be located at the origin of a graph of all of your documents in the vector space defined by your corpus. Each document that contains some other terms, then represents a point in the space whose coordinates are defined by the importance of that term in the document relative to that term in the corpus. Two documents (or a document and search) whose coordinates put their "points" closer together are more similar than those with coordinates that put their "points" further apart. | High level explanation of Similarity Class for Lucene? | [
"",
"java",
"search",
"lucene",
"search-engine",
""
] |
Here's my situation: I'm building a small website where, when registering, you can select which days you're available for something.
Mondaymorning
Mondayafternoon
Tuesdaymorning
Tuesdayafternoon
etc...
All the way to sundayafternoon.
I'm form-checking this, of course, in PHP, and I need to declare 14 variables and go over each one to determine whether or not the checkbox was clicked.
Now I'm wondering, isn't there an easier way? Now, these are checkboxes; it's not like uploading files, where you just add [] to the name attribute in HTML and they get uploaded as an array.
But that would be handy. Does anybody know how I can get something like this going? | Actually, you can just add `[]` to the end of the names, just make sure that you set all their values differently.
```
<input type="checkbox" name="available[]" value="Mon AM" />
<input type="checkbox" name="available[]" value="Mon PM" />
```
Then just loop over the `$_POST['available']` array in your form-processing code. | Building on Chad's answer, here's the code you would use to go through the variables:
```
foreach($_POST['available'] as $a)
{
//Do stuff with $a
}
``` | Having PHP receive the days you're available without tons of variables | [
"",
"php",
"html",
"arrays",
""
] |
Hi, I want to create a simple stored procedure which does the following:
Pseudocode
```
@tempSelect = "SELECT * FROM Table"
if (@Param is NULL) then
exec @tempSelect
else
    exec @tempSelect + ' WHERE id = ' + @Param
```
Is this method efficient? Thank you. | Try
```
select *
from table
where id=isnull(@param, id)
``` | Select \* from Table
Where (ID = @Param or @Param is null)
Or
Select \* from Table
Where ID=Coalesce(@Param, ID)
[And if you are aiming for efficiency, replace \* with the specific field you want to return.] | SQL Stored Procedure: Conditional Return | [
"",
"sql",
"stored-procedures",
"branch",
""
] |
```
XmlElement updateRecipient = doc.CreateElement("UpdateRecipient");
XmlElement email = doc.CreateElement("EMAIL");
XmlElement listID = doc.CreateElement("LIST_ID");
XmlElement column = doc.CreateElement("COLUMN");
XmlElement name = doc.CreateElement("NAME");
XmlElement value = doc.CreateElement("VALUE");
root.AppendChild(body);
body.AppendChild(updateRecipient);
updateRecipient.AppendChild(listID);
listID.InnerText = _listID;
updateRecipient.AppendChild(email);
email.InnerText = _email;
updateRecipient.AppendChild(column);
column.AppendChild(name);
name.InnerText = _columnNameFrequency;
column.AppendChild(value);
value.InnerText = _actionID.ToString();
updateRecipient.AppendChild(column);
column.AppendChild(name);
name.InnerText = _columnNameStatus;
column.AppendChild(value);
```
For some reason, I end up getting only one COLUMN sub-element instead of two under the UpdateRecipient element. I need both to show up under the UpdateRecipient node, like this:
```
<UpdateRecipient>
<LIST_ID>85628</LIST_ID>
<EMAIL>somebody@domain.com</EMAIL>
<COLUMN>
<NAME>Frequency</NAME>
<VALUE>1</VALUE>
</COLUMN>
<COLUMN>
<NAME>Status</NAME>
<VALUE>Opted In</VALUE>
</COLUMN>
</UpdateRecipient>
```
but so far I am getting only one:
```
<UpdateRecipient>
<LIST_ID>85628</LIST_ID>
<EMAIL>somebody@domain.com</EMAIL>
<COLUMN>
<NAME>Status</NAME>
<VALUE>Opted In</VALUE>
</COLUMN>
</UpdateRecipient>
```
When it hits the first AppendChild(column) and then name and value, Frequency shows fine, but it is later overridden by Status. I want it to just append a new `<COLUMN>` under `<UpdateRecipient>`, and I'm not sure why it's overriding rather than adding another tag. | The problem is that you are reusing the "column", "name", & "value" variables. You need to create new XmlElements for the 2nd set. | I don't know, but try doing it in the opposite order. That's what I've always done. Don't append updateRecipient to the root until you're through with it. | XmlReader AppendChild is not appending same child value | [
"",
"c#",
"xml",
""
] |
I have an interface like so:
```
public interface IDocument : ISerializable
{
Boolean HasBeenUploaded { get; set; }
void ISerializable.GetObjectData(SerializationInfo, StreamingContext) { }
}
```
There are three documents that inherit from this, all of which serialize just fine. But when creating a simple web service, that does nothing, where they can be uploaded to...
```
public class DCService : System.Web.Services.WebService
{
[WebMethod]
public Boolean ReceiveDocument(IDocument document)
{
DBIO io = new DBIO();
return io.InsertIntoDB(document); // does nothing; just returns true
}
}
```
I get this when trying to run it: "Cannot serialize interface IDocument"
I'm not quite sure why this would be a problem. I know that [some people](https://stackoverflow.com/questions/360208/enforcing-serializable-from-an-interface-without-forcing-classes-to-custom-serial) have had trouble because they didn't want to force subclasses to implement custom serialization but I do, and up to this point it has been successful.
edit> If I create individual webmethods that accept the objects that implement the interface, it works fine, but that weakens the contract between the client/server (and undermines the purpose of having the interface in the first place) | You may need to add an XmlInclude attribute to your web method. An example can be found [here](http://www.pluralsight.com/community/blogs/craig/archive/2004/07/08/1580.aspx). We have run into this issue before and have added XmlInclude attributes to both our web service proxy class on the client and to certain web service methods.
```
[WebMethod]
[XmlInclude(typeof(MyDocument))]
public Boolean ReceiveDocument(IDocument document)
{
DBIO io = new DBIO();
return io.InsertIntoDB(document); // does nothing; just returns true
}
``` | ASP.NET needs to be able to tell which specific class it will instantiate when calling that method. This is why it works when defining multiple methods with the specific classes, i.e. the call tells it which class to use.
Consider whether you want the client to send the same set of info for any document, or if you really need to be able to send different info for different documents. With the latter, you need the client to know the classes that implement IDocument, and you do this with the XmlInclude (as firedfly posted).
If you instead want to always send the same info and not know about the specific classes, define a class with that info and that is what you receive in the methods. If you do need to work with IDocument in the rest of the service code, have appropriate logic in the service that gets you an IDocument instance using the data received. | web service can't serialize an interface | [
"",
"c#",
"web-services",
"serialization",
"interface",
""
] |
I have a table in my database that I use to manage relationships across my application. It's pretty basic in its nature - parentType, parentId, childType, childId... all as ints. I've done this setup before, but I did it with a switch/case setup when I had 6 different tables I was trying to link. Now I have 30 tables that I'm trying to do this with and I would like to be able to do this without having to write 30 case entries in my switch command.
Is there a way that I can make reference to a .Net class using a string? I know this isn't valid (because I've tried several variations of this):
```
Type t = Type.GetType("WebCore.Models.Page");
object page = new t();
```
I know how to get the Type of an object, but how do I use that on the fly to create a new object? | This link should help:
<https://learn.microsoft.com/en-us/dotnet/api/system.activator.createinstance>
Activator.CreateInstance will create an instance of the specified type.
You could wrap that in a generic method like this:
```
public T GetInstance<T>(string type)
{
return (T)Activator.CreateInstance(Type.GetType(type));
}
``` | If the type is known by the caller, there's a better, faster way than using Activator.CreateInstance: you can instead use a generic constraint on the method that specifies it has a default parameterless constructor.
Doing it this way is type-safe and doesn't require reflection.
```
T CreateType<T>() where T : new()
{
return new T();
}
``` | Dynamically create an object of <Type> | [
"",
"c#",
".net",
"dynamic",
""
] |
I'm developing an application that needs to keep executing until a countdown completes. When the handheld turns off the screen, the countdown halts. How can I continue the execution when this situation happens? | I assume you mean that you want your code to continue executing after the device suspends? First off, you can't. When the device suspends, the processor stops running. You have a couple of options though. You can periodically call SystemIdleTimerReset to prevent the device from suspending, run it in "[unattended mode](http://blogs.msdn.com/windowsmobile/archive/2004/11/29/271991.aspx)" so the backlight shuts off but the device does not suspend, or use an API like [CeRunAppAtTime](http://msdn.microsoft.com/en-us/library/aa931253.aspx) to periodically wake the processor to run your code.
<http://www.codeproject.com/KB/mobile/WiMoPower1.aspx>
Additionally information on starting an application due to various conditions from managed code can be found in a CodeProject.com article too.
<http://www.codeproject.com/KB/mobile/WiMoAutostart.aspx> | How to continue execution when mobile device sleeps? | [
"",
"c#",
"windows-mobile",
""
] |
First time poster, be gentle ;-)
I'm writing an audio app (in C++) which runs as a Windows service, uses WASAPI to take samples from the line in jack, and does some processing on it.
Something I've noticed is that when my app is "recording", Windows won't automatically suspend or hibernate.
I've registered for power event notifications and, if I push the suspend button myself, my service gets the appropriate power events and handles them ok. If I leave the system to suspend on its own, the power events are never received.
If I remove the bits of code where I reference WASAPI, the power events are received as normal on both manual and automatic suspend. So it seems like there's something about using WASAPI that tells Windows to ignore the automatic suspend timer.
Can anyone help explain this behavior, and is there anything I can do to stop it? I don't want my app to be one of those which misbehaves and prevents systems from suspending. | Many thanks to Larry for confirming this behaviour is by design and not me doing something silly.
To work around this issue I used the Win32 `CallNtPowerInformation()` API to retrieve the system idle timer:
```
SYSTEM_POWER_INFORMATION spi = {0};
NTSTATUS status = CallNtPowerInformation(SystemPowerInformation, NULL, 0,
&spi, sizeof(spi));
if (NT_SUCCESS(status) && (spi.TimeRemaining==0))
{
// should have gone to sleep
}
```
The `spi.TimeRemaining` member counts down (in seconds) from the time specified by the user in Control Panel e.g. "System standby after 1 hour", and gets reset whenever CPU usage (as a percentage) rises above `spi.MaxIdlenessAllowed`.
If `spi.TimeRemaining` ever reaches zero, the system *should* have gone to sleep, so I close all my WASAPI handles and let it do so. | Unfortunately there's no mechanism to do what you want - opening an audio stream prevents power state transitions, as does opening a file over the network and any one of a number of other things.
This is a function of the audio driver (portcls.sys) and not WASAPI and is not a new behavior for Vista - I believe that XP and Win2K had similar behaviors (although power state transitions are much more reliable on Vista than they were on XP and Win2K so users tend to depend on them more).
On Windows 7 you can use the "powercfg -requests" to find if any parts of the system are preventing a machine from entering sleep. More information on that can be found [here](http://channel9.msdn.com/pdc2008/PC02/) | WASAPI prevents Windows automatic suspend? | [
"",
"c++",
"audio",
"windows-vista",
"suspend",
""
] |
I am really struggling to get to that "magic" moment when WCF is configured nicely and jQuery is structuring its requests/understanding responses nicely.
I have a service:
```
<%@ ServiceHost Language="C#" Debug="true" Service="xxx.yyy.WCF.Data.ClientBroker" Factory="System.ServiceModel.Activation.WebScriptServiceHostFactory" %>
```
This was recommended by the man [Rick Strahl](http://www.west-wind.com/WebLog/posts/310747.aspx) to avoid having to define the behaviours within Web.config.
My interface for the WCF service sits in another assembly:
```
namespace xxx.yyy.WCF.Data
{
[ServiceContract(Namespace = "yyyWCF")]
public interface IClientBroker
{
[OperationContract]
[WebInvoke(Method="POST",BodyStyle=WebMessageBodyStyle.Wrapped,ResponseFormat=WebMessageFormat.Json)]
IClient GetClientJson(int clientId);
}
}
```
The concrete service class is:
```
namespace xxx.yyy.WCF.Data
{
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
class ClientBroker : IClientBroker
{
public IClient GetClientJson(int clientId)
{
IClient client=new Client();
// gets and returns an IClient
return client;
}
}
}
```
My IClient is an Entity Framework class so is decorated with DataContract/DataMember attributes appropriately.
I am trying to call my WCF service using the methods outlined on Rick Strahl's blog at <http://www.west-wind.com/weblog/posts/324917.aspx> (the "full fat" version). The debugger jumps into the WCF service fine (so my jQuery/JSON is being understood) and gets the IClient and returns it. However, when I return the response, I get various useless errors. The errors I am getting back don't mean much.
I am using POST.
Am I right to be using an Interface instead of a concrete object? As it does get into the WCF service, it does seem to be the encoding of the result that is failing.
Does anyone have any ideas? | At first glance there are three problems with your code:
1: you should use the [ServiceKnownTypeAttribute](http://msdn.microsoft.com/en-us/library/system.servicemodel.serviceknowntypeattribute.aspx) to specify known types when exposing only base types in your operation contracts:
```
[ServiceContract(Namespace = "yyyWCF")]
public interface IClientBroker
{
[OperationContract]
[ServiceKnownType(typeof(Client))]
[WebInvoke(
Method="GET",
BodyStyle=WebMessageBodyStyle.WrappedRequest,
ResponseFormat=WebMessageFormat.Json)]
IClient GetClientJson(int clientId);
}
```
2: You should use `WebMessageBodyStyle.WrappedRequest` instead of `WebMessageBodyStyle.Wrapped` because the latter is not compatible with [WebScriptServiceHostFactory](http://msdn.microsoft.com/en-us/library/system.servicemodel.activation.webscriptservicehostfactory.aspx).
3: IMHO using Method="GET" would be more RESTful for a method called GetClientJson than Method="POST"
Another advice I could give you when working with WCF services is to use [SvcTraceViewer.exe](http://msdn.microsoft.com/en-us/library/ms732023.aspx) bundled with Visual Studio. It is a great tool for debugging purposes. All you need is to add the following section to your app/web.config:
```
<system.diagnostics>
<sources>
<source name="System.ServiceModel"
switchValue="Information, ActivityTracing"
propagateActivity="true">
<listeners>
<add name="sdt"
type="System.Diagnostics.XmlWriterTraceListener"
initializeData= "WcfDetailTrace.e2e" />
</listeners>
</source>
</sources>
</system.diagnostics>
```
Then invoke the web method and WcfDetailTrace.e2e file will be generated in your web site root directory. Next open this file with SvcTraceViewer.exe and you will see lots of useful information. For example it could say:
> Cannot serialize parameter of type
> 'MyNamespace.Client' (for operation
> 'GetClientJson', contract
> 'IClientBroker') because it is not the
> exact type 'MyNamespace.IClient' in
> the method signature and is not in the
> known types collection. In order to
> serialize the parameter, add the type
> to the known types collection for the
> operation using
> ServiceKnownTypeAttribute.
Of course you should not forget to comment out this section before going into production, or you might end up with some pretty big files. | I am 99% sure you can't return an interface. I don't think interfaces are serializable.
Check out this [thread](http://social.msdn.microsoft.com/Forums/en-US/wcf/thread/31102bd8-0a1a-44f8-b183-62926390b3c3/) | JQuery/WCF without ASP.NET AJAX: | [
"",
"c#",
"jquery",
"wcf",
"json",
".net-3.5",
""
] |
I've added a Notify Icon to my app, and quite often I see up to 3 copies of the notify icon in my systray. Is there a reason for this?
Is there a way to stop it from happening?
Often this persists after my app has closed, until I mouse over the systray and the systray expands and collapses, and then they all disappear. | Is this while you are debugging your application? If so, this is because the messages that remove the icon from the system tray are only sent when the application exits normally; if it terminates because of an exception or because you terminate it from Visual Studio, the icon will remain until you mouse over it.
```
parentWindow.Closing += (object sender, CancelEventArgs e) =>
{
notifyIcon.Visible = false;
notifyIcon.Icon = null;
notifyIcon.Dispose();
};
``` | Why am I seeing multiple Systray Icons? | [
"",
"c#",
"systray",
""
] |
I'm sure I am missing some of the basics here so please bear with me.
I have two objects, a SalesOrder and a SalesOrderLineItem. The line items are kept in an `ObservableCollection<SalesOrderLineItem>` of SalesOrder, and if I add line items my databound ListView knows about the new items.
Now here's where I am having a problem: the SalesOrder has a read-only property called "OrderTotal", which knows what the total price from all the line items combined is.
If I change the quantity, or price of a line item, I don't seem to know how to get change notification to bubble up to "OrderTotal".
Both classes inherit from, and fire `INotifyPropertyChanged`.
What am I missing? | The ObservableCollection only notifies when an Item is added, removed, or when the list is refreshed, not when a property of an Item within the collection changes.
The SalesOrder class needs to be listening to each SalesOrderLineItem's PropertyChanged event.
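As a language-agnostic illustration of that wiring (a hypothetical Python sketch with made-up names — in the C#/WPF version you would hook each item's PropertyChanged event, typically from the collection's CollectionChanged handler, and raise PropertyChanged for "OrderTotal" on the SalesOrder):

```python
# Hypothetical sketch of the "parent listens to each child" pattern.
class LineItem:
    def __init__(self, price, quantity):
        self._price = price
        self._quantity = quantity
        self._listeners = []              # callbacks interested in changes

    def subscribe(self, callback):
        self._listeners.append(callback)

    @property
    def total(self):
        return self._price * self._quantity

    def set_quantity(self, quantity):
        self._quantity = quantity
        for cb in self._listeners:        # bubble the change up
            cb()


class SalesOrder:
    def __init__(self):
        self._items = []
        self.order_total = 0

    def add_item(self, item):
        self._items.append(item)
        item.subscribe(self._on_item_changed)   # listen to each line item
        self._on_item_changed()

    def _on_item_changed(self):
        # In WPF this is where SalesOrder would raise
        # PropertyChanged("OrderTotal") after recomputing.
        self.order_total = sum(i.total for i in self._items)


order = SalesOrder()
item = LineItem(price=10, quantity=2)
order.add_item(item)
print(order.order_total)   # 20
item.set_quantity(3)
print(order.order_total)   # 30
```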
-
The OrderTotal property could also be dynamic, generating the total each time... | If you feel adventurous, you can try [BindableLinq](http://www.codeplex.com/BindableLinq) or something similar.
It allows you to expose the result of a query that listens to changes.
But it's not out of beta yet, and I'm not sure it will ever be...
[ContinuousLinq](http://www.codeplex.com/clinq) is another one, but it does not support Silverlight at all yet.
Other than that, you'll have to listen to each child, and add/remove handlers whenever the collection changes. | How do I pass a property changed event to an ancestor? | [
"",
"c#",
"wpf",
"data-binding",
".net-3.5",
""
] |
I was digging around in MSDN and found [this article](http://msdn.microsoft.com/en-us/library/ms229030(VS.80).aspx) which had one interesting bit of advice: **Do not have public members that can either throw or not throw exceptions based on some option.**
For example:
```
Uri ParseUri(string uriValue, bool throwOnError)
```
Now of course I can see that in 99% of cases this would be horrible, but is its occasional use justified?
One case I have seen it used is with an "AllowEmpty" parameter when accessing data in the database or a configuration file. For example:
```
object LoadConfigSetting(string key, bool allowEmpty);
```
In this case, the alternative would be to return null. But then the calling code would be littered with null references check. (And the method would also preclude the ability to actually allow null as a specifically configurable value, if you were so inclined).
What are your thoughts? Why would this be a big problem? | I think it's definitely a bad idea to have a throw / no-throw decision based off of a boolean, namely because it requires developers looking at a piece of code to have functional knowledge of the API to determine what the boolean means. This is bad on its own, but when it changes the underlying error handling it can make it very easy for developers to make mistakes while reading code.
It would be much better and more readable to have 2 APIs in this case.
```
Uri ParseUriOrThrow(string value);
bool TryParseUri(string value, out Uri uri);
```
In this case it's 100% clear what these APIs do.
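The same split can be sketched outside C# (a hypothetical Python version — the function names and the use of `urllib.parse` here are illustrative, not from the original answer):

```python
# Hypothetical Python analogue of the throwing / non-throwing API pair.
from urllib.parse import urlparse

def parse_uri(value):
    """Throwing variant: invalid input raises ValueError."""
    parts = urlparse(value)
    if not parts.scheme:
        raise ValueError(f"not a valid URI: {value!r}")
    return parts

def try_parse_uri(value):
    """Non-throwing variant: returns None on failure instead of raising."""
    try:
        return parse_uri(value)
    except ValueError:
        return None

print(parse_uri("http://example.com").scheme)   # http
print(try_parse_uri("not a uri"))               # None
```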
Article on why booleans are bad as parameters: <http://blogs.msdn.com/jaredpar/archive/2007/01/23/boolean-parameters.aspx> | It's usually best to choose one error handling mechanism and stick with it consistently. Allowing this sort of flip-flop code can't really improve the life of developers.
In the above example, what happens if parsing fails and throwOnError is false? Now the user has to guess if NULL if going to be returned, or god knows...
True there's an ongoing debate between exceptions and return values as the better error handling method, but I'm pretty certain there's a consensus about being consistent and sticking with whatever choice you make. The API can't surprise its users and error handling should be part of the interface, and be as clearly defined as the interface. | Throw/do-not-throw an exception based on a parameter - why is this not a good idea? | [
"",
"c#",
"exception",
"error-handling",
"parameters",
""
] |
I decided to leave my windows install behind and am now running Debian as my default OS. I have always coded in Windows and specifically with Visual Studio. I am currently trying to get used to compiling my code under linux.
Although I still have a lot of documentation to read, and don't expect you guys to make it too easy for me, it'd still be nice to get some pointers on where to start. I have some specific questions, but feel free to suggest/recommend anything else regarding the subject.
* What are recommended guides on creating a make file, how do I compile from this makefile (do I call g++ myself, do I use 'make'?)
* Looking at other linux software, they almost always seem to have a 'configure' file. What exactly does it do? Does it only check if the required libraries are installed or does it more than just checking requirements?
* How do I link libraries, and how does this relate to my makefile or g++ parameters? In windows I would compile the library, include some header files, tell my linker what additional lib file to link, and copy a dll file. How exactly does this process work in linux?
* Recommendations for code editors? I am currently using nano and I've heard of vim and emacs, but don't know what the benefits of them are over each other. Are there any others, and why would I consider them over any of the previous three? Note: I am not looking for an IDE.
Any help, links to guides & documentation (preferably those that are aimed at beginners) are very much appreciated! | > What are recommended guides on creating a make file, how do I compile from this makefile (do I call g++ myself, do I use 'make'?)
You build from the makefile by invoking "make". And inside your makefile, you compile and link using g++ and ld.
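For instance, a minimal hand-written makefile might look like the sketch below (all file and target names are placeholders; note that recipe lines must be indented with a TAB character):

```makefile
# Minimal illustrative makefile -- file and target names are placeholders.
CXX      = g++
CXXFLAGS = -Wall -O2 -Iinclude   # -I<dir> adds a header search path
LDFLAGS  = -Llib                 # -L<dir> adds a library search path
LDLIBS   = -lm                   # -l<name> links lib<name> (here libm)

myapp: main.o util.o
	$(CXX) $(LDFLAGS) -o $@ main.o util.o $(LDLIBS)

%.o: %.cpp
	$(CXX) $(CXXFLAGS) -c $<

clean:
	rm -f myapp *.o
```

Running `make` builds the default target (`myapp`), recompiling only the .o files whose sources changed; `make clean` removes the build output.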
> Looking at other linux software, they almost always seem to have a 'configure' file. What exactly does it do? Does it only check if the required libraries are installed or does it more than just checking requirements?
It's a script usually used to set up various things based on the environment being used for building. Sometimes it's just a basic shell script, other times it invokes tools like Autoconf to discover what is available when building. The "configure" script is usually also a place for the user to specify various optional things to be built or excluded, like support for experimental features.
> How do I link libraries, and how does this relate to my makefile or g++ parameters? In windows I would compile the library, include some header files, tell my linker what additional lib file to link, and copy a dll file. How exactly does this process work in linux?
ld is the GNU linker. You can invoke it separately (which is what most makefiles will end up doing), or you can have g++ delegate to it. The options you pass to g++ and ld determine where to look for included headers, libraries to link, and how to output the result.
> Recommendations for code editors? I am currently using nano and I've heard of vim and emacs, but don't know what their benefits are over each other. Are there any others, and why would I consider them over any of the previous three? Note: I am not looking for an IDE.
Vim and Emacs are very flexible editors that support a whole bunch of different usages. Use whatever feels best to you, though I'd suggest you might want a few minimal things like syntax highlighting. | Just a note to go with MandyK's answers.
Creating makefiles by hand is usually a very unportable way of building across Linux distros/Unix variants. There are many build systems for auto-generating makefiles, or for building without makefiles at all: [GNU Autotools](http://sources.redhat.com/autobook/), [Cmake](http://www.cmake.org/), [Scons](http://www.scons.org/), [jam](http://www.perforce.com/jam/jam.html), etc.
Also, to go into more depth about configure:
* Checks available compilers, libraries, system architecture.
* Makes sure your system matches the appropriate compatible package list.
* Lets you specify command-line arguments to specialize your build: install path, optional packages, etc.
* Configure then generates an appropriate Makefile specific to your system. | C++ development on linux - where do I start? | [
"",
"c++",
"linux",
"editor",
"makefile",
"linker",
""
] |
What's the best method to bind a linq2sql collection to a GridView, given that I want to use built-in pagination and sorting?
I've tried:
```
public static void BindEnquiryList(EnquiryQuery query, GridView view)
{
DataContext db = DataContextManager.Context;
//view.DataSource = (from e in EnquiryMethods.BuildQuery(query)
view.DataSource = (from e in db.Enquiries
select new
{
Id = e.Id,
Name = e.Name,
PublicId = EnquiryMethods.GetPublicId(e.PublicId),
What = e.WorkType.DescriptionText,
Where = e.EnquiryArea.DescriptionText,
Who = e.EnquiryType0.DescriptionText,
When = e.EnquiryTime0.DescriptionText,
PriceRange = e.EnquiryPrice0.DescriptionText,
DisplayPriceRange = e.EnquiryPrice0.Display,
Description = e.Description,
Published = e.EnquiryPublished
});
view.DataBind();
}
```
And the GridView is set to page size 20 (and properly shows 20), but the execution takes forever to load, and when I log the SQL I get:
```
SELECT [t0].[Id], [t0].[Name], [t0].[PublicId] AS [publicId], [t1].[DescriptionText] AS [What], [t2].[DescriptionText] AS [Where], [t3].[DescriptionText] AS [Who], [t4].[DescriptionText] AS [When], [t5].[DescriptionText] AS [PriceRange], [t5].[Display] AS [DisplayPriceRange], [t0].[Description], [t0].[EnquiryPublished] AS [Published]
FROM [dbo].[Enquirys] AS [t0]
LEFT OUTER JOIN [dbo].[WorkTypes] AS [t1] ON [t1].[Id] = [t0].[EnquiryPlace]
LEFT OUTER JOIN [dbo].[EnquiryAreas] AS [t2] ON [t2].[Id] = [t0].[EnquiryPlace]
LEFT OUTER JOIN [dbo].[EnquiryTypes] AS [t3] ON [t3].[Id] = [t0].[EnquiryType]
LEFT OUTER JOIN [dbo].[EnquiryTimes] AS [t4] ON [t4].[Id] = [t0].[EnquiryTime]
LEFT OUTER JOIN [dbo].[EnquiryPrices] AS [t5] ON [t5].[Id] = [t0].[EnquiryPrice]
-- Context: SqlProvider(Sql2005) Model: AttributedMetaModel Build: 3.5.21022.8
```
Without any TOP clause. I'd have expected the GridView to manage the Take and Skip operations? | The best way is to use the LinqDataSource. Create an event handler for the Selecting event on the LinqDataSource.
```
protected void LinqDataSource1_Selecting(object sender, LinqDataSourceSelectEventArgs e)
{
e.Result = (from e in db.Enquiries
select new
{
Id = e.Id,
Name = e.Name,
PublicId = EnquiryMethods.GetPublicId(e.PublicId),
What = e.WorkType.DescriptionText,
Where = e.EnquiryArea.DescriptionText,
Who = e.EnquiryType0.DescriptionText,
When = e.EnquiryTime0.DescriptionText,
PriceRange = e.EnquiryPrice0.DescriptionText,
DisplayPriceRange = e.EnquiryPrice0.Display,
Description = e.Description,
Published = e.EnquiryPublished
});
}
```
Then the LinqDataSource will handle the paging and sorting for you, and you get compile-time checks on your LINQ query. | I'm not sure, but can you use a LinqDataSource? Otherwise, I think GridView requires that the data be in a DataSet in order for it to do paging and sorting on its own. | Linq2sql/GridView: Pagination not working? | [
"",
"c#",
"sql-server",
"linq-to-sql",
"t-sql",
""
] |
I've got a working PHP script that gets Longitude and Latitude values and then inputs them into a MySQL query. I'd like to make it solely MySQL. Here's my current PHP Code:
```
if ($distance != "Any" && $customer_zip != "") { //get the great circle distance
//get the origin zip code info
$zip_sql = "SELECT * FROM zip_code WHERE zip_code = '$customer_zip'";
$result = mysql_query($zip_sql);
$row = mysql_fetch_array($result);
$origin_lat = $row['lat'];
$origin_lon = $row['lon'];
//get the range
$lat_range = $distance/69.172;
$lon_range = abs($distance/(cos($details[0]) * 69.172));
$min_lat = number_format($origin_lat - $lat_range, "4", ".", "");
$max_lat = number_format($origin_lat + $lat_range, "4", ".", "");
$min_lon = number_format($origin_lon - $lon_range, "4", ".", "");
$max_lon = number_format($origin_lon + $lon_range, "4", ".", "");
$sql .= "lat BETWEEN '$min_lat' AND '$max_lat' AND lon BETWEEN '$min_lon' AND '$max_lon' AND ";
}
```
Does anyone know how to make this entirely MySQL? I've browsed the Internet a bit but most of the literature on it is pretty confusing. | From [Google Code FAQ - Creating a Store Locator with PHP, MySQL & Google Maps](http://web.archive.org/web/20170126150533/https://developers.google.com/maps/articles/phpsqlsearch_v3):
> Here's the SQL statement that will find the closest 20 locations that are within a radius of 25 miles to the 37, -122 coordinate. It calculates the distance based on the latitude/longitude of that row and the target latitude/longitude, and then asks for only rows where the distance value is less than 25, orders the whole query by distance, and limits it to 20 results. To search by kilometers instead of miles, replace 3959 with 6371.
```
SELECT id, ( 3959 * acos( cos( radians(37) ) * cos( radians( lat ) )
* cos( radians( lng ) - radians(-122) ) + sin( radians(37) ) * sin(radians(lat)) ) ) AS distance
FROM markers
HAVING distance < 25
ORDER BY distance
LIMIT 0 , 20;
``` | `$greatCircleDistance = acos( cos($latitude0) * cos($latitude1) * cos($longitude0 - $longitude1) + sin($latitude0) * sin($latitude1));`
with latitude and longitude in radians.
so
```
SELECT
acos(
cos(radians( $latitude0 ))
* cos(radians( $latitude1 ))
* cos(radians( $longitude0 ) - radians( $longitude1 ))
+ sin(radians( $latitude0 ))
* sin(radians( $latitude1 ))
) AS greatCircleDistance
FROM yourTable;
```
is your SQL query
To get your results in km or miles, multiply the result by the mean radius of the Earth (`3959` miles, `6371` km, or `3440` nautical miles).
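As a quick, language-neutral sanity check of the spherical-law-of-cosines expression above (this sketch is not part of the original answer), the same formula can be evaluated directly, e.g. in Python:

```python
import math

def great_circle_miles(lat0, lng0, lat1, lng1):
    """acos(cos a * cos b * cos(dlng) + sin a * sin b) * Earth radius, as in the SQL."""
    lat0, lng0, lat1, lng1 = map(math.radians, (lat0, lng0, lat1, lng1))
    central = (math.cos(lat0) * math.cos(lat1) * math.cos(lng0 - lng1)
               + math.sin(lat0) * math.sin(lat1))
    # Clamp to [-1, 1]: rounding can push the value slightly outside acos's domain.
    return 3959 * math.acos(max(-1.0, min(1.0, central)))

# A quarter of the Earth's circumference along the equator:
print(great_circle_miles(0, 0, 0, 90))  # about 6218.8 miles
```

Swapping `3959` for `6371` gives kilometres, matching the note above.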
The thing you are calculating in your example is a bounding box.
If you put your coordinate data in a [spatial enabled MySQL column](http://dev.mysql.com/doc/refman/5.0/en/gis-introduction.html), you can use [MySQL's build in functionality](http://dev.mysql.com/doc/refman/5.0/en/relations-on-geometry-mbr.html) to query the data.
```
SELECT
id
FROM spatialEnabledTable
WHERE
MBRWithin(ogc_point, GeomFromText('Polygon((0 0,0 3,3 3,3 0,0 0))'))
``` | MySQL Great Circle Distance (Haversine formula) | [
"",
"php",
"mysql",
"great-circle",
""
] |
I am having trouble with a stored procedure (SQL 2005).
I have a table called `tbrm_Tags` with two columns, `TagID` and `TagName`. I want to pass a `TagName` value to the stored procedure and then I want to:
1. Check if the `Tagname` exists and if it does return the `TagID`
2. If the `Tagname` does not exist I want it to insert into the table and return the `TagID`.
Here is the stored procedure I am using:
```
@TagID int = null,
@TagName varchar(50)
AS
DECLARE @returnValue int
BEGIN
IF EXISTS (SELECT * FROM tbrm_Tags WHERE TagName = @TagName)
BEGIN
SELECT
TagID
FROM tbrm_Tags
WHERE TagName = @TagName
END
ELSE
BEGIN
IF NOT EXISTS (SELECT * FROM tbrm_Tags WHERE TagName = @TagName)
INSERT INTO tbrm_Tags
(
TagName
)
VALUES
(
@TagName
)
SELECT @returnValue = @@IDENTITY
END
END
RETURN @returnValue
```
I cannot get the select statement to return the `TagID` when the `Tagname` exists. | Note: don't use `@@IDENTITY` - it is subject to triggers; always use `SCOPE_IDENTITY()`
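(The overall flow the question wants is a classic "get or create"; as a language-neutral illustration only, here it is sketched with Python's built-in sqlite3, where `lastrowid` plays the role of `SCOPE_IDENTITY()`. The sqlite schema is a cut-down stand-in for the real table.)

```python
import sqlite3

def get_or_create_tag(conn, tag_name):
    """Return the TagID for tag_name, inserting the row first if it doesn't exist."""
    row = conn.execute(
        "SELECT TagID FROM tbrm_Tags WHERE TagName = ?", (tag_name,)
    ).fetchone()
    if row is not None:
        return row[0]
    cur = conn.execute("INSERT INTO tbrm_Tags (TagName) VALUES (?)", (tag_name,))
    return cur.lastrowid  # identity of the row we just inserted

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbrm_Tags (TagID INTEGER PRIMARY KEY, TagName TEXT)")
first = get_or_create_tag(conn, "reef")
second = get_or_create_tag(conn, "reef")  # same tag: no new row inserted
print(first, second)  # the same id twice
```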
I might just do:
```
DECLARE @returnValue int
SELECT @returnValue = TagID
FROM tbrm_Tags
WHERE TagName = @TagName
IF @returnValue IS NULL
BEGIN
INSERT tbrm_Tags(TagName)
VALUES (@TagName)
SET @returnValue = SCOPE_IDENTITY()
END
RETURN @returnValue
``` | ```
SELECT
@returnValue = TagID
FROM tbrm_Tags
WHERE TagName = @TagName
``` | SQL 2005 stored procedure to return value or create it if it does not exist | [
"",
"sql",
"stored-procedures",
""
] |
I've got a list view that I'm populating with 8 columns of user data. The user has the option to enable auto refreshing, which causes the ListView to be cleared and repopulated with the latest data from the database.
The problem is that when the items are cleared and repopulated, the visible area jumps back to the top of the list. So if I'm looking at item 1000 of 2000, it's very inconvenient to get back to that item.
Basically, what I'm asking is, how do I get the current scroll distances (x and y) and then restore them? | I had the same problem a while ago and ended up implementing an algorithm to compare the model with the list, so I only added/removed elements that had changed. This way, if there were no massive changes, the list didn't jump to the beginning. And the main thing I wanted to achieve was efficiency (so that the list doesn't blink). | I just wanted to provide some information for those who desperately try to use the buggy ListView.TopItem property:
1. You MUST set the TopItem property AFTER calling ListView.EndUpdate
2. The items of the ListView control MUST have their Text property set to something other
than String.Empty, or the property won't work.
3. Setting the ListView.TopItem throws null reference exceptions intermittently. Always keep this line of code inside a Try...Catch block.
Of course, this will cause the ListView's scrollbar to jump to 0 and back to the location of the top item, which is annoying. Please update this question if you find a workaround to this problem. | WinForms ListView, Remembering Scrolled Location on Reload | [
"",
"c#",
"winforms",
"listview",
".net-2.0",
"listviewitem",
""
] |
My project is an application in which we load various assemblies and perform operations on them.
We are stuck at a situation where we need to add a reference to the assembly we load (which will be selected by user). So I need to add a reference to the DLL at run time.
I tried [this site](http://www.codeproject.com/KB/cs/addrefvsnet.aspx) but here they support only microsoft DLLs like System.Security etc. I want to add a reference to a user created dll (class library). | You can't "add a reference" at runtime - but you can load assemblies - `Assembly.LoadFrom` / `Assembly.LoadFile` etc. The problem is that you can't *unload* them unless you use `AppDomain`s. Once you have an `Assembly`, you can use `assemblyInstance.GetType(fullyQualifiedTypeName)` to create instances via reflection (which you can then cast to known interfaces etc).
For a trivial example:
```
// just a random dll I have locally...
Assembly asm = Assembly.LoadFile(@"d:\protobuf-net.dll");
Type type = asm.GetType("ProtoBuf.ProtoContractAttribute");
object instance = Activator.CreateInstance(type);
```
At which point I can either cast `instance` to a known base-type/interface, or continue to use reflection to manipulate it. | If the assembly is in another location than the current or in the GAC, just use the
***AppDomain.CurrentDomain.AssemblyResolve*** event to deliver the assembly yourself. | Loading assemblies at run-time | [
"",
"c#",
".net",
""
] |
I have a rails application where I store created\_at as datetime (standard).
I am building a form for searching and I find I have to use find\_by\_sql to do some complex subqueries. The form has a date range (no time) to search on for items created\_at field.
The problem I find is that if I pass in just the date string for range to query...
```
... status_changes.created_at between '2009-01-24' and '2009-03-12' ...
```
I am getting back records that have a created\_at date of 2009-01-23 17:10:39 -0800 because this is stored in the db as 2009-01-24 01:10:39 (UTC)
How can I fix this so that the result is not returning the record in question?
It seems I either need to convert the date range to be UTC specific or tell the find\_by\_sql to search based on current time zone instead of reading the column as utc...
Any takers?
John | I know that this is an old question, but in answer to Streamline's query about CONVERT\_TZ:
Unless you have the MySQL named timezone tables loaded (which is pretty unlikely on a vanilla install), you need to enter timezones as an offset from UTC.
```
CONVERT_TZ(status_changes.created_at, '+00:00', '+08:00') between '2009-01-24' and '2009-03-12'
``` | The modern ActiveRecord way of doing this is:
```
Model.where(time_field: date1..date2)
``` | How to Properly Convert or Query Date Range for Rails / MySQL DateTime Column | [
"",
"sql",
"mysql",
"ruby-on-rails",
"ruby",
"datetime",
""
] |
I need to run a shell command asynchronously from a Python script. By this I mean that I want my Python script to continue running while the external command goes off and does whatever it needs to do.
I read this post:
> [Calling an external command in Python](https://stackoverflow.com/questions/89228/how-to-call-external-command-in-python)
I then went off and did some testing, and it looks like `os.system()` will do the job provided that I use `&` at the end of the command so that I don't have to wait for it to return. What I am wondering is if this is the proper way to accomplish such a thing? I tried `commands.call()` but it will not work for me because it blocks on the external command.
Please let me know if using `os.system()` for this is advisable or if I should try some other route. | [`subprocess.Popen`](http://docs.python.org/library/subprocess.html) does exactly what you want.
```
from subprocess import Popen
p = Popen(['watch', 'ls']) # something long running
# ... do other stuff while subprocess is running
p.terminate()
```
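As a self-contained variation that also exercises `poll()` and `communicate()` (this sketch is not part of the original answer; it spawns a Python one-liner as the child so it runs anywhere):

```python
import sys
from subprocess import Popen, PIPE

# Child process that echoes its stdin back to stdout.
child = Popen([sys.executable, "-c", "import sys; sys.stdout.write(sys.stdin.read())"],
              stdin=PIPE, stdout=PIPE)

print(child.poll())                   # None: still running (blocked reading stdin)
out, _ = child.communicate(b"ping")   # feed stdin, close it, wait for exit
print(out)                            # b'ping'
print(child.returncode)               # 0
```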
(Edit to complete the answer from comments)
The Popen instance can do various other things like you can [`poll()`](http://docs.python.org/library/subprocess.html#subprocess.Popen.poll) it to see if it is still running, and you can [`communicate()`](http://docs.python.org/library/subprocess.html#subprocess.Popen.communicate) with it to send it data on stdin, and wait for it to terminate. | If you want to run many processes in parallel and then handle them when they yield results, you can use polling like in the following:
```
from subprocess import Popen, PIPE
import time
running_procs = [
Popen(['/usr/bin/my_cmd', '-i %s' % path], stdout=PIPE, stderr=PIPE)
for path in '/tmp/file0 /tmp/file1 /tmp/file2'.split()]
while running_procs:
for proc in running_procs:
retcode = proc.poll()
if retcode is not None: # Process finished.
running_procs.remove(proc)
break
else: # No process is done, wait a bit and check again.
time.sleep(.1)
continue
# Here, `proc` has finished with return code `retcode`
if retcode != 0:
"""Error handling."""
handle_results(proc.stdout)
```
The control flow there is a little bit convoluted because I'm trying to make it small -- you can refactor to your taste. :-)
**This has the advantage of servicing the early-finishing requests first.** If you call `communicate` on the first running process and that turns out to run the longest, the other running processes will have been sitting there idle when you could have been handling their results. | How can I run an external command asynchronously from Python? | [
"",
"python",
"asynchronous",
"subprocess",
"scheduler",
""
] |
I have a web application that is polling a web service on another server. The server is located on the same network, and is referenced by an internal IP, running on port 8080.
Every 15 secs, a request is sent out, which receives an XML response with job information. 95% of the time this works well; however, at random times the request to the server is null and reports a "response forcibly closed by remote host" error.
Researching this issue, others have set KeepAlive = false. This has not solved the issue. The web server is running .NET 3.5 SP1.
```
Uri serverPath = new Uri(_Url);
// create the request and set the login credentials
_Req = (HttpWebRequest)WebRequest.Create(serverPath);
_Req.KeepAlive = false;
_Req.Credentials = new NetworkCredential(username, password);
_Req.Method = this._Method;
```
Call to the response:
```
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
_ResponseStream = response.GetResponseStream();
```
The method for this is GET. I tried changing the timeout, but the default is large enough to take this into account.
The other request we perform is a POST to post data to the server, and we are getting the same issue randomly as well. There are no firewalls affecting this, and we ruled out the virus scanner. Any ideas to help solve this are greatly appreciated! | Are you closing the response stream and disposing of the response itself? That's the most frequent cause of "hangs" with WebRequest - there's a limit to how many connections you can open to the same machine at the same time. The GC will finalize the connections eventually, but if you dispose them properly it's not a problem. | I wouldn't rule out network issues as a possible reason for problems. Have you run a ping to your server to see if you get dropped packets that correspond to the same times as your failed requests? | HttpWebRequest not returning, connection closing | [
"",
"c#",
"asp.net",
"web-services",
"httpwebrequest",
""
] |
I am writing a PHP/MySQL program and I would like to know how to search across multiple tables using MySQL.
Basically, I have a search box as in the top right of most sites, and when users search something in that box, it needs to search in users.username, users.profile\_text, uploads.title, uploads.description, sets.description and comments.text. I need to get the ID (stored in an id field in each table) and, if possible, a Google-like excerpt. | You can either write your procedure to query each of these tables individually, or you could create a relatively simple view that conglomerates all of the searchable columns of the important tables along with an indicator showing which table they're from. There's not really a magic way to search multiple tables other than writing the statements normally.
The second approach would look something like this:
```
(SELECT 'Table 1' AS TableName, id as Id, text as Searchable
FROM table1)
UNION
(SELECT 'Table 2' AS TableName, table2_id as Id, name as Searchable
FROM table2)
UNION
...
```
Then search on the resulting view. It's important to note that this method won't be fast.
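The union idea can be tried end to end with Python's built-in sqlite3 (a sketch only; the schema below is a cut-down, invented version of the tables in the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users   (id INTEGER PRIMARY KEY, username TEXT, profile_text TEXT);
    CREATE TABLE uploads (id INTEGER PRIMARY KEY, title TEXT, description TEXT);
    INSERT INTO users   VALUES (1, 'coralfan', 'I keep a reef tank');
    INSERT INTO uploads VALUES (7, 'My reef',  'Photos of the tank');
""")

# Search each table's columns, tagging every hit with its source table.
rows = conn.execute("""
    SELECT 'users' AS tbl, id FROM users
     WHERE username LIKE :term OR profile_text LIKE :term
    UNION ALL
    SELECT 'uploads' AS tbl, id FROM uploads
     WHERE title LIKE :term OR description LIKE :term
    ORDER BY tbl
""", {"term": "%reef%"}).fetchall()

print(rows)  # [('uploads', 7), ('users', 1)]
```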
A similar, and faster, alternative would be to dedicate a table to this task instead of a view, and populate it on insert/update/delete of the real tables instead of recomputing it on access. | To write a select query that's tailored for each of the tables, you can union the results to get one resultset. It will look something like this:
```
select 'user-name', id, username from users where username like '%term%'
union
select 'user-profile', id, profile from users where profile like '%term%'
union
select 'uploads-title', id, title from uploads where title like '%term%'
union
select 'uploads-description', id, description from uploads where description like '%term%'
``` | Searching multiple tables with MySQL | [
"",
"sql",
"mysql",
"search",
""
] |
We have some projects with CPPUnit tests that are built and run using an Ant script (right now we're using Borland C++, but we're moving to VS2008).
The problem is that the interface to run and see the result of tests is unpleasant (command prompt). It would be awesome to have them run inside eclipse or VS2008.
It would be a lot better to have a plugin in which I could select the tests I want to run and get some visual feedback (green bar/red bar) pointing me to the tests that failed and their messages.
This exists with JUnit in Eclipse (for Java), but is there something similar for C++ using Eclipse CDT or VS2008? A UI test runner would be useful too, so I could launch the UI as a post-build action.
**EDIT (possible answer):**
I found this project: [ECUT](http://sourceforge.net/project/screenshots.php?group_id=236439); however, I haven't tested it yet. It looks promising, though. I'll update this question as soon as I get more experience with the project. | There doesn't seem to be a test runner for use within Visual Studio, just the [MfcTestRunner](http://apps.sourceforge.net/mediawiki/cppunit/index.php?title=MfcTestRunner) and the [QtTestRunner](http://apps.sourceforge.net/mediawiki/cppunit/index.php?title=QtTestRunner).
For Visual Studio, [Resharper](http://www.jetbrains.com/resharper/) and [TestDriven.Net](http://www.testdriven.net/) are the usual suspects to look at first for test runners, but neither has support for CPPUnit. | I found your post in my quest for a simple CppUnit integration with Eclipse CDT, so for everyone like me who is seeking a simple CppUnit integration with Eclipse CDT:
I have created an Eclipse plugin to support vanilla CppUnit XML Test Reports with the official Eclipse CDT C/C++ Unit Testing Support plugin.
I have made it available on [GitHub](https://github.com/jmrharris/eclipse.cppunit.testsrunner)
So **yes**, now there is :) | Is there a plugin that integrates to CPPUnit with VS2008 or Eclipse CDT? | [
"",
"c++",
"visual-studio-2008",
"eclipse",
"unit-testing",
"user-interface",
""
] |
If I am calling a function:
```
public User GetUserById(int UserId)
{
User someUser = new User();
//Look up this UserId in the database
...
someUser.Name = dbResult["Name"];
return someUser;
}
```
Let's say I pass in a UserId that has no "User information" associated with it.
Suddenly the 'someUser' I'm passing back is instantiated, but empty. What's the best way to check that it's "empty" so I don't try and display data from it?
The caveman in me wants to check if someUser.Name has a length of greater than zero. I know that's awful, so if you know what I'm missing - I'd appreciate your help! | The real question is, why would `GetUserById()` return an uninitialized object if the ID is invalid? Wouldn't it be better to throw an exception or - at worst - return `null`? | Personally, if the id doesn't exist, I'd either return `null`, or (less preferred) I'd throw an `ArgumentException` or `KeyNotFoundException`. That avoids the whole question - you just check against `null`... | How to identify when an object is empty? | [
"",
"c#",
""
] |
I wrote a script to force-download MP3 files from a site. The code works fine, but the problem is that it can't download large files. I tried it with a file of 9.21 MB and it downloaded correctly, but whenever I try to use the code to download a file of 25 MB, it simply gives me a "cannot find server" page or "The website cannot display the page". So I now know it has problems downloading large files. Below is the code snippet that does the downloading of files.
```
header("Pragma: public");
header("Cache-Control: must-revalidate, post-check=0, pre-check=0");
header("Cache-Control: private",false);
header("Content-type: application/force-download");
header("Content-Disposition: attachment; filename=\"".$dname.".mp3\";" );
header("Content-Transfer-Encoding: binary");
header("Content-Length: ".filesize($secretfile));
$downloaded=readfile($secretfile);
```
The displayed error is: HTTP 500 Internal Server Error
Thank you very much for your time, guys. | It could be memory limits, but usually PHP will output an error saying that the memory limit has been reached.
Also, before all of that you should disable output compression if it's enabled:
```
if(ini_get('zlib.output_compression')) {
ini_set('zlib.output_compression', 'Off');
}
```
Sometimes IE can screw up if output compression is enabled. | Watch your PHP configuration for memory limits and timeouts
In php.ini :
```
memory_limit = 32M
max_execution_time = 300
```
Note that if you want to go really high in execution time you also need to change your web server timeout. | problem downloading file | [
"",
"php",
"download",
""
] |
I have a class that creates an object inside one public method. The object is private and not visible to the users of the class. This method then calls other **private** methods inside the same class and pass the created object as a parameter:
```
class Foo {
...
};
class A {
private:
typedef scoped_ptr<Foo> FooPtr;
void privateMethod1(FooPtr fooObj);
public:
void showSomethingOnTheScreen() {
FooPtr fooObj(new Foo);
privateMethod1(fooObj);
};
};
```
I believe the correct smart pointer in this case would be a scoped\_ptr, however, I can't do this because scoped\_ptr makes the class non copyable if used that way, so should I make the methods like this:
```
void privateMethod1(FooPtr& fooObj);
```
privateMethod1 doesn't store the object, neither keeps references of it. Just retrieves data from the class *Foo*.
The correct way would probably be not to use a smart pointer at all and to allocate the object on the stack, but that's not possible because it uses a library that doesn't allow objects on the stack; they must be on the heap.
After all, I'm still confused about the real usage of scoped_ptr. | Use a simple std::auto\_ptr here, since you can't create the objects on the stack. And it is better for your private function to simply accept a raw pointer.
The real benefit is that you don't have to catch all possible exceptions and do a manual delete.
In fact, if your method doesn't modify the object and your API is guaranteed to return an object, you'd be better off using
```
void privateMethod1(const Foo& fooObj);
```
and pass the object there as
```
privateMethod1(*fooObj.get());
``` | One further possibility is to create the object as a scoped\_ptr for ease of memory management, but just pass the raw pointer to the other private methods:
```
void privateMethod1(Foo *fooObj);
void showSomethingOnTheScreen() {
scoped_ptr<Foo> fooObj(new Foo);
privateMethod1(fooObj.get());
};
``` | Passing a smart pointer as argument inside a class: scoped_ptr or shared_ptr? | [
"",
"c++",
"class",
"boost",
"smart-pointers",
""
] |
What is the difference between for..in and for each..in statements in javascript?
Are there subtle differences that I don't know of, or is it the same and every browser has a different name for it? | **"for each...in"** iterates a specified variable over **all values** of the specified object's properties.
Example:
```
var sum = 0;
var obj = {prop1: 5, prop2: 13, prop3: 8};
for each (var item in obj) {
sum += item;
}
print(sum); // prints "26", which is 5+13+8
```
[Source](https://developer.mozilla.org/en/Core_JavaScript_1.5_Reference/Statements/for_each...in)
**"for...in"** iterates a specified variable over **all properties** of an object, in arbitrary order.
Example:
```
function show_props(obj, objName) {
var result = "";
for (var i in obj) {
result += objName + "." + i + " = " + obj[i] + "\n";
}
return result;
}
```
[Source](https://developer.mozilla.org/en/Core_JavaScript_1.5_Reference/Statements/for...in)
---
Note 03.2013, `for each... in` loops are [deprecated](https://developer.mozilla.org/en-US/docs/JavaScript/Reference/Statements/for_each...in). The 'new' syntax recommended by MDN is [`for... of`](https://developer.mozilla.org/en-US/docs/JavaScript/Reference/Statements/for...of). | This demonstration should hopefully illustrate the difference.
```
var myObj = {
a : 'A',
b : 'B',
c : 'C'
};
for each (x in myObj) {
alert(x); // "A", "B", "C"
}
for (x in myObj) {
alert(x); // "a", "b", "c"
alert(myObj[x]); // "A", "B", "C"
}
``` | What is the difference between for..in and for each..in in javascript? | [
"",
"javascript",
"loops",
"foreach",
"enumeration",
"for-in-loop",
""
] |
I have a Model as follows:
```
class TankJournal(models.Model):
user = models.ForeignKey(User)
tank = models.ForeignKey(TankProfile)
ts = models.IntegerField(max_length=15)
title = models.CharField(max_length=50)
body = models.TextField()
```
I also have a model form for the above model as follows:
```
class JournalForm(ModelForm):
tank = forms.IntegerField(widget=forms.HiddenInput())
class Meta:
model = TankJournal
exclude = ('user','ts')
```
I want to know how to set the default value for that tank hidden field. Here is my function to show/save the form so far:
```
def addJournal(request, id=0):
if not request.user.is_authenticated():
return HttpResponseRedirect('/')
# checking if they own the tank
from django.contrib.auth.models import User
user = User.objects.get(pk=request.session['id'])
if request.method == 'POST':
form = JournalForm(request.POST)
if form.is_valid():
obj = form.save(commit=False)
# setting the user and ts
from time import time
obj.ts = int(time())
obj.user = user
obj.tank = TankProfile.objects.get(pk=form.cleaned_data['tank_id'])
# saving the test
obj.save()
else:
form = JournalForm()
try:
tank = TankProfile.objects.get(user=user, id=id)
except TankProfile.DoesNotExist:
return HttpResponseRedirect('/error/')
``` | You can use [`Form.initial`](https://docs.djangoproject.com/en/dev/ref/forms/api/#initial-form-values), which is explained [here](https://djangobook.com/tying-forms-views/#setting-initial-values).
You have two options: either populate the value when calling the form constructor:
```
form = JournalForm(initial={'tank': 123})
```
or set the value in the form definition:
```
tank = forms.IntegerField(widget=forms.HiddenInput(), initial=123)
``` | Other solution: Set initial after creating the form:
```
form.fields['tank'].initial = 123
``` | Django set default form values | [
"",
"python",
"django",
"django-models",
"django-forms",
""
] |
I'm using an Oracle database.
In my query 100 rows are fetched. If I want to filter rows between rownum 50 and 60, what would be the query?
```
SELECT EMPLID, EFFDT, ACTION, ACTION_REASON
from JOB where emplid ='12345'
``` | Most people will commonly tell you to use ROWNUM to do this; however, the more succinct way is to use the row_number() analytic function.
```
select EMPLID, EFFDT, ACTION, ACTION_REASON
from
(
SELECT EMPLID, EFFDT, ACTION, ACTION_REASON, row_number() over (order by emplid) rn
from JOB where emplid ='12345'
)
where rn between 50 and 60;
```
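The same inner/outer shape works in any engine with window functions. As a quick illustration outside Oracle (a sketch only, using Python's built-in sqlite3, which needs SQLite 3.25+ for `ROW_NUMBER()`; the data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE job (emplid TEXT, effdt TEXT)")
conn.executemany("INSERT INTO job VALUES ('12345', ?)",
                 [("2009-01-%02d" % d,) for d in range(1, 31)])

# Number the rows in the inner query, then keep a window of them in the outer one.
rows = conn.execute("""
    SELECT emplid, effdt, rn FROM (
        SELECT emplid, effdt,
               ROW_NUMBER() OVER (ORDER BY effdt) AS rn
        FROM job WHERE emplid = '12345'
    ) WHERE rn BETWEEN 5 AND 8
""").fetchall()

print([r[2] for r in rows])  # [5, 6, 7, 8]
```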
Using the row\_number function allows you to order the results AND number them in a single query, then you just need one wrapper query to get the rows you need. | Since I did a comparison of [Chad's](https://stackoverflow.com/users/41665/chad-birch) and [Nick's](https://stackoverflow.com/users/62985/nick) approaches to make a comment on Nick's answer, I thought I'd post my findings here. I used Tom Kyte's [runstats](http://asktom.oracle.com/tkyte/runstats.html) package to compare them with this script:
```
begin
runstats_pkg.rs_start('Chad');
for i in 1..10000 loop
for r in (
SELECT EMPLID,EFFDT,ACTION,ACTION_REASON
FROM (SELECT ROWNUM rnum, EMPLID,EFFDT,ACTION,ACTION_REASON
FROM (SELECT EMPLID,EFFDT,ACTION,ACTION_REASON
FROM JOB
WHERE emplid = '12345')
WHERE rownum <= 60
)
WHERE rnum >= 50
) loop
null;
end loop;
end loop;
runstats_pkg.rs_middle('Nick');
for i in 1..10000 loop
for r in (
select EMPLID, EFFDT, ACTION, ACTION_REASON
from
(
SELECT EMPLID, EFFDT, ACTION, ACTION_REASON, row_number() over (order by emplid) rn
from JOB where emplid ='12345'
)
where rn between 50 and 60
) loop
null;
end loop;
end loop;
runstats_pkg.rs_stop(0,false,false,false,false,false,false,false,false);
end;
/
```
The results:
```
Run1 = Chad
Run2 = Nick
*** Comparative Time Report ***
Run Time (hsecs)
--------------------------------------------------
Run1 69
Run2 77
Run1 ran in 89.61% of the time of Run2
Run2 ran in 111.59% of the time of Run1
PL/SQL procedure successfully completed.
```
Using autotrace the plans can be seen to be pretty similar:
```
SQL> SELECT EMPLID,EFFDT,ACTION,ACTION_REASON
2 FROM (SELECT ROWNUM rnum, EMPLID,EFFDT,ACTION,ACTION_REASON
3 FROM (SELECT EMPLID,EFFDT,ACTION,ACTION_REASON
4 FROM JOB
5 WHERE emplid = '12345')
6 WHERE rownum <= 60
7 )
8 WHERE rnum >= 50
9 /
no rows selected
Execution Plan
----------------------------------------------------------
------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 41 | 3 (0)|
|* 1 | VIEW | | 1 | 41 | 3 (0)|
|* 2 | COUNT STOPKEY | | | | |
| 3 | TABLE ACCESS BY INDEX ROWID| JOB | 1 | 13 | 3 (0)|
|* 4 | INDEX RANGE SCAN | JOB2_PK | 1 | | 2 (0)|
------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("RNUM">=50)
2 - filter(ROWNUM<=60)
4 - access("EMPLID"=12345)
Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
2 consistent gets
0 physical reads
0 redo size
264 bytes sent via SQL*Net to client
231 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
0 rows processed
SQL> select EMPLID, EFFDT, ACTION, ACTION_REASON
2 from
3 (
4 SELECT EMPLID, EFFDT, ACTION, ACTION_REASON, row_number() over (order by emplid) rn
5 from JOB where emplid ='12345'
6 )
7 where rn between 50 and 60
8 /
no rows selected
Execution Plan
----------------------------------------------------------
------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 41 | 3 (0)|
|* 1 | VIEW | | 1 | 41 | 3 (0)|
|* 2 | WINDOW NOSORT STOPKEY | | 1 | 17 | 3 (0)|
| 3 | TABLE ACCESS BY INDEX ROWID| JOB | 1 | 17 | 3 (0)|
|* 4 | INDEX RANGE SCAN | JOB2_PK | 1 | | 2 (0)|
------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("RN">=50 AND "RN"<=60)
2 - filter(ROW_NUMBER() OVER ( ORDER BY "EMPLID")<=60)
4 - access("EMPLID"=12345)
filter("EMPLID"=12345)
Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
2 consistent gets
0 physical reads
0 redo size
264 bytes sent via SQL*Net to client
231 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
0 rows processed
```
It would appear that there isn't much to choose between the 2 methods, though Chad's is consistently slightly faster on my database which is:
Oracle Database 10g Release 10.2.0.3.0 - 64bit Production | Oracle: How to return a partial result only? | [
"sql",
"oracle"
] |
I have tended to shy away from adding properties to my ASP.NET pages. It never seemed like a great idea to me. However, recently I have seen the practice employed in a few sample applications. Has my aversion to adding custom properties to a page been unwarranted or is this a "it depends" situation? | The thing about properties you need to remember is that they last for the entire page lifecycle. This makes them both very useful (set a property early in the lifecycle and it's still valid later) and dangerous (it's easy to access a property before it's set, or not realize another life-cycle phase changed it on you).
One area I've seen properties used to great effect is as a good type-safe way to wrap the query string and Session. Define properties for each of your expected query string or session values and it becomes very clear to future developers what is expected and available.
Another common use is to wrap ViewState items. I expect this is where you're seeing them in samples, since most samples tend to assume ViewState is turned on. | I see nothing wrong with using properties to cleanup the code on a server side page. I like to use properties to access Session State or View State information, this way if I modify how I access the data, I only change one place. | ASP.NET Page Properties Good Idea or Bad Idea | [
"c#",
"asp.net",
"properties"
] |
This should (hopefully) be a pretty easy question for some of you to answer.
I have a working Recursive menu from a mySQL database, now my main problem is:
What is the best way to create the URL? I would prefer to bring in the title of each row like /eggs/milk/bacon/. Eggs being level 0 like: eggs-0, milk-1, bacon-2. Any ideas on how to dynamically output this?
I am pretty much going for what "cletus" said a few comments down on this question:
[PHP/MySQL - building a nav menu hierarchy](https://stackoverflow.com/questions/477793/php-mysql-building-a-nav-menu-hierarchy)
But I need a bit more explanation on how to do it. | Unless you plan to modify your menu tree often, pre-storing the required hierarchical URL for each menu item is probably the easiest (for run-time resolution that is).
If you expect the tree to be modified often enough, lets say - through a web interface, then it would be easier to generate the paths every time you read the menu, something like this:
```
id | name | parent
----+--------+-------
0 | eggs | NULL
1 | milk | 0
2 | bacon | 1
3 | tomato | 0
4 | lettuce| 1
foreach (query("SELECT * FROM menu ORDER BY parent ASC") as $row) {
    $menuitem = array_merge(array(), $row); // copy the row so the references below are safe
    $menuLookup[$menuitem['id']] =& $menuitem;
    if ($menuitem['parent'] === null) {
        $menuitem['path'] = "/" . $menuitem['name'];
        $menu[] =& $menuitem;
    } else {
        $parent =& $menuLookup[$menuitem['parent']];
        $menuitem['path'] = $parent['path'] . "/" . $menuitem['name'];
        $parent['menu'][] =& $menuitem;
    }
    unset($menuitem, $parent); // break the references before the next iteration
}
```
I haven't debugged this code, only tested it for correctness ;-) | Well, if you want a hierarchy, be best method I know of is called "Modified Preorder Tree Traversal" that is described in great detail [in this Sitepoint article](http://www.sitepoint.com/print/hierarchical-data-database/), starts about halfway down.
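For readers coming from other stacks, the same single pass over parent-sorted rows can be sketched in JavaScript (a hedged illustration, not part of the answer's PHP; the row shape mirrors the sample table above, with a `children` array standing in for the `menu` sub-array, and it assumes parents appear before their children, as the `ORDER BY` intends):

```javascript
// Build a menu tree with precomputed paths from flat {id, name, parent} rows.
function buildMenu(rows) {
  const lookup = {}; // id -> item, for O(1) parent resolution
  const menu = [];   // top-level items
  for (const row of rows) {
    const item = { ...row, children: [] };
    lookup[item.id] = item;
    if (item.parent === null) {
      item.path = "/" + item.name;
      menu.push(item);
    } else {
      const parent = lookup[item.parent];
      item.path = parent.path + "/" + item.name;
      parent.children.push(item);
    }
  }
  return menu;
}
```

The lookup table is what avoids recursion: each child finds its already-built parent in constant time, so the whole tree (and every `/eggs/milk/bacon` style path) is built in one pass.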
The main difference from what Guss suggested is that it's a bit more performant and a lot easier to only fetch the part of the tree you're looking for. | Hierarchical recursion menu with PHP/MySQL | [
"php",
"mysql",
"recursion"
] |
I've finished my current "part-time" project and am looking for something new.
I've decided to have a crack at writing my own compression / decompression utility. I'm not planning on selling it or anything - it's more for my own interest.
Preferably, it will be in C#, but Java or C is OK.
Can anyone suggest some good sources on compression / decompression techniques that I can study. Hopefully, some that don't involve maths at a doctorate level! | A good book on this topic is [The Data Compression Book](https://rads.stackoverflow.com/amzn/click/com/1558514341). It starts off with the basics and ends up with JPEG and fractal image compression algorithms.
I worked through this whole book years ago (the first edition, I think) and I still remember it as a really rewarding experience. | This [competition](http://prize.hutter1.net/) gives a good idea of the current state of the art for text compression, and something to aim for if you come up with something good!
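Before diving into the book's material, it helps to see that the basics really are simple: run-length encoding is about the smallest lossless scheme there is. A toy sketch (JavaScript here for brevity; the logic ports directly to C#, Java, or C):

```javascript
// Toy run-length encoder: "aaab" -> "3a1b".
// Assumption: input contains no digit characters (keeps the decoder trivial).
function rleEncode(s) {
  let out = "";
  for (let i = 0; i < s.length; ) {
    let j = i;
    while (j < s.length && s[j] === s[i]) j++; // find the end of the run
    out += (j - i) + s[i];
    i = j;
  }
  return out;
}

function rleDecode(s) {
  let out = "";
  for (const m of s.matchAll(/(\d+)(\D)/g)) {
    out += m[2].repeat(Number(m[1]));
  }
  return out;
}
```

Note that RLE *expands* data with no runs, which is a nice first lesson in why real compressors (Huffman, LZ77, and friends) model their input instead of assuming it.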
I've found this a [useful survey](http://datacompression.dogma.net/index.php?title=FAQ:What_is_the_state_of_the_art_in_lossless_image_compression%3F) of lossless image compression.
If you read only one academic paper on the subject, make it C.E. Shannon's ["A Mathematical Theory of Communication"](http://cm.bell-labs.com/cm/ms/what/shannonday/shannon1948.pdf). The ideas there are absolutely fundamental to compression. | Documentation on compression / decompression techniques | [
"c#",
"documentation",
"compression"
] |
I can see that this question has been asked several times, but none of the proposed solutions seem to work for the site I am building, so I am reopening the thread. I am attempting to size an iframe based on the height of its content. Both the page that contains the iframe and its source page exist on the same domain.
I have tried the proposed solutions in each of the following threads:
* [Resize iframe height according to content height in it](https://stackoverflow.com/questions/525992/resize-iframe-height-according-to-content-height-in-it)
* [Resizing an iframe based on content](https://stackoverflow.com/questions/153152/resizing-an-iframe-based-on-content)
I believe that the solutions above are not working because of when the reference to body.clientHeight is made, the browser has not actually determined the height of the document.
Here is the code I am using:
```
var ifmBlue = document.getElementById("ifmBlue");
ifmBlue.onload = resizeIframe;
function resizeIframe()
{
var ifmBlue = document.getElementById("ifmBluePill");
var ifmDiv = ifmBlue.contentDocument.getElementById("main");
var height = ifmDiv.clientHeight;
ifmBlue.style.height = (ifmBlue.contentDocument.body.scrollHeight || ifmBlue.contentDocument.body.offsetHeight || ifmBlue.contentDocument.body.parentNode.clientHeight || height || 500) + 5 + 'px';
}
```
If I debug the script using Firebug, the client height of the iframe.contentDocument's main div is 0. Additionally, body.offsetHeight and body.scrollHeight are 0. However, after the script is finished running, if I inspect the DOM of the HTML iframe element (using Firebug) I can see that the body's clientHeight is 456 and the inner div's clientHeight is 742. This leads me to believe that these values are not yet set when iframe.onload is fired. So, per one of the threads above, I moved the code into the body.onload event handler of the iframe's source page. That solution also did not work.
Any help you can provide is much appreciated.
Thanks,
CJ | DynamicDrive has [such a script](http://dynamicdrive.com/dynamicindex17/iframessi.htm), which I think does what you're asking for.
There's also a [newer version](http://www.dynamicdrive.com/dynamicindex17/iframessi2.htm) now.
---
# 2011 update:
I would *strongly* recommend using AJAX over something like this, especially considering that a dynamically resizing iframe only works across the same domain.
Even so, it's a bit iffy, so if you absolutely must use AJAX over standard page loading, you really, *really* should use things like `history.pushState` (and have standard page loading as a fallback for browsers that don't support it). There's a jQuery plugin which handles this stuff for you, written by a GitHubber, called [pjax](https://github.com/defunkt/jquery-pjax), which they use *only* for repo navigation. | you moved the handler? maybe you should move the function to the inner frame as well, so that when you grab height values you reference the body directly rather than frame object... then call a parent.set height function
another trick, call function after settimeout of 10 msecs
i remember I had that problem once but I used IE's getBoundingClientRect() to get height of content, check mozilla developer center for something similar, this is just a hint, i did not research it
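One way to combine the tips above is to pull the measurement out of the question's `resizeIframe` into a pure helper, so the fallback chain is explicit and the deferral is a one-liner in the inner page. A hedged sketch (`pickHeight` is an illustrative name, not a DOM API):

```javascript
// Walk the same candidates the question's code checks, skipping values the
// browser hasn't computed yet, and fall back to a default.
function pickHeight(body, fallback) {
  var candidates = [body.scrollHeight, body.offsetHeight, body.clientHeight];
  var h = candidates.find(function (v) { return typeof v === "number" && v > 0; });
  return (h || fallback) + 5; // same +5 padding as the original snippet
}

// Inner-page usage (sketch), deferring as suggested above so layout has run:
// window.onload = function () {
//   setTimeout(function () {
//     parent.document.getElementById("ifmBlue").style.height =
//       pickHeight(document.body, 500) + "px";
//   }, 10);
// };
```

Driving the resize from the inner page sidesteps the race entirely: by the time its own onload (plus the deferral) fires, the heights are populated.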
on another note, what is ifmBluePill? is it the iframe? or a div inside of it? why do you reference "contentDocument" of a div? | Dynamically Resizing an Iframe | [
"javascript",
"html",
"dom"
] |
When do you use each inheritance?
```
class Base{};
class Derived: protected Base{};
class Derived2: public Base{};
```
**My case:**
I have class called Snapshot which only contains GetXXX methods. It is a light-weight classed used to store current state of the Value class. I also use it for recovery, keeping instances of this class long after Value class instance are gone. A Manager class, processes instances of the Snapshot class. Let me show you the code:
```
class Snapshot
{
public:
Snapshot (const Snapshot * snap)
{
_x=snap->_x;
_y=snap->_y;
_z=snap->_z;
}
Snapshot (){_x=_y=_z=0;}
int GetX(){return _x;}
int GetY(){return _y;}
int GetZ(){return _z;}
  virtual ~Snapshot(){}
protected:
int _x,_y,_z;
};
class Value:public Snapshot
{
/*Very heavy class with a lot of components used to calculate _x, _y, _z*/
};
class Manager
{
public:
void Process( const Snapshot * snap)
{
}
};
```
How do you feel about this design? What are the alternatives?
Thanks
**Solutions and issues**
* Solution: I would create makeSnapshot function which would return Snapshot object by given Value object.
* Issues:
	+ major issue: I send snapshots very frequently (every second, or even more often), hence I don't want to incur the construction and destruction cost
	+ semi-major issue: I will have to make Value a friend of Snapshot, as I don't want to introduce setters.
Protected inheritance could be used, if you just want to reuse the functionality implemented in the parent.
I would make Snapshot a pure virtual class, e.g. just an interface, and Value would implement the getXYZ methods. E.g. you probably don't need the \_x,\_y,\_z members in Snapshot. | As for the question about the private and protected inheritance, the pretty thorough explanation can be found here:
* [C++ Faq Lite](http://www.parashift.com/c++-faq-lite/private-inheritance.html)
* [Uses and Abuses of Inheritance, part 1](http://www.gotw.ca/publications/mill06.htm)
* [Uses and Abuses of Inheritance, part 2](http://www.gotw.ca/publications/mill07.htm)
Main issue is the semantic - whether something IS-A something, or whether something is IS-IMPLEMENTED-IN-TERMS-OF something. | protected inheritance vs. public inheritance and OO design | [
"c++",
"oop"
] |
For compliance reasons, when I delete a user's personal information from the database in my current project, the relevant rows need to be really, irrecoverably deleted.
The database we are using is postgres 8.x,
Is there anything I can do, beyond running COMPACT/VACUUM regularly?
Thankfully, our backups will be held by others, and they are allowed to keep the deleted information. | "Irrecoverable deletion" is harder than it sounds, and extends beyond your database. For example, are you planning on going back to all previous instances of your database on tape/backup where this row also exists, and deleting it there too?
Consider a regular deletion and the periodic VACUUMing that you mentioned before. | To accomplish the "D" in ACID, relational databases use a transaction log type system for changes to the database. When a delete is made that delete is made to a memory copy of the data (buffer cache) and then written to a transaction log file in synchronous mode. If the database were to crash the transaction log would be replayed to bring the system back to the correct state. So a delete exists in multiple locations where it would have to be removed. Only at some later time is the record "deleted" from the actual data file on disk (and any indexes). This amount of time varies depending on the database. | How do I securely delete a row from a database? | [
"sql",
"postgresql"
] |
I have gotten a bit lazy (it's sometimes good) and started updating the WinForms UI by invoking a callback without checking *InvokeRequired* first.
Are there a performance issues or considerations that I should be aware of?
```
private delegate void SetStatusEventHandler(string statusMessage);
private void SetStatus(string statusMessage)
{
Invoke((MethodInvoker) (() =>
{
resultLabel.Text = statusMessage;
}));
// - vs -
if (InvokeRequired)
{
SetStatusEventHandler cb = SetStatus;
Invoke(cb, statusMessage);
}
else
{
resultLabel.Text = statusMessage;
}
}
```
***[EDIT]***: Most of the time, a method that calls "invoke" will be called at most, say, 10~20 times a second, with a wide interval in between.
***[UPDATE]*** Settled with the following extension method
```
public static class SmartInvoker
{
public static void InvokeHandler(this Control control, MethodInvoker del)
{
if (control.InvokeRequired)
{
control.Invoke(del);
return;
}
del();
}
}
...
private void SetStatus(string statusMessage)
{
this.InvokeHandler(() => resultLabel.Text = statusMessage);
}
```
I guess finding out how to manage extension method classes is another topic I need to dig in. Thank you for your help | EDIT: See the comments for debate about the whole posting vs immediately dispatching malarky.
Either way, my answer is actually the same: unless this is happening *hugely* often unnecessarily (i.e. most of the time you're on the UI thread to start with, and it's attached to something like a mouse movement handler) I wouldn't worry. It certainly makes the code simpler. If this *is* going to be invoked very often, I'd measure and test more :)
`Invoke` is faster with an `EventHandler` or `MethodInvoker` delegate than with others. I don't know if there's any difference between the two - you may want to check.
You can make this even simpler by writing an extension method, e.g.
```
public static void InvokeHandler(this Control control, MethodInvoker handler)
{
control.Invoke(handler);
}
```
Then you can make your code:
```
private void SetStatus(string statusMessage)
{
this.InvokeHandler(delegate
{
resultLabel.Text = statusMessage;
});
}
```
or
```
private void SetStatus(string statusMessage)
{
InvokeHandler(() => resultLabel.Text = statusMessage);
}
```
That way you don't need to specify the delegate type. | Why not just add an extension method so you don't have to think about it anymore?
```
public static void SmartInvoke(this Control control, MethodInvoker del) {
if ( control.InvokeRequired ) {
control.Invoke(del);
return;
}
del();
}
```
Now your code becomes
```
private void SetStatus(string statusMessage) {
this.SmartInvoke(() => resultLabel.Text = statusMessage);
}
``` | Performance issues when updating UI without checking InvokeRequired first? | [
"c#",
"winforms",
"performance",
"user-interface"
] |
Is using .h as a header for a c++ file wrong?
I see it all over the place, especially with code written in the "C style".
I noticed that Emacs always selects C highlighting style for a .h header, but c++ for hpp or hh.
Is it actually "wrong" to label your headers .h or is it just something which annoys me?
EDIT:
There is a good(ish) reason why this annoys me: if my project files are labelled 'hpp & cpp' I can get away with 'grep something \*pp' etc.; otherwise I have to type '.h cpp' | It's not *wrong* to call your C++ headers .h. There are no rules about what extensions your code must use. For your non-headers, MSVC uses .cpp, and on Linux .cc is common as well. There is no one global standard, and .h is definitely very widely used.
But I'd say calling your headers .hpp (I've seen .hh a few times as well) is a lot more consistent and informative than just using .h. | nothing wrong with that. This is the default with Microsoft Visual C++.
Just follow the standard you like, and stick with it. | Is using .h as a header for a c++ file wrong? | [
"c++",
"header-files",
"file-extension"
] |
## window.location.hash
When using a link for a javascript action, I usually do something like this:
```
<a href="#">Link Text</a>
```
That way, when someone clicks the link before the page loads nothing terrible happens.
## Html Base Tag
On my current project I use this same construct, but with a base tag:
```
<html>
<head>
<base href="http://example.com/" />
</head>
<body>
<a href="#">Link Text</a>
</body>
</html>
```
However, if the page url is:
```
http://example.com/dir/page
```
clicking the link navigates to
```
http://example.com/#
```
rather than
```
http://example.com/dir/page#
```
How can I fix this? | Either remove your `base` tag or change your `href` attributes to be fully qualified. What you are observing is the intended behavior when you mix `base` with `a` elements. | If you're inclined to use an a tag another solution is to not use # as the href target (when you don't specify one it causes a jump to the top of the page which I find undesirable). What you can do is:
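The behavior itself is plain URL resolution: a bare `#`, like any relative reference, is resolved against the document's effective base URL, and the `base` tag overrides that base. The WHATWG `URL` class (available in modern browsers and Node) applies the same rules, which makes the two outcomes easy to demonstrate:

```javascript
// Resolve an href the way the browser does, given an effective base URL.
function resolveHref(href, baseHref) {
  return new URL(href, baseHref).href;
}

// With <base href="http://example.com/">, "#" abandons the current path:
//   resolveHref("#", "http://example.com/")
// Without the base tag, it resolves against the page's own URL:
//   resolveHref("#", "http://example.com/dir/page")
```

This is why fully qualifying the `href` (or dropping `base`) fixes it: the resolution then starts from the URL you actually want.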
```
<a href="javascript:">Some Link that Goes nowhere</a>
```
Really though, unless you are doing something that requires that to be an `a` tag, a span would be your best bet:
CSS:
```
.generic_link {
text-decoration:underline;
}
.generic_link:hover {
text-decoration:none;
}
```
HTML:
```
<span class="generic_link">Something that really isn't a link</span>
``` | Url Hash with Html Base Tag | [
"javascript",
"html",
"hash",
"base-tag"
] |
First off, I am fairly new to MVC and jQuery. I apologize if my question or terminology is incorrect.
I currently have a view in my MVC application that displays a list of addresses. On the same page, I also have a map where I wish to map these locations.
I am trying to find the 'proper' way of getting the list of address objects to the javascript in the view so that it may be iterated through and mapped.
I have seen some solutions which require a getJSON call to the controller from the javascript code. I wish to avoid this solution since it requires another trip to the database and webserver. All of the information that I need to render the addresses on the map is already being presented to the View via ViewData.
I have also seen a solution in which the javascript could access the data passed into the view via ViewModel.Data, however this example was working on a single object, as opposed to a list.
I would appreciate it if anyone had any tips or resources available.
Thanks | Just render the data into your javascript. Say you have a list of address objects like this:
```
public class Address
{
public string Line1 { get; set; }
public string City { get; set; }
}
// in your controller code
ViewData["Addresses"] = new List<Address>(new Address[] { new Address() { Line1="bla", City="somewhere"}, new Address() {Line1="foo", City="somewhereelse"}});
```
Render it into your javascript like this:
```
<script type="text/javascript">
var addresses = new Array(
    <% var addrList = (List<Address>)ViewData["Addresses"];
       for (int i = 0; i < addrList.Count; i++) { var addr = addrList[i]; %>
        <%= i > 0 ? "," : "" %>({line1:"<%= addr.Line1 %>", city:"<%= addr.City %>"})
    <% } %>);
</script>
```
This basically creates a JSON formatted array with your address objects in javascript.
**UPDATE:** If you want to do this automatically using the framework code instead of writing your own code to serialize to JSON, take a look at the JavaScriptSerializer. Here's a howto from the great ScottGu on doing this: [Tip/Trick: Building a ToJSON() Extension Method using .NET 3.5](http://weblogs.asp.net/scottgu/archive/2007/10/01/tip-trick-building-a-tojson-extension-method-using-net-3-5.aspx) | Technically, ViewData is not render to output HTMl thus will not be sent to client browser. The only way you can access to ViewData is render it to an object in the HTML like array or something like:
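Once the array is rendered into the page, the client side is plain JavaScript. A hedged sketch of consuming it, e.g. turning each address into a label you would hand to a geocoder or map marker (the map API call itself is omitted; the sample data matches the controller code above):

```javascript
// The rendered script block leaves a global array like this:
var addresses = [
  { line1: "bla", city: "somewhere" },
  { line1: "foo", city: "somewhereelse" }
];

// Build one display/geocoding string per address.
function toMarkerLabels(list) {
  return list.map(function (a) { return a.line1 + ", " + a.city; });
}
```

From there, looping `toMarkerLabels(addresses)` and dropping pins is whatever your mapping library expects, with no extra round trip to the server.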
```
var cityList = new Array();
function addCity(cityId, cityName) {
var city = new Object();
city.CityID = cityId;
city.CityName = cityName;
cityList.push(city);
}
<% foreach (Something.DB.City item in ViewData["Cities"] as List<City>)
{ %>
addCity(item.Id, item.Name);
<% } %>
```
This is the way I usually do it when I need to render data for JavaScript.
"javascript",
"jquery",
"asp.net-mvc"
] |
I want to use a library that makes heavy use of singletons, but I actually need some of the manager classes to have multiple instances. The code is open source, but changing the code myself would make updating the lib to a newer version hard.
What tricks are there, if any, to force creation of a new instance of a singleton or even the whole library? | The answer is "it depends". A good example is the C++ heap used by Microsoft's C runtime. This is implemented as a singleton, of course. Now, when you statically link the CRT into multiple DLLs, you end up with multiple copies. The newer implementations have a single heap, whereas the older CRTs created one heap per library linked in.
Apart from that: maintain your changes to the original library as a patch set so you can (more or less) easily apply it to each new version. Also, try to get your changes into the library so you don’t have to maintain the patches yourself. ;) | Can I un-singleton a singleton | [
"c++"
] |
I know I can return an empty table using the following query :
```
select * from tbFoo where 1=2
```
but that code doesn't look nice to me.
**Is there a 'standard' way of doing this?**
If you're wondering why I want to do such a strange thing, it's because [I can't name the datatables I return from a stored procedure](https://stackoverflow.com/questions/589976/how-can-you-name-the-datasets-tables-you-return-in-a-stored-proc), so I need empty placeholders. | Having just run both:
```
SELECT TOP 0 * FROM Table
and
SELECT * FROM Table WHERE 1=0
```
They produce exactly the same execution plan. | Most of the time I see 1=0 but yes thats pretty much the standard approach when you really have to. Although really having to is rare. | What's the preferred way to return an empty table in SQL? | [
"sql"
] |
I recently asked this question:
[Compiler error referencing custom C# extension method](https://stackoverflow.com/questions/638463)
Marc Gravell answer was perfect and it solved my problem. But it gave me something to think about...
If an extension method must be placed in a static class and the method itself must be static, why can't we create a static extension method?
I understand that the parameter marked as "this" will be used to allow access to an instance of the object we are extending. What I do not understand is why can't a method be created to be static... it just seems to me that this is a senseless limitation...
My question is: Why can't we create an extension method that will work as a static Method? | I expect the real answer is simply: there wasn't a good use-case. For instances, the advantage is that it enables a fluent-API over existing types (that don't themselves provide the logic) - i.e.
```
var foo = data.Where(x=>x.IsActive).OrderBy(x=>x.Price).First();
```
which enables LINQ:
```
var foo = (from x in data
where x.IsActive
order by x.Price
select x).First();
```
With static methods, this simply isn't an issue, so there is no justification; just use the static method on the second type.
As it is, extension methods are not *properly* object orientated - they are a pragmatic abuse to make life easier at the expense of purity. There was no reason to dilute static methods in the same way. | Because that feature doesn't exist in C#.
As a workaround, static methods can be implemented in another class and called through that class to provide the added functionality.
For example, XNA has a [MathHelper](http://msdn.microsoft.com/en-us/library/microsoft.xna.framework.mathhelper_members.aspx) class which ideally would have been static extensions to the [Math](http://msdn.microsoft.com/en-us/library/system.math_members.aspx) class.
The community is [asking](http://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=379978) if we think it's a good idea for C# 4.0 | C# Extension Methods Architecture Question | [
"c#",
"extension-methods"
] |
I have a table with about 250 rows (may double within 6 months), and 50 columns on [this page](http://www.reviews-web-hosting.com/wizard.html) (warning: slow with IE). I'm using the [JQuery Table sorter](http://tablesorter.com/docs/). But it is too slow with IE 7: it gives a warning about a slow javascript, and ask if I want to stop it. I've spent a lot of time to improve performances, so it works fine for all other browsers:
* sort text and digit only
* removed all but 2 parsers
* created an extra table that contains all the values, much faster than doing node.textContent() for each cell
* removed lowercase, trim, etc.
My version of the javascript is [here](http://www.reviews-web-hosting.com/js/jquery.tablesorter.js). I think I cannot optimize it much more. I am looking for another fast implementation of a table sorter, or any good optimization I may have forgot, so that IE 7 won't complain about the execution time.
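Whatever implementation ends up doing the work, the technique that makes large-table sorts fast is the same everywhere: read every cell once into an array of [key, row] pairs, sort the plain array, and touch the DOM only once at the end. A hedged sketch of that core (`sortRows` is an illustrative name, not part of any library):

```javascript
// Sort row objects by a precomputed key, without touching the DOM per compare.
function sortRows(rows, getKey, descending) {
  var keyed = rows.map(function (row) { return [getKey(row), row]; }); // read keys once
  keyed.sort(function (a, b) { return a[0] < b[0] ? -1 : a[0] > b[0] ? 1 : 0; });
  if (descending) keyed.reverse();
  return keyed.map(function (pair) { return pair[1]; });
}

// DOM usage (sketch): re-append in one pass via a fragment, one reflow total.
// var tbody = document.getElementById("tb");
// var sorted = sortRows(toArray(tbody.rows), function (r) { return r.cells[0].innerText; });
// var frag = document.createDocumentFragment();
// for (var i = 0; i < sorted.length; i++) frag.appendChild(sorted[i]);
// tbody.appendChild(frag);
```

The per-compare DOM reads (and per-row reflows) are usually what trips IE's slow-script warning, so hoisting them out tends to matter more than the sort algorithm itself.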
**Edit:** I've disabled sorting on 35 columns; it is still too slow for IE. | I use [sorttable](http://www.kryogenix.org/code/browser/sorttable/), an extremely fast table-sort JavaScript library (not jQuery). | I'm a big fan of Christian Bach's jQuery table sorter ... <http://tablesorter.com/docs/>
* Compact and fast
* Sort on multiple columns
* Dynamic zebra stripe
* Optional metadata extension makes things even easier
* Parsers for sorting text, html, numbers, whatever (easy to customize)
* CSS styleable headers (of course)
* Cross-browser: IE 6.0+, FF 2+, Safari 2.0+, Opera 9.0+
**EDIT**: Anyone interested in tablesorter should see what Mottie has done with it: <https://github.com/Mottie/tablesorter> | Fast Javascript Table sorter? | [
"javascript",
"jquery"
] |
I'm switching back and forth between Java and C# and one thing I miss while I'm coding in C# is the enforced exception checking (Although I admit I also find it really irritating sometimes while I'm coding in Java).
I'm aware that [Exception Hunter](http://www.red-gate.com/products/Exception_Hunter/index.htm) can help you track down what exceptions a piece of code might throw but is there a free/cheaper alternative? I can't really justify £200 for this kind of software addon as it's only an annoyance rather than a major problem. | Yes. Run the free [Microsoft Pex](http://research.microsoft.com/en-us/projects/Pex/) on your code. It will show all possible exceptions that can be thrown. | While I understand the enforced exceptions thing, I'm not sure how genuinely essential it is... for example, most *interesting* exceptions are those that you **wouldn't** normally include (or even expect). For example, I'm currently fighting what looks very much like a CLI bug in CF35, intermittently raising `MethodMissingException` from code that really does exist (emphasis: intermittently).
If you want to document your exceptions, use the `///<exception ... >...</exception>` markup. For other thoughts on this theme, perhaps see [Vexing Exceptions](http://blogs.msdn.com/ericlippert/archive/2008/09/10/vexing-exceptions.aspx) (I wonder if [GhostDoc](http://www.roland-weigelt.de/ghostdoc/) might help any?) | Is there a free alternative to Exception hunter? | [
"c#",
"exception"
] |
What is the best plain javascript way of inserting X rows into a table in IE.
The table html looks like this:
```
<table><tbody id='tb'><tr><td>1</td><td>2</td></tr></tbody></table>
```
What I need to do, is drop the old body, and insert a new one with 1000 rows. I have my 1000 rows as a javascript string variable.
The problem is that in IE, table elements don't support setting innerHTML. I've seen lots of hacks to do it, but I want to see your best one.
Note: using jquery or any other framework does not count. | Here's a great article by the guy who implemented IE's `innerHTML=` on [how he got IE to do `tbody.innerHTML="<tr>..."`](http://www.ericvasilik.com/2006/07/code-karma.html):
> At first, I thought that IE was not
> capable of performing the redraw for
> modified tables with innerHTML, but
> then I remembered that I was
> responsible for this limitation!
Incidentally the trick he uses is basically how all the frameworks do it for `table`/`tbody` elements.
*Edit*: @mkoryak, your comment tells me you have zero imagination and don't deserve an answer. But I'll humor you anyway. Your points:
> he is not inserting what i need
Wha? He is inserting rows (that he has as an html string) into a `table` element.
> he also uses an extra hidden element
The point of that element was to illustrate that all IE needs is a "context". You could use an element created on the fly instead (`document.createElement('div')`).
> and also the article is old
I'm never helping you again ;)
But seriously, if you want to see how others have implemented it, take a look at the jQuery source for [`jQuery.clean()`](http://dev.jquery.com/browser/trunk/jquery/src/core.js#L890), or Prototype's [`Element._insertionTranslations`](http://www.prototypejs.org/assets/2008/1/25/prototype-1.6.0.2.js). | the code ended up being this:
```
if($.support.scriptEval){
    //browser needs to support evaluating scripts as they are inserted into document
    var temp = document.createElement('div');
    temp.innerHTML = "<table><tbody id='"+bodyId +"'>"+html;
    var tb = $body[0];
    tb.parentNode.replaceChild(temp.firstChild.firstChild, tb);
    temp = null;
    $body = $("#" + bodyId);
} else {
    //this way manually evaluates each inserted script
    $body.html(html);
}
```
Things that need to exist beforehand: a table that has a body with id of 'bodyId'. $body is a global variable (or the function has a closure on it), and there is a bit of jQuery in there too, because IE does not evaluate scripts that are inserted into the HTML on the fly. | Insert rows into table | [
"",
"javascript",
"html-table",
"innerhtml",
""
] |
I'm documenting a few methods I wrote in C# that deal with parsing tokens. Due to some technical constraints in other areas of the system, these tokens need to take the form of XML elements (i.e., `<tokenName />`). I'd like to put the format of those tokens in the summary comment itself.
However, this throws an error: "Badly formed XML -- A name was started with an invalid character". Is there any sort of escape character sequence I can use to embed XML in my C# summary comments? | Use standard XML escaping. For example:
```
<summary>This takes a &lt;token1&gt; and turns it into a &lt;token2&gt;</summary>
```
It's not super-easy to type or read as code, but IntelliSense properly unescapes this and you see the right, readable thing in the tooltip. | Use a CDATA section. For example:
```
<![CDATA[ <name>Bob</name> ]]>
```
This is more elegant and readable in source than encoding special characters in entity references when you have a larger XML piece.
If the XML you want to embed itself contains CDATA sections, you need to use multiple CDATA sections as described in [another answer on Stack Overflow](//stackoverflow.com/a/223782/2157640) or on [Wikipedia](https://en.wikipedia.org/wiki/CDATA#Nesting). Or you can always use plain entity references as described in other answers here. | Xml string in a C# summary comment | [
"",
"c#",
"documentation",
""
] |
I want to design this system which has two major components:
1. Base/core stuff. Never changes.
2. Stuff running on the core. Changes rather frequently.
This is going to be developed in Java, but the problem applies to any classical OO language. How can I replace 2 above in a running system without recompiling 1, and without even stopping 1 when it's running. It's ok to recompile 2, but I shouldn't be disturbing 1.
Is there any design-pattern to do this? I would think this is somewhat similar to plugin behavior, but 2 is actually critical to the working of the application, not just an add-on. | Without more info it is hard to answer... but you might check out [OSGi](http://en.wikipedia.org/wiki/OSGi) as a starting point for some ideas. | We need some more information to solve this. If you're talking about loading entirely new logic at run-time, that could be pretty difficult. If you're talking about just swapping implementations, this can easily be done with the Strategy Pattern. | Swapping in and out logic on a running system | [
"",
"java",
"design-patterns",
"oop",
""
] |
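The Strategy Pattern mentioned in the answers can be sketched in a few lines of Java. `Logic` and `Core` are hypothetical names standing in for parts 2 and 1 of the question, not anything from the original post:

```java
import java.util.concurrent.atomic.AtomicReference;

// The stable core (part 1) only knows the Logic interface; the
// implementations (part 2) can be swapped while the core keeps running.
interface Logic {
    String run();
}

class Core {
    private final AtomicReference<Logic> logic = new AtomicReference<>();

    Core(Logic initial) {
        logic.set(initial);
    }

    // Hot-swap the pluggable part without stopping the core.
    void swap(Logic replacement) {
        logic.set(replacement);
    }

    String tick() {
        return logic.get().run();
    }
}
```

In a real system the replacement `Logic` would typically be loaded from a freshly created `URLClassLoader` (or an OSGi bundle), so recompiled classes can be brought in without restarting the core.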
I'm a seasoned *desktop* developer working in C++/C#/WinForms/etc. Up until this point, I have done *very* little in terms of web development. I've come to the point in my career where I feel like I should start doing web development - not to replace my desktop experience but to become more well rounded as a developer.
I already know some HTML and JavaScript, but I am by no means proficient. I'm very comfortable with .NET.
So what is your opinion? Should I focus mastering HTML/CSS/JavaScript/JQuery (with ASP.NET or PHP on the back-end), or should I nurture my .NET experience and dive into Silverlight?
I'm curious about factors such as performance, adoption rate, etc. and any other advice that should guide my decision.
**PS:** And I have read [this](https://stackoverflow.com/questions/477759/javascript-css-vs-silverlight-vs-flex) article, but it is slightly different from my question. | If you are truly looking to grow your skills as a developer and make the transition into the world of web development, I strongly recommend starting with the HTML, CSS, JavaScript, jQuery, ASP.NET AJAX route.
There are many reasons for this, but more than anything these are the fundamentals of web development. Everything in the end is rendered to the user in HTML, and JavaScript/CSS are things that we have to deal with on a regular basis. CSS and cross-browser functionality is still an issue, and understanding how it works is a fundamental part of being a proficient developer.
Then the JavaScript/jQuery piece: this is also now a fundamental requirement in many ways, as people expect rich, functional user interfaces, and understanding how to leverage these technologies is key.
Silverlight is great to learn as well; however, I think that the base knowledge and experience with general web development techniques is needed first, especially since Silverlight in most cases is just a small portion of a website. | Occasionally you might use Silverlight, *ubiquitously* you'll use XHTML/JS/CSS. The choice should be self-evident. | Should I learn Silverlight or JavaScript/JQuery/CSS/HTML? | [
"",
"javascript",
"silverlight",
""
] |
I'm building a recipe-finder for a new food blog. The design I have basically involves the user selecting ingredients, one at a time, from a drop down `<select>`, the option disappearing from the list (so they can't select it again) and appearing on another HTML list with a link to remove it from the list. Once they're done, they click a button and that takes them through to a results page.
Here's the select markup as generated by the PHP:
```
<select>
<option value="">Please select</option>
<option value="beef-mince">Beef mince</option>
<option value="carrots">Carrots</option>
...
</select>
```
It's not drastically complex but it does raise a few questions on how I'm going to do some of these things. I'm using jquery.
1. I need to store the selected items in memory so I know what to send to the search page when they've done selecting items. What's the best way of doing that in your opinion as each item has two values (its "real" value and its database-value)?
2. How do I make "Please select" the selected option after they've selected something (preferably without triggering the onchange event)?
3. Once I've stored it in memory and added it to the displayed list of things they're searching for, how do I delete that item from the available items? Can I just "hide" or disable it (safely)?
4. If in #3 I have to delete it from the DOM, when I add it again, can I sort the list (based on either value) and keep the please-select option at the top? | 1.) You can append hidden form elements to the page whose value is the value of the selected option.
2.)
`jQuery("#select-list")[0].options[0].selected = true // assuming it's the first item`
3.) I would remove the element from the DOM using jQuery("#select-list option:selected").remove()
4.) You can use before(): jQuery(your_default_option).before("#select-list option:first"); | 1)
Basic idea, you need to check to make sure the first is not picked
```
var selections = [];
var mySel = document.getElementById("mySelectId");
var ind = mySel.selectedIndex;
selections.push( mySel.options[ind].value ); //add to a list for you to remember
mySel.options[ind] = null; //remove
```
2)
mySel.selectedIndex = 0;
3)
See #1
4) Yes you can add it anywhere you want by using insertBefore
Example here: <http://www.pascarello.com/lessons/forms/moveSelectOptions.html> | Selecting options from a drop down | [
"",
"javascript",
"jquery",
"html",
""
] |
I know about `unittest` Python module.
I know about `assertRaises()` method of `TestCase` class.
I would like to write a test that succeeds when an exception is **not** raised.
Any hints please? | The test runner will catch all exceptions you didn't assert would be raised.
Thus:
```
doStuff()
self.assert_(True)
```
This should work fine. You can leave out the `self.assert_` call, since it doesn't really do anything. I like to put it there to document that I didn't forget an assertion. | ```
def runTest(self):
    try:
        doStuff()
    except Exception as e:
        self.fail("Encountered an unexpected exception: %s" % e)
```
UPDATE: As liw.fi mentions, the default result is a success, so the example above is something of an antipattern. You should probably only use it if you want to do something special before failing. You should also catch the most specific exceptions possible. | Python - test that succeeds when exception is not raised | [
"",
"python",
"unit-testing",
"exception",
""
] |
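For reference, a minimal runnable version of the accepted approach (`do_stuff` is a stand-in name, not from the question): just call the function with no try/except, and the runner treats any raised exception as a failure on its own:

```python
import unittest

def do_stuff():
    # stand-in for the real code under test; imagine it might raise
    return 42

class TestNoException(unittest.TestCase):
    def test_runs_cleanly(self):
        # No try/except needed: if do_stuff() raises, the runner records
        # an error for this test automatically, so "no exception" is the
        # implicit success condition.
        self.assertEqual(do_stuff(), 42)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestNoException)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```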
I have this SQL statement:
```
SELECT * FROM converts
WHERE email='myemail@googlemail.com' AND status!='1'
ORDER BY date ASC, priority DESC
```
This just orders by date but I want to give my column "priority" more authority. How can I do this?
It should order by date first, but if the time between two records is within 10 minutes then I want priority to take over. How can I do this in my SQL statement, or does this have to be in my application logic? I was hoping I could do it in my SQL statement.
Thank you all for any help | You could quantize the 'date' ordering into 10-minute chunks, so how about ordering by floor(unix_timestamp(date)/600), and then by priority:
```
SELECT * FROM converts
WHERE email='myemail@googlemail.com' AND status!='1'
ORDER BY floor(unix_timestamp(date)/600) ASC, priority DESC
```
Though two dates can still be less than 10 mins apart but straddle two different 10-minute "chunks". Maybe that is sufficient, but I think doing exactly what you request is better done by the application.
*(OP requested expanded explanation....)*
Take two times which straddle a ten minute boundary, like 9:09 and 9:11 today:
* floor(unix_timestamp('2009-03-16 09:**09**:00')/600) = 2061990
* floor(unix_timestamp('2009-03-16 09:**11**:00')/600) = 2061991
Suppose you had a higher priority row for 09:11 than 09:09 - it will still appear *after* the 09:09 row because it fell into the next 10 minute chunk, even though it was only 2 minutes different.
So this approach is an approximation, but doesn't solve the problem as originally stated.
The way you stated your problem, a high-priority row could appear before one recorded several hours (or days, or months!) earlier, as long as there was an unbroken series of lower-priority rows with intervals of less than 10 minutes. | Another variation would be:
```
SELECT * FROM converts
WHERE email='myemail@googlemail.com' AND status!='1'
ORDER BY (unix_timestamp(date)/60) - priority
```
Still not exactly what you required, but pretty close. | SQL, Order By - How to give more authority to other columns? | [
"",
"mysql",
"sql",
"sql-order-by",
""
] |
I read data from MS Access using C#. But get the OleDbException trying to execute such query:
```
SELECT * FROM Flats
WHERE Flats.VersionStamp <= [theDate] AND Flats.Flat=[theFlat]
```
OleDbException:
```
Data type mismatch in criteria expression.
```
On the other side, any one of the following queries works fine:
```
SELECT * FROM Flats
WHERE Flats.VersionStamp <= [theDate] AND Flats.Flat=1
SELECT * FROM Flats
WHERE Flats.VersionStamp <= #1/1/2009# AND Flats.Flat=[theFlat]
```
The C# code stays the same all the time:
```
DbParameter theFlat = new OleDbParameter("theFlat", 1);
DbParameter theDate = new OleDbParameter("theDate", new DateTime(2009, 1, 1));
using (DbDataReader reader = dbHelper.ExecuteReader(sqlText, theFlat, theDate))
{ }
```
Finally, the query can be successfully executed directly in the MS Access UI.
What is wrong here? | I am not sure but I don't think the OleDb classes support named parameters. Try the following SQL instead:
```
SELECT * FROM Flats WHERE Flats.VersionStamp <= ? AND Flats.Flat=?
```
The parameters must be added to the command object in the right order (I don't see you adding the parameters in your code). | Where are you defining/using the parameters in your SQL String; I don't see them.
Try this:
```
SELECT * From Flats WHERE VersionStamp = @theDate AND Flat = @theFlat
DbParameter theDate = new OleDbParameter("@theDate", someDate);
``` | OleDbException: Data type mismatch in criteria expression | [
"",
".net",
"sql",
"ms-access",
""
] |
The same database and application acts weirdly on our test machine, but it works nice on other computers.
On the test machine:
* We get SSL error exception. We fixed that based on an MS KB article, but after that it said
* "`Server error`" or "`General network error`" and slowed down to 1-2 stored procedures/second.
* The profiler said that we have 2000-2500 connections when the application runs. The same application has only 5-10 connection on other machines. I think the random error messages are caused by this huge connection count.
We reinstalled SQL Server, turned off the connection pool, and closed all datareaders.
What else can I do? Is there a "deeper" configuration tool for MSSQL2k? Any hidden component/ini/config/registry key? Or another profiler other than SQL Profiler that I can use? | Thanks again Mitch; sadly none of those ideas was the real solution. No surprise - it seems that those error messages from MSSQL are *random*.
*Random*, I mean:
* After X[1] concurrent connections MSSQL stops closing connections automatically, and the connection pool grows huge. Before X, I saw only 5-10 connections[2] / but after that there were 2500 and MSSQL crashed.
* In this case, MSSQL throws non-deterministic error messages like *'General failure'*, *'User (null)'* etc.
* We had an unclosed connection in our DAL (hidden for 2 years...brrr), and when we used it too much, it caused this weird error.
[1] I have no idea about concrete value of X
[2] I've used this query:
```
SELECT
DB_NAME(dbid) as DBName,
COUNT(dbid) as NumberOfConnections,
loginame as LoginName
FROM
sysprocesses
WHERE
dbid > 0
GROUP BY
dbid, loginame
``` | Yet another possibility(!):
[Multiple Fixes for SQL Server .NET Data Provider](http://support.microsoft.com/default.aspx?scid=kb;en-us;823679)
> When the SQLCommand.CommandTimeout is
> set to zero, you expect an infinite
> timeout. However, versions 1.1 and 1.0
> of the SqlClient provider incorrectly
> timeout when a response from SQL
> Server is broken into two packets.
> Immediately upon receipt of the second
> packet, versions 1.1 and 1.0 of the
> provider incorrectly timeout. The fix
> that is included in this article fixes
> this issue so that the command will
> have an infinite timeout. | SQL Server 2000 Server Errors | [
"",
"c#",
"sql-server",
""
] |
Given the following code:
```
using System.Collections.Generic;
static class Program {
    static void Main() {
        bar Bar = new bar();
        baz Baz = new baz();
        System.Console.WriteLine(
            "We have {0} bars, rejoice!", bar.Cache.Count);
    }
}
public abstract class foo {
    public static List<foo> Cache = new List<foo>();
}
public class bar : foo {
    public bar() { Cache.Add(this); }
}
public class baz : foo {
    public baz() { Cache.Add(this); }
}
```
You get the (somewhat expected) output "We have 2 bars, rejoice!".
This is fabulous, we now have twice as many places to have our beer (apparently), but what I really want is for each class to have its own cache. The reason I don't want to just implement this cache in the subclass is because I also have some methods in my abstract class that need to be able to operate on the cache (namely, to iterate over all of them). Is there a way to do this? I've looked at using an interface for `foo`, but the interface doesn't allow static members to be defined as part of the interface. | Each derived class of foo should define how/where to get a cache, so each can (potentially) have its own cache. Methods in foo can refer to GetCache() without the implementation being known.
```
public abstract class foo
{
    public abstract ICache GetCache();
    public void DoSomethingToCache()
    {
        ICache cache = this.GetCache();
        cache.DoSomething();
    }
}
public class bar : foo
{
    public static ICache BarCache = new FooCache();
    public override ICache GetCache()
    {
        return bar.BarCache;
    }
}
public class FooCache : ICache { }
``` | Use a generic base class parameterized with the subclass:
```
using System.Collections;
using System.Collections.Generic;
static class Program
{
    static void Main()
    {
        bar Bar = new bar();
        baz Baz = new baz();
        System.Console.WriteLine(
            "We have {0} bars, rejoice!", Bar.GetCache().Count);
    }
}
public abstract class foo<T>
{
    private static List<foo<T>> Cache = new List<foo<T>>();
    public IList GetCache()
    {
        return Cache;
    }
}
public class bar : foo<bar>
{
    public bar() { GetCache().Add(this); }
}
public class baz : foo<baz>
{
    public baz() { GetCache().Add(this); }
}
``` | Abstract base class or Interface? Neither seem right | [
"",
"c#",
".net",
"inheritance",
"interface",
"abstract",
""
] |
I am writing php web applications, and simply deploy them via FTP.
To make it work, I often have some tweaking/debugging to do given that I have little control over the (free) web server hosting me, so what's working in my local environment might not work live.
For example I keep a separate php file containing class\_db\_myapp.php which extends a class\_db.php with specific DB parameters : db name, username, password which won't be the same local and live.
(For information : Lately I started using git for version control)
As my app evolves, some files get renamed / deleted / created.
When comes the time to upload a new version, I have to either rely on my memory to know what I have to upload / delete or simply delete all / upload all. But in the second case I need to avoid erasing the class\_db\_myapp.php file...
I haven't come up with a proper solution to this.
What are the best practices in this domain?
I may have missed an existing discussion on this subject, if so please point me to it.
Thank you. | If the FTP server supports symbolic links, you can use the following technique:
1. Make the public\_html folder a symlink to the folder containing the current version. ("version1" for example)
2. Upload the new version in a new folder.
3. When the upload is completed, modify the symlink so the new version becomes active.
If something went wrong you can easily revert to the previous version by modifying the symlink again.
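The symlink swap above can be sketched in shell (the paths are examples; `-n` is needed so `ln` replaces the link itself rather than creating a link inside the old target directory):

```shell
mkdir -p site/version1 site/version2           # two release folders
ln -s version1 site/public_html                # public_html -> version1
# ... upload the new release into site/version2 ...
ln -sfn version2 site/public_html              # switch to version2
readlink site/public_html                      # prints the current target
```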
For database and other settings that are different in the live environment, there are several options:
* Create a file containing the environment ("live" or "local") and put if statements in the code based on that setting.
* If you're able to detect the environment in PHP, use that instead of a file.
* Place all settings in a file outside the "versionX" folders. | 1) To solve the "different configuration on dev and live servers" problem I use this:
```
// Change 'localhost' to your dev server's address
define('IS_LIVE', 'localhost' != $_SERVER['HTTP_HOST']);
// Database configuration
$db_cfg = IS_LIVE?
array(...): // Live server config
array(...); // Dev server config
```
2) To keep the dev and live files synched I use [Beyond Compare](http://www.scootersoftware.com/moreinfo.php), a visual diff tool that allows me to compare whole directories, including remote ones via (S)FTP.
I set up a profile so that the left window shows files on dev server, the right one shows files on live server. This way I can see what differences there are between the servers (changed, missing or added files) and allows me to easily copy whole directories, files, or specific lines in files between them. Very useful.
It also lets you 'ignore' particular directories that you don't want to synch, like the ones with user generated files or logs. | Web app deployment Best Practices : how to manage local & live files? | [
"",
"php",
"version-control",
"deployment",
"ftp",
"web-deployment-project",
""
] |
I know, there's JSONP, which involves server cooperation to name-space the data.
What is bothering me is the fact that the content of script tag `src` is **evaluated**, but it's *NOT* available to read.
```
<script src="http://www.google.com"></script>
```
All we need to figure out is how to namespace the data, that's all.
Of course I tried pretty idiotic things with no relevant result (I know this doesn't work, but you can see what I'm trying to achieve):
```
<script>eval('var namespace="');</script>
<script src="http://www.google.com"></script>
<script>eval('";');</script>
```
Since there's really no relevant info on how the `src` content is evaluated, I know it's global scope, but if we could trace evaluation steps or maybe *chain* evals scope somehow (not much documentation about this as well), we could solve this annoying "*evaluated but not readable*" thing.
Any ideas? | HTML5 provides `window.postMessage` which provides a mechanism for safe cross domain messaging, and is supported by Firefox 3, Opera 9.6, and WebKit nightlies.
That said your suggestion above cannot work because it requires fundamentally different behaviour from javascript's `eval`. `eval` parses and executes the given string in the current context -- what you're requesting is that eval change the actual code of the containing function. eg.
```
for (var i = 0; i < 10; i++) eval("; doSomething();");
```
would become
```
for (var i = 0; i < 10; i++) ; doSomething();;
```
meaning the for-loop becomes empty, and `doSomething` would only be called once. Clearly this would result in incredibly difficult to comprehend semantics, as well as making it substantially less safe to use, as eval would gain the ability to directly influence control flow. | I'm not sure this is at all possible due to browser security policies. | Lets solve cross-domain ajax, totally on the client, using script tags | [
"",
"javascript",
"ajax",
"cross-domain",
"eval",
""
] |
A program I'm expanding uses `std::pair<>` a lot.
There is a point in my code at which the compiler throws a rather large:
> Non-static const member, 'const Ptr std::pair, const double*>::first' can't use default assignment operator
I'm not really sure what this is referring to?
Which methods are missing from the Ptr class?
The original call that causes this problem is as follows:
```
vector_of_connections.push_back(pair(Ptr<double,double>,WeightValue*));
```
Where it's putting an `std::pair<Ptr<double,double>, WeightValue*>` onto a vector, where `WeightValue*` is a const variable from about 3 functions back, and the `Ptr<double,double>` is taken from an iterator that works over another vector.
For future reference, `Ptr<double,double>` is a pointer to a `Node` object. | You have a case like this:
```
struct sample {
    int const a; // const!
    sample(int a):a(a) { }
};
```
Now, you use that in some context that requires `sample` to be assignable - possibly in a container (like a map, a vector or something else). This will fail, because the implicitly defined copy assignment operator does something along this line:
```
// pseudo code, for illustration
a = other.a;
```
But `a` is const! You have to make it non-const. It doesn't hurt, because as long as you don't change it, it's still logically const :) You could also suppress the implicit definition by writing a suitable `operator=` yourself. But that's bad, because you still wouldn't be able to change your const member: you'd have an `operator=`, yet the type still wouldn't really be assignable (because the copy and the assigned value would not be identical!):
```
struct sample {
    int const a; // const!
    sample(int a):a(a) { }
    // bad!
    sample & operator=(sample const&) { return *this; }
};
```
**However**, in your case the problem apparently lies within `std::pair<A, B>`. Remember that a `std::map` is sorted on the keys it contains. Because of that, you *cannot* change its keys, since that could easily render the state of the map invalid. That is why the following holds:
```
typedef std::map<A, B> map;
map::value_type <=> std::pair<A const, B>
```
That is, it forbids changing its keys that it contains! So if you do
```
*mymap.begin() = make_pair(anotherKey, anotherValue);
```
The map throws an error at you, because in the pair of some value stored in the map, the `::first` member has a const qualified type! | I faced the same issue, and came across this page.
<http://blog.copton.net/archives/2007/10/13/stdvector/index.html>
From the page:
> Please note that this is no GNU specific problem here. The ISO C++ standard requires that T has an assignment operator (see section 23.2.4.3). I just showed on the example of GNU's STL implementation where this can lead to. | Non-static const member, can't use default assignment operator | [
"",
"c++",
"constants",
""
] |
If the release version produces .pdb files and you can step into every line, put breakpoints etc then why ever bother to build a "debug" version of my components?
I'm using C# for my projects and I didn't have problems debugging release versions. In C++ I had problems debugging optimized code, but in C# it works fine. I'm not talking about silly code blocks like `if(false)`... | One reason is attach vs. launch.
If you launch a Retail process in .NET, the debugging is nearly as good as launching a Debug process. You will likely not notice any difference in your debugging experience.
Attach is a completely different ball game. Both C# and VB are passed the /optimize+ flag for retail builds. This will embed the [DebuggableAttribute](http://msdn.microsoft.com/en-us/library/system.diagnostics.debuggableattribute.aspx) at the assembly level without the DebuggingMode.DisableOptimizations flag. During a process launch, VS / the CLR will communicate to essentially ignore this fact and disable JIT optimizations that impact debugging. During attach, no such thing happens and the JIT/CLR will optimize to its heart's content. I guarantee you, the debugging experience is much worse in this case.
You can experiment with this in VS
* Switch build to Release
* CTRL+F5 to launch with no debugging
* Attach to the process. | The release builds are more optimized, e.g. when I debug release builds it annoys me that local variables' values disappear when they are not going to be used by the runtime. | C# - Is there really a need for debug build in .net | [
"",
"c#",
".net",
"debug-build",
""
] |
I'm trying to learn the basic jist of a Semaphore in the Dining Philosopher problem. Right now, I have an array of class Chopstick, and each Chopstick has a semaphore with 1 available permit:
```
public class Chopstick
{
    Thread holder = null;
    private Semaphore lock = new Semaphore(1);
    public synchronized void take() throws InterruptedException
    {
        this.lock.acquire();
        holder = Thread.currentThread();
    }
    public synchronized void release()
    {
        this.lock.release();
        holder = null;
    }
}
```
The holder variable is used for a function that I am not sure I need:
```
public synchronized void conditionalRelease()
{
    if (holder == Thread.currentThread())
    {
        holder = null;
        this.lock.release();
    }
}
```
The program compiles and runs, but seems to have some trouble releasing the chopsticks. Sometimes, the chopsticks get released, sometimes they don't. When they don't release, the program eventually hangs up when all of the chopsticks are taken and one philosopher is hungry.
Here is the code within the Philosopher class to release the chopstick after a random amount of time:
```
System.out.println(this.name + " is eating");
Thread.sleep(this.getRandTime());
System.out.println(this.name + " has finished eating");
rightChopstick.release();
System.out.println(this.name + " has released the right chopstick");
leftChopstick.release();
System.out.println(this.name + " has released the left chopstick");
```
My program does output "Philosopher 0 has finished eating", for example, and continues execution. The other two lines never output, so obviously something is wrong with the way I am releasing.
Any help is appreciated. | I would take the 'synchronized' keyword out of your method signatures. You're using an external locking mechanism (the semaphore, in this case). The 'synchronized' keyword is trying to get locks using the object's own mutex. You are now locking on 2 resources which I suspect might be causing a deadlock. | The problem is that when thread1 has a specific chopstick and another tries to get the same one it will wait in the `take()`-method on line `this.lock.acquire();` but it will **NOT** release the monitor on the object itself.
If thread1 now tries to release the chopstick, it cannot enter the `release()`-method since it's still locked by the other thread waiting in `take()`. That's a deadlock. | Semaphore problems in Java with the Dining Philosophers | [
"",
"java",
"multithreading",
"locking",
"semaphore",
""
] |
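Following the accepted suggestion, a sketch of the chopstick with the object monitor removed, so the semaphore is the only lock (making `holder` volatile since it is written from different threads is an addition of this sketch, not from the answers):

```java
import java.util.concurrent.Semaphore;

// No 'synchronized' on the methods: a philosopher blocked inside take()
// no longer holds the object monitor, so another thread can still call
// release() on the same chopstick.
class Chopstick {
    private final Semaphore lock = new Semaphore(1);
    private volatile Thread holder = null;

    public void take() throws InterruptedException {
        lock.acquire();
        holder = Thread.currentThread();
    }

    public void release() {
        holder = null;
        lock.release();
    }
}
```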
Is there a Java Application Architecture Guide that is a counterpart of this: <http://www.codeplex.com/AppArchGuide> ? | The following should be helpful to you
1. [Core J2EE Patterns](http://www.corej2eepatterns.com/Patterns2ndEd/index.htm)
2. [Effective Enterprise Java](https://rads.stackoverflow.com/amzn/click/com/0321130006)
3. [Patterns of Enterprise Application Architecture](https://rads.stackoverflow.com/amzn/click/com/0321127420)
4. [Head First Design Patterns](http://oreilly.com/catalog/9780596007126/)
5. [J2EE Blueprints](http://java.sun.com/reference/blueprints/)
6. [Sun Certified Enterprise Architect, Study Guide](https://rads.stackoverflow.com/amzn/click/com/0130449164)
Although, having had a quick glance at the document from codeplex, I can tell you that probably 70-80% of what is in there, applies to Java as well. | I apologize for not reading the very nice link you provided.
I will say that architecture ought to be a language-independent sort of thing. Once you understand the principles it ought to be a matter of mapping the features and implementation details of one platform onto the other.
I hesitate to post links to any Java EE standards, because the changes made in going to EJB 3.0 make a lot of the "best practices" of earlier versions obsolete.
Object-relational mapping is now embodied in JPA; Spring has introduced ideas like dependency injection and aspect-oriented programming.
Right now I'd say that studying [Spring](http://www.springframework.org) would give you the best insight into Java best practices for enterprise architecture. | Java Application Architecture Guide | [
"",
"java",
"architecture",
"jakarta-ee",
""
] |
I found a bug in an application that completely freezes the JVM. The produced stacktrace would provide valuable information for the developers and I would like to retrieve it from the Java console. When the JVM crashes, the console is frozen and I cannot copy the contained text anymore.
Is there way to pipe the Java console directly to a file or some other means of accessing the console output of a Java application?
Update: I forgot to mention, without changing the code. I am a manual tester.
Update 2: This is under Windows XP and it's actually a web start application. Piping the output of
```
javaws jnlp-url
```
does not work (empty file). | Actually, one can activate [tracing](http://java.sun.com/javase/6/docs/technotes/guides/deployment/deployment-guide/tracing_logging.html#tracing) in the Java Control Panel. This will pipe anything that ends up in the Java console into a trace file.
The log files will end up in:
* <user.home>/.java/deployment/log on Unix/Linux
* <User Application Data Folder>\Sun\Java\Deployment\log on Windows
* /~/Library/Caches/Java/log on OS X | (If you can modify the code) you can set the `System.out` field to a different value:
```
System.setOut(new PrintStream(new FileOutputStream(fileName)));
```
If you are running a script (invoking the program via `java`) from `Unix` you could do:
```
/path/to/script.sh >& path/to/output.log
``` | How do I pipe the Java console output to a file? | [
"java",
"testing",
"logging",
"java-web-start"
] |
I would like to pass a pointer (I am putting a file with data in memory with mmap) to processes spawned using fork + exec, but I am stuck on how to pass a pointer to the exec() spawned process?
UPDATE1:
Thanks for your inputs; I do use shared memory, creating it with mmap with the MAP\_INHERIT flag:
Each mapped file and shared memory region created with the mmap() function
is unmapped by a successful call to any of the exec functions, except those
regions mapped with the MAP\_INHERIT option. Regions mapped with the
MAP\_INHERIT option remain mapped in the new process image.
source: <http://www.uwm.edu/cgi-bin/IMT/wwwman?topic=exec(2)&msection=>
UPDATE2:
This is a homework exercise, but I think I must stop thinking about pointers and think about the IPC itself. I guess I will go with trying to mmap the same file in the child process.
Short code example much appreciated.
Thanks in advance for your help. | If you use shared memory, you can't pass the pointer. The pointer will contain the virtual address, which is different from one process to another. You have to exchange offset values, based on the start of the shared memory area.
If you don't use shared memory, you can't exchange pointers of any kind: The other process won't be able to access the memory of your process. | This can't work. The new process should `mmap` the file itself as well. | Passing a pointer to process spawned with exec() | [
"c++",
"linux",
"gcc",
"exec",
"mmap"
] |
(Follow-Up-Question to [How to change LINQ O/R-M table name/source during runtime?](https://stackoverflow.com/questions/582221/how-to-change-linq-o-r-m-table-name-source-during-runtime))
I need to change the table source of a LINQ 2 SQL O/R-Mapper table during runtime. To achieve this, I need to create an [XmlMappingSource](http://msdn.microsoft.com/en-us/library/bb386907.aspx). On command line, I could use SqlMetal to create this mapping file, but I would like to create the mapping file during runtime in memory.
The XmlMappingSource is a simple xml file, looking something like this:
```
<?xml version="1.0" encoding="utf-8"?>
<Database Name="MyDatabase" xmlns="http://schemas.microsoft.com/linqtosql/mapping/2007">
<Table Name="dbo.MyFirstTable" Member="MyFirstTable">
<Type Name="MyFirstTable">
<Column Name="ID" Member="ID" Storage="_ID" DbType="UniqueIdentifier NOT NULL" IsPrimaryKey="true" IsDbGenerated="true" AutoSync="OnInsert" />
<Association Name="WaStaArtArtikel_WaVerPreisanfragen" Member="WaStaArtArtikel" Storage="_WaStaArtArtikel" ThisKey="ArtikelID" OtherKey="ID" IsForeignKey="true" />
</Type>
</Table>
<Table Name="dbo.MySecondTable" Member="MySecondTable">
<Type Name="MySecondTable">
<Column Name="ID" Member="ID" Storage="_ID" DbType="UniqueIdentifier NOT NULL" IsPrimaryKey="true" IsDbGenerated="true" AutoSync="OnInsert" />
<Column Name="FirstTableID" Member="FirstTableID" Storage="_FirstTableID" DbType="UniqueIdentifier NOT NULL" />
<Association Name="MySecondTable_MyFirstTable" Member="MyFirstTable" Storage="_MyFirstTable" ThisKey="FirstTableID" OtherKey="ID" IsForeignKey="true" />
</Type>
</Table>
</Database>
```
This should be possible to create using reflection; for example, I can get the database name from a data context like this:
```
using System.Data.Linq.Mapping;
using System.Xml.Linq;
XDocument mapWriter = new XDocument();
DatabaseAttribute[] catx = (DatabaseAttribute[])typeof(WcfInterface.WaDataClassesDataContext).GetCustomAttributes(typeof(DatabaseAttribute), false);
XElement xDatabase = new XElement("Database");
xDatabase.Add(new XAttribute("Name", catx[0].Name));
mapWriter.Add(xDatabase);
```
My problem: I can't find good documentation of the mapping, so extracting the necessary information is quite error-prone - maybe someone can point me to good docs of the mapping, or, even better, to a code example how to create the mapping file? | Have you considered using LINQ to Entities, the mapping formats for LINQ to Entities are documented. | Use Damien Guard's Open Source T4 templates. They do everything SQLMetal can do and more, and you'll have the full T4 engine behind you. | How to create an XmlMappingSource during runtime? | [
"c#",
".net",
"xml",
"linq-to-sql"
] |
I'm writing PHP code to parse a string. It needs to be as fast as possible, so are regular expressions the way to go? I have a hunch that PHP string functions are more expensive, but it's just a guess. What's the truth?
Here's specifically what I need to do with the string:
Grab the first half (based on the third location of a substring "000000") and compare its hash to the next 20 bytes, throwing away anything left.
Parse the 9th byte through the next "000000" as one piece of data. Then grab the next 19 bytes after that, and split that into 8 (toss 1) and 8. Then I do some other stuff that converts those two 8 byte strings into dates.
So that's the kind of thing I need to do. | It depends on your case: if you're trying to do something fairly basic (eg: search for a string, replace a substring with something else), then the regular string functions are the way to go. If you want to do something more complicated (eg: search for IP addresses), then the Regex functions are definitely a better choice.
I haven't profiled regexes so I can't say that they'll be faster at runtime, but I can tell you that the extra time spent hacking together the equivalent using the basic functions wouldn't be worth it.
---
**Edit with the new information in the OP:**
It sounds as though you actually need to do a number of small string operations here. Since each one individually is quite basic, and I doubt you'd be able to do all those steps (or even a couple of those steps) at one time using a regex, I'd go with the basic functions:
> Grab the first half (based on the third location of a substring "000000") and compare its hash to the next 20 bytes, throwing away anything left.
Use: [`strpos()`](http://php.net/strpos) and [`substr()`](http://php.net/substr)
Or: `/^(.*?0{6}.*?0{6}.*?)0{6}/`
> Then grab the next 19 bytes after that, and split that into 8 (toss 1) and 8.
Use: `substr()` - *(I assume you mean 17 bytes here -- 8 + 1 + 8)*
```
$part1 = substr($myStr, $currPos, 8);
$part2 = substr($myStr, $currPos + 9, 8);
``` | I believe there is a threshold from which a regular expression is faster than a bunch of PHP string function calls. Anyway, depends a lot on what you're doing. You have to find out the balance.
Now that you edited your question. I'd use string functions for what you're trying to accomplish. strpos() and substr() is what comes to mind at a first glance. | Which is more efficient, PHP string functions or regex in PHP? | [
"php",
"regex",
"string",
"performance"
] |