| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I have an instance of a `System.Drawing.Bitmap` and would like to make it available to my WPF app in the form of a `System.Windows.Media.Imaging.BitmapImage`.
What would be the best approach for this?
|
Thanks to Hallgrim, here is the code I ended up with:
```
ScreenCapture = System.Windows.Interop.Imaging.CreateBitmapSourceFromHBitmap(
    bmp.GetHbitmap(),   // caution: the HBITMAP returned by GetHbitmap() should
                        // later be released with the native DeleteObject(),
                        // or each call leaks a GDI handle
    IntPtr.Zero,
    System.Windows.Int32Rect.Empty,
    BitmapSizeOptions.FromWidthAndHeight(width, height));
```
I also ended up binding to a `BitmapSource` instead of the `BitmapImage` from my original question.
|
How about loading it from MemoryStream?
```
using (MemoryStream memory = new MemoryStream())
{
    bitmap.Save(memory, ImageFormat.Png);
    memory.Position = 0;

    BitmapImage bitmapImage = new BitmapImage();
    bitmapImage.BeginInit();
    bitmapImage.StreamSource = memory;
    bitmapImage.CacheOption = BitmapCacheOption.OnLoad;
    bitmapImage.EndInit();
}
```
|
Load a WPF BitmapImage from a System.Drawing.Bitmap
|
[
"",
"c#",
"wpf",
"bitmap",
""
] |
Are there any dictionary classes in the .NET base class library which allow duplicate keys to be used? The only solution I've found is to create, for example, a class like:
```
Dictionary<string, List<object>>
```
But this is quite irritating to actually use. In Java, I believe a MultiMap accomplishes this, but cannot find an analog in .NET.
|
If you're using .NET 3.5, use the [`Lookup`](http://msdn.microsoft.com/en-us/library/bb460184.aspx) class.
EDIT: You generally create a `Lookup` using [`Enumerable.ToLookup`](http://msdn.microsoft.com/en-us/library/system.linq.enumerable.tolookup.aspx). This does assume that you don't need to change it afterwards - but I typically find that's good enough.
If that *doesn't* work for you, I don't think there's anything in the framework which will help - and using the dictionary is as good as it gets :(
|
The List class actually works quite well for key/value collections containing duplicates where you would like to iterate over the collection. Example:
```
List<KeyValuePair<string, string>> list = new List<KeyValuePair<string, string>>();

// add some values to the collection here

for (int i = 0; i < list.Count; i++)
{
    Print(list[i].Key, list[i].Value);
}
```
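As a cross-language aside (not taken from either answer above), the multimap shape the question describes is a one-liner in several standard libraries; for example, Python's `collections.defaultdict` wraps the same `Dictionary<string, List<object>>` idea without the boilerplate:

```python
from collections import defaultdict

# Each key maps to a list of values; looking up a missing key
# creates an empty list automatically, so duplicate keys are trivial.
multimap = defaultdict(list)
multimap["fruit"].append("apple")
multimap["fruit"].append("pear")
multimap["veg"].append("leek")

print(multimap["fruit"])  # ['apple', 'pear']
```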
|
Duplicate keys in .NET dictionaries?
|
[
"",
"c#",
".net",
"dictionary",
"multimap",
""
] |
This may seem like a programming 101 question and I had thought I knew the answer but now find myself needing to double check. In this piece of code below, will the exception thrown in the first catch block then be caught by the general Exception catch block below?
```
try {
// Do something
} catch(IOException e) {
throw new ApplicationException("Problem connecting to server");
} catch(Exception e) {
// Will the ApplicationException be caught here?
}
```
I always thought the answer would be no, but now I have some odd behaviour that could be caused by this. The answer is probably the same for most languages but I'm working in Java.
|
No, since the new `throw` is not in the `try` block directly.
|
No. It's very easy to check.
```
public class Catch {
    public static void main(String[] args) {
        try {
            throw new java.io.IOException();
        } catch (java.io.IOException exc) {
            System.err.println("In catch IOException: " + exc.getClass());
            throw new RuntimeException();
        } catch (Exception exc) {
            System.err.println("In catch Exception: " + exc.getClass());
        } finally {
            System.err.println("In finally");
        }
    }
}
```
Should print:
```
In catch IOException: class java.io.IOException
In finally
Exception in thread "main" java.lang.RuntimeException
at Catch.main(Catch.java:8)
```
Technically, that could have been a compiler bug, implementation-dependent or unspecified behaviour, or something similar. However, the JLS is nailed down pretty well, and the compilers are good enough for this sort of simple case (corner cases of generics may be a different matter).
Also note that if you swap the two catch blocks around, the code won't compile: the second catch block would be completely unreachable.
Note that the finally block always runs even if a catch block is executed (barring silly cases, such as infinite loops, attaching through the tools interface and killing the thread, rewriting bytecode, etc.).
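The asker suspects the answer is the same in most languages, and it is. As an illustrative sketch (not part of the original answer), Python behaves identically: an exception raised inside a handler is never offered to the sibling handlers of the same `try` statement:

```python
def demo():
    try:
        raise OSError("original failure")
    except OSError:
        # Raising here leaves the try statement entirely; the
        # sibling handler below is never consulted.
        raise RuntimeError("rethrown")
    except Exception:
        return "caught by sibling handler"  # unreachable

try:
    demo()
except RuntimeError as exc:
    print("escaped to the caller:", exc)
```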
|
Exception thrown inside catch block - will it be caught again?
|
[
"",
"java",
"exception",
""
] |
I'm building an installer for an application. The user gets to select a datasource they have configured and nominate what type of database it is. I want to confirm that the database type is indeed Postgres, and if possible, what version of Postgres they are running by sending a SQL statement to the datasource.
|
Try this:
```
mk=# SELECT version();
version
-----------------------------------------------------------------------------------------------
PostgreSQL 8.3.3 on i486-pc-linux-gnu, compiled by GCC cc (GCC) 4.2.3 (Ubuntu 4.2.3-2ubuntu7)
(1 row)
```
The command works too in MySQL:
```
mysql> select version();
+--------------------------------+
| version() |
+--------------------------------+
| 5.0.32-Debian_7etch1~bpo.1-log |
+--------------------------------+
1 row in set (0.01 sec)
```
SQLite does not support `SELECT version()`, but it exposes the same information through `SELECT sqlite_version();`.
|
`SHOW server_version;`
(for completeness)
|
How can I confirm a database is Postgres & what version it is using SQL?
|
[
"",
"sql",
"postgresql",
""
] |
Are there any recommended crypto libraries for Python? I know I've asked something similar in [x509 certificate parsing libraries for Java](https://stackoverflow.com/questions/143523/), but I should've split the question in two.
What I need is the ability to parse X.509 Certificates to extract the information contained in them.
Looking around, I've found two options:
* Python OpenSSL Wrappers ([http://sourceforge.net/projects/pow](http://sourceforge.net/projects/pow "Python OpenSSL Wrappers"))
* [pyOpenSSL](https://github.com/pyca/pyopenssl)
Of the two, pyOpenSSL seems to be the most "maintained", but I'd like some feedback from anybody who might have experience with them.
|
You might want to try [keyczar](http://www.keyczar.org/) as mentioned by me in your other post, since that library actually has implementations for both python and java. That would make it easier to use it in both contexts.
A word of warning: I have not actually used this library 8(, so please take this with a grain of salt.
|
Use M2Crypto; it is the most complete tool, IMHO.
|
Crypto/X509 certificate parsing libraries for Python
|
[
"",
"python",
"cryptography",
"openssl",
"x509",
""
] |
I am building a menu in HTML/CSS/JS and I need a way to prevent the text in the menu from being highlighted when double-clicked on. I need a way to pass the id's of several divs into a function and have highlighting turned off within them.
So when the user accidentally (or on purpose) double clicks on the menu, the menu shows its sub-elements but its text does not highlight.
There are a number of scripts out there floating around on the web, but many seem outdated. What's the best way?
|
In Gecko- and WebKit-based browsers (Mozilla, Firefox, Camino, Safari, Google Chrome) you can use this:
```
div.noSelect {
-moz-user-select: none; /* mozilla browsers */
-khtml-user-select: none; /* webkit browsers */
}
```
For IE there is no CSS option, but you can capture the `ondragstart` event and return `false`.
**Update**
Browser support for this property has expanded since 2008.
```
div.noSelect {
-webkit-user-select: none; /* Safari */
-ms-user-select: none; /* IE 10 and IE 11 */
user-select: none; /* Standard syntax */
}
```
<https://css-tricks.com/almanac/properties/u/user-select/>
|
You could use this CSS to simply hide the selection color (not supported by IE):
```
#id::-moz-selection {
background: transparent;
}
#id::selection {
background: transparent;
}
```
|
What is the best way to prevent highlighting of text when clicking on its containing div in javascript?
|
[
"",
"javascript",
"html",
"css",
""
] |
I have made an SVG image, or more like a mini application, for viewing graphs of data. I want to include this in an HTML page and call methods on the SVG image.
Example:
```
<object id="img" data="image.svg" width="500" height="300"/>
<script>document.getElementById("img").addData([1,23,4]);</script>
```
Is it at all possible to call methods on the SVG document? If so, how do I declare the methods to expose in the SVG file, and how do I call them from the HTML document?
|
Solution:
in svg:
```
<script>document.method = function() {}</script>
```
in html (using prototype to add event listeners):
```
<script>$("img").observe("load", function() { $("img").contentDocument.method(); });</script>
```
You need to listen for the load event on the image. Once the image is loaded, you can use `element.contentDocument` to access the document variable of the SVG document. Any methods added to it will be available.
|
Things are actually simpler than you might expect. You do not need to read a convoluted tutorial to understand the concept, nor do you have to use jQuery. Here is the basic layout:
* A JavaScript function in your html document.
```
<script type="text/javascript">
    function change() {
        var s = document.getElementById("cube");
        s.setAttribute("stroke", "#0000FF"); // note the leading "#" in the colour value
    }
</script>
```
* An SVG element that we are trying to manipulate.
```
<svg width="100" height="100" style="float: left;">
    <rect x="10" y="10" width="60" height="60" id="cube" onclick="change()"
          stroke="#F53F0C" stroke-width="10" fill="#F5C60C" />
</svg>
```
* An inline Button that would trigger the change. Notice that in my example the event can also be triggered by clicking on the cube itself.
```
<button onclick="change()">Click</button>
```
|
Is it possible to manipulate an SVG document embedded in an HTML doc with JavaScript?
|
[
"",
"javascript",
"html",
"svg",
""
] |
I would like to create a page where all images which reside on my website are listed with title and alternative representation.
I already wrote a little program to find and load all HTML files, but now I am stuck at how to extract `src`, `title` and `alt` from this HTML:
```
<img src="/image/fluffybunny.jpg" title="Harvey the bunny" alt="a cute little fluffy bunny" />
```
I guess this should be done with some regex, but since the order of the tags may vary, and I need all of them, I don't really know how to parse this in an elegant way (I could do it the hard char by char way, but that's painful).
|
## EDIT: now that I know better

Using regexps to solve this kind of problem is [a bad idea](https://stackoverflow.com/questions/1732348/regex-match-open-tags-except-xhtml-self-contained-tags/1732454#1732454) and will likely lead to unmaintainable and unreliable code. Better to use an [HTML parser](http://simplehtmldom.sourceforge.net/).

## Solution with regexps

In that case it's better to split the process into two parts:

* get all the `img` tags
* extract their metadata

I will assume your doc is not strict xHTML, so you can't use an XML parser. E.g., with this web page's source code:
```
/* preg_match_all matches the regexp against the whole $html string and
   outputs every match as an array in $result. The "i" option makes the
   match case-insensitive. */
preg_match_all('/<img[^>]+>/i', $html, $result);
print_r($result);
Array
(
[0] => Array
(
[0] => <img src="/Content/Img/stackoverflow-logo-250.png" width="250" height="70" alt="logo link to homepage" />
[1] => <img class="vote-up" src="/content/img/vote-arrow-up.png" alt="vote up" title="This was helpful (click again to undo)" />
[2] => <img class="vote-down" src="/content/img/vote-arrow-down.png" alt="vote down" title="This was not helpful (click again to undo)" />
[3] => <img src="http://www.gravatar.com/avatar/df299babc56f0a79678e567e87a09c31?s=32&d=identicon&r=PG" height=32 width=32 alt="gravatar image" />
[4] => <img class="vote-up" src="/content/img/vote-arrow-up.png" alt="vote up" title="This was helpful (click again to undo)" />
[...]
)
)
```
Then we get all the img tag attributes with a loop :
```
$img = array();
foreach ($result[0] as $img_tag)   // the matched tags are in $result[0]
{
    preg_match_all('/(alt|title|src)=("[^"]*")/i', $img_tag, $img[$img_tag]);
}
print_r($img);
Array
(
[<img src="/Content/Img/stackoverflow-logo-250.png" width="250" height="70" alt="logo link to homepage" />] => Array
(
[0] => Array
(
[0] => src="/Content/Img/stackoverflow-logo-250.png"
[1] => alt="logo link to homepage"
)
[1] => Array
(
[0] => src
[1] => alt
)
[2] => Array
(
[0] => "/Content/Img/stackoverflow-logo-250.png"
[1] => "logo link to homepage"
)
)
[<img class="vote-up" src="/content/img/vote-arrow-up.png" alt="vote up" title="This was helpful (click again to undo)" />] => Array
(
[0] => Array
(
[0] => src="/content/img/vote-arrow-up.png"
[1] => alt="vote up"
[2] => title="This was helpful (click again to undo)"
)
[1] => Array
(
[0] => src
[1] => alt
[2] => title
)
[2] => Array
(
[0] => "/content/img/vote-arrow-up.png"
[1] => "vote up"
[2] => "This was helpful (click again to undo)"
)
)
[<img class="vote-down" src="/content/img/vote-arrow-down.png" alt="vote down" title="This was not helpful (click again to undo)" />] => Array
(
[0] => Array
(
[0] => src="/content/img/vote-arrow-down.png"
[1] => alt="vote down"
[2] => title="This was not helpful (click again to undo)"
)
[1] => Array
(
[0] => src
[1] => alt
[2] => title
)
[2] => Array
(
[0] => "/content/img/vote-arrow-down.png"
[1] => "vote down"
[2] => "This was not helpful (click again to undo)"
)
)
[<img src="http://www.gravatar.com/avatar/df299babc56f0a79678e567e87a09c31?s=32&d=identicon&r=PG" height=32 width=32 alt="gravatar image" />] => Array
(
[0] => Array
(
[0] => src="http://www.gravatar.com/avatar/df299babc56f0a79678e567e87a09c31?s=32&d=identicon&r=PG"
[1] => alt="gravatar image"
)
[1] => Array
(
[0] => src
[1] => alt
)
[2] => Array
(
[0] => "http://www.gravatar.com/avatar/df299babc56f0a79678e567e87a09c31?s=32&d=identicon&r=PG"
[1] => "gravatar image"
)
)
[..]
)
)
```
Regexps are CPU-intensive, so you may want to cache this page. If you have no cache system, you can roll your own by using [ob\_start](https://www.php.net/manual/fr/function.ob-start.php) and loading/saving from a text file.
## How does this stuff work?

First, we use [preg\_match\_all](https://www.php.net/manual/fr/function.preg-match-all.php), a function that gets every string matching the pattern and outputs them in its third parameter.
The regexps:
```
<img[^>]+>
```
We apply it to the whole HTML page. It can be read as *every string that starts with "`<img`", contains only non-"`>`" characters after that, and ends with a "`>`"*.
```
(alt|title|src)=("[^"]*")
```
We apply it successively to each `img` tag. It can be read as *every string starting with "alt", "title" or "src", then an "=", then a `"`, then a run of characters that are not `"`, ending with a `"`; the parentheses capture the sub-strings*.
Finally, whenever you want to deal with regexps, it's handy to have good tools to quickly test them. Check this [online regexp tester](http://regex.larsolavtorvik.com/).
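The same two-step approach ports directly to other regex engines. As an illustration only (not from the original answer), here it is in Python, using the question's sample tag:

```python
import re

html = ('<img src="/image/fluffybunny.jpg" title="Harvey the bunny" '
        'alt="a cute little fluffy bunny" />')

# Step 1: grab every whole <img ...> tag.
tags = re.findall(r'<img[^>]+>', html, re.IGNORECASE)

# Step 2: pull src/title/alt out of each tag, order-independent.
images = [dict(re.findall(r'(alt|title|src)="([^"]*)"', tag, re.IGNORECASE))
          for tag in tags]

print(images[0]["src"])    # /image/fluffybunny.jpg
print(images[0]["title"])  # Harvey the bunny
```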
EDIT : answer to the first comment.
It's true that I did not think about the (hopefully few) people using single quotes.
Well, if you use only `'`, just replace all the `"` with `'`.
If you mix both, first you should slap yourself :-), then try to use `("|')` instead of `"` and `[^"']` to replace `[^"]`.
|
```
$url = "http://example.com";
$html = file_get_contents($url);

$doc = new DOMDocument();
@$doc->loadHTML($html);   // "@" silences warnings about malformed markup

$tags = $doc->getElementsByTagName('img');
foreach ($tags as $tag) {
    echo $tag->getAttribute('src'), ' ',
         $tag->getAttribute('title'), ' ',
         $tag->getAttribute('alt'), "\n";
}
```
|
How to extract img src, title and alt from html using php?
|
[
"",
"php",
"html",
"regex",
"html-parsing",
"html-content-extraction",
""
] |
We have got loads of options for the `PHP + MySQL + Apache` combo... Which is the best pack among these?
Let's confine our ideas to `WAMP vs. XAMPP`: is there a better option to go for?
I created an online programming contest web app called `CodeFire` on `XAMPP`; later I had to switch to `WAMP`, where none of the `php` scripts worked properly... What standard should I follow?
|
I like XAMPP, personally. I have an install running on a thumbdrive that I carry around that's pretty much my development environment for LAMP web dev on any machine I happen to be at (I'm mostly on Windows client machines).
Small, fully-functional, and stable - works really well for my needs.
|
I like [WAMP](http://www.wampserver.com/en/) the best, real simple interface and I can easily switch between different versions of PHP 5.26/4.44/4.3.9, MySQL 5/4, and Apache 2/1.3
|
XAMPP and WAMP in the LAMP, what's the best?
|
[
"",
"php",
"mysql",
"apache",
""
] |
I have a temp table with the exact structure of a concrete table `T`. It was created like this:
```
select top 0 * into #tmp from T
```
After processing and filling in content into `#tmp`, I want to copy the content back to `T` like this:
```
insert into T select * from #tmp
```
This is okay as long as `T` doesn't have identity column, but in my case it does. Is there any way I can ignore the auto-increment identity column from `#tmp` when I copy to `T`? My motivation is to avoid having to spell out every column name in the Insert Into list.
EDIT: toggling identity\_insert wouldn't work, because the pkeys in `#tmp` may collide with those in `T` if rows were inserted into `T` outside of my script; that is, if `#tmp` has auto-incremented the pkey to sync with `T`'s in the first place.
|
As identity will be generated during insert anyway, could you simply remove this column from #tmp before inserting the data back to T?
```
alter table #tmp drop column id
```
**UPD:** Here's an example I've tested in SQL Server 2008:
```
create table T(ID int identity(1,1) not null, Value nvarchar(50))
insert into T (Value) values (N'Hello T!')
select top 0 * into #tmp from T
alter table #tmp drop column ID
insert into #tmp (Value) values (N'Hello #tmp')
insert into T select * from #tmp
drop table #tmp
select * from T
drop table T
```
|
`SET IDENTITY_INSERT T ON` (the table name is required)
your INSERT command, which must then spell out the column list
`SET IDENTITY_INSERT T OFF`
|
Insert into ... Select *, how to ignore identity?
|
[
"",
"sql",
"sql-server",
""
] |
**EDIT: This question is more about language engineering than C++ itself. I used C++ as an example to show what I wanted, mostly because I use it daily. I didn't want to know how it works on C++ but open a discussion on how it *could* be done.**
That's not the way it works right now; it's the way I *wish* it could be done. That would break C compatibility for sure, but that's what I think extern "C" is all about.
I mean, in every function or method that you declare right now, you have to explicitly write the reference operator to say that the object will be sent by reference. I wish that every non-POD type would automatically be sent by reference, because I use that a lot, actually for every object that is more than 32 bits in size, and that's almost every class of mine.
Let's exemplify how it's right now, assume *a*, *b* and *c* to be classes:
```
class example {
public:
int just_use_a(const a &object);
int use_and_mess_with_b(b &object);
void do_nothing_on_c(c object);
};
```
Now what I wish:
```
class example {
public:
int just_use_a(const a object);
int use_and_mess_with_b(b object);
extern "C" void do_nothing_on_c(c object);
};
```
Now, do\_nothing\_on\_c() could behave just like it is today.
That would be interesting at least for me, and feels much clearer. Also, if you *know* every non-POD parameter is coming by reference, I believe the mistakes would be the same as if you had to declare it explicitly.
Another point of view on this change, from someone coming from C: the reference operator seems to me a way to get the variable's *address*; that's the operator I used for getting pointers. I mean, it is the same operator but with different semantics in different contexts. Doesn't that feel a little bit wrong to you too?
|
I guess you're missing the point of C++, and C++ semantics. You missed the fact **C++ is correct in passing (almost) everything by value, because it's the way it's done in C. Always**. But not only in C, as I'll show you below...
## Parameters Semantics on C
In C, everything is passed by value. "primitives" and "PODs" are passed by copying their value. Modify them in your function, and the original won't be modified. Still, the cost of copying some PODs could be non-trivial.
When you use the pointer notation (the \*), you're not passing by reference. You're passing a copy of the address, which is more or less the same, but with one subtle difference:
```
typedef struct { int value ; } P ;

/* p is a *copy* of the caller's pointer to P */
void doSomethingElse(P * p)
{
    p->value = 32 ;
    p = malloc(sizeof(P)) ; /* Don't bother with the leak */
    p->value = 45 ;
}

void doSomething()
{
    P * p = malloc(sizeof(P)) ;
    p->value = 25 ;
    doSomethingElse(p) ;
    int i = p->value ;
    /* Value of p->value ? 25 ? 32 ? 45 ? */
}
```
The final value of p->value is 32. Because p was passed by copying the value of the address. So the original p was not modified (and the new one was leaked).
## Parameters Semantics on Java and C Sharp
It can be surprising for some, but in Java, everything is copied by value, too. The C example above would give exactly the same results in Java. This is almost what you want, but you would not be able to pass primitives "by reference/pointer" as easily as in C.
In C#, they added the "ref" keyword. It works more or less like the reference in C++. The point is, in C#, you have to mention it both on the function declaration and on each and every call. I guess this is not what you want, again.
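Python (not discussed in the original answer) follows the same rule as Java: the reference to the object is itself copied into the function, so mutating the object is visible to the caller, but rebinding the parameter is not. A minimal sketch mirroring the C example above:

```python
def do_something_else(p):
    p["value"] = 32       # mutates the shared object: the caller sees this
    p = {"value": 45}     # rebinds only the local name; the caller's p is untouched

def do_something():
    p = {"value": 25}
    do_something_else(p)
    return p["value"]     # 32, exactly as in the C version

print(do_something())  # 32
```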
## Parameters Semantics on C++
In C++, almost everything is passed by copying the value. When you're using nothing but the type of the symbol, you're copying the symbol (like it is done in C). This is why, when you're using the \*, you're passing a copy of the address of the symbol.
But when you're using the &, then assume you are passing the real object (be it struct, int, pointer, whatever): The reference.
It is easy to mistake it for syntactic sugar (i.e., behind the scenes it works like a pointer, and the generated code is the same as for a pointer). But...
The truth is that the reference is more than syntactic sugar.
* Unlike pointers, it authorizes manipulating the object as if on the stack.
* Unlike pointers, when associated with the const keyword, it authorizes implicit promotion from one type to another (through constructors, mainly).
* Unlike pointers, the symbol is not supposed to be NULL/invalid.
* Unlike "by-copy", you are not spending useless time copying the object.
* Unlike "by-copy", you can use it as an [out] parameter.
* Unlike "by-copy", you can use the full range of OOP in C++ (i.e. you pass a full object to a function expecting an interface).
So, references have the best of both worlds.
Let's see the C example, but with a C++ variation on the doSomethingElse function:
```
struct P { int value ; } ;

// p is a reference to a pointer to P
void doSomethingElse(P * & p)
{
    p->value = 32 ;
    p = (P *) malloc(sizeof(P)) ; // Don't bother with the leak
    p->value = 45 ;
}

void doSomething()
{
    P * p = (P *) malloc(sizeof(P)) ;
    p->value = 25 ;
    doSomethingElse(p) ;
    int i = p->value ;
    // Value of p->value ? 25 ? 32 ? 45 ?
}
```
The result is 45, and the old p was leaked, replaced by the new p. Because, unlike the C code, we're not passing a copy of the pointer, but a reference to the pointer, that is, the pointer itself.
When working with C++, the above example must be crystal clear. If it is not, then you're missing something.
## Conclusion
**C++ is pass-by-copy/value because it is the way everything works, be it in C, in C# or in Java (even in JavaScript... :-p ...). And like C#, C++ has a reference operator/keyword, *as a bonus*.**
Now, as far as I understand it, you are perhaps doing what I half-jokingly call **C+**, that is, C with some limited C++ features.
Perhaps your solution is using typedefs (it will enrage your C++ colleagues, though, to see the code polluted by useless typedefs...), but doing this will only obfuscate the fact that you're really missing C++ there.
As said in another post, you should change your mindset from C development to C++ development, or perhaps move to another language. But do not keep programming the C way with C++ features, because by consciously ignoring/obfuscating the power of the idioms you use, you'll produce suboptimal code.
Note: And do not pass by copy anything else than primitives. You'll castrate your function from its OO capacity, and in C++, this is not what you want.
## Edit
*The question was* somewhat *modified (see <https://stackoverflow.com/revisions/146271/list> ). I let my original answer, and answer the new questions below.*
**What you think about default pass-by-reference semantics on C++?** Like you said, it would break compatibility, and you'll have different pass-by for primitives (i.e. built-in types, which would still be passed by copy) and structs/objects (which would be passed as references). You would have to add another operator to mean "pass-by-value" (the extern "C" is quite awful and already used for something else quite different). No, I really like the way it is done today in C++.
**[...] the reference operator seems to me a way to get the variable address, that's the way I used for getting pointers. I mean, it is the same operator but with different semantics in different contexts; doesn't that feel a little bit wrong for you too?** Yes and no. Operator >> changed its semantics when used with C++ streams, too. Then, you can use operator += to replace strcat. I guess the operator & got used because of its significance as the "opposite of pointer", and because they did not want to use yet another symbol (ASCII is limited, and the scope operator :: as well as pointer -> show that few other symbols are usable). But now, if & bothers you, && will really unnerve you, as they added a unary && in C++0x (a kind of super-reference...). I've yet to digest it myself...
|
A compiler option that totally changes the meaning of a section of code sounds like a really bad idea to me. Either get used to the C++ syntax or find a different language.
|
Default pass-by-reference semantics in C++
|
[
"",
"c++",
"programming-languages",
"syntax",
"semantics",
""
] |
I am currently working on a web application that uses the ASP.NET 2.0 framework. I need to redirect to a certain page, say SessionExpired.aspx, when the user session expires. There are a lot of pages in the project, so adding code to every page of the site is not really a good solution. I do have MasterPages, though, which I think might help.
Thanks!
|
You can handle this in global.asax in the Session\_Start event by checking for a session cookie in the request. If the request carries a session cookie even though a *new* session is starting, the previous session must have expired:
```
public void Session_OnStart()
{
    if (HttpContext.Current.Request.Cookies["ASP.NET_SessionId"] != null)
    {
        HttpContext.Current.Response.Redirect("SessionTimeout.aspx");
    }
}
```
Alas I have not found any elegant way of finding out the name of the session cookie.
|
I usually add an HtmlMeta control to the Page.Header.Controls collection on the master page when the user has "logged in". Set it to Refresh to your SessionExpired.aspx page with an appropriate timeout length, and you're good to go.
|
How can I redirect to a page when the user session expires?
|
[
"",
"c#",
"asp.net",
""
] |
Is there a way of comparing two bitmasks in Transact-SQL to see if any of the bits match? I've got a User table with a bitmask for all the roles the user belongs to, and I'd like to select all the users that have *any* of the roles in the supplied bitmask. So using the data below, a roles bitmask of 6 (designer+programmer) should select Dave, Charlie and Susan, but not Nick.
```
User Table
----------
ID Username Roles
1 Dave 6
2 Charlie 2
3 Susan 4
4 Nick 1
Roles Table
-----------
ID Role
1 Admin
2 Programmer
4 Designer
```
Any ideas? Thanks.
|
The answer to your question is to use the Bitwise `&` like this:
```
SELECT * FROM UserTable WHERE Roles & 6 != 0
```
The `6` can be exchanged for any combination of your bitfield where you want to check that any user has one or more of those bits. When trying to validate this I usually find it helpful to write this out longhand in binary. Your user table looks like this:
```
1 2 4
------------------
Dave 0 1 1
Charlie 0 1 0
Susan 0 0 1
Nick 1 0 0
```
Your test (6) is this
```
1 2 4
------------------
Test 0 1 1
```
If we go through each person doing the bitwise AND against the test, we get these:
```
1 2 4
------------------
Dave 0 1 1
Test 0 1 1
Result 0 1 1 (6)
Charlie 0 1 0
Test 0 1 1
Result 0 1 0 (2)
Susan 0 0 1
Test 0 1 1
Result 0 0 1 (4)
Nick 1 0 0
Test 0 1 1
Result 0 0 0 (0)
```
The above should demonstrate that any record where the result is not zero has one or more of the requested flags set.
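The longhand tables above can also be checked mechanically. A small Python sketch (illustration only, mirroring the sample data) applies the same `roles & mask` test:

```python
users = {"Dave": 6, "Charlie": 2, "Susan": 4, "Nick": 1}
mask = 6  # Programmer (2) + Designer (4)

# A non-zero AND result means at least one requested role bit is set.
matches = [name for name, roles in users.items() if roles & mask]
print(matches)  # ['Dave', 'Charlie', 'Susan']
```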
Edit: Here's the test case should you want to check this
```
with test (id, username, roles)
AS
(
SELECT 1,'Dave',6
UNION SELECT 2,'Charlie',2
UNION SELECT 3,'Susan',4
UNION SELECT 4,'Nick',1
)
select * from test where (roles & 6) != 0 -- returns Dave, Charlie & Susan
```
or
```
select * from test where (roles & 2) != 0 -- returns Dave & Charlie
```
or
```
select * from test where (roles & 7) != 0 -- returns Dave, Charlie, Susan & Nick
```
|
Use the Transact-SQL [bitwise AND operator](http://msdn.microsoft.com/en-us/library/ms174965.aspx) `&` and compare the result to zero. Even better, instead of coding the roles as bits of an integer column, use boolean columns, one for each role; then your query would simply test `designer = 1 OR programmer = 1`. If you expect the roles to change a lot over the lifetime of your application, use a many-to-many table to map the association between users and their roles. Both alternatives are more portable than relying on the existence of the bitwise AND operator.
|
Comparing two bitmasks in SQL to see if any of the bits match
|
[
"",
"sql",
"sql-server",
"t-sql",
"bitmask",
""
] |
In PHP, a string enclosed in "double quotes" will be parsed for variables to replace whereas a string enclosed in 'single quotes' will not. In Python, does this also apply?
|
[No](http://docs.python.org/reference/lexical_analysis.html#strings):
> ### 2.4.1. String and Bytes literals
>
> ...In plain English: Both types of literals can be enclosed in matching single quotes (`'`) or double quotes (`"`). They can also be enclosed in matching groups of three single or double quotes (these are generally referred to as triple-quoted strings). The backslash (`\`) character is used to escape characters that otherwise have a special meaning, such as newline, backslash itself, or the quote character...
|
Python is one of the few (?) languages where ' and " have identical functionality. The choice for me usually depends on what is inside. If I'm going to quote a string that has single quotes within it, I'll use double quotes, and vice versa, to cut down on having to escape characters in the string.
Examples:
```
"this doesn't require escaping the single quote"
'she said "quoting is easy in python"'
```
This is documented on the "String Literals" page of the python documentation:
* <http://docs.python.org/2/reference/lexical_analysis.html#string-literals> (2.x)
* <http://docs.python.org/3/reference/lexical_analysis.html#string-and-bytes-literals> (3.x)
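A short interpreter check (not part of the original answer) confirms both points: the two quote styles produce identical string values, and neither interpolates variables the way PHP's double quotes do; in Python, interpolation is opt-in via f-strings, regardless of quote style:

```python
name = "World"

single = 'Hello $name'
double = "Hello $name"
interpolated = f"Hello {name}"   # explicit opt-in, works with ' or "

print(single == double)  # True: quote style never changes the value
print(interpolated)      # Hello World
```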
|
Is there any difference between "string" and 'string' in Python?
|
[
"",
"python",
"string",
"quotes",
"double-quotes",
""
] |
Is it possible to have a `MySQLi prepared statement` within the `fetch()` call of a previous statement? If not, what's the best way around it?
Example code:
```
if ($stmt = $link->prepare("SELECT item FROM data WHERE id = ?")) {
    $stmt->bind_param("i", $id);
    $stmt->execute();
    $stmt->bind_result($item);

    while ($stmt->fetch()) {
        /* Other code here */
        $itemSummary = $item + $magic;

        if ($stmt2 = $link->prepare("INSERT INTO summaries (itemID, summary) VALUES (?, ?)")) {
            $stmt2->bind_param("is", $itemID, $itemSummary);
            $stmt2->execute();
            $stmt2->close();
        }
    }
}
```
|
You should be able to do that, although you may have to start a second connection.
|
This is the single connection way:
```
if($stmt = $link->prepare("SELECT item FROM data WHERE id = ?")) {
    $stmt->bind_param("i", $id);
    $stmt->execute();
    $stmt->store_result(); // <-- this
    $stmt->bind_result($item);
    while( $stmt->fetch() ) {
        /* Other code here */
        $itemSummary = $item + $magic;
        if($stmt2 = $link->prepare("INSERT INTO summaries (itemID, summary) VALUES (?, ?)")) {
            $stmt2->bind_param("is", $itemID, $itemSummary);
            $stmt2->execute();
            $stmt2->store_result(); // <-- this
            /*DO WHATEVER WITH STMT2*/
            $stmt2->close();
        }
    }
}
```
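For comparison, the same buffer-then-write pattern (what `store_result()` achieves above) can be sketched in Python with `sqlite3`: fully fetch the outer result set before issuing the inner inserts on the same connection. Table and column names here just mirror the question's hypothetical schema, and `magic` is a stand-in value.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data (id INTEGER, item INTEGER)")
conn.execute("CREATE TABLE summaries (itemID INTEGER, summary INTEGER)")
conn.executemany("INSERT INTO data VALUES (?, ?)", [(1, 10), (2, 20)])

magic = 1
# buffer the outer result set first -- the analogue of store_result()
rows = conn.execute("SELECT id, item FROM data").fetchall()
for item_id, item in rows:
    conn.execute("INSERT INTO summaries (itemID, summary) VALUES (?, ?)",
                 (item_id, item + magic))
conn.commit()
```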
|
Possible to use multiple/nested MySQLi statements?
|
[
"",
"php",
"mysql",
"mysqli",
""
] |
In wxPython, if I create a list of radio buttons and place the list initially, is it possible to change the contents in that list later?
For example, I have a panel that uses a boxSizer to place the widgets initially. One of those widgets is a list of radio buttons (I have also tried a normal radiobox). I would like to dynamically change the list based on variables from another class.
However, once the list is placed in the sizer, it's effectively "locked"; I can't just modify the list and have the changes appear. If I try re-adding the list to the sizer, it just gets put in the top left corner of the panel.
I'm sure I could hide the original list and manually place the new list in the same position but that feels like a kludge. I'm sure I'm making this harder than it is. I'm probably using the wrong widgets for this, much less the wrong approach, but I'm building this as a learning experience.
```
class Job(wiz.WizardPageSimple):
    """Character's job class."""
    def __init__(self, parent, title, attribs):
        wiz.WizardPageSimple.__init__(self, parent)
        self.next = self.prev = None
        self.sizer = makePageTitle(self, title)
        self.charAttribs = attribs

        #---Create widgets
        self.Job_list = ["Aircraft Mechanic", "Vehicle Mechanic", "Electronics Specialist"]
        box1_title = wx.StaticBox( self, -1, "" )
        box1 = wx.StaticBoxSizer( box1_title, wx.VERTICAL )
        grid1 = wx.BoxSizer(wx.VERTICAL)
        for item in self.Job_list:
            radio = wx.RadioButton(self, -1, item)
            grid1.Add(radio)

        ##Debugging
        self.btn = wx.Button(self, -1, "click")
        self.Bind(wx.EVT_BUTTON, self.eligibleJob, self.btn)

        #---Place widgets
        self.sizer.Add(self.Job_intro)
        self.sizer.Add(self.btn)
        box1.Add(grid1)
        self.sizer.Add(box1)

    def eligibleJob(self, event):
        """Determine which Jobs a character is eligible for."""
        if self.charAttribs.intelligence >= 12:
            skillList = ["Analyst", "Interrogator", "Fire Specialist", "Aircraft Pilot"]
            for skill in skillList:
                self.Job_list.append(skill)
            print self.Job_list ##Debugging
            #return self.Job_list
```
|
To make new list elements appear in correct places, you have to re-layout the grid after adding new elements to it. For example, to add a few new items, you could call:
```
def addNewSkills(self, newSkillList):
    '''newSkillList is a list of skill names you want to add'''
    for skillName in newSkillList:
        newRadioButton = wx.RadioButton(self, -1, skillName)
        self.grid1.Add(newRadioButton) # appears in top-left corner of the panel
    self.Layout() # all newly added radio buttons appear where they should be
    self.Fit() # if you need to resize the panel to fit new items, this will help
```
where `self.grid1` is the sizer you keep all your radio buttons on.
|
Two possible solutions
1. Rebuild the sizer with the radio widgets each time you have to make a change
2. Hold the radio button widgets in a list, and call SetLabel each time you have to change their labels.
|
Dynamic radio button creation
|
[
"",
"python",
"user-interface",
"layout",
"wxpython",
""
] |
> **Possible Duplicate:**
> [Pre & post increment operator behavior in C, C++, Java, & C#](https://stackoverflow.com/questions/6457130/pre-post-increment-operator-behavior-in-c-c-java-c-sharp)
Here is a test case:
```
void foo(int i, int j)
{
    printf("%d %d", i, j);
}
...
test = 0;
foo(test++, test);
```
I would expect to get a "0 1" output, but I get "0 0"
What gives??
|
This is an example of unspecified behavior. The standard does **not** say what order arguments should be evaluated in. This is a compiler implementation decision. The compiler is free to evaluate the arguments to the function in any order.
In this case, it looks like it actually processes the arguments right to left instead of the expected left to right.
In general, doing side-effects in arguments is bad programming practice.
Instead of **foo(test++, test);** you should write **foo(test, test+1); test++;**
It would be semantically equivalent to what you are trying to accomplish.
Edit:
As Anthony correctly points out, it is undefined to both read and modify a single variable without an intervening sequence point. So in this case, the behavior is indeed **undefined**. So the compiler is free to generate whatever code it wants.
|
This is not just *unspecified* behaviour, it is actually **undefined behaviour** .
Yes, the order of argument evaluation is *unspecified*, but it is *undefined* to both read and modify a single variable without an intervening sequence point unless the read is solely for the purpose of computing the new value. There is no sequence point between the evaluations of function arguments, so `f(test,test++)` is **undefined behaviour**: `test` is being read for one argument and modified for the other. If you move the modification into a function then you're fine:
```
int preincrement(int* p)
{
    return ++(*p);
}
int test;
printf("%d %d\n",preincrement(&test),test);
```
This is because there is a sequence point on entry and exit to `preincrement`, so the call must be evaluated either before or after the simple read. Now the order is just *unspecified*.
Note also that the comma *operator* provides a sequence point, so
```
int dummy;
dummy=test++,test;
```
is fine --- the increment happens before the read, so `dummy` is set to the new value.
|
Post increment operator behavior
|
[
"",
"c++",
"c",
"post-increment",
""
] |
I have email addresses encoded with HTML character entities. Is there anything in .NET that can convert them to plain strings?
|
You can use [`HttpUtility.HtmlDecode`](https://learn.microsoft.com/en-us/dotnet/api/system.web.httputility.htmldecode).
If you are using .NET 4.0+ you can also use [`WebUtility.HtmlDecode`](https://learn.microsoft.com/en-us/dotnet/api/system.net.webutility.htmldecode) which does not require an extra assembly reference as it is available in the `System.Net` namespace.
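For readers outside .NET, the equivalent operation in Python's standard library is `html.unescape`, shown here purely for comparison:

```python
import html

def decode_entities(s):
    # decodes both named (&amp;) and numeric (&#64;) HTML character references
    return html.unescape(s)
```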
|
On .Net 4.0:
```
System.Net.WebUtility.HtmlDecode()
```
No extra assembly reference is needed for a C# project.
|
How can I decode HTML characters in C#?
|
[
"",
"c#",
""
] |
What number of classes do you think is ideal per one namespace "branch"? At which point would one decide to break one namespace into multiple ones? Let's not discuss the logical grouping of classes (assume they are logically grouped properly), I am, at this point, focused on the maintainable vs. not maintainable number of classes.
|
"42? No, it doesn't work..."
Ok, let's put our programming prowess to work and see what is Microsoft's opinion:
```
# IronPython
import System
exported_types = [
    (t.Namespace, t.Name)
    for t in System.Int32().GetType().Assembly.GetExportedTypes()]

import itertools
get_ns = lambda (ns, typename): ns
sorted_exported_types = sorted(exported_types, key=get_ns)
counts_per_ns = dict(
    (ns, len(list(typenames)))
    for ns, typenames
    in itertools.groupby(sorted_exported_types, get_ns))
counts = sorted(counts_per_ns.values())

print 'Min:', counts[0]
print 'Max:', counts[-1]
print 'Avg:', sum(counts) / len(counts)
print 'Med:',
if len(counts) % 2:
    print counts[len(counts) / 2]
else: # ignoring len == 1 case
    print (counts[len(counts) / 2 - 1] + counts[len(counts) / 2]) / 2
```
And this gives us the following statistics on number of types per namespace:
```
C:\tools\nspop>ipy nspop.py
Min: 1
Max: 173
Avg: 27
Med: 15
```
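The same statistics can be computed in plain (non-Iron) Python 3 from any iterable of `(namespace, type_name)` pairs — a sketch using only the standard library, independent of .NET reflection:

```python
from collections import Counter
from statistics import mean, median

def namespace_stats(exported_types):
    """exported_types: iterable of (namespace, type_name) pairs."""
    counts = Counter(ns for ns, _name in exported_types)
    values = sorted(counts.values())
    return {"min": values[0], "max": values[-1],
            "avg": mean(values), "med": median(values)}
```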
|
With modern IDEs and other dev tools, I would say that if all the classes belong in a namespace, then there is no arbitrary number at which you should break up a namespace just for maintainability.
|
Ideal number of classes per namespace branch
|
[
"",
"c#",
"namespaces",
"grouping",
""
] |
That's the question: which library can help me access data available via WebDAV from my Java programs? Open source is preferred.
The now deprecated [Apache Jakarta Slide](http://jakarta.apache.org/slide/) project includes a Java [WebDAV client library](http://jakarta.apache.org/slide/clientjavadoc/index.html) - but this project is retired due to the lack of a developer community.
[Apache Jackrabbit](http://jackrabbit.apache.org/) is mentioned as alternative to Slide. You might want to check if its [WebDAV library](http://jackrabbit.apache.org/jackrabbit-webdav-library.html) can be used instead.
If you just want to access files from a WebDAV repository, you can simply use a HTTP library as WebDAV builds upon HTTP. You only need a WebDAV client library if you want to use WebDAV features like locking, directory listings or access to properties (meta-data).
|
I created a very easy-to-use Java WebDAV client: <http://sardine.googlecode.com/>
This now moved to github : <https://github.com/lookfirst/sardine>
|
Which Java-library can be used to access data via WebDAV?
|
[
"",
"java",
"webdav",
""
] |
For example, I have an ASP.NET form that is called by another aspx:
```
string url = "http://somewhere.com?P1=" + Request["param"];
Response.Write(url);
```
I want to do something like this:
```
string url = "http://somewhere.com?P1=" + Request["param"];
string str = GetResponse(url);
if (str...) {}
```
I need to get whatever Response.Write is getting as a result or going to url, manipulate that response, and send something else back.
Any help or a point in the right direction would be greatly appreciated.
|
WebClient.DownloadString() is probably what you want.
|
```
WebClient client = new WebClient();
string response = client.DownloadString(url);
```
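The same one-liner fetch exists in most languages; as a rough Python analogue for illustration (the C# answers above are the direct solution), using only the standard library:

```python
from urllib.request import urlopen

def get_response(url):
    # fetch the response body as text, like WebClient.DownloadString
    with urlopen(url) as resp:
        return resp.read().decode("utf-8")
```

The returned string can then be inspected with ordinary string operations, matching the `if (str...)` check in the question.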
|
How do I read an HttpResponse in ASP.NET 2.0?
|
[
"",
"c#",
"asp.net",
"httpresponse",
""
] |
When I try the following lookup in my code:
```
Context initCtx = new InitialContext();
Context envCtx = (Context) initCtx.lookup("java:comp/env");
return (DataSource) envCtx.lookup("jdbc/mydb");
```
I get the following exception:
```
java.sql.SQLException: QueryResults: Unable to initialize naming context:
Name java:comp is not bound in this Context at
com.onsitemanager.database.ThreadLocalConnection.getConnection
(ThreadLocalConnection.java:130) at
...
```
I installed embedded JBoss following the JBoss [wiki instructions](http://wiki.jboss.org/wiki/Tomcat5.5.x?action=e&windowstate=normal&mode=view). And I configured Tomcat using the "Scanning every WAR by default" deployment as specified in the [configuration wiki page](http://wiki.jboss.org/wiki/EmbeddedAndTomcat).
Quoting the config page:
> JNDI
>
> Embedded JBoss components like connection pooling, EJB, JPA, and transactions make
> extensive use of JNDI to publish services. Embedded JBoss overrides Tomcat's JNDI
> implementation by layering itself on top of Tomcat's JNDI instantiation. There are a few
> reasons for this:
>
> 1. To avoid having to declare each and every one of these services within server.xml
> 2. To allow seamless integration of the java:comp namespace between web apps and EJBs.
> 3. Tomcat's JNDI implementation has a few critical bugs in it that hamper some JBoss components' ability to work
> 4. We want to provide the option for you of remoting EJBs and other services that can be remotely looked up
Anyone have any thoughts on how I can configure the JBoss naming service which according to the above quote is overriding Tomcat's JNDI implementation so that I can do a lookup on java:comp/env?
FYI - My environment Tomcat 5.5.9, Seam 2.0.2sp, Embedded JBoss (Beta 3),
Note: I do have a -ds.xml file for my database connection properly setup and accessible on the class path per the instructions.
Also note: I have posted this question in embedded Jboss forum and seam user forum.
|
Thanks for the response toolkit.... yes, I can access my datasource by going directly to java:jdbc/mydb, but I'm using an existing code base that connects via the ENC. Here's some interesting info that I've found out ....
1. The above code works with **JBoss 4.2.2.GA**, and here are the JNDI ctx parameters being used:

        java.naming.factory.initial=org.jnp.interfaces.NamingContextFactory
        java.naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces

2. The above code works with **Tomcat 5.5.x**, and here are the JNDI ctx parameters being used:

        java.naming.factory.initial=org.apache.naming.java.javaURLContextFactory
        java.naming.factory.url.pkgs=org.apache.naming

3. The above code fails with **Embedded JBoss (Beta 3)** in Tomcat 5.5.x with the above error message:

        java.naming.factory.initial=org.apache.naming.java.javaURLContextFactory
        java.naming.factory.url.pkgs=org.apache.naming
Anyone have any thoughts on what I need to do with configuring the embedded JBoss JNDI configuration?
|
java:comp/env is known as the Enterprise Naming Context (ENC) and is not globally visible. See [here](http://www.informit.com/articles/article.aspx?p=384904) for more information. You will need to locate the global JNDI name at which your datasource is registered.
The easiest way to do this is to navigate to JBoss' web-based JMX console and look for a 'JNDIView' (not exactly sure of the name - currently at home) mbean. This mbean should have a list method which you can invoke, which will display the context path for all of the JNDI-bound objects.
|
Help configuring JNDI with embedded JBoss in Tomcat 5.5.x
|
[
"",
"java",
"jboss",
"seam",
"jndi",
"embedded-jboss",
""
] |
Is there a Java package providing functionality like the .NET System.Data namespace?
Specifically the DataSet and DataAdapter classes?
|
Use [java.util](http://java.sun.com/javase/6/docs/api/java/util/package-summary.html) for the collections.
[java.sql](http://java.sun.com/javase/6/docs/api/java/sql/package-summary.html) for databases.
|
The equivalent to ADO.NET is JDBC. You can get the flavor of it here:
<http://www.heimetli.ch/jdbc/JDBCQuery.html>
|
Is there a Java equivalent for .Net System.Data?
|
[
"",
"java",
".net",
"database",
""
] |
I have developed some custom DAO-like classes to meet some very specialized requirements for my project that is a server-side process that does not run inside any kind of framework.
The solution works great except that every time a new request is made, I open a new connection via MySQLdb.connect.
What is the best "drop in" solution to switch this over to using connection pooling in python? I am imagining something like the commons DBCP solution for Java.
The process is long running and has many threads that need to make requests, but not all at the same time... specifically they do quite a lot of work before brief bursts of writing out a chunk of their results.
Edited to add:
After some more searching I found [anitpool.py](http://furius.ca/antiorm/) which looks decent, but as I'm relatively new to python I guess I just want to make sure I'm not missing a more obvious/more idiomatic/better solution.
|
IMO, the "more obvious/more idiomatic/better solution" is to use an existing ORM rather than invent DAO-like classes.
It appears to me that ORM's are more popular than "raw" SQL connections. Why? Because Python *is* OO, and the mapping from a SQL row to an object *is* absolutely essential. There aren't many use cases where you deal with SQL rows that don't map to Python objects.
I think that [SQLAlchemy](http://www.sqlalchemy.org/) or [SQLObject](http://www.sqlobject.org/) (and the associated connection pooling) are the more idiomatic Pythonic solutions.
Pooling as a separate feature isn't very common because pure SQL (without object mapping) isn't very popular for the kind of complex, long-running processes that benefit from connection pooling. Yes, pure SQL *is* used, but it's always used in simpler or more controlled applications where pooling isn't helpful.
I think you might have two alternatives:
1. Revise your classes to use SQLAlchemy or SQLObject. While this appears painful at first (all that work wasted), you should be able to leverage all the design and thought. It's merely an exercise in adopting a widely-used ORM and pooling solution.
2. Roll out your own simple connection pool using the algorithm you outlined -- a simple Set or List of connections that you cycle through.
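A minimal sketch of option 2: the pool is just a thread-safe queue of pre-opened connections. Here `connect` is a placeholder for any zero-argument factory — e.g. a lambda wrapping `MySQLdb.connect(...)` with your real settings.

```python
import queue

class ConnectionPool:
    """Fixed-size pool: hand out pre-opened connections, block when exhausted."""

    def __init__(self, connect, size=5):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(connect())

    def acquire(self):
        return self._pool.get()   # blocks until a connection is released

    def release(self, conn):
        self._pool.put(conn)
```

Each worker thread calls `acquire()` before its burst of writes and `release()` afterwards, so the long-running process reuses a small, fixed set of connections.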
|
In MySQL?
I'd say don't bother with the connection pooling. They're often a source of trouble and with MySQL they're not going to bring you the performance advantage you're hoping for. This road may be a lot of effort to follow--politically--because there's so much best practices hand waving and textbook verbiage in this space about the advantages of connection pooling.
Connection pools are simply a bridge between the post-web era of stateless applications (e.g. HTTP protocol) and the pre-web era of stateful long-lived batch processing applications. Since connections were very expensive in pre-web databases (since no one used to care too much about how long a connection took to establish), post-web applications devised this connection pool scheme so that every hit didn't incur this huge processing overhead on the RDBMS.
Since MySQL is more of a web-era RDBMS, connections are extremely lightweight and fast. I have written many high volume web applications that don't use a connection pool at all for MySQL.
This is a complication you may benefit from doing without, so long as there isn't a political obstacle to overcome.
|
What is the best solution for database connection pooling in python?
|
[
"",
"python",
"mysql",
"connection-pooling",
""
] |
I have a Python module installed on my system and I'd like to be able to see what functions/classes/methods are available in it.
I want to call the `help` function on each one. In Ruby I can do something like `ClassName.methods` to get a list of all the methods available on that class. Is there something similar in Python?
e.g. something like:
```
from somemodule import foo
print(foo.methods) # or whatever is the correct method to call
```
|
Use the [`inspect`](https://docs.python.org/3/library/inspect.html) module:
```
from inspect import getmembers, isfunction
from somemodule import foo
print(getmembers(foo, isfunction))
```
Also see the [`pydoc`](https://docs.python.org/3/library/pydoc.html) module, the `help()` function in the interactive interpreter and the `pydoc` command-line tool which generates the documentation you are after. You can just give them the class you wish to see the documentation of. They can also generate, for instance, HTML output and write it to disk.
|
You can use `dir(module)` to see all available methods/attributes. Also check out PyDocs.
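A quick sketch of the `dir()` route — filtering out non-callables and private names gets you close to what the `inspect`-based answer returns (`math` is just a convenient example module):

```python
import math

# names in math that are callable and public -- candidates for help()
functions = [name for name in dir(math)
             if callable(getattr(math, name)) and not name.startswith("_")]
```

You can then loop over the list and call `help(getattr(math, name))` on each entry.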
|
How to list all functions in a module?
|
[
"",
"python",
"reflection",
"module",
"inspect",
""
] |
I'm attempting to do an AJAX call (via JQuery) that will initiate a fairly long process. I'd like the script to simply send a response indicating that the process has started, but JQuery won't return the response until the PHP script is done running.
I've tried this with a "close" header (below), and also with output buffering; neither seems to work. Any guesses? or is this something I need to do in JQuery?
```
<?php
echo( "We'll email you as soon as this is done." );
header( "Connection: Close" );
// do some stuff that will take a while
mail( 'dude@thatplace.com', "okay I'm done", 'Yup, all done.' );
?>
```
|
The following PHP manual page (incl. user-notes) suggests multiple instructions on how to close the TCP connection to the browser without ending the PHP script:
* [Connection handling *Docs*](http://php.net/features.connection-handling)
Supposedly it requires a bit more than sending a close header.
---
OP then confirms: *yup, this did the trick:* [pointing to user-note #71172 (Nov 2006)](http://php.net/manual/en/features.connection-handling.php#71172) copied here:
> Closing the users browser connection whilst keeping your php script running has been an issue since [PHP] 4.1, when the behaviour of `register_shutdown_function()` was modified so that it would not automatically close the users connection.
>
> sts at mail dot xubion dot hu Posted the original solution:
>
> ```
> <?php
> header("Connection: close");
> ob_start();
> phpinfo();
> $size = ob_get_length();
> header("Content-Length: $size");
> ob_end_flush();
> flush();
> sleep(13);
> error_log("do something in the background");
> ?>
> ```
>
> Which works fine until you substitute `phpinfo()` for `echo('text I want user to see');` in which case the headers are never sent!
>
> The solution is to explicitly turn off output buffering and clear the buffer prior to sending your header information. Example:
>
> ```
> <?php
> ob_end_clean();
> header("Connection: close");
> ignore_user_abort(true); // just to be safe
> ob_start();
> echo('Text the user will see');
> $size = ob_get_length();
> header("Content-Length: $size");
> ob_end_flush(); // Strange behaviour, will not work
> flush(); // Unless both are called !
> // Do processing here
> sleep(30);
> echo('Text user will never see');
> ?>
> ```
>
> Just spent 3 hours trying to figure this one out, hope it helps someone :)
>
> Tested in:
>
> * IE 7.5730.11
> * Mozilla Firefox 1.81
---
Later on in July 2010 in a [related answer](https://stackoverflow.com/a/3270882/367456) [*Arctic Fire*](https://stackoverflow.com/users/347655/arctic-fire) then linked two further user-notes that were-follow-ups to the one above:
* [Connection Handling user-note #89177 (Feb 2009)](http://www.php.net/manual/en/features.connection-handling.php#89177)
* [Connection Handling user-note #93441 (Sep 2009)](http://www.php.net/manual/en/features.connection-handling.php#93441)
|
It's necessary to send these 2 headers:
```
Connection: close
Content-Length: n (n = size of output in bytes )
```
Since you need know the size of your output, you'll need to buffer your output, then flush it to the browser:
```
// buffer all upcoming output
ob_start();
echo 'We\'ll email you as soon as this is done.';
// get the size of the output
$size = ob_get_length();
// send headers to tell the browser to close the connection
header('Content-Length: '.$size);
header('Connection: close');
// flush all output
ob_end_flush();
ob_flush();
flush();
// if you're using sessions, this prevents subsequent requests
// from hanging while the background process executes
if (session_id()) {session_write_close();}
/******** background process starts here ********/
```
Also, if your web server is using automatic gzip compression on the output (i.e. Apache with mod\_deflate), this won't work because the actual size of the output is changed, and the Content-Length is no longer accurate. Disable gzip compression for the particular script.
|
How do I close a connection early?
|
[
"",
"php",
"jquery",
"ajax",
""
] |
First of all, I'm fairly sure snapping to grid is fairly easy, however I've run into some odd trouble in this situation and my maths are too weak to work out specifically what is wrong.
Here's the situation
I have an abstract concept of a grid, with Y steps exactly Y\_STEP apart (the x steps are working fine so ignore them for now)
The grid is in an abstract coordinate space, and to get things to line up I've got a magic offset in there, let's call it Y\_OFFSET
to snap to the grid I've got the following code (python)
```
def snapToGrid(originalPos, offset, step):
    index = int((originalPos - offset) / step) #truncates the remainder away
    return index * step + offset
```
so I pass the cursor position, Y\_OFFSET and Y\_STEP into that function and it returns me the nearest floored y position on the grid
That appears to work fine in the original scenario, however when I take into account the fact that the view is scrollable things get a little weird.
Scrolling is made as basic as I can get it, I've got a viewPort that keeps count of the distance scrolled along the Y Axis and just offsets everything that goes through it.
Here's a snippet of the cursor's mouseMotion code:
```
def mouseMotion(self, event):
    pixelPos = event.pos[Y]
    odePos = Scroll.pixelPosToOdePos(pixelPos)
    self.tool.positionChanged(odePos)
```
So there's two things to look at there, first the Scroll module's translation from pixel position to the abstract coordinate space, then the tool's positionChanged function which takes the abstract coordinate space value and snaps to the nearest Y step.
Here's the relevant Scroll code
```
def pixelPosToOdePos(pixelPos):
    offsetPixelPos = pixelPos - self.viewPortOffset
    return pixelsToOde(offsetPixelPos)

def pixelsToOde(pixels):
    return float(pixels) / float(pixels_in_an_ode_unit)
```
And the tools update code
```
def positionChanged(self, newPos):
    self.snappedPos = snapToGrid(originalPos, Y_OFFSET, Y_STEP)
```
The last relevant chunk is when the tool goes to render itself. It goes through the Scroll object, which transforms the tool's snapped coordinate space position into an onscreen pixel position, here's the code:
```
#in Tool
def render(self, screen):
    Scroll.render(screen, self.image, self.snappedPos)

#in Scroll
def render(self, screen, image, odePos):
    pixelPos = self.odePosToPixelPos(odePos)
    screen.blit(image, pixelPos) # screen is a surface from pygame for the curious

def odePosToPixelPos(self, odePos):
    offsetPos = odePos + self.viewPortOffset
    return odeToPixels(offsetPos)

def odeToPixels(odeUnits):
    return int(odeUnits * pixels_in_an_ode_unit)
```
Whew, that was a long explanation. Hope you're still with me...
The problem I'm now getting is that when I scroll up the drawn image loses alignment with the cursor.
It starts snapping to the Y step exactly 1 step below the cursor.
Additionally it appears to phase in and out of allignment.
At some scrolls it is out by 1 and other scrolls it is spot on.
It's never out by more than 1 and it's always snapping to a valid grid location.
Best guess I can come up with is that somewhere I'm truncating some data in the wrong spot, but no idea where or how it ends up with this behavior.
Anyone familiar with coordinate spaces, scrolling and snapping?
|
Ok, I'm answering my own question here, as alexk mentioned, using int to truncate was my mistake.
The behaviour I'm after is best modeled by math.floor().
Apologies, the original question does not contain enough information to really work out what the problem is. I didn't have the extra bit of information at that point.
With regards to the typo note, I think I may be using the context in a confusing manner... From the perspective of the positionChanged() function, the parameter is a new position coming in.
From the perspective of the snapToGrid() function the parameter is an original position which is being changed to a snapped position.
The language is like that because part of it is in my event handling code and the other part is in my general services code. I should have changed it for the example
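The `int()`/`math.floor()` divergence only appears once `(originalPos - offset)` goes negative — exactly what scrolling introduces. A sketch of the corrected snapping function:

```python
import math

def snap_to_grid(pos, offset, step):
    # math.floor rounds toward negative infinity; int() truncates toward
    # zero, so the two disagree whenever (pos - offset) / step is negative
    index = math.floor((pos - offset) / step)
    return index * step + offset
```

For example, `int(-0.4)` is `0` while `math.floor(-0.4)` is `-1` — which is precisely the one-grid-step misalignment described in the question.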
|
Do you have a typo in positionChanged() ?
```
def positionChanged(self, newPos):
self.snappedPos = snapToGrid(newPos, Y_OFFSET, Y_STEP)
```
I guess you are off by one pixel because of the accuracy problems during float division. Try changing your snapToGrid() to this:
```
def snapToGrid(originalPos, offset, step):
EPS = 1e-6
index = int((originalPos - offset) / step + EPS) #truncates the remainder away
return index * gap + offset
```
|
What is wrong with my snap to grid code?
|
[
"",
"python",
"graphics",
"grid",
""
] |
I'd like to automatically change my database connection settings on a per-vhost basis, so that I don't have to edit any PHP code as it moves from staging to live and yet access different databases. This is on a single dedicated server.
So I was wondering, can I set a PHP variable or constant in httpd.conf as part of the vhost definition that the site can then use to point itself to a testing database automatically?
```
$database = 'live';
if (some staging environment variable is true) {
$database = 'testing'; // and not live
}
```
If this isn't possible, I guess in this case I can safely examine the hostname I'm running on to tell, but I'd like something a little less fragile
Hope this makes sense
many thanks
Ian
|
Did you try using the .htaccess file? You can override php.ini values with it.
Just put the .htaccess file into your htdocs directory:
```
php_value name value
```
Futher information:
* <https://php.net/manual/en/configuration.changes.php>
* <https://php.net/manual/en/ini.php>
|
Yep...you can do this:
```
SetEnv DATABASE_NAME testing
```
and then in PHP:
```
$database = $_SERVER["DATABASE_NAME"];
```
or
```
$database = getenv("DATABASE_NAME");
```
|
Setting PHP variables in httpd.conf?
|
[
"",
"php",
"apache",
""
] |
I'm using .NET 3.5. I have two string arrays, which may share one or more values:
```
string[] list1 = new string[] { "apple", "orange", "banana" };
string[] list2 = new string[] { "banana", "pear", "grape" };
```
I'd like a way to merge them into one array with no duplicate values:
```
{ "apple", "orange", "banana", "pear", "grape" }
```
I can do this with LINQ:
```
string[] result = list1.Concat(list2).Distinct().ToArray();
```
but I imagine that's not very efficient for large arrays.
Is there a better way?
|
```
string[] result = list1.Union(list2).ToArray();
```
from [msdn](http://msdn.microsoft.com/en-us/library/bb341731.aspx): "This method excludes duplicates from the return set. This is different behavior to the Concat(TSource) method, which returns all the elements in the input sequences including duplicates."
|
Why do you imagine that it would be inefficient? As far as I'm aware, both Concat and Distinct are evaluated lazily, using a HashSet behind the scenes for Distinct to keep track of the elements which have already been returned.
I'm not sure how you'd manage to make it more efficient than that in a general way :)
EDIT: Distinct actually uses Set (an internal class) instead of HashSet, but the gist is still correct. This is a really good example of just how neat LINQ is. The simplest answer is pretty much as efficient as you can achieve without more domain knowledge.
The effect is the equivalent of:
```
public static IEnumerable<T> DistinctConcat<T>(IEnumerable<T> first, IEnumerable<T> second)
{
    HashSet<T> returned = new HashSet<T>();

    foreach (T element in first)
    {
        if (returned.Add(element))
        {
            yield return element;
        }
    }

    foreach (T element in second)
    {
        if (returned.Add(element))
        {
            yield return element;
        }
    }
}
```
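The same lazily-evaluated, set-gated concatenation is easy to mirror in Python, which may make the mechanics clearer: a set records what has been yielded so far, and each element is emitted at most once, in first-seen order.

```python
from itertools import chain

def distinct_concat(first, second):
    seen = set()
    for element in chain(first, second):
        if element not in seen:
            seen.add(element)
            yield element
```

For the question's arrays, `list(distinct_concat(["apple", "orange", "banana"], ["banana", "pear", "grape"]))` yields `["apple", "orange", "banana", "pear", "grape"]`.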
|
Efficiently merge string arrays in .NET, keeping distinct values
|
[
"",
"c#",
"linq",
"arrays",
"merge",
"distinct",
""
] |
I am trying to figure out this:
```
c = 1

def f(n):
    print c + n

def g(n):
    c = c + n

f(1) # => 2
g(1) # => UnboundLocalError: local variable 'c' referenced before assignment
```
Thanks!
|
Within a function, variables that are assigned to are treated as local variables by default. To assign to global variables, use the [`global`](https://docs.python.org/3/reference/simple_stmts.html#the-global-statement) statement:
```
def g(n):
    global c
    c = c + n
```
This is one of the quirky areas of Python that has never really sat well with me.
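A runnable version of the whole example with the fix applied (using a `return` instead of the question's Python 2 `print` so the values are easy to check):

```python
c = 1

def f(n):
    # reading a global needs no declaration
    return c + n

def g(n):
    global c   # without this, the assignment makes c local -> UnboundLocalError
    c = c + n

g(1)   # c is now 2
```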
|
Global state is something to avoid, especially when it needs to be mutated. Consider whether `g()` should simply take two parameters, or whether `f()` and `g()` should be methods of a common class with `c` an instance attribute:
```
class A:
    c = 1
    def f(self, n):
        print self.c + n
    def g(self, n):
        self.c += n

a = A()
a.f(1)
a.g(1)
a.f(1)
Outputs:
```
2
3
```
|
Python scope: "UnboundLocalError: local variable 'c' referenced before assignment"
|
[
"",
"python",
"scope",
""
] |
Could anyone explain with some examples when it is better to call functions by reference and when it is better to call by address?
|
Pass your arguments to functions by reference whenever possible.
Passing arguments by reference eliminates the chance of them being NULL.
If you want it to be possible to pass a NULL value to a function, then use a pointer.
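As a sketch of that trade-off (hypothetical function names; `nullptr` requires C++11 — use `NULL` on older compilers):

```cpp
#include <cassert>  // for the assertions in the usage notes below

// Pass by reference: the caller must supply a real object,
// so the callee never needs a null check.
void increment(int& value) {
    ++value;
}

// Pass by pointer: "no value" is representable, so the callee
// has to check before dereferencing.
bool incrementIfPresent(int* value) {
    if (value == nullptr) {
        return false;
    }
    ++*value;
    return true;
}
```

Callers of `increment` cannot pass "nothing", while `incrementIfPresent(nullptr)` is a legal call that the function must handle.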
|
This has already been discussed. See [Pointer vs. Reference](https://stackoverflow.com/questions/114180/pointer-vs-reference).
|
when to pass function arguments by reference and when by address?
|
[
"",
"c++",
"pointers",
"function",
"reference",
""
] |
As a way to find inspiration and improve my PHP skills, I am looking for some beautiful PHP source code to read, preferably an open source "standard" web site rather than a more tool-like project such as phpMyAdmin.
So, where can I find some beautiful PHP code?
|
CodeIgniter code is beautiful. There are many projects written in CodeIgniter which are publicly available and you can check out the source code.
Have a look at the [Getting started](http://codeigniter.com/user_guide/overview/getting_started.html) page of CodeIgniter's tutorial and read through. I can guarantee you'll be inspired and want to fire up your IDE and get coding straight away :).
You can also download and have a look at the source code of [Bamboo invoice](http://bambooinvoice.org/). When I need inspiration I also search for 'php code' on youtube :)
|
Not all open-source projects are great examples of how things are done.
Take a look at [PEAR](http://pear.php.net), [ezComponents](http://ezcomponents.org/) and the [Zend Framework](http://framework.zend.com). All three are pretty popular in the PHP world, they solve a ton of different problems [each sometimes with a different approach] and they all follow great coding standards.
Also, there is often a trade-off between beautiful/elegant code and code that is very optimized. For example, if you want to see code that is built to power a large website which needs to handle a lot of traffic, check out [Mediawiki](http://www.mediawiki.org/wiki/MediaWiki). It's built on very efficient code, but it's not very pretty.
|
Where to find beautiful PHP code to read?
|
[
"",
"php",
""
] |
According to this article [Silverlight 2 Beta 2](http://msdn.microsoft.com/en-us/library/cc197951(VS.95).aspx) supports the `DataContractJsonSerializer` object. But, when I try to use it VS says
> "Type 'DataContractJsonSerializer' is not defined".
I have a method marked as ScriptableMember that gets called from JavaScript and is passed an Object. Inside this method I need to serialize the object to a string (preferably JSON) and then save it in isolated storage.
Does Silverlight 2 Beta 2 really support DataContractJsonSerializer? Or would anyone recommend a different method of saving the JavaScript created ScriptObject in the Isolated Storage?
|
Actually the answer is, the DataContractJsonSerializer is part of Silverlight 2 Beta 2, but you need to add a reference to System.ServiceModel.Web to your Silverlight project to use it.
I didn't realize that you still needed to add dll references in Silverlight. I thought it automatically included everything in a similar way to how ASP.NET does.
|
There is a Silverlight version of Json.NET that will serialize your objects to JSON. It doesn't require [DataContract] and [DataMember] attributes all over your objects.
[Json.NET](http://james.newtonking.com/projects/json-net.aspx)
|
How do you Serialize ScriptObjects to JSON to save in Silverlight Isolated Storage?
|
[
"",
"javascript",
"silverlight",
"json",
"isolatedstorage",
""
] |
I am looking for a way to create a ZIP file from a folder in Windows C/C++ APIs. I can find the way to do this in VBScript using the Shell32.Application CopyHere method, and I found a tutorial explaining how to do it in C# also, but nothing for the C API (C++ is fine too, project already uses MFC).
I'd be really grateful if anyone can share some sample C code that can successfully create a zip file on Windows XP/2003. Failing that, if someone can find solid docs or a tutorial that would be great, since MSDN searches don't turn up much. I'm really hoping to avoid having to ship a third-party lib for this, because the functionality is obviously there, I just can't figure out how to access it. Google searches turn up nothing useful, just tantalizing bits and pieces of information. Here's hoping someone in the community has sorted this out and can share it for posterity!
|
EDIT: This answer is old, but I cannot delete it because it was accepted. See the next one
<https://stackoverflow.com/a/121720/3937>
----- ORIGINAL ANSWER -----
There is sample code to do that here
[EDIT: Link is now broken]
<http://www.eggheadcafe.com/software/aspnet/31056644/using-shfileoperation-to.aspx>
Make sure you read about how to handle monitoring for the thread to complete.
Edit: From the comments, this code only works on existing zip file, but @[Simon](https://stackoverflow.com/users/20135/simonbuchanmyopenidcom) provided this code to create a blank zip file
```
FILE* f = fopen("path", "wb");
fwrite("\x50\x4B\x05\x06\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0", 22, 1, f);
fclose(f);
```
|
As noted elsewhere in the comments, this will only work on an already-created Zip file. The content must also not already exist in the zip file, or an error will be displayed. Here is the working sample code I was able to create based on the accepted answer. You need to link to shell32.lib and also kernel32.lib (for CreateToolhelp32Snapshot).
```
#include <windows.h>
#include <shldisp.h>
#include <tlhelp32.h>
#include <stdio.h>
int main(int argc, TCHAR* argv[])
{
DWORD strlen = 0;
char szFrom[] = "C:\\Temp",
szTo[] = "C:\\Sample.zip";
HRESULT hResult;
IShellDispatch *pISD;
Folder *pToFolder = NULL;
VARIANT vDir, vFile, vOpt;
BSTR strptr1, strptr2;
CoInitialize(NULL);
hResult = CoCreateInstance(CLSID_Shell, NULL, CLSCTX_INPROC_SERVER, IID_IShellDispatch, (void **)&pISD);
if (SUCCEEDED(hResult) && pISD != NULL)
{
strlen = MultiByteToWideChar(CP_ACP, 0, szTo, -1, 0, 0);
strptr1 = SysAllocStringLen(0, strlen);
MultiByteToWideChar(CP_ACP, 0, szTo, -1, strptr1, strlen);
VariantInit(&vDir);
vDir.vt = VT_BSTR;
vDir.bstrVal = strptr1;
hResult = pISD->NameSpace(vDir, &pToFolder);
if (SUCCEEDED(hResult))
{
strlen = MultiByteToWideChar(CP_ACP, 0, szFrom, -1, 0, 0);
strptr2 = SysAllocStringLen(0, strlen);
MultiByteToWideChar(CP_ACP, 0, szFrom, -1, strptr2, strlen);
VariantInit(&vFile);
vFile.vt = VT_BSTR;
vFile.bstrVal = strptr2;
VariantInit(&vOpt);
vOpt.vt = VT_I4;
vOpt.lVal = 4; // Do not display a progress dialog box
hResult = NULL;
printf("Copying %s to %s ...\n", szFrom, szTo);
hResult = pToFolder->CopyHere(vFile, vOpt); //NOTE: this appears to always return S_OK even on error
/*
* 1) Enumerate current threads in the process using Thread32First/Thread32Next
* 2) Start the operation
* 3) Enumerate the threads again
* 4) Wait for any new threads using WaitForMultipleObjects
*
* Of course, if the operation creates any new threads that don't exit, then you have a problem.
*/
if (hResult == S_OK) {
//NOTE: hard-coded for testing - be sure not to overflow the array if > 5 threads exist
HANDLE hThrd[5];
HANDLE h = CreateToolhelp32Snapshot(TH32CS_SNAPALL ,0); //TH32CS_SNAPMODULE, 0);
DWORD NUM_THREADS = 0;
if (h != INVALID_HANDLE_VALUE) {
THREADENTRY32 te;
te.dwSize = sizeof(te);
if (Thread32First(h, &te)) {
do {
if (te.dwSize >= (FIELD_OFFSET(THREADENTRY32, th32OwnerProcessID) + sizeof(te.th32OwnerProcessID)) ) {
//only enumerate threads that are called by this process and not the main thread
if((te.th32OwnerProcessID == GetCurrentProcessId()) && (te.th32ThreadID != GetCurrentThreadId()) ){
//printf("Process 0x%04x Thread 0x%04x\n", te.th32OwnerProcessID, te.th32ThreadID);
hThrd[NUM_THREADS] = OpenThread(THREAD_ALL_ACCESS, FALSE, te.th32ThreadID);
NUM_THREADS++;
}
}
te.dwSize = sizeof(te);
} while (Thread32Next(h, &te));
}
CloseHandle(h);
printf("waiting for all threads to exit...\n");
//Wait for all threads to exit
WaitForMultipleObjects(NUM_THREADS, hThrd , TRUE , INFINITE);
//Close All handles
for ( DWORD i = 0; i < NUM_THREADS ; i++ ){
CloseHandle( hThrd[i] );
}
} //if invalid handle
} //if CopyHere() hResult is S_OK
SysFreeString(strptr2);
pToFolder->Release();
}
SysFreeString(strptr1);
pISD->Release();
}
CoUninitialize();
printf ("Press ENTER to exit\n");
getchar();
return 0;
}
```
I have decided not to go this route despite getting semi-functional code, since after further investigation, it appears the Folder::CopyHere() method does not actually respect the vOptions passed to it, which means you cannot force it to overwrite files or not display error dialogs to the user.
In light of that, I tried the XZip library mentioned by another poster as well. This library functions fine for creating a Zip archive, but note that the ZipAdd() function called with ZIP\_FOLDER is not recursive - it merely creates a folder in the archive. In order to recursively zip an archive you will need to use the AddFolderContent() function. For example, to create a C:\Sample.zip and Add the C:\Temp folder to it, use the following:
```
HZIP newZip = CreateZip("C:\\Sample.zip", NULL, ZIP_FILENAME);
BOOL retval = AddFolderContent(newZip, "C:", "temp");
```
Important note: the AddFolderContent() function is not functional as included in the XZip library. It will recurse into the directory structure but fails to add any files to the zip archive, due to a bug in the paths passed to ZipAdd(). In order to use this function you'll need to edit the source and change this line:
```
if (ZipAdd(hZip, RelativePathNewFileFound, RelativePathNewFileFound, 0, ZIP_FILENAME) != ZR_OK)
```
To the following:
```
ZRESULT ret;
TCHAR real_path[MAX_PATH] = {0};
_tcscat(real_path, AbsolutePath);
_tcscat(real_path, RelativePathNewFileFound);
if (ZipAdd(hZip, RelativePathNewFileFound, real_path, 0, ZIP_FILENAME) != ZR_OK)
```
|
Creating a ZIP file on Windows (XP/2003) in C/C++
|
[
"",
"c++",
"winapi",
"zip",
""
] |
I keep reading about C99 and C++11 and all these totally sweet things that are getting added to the language standard that might be nice to use someday. However, we currently languish in the land of writing C++ in Visual Studio.
Will any of the new stuff in the standard ever get added to visual studio, or is Microsoft more interested in adding new C# variants to do that?
Edit: In addition to the accepted answer, I found the Visual C++ team blog:
<http://blogs.msdn.com/vcblog/>
And specifically, this post in it:
<https://web.archive.org/web/20190109064523/https://blogs.msdn.microsoft.com/vcblog/2008/02/22/tr1-slide-decks/>
Very useful. Thanks!
|
MS has a series of public replies to this, most of them blaming their users. Like this one:
<https://devblogs.microsoft.com/cppblog/iso-c-standard-update/>
> Now, the Visual C++ compiler team receives the occasionally question as to why we haven’t implemented C99. It’s really based on interest from our users. Where we’ve received many requests for certain C99 features, we’ve tried to implement them (or analogues). A couple examples are variadic macros, `long long`, `__pragma`, `__FUNCTION__`, and `__restrict`. If there are other C99 features that you’d find useful in your work, let us know! We don’t hear much from our C users, so speak up and make yourselves heard
<http://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=345360>
> Hi: unfortunately the overwhelming feadback we get from the majority of our users is that they would prefer that we focus on C++-0x instead of on C-99. We have "cherry-picked" certain popular C-99 features (variadic macros, `long long`) but beyond this we are unlikely to do much more in the C-99 space (at least in the short-term).
>
> Jonathan Caves
>
> Visual C++ Compiler Team.
This is a pretty sad state of affairs, but also makes sense if you suspect MS wants to lock users in: it makes it very hard to port modern gcc-based code into MSVC, which at least I find extremely painful.
A workaround exists, though: note that Intel is much more enlightened on this. The Intel C compiler can handle C99 code and even has the same flags as gcc, making it much easier to port code between platforms. Also, the Intel compiler works in Visual Studio. So by scrapping the MS compiler you can still use the MS IDE that you seem to think has some kind of value, and use C99 to your heart's content.
A more sensible approach is honestly to move over to Intel CC or gcc, and use Eclipse for your programming environment. Portability of code across Windows-Linux-Solaris-AIX-etc is usually important in my experience, and that is not at all supported by MS tools, unfortunately.
|
Herb Sutter is both the chair and a very active member of the C++ standardisation committee, as well as a software architect on Visual Studio for Microsoft.
He is among the authors of the new C++ memory model standardised for C++0x. For example, the following papers:
<http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2669.htm>
<http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2197.pdf>
have his name on it. So I guess the inclusion on Windows of C++0x is assured as long as H. Sutter remains at Microsoft.
As for C99 only partly included in Visual Studio, I guess this is a question of priorities.
* Most interesting C99 features are already present in C++ (inlining, variable declaration anywhere, // comments, etc.) and probably already usable in C in Visual Studio (if only by compiling C code with the C++ compiler). See my answer [here](https://stackoverflow.com/questions/3879636/what-can-be-done-in-c-but-not-c/3880281#3880281) for a more complete discussion about C99 features in C++.
* C99 increases the divergence between C and C++ by adding features already existing in C++, but in an incompatible way (sorry, but the ~~boolean~~ complex implementation in C99 is laughable, at best... See <http://david.tribble.com/text/cdiffs.htm> for more information)
* The C community on Windows seems non-existent or not important enough to be acknowledged
* The C++ community on Windows seems too important to be ignored
* .NET is the way Microsoft wants people to program on Windows. This means C#, VB.NET, perhaps C++/CLI.
So, would I be Microsoft, why would I implement features few people will ever use when the same features are already offered in more community active languages already used by most people?
## Conclusion?
C++0x will be included, as an extension of VS 2008, or in the next generation (generations?) of Visual Studio.
The C99 features not already implemented won't appear in the next few years, unless something dramatic happens (a country full of C99 developers appears out of nowhere?)
## Edit 2011-04-14
Apparently, the "country full of C99 developers" already exist: <http://blogs.msdn.com/vcblog/archive/2007/11/05/iso-c-standard-update.aspx#6415401>
^\_^
Still, the last comment at: <http://blogs.msdn.com/vcblog/archive/2007/11/05/iso-c-standard-update.aspx#6828778> is clear enough, I guess.
## Edit 2012-05-03
Herb Sutter made it clear that:
> 1. Our primary goal is to support "most of C99/C11 that is a subset of ISO C++98/C++11."
> 2. We also for historical reasons ship a C90 compiler which accepts (only) C90 and not C++
> 3. We do not plan to support ISO C features that are not part of either C90 or ISO C++.
The blog post add links and further explanations for those decisions.
Source: <http://herbsutter.com/2012/05/03/reader-qa-what-about-vc-and-c99/>
|
Visual Studio support for new C / C++ standards?
|
[
"",
"c++",
"visual-studio",
"visual-c++",
"c++11",
"c99",
""
] |
Recently I noticed my application appears to be eating memory that never gets released. After profiling with CLRProfiler I've found that the Castle Windsor container I'm using is holding onto objects. These objects are declared with the lifestyle="transient" attribute in the config xml.
I've found if I put an explicit call to `IWindsorContainer.Release(hangingObject)`, that it will drop its references.
This is causing a problem though: I wasn't expecting that with a transient lifestyle object Castle Windsor would keep a reference and effectively create a leak. It's going to be a rather mundane and error-prone task going around inserting explicit Release calls in all the appropriate places.
Have you seen this problem, and do you have any suggestions for how to get around it?
|
I think the answers here are missing a vital point - that this behavior is configurable out of the box via release policies - check out the documentation on the castle project site [here](http://www.castleproject.org/container/documentation/trunk/advanced/releasepolicy.html).
In many scenarios especially where your container exists for the lifetime of the hosting application, and where transient components really don't need to be tracked (because you're handling disposal in your calling code or component that's been injected with the service) then you can just set the release policy to the `NoTrackingReleasePolicy` implementation and be done with it.
Prior to Castle v 1.0 I believe Component Burden will be implemented/introduced - which will help alleviate some of these issues as well around disposal of injected dependencies etc.
**Edit:**
Check out the following posts for more discussion of component burden.
[The Component Burden - Davy Brions](http://elegantcode.com/2008/12/14/the-component-burden/)
Also component burden is implemented in the [official 2.0 release](http://ayende.com/Blog/archive/2009/05/05/castle-windsor-2.0-rtm-released.aspx) of the Windsor Container.
|
One thing to note is that this seems to have been fixed in the Castle Trunk. In r5475, Hammett changed the default release policy in MicroKernel to `LifecycledComponentsReleasePolicy`.
|
Why does Castle Windsor hold onto transient objects?
|
[
"",
"c#",
"castle-windsor",
""
] |
I'm building a `PHP` site, but for now the only `PHP` I'm using is a half-dozen or so includes on certain pages. (I will probably use some database queries eventually.)
Are simple `include()` statements a concern for speed or scaling, as opposed to static `HTML`? What kinds of things tend to cause a site to bog down?
|
Strictly speaking, straight HTML will always serve faster than a server-side approach since the server doesn't have to do any interpretation of the code.
To answer the bigger question, there **are** a number of things that will cause your site to bog down; there's just no specific threshold for when your code is causing the problem vs. PHP. (keep in mind that many of Yahoo's sites are PHP-driven, so don't think that PHP can't scale).
One thing I've noticed is that the PHP-driven sites that are the slowest are the ones that include more than is necessary to display a specific page. OSCommerce (oscommerce.com) is one of the most popular PHP-driven shopping carts. It has a bad habit, however, of including all of their core functionality (just in case it's needed) on every single page. So even if you don't need to display an 'info box', the function is loaded.
On the other hand, there are many PHP frameworks out there (such as CakePHP, Symfony, and CodeIgniter) that take a 'load it as you need it' approach.
I would advise the following:
1. Don't include more functionality than you need for a specific page
2. Keep base functions separate (use an MVC approach when possible)
3. Use require\_once instead of include if you think you'll have nested includes (e.g. page A includes file B which includes file C). This will avoid including the same file more than once. It will also stop the process if a file can't be found; thus helping your troubleshooting process ;)
4. Cache static pages as HTML if possible - to avoid having to reparse when things don't change
|
Certainly include() is slower than static pages. However, with modern systems you're not likely to see this as a bottleneck for a long time - if ever. The benefits of using includes to keep common parts of your site up to date outweigh the tiny performance hit, in my opinion (having different navigation on one page because you forgot to update it leads to a bad user experience, and thus bad feelings about your site/company/whatever).
Using caching will really not help either - caching code is going to be slower than just an include(). The only time caching will benefit you is if you're doing computationally-intensive calculations (very rare, on web pages), or grabbing data from a database.
|
PHP performance considerations?
|
[
"",
"php",
"performance",
"scalability",
""
] |
I have been told that there is a performance difference between the following code blocks.
```
foreach (Entity e in entityList)
{
....
}
```
and
```
for (int i=0; i<entityList.Count; i++)
{
Entity e = (Entity)entityList[i];
...
}
```
where
```
List<Entity> entityList;
```
I am no CLR expert but from what I can tell they should boil down to basically the same code. Does anybody have concrete (heck, I'd take packed dirt) evidence one way or the other?
|
foreach creates an instance of an enumerator (returned from GetEnumerator) and that enumerator also keeps state throughout the course of the foreach loop. It then repeatedly calls MoveNext() on the enumerator and runs your code for each object it returns through Current.
They don't boil down to the same code in any way, really, which you'd see if you wrote your own enumerator.
|
[Here](http://www.codeproject.com/KB/cs/foreach.aspx) is a good article that shows the IL differences between the two loops.
Foreach is technically slower; however, it is much easier to use and easier to read. Unless performance is critical, I prefer the foreach loop over the for loop.
|
Difference between foreach and for loops over an IEnumerable class in C#
|
[
"",
"c#",
"performance",
"loops",
""
] |
I am looking for some advice on where I should concentrate my efforts to get the needed skills to become a Java developer for Oracle applications. I'm a bit confused as there are a lot of technologies in the Java world. Where should I start? What should I avoid? Is JDeveloper a good IDE for a beginner?
|
To become an Oracle Developer there is a bit more to learn than JDBC. You should take a look at the Oracle web site. It is kind of slow and not very intuitive but has a lot of good information. There are OUGs that have good info as well.
If you just want to access Oracle via Java then you should use a framework such as Spring. It takes away the pain of JDBC and lets you write SQL and map it to objects.
If you don't know PL/SQL it might be good to learn what it is.
My two cents from working with Oracle for the past 7 yrs.
|
Your question is a little bit too vague to give a proper answer...
If you plan to query the Oracle Database from an External Java Program (Either within a Swing Application or an Application Server) then you need to learn 2 core APIs:
* JDBC (Java Database Connectivity)
* JPA (Java Persistence API)
JDBC is the core API that allows a Java Program to interact with any RDBMS so you should at least know how it works so whenever you have to dig into low-level code, you will actually know what's happening.
JPA is the latest Java API for persistence, which basically allows one to map Plain Old Java Objects (AKA POJOs) to RDBMS table structures. There are multiple known implementations available, but I would recommend Hibernate or TopLink as good starting points.
After that, you can start to dig into other known frameworks like the Spring Framework for some other RDBMS related APIs.
|
How to start learning JAVA for use with Oracle RDBMS?
|
[
"",
"java",
"oracle",
""
] |
I realize that parameterized SQL queries is the optimal way to sanitize user input when building queries that contain user input, but I'm wondering what is wrong with taking user input and escaping any single quotes and surrounding the whole string with single quotes. Here's the code:
```
sSanitizedInput = "'" & Replace(sInput, "'", "''") & "'"
```
Any single-quote the user enters is replaced with double single-quotes, which eliminates the user's ability to end the string, so anything else they may type, such as semicolons, percent signs, etc., will all be part of the string and not actually executed as part of the command.
We are using Microsoft SQL Server 2000, for which I believe the single-quote is the only string delimiter and the only way to escape the string delimiter, so there is no way to execute anything the user types in.
I don't see any way to launch an SQL injection attack against this, but I realize that if this were as bulletproof as it seems to me someone else would have thought of it already and it would be common practice.
What's wrong with this code? Is there a way to get an SQL injection attack past this sanitization technique? Sample user input that exploits this technique would be very helpful.
---
UPDATE:
I still don't know of any way to effectively launch a SQL injection attack against this code. A few people suggested that a backslash would escape one single-quote and leave the other to end the string so that the rest of the string would be executed as part of the SQL command, and I realize that this method would work to inject SQL into a MySQL database, but in SQL Server 2000 the only way (that I've been able to find) to escape a single-quote is with another single-quote; backslashes won't do it.
And unless there is a way to stop the escaping of the single-quote, none of the rest of the user input will be executed because it will all be taken as one contiguous string.
I understand that there are better ways to sanitize input, but I'm really more interested in learning why the method I provided above won't work. If anyone knows of any specific way to mount a SQL injection attack against this sanitization method I would love to see it.
|
First of all, it's just bad practice. Input validation is always necessary, but it's also always iffy.
Worse yet, blacklist validation is always problematic, it's much better to explicitly and strictly define what values/formats you accept. Admittedly, this is not always possible - but to some extent it must always be done.
Some research papers on the subject:
* <http://www.imperva.com/docs/WP_SQL_Injection_Protection_LK.pdf>
* <http://www.it-docs.net/ddata/4954.pdf> (Disclosure, this last one was mine ;) )
* <https://www.owasp.org/images/d/d4/OWASP_IL_2007_SQL_Smuggling.pdf> (based on the previous paper, which is no longer available)
Point is, any blacklist you do (and too-permissive whitelists) can be bypassed. The last link to my paper shows situations where even quote escaping can be bypassed.
Even if these situations do not apply to you, it's still a bad idea. Moreover, unless your app is trivially small, you're going to have to deal with maintenance, and maybe a certain amount of governance: how do you ensure that it's done right, everywhere, all the time?
The proper way to do it:
* Whitelist validation: type, length, format or accepted values
* If you want to blacklist, go right ahead. Quote escaping is good, but within context of the other mitigations.
* Use Command and Parameter objects, to preparse and validate
* Call parameterized queries only.
* Better yet, use Stored Procedures exclusively.
* Avoid using dynamic SQL, and don't use string concatenation to build queries.
* If using SPs, you can also limit permissions in the database to executing the needed SPs only, and not access tables directly.
* you can also easily verify that the entire codebase only accesses the DB through SPs...
|
Okay, this response will relate to the update of the question:
> "If anyone knows of any specific way to mount a SQL injection attack against this sanitization method I would love to see it."
Now, besides the MySQL backslash escaping - and taking into account that we're actually talking about MSSQL, there are actually 3 possible ways of still SQL injecting your code
> sSanitizedInput = "'" & Replace(sInput, "'", "''") & "'"
Take into account that these will not all be valid at all times, and are very dependent on your actual code around it:
1. Second-order SQL Injection - if an SQL query is rebuilt based upon data retrieved from the database **after escaping**, the data is concatenated unescaped and may be indirectly SQL-injected. See
2. String truncation - (a bit more complicated) - Scenario is you have two fields, say a username and password, and the SQL concatenates both of them. And both fields (or just the first) has a hard limit on length. For instance, the username is limited to 20 characters. Say you have this code:
> ```
> username = left(Replace(sInput, "'", "''"), 20)
> ```
Then what you get - is the username, escaped, and then trimmed to 20 characters. The problem here - I'll stick my quote in the 20th character (e.g. after 19 a's), and your escaping quote will be trimmed (in the 21st character). Then the SQL
```
sSQL = "select * from USERS where username = '" + username + "' and password = '" + password + "'"
```
combined with the aforementioned malformed username will result in the password already being *outside* the quotes, and will just contain the payload directly.
3. Unicode Smuggling - In certain situations, it is possible to pass a high-level unicode character that *looks* like a quote, but *isn't* - until it gets to the database, where suddenly *it is*. Since it isn't a quote when you validate it, it will go through easy... See my previous response for more details, and link to original research.
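The string-truncation scenario in point 2 can be sketched outside of SQL (Python used here purely for illustration; the `sanitize` helper mirrors the questioner's escaping):

```python
def sanitize(s):
    # the questioner's escaping: double every single quote
    return s.replace("'", "''")

# attacker input: 19 filler characters, then a quote -- 20 chars total
attack = "a" * 19 + "'"

escaped = sanitize(attack)   # 21 chars: the quote became two
truncated = escaped[:20]     # the 20-char column limit chops the second quote

# the surviving lone quote merges with the closing quote into an escaped
# quote, so the literal swallows "and password =" and the "password"
# value lands outside any string -- it runs as raw SQL
password = "; DROP TABLE USERS --"
sql = ("select * from USERS where username = '" + truncated +
       "' and password = '" + password + "'")
```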
|
Can I protect against SQL injection by escaping single-quote and surrounding user input with single-quotes?
|
[
"",
"sql",
"security",
"sql-server-2000",
"sql-injection",
"sanitization",
""
] |
I can't get my GridView to enable a user to sort a column of data when I'm using a custom SqlDataSource.
I have a GridView in which the code in the ASP reference to it in the HTML is minimal:
```
<asp:GridView id="grid" runat="server" AutoGenerateColumns="False" AllowSorting="True">
</asp:GridView>
```
In the code-behind I attach a dynamically-created SqlDataSource (the columns it contains are not always the same so the SQL used to create it is constructed at runtime). For example:
I set up the columns...
```
BoundField column = new BoundField();
column.DataField = columnName;
column.HeaderText = "Heading";
column.SortExpression = columnName;
grid.Columns.Add(column);
```
the data source...
```
SqlDataSource dataSource = new SqlDataSource(
"System.Data.SqlClient",
connectionString,
generatedSelectCommand);
```
then the gridview...
```
grid.DataSource = dataSource;
grid.DataKeyNames = mylistOfKeys;
grid.DataBind();
```
At the moment nothing happens when a user clicks on a column heading, when I'd expect it to sort the column data. Does anyone have any ideas what I'm missing?
If there's a nicer way of doing this that would be helpful too as this looks messy to me!
|
First you need to add an event:
```
<asp:GridView AllowSorting="True" OnSorting="gvName_Sorting" ...
```
Then that event looks like:
```
protected void gvName_Sorting( object sender, GridViewSortEventArgs e )
{
...
//rebind gridview
}
```
You basically have to get your data again.
You're right that it looks messy and there is a better way: ASP.Net MVC
Unfortunately that's a drastically different page model.
|
You could also just reassign the datasource.SelectCommand before the DataBind() call in the Sorting handler. Something like this:
```
protected void gvItems_Sorting(object sender, GridViewSortEventArgs e)
{
GridView gv = (GridView)sender;
SqlDataSource ds = (SqlDataSource)gv.DataSource;
ds.SelectCommand = ds.SelectCommand + " order by "
+ e.SortExpression + " " + GetSortDirection(e.SortDirection);
gvItems.DataSource = ds;
gvItems.DataBind();
}
string GetSortDirection(SortDirection sSortDirCmd)
{
    // Note: the parameter must be a SortDirection (not a string),
    // since it is compared against the SortDirection enum below.
    string sSortDir;
    if (SortDirection.Ascending == sSortDirCmd)
    {
        sSortDir = "asc";
    }
    else
    {
        sSortDir = "desc";
    }
    return sSortDir;
}
```
I hope this helps. Let me know if you need extra help implementing it.
Enjoy!
|
How to sort columns in an ASP.NET GridView if using a custom DataSource?
|
[
"",
"c#",
"asp.net",
"gridview",
"sqldatasource",
""
] |
Suppose I have a simple XHTML document that uses a custom namespace for attributes:
```
<html xmlns="..." xmlns:custom="http://www.example.com/ns">
...
<div class="foo" custom:attr="bla"/>
...
</html>
```
How do I match each element that has a certain custom attribute using jQuery? Using
```
$("div[custom:attr]")
```
does not work. (Tried with Firefox only, so far.)
|
[jQuery](https://jquery.com/) does not support custom namespaces directly, but you can find the divs you are looking for by using the filter function.
```
// find all divs that have custom:attr
$('div').filter(function() { return $(this).attr('custom:attr'); }).each(function() {
// matched a div with custom::attr
$(this).html('I was found.');
});
```
|
This works in some conditions:
`$("div[custom\\:attr]")`
However, for a more advanced method, see [this XML Namespace jQuery plug-in](http://www.rfk.id.au/blog/entry/xmlns-selectors-jquery)
|
jQuery attribute selectors: How to query for an attribute with a custom namespace
|
[
"",
"javascript",
"jquery",
""
] |
I am using C# (Visual Studio 2005) and I want to write a string diagonally on an image, but by default C# only provides options to write horizontally or vertically.
How do we write diagonally?
Thanks
|
Do a [Graphics.RotateTransform](http://msdn.microsoft.com/en-us/library/system.drawing.graphics.rotatetransform.aspx) before the DrawString call. Don't forget to reverse the change afterwards, as Phil Wright points out.
|
You can use the RotateTransform and TranslateTransform that are available on the Graphics class. Because DrawString uses GDI+, the transforms affect the drawing. So use something like this...
```
g.RotateTransform(45f);
g.DrawString("My String"...);
g.RotateTransform(-45f);
```
Don't forget to reverse the change though!
|
How can drawString method be used for writing diagonal way
|
[
"",
"c#",
"image",
"visual-studio-2005",
"gdi",
"drawstring",
""
] |
I'd like to get **uniform distribution** in range [0.0, 1.0)
If possible, please let the implementation make use of random bytes from **/dev/urandom.**
It would also be nice if your solution was **thread-safe**. If you're not sure, please indicate that.
See [some solution](https://stackoverflow.com/questions/146387#149814) I thought about after reading other answers.
|
This seems to be pretty good way:
```
unsigned short int r1, r2, r3;
// let r1, r2 and r3 hold random values
double result = ldexp(r1, -48) + ldexp(r2, -32) + ldexp(r3, -16);
```
This is based on NetBSD's drand48 implementation.
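For illustration only (the question is about C/C++, but the arithmetic is language-independent), here is a Python sketch of the same combination, drawing the three 16-bit values from `/dev/urandom` via `os.urandom`:

```python
import os
from math import ldexp

def urandom_double():
    # three 16-bit values from the OS entropy pool (/dev/urandom on POSIX)
    raw = os.urandom(6)
    r1 = int.from_bytes(raw[0:2], "little")
    r2 = int.from_bytes(raw[2:4], "little")
    r3 = int.from_bytes(raw[4:6], "little")
    # same combination as the C snippet above: r1*2^-48 + r2*2^-32 + r3*2^-16
    return ldexp(r1, -48) + ldexp(r2, -32) + ldexp(r3, -16)
```

The maximum possible sum is 65535·(2⁻⁴⁸ + 2⁻³² + 2⁻¹⁶) = 1 − 2⁻⁴⁸, so the result always stays in [0, 1).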
|
**Simple**: A double has 52 bits of precision assuming IEEE. So generate a 52-bit (or larger) unsigned random integer (for example by reading bytes from /dev/urandom), convert it into a double and divide it by 2^(number of bits it was).
This gives a numerically uniform distribution (in that the probability of a value being in a given range is proportional to the range) down to the 52nd binary digit.
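As a sketch of that recipe (Python used here purely for illustration; `os.urandom` reads from `/dev/urandom` on POSIX):

```python
import os

def uniform_52bit():
    # 56 random bits from the OS entropy pool; keep the top 52
    bits = int.from_bytes(os.urandom(7), "big") >> 4
    # scale the 52-bit integer into [0, 1)
    return bits / (1 << 52)
```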
**Complicated**: However, there are a lot of double values in the range [0,1) which the above cannot generate. To be specific, half the values in the range [0,0.5) (the ones that have their least significant bit set) can't occur. Three quarters of the values in the range [0,0.25) (the ones that have either of their least 2 bits set) can't occur, etc, all the way to only one positive value less than 2^-51 being possible, despite a double being capable of representing squillions of such values. So it can't be said to be truly uniform across the specified range to full precision.
Of course we don't want to choose one of those doubles with equal probability, because then the resulting number will on average be too small. We still need the probability of the result being in a given range to be proportional to the range, but with a higher precision on what ranges that works for.
I *think* the following works. I haven't particularly studied or tested this algorithm (as you can probably tell by the way there's no code), and personally I wouldn't use it without finding proper references indicating it's valid. But here goes:
* Start the exponent off at 52 and choose a 52-bit random unsigned integer (assuming 52 bits of mantissa).
* If the most significant bit of the integer is 0, increase the exponent by one, shift the integer left by one, and fill the least significant bit in with a new random bit.
* Repeat until either you hit a 1 in the most significant place, or else the exponent gets too big for your double (1023. Or possibly 1022).
* If you found a 1, divide your value by 2^exponent. If you got all zeroes, return 0 (I know, that's not actually a special case, but it bears emphasis how very unlikely a 0 return is [Edit: actually it might be a special case - it depends whether or not you want to generate denorms. If not then once you have enough 0s in a row you discard anything left and return 0. But in practice this is so unlikely as to be negligible, unless the random source isn't random).
I don't know whether there's actually any practical use for such a random double, mind you. Your definition of random should depend to an extent what it's for. But if you can benefit from all 52 of its significant bits being random, this might actually be helpful.
|
What is the best way to produce random double on POSIX?
|
[
"",
"c++",
"multithreading",
"random",
"posix",
""
] |
I can't figure out a use case for being able to annotate interfaces in Java.
Maybe someone could give me an example?
|
I've used it in Spring to annotate interfaces where the annotation should apply to all subclasses. For example, say you have a Service interface and you might have multiple implementations of the interface, but you want a security annotation to apply regardless of the implementation. In that case, it makes the most sense to annotate the interface.
|
A use case that I am working with is javax/hibernate bean validation; we are using interfaces to help us avoid defining validations on every specific class.
```
public interface IUser {
    @NotNull Long getUserId();
    ...
}

public class WebUser implements IUser {
    private Long userId;

    @Override
    public Long getUserId(){
        return userId;
    }
    ...
}
```
|
Annotations on Interfaces?
|
[
"",
"java",
"annotations",
""
] |
I have about 10 lakh (1 million) records in a single SQL table. I need to load these records into my report, and I need to know whether this will load. When I tried loading them into the report, it shows an out-of-memory exception.
|
This is impossible to answer unless you expand your question. What language are you using? Which report-generating framework? What does the SQL query look like?
**Edit:** Ah, ok, Microsoft SQL Reporting Services. Well, it should easily handle queries on tables with millions of tuples, I'm sure. It all depends on how you have structured your query, so until you give us that we can't help you.
|
Reporting Services (and Cognos, Business Objects, and other BI reporting suites) generally have problems rendering reports that have hundreds of thousands of records or millions of records in the OUTPUT. Most of these systems don't have much of a problem aggregating the data into tens of thousands of records, but once you start going into the hundreds of thousands or millions, you will run into memory errors.
My recommendation is to NOT use Reporting Services for reports that are hundreds of thousands of rows. No person is going to read all the lines in the report. Heck, most of the BI suites won't even output the report if you try to render to Excel due to the 65,536-row limitation. I would recommend using SSIS for large raw data dumps, Analysis Services cubes if you want to allow the user to do exploratory ad hoc slice and dice analysis in Excel, or find ways to break it into smaller, more relevant data that can be consumed by a human -- meaning aggregated or filtered to a few hundred or thousand rows.
If you MUST use reporting services and you want to use it as a tool to get the data into Excel, then you could try rendering to CSV via a subscription. Again, I would recommend just building a SSIS package that does this instead since you won't have memory issues outputting multi-million row CSV files. But if you MUST use reporting services as the output tool, then minimize the memory cost by going with the least memory intensive rendering method.
|
SQL Reporting services - Out of Memory Exception
|
[
"",
"asp.net",
"sql",
"reporting-services",
""
] |
Here's the problem:
In C# I'm getting information from a legacy ACCESS database. .NET converts the content of the database (in the case of this problem a string) to Unicode before handing the content to me.
How do I convert this Unicode string back to its ASCII equivalent?
---
**Edit**
Unicode char 710 is indeed MODIFIER LETTER CIRCUMFLEX ACCENT. Here's the problem a bit more precise:
```
-> (Extended) ASCII character ê (Extended ASCII 136) was inserted in the database.
-> Either Access or the reading component in .NET converted this to U+02C6 U+0065
(MODIFIER LETTER CIRCUMFLEX ACCENT + LATIN SMALL LETTER E)
-> I need the (Extended) ASCII character 136 back.
```
---
Here's what I've tried (I see now why this did not work...):
```
string myInput = Convert.ToString(Convert.ToChar(710));
byte[] asBytes = Encoding.ASCII.GetBytes(myInput);
```
But this does not result in 94 but a byte with value 63...
Here's a new try but it still does not work:
```
byte[] bytes = Encoding.ASCII.GetBytes("ê");
```
---
**Soltution**
Thanks to both [csgero](https://stackoverflow.com/questions/138449/how-to-convert-a-unicode-character-to-its-extended-ascii-equivalent#138579) and [bzlm](https://stackoverflow.com/questions/138449/how-to-convert-a-unicode-character-to-its-extended-ascii-equivalent#138583) for pointing in the right direction I solved the problem [here](https://stackoverflow.com/questions/138449/how-to-convert-a-unicode-character-to-its-ascii-equivalent#141816).
|
Okay, let's elaborate. Both [csgero](https://stackoverflow.com/questions/138449/how-to-convert-a-unicode-character-to-its-extended-ascii-equivalent#138579) and [bzlm](https://stackoverflow.com/questions/138449/how-to-convert-a-unicode-character-to-its-extended-ascii-equivalent#138583) pointed in the right direction.
Because of blzm's reply I looked up the Windows-1252 page on wiki and found that it's called a codepage. The wikipedia article for [Code page](http://en.wikipedia.org/wiki/Codepage) which stated the following:
> No formal standard existed for these ‘[extended character sets](http://en.wikipedia.org/wiki/Extended_ASCII)’; IBM merely referred to the variants as code pages, as it had always done for variants of EBCDIC encodings.
This led me to codepage 437:
> In ASCII-compatible code pages, the lower 128 characters maintained their standard US-ASCII values, and different pages (or sets of characters) could be made available in the upper 128 characters. DOS computers built for the North American market, for example, used [code page 437](http://en.wikipedia.org/wiki/Code_page_437), which included accented characters needed for French, German, and a few other European languages, as well as some graphical line-drawing characters.
So, codepage 437 was the codepage I was calling 'extended ASCII', it had the ê as character 136 so I looked up some other chars as well and they seem right.
csgero came with the Encoding.GetEncoding() hint, I used it to create the following statement which solves my problem:
```
byte[] bytes = Encoding.GetEncoding(437).GetBytes("ê");
```
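As a sanity check (Python used here just for illustration — its `codecs` module ships the same legacy code pages), the byte value round-trips as expected:

```python
# 'ê' maps to byte 0x88 (decimal 136) in code page 437
encoded = "ê".encode("cp437")
assert encoded == b"\x88"
assert encoded[0] == 136
# and decoding brings the precomposed character back
assert encoded.decode("cp437") == "ê"
```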
|
You cannot use the default ASCII encoding (Encoding.ASCII) here, but must create the encoding with the appropriate code page using Encoding.GetEncoding(...). You might try to use code page 1252, which is a superset of ISO 8859-1.
|
How to convert a Unicode character to its ASCII equivalent
|
[
"",
"c#",
".net",
"unicode",
"ascii",
""
] |
I've been creating snapins with the new MMC 3.0 classes and C#. I can't seem to find any examples of how to get rid of the "Console Root" node when creating the \*.msc files. I looked through the examples in the SDK, but I can't seem to find anything for this.
I have seen other snapins that do what I want, but I can't tell what version of MMC they are using.
|
If I've understood you correctly, this isn't specific to MMC3, but it did take me a while to realise. Right-click on the node, and click `New Window from Here`. Then switch back to the Console Root window, and close it (Ctrl+F4).
Inside the .msc, it's //View/BookMark/@NodeID, which needs to be "2" (etc.), instead of "1".
|
I know this is an older post, so maybe a response is not necessary, but what you're trying to do requires saving a customized MSC file. As one reply states, add your SnapIn, select "New Window from Here", then save the MSC file. This is your console configured to show your SnapIn as the root node rather than the Console Root.
Under the File menu is an Options... dialog. From there you can change settings for that particular console file to provide end users a non-Author-mode console; they won't be able to change the layout on you then. Note: this is only a setting for that specific console file (e.g. C:\temp\MyCustomConsole.msc); anyone could open a console and use the add/remove dialog to open the SnapIn in any other console they desire.
|
How to get rid of the "Console Root" node in a MMC 3.0 snapin?
|
[
"",
"c#",
"mmc",
"mmc3",
""
] |
I've been really enjoying Python programming lately. I come from a background of a strong love for C-based coding, where everything is perhaps more complicated than it should be (but puts hair on your chest, at least). So switching from C to Python for more complex things that don't require tons of speed has been more of a boon than a bane in writing projects.
However, coming from this land of brackets and parentheses and structs as far as the naked eye can see, I come across a small problem: I find Python difficult to read.
For example, the following block of text is hard for me to decipher unless I stare at it (which I dislike doing):
```
if foo:
    bar = baz
    while bar != biz:
        bar = i_am_going_to_find_you_biz_i_swear_on_my_life()
did_i_not_warn_you_biz()
my_father_is_avenged()
```
The problem occurs at the end of that if block: all the tabbing and then suddenly returning to a jarring block feels almost disturbing. As a solution, I've started coding my Python like this:
```
if foo:
    bar = baz
    while bar != biz:
        bar = i_am_going_to_find_you_biz_i_swear_on_my_life()
    #-- while --
#-- if --
did_i_not_warn_you_biz()
my_father_is_avenged()
```
And this, for some odd reason, makes me more able to read my own code. But I'm curious: has anyone else with my strange problem found easier ways to make their tabbed-out code more readable? I'd love to find out if there's a better way to do this before this becomes a huge habit for me.
|
Part of learning a new programming language is learning to read code in that language. A crutch like this may make it easier to read your own code, but it's going to impede the process of learning how to read anyone else's Python code. I really think you'd be better off getting rid of the end of block comments and getting used to normal Python.
|
I like to put blank lines around blocks to make control flow more obvious. For example:
```
if foo:
    bar = baz

    while bar != biz:
        bar = i_am_going_to_find_you_biz_i_swear_on_my_life()

did_i_not_warn_you_biz()
my_father_is_avenged()
```
|
Improving Python readability?
|
[
"",
"python",
"readability",
""
] |
We have fairly large C++ application which is composed of about 60 projects in Visual Studio 2005. It currently takes 7 minutes to link in Release mode and I would like to try to reduce the time. Are there any tips for improving the link time?
Most of the projects compile to static libraries, this makes testing easier since each one also has a set of associated unit tests. It seems the use of static libraries prevents VS2005 from using incremental linking, so even with incremental linking turned on it does a full link every time.
Would using DLLs for the sub projects make any difference? I don't really want to go through all the headers and add macros to export the symbols (even using a script) but if it would do something to reduce the 7 minute link time I will certainly consider it.
For some reason using nmake from the command line is slightly faster and linking the same application on Linux (with GCC) is much faster.
* Visual Studio IDE 7 minutes
* Visual C++ using nmake from the command line - 5 minutes
* GCC on Linux 34 seconds
|
If you're using the [`/GL`](http://msdn.microsoft.com/en-us/library/0zza0de8.aspx) flag to enable Whole Program Optimization (WPO) or the [`/LTCG`](http://msdn.microsoft.com/en-us/library/xbf3tbeh.aspx) flag to enable Link Time Code Generation, turning them off will improve link times significantly, at the expense of some optimizations.
Also, if you're using the [`/Z7`](http://msdn.microsoft.com/en-us/library/958x11bc.aspx) flag to put debug symbols in the `.obj` files, your static libraries are probably huge. Using [`/Zi`](http://msdn.microsoft.com/en-us/library/958x11bc.aspx) to create separate `.pdb` files might help if it prevents the linker from reading all of the debug symbols from disk. I'm not sure if it actually does help because I have not benchmarked it.
|
See my suggestion made at Microsoft : <https://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=511300>
You should vote for it! Here is my last comment on it:
Yes, we are using incremental linking to build most of our projects. For the biggest projects, it's useless. In fact, it takes more time to link those projects with incremental linking (2min50 compared to 2min44). We observed that it doesn't work when the size of the ILK file is big (our biggest project generates an ILK of 262144 KB on Win32).
Below, I list other things we tried to reduce link time:
* Explicit template instantiation to reduce code bloat. Small gain.
* IncrediLink (IncrediBuild gives an interesting gain for compilation but almost no gain for linking).
* Removing debug information for libraries that are rarely debugged (good gain).
* Deleting the PDB file in a « Pre-Build Event » (strangely, this gives an interesting gain, e.g. 2min44 instead of 3min34).
* Converting many static libraries to DLLs. Important gain.
* Working with computers equipped with lots of RAM in order to maximize the disk cache. The biggest gain.
* Big obj versus small obj. No difference.
* Changing project options (/Ob1, /INCREMENTAL, Enable COMDAT folding, Embedding manifest, etc.). Some give an interesting gain, others not. We continuously try to maximize our settings.
* Maximizing internal linkage vs. external linkage. It's good programming practice.
* Separating software components as much as we can afford. You can then work on unit tests that link fast. But we still have to integrate things together; we have legacy code and we work with third-party components.
* Using the secret linker switch /expectedoutputsize:120000000. Small gain.
Note that for all our experimentation, we meticulously measured link times. Slow link times seriously cost productivity. When you implement a complex algorithm or track down a difficult bug, you want to iterate rapidly through this sequence: modify some code, link, trace, debug, modify some code, link, etc...
Another point to optimize link time is the impact it has on our continuous integration cycle. We have many applications that share common code, and we run continuous integration on them. The link time of all our applications takes half the cycle time (15 minutes)...
In the thread <https://blogs.msdn.microsoft.com/vcblog/2009/09/10/linker-throughput/>, some interesting suggestions were made to improve link time. On a 64-bit computer, why not offer an option to work with files completely in RAM?
Again, any suggestions that may help us reduce link time are welcome.
|
How to improve link performance for a large C++ application in VS2005
|
[
"",
"c++",
"visual-studio-2005",
""
] |
I have this line in a useful Bash script that I haven't managed to translate into Python, where 'a' is a user-input number of days' worth of files to archive:
```
find ~/podcasts/current -mindepth 2 -mtime "+$a" -exec mv {} ~/podcasts/old \;
```
I am familiar with the os.name and getpass.getuser for the most general cross-platform elements. I also have this function to generate a list of the full names of all the files in the equivalent of ~/podcasts/current:
```
def AllFiles(filepath, depth=1, flist=[]):
    fpath=os.walk(filepath)
    fpath=[item for item in fpath]
    while depth < len(fpath):
        for item in fpath[depth][-1]:
            flist.append(fpath[depth][0]+os.sep+item)
        depth+=1
    return flist
First off, there must be a better way to do that, any suggestion welcome. Either way, for example, "AllFiles('/users/me/music/itunes/itunes music/podcasts')" gives the relevant list, on Windows. Presumably I should be able to go over this list and call os.stat(list_member).st_mtime and move all the stuff older than a certain number in days to the archive; I am a little stuck on that bit.
Of course, anything with the concision of the bash command would also be illuminating.
|
```
import os
import shutil
from os import path
from os.path import join, getmtime
from time import time

archive = "bak"
current = "cur"

def archive_old_versions(days = 3):
    for root, dirs, files in os.walk(current):
        for name in files:
            fullname = join(root, name)
            if (getmtime(fullname) < time() - days * 60 * 60 * 24):
                shutil.move(fullname, join(archive, name))
|
```
import os
import subprocess

home = os.path.expanduser('~')
subprocess.call(['find', home + '/podcasts/current', '-mindepth', '2',
                 '-mtime', '+5', '-exec', 'mv', '{}',
                 home + '/podcasts/old', ';'])
That is not a joke. This python script will do exactly what the bash one does.
**EDIT**: Dropped the backslash on the last param because it is not needed.
|
How to express this Bash command in pure Python
|
[
"",
"python",
"shell",
"language-comparisons",
""
] |
My question is possibly a subtle one:
Web services - are they extensions of the presentation/web layer? ..or are they extensions of the biz/data layer?
That may seem like a dumb question. *Web* services are an extension of the *web* tier. I'm not so sure though. I'm building a pretty standard webform with some AJAX-y features, and it seems to me I could build the web services in one of two ways:
1. they could retrieve data for me (biz/data layer extension).
example: `GetUserData(userEmail)`
where the web form has javascript on it that knows how to consume the user data and make changes to markup
2. they could return completely rendered user controls (html; extension of web layer)
example: `RenderUserProfileControl(userEmail)`
where the web form has simple/dumb js that only copies and pastes the web service html in to the form
I could see it working in either scenario, but I'm interested in different points of view... Thoughts?
|
In my mind, a web service has 2 characteristics:
1. it exposes data to external sources, i.e. other sources than the application they reside within. In this sense I agree with [@Pete](https://stackoverflow.com/questions/118595/ajax-webservices-extensions-of-web-or-biz-layer#118814) in that you're not really designing a web service; you're designing a helper class that responds to requests in a web-service-like fashion. A semantic distinction, perhaps, but one that's proved useful to me.
2. it returns data (and only data) in a format that is reusable by multiple consumers. For me this is the answer to your "why not #2" question - if you return web-control-like structures then you limit the usefulness of the web service to other potential callers. They **must** present the data the way you're returning it, and can't choose to represent it in another way, which minimises the usefulness (and re-usefulness) of the service as a whole.
All of that said, if what you really are looking at is a helper class that responds like a web-service and you only ever intend to use it in this one use case then you can do whatever you like, and your case #2 will **work**. From my perspective, though, it breaks the separation of responsibilities; you're combining data-access and rendering functions in the same class. I suspect that even if you don't care about MVC patterns option #2 will make your classes harder to maintain, and you're certainly limiting their future usefulness to you; if you ever wanted to access the same data but render it differently you'd need to refactor.
|
I would say definitely not #2, but #1 is valid.
I also think (and this is opinion) that web services as a data access layer is not ideal. The service has to have a little bit more value (in general - I am sure there are notable exceptions to this).
|
AJAX webservices - extensions of web or biz layer?
|
[
"",
"asp.net",
"javascript",
"ajax",
"web-services",
""
] |
I'm trying to store a password in a file and retrieve it later. Hashing is not an option, as I need the actual password later to connect to a remote server.
The following code works well, but it creates a different output each time even though the key is the same. This is bad as when the application shuts down and restarts I won't be able to retrieve my password any more. How can I store passwords in a file and retrieve them later?
```
public class EncyptDecrypt {
    static System.Security.Cryptography.TripleDESCryptoServiceProvider keyProv = new System.Security.Cryptography.TripleDESCryptoServiceProvider();

    public static System.Security.Cryptography.TripleDESCryptoServiceProvider KeyProvider {
        get {
            keyProv.Key = new byte[] { /* redacted with prejudice */ };
            return keyProv;
        }
    }

    public static string Encrypt(string text, SymmetricAlgorithm key) {
        if (text.Equals(string.Empty)) return text;
        // Create a memory stream.
        MemoryStream ms = new MemoryStream();
        // Create a CryptoStream using the memory stream and the
        // CSP DES key.
        CryptoStream encStream = new CryptoStream(ms, key.CreateEncryptor(), CryptoStreamMode.Write);
        // Create a StreamWriter to write a string
        // to the stream.
        StreamWriter sw = new StreamWriter(encStream);
        // Write the plaintext to the stream.
        sw.WriteLine(text);
        // Close the StreamWriter and CryptoStream.
        sw.Close();
        encStream.Close();
        // Get an array of bytes that represents
        // the memory stream.
        byte[] buffer = ms.ToArray();
        // Close the memory stream.
        ms.Close();
        // Return the encrypted byte array.
        return System.Convert.ToBase64String(buffer);
    }

    // Decrypt the byte array.
    public static string Decrypt(string cypherText, SymmetricAlgorithm key) {
        if (cypherText.Equals(string.Empty)) return cypherText;
        string val;
        try {
            // Create a memory stream to the passed buffer.
            MemoryStream ms = new MemoryStream(System.Convert.FromBase64String(cypherText));
            // Create a CryptoStream using the memory stream and the
            // CSP DES key.
            CryptoStream encStream = new CryptoStream(ms, key.CreateDecryptor(), CryptoStreamMode.Read);
            // Create a StreamReader for reading the stream.
            StreamReader sr = new StreamReader(encStream);
            // Read the stream as a string.
            val = sr.ReadLine();
            // Close the streams.
            sr.Close();
            encStream.Close();
            ms.Close();
        }
        catch (System.Exception) {
            return string.Empty;
        }
        return val;
    }
}
```
|
I believe that what's happening is that the crypto provider is randomly generating an IV. Specify this and it should no longer differ.
Edit: You can do this in your 'keyProvider' by setting the IV property.
|
According to the docs of CreateEncryptor:
> If the current IV property is a null
> reference (Nothing in Visual Basic),
> the GenerateIV method is called to
> create a new random IV.
This will make the ciphertext different every time.
Note: a way around this is discussed [here](https://stackoverflow.com/questions/65879/should-i-use-an-initialization-vector-iv-along-with-my-encryption#66259) where I suggest you can prepend the plaintext with a mac ... then the first block of ciphertext is effectively the IV, but it's all repeatable
|
Encryption output always different even with same key
|
[
"",
"c#",
"encryption",
"cryptography",
""
] |
Can I use JavaScript to check (irrespective of scrollbars) if an HTML element has overflowed its content? For example, a long div with small, fixed size, the overflow property set to visible, and no scrollbars on the element.
|
Normally, you can compare the `client[Height|Width]` with `scroll[Height|Width]` in order to detect this... but the values will be the same when overflow is visible. So, a detection routine must account for this:
```
// Determines if the passed element is overflowing its bounds,
// either vertically or horizontally.
// Will temporarily modify the "overflow" style to detect this
// if necessary.
function checkOverflow(el)
{
    var curOverflow = el.style.overflow;

    if ( !curOverflow || curOverflow === "visible" )
        el.style.overflow = "hidden";

    var isOverflowing = el.clientWidth < el.scrollWidth
        || el.clientHeight < el.scrollHeight;

    el.style.overflow = curOverflow;

    return isOverflowing;
}
```
Tested in FF3, FF40.0.2, IE6, Chrome 0.2.149.30.
|
Try comparing `element.scrollHeight` / `element.scrollWidth` to `element.offsetHeight` / `element.offsetWidth` :
```
if (element.scrollHeight > element.offsetHeight) {
    console.log('element overflows')
}
```
See MDN docs :
* [offsetWidth](https://developer.mozilla.org/en-US/docs/Web/API/HTMLElement/offsetWidth)
* [offsetHeight](https://developer.mozilla.org/en-US/docs/Web/API/HTMLElement/offsetHeight)
* [scrollWidth](https://developer.mozilla.org/en-US/docs/Web/API/Element/scrollWidth)
* [scrollHeight](https://developer.mozilla.org/en-US/docs/Web/API/Element/scrollHeight)
|
Determine if an HTML element's content overflows
|
[
"",
"javascript",
"html",
"css",
""
] |
What is the best way to find if an object is in an array?
This is the best way I know:
```
function include(arr, obj) {
    for (var i = 0; i < arr.length; i++) {
        if (arr[i] == obj) return true;
    }
}
console.log(include([1, 2, 3, 4], 3)); // true
console.log(include([1, 2, 3, 4], 6)); // undefined
```
|
As of ECMAScript 2016 you can use [`includes()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/includes)
```
arr.includes(obj);
```
If you want to support IE or other older browsers:
```
function include(arr,obj) {
    return (arr.indexOf(obj) != -1);
}
```
EDIT:
This will not work on IE6, 7 or 8 though. The best workaround is to define it yourself if it's not present:
1. [Mozilla's](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/indexOf) (ECMA-262) version:
```
if (!Array.prototype.indexOf)
{
    Array.prototype.indexOf = function(searchElement /*, fromIndex */)
    {
        "use strict";
        if (this === void 0 || this === null)
            throw new TypeError();
        var t = Object(this);
        var len = t.length >>> 0;
        if (len === 0)
            return -1;
        var n = 0;
        if (arguments.length > 0)
        {
            n = Number(arguments[1]);
            if (n !== n)
                n = 0;
            else if (n !== 0 && n !== (1 / 0) && n !== -(1 / 0))
                n = (n > 0 || -1) * Math.floor(Math.abs(n));
        }
        if (n >= len)
            return -1;
        var k = n >= 0
            ? n
            : Math.max(len - Math.abs(n), 0);
        for (; k < len; k++)
        {
            if (k in t && t[k] === searchElement)
                return k;
        }
        return -1;
    };
}
```
2. [Daniel James](https://stackoverflow.com/questions/143847/best-way-to-find-an-item-in-a-javascript-array#144172)'s version:
```
if (!Array.prototype.indexOf) {
    Array.prototype.indexOf = function (obj, fromIndex) {
        if (fromIndex == null) {
            fromIndex = 0;
        } else if (fromIndex < 0) {
            fromIndex = Math.max(0, this.length + fromIndex);
        }
        for (var i = fromIndex, j = this.length; i < j; i++) {
            if (this[i] === obj)
                return i;
        }
        return -1;
    };
}
```
3. [roosteronacid](https://stackoverflow.com/questions/143847/best-way-to-find-an-item-in-a-javascript-array#144664)'s version:
```
Array.prototype.hasObject = (
    !Array.indexOf ? function (o)
    {
        var l = this.length + 1;
        while (l -= 1)
        {
            if (this[l - 1] === o)
            {
                return true;
            }
        }
        return false;
    } : function (o)
    {
        return (this.indexOf(o) !== -1);
    }
);
```
|
If you are using jQuery:
```
$.inArray(5 + 5, [ "8", "9", "10", 10 + "" ]);
```
For more information: <http://api.jquery.com/jQuery.inArray/>
|
Best way to find if an item is in a JavaScript array?
|
[
"",
"javascript",
"arrays",
""
] |
Let's say we have a C++ class like:
```
class MyClass
{
void processArray( int values[255] )
{
int i ;
for (i=0;i<255;i++)
{
// do something with values in the array
}
}
};
```
and one instance of the class like:
```
MyClass myInstance ;
```
and two threads which call the processArray method of that instance (depending on how the system executes threads, probably in a completely irregular order). There is no mutex lock used in that scope, so both threads can enter.
**My question is: what happens to `i`? Does each thread's scope have its own `i`, or would each entering thread modify `i` in the for loop, causing `i` to change weirdly all the time?**
|
`i` is allocated on the stack. Since each thread has its own separate stack, each thread gets its own copy of `i`.
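To illustrate, here is a sketch using `std::thread` (which postdates the original question): two threads run the same method concurrently, and each completes all 255 iterations because each call has its own stack frame, and therefore its own `i`.

```cpp
#include <array>
#include <cassert>
#include <thread>

// Each call to processArray gets its own stack frame, so each thread
// has an independent copy of the loop counter `i`.
int processArray(const std::array<int, 255>& values)
{
    int sum = 0;
    for (int i = 0; i < 255; i++)   // `i` lives on the calling thread's stack
    {
        sum += values[i];
    }
    return sum;
}

int run_two_threads()
{
    std::array<int, 255> data;
    data.fill(1);
    int a = 0, b = 0;
    std::thread t1([&] { a = processArray(data); });
    std::thread t2([&] { b = processArray(data); });
    t1.join();
    t2.join();
    return a + b;   // 510 only if both loops ran their full 255 iterations
}
```

Reading `data` concurrently is safe here because neither thread writes to it; it is the per-thread loop counter that makes the method reentrant.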
|
Be careful. In the example provided, the method **processArray** seems to be *reentrant* (it's not clear what happens in *// do something with values in the array*). If so, no race occurs when two or more threads invoke it simultaneously, and it is therefore safe to call it without any locking mechanism.
To enforce this, you could mark both the instance and the method with the **volatile** qualifier, to let users know that no lock is required.
Andrei Alexandrescu has published an interesting article about the *volatile* qualifier and how it can be used to write correct multithreaded classes. The article is available here:
<http://www.ddj.com/cpp/184403766>
|
How does Multiple C++ Threads execute on a class method
|
[
"",
"c++",
"multithreading",
""
] |
```
string percentage = e.Row.Cells[7].Text;
```
I am trying to do some dynamic stuff with my GridView, so I have wired up some code to the RowDataBound event. I am trying to get the value from a particular cell, which is a TemplateField. But the code above always seems to be returning an empty string.
Any ideas?
To clarify, here is the offending cell:
```
<asp:TemplateField HeaderText="# Percentage click throughs">
<ItemTemplate>
<%# AddPercentClickThroughs((int)Eval("EmailSummary.pLinksClicked"), (int)Eval("NumberOfSends")) %>
</ItemTemplate>
</asp:TemplateField>
```
On a related note, does anyone know if there is a better way of selecting the cell in the row? It sucks putting in `cell[1]`. Couldn't I do `cell["mycellname"]`, so if I decide to change the order of my cells, bugs won't appear?
|
First you need to wrap your code in a `Label` or `Literal` control so that you can reference it properly. What's happening is that there's no way for the system to keep track of it, because there's no control associated with the text. It's the control's responsibility to add its contents to viewstate.
You need to use gridView.FindControl("controlName"); to get the control in the row. From there you can get at its properties including `Text`.
You can also get at the DataItem property of the Row in question and cast it to the appropriate type and extract the information directly.
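A sketch of the `FindControl` approach (the control ID `lblPercentage` here is hypothetical; it assumes the template expression has been wrapped in a Label as described above):

```
protected void GridView1_RowDataBound(object sender, GridViewRowEventArgs e)
{
    if (e.Row.RowType == DataControlRowType.DataRow)
    {
        // Assumes the ItemTemplate contains e.g.
        // <asp:Label ID="lblPercentage" runat="server" Text='<%# ... %>' />
        Label lbl = (Label)e.Row.FindControl("lblPercentage");
        string percentage = lbl.Text;
    }
}
```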
|
Why not pull the data directly out of the data source?
```
DataBinder.Eval(e.Row.DataItem, "ColumnName")
```
|
Getting value from a cell from a gridview on RowDataBound event
|
[
"",
"c#",
"asp.net",
"gridview",
""
] |
Here's the idea: I'd like to make a service that will look for a certain program starting up and disallow it unless certain conditions are met.
Let's say I have a game I want to stop myself from playing during the week. So if I start it up on any day other than Friday/Saturday/Sunday, it will intercept and cancel. Is this possible with C#?
Main thing I am looking for is how to catch a program starting up, rest should be easy.
|
Well, you can definitely determine which programs are running by looking for the process names you want (GetProcessesByName()) and killing them.
```
Process[] processes = Process.GetProcessesByName(processName);
foreach(Process process in processes)
{
process.Kill();
}
```
You could just have a list of them you didn't want to run, do the time check (or whatever criteria was to be met) and then walk the list. I did something like this once for a test and it works well enough.
|
I don't know about C# in particular here, but one way you could accomplish this (albeit a dangerous way) is by using the Image File Execution Options (<http://blogs.msdn.com/junfeng/archive/2004/04/28/121871.aspx>) in the registry. For whatever executable you are interested in intercepting, you could set the Debugger option for it and then create a small application that would be used as the debugger, essentially filtering these calls. If you wanted to allow it to run, then start up the process; otherwise do whatever you like. I've never attempted this, but it seems like it could do what you want.
Or, if you wanted to react to the process starting up and then close it down, you could use a ProcessWatcher (<http://weblogs.asp.net/whaggard/archive/2006/02/11/438006.aspx>) and subscribe to the process-created event, then close the process down if needed. However, that is more of a reactive approach instead of a proactive approach like the first one.
|
Way to stop a program from starting up using c#?
|
[
"",
"c#",
"windows",
"service",
""
] |
I know there is a function somewhere that will accept a client rect and it will convert it into a window rect for you. I just can't find / remember it!
Does anyone know what it is?
It will do something similar to:
```
const CRect client(0, 0, 200, 200);
const CRect window = ClientRectToWindowRect(client);
SetWindowPos(...)
```
|
You're probably thinking of [`AdjustWindowRectEx()`](http://msdn.microsoft.com/en-us/library/ms632667(VS.85).aspx). Keep in mind, this is intended for use when *creating* a window - there's no guarantee that it will produce an accurate set of window dimensions for an existing window; for that, use [`GetWindowRect()`](http://msdn.microsoft.com/en-us/library/ms633519(VS.85).aspx).
|
Is this what you are looking for?
ClientToScreen
<http://msdn.microsoft.com/en-us/library/ms532670(VS.85).aspx>
|
Win32/MFC Get window rect from client rect
|
[
"",
"c++",
"windows",
"winapi",
"mfc",
""
] |
Yacc does not permit objects to be passed around. Because the %union can only contain POD types, complex objects must be new'd and passed around by pointer. If a syntax error occurs, the yacc parser just stops running, and references to all of those created objects are lost.
The only solution I've come up with is that all new'd objects inherit a particular base class and are added to a container when allocated; if there is an error, everything in that container can be deleted.
Does anyone know of any better yacc tricks to solve this problem?
Please don't tell me to choose a different parser.
|
I love Yacc, but the discriminating union stack does present a challenge.
I don't know whether you are using C or C++. I've modified Yacc to generate C++ for my own purposes, but this solution can be adapted to C.
My preferred solution is to pass an interface to the owner down the parse tree, rather than constructed objects up the stack. Do this by creating your own stack outside of Yacc's. Before you invoke a non-terminal that allocates an object, push the owner of that object to this stack.
For example:
```
class IExpressionOwner
{
public:
virtual ExpressionAdd *newExpressionAdd() = 0;
virtual ExpressionSubtract *newExpressionSubtract() = 0;
virtual ExpressionMultiply *newExpressionMultiply() = 0;
virtual ExpressionDivide *newExpressionDivide() = 0;
};
class ExpressionAdd : public Expression, public IExpressionOwner
{
private:
std::auto_ptr<Expression> left;
std::auto_ptr<Expression> right;
public:
ExpressionAdd *newExpressionAdd()
{
ExpressionAdd *newExpression = new ExpressionAdd();
std::auto_ptr<Expression> autoPtr(newExpression);
if (left.get() == NULL)
left = autoPtr;
else
right = autoPtr;
return newExpression;
}
...
};
class Parser
{
private:
std::stack<IExpressionOwner *> expressionOwner;
...
};
```
Everything that wants an expression has to implement the IExpressionOwner interface and push itself to the stack before invoking the expression non-terminal. It's a lot of extra code, but it controls object lifetime.
**Update**
The expression example is a bad one, since you don't know the operation until after you've reduced the left operand. Still, this technique works in many cases, and requires just a little tweaking for expressions.
|
If it suits your project, consider using the Boehm Garbage collector. That way you can freely allocate new objects and let the collector handle the deletes. Of course there are tradeoffs involved in using a garbage collector. You would have to weigh the costs and benefits.
|
What is the best way of preventing memory leaks in a yacc-based parser?
|
[
"",
"c++",
"yacc",
""
] |
I've come across this term POD-type a few times.
What does it mean?
|
*POD* stands for *Plain Old Data* - that is, a class (whether defined with the keyword `struct` or the keyword `class`) without constructors, destructors and virtual members functions. [Wikipedia's article on POD](http://en.wikipedia.org/wiki/Plain_Old_Data_Structures) goes into a bit more detail and defines it as:
> A Plain Old Data Structure in C++ is an aggregate class that contains only PODS as members, has no user-defined destructor, no user-defined copy assignment operator, and no nonstatic members of pointer-to-member type.
Greater detail can be found in [this answer for C++98/03](https://stackoverflow.com/a/4178176/734069). C++11 changed the rules surrounding POD, relaxing them greatly, thus [necessitating a follow-up answer here](https://stackoverflow.com/a/7189821/734069).
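Since C++11 this can be checked at compile time with the `std::is_pod` trait (a later addition to the language, deprecated again in C++20); a small sketch:

```cpp
#include <cassert>
#include <type_traits>

// POD: only built-in members, no user-defined constructors,
// destructor, or virtual functions.
struct PodPoint
{
    int x;
    int y;
};

// Not POD: the virtual destructor introduces a hidden vtable pointer.
struct NonPod
{
    virtual ~NonPod() {}
};
```

With these definitions, `std::is_pod<PodPoint>::value` is true and `std::is_pod<NonPod>::value` is false.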
|
### Very informally:
A POD is a type (including classes) where the C++ compiler guarantees that there will be no "magic" going on in the structure: for example hidden pointers to vtables, offsets that get applied to the address when it is cast to other types (at least if the target's POD too), constructors, or destructors. Roughly speaking, a type is a POD when the only things in it are built-in types and combinations of them. The result is something that "acts like" a C type.
### Less informally:
* `int`, `char`, `wchar_t`, `bool`, `float`, `double` are PODs, as are `long/short` and `signed/unsigned` versions of them.
* pointers (including pointer-to-function and pointer-to-member) are PODs,
* `enums` are PODs
* a `const` or `volatile` POD is a POD.
* a `class`, `struct` or `union` of PODs is a POD provided that all non-static data members are `public`, and it has no base class and no constructors, destructors, or virtual methods. Static members don't stop something being a POD under this rule. This rule has changed in C++11 and certain private members are allowed: [Can a class with all private members be a POD class?](https://stackoverflow.com/questions/4762788/can-a-class-with-all-private-members-be-a-pod-class/4762944#4762944)
* Wikipedia is wrong to say that a POD cannot have members of type pointer-to-member. Or rather, it's correct for the C++98 wording, but TC1 made explicit that pointers-to-member are POD.
### Formally (C++03 Standard):
> **3.9(10):** "Arithmetic types (3.9.1), enumeration types, pointer types, and pointer to member types (3.9.2) and cv-qualified versions of these types (3.9.3) are collectively called scalar types. Scalar types, POD-struct types, POD-union types (clause 9), arrays of such types and cv-qualified versions of these types (3.9.3) are collectively called POD types"
>
> **9(4):** "A POD-struct is an aggregate class that has no non-static data members of type non-POD-struct, non-POD-union (or array of such types) or reference, and has no user-defined copy assignment operator and no user-defined destructor. Similarly, a POD-union is an aggregate union that has no non-static data members of type non-POD-struct, non-POD-union (or array of such types) or reference, and has no user-defined copy assignment operator and no user-defined destructor."
>
> **8.5.1(1):** "An aggregate is an array or class (clause 9) with no user-declared constructors (12.1), no private or protected non-static data members (clause 11), no base classes (clause 10) and no virtual functions (10.3)."
|
What are POD types in C++?
|
[
"",
"c++",
"types",
"c++-faq",
""
] |
I keep hearing that
```
catch (Exception ex)
```
is bad practice; however, I often use it in event handlers where an operation may, for example, go to the network, allowing the possibility of many different types of failure. In this case, I catch all exceptions and display the error message to the user in a message box.
Is this considered bad practice? There's nothing more I can do with the exception: I don't want it to halt the application, the user needs to know what happened, and I'm at the top level of my code. What else should I be doing?
EDIT:
People are saying that I should look through the stack of calls and handle errors specifically, because for example a StackOverflow exception cannot be handled meaningfully. However, halting the process is the **worst** outcome, I want to prevent that at all costs. If I can't handle a StackOverflow, so be it - the outcome will be no worse than not catching exceptions at all, and in 99% of cases, informing the user is the least bad option as far as I'm concerned.
Also, despite my best efforts to work out all of the possible exceptions that can be thrown, in a large code-base it's likely that I would miss some. And for most of them the best defense is still to inform the user.
|
The bad practice is
```
catch (Exception ex){}
```
and variants:
```
catch (Exception ex){ return false; }
```
etc.
Catching all exceptions on the top-level and passing them on to the user (by either logging them or displaying them in a message-box, depending on whether you are writing a server- or a client-application), is exactly the right thing to do.
|
I find the arguments that generic catches are always bad to be overly dogmatic. They, like everything else, have a place.
That place is **not** your library code, nor the classes you custom-develop for your app. That place is, as many have mentioned, the very top level of the app, where if **any** exception is raised, it is most likely unexpected.
Here's my general rule (and like all rules, it's designed to be broken when appropriate):
I use classes and custom-built libraries for the majority of the lifting in an app. This is basic app architecture -- really basic, mind you. These guys try to handle as many exceptions as possible, and if they really can't continue, throw the most specific kind available back up to the UI.
At the UI, I tend to always catch all from event handlers. If there is a reasonable expectation of catching a specific exception, **and** I can do something about it, then I catch the specific exception and handle it gracefully. This must come before the catch all, however, as .NET will only use the very first exception handler which matches your exception. (Always order from most specific to most generic!)
If I can't do anything about the exception other than error out (say, the database is offline), or if the exception truly is unexpected, catch all will take it, log it, and fail safe quickly, with a general error message displayed to the user before dying. (Of course, there are certain classes of errors which will almost always fail ungracefully -- OutOfMemory, StackOverflow, etc. I'm fortunate enough to have not had to deal with those in prod-level code ... so far!)
Catch all has its place. That place is not to hide the exception, that place is not to try and recover (because if you don't know what you caught, how can you possibly recover), that place is not to prevent errors from showing to the user while allowing your app to continue executing in an unknown and bad state.
Catch all's place is to be a last resort, a trap to ensure that if anything makes it through your well-designed and well-guarded defenses, that at a minimum it's logged appropriately and a clean exit can be made. It **is** bad practice if you don't have well-designed and well-guarded defenses in place at lower levels, and it is **very** bad practice at lower levels, but done as a last resort it is (in my mind) not only acceptable, but often the right thing to do.
|
Catching base Exception class in .NET
|
[
"",
"c#",
"exception",
""
] |
I'm wondering if there is any nice and neat tool to replace the GNU Autotools or Make to build a very large C++ project; they are such complicated things to use.
It is simple to generate all the files that the Autotools require if the project is small, but if the source code is divided into many directories, with multiple third-party libraries and many dependencies, you fall into "Autotools Hell".
Thanks for any recommendations.
|
The [Google V8 JavaScript Engine](http://code.google.com/p/v8/) is written in C++ and uses [SCons](http://www.scons.org/), so I guess that's one vote for it.
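For reference, SCons build scripts are plain Python; a minimal sketch (the file names here are hypothetical):

```python
# SConstruct -- SCons build scripts are ordinary Python code
env = Environment(CXXFLAGS='-O2')
env.Program(target='myapp', source=['main.cpp', 'util.cpp'])
```

Running `scons` in the directory containing this file would build `myapp` and track the dependencies itself, with no generated makefiles.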
|
[CMake](http://www.cmake.org/)? (generates makefiles, so technically not a replacement as such).
I've also seen "[SCons](http://www.scons.org/)" pop up in a few places recently. Haven't created anything with it myself though.
|
Any good building tools for a C++ project, which can replace make?
|
[
"",
"c++",
"build-process",
"makefile",
""
] |
Does anyone know how I can get the currently open windows or processes of a local machine using Java?
What I'm trying to do is: list the currently open tasks, windows, or processes, like in the Windows Task Manager, but using a multi-platform approach, using only Java if possible.
|
Finally, with Java 9+ it is possible with [`ProcessHandle`](https://docs.oracle.com/javase/9/docs/api/java/lang/ProcessHandle.html):
```
public static void main(String[] args) {
ProcessHandle.allProcesses()
.forEach(process -> System.out.println(processDetails(process)));
}
private static String processDetails(ProcessHandle process) {
return String.format("%8d %8s %10s %26s %-40s",
process.pid(),
text(process.parent().map(ProcessHandle::pid)),
text(process.info().user()),
text(process.info().startInstant()),
text(process.info().commandLine()));
}
private static String text(Optional<?> optional) {
return optional.map(Object::toString).orElse("-");
}
```
Output:
```
1 - root 2017-11-19T18:01:13.100Z /sbin/init
...
639 1325 www-data 2018-12-04T06:35:58.680Z /usr/sbin/apache2 -k start
...
23082 11054 huguesm 2018-12-04T10:24:22.100Z /.../java ProcessListDemo
```
|
This is another approach to parse the process list from the command "**ps -e**":
```
try {
String line;
Process p = Runtime.getRuntime().exec("ps -e");
BufferedReader input =
new BufferedReader(new InputStreamReader(p.getInputStream()));
while ((line = input.readLine()) != null) {
System.out.println(line); //<-- Parse data here.
}
input.close();
} catch (Exception err) {
err.printStackTrace();
}
```
If you are using Windows, then you should replace the line "Process p = Runtime.getRun..." (3rd line) with one that looks like this:
```
Process p = Runtime.getRuntime().exec
(System.getenv("windir") +"\\system32\\"+"tasklist.exe");
```
Hope the info helps!
|
How to get a list of current open windows/process with Java?
|
[
"",
"java",
"process",
""
] |
How do I create a unique constraint on an existing table in SQL Server 2005?
I am looking for both the TSQL and how to do it in the Database Diagram.
|
The SQL command is:
```
ALTER TABLE <tablename> ADD CONSTRAINT
<constraintname> UNIQUE NONCLUSTERED
(
<columnname>
)
```
See the full syntax [here](http://msdn.microsoft.com/en-us/library/ms190273.aspx).
If you want to do it from a Database Diagram:
* right-click on the table and select 'Indexes/Keys'
* click the Add button to add a new index
* enter the necessary info in the Properties on the right hand side:
+ the columns you want (click the ellipsis button to select)
+ set Is Unique to Yes
+ give it an appropriate name
|
In SQL Server Management Studio Express:
* Right-click table, choose **Modify** or **Design(For Later Versions)**
* Right-click field, choose **Indexes/Keys...**
* Click **Add**
* For **Columns**, select the **field name** you want to be unique.
* For **Type**, choose **Unique Key**.
* Click **Close**, **Save** the table.
|
SQL Server 2005 How Create a Unique Constraint?
|
[
"",
"sql",
"sql-server",
"constraints",
""
] |
Comparing string in C# is pretty simple. In fact there are several ways to do it. I have listed some in the block below. What I am curious about are the differences between them and when one should be used over the others? Should one be avoided at all costs? Are there more I haven't listed?
```
string testString = "Test";
string anotherString = "Another";
if (testString.CompareTo(anotherString) == 0) {}
if (testString.Equals(anotherString)) {}
if (testString == anotherString) {}
```
(Note: I am looking for equality in this example, not less than or greater than but feel free to comment on that as well)
|
Here are the rules for how these functions work:
**`stringValue.CompareTo(otherStringValue)`**
1. `null` comes before a string
2. it uses `CultureInfo.CurrentCulture.CompareInfo.Compare`, which means it will use a culture-dependent comparison. This might mean that `ß` will compare equal to `SS` in Germany, or similar
**`stringValue.Equals(otherStringValue)`**
1. `null` is not considered equal to anything
2. unless you specify a `StringComparison` option, it will use what looks like a direct ordinal equality check, i.e. `ß` is not the same as `SS`, in any language or culture
**`stringValue == otherStringValue`**
1. Is not the same as `stringValue.Equals()`.
2. The `==` operator calls the static `Equals(string a, string b)` method (which in turn goes to an internal `EqualsHelper` to do the comparison).
3. Calling `.Equals()` on a `null` string throws a null reference exception, while `==` does not.
**`Object.ReferenceEquals(stringValue, otherStringValue)`**
Just checks that references are the same, i.e. it isn't just two strings with the same contents, you're comparing a string object with itself.
---
Note that with the options above that use method calls, there are overloads with more options to specify how to compare.
My advice if you just want to check for equality is to make up your mind whether you want to use a culture-dependent comparison or not, and then use `.CompareTo` or `.Equals`, depending on the choice.
|
From MSDN:
> "The CompareTo method was designed primarily for use in sorting or
> alphabetizing operations. It should not be used when the primary
> purpose of the method call is to determine whether two strings are
> equivalent. To determine whether two strings are equivalent, call
> the Equals method."
They suggest using `.Equals` instead of `.CompareTo` when looking solely for equality. I am not sure if there is a difference between `.Equals` and `==` for the `string` class. I will sometimes use `.Equals` or `Object.ReferenceEquals` instead of `==` for my own classes in case someone comes along at a later time and redefines the `==` operator for that class.
|
Differences in string compare methods in C#
|
[
"",
"c#",
"string",
"comparison",
""
] |
In Python, given a module X and a class Y, how can I iterate or generate a list of all subclasses of Y that exist in module X?
|
Here's one way to do it:
```
import inspect
def get_subclasses(mod, cls):
"""Yield the classes in module ``mod`` that inherit from ``cls``"""
for name, obj in inspect.getmembers(mod):
if hasattr(obj, "__bases__") and cls in obj.__bases__:
yield obj
```
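A quick way to exercise the generator above is to build a throwaway module with `types.ModuleType` (the class names here are illustrative; the function is repeated for completeness):

```python
import inspect
import types

def get_subclasses(mod, cls):
    """Yield the classes in module ``mod`` that inherit directly from ``cls``"""
    for name, obj in inspect.getmembers(mod):
        if hasattr(obj, "__bases__") and cls in obj.__bases__:
            yield obj

# Build a throwaway module to demonstrate.
mod = types.ModuleType("shapes")

class Shape: pass
class Circle(Shape): pass
class Square(Shape): pass
class Other: pass

mod.Shape, mod.Circle, mod.Square, mod.Other = Shape, Circle, Square, Other

found = set(get_subclasses(mod, Shape))  # {Circle, Square}
```

Note that checking `cls in obj.__bases__` only finds direct subclasses; a subclass of `Circle` would not be yielded.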
|
Although Quamrana's suggestion works fine, there are a couple of possible improvements I'd like to suggest to make it more pythonic. They rely on using the inspect module from the standard library.
1. You can avoid the getattr call by using `inspect.getmembers()`
2. The try/catch can be avoided by using `inspect.isclass()`
With those, you can reduce the whole thing to a single list comprehension if you like:
```
def find_subclasses(module, clazz):
return [
cls
for name, cls in inspect.getmembers(module)
if inspect.isclass(cls) and issubclass(cls, clazz)
]
```
|
Iterate over subclasses of a given class in a given module
|
[
"",
"python",
"oop",
""
] |
I have a dict, that looks like this:
```
{
'foo': {
'opt1': 1,
'opt2': 2,
},
'foo/bar': {
'opt3': 3,
'opt4': 4,
},
'foo/bar/baz': {
'opt5': 5,
'opt6': 6,
}
}
```
And I need to get it to look like:
```
{
'foo': {
'opt1': 1,
'opt2': 2,
'bar': {
'opt3': 3,
'opt4': 4,
'baz': {
'opt5': 5,
'opt6': 6,
}
}
}
}
```
I should point out that there can and will be multiple top-level keys ('foo' in this case). I could probably throw something together to get what I need, but I was hoping that there is a more efficient solution.
|
Like this:
```
def nest(d):
rv = {}
for key, value in d.iteritems():
node = rv
for part in key.split('/'):
node = node.setdefault(part, {})
node.update(value)
return rv
```
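Ported to Python 3 (`items()` instead of `iteritems()`), a quick check against the example from the question:

```python
def nest(d):
    # Walk each slash-separated key, creating intermediate dicts as
    # needed, then merge the leaf options into the innermost dict.
    rv = {}
    for key, value in d.items():
        node = rv
        for part in key.split('/'):
            node = node.setdefault(part, {})
        node.update(value)
    return rv

flat = {
    'foo': {'opt1': 1, 'opt2': 2},
    'foo/bar': {'opt3': 3, 'opt4': 4},
    'foo/bar/baz': {'opt5': 5, 'opt6': 6},
}

nested = nest(flat)
```

Because `setdefault` creates missing levels on demand, the result is the same regardless of the order in which the flat keys are visited.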
|
```
def layer(dict):
for k,v in dict:
if '/' in k:
del dict[k]
subdict = dict.get(k[:k.find('/')],{})
subdict[k[k.find('/')+1:]] = v
layer(subdict)
```
|
Need to create a layered dict from a flat one
|
[
"",
"python",
"algorithm",
"performance",
""
] |
Is it really viable to use GCJ to publish server-side applications? Webapps?
My boss is convinced that compiling our (***my***) webapp into a binary executable is a brilliant idea. (Then again, he likes nice, small, simple things with blinky lights that he can understand.) He instinctively sees no issues with this, while I only see an endless series of problems and degradations. Once I start talking to him about the complexity of our platform, and the more in-depth specifics of byte code, JVMs, libraries, differences between operating systems, processor architectures, etc...well...his eyes glaze over, he smiles, and he has made it clear he thinks I'm being childishly resistant.
Why does he want a single magic executable? He sees a couple of "benefits":
* If it is a binary executable, then it is hard to reverse engineer and circumvent any licensing. Management lives in constant fear that this is happening, even though we sell into larger corporates who generally do not do cheat with server software.
* There is that vision of downloading this magic executable, running it, and everything works. (No more sending me out to do customer installations, which is not that frequent.)
So, I've done my obligatory 20 minutes of googling, and now I am here.
A bit of background on my application:
**What it is made from:**
* Java 6 (Sun's JVM)
* AspectJ 1.6
* Tomcat 6
* Hibernate 3
* Spring 2
* another two dozen supporting jar files
**What it does**
* A streaming media CMS
* Performance sensitive
* Deployed on Linux, Solaris, Windows (and developed on a Mac)
As you can probably gather, I'm highly skeptical of this *"compiling Java to native code"* thing. It sounds like where Mono (.NET on Linux) was back in 2000. But am I being overly pessimistic? Is it viable? Should I actually spend the time (days if not weeks) to try this out?
There is one other similar thread ([Java Compiler Options to produce .exe files](https://stackoverflow.com/questions/53845/java-compiler-options-to-produce-exe-files)) but it is a bit too simple, the links dated, and not really geared towards a server-side question.
Your informed opinions will be highly cherished, my dear SOpedians! TIA!
|
FWIW: I have never had good luck with GCJ. I have had a lot of problems using it, and some obscure issues popped up that took forever to trace back to GCJ rather than to me (I am always very, very reluctant to blame things on external libraries). I will openly admit this happened several years ago and I have never wanted to go near GCJ again. To give that more substance: this was while I was in school and I was working on a mostly trivial program, so on an *"enterprise level"* I have had a healthy fear of GCJ.
|
I don't know about GCJ, but my company uses Excelsior JET with success. We haven't done it with a webapp (yet) but it should be able to handle anything that the Sun JRE can. In fact JET is a Sun-certified Java implementation.
|
Is GCJ (GNU Compiler for Java) a viable tool for publishing a webapp?
|
[
"",
"java",
"gcj",
""
] |
I am looking for a method of reading emails using POP3 in C# 2.0. Currently, I am using code found in [CodeProject](http://www.codeproject.com/KB/IP/Pop3MimeClient.aspx?fid=341657). However, this solution is less than ideal. The biggest problem is that it doesn't support emails written in Unicode.
|
I've successfully used [OpenPop.NET](http://sourceforge.net/projects/hpop/) to access emails via POP3.
|
Downloading the email via the POP3 protocol is the easy part of the task. The protocol is quite simple, and the only hard part could be advanced authentication methods if you don't want to send a clear-text password over the network (and cannot use an SSL-encrypted communication channel). See [RFC 1939: Post Office Protocol - Version 3](http://www.ietf.org/rfc/rfc1939.txt) and [RFC 1734: POP3 AUTHentication command](http://www.ietf.org/rfc/rfc1734.txt) for details.
The hard part comes when you have to parse the received email, which means parsing the MIME format in most cases. You can write a quick&dirty MIME parser in a few hours or days, and it will handle 95+% of all incoming messages. Improving the parser so it can parse almost any email means:
* getting email samples sent from the most popular mail clients and improve the parser in order to fix errors and RFC misinterpretations generated by them.
* Making sure that messages violating RFC for message headers and content will not crash your parser and that you will be able to read every readable or guessable value from the mangled email
* correct handling of internationalization issues (e.g. languages written from right to left, correct encoding for a specific language, etc.)
* UNICODE
* Attachments and hierarchical message item tree as seen in ["Mime torture email sample"](http://www.rebex.net/secure-mail.net/sample-mime-explorer.aspx)
* S/MIME (signed and encrypted emails).
* and so on
Debugging a robust MIME parser takes months of work. I know, because I watched my friend write one such parser for the component mentioned below, and wrote a few unit tests for it too ;-)
Back to the original question.
The following [code, taken from our POP3 Tutorial page](http://www.rebex.net/secure-mail.net/tutorial-pop3.aspx#downloading), and links should help you:
```
//
// create client, connect and log in
Pop3 client = new Pop3();
client.Connect("pop3.example.org");
client.Login("username", "password");
// get message list
Pop3MessageCollection list = client.GetMessageList();
if (list.Count == 0)
{
Console.WriteLine("There are no messages in the mailbox.");
}
else
{
// download the first message
MailMessage message = client.GetMailMessage(list[0].SequenceNumber);
...
}
client.Disconnect();
```
* [HOWTO: Download emails from a GMail account in C#](http://blog.rebex.net/news/archive/2007/05/14/howto-download-emails-from-gmail-account-in-csharp.aspx) (blogpost)
* [Rebex Mail for .NET (POP3/IMAP client component for .NET)](http://www.rebex.net/mail.net/)
* [Rebex Secure Mail for .NET (POP3/IMAP client component for .NET - SSL enabled)](http://www.rebex.net/secure-mail.net/)
|
Reading Email using Pop3 in C#
|
[
"",
"c#",
"unicode",
"pop3",
""
] |
I use Eclipse, Maven, and Java in my development. I use Maven to download dependencies (jar files and javadoc when available) and Maven's eclipse plug-in to generate the .project and .classpath files for Eclipse. When the dependency downloaded does not have attached javadoc I manually add a link for the javadoc in the .classpath file so that I can see the javadoc for the dependency in Eclipse. Then when I run Maven's eclipse plugin to regenerate the .classpath file it of course wipes out that change.
Is there a way to configure Maven's eclipse plug-in to automatically add classpath attributes for javadoc when running Maven's eclipse plug-in?
I'm only interested in answers where the javadoc and/or sources are not provided for the dependency in the maven repository, which is the case most often for me. Using downloadSources and/or downloadJavadocs properties won't help this problem.
|
You might consider just avoiding this problem completely by installing the javadoc jar into your local repository manually using the [install-file goal](http://maven.apache.org/plugins/maven-install-plugin/usage.html) and passing in the -Dclassifier=javadoc option. Once you do that the .classpath that mvn generates should be correct.
If you use a remote repo as a proxy to central you could also deploy the javadocs to that repo and then everyone else who uses that proxy will now get the javadocs automatically as well.
|
From the Maven Eclipse Plugin [FAQ](http://maven.apache.org/plugins/maven-eclipse-plugin/examples/attach-library-sources.html)
> The following example shows how to do
> this in the command-line:
>
> ```
> mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs=true
> ```
>
> or in your pom.xml:
>
> ```
> <project>
> [...]
> <build>
> [...]
> <plugins>
> [...]
> <plugin>
> <groupId>org.apache.maven.plugins</groupId>
> <artifactId>maven-eclipse-plugin</artifactId>
> <configuration>
> <downloadSources>true</downloadSources>
> <downloadJavadocs>true</downloadJavadocs>
> </configuration>
> </plugin>
> [...]
> </plugins>
> [...]
> </build>
> [...]
> </project>
> ```
|
How to reference javadocs to dependencies in Maven's eclipse plugin when javadoc not attached to dependency
|
[
"",
"java",
"eclipse",
"maven-2",
""
] |
I've got a sign up form that requires the user to enter their email and password, both are in two separate text boxes. I want to provide a button that the user can click so that the password (which is masked) will appear in a popup when the user clicks the button.
Currently my JavaScript code for this is as follows:
```
function toggleShowPassword() {
var button = $get('PASSWORD_TEXTBOX_ID');
var password;
if (button)
{
password = button.value;
alert(password);
button.value = password;
}
}
```
The problem is that every time the user clicks the button, the password is cleared in both Firefox and IE. I want them to be able to see their password in clear text to verify without having to retype their password.
My questions are:
1. Why does the password field keep getting reset with each button click?
2. How can I make it so the password field is NOT cleared once the user has seen his/her password in clear text?
|
I did a quick example up of a working version:
```
<html>
<head>
<script type="text/javascript" src="prototype.js"></script>
<script type="text/javascript">
function toggleShowPassword() {
var textBox = $('PasswordText');
if (textBox)
{
alert(textBox.value);
}
}
</script>
</head>
<body>
<input type="password" id="PasswordText" /><input type="button" onclick="toggleShowPassword();" value="Show Password" />
</body>
</html>
```
The key is that the input is of type button and not submit. I used the [prototype](http://prototypejs.org/) library for retrieving the element by ID.
|
I would assume that the browser has some issue with the script attempting to set the value of a password field:
```
button.value = password;
```
This line of code has no real purpose: it simply writes back the value that was just read, so `button.value` is unchanged by the preceding lines where you read it and use it in the `alert()`.
This should be a simpler version of your code:
```
function toggleShowPassword() {
    var button = $get('PASSWORD_TEXTBOX_ID');
    if (button)
    {
        alert(button.value);
    }
}
```
edit: actually I just did a quick test, and Firefox has no problem setting the password field's value with code such as `button.value = "blah"`. So it doesn't seem like this would be the case ... I would check if your ASP.NET code is causing a postback as others have suggested.
|
catching button clicks in javascript without server interaction
|
[
"",
"javascript",
"ajax",
""
] |
I would like to generate a random floating point number between 2 values. What is the best way to do this in C#?
|
The only thing I'd add to [Eric](https://stackoverflow.com/questions/44408/how-do-you-generate-a-random-number-in-c#44428)'s response is an explanation; I feel that knowledge of why code works is better than knowing what code works.
The explanation is this: let's say you want a number between 2.5 and 4.5. The range is 2.0 (4.5 - 2.5). `NextDouble` only returns a number between 0 and 1.0, but if you multiply this by the range you will get a number between 0 and *range*.
So, this would give us random doubles between 0.0 and 2.0:
```
rng.NextDouble() * 2.0
```
But, we want them between 2.5 and 4.5! How do we do this? Add the smallest number, 2.5:
```
2.5 + rng.NextDouble() * 2.0
```
Now, we get a number between 0.0 and 2.0; if you add 2.5 to each of these values we see that the range is now between 2.5 and 4.5.
At first I thought that it mattered if b > a or a > b, but if you work it out both ways you'll find it works out identically so long as you don't mess up the order of the variables used. I like to express it with longer variable names so I don't get mixed up:
```
double NextDouble(Random rng, double min, double max)
{
return min + (rng.NextDouble() * (max - min));
}
```
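The formula itself is language-agnostic; here is the same scaling, sketched in Python purely for illustration (the function and variable names are my own):

```python
import random

def next_double(rng, lo, hi):
    # Scale a [0, 1) uniform sample into [lo, hi): same math as the C# version.
    return lo + rng.random() * (hi - lo)

rng = random.Random(42)  # fixed seed so the sketch is reproducible
samples = [next_double(rng, 2.5, 4.5) for _ in range(10000)]
# Every sample falls inside [2.5, 4.5)
print(min(samples), max(samples))
```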
|
```
System.Random r = new System.Random();
double rnd( double a, double b )
{
return a + r.NextDouble()*(b-a);
}
```
|
How do you generate a random number in C#?
|
[
"",
"c#",
"floating-point",
"random",
""
] |
I have a web application developed with ASP.net and C# that is running on my companies' intranet. Because all the users for this application are all using Microsoft Outlook without exception, I would like for the the application to open up an Outlook message on the client-side. **I understand that Office is designed to be run on the desktop and not from a server, however I have no trouble creating a Word or Excel document on the client-side.**
I have code that instantiates the Outlook object using the Microsoft.Office.Interop.Outlook namespace and Outlook installed on the server. When I try to run the code from the server, I get a DCOM source error message that states "The machine-default permission settings do not grant Local Activation permission for the COM Server application with CLSID {000C101C-0000-0000-C000-000000000046} to the user This security permission can be modified using the Component Services administrative tool." I have modified the permissions using the Component Services tool, but still get this same error.
Is there a way to overcome this or is this a fruitless exercise because Outlook cannot be opened on the client side from the server-side code?
Mailto will not work due to the extreme length that the emails can obtain. Also, the user that sends it needs add in eye-candy to the text for the recipients.
|
You cannot open something on the client from server side code. You'd have to use script on the page to do what you're wanting (or something else client-side like ActiveX or embedded .NET or something)
Here's a sample Javascript that invokes an Outlook MailItem from an webpage. This could easily be injected into the page from your server-side code so it executes on the client.
<http://www.codeproject.com/KB/aspnet/EmailUsingJavascript.aspx>
|
(hint: formatting in your question)
I'm not understanding what's wrong with a mailto link or a formmail-type page.
|
Outlook Email via a Webpage
|
[
"",
"c#",
"asp.net",
"outlook",
"ms-office",
""
] |
I have a small to medium project that is in C++/CLI. I really hate the syntax extensions of C++/CLI and I would prefer to work in C#. Is there a tool that does a decent job of translating one to the other?
**EDIT:** When I said Managed c++ before I apparently meant c++/CLI
|
You can only translate Managed C++ code (and C++/CLI code) to C# if the C++ code is pure managed. If it is not -- i.e. if there is native code included in the sources -- tools like .NET Reflector won't be able to translate the code for you.
If you do have native C++ code mixed in, then I'd recommend trying to move the native code into a separate DLL, replace your calls to DLL functions by easily identifiable stub functions, compile your project as a pure .NET library, then use .NET reflector to de-compile into C# code. Then you can replace the calls to the stub functions by p-invoke calls to your native DLL.
Good luck! I feel for you!
|
.NET Managed C++ is like a train wreck. But have you looked into C++ CLI? I think Microsoft did a great job in this field to make C++ a first class .NET citizen.
<http://msdn.microsoft.com/en-us/magazine/cc163852.aspx>
|
Translate C++/CLI to C#
|
[
"",
"c#",
".net",
"c++-cli",
""
] |
Currently, I am working on a new version control system as part of a final year project at University. The idea is to make it highly adaptable and pluggable.
We're using the OSGi framework (Equinox implementation) to manage our plug ins. My problem is that I can't find a simple & easy to use method for testing OSGi bundles.
Currently, I have to build the bundle using Maven and then execute a test harness. I'm looking for something like the JUnit test runner for Eclipse, as it will save me a bunch of time.
Is there a quick and easy way to test OSGi bundles?
EDIT: I don't need something to test Eclipse plug ins or GUI components, just OSGi bundles.
EDIT2: Is there some framework that supports JUnit4?
|
Spring Dynamic Modules has excellent support for [testing OSGi bundles](http://static.springframework.org/osgi/docs/current/reference/html/testing.html).
|
More recently, you should have a look at Pax Exam:
<http://team.ops4j.org/wiki/display/paxexam/Pax+Exam>
This is the current effort at OPS4J related to testing.
|
Quick and easy way to test OSGi bundles
|
[
"",
"java",
"testing",
"osgi",
""
] |
I understand the difference between `String` and `StringBuilder` (`StringBuilder` being mutable) but is there a large performance difference between the two?
The program I’m working on has a lot of case driven string appends (500+). Is using `StringBuilder` a better choice?
|
Yes, the performance difference is significant. See the KB article "[How to improve string concatenation performance in Visual C#](http://support.microsoft.com/kb/306822)".
I have always tried to code for clarity first, and then optimize for performance later. That's much easier than doing it the other way around! However, having seen the enormous performance difference in my applications between the two, I now think about it a little more carefully.
Luckily, it's relatively straightforward to run performance analysis on your code to see where you're spending the time, and then to modify it to use `StringBuilder` where needed.
|
To clarify what Gillian said about 4 string, if you have something like this:
```
string a,b,c,d;
a = b + c + d;
```
then it would be faster using strings and the plus operator. This is because (like Java, as Eric points out) the compiler turns a single multi-part concatenation expression into one call to `String.Concat`, which allocates the final string only once.
However, if what you are doing is closer to:
```
string a,b,c,d;
a = a + b;
a = a + c;
a = a + d;
```
Then you need to explicitly use a StringBuilder. .Net doesn't automatically create a StringBuilder here, because it would be pointless. At the end of each line, "a" has to be an (immutable) string, so it would have to create and dispose a StringBuilder on each line. For speed, you'd need to use the same StringBuilder until you're done building:
```
string a,b,c,d;
StringBuilder e = new StringBuilder();
e.Append(b);
e.Append(c);
e.Append(d);
a = e.ToString();
```
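C# isn't special here: any language with immutable strings pays the same repeated-copy cost when appending in a loop. A quick Python sketch of the two approaches (repeated `+=` versus collecting parts and joining once, which is the builder idea), for illustration only:

```python
parts = ["b", "c", "d"]

# Repeated concatenation: each += copies the whole accumulated string.
a = ""
for p in parts:
    a += p

# Builder-style: collect the pieces, then copy once at the end.
b = "".join(parts)

assert a == b == "bcd"
```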
|
String vs. StringBuilder
|
[
"",
"c#",
".net",
"performance",
""
] |
I see a lot of example code for C# classes that does this:
```
public class Point {
public int x { get; set; }
public int y { get; set; }
}
```
Or, in older code, the same with an explicit private backing value and without the new auto-implemented properties:
```
public class Point {
private int _x;
private int _y;
public int x {
get { return _x; }
set { _x = value; }
}
public int y {
get { return _y; }
set { _y = value; }
}
}
```
My question is why. Is there any functional difference between doing the above and just making these members public fields, like below?
```
public class Point {
public int x;
public int y;
}
```
To be clear, I understand the value of getters and setters when you need to do some translation of the underlying data. But in cases where you're just passing the values through, it seems needlessly verbose.
|
I tend to agree (that it seems needlessly verbose), although this has been an issue our team hasn't yet resolved and so our coding standards still insist on verbose properties for all classes.
[Jeff Atwood](https://blog.codinghorror.com/properties-vs-public-variables/) dealt with this a few years ago. The most important point he retrospectively noted is that changing from a field to a property is a [breaking change](http://blogs.msdn.com/abhinaba/archive/2006/04/11/572694.aspx) in your code; anything that consumes it must be recompiled to work with the new class interface, so if anything outside of your control is consuming your class you might have problems.
|
It's also much simpler to change it to this later:
```
public int x { get; private set; }
```
|
Auto-implemented getters and setters vs. public fields
|
[
"",
"c#",
"oop",
"properties",
"field",
""
] |
We are attempting to use a SQL Server 2003 database for our test records and want a quick way to take NUnit and NAnt output and produce SQL schema and data. Is there a simple way to generate SQL Schema using the XSD file describing these XML documents?
|
You could use XSD. No, I'm serious. Go to a command prompt and type xsd and press Enter.
Here's what you will see (truncated):
```
I:\>xsd
Microsoft (R) Xml Schemas/DataTypes support utility
[Microsoft (R) .NET Framework, Version 1.0.3705.0]
Copyright (C) Microsoft Corporation 1998-2001. All rights reserved.
xsd.exe -
Utility to generate schema or class files from given source.
xsd.exe <schema>.xsd /classes|dataset [/e:] [/l:] [/n:] [/o:] [/uri:]
xsd.exe <assembly>.dll|.exe [/outputdir:] [/type: [...]]
xsd.exe <instance>.xml [/outputdir:]
xsd.exe <schema>.xdr [/outputdir:]
```
Just follow the instructions.
|
As XSD is ambiguous in terms of master-detail relations, I doubt an automatic generation is possible.
For example, a declaration such as
```
<xs:element name="foo" type="footype" minOccurs="0" maxOccurs="unbounded" />
```
can be interpreted as child table "foo" (1:n) or as an n:m relation.
minOccurs="0" maxOccurs="1" may be a nullable column, or an optional 1:1 relation.
type="xs:string" maxOccurs="1" is a string ((n)varchar) column, or an optional lookup; but type="xs:string" maxOccurs="unbounded" is a detail table with a (n)varchar column.
|
Automatically Generating SQL Schema from XML
|
[
"",
"sql",
"xml",
"unit-testing",
""
] |
Given a path such as `"mydir/myfile.txt"`, how do I find the file's absolute path in Python? E.g. on Windows, I might end up with:
```
"C:/example/cwd/mydir/myfile.txt"
```
|
```
>>> import os
>>> os.path.abspath("mydir/myfile.txt")
'C:/example/cwd/mydir/myfile.txt'
```
Also works if it is already an absolute path:
```
>>> import os
>>> os.path.abspath("C:/example/cwd/mydir/myfile.txt")
'C:/example/cwd/mydir/myfile.txt'
```
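For relative inputs, `abspath` behaves like joining the path onto the current working directory and then normalizing it, so these two spellings should agree:

```python
import os

path = "mydir/myfile.txt"
# abspath(path) is equivalent to normpath(join(os.getcwd(), path))
explicit = os.path.normpath(os.path.join(os.getcwd(), path))
assert os.path.abspath(path) == explicit
print(explicit)
```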
|
You could use the new Python 3.4 library `pathlib`. (You can also get it for Python 2.6 or 2.7 using `pip install pathlib`.) The authors [wrote](http://www.python.org/dev/peps/pep-0428/#abstract): "The aim of this library is to provide a simple hierarchy of classes to handle filesystem paths and the common operations users do over them."
To get an absolute path in Windows:
```
>>> from pathlib import Path
>>> p = Path("pythonw.exe").resolve()
>>> p
WindowsPath('C:/Python27/pythonw.exe')
>>> str(p)
'C:\\Python27\\pythonw.exe'
```
Or on UNIX:
```
>>> from pathlib import Path
>>> p = Path("python3.4").resolve()
>>> p
PosixPath('/opt/python3/bin/python3.4')
>>> str(p)
'/opt/python3/bin/python3.4'
```
Docs are here: <https://docs.python.org/3/library/pathlib.html>
|
How to get an absolute file path in Python
|
[
"",
"python",
"path",
"relative-path",
"absolute-path",
""
] |
I'm currently playing around with [HTML\_QuickForm](http://pear.php.net/package/HTML_QuickForm) for generating forms in PHP. It seems kind of limited in that it's hard to insert my own javascript or customizing the display and grouping of certain elements.
Are there any alternatives to QuickForm that might provide more flexibility?
|
If you find it hard to insert Javascript into the form elements, consider using a JavaScript framework such as [Prototype](http://www.prototypejs.org/) or [jQuery](http://jquery.com/). There, you can centralize the task of injecting event handling into form controls.
By that, I mean that you won't need to insert event handlers into the HTML form code. Instead, you register those events from somewhere else. For example, in Prototype you would be able to write something like this:
```
$('myFormControl').observe('click', myClickFunction)
```
Also have a look at the answers to [another question](https://stackoverflow.com/questions/34126/whats-the-best-way-to-add-event-in-javascript).
/EDIT: of course, you can also insert custom attributes and thus event handlers into the form elements using HTML\_QuickForm. However, the above way is superior.
|
I've found the [Zend\_Form](http://framework.zend.com/manual/en/zend.form.html) package of the [Zend Framework](http://framework.zend.com) to be particularly flexible. This component can also be used with [Zend\_Dojo](http://framework.zend.com/manual/en/zend.dojo.html) to rapidly implement common JavaScript form helpers. However, the component is agnostic when it comes to the library that you use, but supports Dojo natively. The component also allows for grouping, multi-page forms, custom decorators, validators and other features, making it very flexible.
|
Alternative to PHP QuickForm?
|
[
"",
"php",
""
] |
Given a small set of entities (say, 10 or fewer) to insert, delete, or update in an application, what is the best way to perform the necessary database operations? Should multiple queries be issued, one for each entity to be affected? Or should some sort of XML construct that can be parsed by the database engine be used, so that only one command needs to be issued?
I ask this because a common pattern at my current shop seems to be to format up an XML document containing all the changes, then send that string to the database to be processed by the database engine's XML functionality. However, using XML in this way seems rather cumbersome given the simple nature of the task to be performed.
|
It depends on how many you need to do, and how fast the operations need to run. If it's only a few, then doing them one at a time with whatever mechanism you have for doing single operations will work fine.
If you need to do thousands or more, and it needs to run quickly, you should re-use the connection and command, changing the arguments for the parameters to the query during each iteration. This will minimize resource usage. You don't want to re-create the connection and command for each operation.
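As a sketch of reusing one connection and one parameterized statement for many rows (shown here with Python's built-in `sqlite3` and a made-up table, purely for illustration):

```python
import sqlite3

rows = [("alpha", 1), ("beta", 2), ("gamma", 3)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (name TEXT, qty INTEGER)")

# One statement, executed once per row; only the parameters change.
conn.executemany("INSERT INTO items (name, qty) VALUES (?, ?)", rows)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM items").fetchone()[0]
print(count)  # 3
conn.close()
```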
|
You didn't mention what database you are using, but in SQL Server 2008, you can use table variables to pass complex data like this to a stored procedure. Parse it there and perform your operations. For more info, see Scott Allen's article on [ode to code](http://www.odetocode.com/articles/365.aspx).
|
What's the best way to insert/update/delete multiple records in a database from an application?
|
[
"",
"sql",
"database",
""
] |
Is there an XSLT library that is pure Python?
Installing libxml2+libxslt or any similar C libraries is a problem on some of the platforms I need to support.
I really only need basic XSLT support, and speed is not a major issue.
|
Unfortunately there are no pure-python XSLT processors at the moment. If you need something that is more platform independent, you may want to use a Java-based XSLT processor like [Saxon](http://saxon.sourceforge.net/). 4Suite is working on a pure-python XPath parser, but it doesn't look like a pure XSLT processor will be out for some time. Perhaps it would be best to use some of Python's functional capabilities to try and approximate the existing stylesheet or look into the feasibility of using Java instead.
|
I don't think you can do it in cpython: there are no pure python XSLT implementations.
But you can trivially do it in jython, using the inbuilt XSLT APIs of the JVM. I wrote a blog post for the specific case of doing it on Google AppEngine, but the code given should work under jython in any circumstances.
Transforming with XSLT on Google AppEngine and jython
<http://jython.xhaus.com/transforming-with-xslt-on-google-appengine-and-jython/>
HTH,
Alan.
|
Pure Python XSLT library
|
[
"",
"python",
"xml",
"xslt",
""
] |
I'm trying to convert a multipage color TIFF file to a CCITT Group 3 (CompressionCCITT3) compressed TIFF in C#. I realize that I need to make sure that all pixels are 1 bit. I have not found a useful example of this online.
|
You need this conversion as CCITT3 and CCITT4 don't support color (if I remember right).
|
Pimping disclaimer: I work for [Atalasoft](http://www.atalasoft.com), a company that makes .NET imaging software.
Using [dotImage](http://www.atalasoft.com/products/dotimage/documentimaging/default.aspx), this task becomes something like this:
```
FileSystemImageSource source = new FileSystemImageSource("path-to-your-file.tif", true); // true = loop over all frames
// tiff encoder will auto-select an appropriate compression - CCITT4 for 1 bit.
TiffEncoder encoder = new TiffEncoder();
encoder.Append = true;
// DynamicThresholdCommand is very good for documents. For pictures, use DitherCommand
DynamicThresholdCommand threshold = new DynamicThresholdCommand();
using (FileStream outstm = new FileStream("path-to-output.tif", FileMode.Create)) {
while (source.HasMoreImages()) {
AtalaImage image = source.AcquireNext();
AtalaImage finalImage = image;
// convert when needed.
if (image.PixelFormat != PixelFormat.Pixel1bppIndexed) {
finalImage = threshold.Apply(image).Image;
}
encoder.Save(outstm, finalImage, null);
if (finalImage != image) {
finalImage.Dispose();
}
source.Release(image);
}
}
```
The Bob Powell example is good, as far as it goes, but it has a number of problems, not the least of which is that it's using a simple threshold, which is terrific if you want speed and don't actually care what your output looks like, or if your input domain is such that it really is pretty much black and white already - just represented in color. Binarization is a tricky problem. When your task is to reduce the available information to 1/24th of what it was, how to keep the right information and throw away the rest is a challenge. DotImage has six different tools (IIRC) for binarization. SimpleThreshold is bottom of the barrel, from my point of view.
|
Compressing a TIF file
|
[
"",
"c#",
"tiff",
""
] |
My understanding of the MVC is as follows (incase it's horribly wrong, I am afterall new to it)
1. Models are the things that interface with the database
2. Views are the design/layout of the page
3. Controllers are where everything starts and are essentially the page logic
I'm using [CodeIgniter](http://codeigniter.com/) but I would hazard a guess it's not just limited to that or possibly even just to PHP frameworks.
Where do I put global classes?
I may have a model for Products and I then run a query that collects 20 products from the database. Do I now make 20 models or should I have a separate class for it, if the latter, where do I put this class (other controllers will need to use it too)
|
Model is the wrong word to use when discussing what to do with products: each product is a **value object (VO)** (or data transfer object/DTO, whichever term you prefer). Value objects generally have the same fields that a table contains. In your case ProductVO should have the fields that are in the Products table.
Model is a **Data Access Object (DAO)** that has methods like
```
findByPk --> returns a single value object
findAll --> returns a collection of value objects (0-n)
etc.
```
In your case you would have a ProductDAO that has something like the above methods. This ProductDAO would then return ProductVO's and collections of them.
Data Access Objects can also return **Business Objects (BO)** which may contain multiple VO's and additional methods that are business case specific.
Addendum:
In your controller you call a ProductDAO to find the products you want.
The returned ProductVO(s) are then passed to the view (as request attributes in Java). The view then loops through/displays the data from the productVO's.
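The pattern is language-neutral; here is a minimal sketch in Python, with a hypothetical in-memory table standing in for the real database (class and method names mirror the prose above but are my own):

```python
from dataclasses import dataclass

@dataclass
class ProductVO:
    """Value object: mirrors the columns of the Products table."""
    id: int
    name: str

class ProductDAO:
    """Data access object: the only place that knows how rows are fetched."""
    def __init__(self, table):
        self._table = table  # stand-in for a DB connection

    def find_by_pk(self, pk):
        name = self._table.get(pk)
        return ProductVO(pk, name) if name is not None else None

    def find_all(self):
        return [ProductVO(pk, name) for pk, name in sorted(self._table.items())]

dao = ProductDAO({1: "widget", 2: "gadget"})
print(dao.find_by_pk(1).name)  # widget
print(len(dao.find_all()))     # 2
```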
|
**Model** is part of your application where business logic happens. Model represents real life relations and dependencies between objects, like: Employee reports to a Manager, Manager supervises many Employees, Manager can assign Task to Employee, Task sends out notification when overdue. Model CAN and most often DO interface with database, but this is not a requirement.
**View** is basically everything that can be displayed or help in displaying. View contains templates, template objects, handles template composition and nesting, wraps with headers and footers, and produces output in one of well known formats (X/HTML, but also XML, RSS/Atom, CSV).
**Controller** is a translation layer that translates user actions to model operations. In other words, it tells model what to do and returns a response. Controller methods should be as small as possible and all business processing should be done in Model, and view logic processing should take place in View.
Now, back to your question. It really depends on whether you need a separate class for each product. In most cases, one class will suffice and 20 instances of it should be created. As products represent business logic, the class belongs in the Model part of your application.
|
MVC, where do the classes go?
|
[
"",
"php",
"model-view-controller",
"oop",
""
] |
I know there's some `JAVA_OPTS` to set to remotely debug a Java program.
What are they and what do they mean ?
|
I have [this article](http://www.eclipsezone.com/eclipse/forums/t53459.html) bookmarked on setting this up for Java 5 and below.
Basically [run it with](http://download.oracle.com/otn_hosted_doc/jdeveloper/904preview/jdk14doc/docs/tooldocs/windows/jdb.html):
```
-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=1044
```
For [Java 5 and above](http://javahowto.blogspot.ru/2010/09/java-agentlibjdwp-for-attaching.html), run it with:
```
-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=1044
```
If you want Java to **wait for you to connect** before executing the application, replace `suspend=n` with `suspend=y`.
|
Before Java 5.0, use `-Xdebug` and `-Xrunjdwp` arguments. These options will still work in later versions, but it will run in interpreted mode instead of JIT, which will be slower.
From Java 5.0, it is better to use the `-agentlib:jdwp` single option:
```
-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=1044
```
Options on `-Xrunjdwp` or `agentlib:jdwp` arguments are :
* `transport=dt_socket` : means the way used to connect to JVM (socket is a good choice, it can be used to debug a distant computer)
* `address=8000` : TCP/IP port exposed, to connect from the debugger (will only allow local connections, see Antony Shumskikh's answer for remote debugging)
* `suspend=y` : if 'y', tell the JVM to wait until debugger is attached to begin execution, otherwise (if 'n'), starts execution right away.
|
What are Java command line options to set to allow JVM to be remotely debugged?
|
[
"",
"java",
"debugging",
""
] |
Ok, do you do `Business.Name` or `Business.BusinessName`
`SubCategory.ID` or `SubCategory.SubCategoryID`
What about in your database?
Why?
I'm torn with both. Would love there to be a "right answer"
|
The main drawback of using ID, Name etc is that you have to qualify them with the table name if you are writing an SQL join which overlaps two tables.
Despite that, I find it far more concise and readable to just use ID and Name - your code and tables will 'flow' much more easily past the eyes. Easier to type and less redundant. And typing SELECT Business.Name FROM ... in an SQL query is not really more troublesome than typing SELECT BusinessName FROM ...
In general, if I find myself repeating semantic information it alerts me to look for ways to eliminate it or at least recognise why it repeats. This could be on the small scale (attribute names) or the large scale (behaviour patterns or common class structures).
|
The only "right" answer is to be consistent. Decide upfront which one you will be using in a project, and stick to it.
|
Arbitrary Naming Convention (Business Objects)
|
[
"",
"c#",
"oop",
"object",
"naming",
""
] |
How feasible would it be to compile Python (possibly via an intermediate C representation) into machine code?
Presumably it would need to link to a Python runtime library, and any parts of the Python standard library which were Python themselves would need to be compiled (and linked in) too.
Also, you would need to bundle the Python interpreter if you wanted to do dynamic evaluation of expressions, but perhaps a subset of Python that didn't allow this would still be useful.
Would it provide any speed and/or memory usage advantages? Presumably the startup time of the Python interpreter would be eliminated (although shared libraries would still need loading at startup).
|
Try [ShedSkin](https://shedskin.readthedocs.io/en/latest/) Python-to-C++ compiler, but it is far from perfect. Also there is Psyco - Python JIT if only speedup is needed. But IMHO this is not worth the effort. For speed-critical parts of code best solution would be to write them as C/C++ extensions.
|
As @Greg Hewgill says it, there are good reasons why this is not always possible. However, certain kinds of code (like very algorithmic code) can be turned into "real" machine code.
There are several options:
* Use [Psyco](http://psyco.sourceforge.net/), which emits machine code dynamically. You should choose carefully which methods/functions to convert, though.
* Use [Cython](http://cython.org/), which is a Python-*like* language that is compiled into a Python C extension
* Use [PyPy](http://pypy.org), which has a translator from RPython (a *restricted subset* of Python that does not support some of the most "dynamic" features of Python) to C or LLVM.
+ PyPy is still highly experimental
+ not all extensions will be present
After that, you can use one of the existing packages (freeze, Py2exe, PyInstaller) to put everything into one binary.
All in all: there is no general answer for your question. If you have Python code that is performance-critical, try to use as much builtin functionality as possible (or ask a "How do I make my Python code faster" question). If that doesn't help, try to identify the code and port it to C (or Cython) and use the extension.
|
Is it feasible to compile Python to machine code?
|
[
"",
"python",
"c",
"linker",
"compilation",
""
] |
I have created a webservice in .net 2.0, C#. I need to log some information to a file whenever different methods are called by the web service clients.
The problem comes when one user process is writing to a file and another process tries to write to it. I get the following error:
> The process cannot access the file because it is being used by another process.
The solutions that I have tried to implement in C# and failed are as below.
1. Implemented singleton class that contains code that writes to a file.
2. Used lock statement to wrap the code that writes to the file.
3. I have also tried to use open source logger log4net but it also is not a perfect solution.
4. I know about logging to system event logger, but I do not have that choice.
Is there a complete, reliable solution to this problem?
|
The locking is probably failing because your webservice is being run by more than one worker process.
You could protect the access with a named mutex, which is shared across processes, unlike the locks you get by using `lock(someobject) {...}`:
```
Mutex mutex = new Mutex(false, "mymutex");
mutex.WaitOne();
// access file
mutex.ReleaseMutex();
```
|
You don't say how your web service is hosted, so I'll assume it's in IIS. I don't think the file should be accessed by multiple processes unless your service runs in multiple application pools. Nevertheless, I guess you could get this error when multiple threads in one process are trying to write.
I think I'd go for the solution you suggest yourself, Pradeep, build a single object that does all the writing to the log file. Inside that object I'd have a Queue into which all data to be logged gets written. I'd have a separate thread reading from this queue and writing to the log file. In a thread-pooled hosting environment like IIS, it doesn't seem too nice to create another thread, but it's only one... Bear in mind that the in-memory queue will not survive IIS resets; you might lose some entries that are "in-flight" when the IIS process goes down.
Other alternatives certainly include using a separate process (such as a Service) to write to the file, but that has extra deployment overhead and IPC costs. If that doesn't work for you, go with the singleton.
|
Issue writing to single file in Web service in .NET
|
[
"",
"c#",
"web-services",
""
] |
I have a list of structs and I want to change one element. For example :
```
MyList.Add(new MyStruct("john"));
MyList.Add(new MyStruct("peter"));
```
Now I want to change one element:
```
MyList[1].Name = "bob";
```
However, whenever I try and do this I get the following error:
> Cannot modify the return value of
> ‘System.Collections.Generic.List.this[int]’ because it is not
> a variable
If I use a list of classes, the problem doesn't occur.
I guess the answer has to do with structs being a value type.
So, if I have a list of structs should I treat them as *read-only*? If I need to change elements in a list then I should use classes and not structs?
|
```
MyList[1] = new MyStruct("bob");
```
structs in C# should almost always be designed to be immutable (that is, have no way to change their internal state once they have been created).
In your case, what you want to do is to replace the entire struct in specified array index, not to try to change just a single property or field.
|
Not quite. Designing a type as class or struct shouldn't be driven by your need to store it in collections :) You should look at the 'semantics' needed
The problem you're seeing is due to value type semantics. Each value type variable/reference is a new instance. When you say
```
Struct obItem = MyList[1];
```
what happens is that a new instance of the struct is created and all members are copied one by one. So that you have a clone of MyList[1] i.e. 2 instances.
Now if you modify obItem, it doesn't affect the original.
```
obItem.Name = "Gishu"; // MyList[1].Name still remains "peter"
```
Now bear with me for 2 mins here (This takes a while to gulp down.. it did for me :)
If you really need structs to be stored in a collection and modified like you indicated in your question, you'll have to make your struct expose an interface (*However this will result in boxing*). You can then modify the actual struct via an interface reference, which refers to the boxed object.
The following code snippet illustrates what I just said above
```
public interface IMyStructModifier
{
String Name { set; }
}
public struct MyStruct : IMyStructModifier ...
List<Object> obList = new List<object>();
obList.Add(new MyStruct("ABC"));
obList.Add(new MyStruct("DEF"));
MyStruct temp = (MyStruct)obList[1];
temp.Name = "Gishu";
foreach (MyStruct s in obList) // => "ABC", "DEF"
{
Console.WriteLine(s.Name);
}
IMyStructModifier temp2 = obList[1] as IMyStructModifier;
temp2.Name = "Now Gishu";
foreach (MyStruct s in obList) // => "ABC", "Now Gishu"
{
Console.WriteLine(s.Name);
}
```
HTH. Good Question.
**Update:** @Hath - you had me running to check whether I had overlooked something that simple. (It would be inconsistent if setter properties didn't work and methods did - the .Net universe is still balanced :)
*Setter method doesn't work*
obList2[1] returns a copy whose state would be modified; the original struct in the list stays unmodified. So set-via-interface seems to be the only way to do it.
```
List<MyStruct> obList2 = new List<MyStruct>();
obList2.Add(new MyStruct("ABC"));
obList2.Add(new MyStruct("DEF"));
obList2[1].SetName("WTH");
foreach (MyStruct s in obList2) // => "ABC", "DEF"
{
Console.WriteLine(s.Name);
}
```
|
Changing the value of an element in a list of structs
|
[
"",
"c#",
"struct",
"value-type",
""
] |
I am trying to make a `JTable` that has column spans available. Specifically, I am looking to nest a `JTable` inside another `JTable`, and when the user clicks to view the nested table, it should expand to push down the rows below and fill the empty space. This is similar to what you see in MS Access where you can nest tables, and clicking the expand button on a row will show you the corresponding entries in the nested table.
If someone knows of a way to perform a column span with `JTable`, can you please point me in the right direction? Or if you know of an alternative way to do this, I am open to suggestions. The application is being built with Swing. Elements in the table, both high level and low level, have to be editable in any solution. Using nested `JTable`s this won't be a problem, and any other solution would have to take this into consideration as well.
|
As a pointer in the right direction, try [this article](http://www.swingwiki.org/howto:column_spanning) at SwingWiki that explains the TableUI method of column spanning quite well. Before this, I also tried some alternatives such as overriding the TableCellRenderer paint methods without much success.
|
You need to write your own TableUI for the master table. It can also be helpful to use your own TableModel to store additional data, such as whether a row is expanded, but this is optional.
I wrote a similar TableUI that expands a row and shows a text editor. In the TableUI you need to change the row height dynamically with table.setRowHeight(height). It is also necessary to copy some code from BasicTableUI, because you cannot access its private members.
|
JTable column spanning
|
[
"",
"java",
"swing",
"jtable",
""
] |
More than about LINQ to [insert your favorite provider here], this question is about searching or filtering in-memory collections.
I know LINQ (or searching/filtering extension methods) works on objects implementing `IEnumerable` or `IEnumerable<T>`. The question is: *because of the nature of enumeration, is every query's complexity at least **O(n)**?*
For example:
```
var result = list.FirstOrDefault(o => o.something > n);
```
In this case, every algorithm will take at least **O(n)** unless `list` is ordered with respect to `'something'`, in which case the search should take **O(log(n))**: it should be a binary search. However, if I understand correctly, this query will be resolved through enumeration, so it should take **O(n)**, even if `list` was previously ordered.
* Is there something I can do to solve a query in **O(log(n))**?
* If I want performance, should I use Array.Sort and Array.BinarySearch?
|
Even with parallelisation, it's still O(n). The constant factor would be different (depending on your number of cores) but as n varied the total time would still vary linearly.
Of course, you could write your own implementations of the various LINQ operators over your own data types, but they'd only be appropriate in very specific situations - you'd have to know for sure that the predicate only operated on the optimised aspects of the data. For instance, if you've got a list of people that's ordered by age, it's not going to help you with a query which tries to find someone with a particular name :)
To examine the predicate, you'd have to use expression trees instead of delegates, and life would become a lot harder.
I suspect I'd normally add new methods which make it obvious that you're using the indexed/ordered/whatever nature of the data type, and which will always work appropriately. You couldn't easily invoke those extra methods from query expressions, of course, but you can still use LINQ with dot notation.
|
Yes, the generic case is always O(n), as Sklivvz said.
However, many LINQ methods special case for when the object implementing IEnumerable actually implements e.g. ICollection. (I've seen this for IEnumerable.Contains at least.)
In practice this means that LINQ IEnumerable.Contains calls the fast HashSet.Contains for example if the IEnumerable actually is a HashSet.
```
IEnumerable<int> mySet = new HashSet<int>();
// calls the fast HashSet.Contains because HashSet implements ICollection.
if (mySet.Contains(10)) { /* code */ }
```
You can use reflector to check exactly how the LINQ methods are defined, that is how I figured this out.
Oh, and also LINQ contains methods IEnumerable.ToDictionary (maps key to single value) and IEnumerable.ToLookup (maps key to multiple values). This dictionary/lookup table can be created once and used many times, which can speed up some LINQ-intensive code by orders of magnitude.
|
In-memory LINQ performance
|
[
"",
"c#",
"linq",
"performance",
"complexity-theory",
""
] |
Under VS's external tools settings there is a "Use Output Window" check box that captures the tools command line output and dumps it to a VS tab.
The question is: *can I get the same processing for my program when I hit F5?*
**Edit:** FWIW I'm in C# but if that makes a difference to your answer then it's unlikely that your answer is what I'm looking for.
What I want would take the output stream of the program and transfer it to the output tab in VS using the same devices that output redirection ('|' and '>') uses in the cmd prompt.
|
I'm going to make a few assumptions here. First, I presume that you are talking about printf output from an application (whether it be from a console app or from a windows GUI app). My second assumption is the C language.
*To my knowledge, you cannot direct printf output to the output window in dev studio, not directly anyway.* [emphasis added by OP]
There might be a way but I'm not aware of it. One thing that you could do though would be to direct printf function calls to your own routine which will
1. call printf and print the string
2. call OuputDebugString() to print the string to the output window
You could do several things to accomplish this goal. First would be to write your own printf function and then call printf and the OuputDebugString()
```
// needs <stdio.h>, <stdarg.h> and (for OutputDebugString) <windows.h>
void my_printf(const char *format, ...)
{
    char buf[2048];
    va_list arglist;
    // get the arg list and format it into a string
    va_start(arglist, format);
    vsprintf_s(buf, sizeof(buf), format, arglist);
    va_end(arglist);
    fputs(buf, stdout);     // prints to the standard output stream
    OutputDebugString(buf); // prints to the output window
}
```
The code above is mostly untested, but it should get the concepts across.
If you are not doing this in C/C++, then this method won't work for you. :-)
|
You should be able to capture the output in a text file and use that.
I don't have a VS handy, so this is from memory:
1. Create a C++ project
2. Open the project settings, debugging tab
3. Enable managed debugging
4. Edit command line to add "`> output.txt`"
5. Run your program under the debugger
If things work the way I remember, this will redirect STDOUT to a file, even though you're not actually running under CMD.EXE.
(The debugger has its own implementation of redirection syntax, which is not 100% the same as cmd, but it's pretty good.)
Now, if you open this file in VS, you can still see the output from within VS, although not in exactly the same window you were hoping for.
|
Capture console output for debugging in VS?
|
[
"",
"c#",
"visual-studio",
"debugging",
""
] |
I'm looking to have two versions of BOOST compiled into a project at the same time. Ideally they should be usable along these lines:
```
boost_1_36_0::boost::shared_ptr<SomeClass> someClass(new SomeClass());
boost_1_35_0::boost::regex expression("[0-9]", boost_1_35_0::boost::regex_constants::basic);
```
|
I read (well scanned) through the [development list discussion](http://thread.gmane.org/gmane.comp.lib.boost.devel/180008). There's no easy solution. To sum up:
1. Wrapping header files in a namespace declaration
```
namespace boost_1_36_0 {
#include <boost_1_36_0/boost/regex.hpp>
}
namespace boost_1_35_0 {
#include <boost_1_35_0/boost/shared_ptr.hpp>
}
```
* Requires modifying source files
* Doesn't allow for both versions to be included in the same translation unit, due to the fact that macros do not respect namespaces.
2. Defining boost before including headers
```
#define boost boost_1_36_0
#include <boost_1_36_0/boost/regex.hpp>
#undef boost
#define boost boost_1_35_0
#include <boost_1_35_0/boost/shared_ptr.hpp>
#undef boost
```
* Source files can simply be compiled with `-Dboost=boost_1_36_0`
* Still doesn't address macro conflicts in a single translation unit.
* Some internal header file inclusions may be messed up, since this sort of thing does happen.
```
#if defined(SOME_CONDITION)
# define HEADER <boost/some/header.hpp>
#else
# define HEADER <boost/some/other/header.hpp>
#endif
```
But it may be easy enough to work around those cases.
3. Modifying the entire boost library to replace `namespace boost {..}` with `namespace boost_1_36_0 {...}` and then providing a namespace alias. Replace all `BOOST_XYZ` macros and their uses with `BOOST_1_36_0_XYZ` macros.
* This would likely work if you were willing to put into the effort.
|
Using [bcp](http://www.boost.org/doc/libs/1_55_0b1/tools/bcp/doc/html/index.html) you can install the boost library to a specific location and replace every `namespace boost` in its code with a custom alias. Assuming our alias is 'boost\_1\_36\_0', all `namespace boost` code blocks will become 'boost\_1\_36\_0'. Something like
```
bcp --namespace=boost_1_36_0 --namespace-alias shared_ptr regex /path/to/install
```
, but check the documentation in the link yourself, because I'm not sure this is the exact syntax.
|
How can I wrap BOOST in a separate namespace?
|
[
"",
"c++",
"boost",
"namespaces",
""
] |
I had assumed that the canonical form for operator+, assuming the existence of an overloaded operator+= member function, was like this:
```
const T operator+(const T& lhs, const T& rhs)
{
return T(lhs) += rhs;
}
```
But it was pointed out to me that this would also work:
```
const T operator+ (T lhs, const T& rhs)
{
return lhs += rhs;
}
```
In essence, this form transfers creation of the temporary from the body of the implementation to the function call.
It seems a little awkward to have different types for the two parameters, but is there anything wrong with the second form? Is there a reason to prefer one over the other?
|
With the edited question, the first form would be preferred. The compiler will more likely optimize the return value (you could verify this by placing a breakpoint in the constructor for T). The first form also takes both parameters as const, which would be more desirable.
Research the topic of **return value optimization**; this link is a quick example: <http://www.cs.cmu.edu/~gilpin/c++/performance.html>
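A minimal sketch of the pass-by-value form from the question (the type `Num` and function `num_sum` are illustrative names, not from the original): the copy of the left operand is made at the call site, and the compound operator does the work.

```cpp
#include <cassert>

// A small value type with the compound operator defined as a member.
struct Num {
    int v;
    Num& operator+=(const Num& rhs) { v += rhs.v; return *this; }
};

// The second form from the question: take lhs by value, so the
// temporary is created at the function call rather than in the body.
Num operator+(Num lhs, const Num& rhs) {
    lhs += rhs;
    return lhs;
}

int num_sum() {
    Num a{2}, b{3};
    return (a + b).v;  // a and b are left unchanged
}
```

Either form produces the same observable result; the question is only about where the copy happens and what the compiler can elide.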
|
I'm not sure if there is much difference in the generated code for either.
Between these two, I would (personally) prefer the first form since it better conveys the intention. This is with respect to both your reuse of the += operator and the idiom of passing templatized types by const&.
|
What is the best signature for overloaded arithmetic operators in C++?
|
[
"",
"c++",
"operator-keyword",
""
] |
Considering this code, can I be *absolutely sure* that the `finally` block always executes, no matter what `something()` is?
```
try {
something();
return success;
}
catch (Exception e) {
return failure;
}
finally {
System.out.println("I don't know if this will get printed out");
}
```
|
Yes, `finally` will be called after the execution of the `try` or `catch` code blocks.
The only times `finally` won't be called are:
1. If you invoke `System.exit()`
2. If you invoke `Runtime.getRuntime().halt(exitStatus)`
3. If the JVM crashes first
4. If the JVM reaches an infinite loop (or some other non-interruptable, non-terminating statement) in the `try` or `catch` block
5. If the OS forcibly terminates the JVM process; e.g., `kill -9 <pid>` on UNIX
6. If the host system dies; e.g., power failure, hardware error, OS panic, et cetera
7. If the `finally` block is going to be executed by a daemon thread and all other non-daemon threads exit before `finally` is called
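A small sketch of a related subtlety (class and method names are illustrative): `finally` runs after the `try` block has already computed its return value, so reassigning a local there does not change what the caller receives.

```java
class FinallyDemo {
    static int value() {
        int result = 1;
        try {
            return result;   // the value 1 is pinned here
        } finally {
            result = 2;      // executes, but the method still returns 1
        }
    }
}
```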
|
Example code:
```
public static void main(String[] args) {
System.out.println(Test.test());
}
public static int test() {
try {
return 0;
}
finally {
System.out.println("something is printed");
}
}
```
Output:
```
something is printed
0
```
|
Does a finally block always get executed in Java?
|
[
"",
"java",
"error-handling",
"return",
"try-catch-finally",
""
] |
I need to convert HTML documents into valid XML, preferably XHTML. What's the best way to do this? Does anybody know a toolkit/library/sample/...whatever that helps me to get that task done?
To be a bit more clear here, my application has to do the conversion automatically at runtime. I don't look for a tool that helps me to move some pages to XHTML manually.
|
[**Convert from HTML to XML with HTML Tidy**](http://www.ibm.com/developerworks/library/tiptidy.html)
[**Downloadable Binaries**](http://tidy.sourceforge.net/#binaries)
JRoppert, for your needs, I guess you might want to look at the [sources](http://sourceforge.net/cvs/?group_id=27659)
```
c:\temp>tidy -help
tidy [option...] [file...] [option...] [file...]
Utility to clean up and pretty print HTML/XHTML/XML
see http://tidy.sourceforge.net/
Options for HTML Tidy for Windows released on 14 February 2006:
File manipulation
-----------------
-output <file>, -o write output to the specified <file>
<file>
-config <file> set configuration options from the specified <file>
-file <file>, -f write errors to the specified <file>
<file>
-modify, -m modify the original input files
Processing directives
---------------------
-indent, -i indent element content
-wrap <column>, -w wrap text at the specified <column>. 0 is assumed if
<column> <column> is missing. When this option is omitted, the
default of the configuration option "wrap" applies.
-upper, -u force tags to upper case
-clean, -c replace FONT, NOBR and CENTER tags by CSS
-bare, -b strip out smart quotes and em dashes, etc.
-numeric, -n output numeric rather than named entities
-errors, -e only show errors
-quiet, -q suppress nonessential output
-omit omit optional end tags
-xml specify the input is well formed XML
-asxml, -asxhtml convert HTML to well formed XHTML
-ashtml force XHTML to well formed HTML
-access <level> do additional accessibility checks (<level> = 0, 1, 2, 3).
0 is assumed if <level> is missing.
Character encodings
-------------------
-raw output values above 127 without conversion to entities
-ascii use ISO-8859-1 for input, US-ASCII for output
-latin0 use ISO-8859-15 for input, US-ASCII for output
-latin1 use ISO-8859-1 for both input and output
-iso2022 use ISO-2022 for both input and output
-utf8 use UTF-8 for both input and output
-mac use MacRoman for input, US-ASCII for output
-win1252 use Windows-1252 for input, US-ASCII for output
-ibm858 use IBM-858 (CP850+Euro) for input, US-ASCII for output
-utf16le use UTF-16LE for both input and output
-utf16be use UTF-16BE for both input and output
-utf16 use UTF-16 for both input and output
-big5 use Big5 for both input and output
-shiftjis use Shift_JIS for both input and output
-language <lang> set the two-letter language code <lang> (for future use)
Miscellaneous
-------------
-version, -v show the version of Tidy
-help, -h, -? list the command line options
-xml-help list the command line options in XML format
-help-config list all configuration options
-xml-config list all configuration options in XML format
-show-config list the current configuration settings
Use --blah blarg for any configuration option "blah" with argument "blarg"
Input/Output default to stdin/stdout respectively
Single letter options apart from -f may be combined
as in: tidy -f errs.txt -imu foo.html
For further info on HTML see http://www.w3.org/MarkUp
```
|
You can use the [HTML Agility Pack](http://www.codeplex.com/htmlagilitypack). It's an open-source project on CodePlex.
|
How to convert HTML to XHTML?
|
[
"",
"c#",
".net",
"html",
"xhtml",
"converters",
""
] |
As far as I know, in gcc you can write something like:
```
#define DBGPRINT(fmt...) printf(fmt);
```
Is there a way to do that in VC++?
|
Yes but only since VC++ 2005. The syntax for your example would be:
```
#define DBGPRINT(fmt, ...) printf(fmt, __VA_ARGS__)
```
A full reference is [here](http://msdn.microsoft.com/en-us/library/ms177415(VS.80).aspx).
|
Yes, you can do this in Visual Studio C++ in versions 2005 and beyond (not sure about VS 2003). Take a look at `__VA_ARGS__`. You can basically do something like this:
```
#define DBGPRINTF(fmt, ...) printf(fmt, __VA_ARGS__)
```
and the variable arguments to the macro will get passed to the function provided as '...' args, where you can then use `va_arg` to parse them out.
There can be weird behavior with `__VA_ARGS__` and the use of macros. Because `__VA_ARGS__` is variable, there can be 0 arguments. That might leave you with trailing commas where you didn't intend them.
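A minimal C99-style sketch of such a macro (the names `DBGFMT` and `demo` are illustrative; this uses `snprintf` rather than `printf` so the result can be inspected). Note it assumes at least one argument follows the format string, which sidesteps the trailing-comma issue mentioned above.

```c
#include <stdio.h>
#include <string.h>
#include <assert.h>

/* C99-style variadic macro (also accepted by VC++ 2005+):
   forwards the format and its arguments to snprintf. */
#define DBGFMT(buf, size, fmt, ...) snprintf((buf), (size), (fmt), __VA_ARGS__)

int demo(char *buf, size_t size) {
    return DBGFMT(buf, size, "%s=%d", "x", 42);
}
```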
|
is there a way to write macros with a variable argument list in visual C++?
|
[
"",
"c++",
"visual-c++",
"c-preprocessor",
"variadic-macros",
""
] |
I happened upon a brief discussion about C# runtime compilation on another site recently while searching for something else, and thought the idea was interesting. Have you ever used this? I'm trying to determine how/when one might use this and what problem it solves. I'd be very interested in hearing how you've used it or in what context it makes sense.
Thanks much.
|
Typically, I see this used in cases where you are currently using Reflection and need to optimize for performance.
For example, instead of using reflection to call method X, you generate a Dynamic Method at runtime to do this for you.
|
You can use this to add scripting support to your application. For examples look [here](http://www.codeproject.com/KB/cs/cs-script_for_cp.aspx) or [here](http://www.vsj.co.uk/articles/display.asp?id=417).
It is quite easily possible to publish parts of your internal object framework to the scripting part, so you could with relative ease add something to your application that has the same effect as for example VBA for Office.
|
Where would you use C# Runtime Compilation?
|
[
"",
"c#",
"optimization",
"runtime-compilation",
""
] |
For a new project that I'm doing in PHP I've created an SQLMethods class to connect to the database and perform queries. Tonight was the first night that I actually got to test it (I wrote it a week or so ago and forgot about it) and an unexpected error occurred: when it was calling my ExecuteQuery() function, it wouldn't use the database I selected in the constructor.
The constructor:
```
public function SQLMethods() {
$SQLConnection = mysql_connect($SQLDBAddress, $SQLUserName, $SQLPassword);
if (!$SQLConnection) {
die('Could not connect: ' . mysql_error());
}
mysql_select_db($SQLDB, $SQLConnection);
}
```
The function in question:
```
public function ExecuteQuery($Query) {
mysql_query($Query, $SQLConnection) or die('Could not perform query: ' . mysql_error());
}
```
Does anyone see what the issue might be? Does the connection close after the constructor completes?
|
you should declare $SQLConnection in your class, and you should refer to it as
```
$this->SQLConnection
```
and not simply $SQLConnection.
|
`$SQLConnection` doesn't exist within the `ExecuteQuery` method.
You can either pass it directly as a parameter to `ExecuteQuery`, or add an `sqlConnection` class property that is set in the constructor and accessed as `$this->sqlConnection` inside your class methods.
|
PHP Class Database Connection Scope Issue
|
[
"",
"php",
"mysql",
"class",
"scope",
"code-reuse",
""
] |