| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
In my asp.net mvc application I created the following link:
```
http://localhost:2689/en/Formula.mvc/351702++LYS+GRONN+5G+9%252f2++fds
```
I get error 400 (bad request).
I think it blocks at the %252f (the encoded forward slash).
What am I doing wrong?
**--EDIT 3--**
I tried not encoding anything at all, relying instead on the default encoding of Url.RouteUrl().
It seems that this doesn't encode the "/" for some reason.
If I encode it myself first, I end up with the double-encoded %252f. This gives me a bad request for some reason.
Why?!
**--EDIT 2--**
I generated the last part of the URI as follows:
1. Take the id.toString
2. Take the HttpUtility.UrlEncode(name)
3. Take the HttpUtility.UrlEncode(code)
4. String.Format("{0}--{1}--{2}") with the values from the previous parts
5. Add it as a parameter to Url.RouteUrl()
After that my action gets this parameter again, splits it at -- and HttpUtility.UrlDecode()s the values back.
I do it this way because the two last parameters are optional, but functional, parameters. If they are defined in a previous step, they have to be carried along to the other pages.
Less abstract: A color can have multiple names, but if a user selected it by a particular name, it should be kept throughout all the other pages.
**--EDIT 1--**
It also looks like HttpUtility.UrlEncode() and Url.Encode() return different results :S
If I don't encode the "/", it acts as a separator => no luck there.
If I encode it with Url.Encode() I end up with %2F => Code 400
If I encode it with HttpUtility.UrlEncode() I end up with %25 => code 400
Because 400 doesn't even let it through to asp.net-mvc, the route debugger is of no use :( | I was there a couple of days ago. If you can accept unreadable route-values in the URL try this:
[URL-encoded slash in URL](https://stackoverflow.com/questions/591694/url-encoded-slash-in-url) | `%25` is actually encoded "%", so `%252f` is encoded "%2f".
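The layering is easy to demonstrate with any URL-encoding helper (Python shown purely for brevity; `HttpUtility.UrlEncode` behaves the same way):

```python
from urllib.parse import quote, unquote

once = quote("9/2", safe="")   # '9%2F2'   : the slash encoded once
twice = quote(once, safe="")   # '9%252F2' : the '%' itself gets encoded
assert unquote(twice) == once  # one decode peels exactly one layer
assert unquote(unquote(twice)) == "9/2"
```

So a single decode on the server still leaves `%2F` in the path, and that raw `%2F` is what the web server rejects with a 400 unless explicitly allowed.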
`%2f` (encoded "/") is not allowed in URL unless you explicitly allow it in webserver's configuration. | What's wrong with my url encoding? | [
"c#",
".net",
"asp.net-mvc",
"encoding"
] |
Is there some UML tool where I can draw my database design and generate SQL scripts for creating my database? | I can recommend Enterprise Architect from <http://www.sparxsystems.com.au/> | [Sparx Enterprise Architect](http://www.sparxsystems.com/) can do that. It can also reverse-engineer existing databases.
Of course, it does a lot more than this as well. | UML to SQL tool | [
"sql",
"uml"
] |
Have a look at this code:
```
#include <iostream>
using namespace std;
int main()
{
const char* str0 = "Watchmen";
const char* str1 = "Watchmen";
char* str2 = "Watchmen";
char* str3 = "Watchmen";
cerr << static_cast<void*>( const_cast<char*>( str0 ) ) << endl;
cerr << static_cast<void*>( const_cast<char*>( str1 ) ) << endl;
cerr << static_cast<void*>( str2 ) << endl;
cerr << static_cast<void*>( str3 ) << endl;
return 0;
}
```
Which produces an output like this:
```
0x443000
0x443000
0x443000
0x443000
```
This was on the *g++* compiler running under *Cygwin*. The pointers all point to the same location even with no optimization turned on (`-O0`).
Does the compiler always optimize so much that it searches all the string constants to see if they are equal? Can this behaviour be relied on? | It's an extremely easy optimization, probably so much so that most compiler writers don't even consider it much of an optimization at all. Setting the optimization flag to the lowest level doesn't mean "Be completely naive," after all.
Compilers will vary in how aggressive they are at merging duplicate string literals. They might limit themselves to a single subroutine — put those four declarations in different functions instead of a single function, and you might see different results. Others might do an entire compilation unit. Others might rely on the linker to do further merging among multiple compilation units.
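The practical takeaway: compare string contents, never literal addresses. A minimal sketch (hypothetical helper name):

```cpp
#include <cassert>
#include <cstring>

// Whether two identical literals share one address is unspecified: the
// compiler may pool them, but nothing requires it. Comparing contents
// is always well-defined.
bool sameContents(const char* x, const char* y) {
    return std::strcmp(x, y) == 0;
}
```

`sameContents("Watchmen", "Watchmen")` is guaranteed true on every compiler, while `"Watchmen" == "Watchmen"` (a pointer comparison) may come out either way.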
You can't rely on this behavior, unless your particular compiler's documentation says you can. The language itself makes no demands in this regard. I'd be wary about relying on it in my own code, even if portability weren't a concern, because behavior is liable to change even between different versions of a single vendor's compiler. | It can't be relied on, it is an optimization which is not a part of any standard.
I'd changed corresponding lines of your code to:
```
const char* str0 = "Watchmen";
const char* str1 = "atchmen";
char* str2 = "tchmen";
char* str3 = "chmen";
```
The output for the -O0 optimization level is:
```
0x8048830
0x8048839
0x8048841
0x8048848
```
But for the -O1 it's:
```
0x80487c0
0x80487c1
0x80487c2
0x80487c3
```
As you can see, GCC (v4.1.2) reused the first string for all subsequent ones, since each is a suffix of "Watchmen". It's the compiler's choice how to arrange string constants in memory. | C/C++: Optimization of pointers to string constants | [
"c++",
"c",
"optimization",
"string",
"constants"
] |
I am creating a WCF Service that will be consumed by both .NET and Java client applications.
We do not have any Java experience in the team, so are looking for guidelines or rules to follow to ensure that we do not accidentally include any types in our WCF Service interface or do anything else that would preclude it from being consumed by a Java client application.
Are our worries well-founded? If so, what should we be wary of?
## Edit
One example of a concern is whether a .NET `DateTime` value is represented in the service interface in a manner that can be correctly understood by a Java client.
## Edit2
A second example of a concern is the use of any nullable value types (`bool?`, `int?` etc).
## Edit3
At present some of our development teams are hand-writing .xsd files to define the various objects that the WCF interface methods will take as arguments and return as return values. They are then using xsd.exe to auto-generate C# classes from these.
The rationale behind this is that it guarantees that the generated classes won't contain anything that is .NET-specific.
The downside is that this adds a development burden and also precludes us from documenting these classes using `<summary>` tags (.NET equivalent of javadoc comments). | The recommendation to start with XSD is a good one. That will not guarantee compatibility on each side, as XML Schema is really big and no web services stack supports all of it. (Example: lists).
So, start with XSD, but confine yourself to mainstream types. Primitives, complextypes composed of primitives, arrays of same. You can safely nest complextypes and arrays. (arrays of complextypes, complextypes that contain arrays or complextypes, etc).
Stay away from restrictions, substitution groups, lists, derivations, and any other XSD esoterica. Even XSD enumerations should be avoided.
About dateTime:
It's not enough to use a nullable datetime. There are formatting concerns as well. The .NET DateTime is a higher resolution quantity than a Java Calendar and as a result, shipping a .NET time to Java can result in de-serialization exceptions on the Java side. (**EDIT:** using the DataType="dateTime" decorator in the XmlElement attribute on the .NET side can make sure you serialize properly)
[Some old advice](http://blogs.msdn.com/dotnetinterop/archive/2005/12/06/interop-and-datetime-formatting.aspx) on that.
Finally, it is not true that you cannot use in-code XML doc on the classes that get generated. With C#'s partial classes, you can write separate code from the generated classes with the in-code doc you want. Even if you re-gen the code, your partial class code will remain unchanged. **EDIT:** When you compile, the doc will appear on the classes.
**EDIT:** Someone asked, if using XSD-first is not enough to guarantee interop, why use it? My answer: it is not a guarantee but it is a good step, it helps. It keeps you away from designing interfaces in code (either Java or C# or VB, etc) that expose platform-specific things like .NET DataSets, generic Dictionaries, Java ResultSets, etc, all of which present interop problems. There are still pitfalls in the more marginal parts of XSD, but you can usually avoid those with thoughtful design.
I should have mentioned in my original answer that you can apply an iterative process to the development of the interface. Design in XSD, then generate (client) stub and (server) skeleton code from the XSD+WSDL, then adjust and do it again. | Using [basicHttpBinding](http://msdn.microsoft.com/en-us/library/ms731361.aspx) endpoint will guarantee that any SOAP 1.1 compatible client will be able to consume your service. | Guaranteeing that a .NET WCF Service can be consumed by a Java client | [
"java",
".net",
"wcf"
] |
I need to get the data from a DateField calendar, display it as a string, and later store it in a RecordStore. I tried the toString() method but I got an error at runtime.
```
StartDate = new DateField("Start Date ", DateField.DATE);
StartDate.setDate(Calendar.getInstance().getTime());
```
I now have this code
```
public static String dateToString (long date)
{
Calendar c = Calendar.getInstance() ;
c.setTime(new Date(date));
int y = c.get(Calendar.YEAR);
int m = c.get(Calendar.MONTH) + 1;
int d = c.get(Calendar.DATE);
String t = (y < 10 ? "0" : "") + y + "-" + (m < 10 ? "0" : "") + m + "-" + (d < 10 ? "0" : "") + d;
return t;
}
```
How can I change this so that it gets the date from StartDate and produces that t?
If anyone could help! | There are some [sample implementations](http://www.velocityreviews.com/forums/t126578-p2-convert-date-to-string-with-j2me.html) that may help you get started, since it appears your runtime doesn't provide a useful `toString` implementation. | Does [this](http://www.roseindia.net/j2me/date-string.shtml) solve your problem?
> ```
> ...
> date = new Date();
> currentDate = new DateField("", DateField.DATE_TIME);
> currentDate.setDate(date);
> dateinstring = date.toString();
> System.out.println("Date And Time Is In String Format: "+dateinstring);
> ...
> ``` | Does anyone know how to convert a datefield to a string in java j2me? | [
"java",
"java-me"
] |
The Google AppEngine [development server](http://code.google.com/appengine/docs/java/tools/devserver.html) simulates Google's backend database with an embedded database. It creates a file on your local disk that you can delete to clear the DB.
Is there a tool that allows to browse and edit this data, similar to the Data Viewer that you get for deployed applications?
**Clarification**: I am asking about the Java toolkit, not the Python one. | Finally, data viewer comes with Google App Engine Java SDK version 1.2.2.
It can be accessed at:
<http://localhost:8080/_ah/admin>
or
<http://localhost:8888/_ah/admin>
The same link as in Python version. | On my system (sdk 1.2.8 I think) the dev server runs on port 8888. So the data viewer is at <http://localhost:8888/_ah/admin> | Data Viewer for AppEngine development server | [
"java",
"database",
"google-app-engine",
"testing"
] |
I'm going through the "Head First C#" book and in one of the chapters I created a program that uses variables declared as ints and decimals. Visual Studio got cranky with me a couple of times about mixing and matching the two. For example:
```
dinnerParty.NumberOfPeople = (int) numericUpDown1.Value;
```
NumberOfPeople is declared as an int and apparently numeric updowns are decimals.
Also, the book puts an M after some numbers when adding them together. For example:
```
public void SetHealthyOption(bool healthy)
{
if (healthy)
{
CostOfBeveragesPerPerson = 5.00M;
}
else
{
CostOfBeveragesPerPerson = 20.00M;
}
}
```
CostOfBeveragesPerPerson is declared as a decimal.
So I have two specific questions:
1) How can you know when you need to cast something? I'm sure there is quite a bit to casting... can anyone provide some good links to learn about casting?
2) What does the M after the numbers do?
*EDIT*
So the M denotes that the number is a decimal and not a double. Why not just cast the number as a decimal like: (decimal) 50.00? And what is that "function" called? If I wanted to see what "letters" were available what would I google? | 1. Explicit casts are generally needed when there's a loss of precision between the two types. For example, if you had an int and assigned it to a long, no cast is necessary since long can hold all values that an int can. If you were assigning a long to an int, however, a cast would be required as int can hold fewer values than a long can, which can lead to data loss.
2. The M defines the number as a Decimal type. If you omit this, the number is interpreted as a double. | ```
Type Suffix Example
uint U or u 100U
long L or l 100L
ulong UL or ul 100UL
float F or f 123.45F
decimal M or m 123.45M
```
There's a lot of pages that explain [C# numeric literals.](http://www.blackwasp.co.uk/CSharpNumericLiterals.aspx) The letter at the end is not a cast or any kind of function. It is syntax showing that what you are writing represents a value of a particular type. So writing (decimal) 5.0 uses a cast, but writing 5.0m does not. | decimals, ints, casting... oh my! | [
"c#",
".net",
"casting",
"integer",
"decimal"
] |
I'm using the following code to query my database:
```
private const int PAGE_SIZE = 10;
public static IList<Image> GetTopImagesForUser(String connectionString, int userID, int page)
{
dbDataContext db = new dbDataContext(connectionString);
var images = (from p in db.Images
where (p.SubmitterUserIndex == userID &&
p.URL != "none" &&
p.ThumbURL != "none")
orderby p.Rep descending
select p).Skip(page * PAGE_SIZE).Take(PAGE_SIZE);
/* snip */
return topImages;
}
```
If I call this code with a **page** of 0, everything works the way I want it to - I get a nicely ordered list, 10 results, everything is correct.
If I call this code with a **page** of 1, however, rows that were in page 0 end up in page 1. I can't even begin to understand why. I've checked my database for duplicate rows, none. I've checked to make sure every row's **URL** and **ThumbURL** are not "none". That's not the problem either. I've checked to make sure **page** is what I expect it to be when I call this method, and it is always what I expect it to be.
What really baffles me is that the following method, which differs from the first method only in the **orderby** clause, *works completely as expected*.
```
public static IList<Image> GetAllImagesForUser(String connectionString, int userID, int page)
{
dbDataContext db = new dbDataContext(connectionString);
var images = (from p in db.Images
where (p.SubmitterUserIndex == userID &&
p.URL != "none" &&
p.ThumbURL != "none")
orderby p.SubmitTime descending
select p).Skip(page * PAGE_SIZE).Take(PAGE_SIZE);
/* snip */
return allImages;
}
```
Has anyone run into something like this? Is there a different form that my query should take to do what I want it to do? I'm not sure what I could be missing. | When you say you don't have any "duplicate rows" - do any of the returned rows have the same Rep value? If so, the ordering within that will be indeterminate.
Try
```
orderby p.Rep descending, p.SubmitTime
```
(or some other ordering which will be completely determined) to get a more predictable result.
Also, is this LINQ to SQL? If so, you should look at the generated SQL to see what's happening. | I have been baffled by similar problems in paging when not using LINQ, but just using a SELECT query (in a sproc). I think that the issue may be that the Rep field is not unique, but the SubmitTime field, being a date, may be unique.
I would suggest that you try a second field after Rep to make the Order By always return the results in the same order. It can be any field that will make the records unique; you don't have to be using it in the results. | Duplicate rows when using orderby, Skip() and Take() with LINQ | [
"c#",
"linq",
"c#-3.0"
] |
Using Java Reflection, is it possible to get the name of a local variable? For example, if I have this:
```
Foo b = new Foo();
Foo a = new Foo();
Foo r = new Foo();
```
is it possible to implement a method that can find the names of those variables, like so:
```
public void baz(Foo... foos)
{
for (Foo foo: foos) {
// Print the name of each foo - b, a, and r
System.out.println(***);
}
}
```
*EDIT: This question is different from [Is there a way in Java to find the name of the variable that was passed to a function?](https://stackoverflow.com/questions/9984550/is-there-a-way-in-java-to-find-the-name-of-the-variable-that-was-passed-to-a-fun) in that it more purely asks the question about whether one can use reflection to determine the name of a local variable, whereas the other question (including the accepted answer) is more focused on testing values of variables.* | As of Java 8, some local variable name information is available through reflection. See the *"Update"* section below.
Complete information is often stored in class files. One compile-time optimization is to remove it, saving space (and providing some obfuscation). However, when it is present, each method has a local variable table attribute that lists the type and name of local variables, and the range of instructions where they are in scope.
Perhaps a byte-code engineering library like [ASM](http://asm.ow2.org/) would allow you to inspect this information at runtime. The only reasonable place I can think of for needing this information is in a development tool, and so byte-code engineering is likely to be useful for other purposes too.
---
*Update:* Limited support for this was [added to Java 8.](http://openjdk.java.net/jeps/118) Parameter (a special class of local variable) names are now available via reflection. Among other purposes, this can help to replace `@ParameterName` annotations used by dependency injection containers. | It is *not* possible at all. Variable names aren't communicated within Java (and might also be removed due to compiler optimizations).
**EDIT (related to comments):**
If you step back from the idea of having to use it as function parameters, here's an alternative (which I wouldn't use - see below):
```
public void printFieldNames(Object obj, Foo... foos) {
List<Foo> fooList = Arrays.asList(foos);
for(Field field : obj.getClass().getFields()) {
if(fooList.contains(field.get(obj))) {
System.out.println(field.getName());
}
}
}
```
There will be issues if `a == b, a == r, or b == r` or there are other fields which have the same references.
**EDIT now unnecessary since question got clarified** | Java Reflection: How to get the name of a variable? | [
"java",
"reflection"
] |
I have an array like the following
Array ( [0] => "txt1" [1] => "txt2" [2] => "txt3")
I have another array like it, but with different content:
Array ( [0] => on [2] => on)
The aim is to get a final array with the keys of the second and the content of the first; it's like merging them.
So the final result is: Array ( [0] => "txt1" [2] => "txt3" ). It would be better to renumber the keys to 0 and 1, but that's a trivial issue; let's focus on merging them one to one. | The easiest way to do this is with `array_intersect_key` ([See the PHP Docs](http://ca.php.net/manual/en/function.array-intersect-key.php)). It grabs the values from the first array passed corresponding to the keys present in all other arrays passed.
So, your example would look like this:
```
$a = array(0 => "txt1", 1 => "txt2", 2 => "txt3");
$b = array(0 => 1, 2 => 1);
$c = array_intersect_key($a, $b);
print_r($c);
```
prints:
```
Array
(
[0] => txt1
[2] => txt3
)
``` | > [`array_combine`](http://www.php.net/array_combine) - Creates an array by using one array for keys and another for its values | Merge two arrays (key and content) in PHP | [
"php",
"arrays",
"merge",
"key"
] |
What's the best way to prevent javascript injections in a VB.NET Web Application? Is there some way of disabling javascript on the pageload event?
Recently, part of the security plan for our vb.net product was to simply disable buttons on the page that weren't available to the specific user. However, I informed the guy who thought of the idea that typing
`javascript:alert(document.getElementById("Button1").disabled="")`
in the address bar would re-enable the button. I'm sure that someone else has ran into issues like this before, so any help is appreciated. Thanks!
**Update:**
Aside from validating user input, how can I protect the website from being toyed with from the address bar? | Any changes you make to the client-side behavior of your application are superficial and not secure at all. You should not rely upon these. They are nice to stop 99% of users, but it is trivially easy to bypass them. You should be checking whether a user has the right privileges for the action on the server side when the action is called, so that if someone did decide to re-enable the button themselves they would not be able to do whatever the button is meant to do. You have no control over what someone can do to the page with javascript, so you should never trust anything coming from the client.
**Response to update:** You can't in any practical way, which is exactly what the problem is. Once the website is in their browser, it's a free-for-all and they can have a go at it. Which is why your program should validate everything server side, every time. | The most important item to consider is HTML-encoding the user input. If the user enters `<script>`, it'll get converted to `&lt;script&gt;`, etc.
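A minimal sketch of such an encoder (JavaScript used for brevity; in ASP.NET you would normally call `Server.HtmlEncode` rather than roll your own):

```javascript
// Neutralize markup in untrusted input before writing it into a page.
// The ampersand must be handled first, or the entities produced by the
// later replacements would themselves be re-encoded.
function htmlEncode(s) {
  return String(s)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}
```

With this, `htmlEncode('<script>')` yields `&lt;script&gt;`, which the browser renders as text instead of executing.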
**Update:** If expecting input from the url / querystring, validate the data with extreme measures. If possible white list the data received. When white listed, you're ensuring only what you deem correct and safe is a viable submission.
Never trust the users' input. | Preventing JavaScript Injections | [
"javascript",
"vb.net",
"security",
"javascript-injection"
] |
I'm using jquery to call a webservice which returns a dataset with a couple of tables in it.
This was working ok until i needed to set up my webmethod to accept a parameter. I reflected this on the client side with
```
data: "{paramname:'" + paramval+ "'}",
```
I now get the following error when the webmethod returns. This happens regardless of whats being returned in the dataset
> Error:{"Message":"A circular reference was detected while serializing
> an object of type
> \u0027System.Globalization.CultureInfo\u0027.","StackTrace":" at
> System.Web.Script.Serialization.JavaScriptSerializer.SerializeValueInternal(Object
> o, StringBuilder sb, Int32 depth, Hashtable objectsInUse,
> SerializationFormat serializationFormat)\r\n at ...etc
When the webmethod has no parameters the client side js looks the same as below except the data: line is removed.
```
function ClientWebService(paramval){
$.ajax({
type: "POST",
url: "WebService1.asmx/webmethodName",
data: "{paramname:'" + paramval+ "'}",
contentType: "application/json; charset=utf-8",
dataType: "json",
success: function(msg) {
ParseResult(msg.d);
},
error: function(err) {
if (err.status == 200) {
ParseResult(err);
}
else { alert('Error:' + err.responseText + ' Status: ' + err.status); }
}
    });
}
```
Edit: As per suggestion to change the request to
```
data: {paramname: paramval},
```
yields the following error.
> Error:{"Message":"Invalid JSON primitive: paramval.","StackTrace":"
> at
> System.Web.Script.Serialization.JavaScriptObjectDeserializer.DeserializePrimitiveObject()\r\n
> at
> System.Web.Script.Serialization.JavaScriptObjectDeserializer.DeserializeInternal(Int32
> depth)\r\n at
> System.Web.Script.Serialization.JavaScriptObjectDeserializer.BasicDeserialize(String
> input, Int32 depthLimit, JavaScriptSerializer serializer)\r\n at
> System.Web.Script.Serialization.JavaScriptSerializer.Deserialize(JavaScriptSerializer
> serializer, String input, Type type, Int32 depthLimit)\r\n at
> System.Web.Script.Serialization.JavaScriptSerializer.Deserialize[T](String
> input)\r\n at
> System.Web.Script.Services.RestHandler.GetRawParamsFromPostRequest(HttpContext
> context, JavaScriptSerializer serializer)\r\n at
> System.Web.Script.Services.RestHandler.GetRawParams(WebServiceMethodData
> methodData, HttpContext context)\r\n at
> System.Web.Script.Services.RestHandler.ExecuteWebServiceCall(HttpContext
> context, WebServiceMethodData
> methodData)","ExceptionType":"System.ArgumentException"} Status: 500 | I changed my webmethod to return
```
ds.GetXml();
```
and this worked. Considering datasets are serializable, I'm not sure why I have to do this, but it gets me over this hurdle. | I had the exact same problem.
I removed the `virtual` keyword from my entities, which is what enables lazy loading of objects.
That solved the problem! | A circular reference was detected while serializing an object of type System.Globalization.CultureInfo | [
"c#",
"jquery",
"web-services",
"json"
] |
I need to emulate a do-while loop in a Python program. Unfortunately, the following straightforward code does not work:
```
list_of_ints = [ 1, 2, 3 ]
iterator = list_of_ints.__iter__()
element = None
while True:
if element:
print element
try:
element = iterator.next()
except StopIteration:
break
print "done"
```
Instead of "1,2,3,done", it prints the following output:
```
[stdout:]1
[stdout:]2
[stdout:]3
None['Traceback (most recent call last):
', ' File "test_python.py", line 8, in <module>
s = i.next()
', 'StopIteration
']
```
What can I do in order to catch the 'stop iteration' exception and break a while
loop properly?
An example of why such a thing may be needed is shown below as pseudocode.
State machine:
```
s = ""
while True :
if state is STATE_CODE :
if "//" in s :
tokens.add( TOKEN_COMMENT, s.split( "//" )[1] )
state = STATE_COMMENT
else :
tokens.add( TOKEN_CODE, s )
if state is STATE_COMMENT :
if "//" in s :
tokens.append( TOKEN_COMMENT, s.split( "//" )[1] )
else
state = STATE_CODE
# Re-evaluate same line
continue
try :
s = i.next()
except StopIteration :
break
``` | I am not sure what you are trying to do. You can implement a do-while loop like this:
```
while True:
stuff()
if fail_condition:
break
```
Or:
```
stuff()
while not fail_condition:
stuff()
```
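As a concrete sketch of that second pattern, here is a hypothetical `count_digits` helper whose body must run at least once (so that 0 correctly reports one digit), with the exit test at the bottom:

```python
def count_digits(n):
    """Count decimal digits do-while style: the body always runs once."""
    n = abs(n)
    digits = 0
    while True:
        digits += 1   # loop body
        n //= 10
        if n == 0:    # condition checked at the bottom
            break
    return digits
```

The bottom-tested loop is exactly what makes `count_digits(0) == 1` come out right; a top-tested `while n != 0:` version would report zero digits for zero.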
What are you doing trying to use a do while loop to print the stuff in the list? Why not just use:
```
for i in l:
print i
print "done"
```
Update:
So do you have a list of lines? And you want to keep iterating through it? How about:
```
for s in l:
while True:
stuff()
# use a "break" instead of s = i.next()
```
Does that seem like something close to what you would want? With your code example, it would be:
```
for s in some_list:
while True:
if state is STATE_CODE:
if "//" in s:
tokens.add( TOKEN_COMMENT, s.split( "//" )[1] )
state = STATE_COMMENT
else :
tokens.add( TOKEN_CODE, s )
if state is STATE_COMMENT:
if "//" in s:
tokens.append( TOKEN_COMMENT, s.split( "//" )[1] )
break # get next s
else:
state = STATE_CODE
# re-evaluate same line
# continues automatically
``` | Here's a very simple way to emulate a do-while loop:
```
condition = True
while condition:
# loop body here
condition = test_loop_condition()
# end of loop
```
The key features of a do-while loop are that the loop body always executes at least once, and that the condition is evaluated at the bottom of the loop body. The control structure show here accomplishes both of these with no need for exceptions or break statements. It does introduce one extra Boolean variable. | How to emulate a do-while loop? | [
"python",
"while-loop",
"do-while"
] |
I want to supply a number, and then receive a set of random numbers. However, I want those numbers to be the same regardless of which computer I run it on (assuming I supply the same seed).
Basically my question is: in C++, if I make use of `rand()`, but supply `srand()` with a user-defined seed rather than the current time, will I be able to generate the same random number stream on any computer? | `srand()` & `rand()` are not part of the STL. They're actually part of the C runtime.
Yes, they will produce the same results as long as it's the same implementation of `srand()/rand()`.
Depending on your needs, you might want to consider using [Boost.Random](http://www.boost.org/doc/libs/1_38_0/libs/random/index.html). It provides several high-quality random number generators. | There are dozens of [PRNG](http://en.wikipedia.org/wiki/Pseudorandom%5Fnumber%5Fgenerator)s available as libraries. Pick one. I tend to use [Mersenne Twister](http://en.wikipedia.org/wiki/Mersenne_twister).
By using an externally supplied library, you bypass the risk of a weird or buggy implementation of your language's library `rand()`. As long as your platforms all conform to the same mathematical semantics, you'll get consistent results.
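For what it's worth, the Mersenne Twister later standardized in C++11's `<random>` (essentially the Boost.Random engine) is fully specified by the standard, so a fixed seed reproduces the identical stream on every conforming platform. A sketch with a hypothetical helper:

```cpp
#include <cassert>
#include <cstdint>
#include <random>

// std::mt19937's algorithm is pinned down by the C++ standard, so this
// function returns the same value on every conforming implementation.
std::uint32_t nth_value(std::uint32_t seed, int n) {
    std::mt19937 gen(seed);
    std::uint32_t v = 0;
    for (int i = 0; i < n; ++i) v = gen();
    return v;
}
```

The standard even publishes a check value: the 10000th output of a default-seeded (5489) `mt19937` must be 4123659995.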
MT is a favorite of mine because I'm a physicist, and I use these things for Monte Carlo, where the guarantee of equal-distribution to high dimensions is important. But *don't use MT as a cryptographic PRNG!* | Deterministic Random Number Streams in C++ STL | [
"c++",
"stl",
"random"
] |
I have a client and server application that transfer messages using serialization over TCP. I got the following error when deserializing an object:
Any ideas as to the cause, or possible next steps for analyzing this problem?
```
java.io.StreamCorruptedException: invalid stream header: 383D4649
at java.io.ObjectInputStream.readStreamHeader(Unknown Source)
at java.io.ObjectInputStream.<init>(Unknown Source)
at com.aqua.NmsApi.ResiliantTCPServer$ServerThread.run(ResiliantTCPServer.java:248)
at java.lang.Thread.run(Unknown Source)
There's something wrong with the magic number at the head of the serialized data. You're probably going to need to capture the serialized data and look it over yourself to start with. That ASCII stream is '8=FI'. | There are two possible reasons for that:
* The stream has actually been corrupted (i.e. what you are reading is different from what you wrote at the other end). In that case you should write in a local file each contents (emitted and received), and compare them.
* The magic numbers required by the implementation(s) of ObjectInputStream you are using at are different at either end, for instance because you are using different versions of the Java base packages. Those constants are declared in ObjectStreamConstants, you should check them. | Java StreamCorruptedException | [
"java",
"serialization"
] |
In Python, is it possible to define an alias for an imported module?
For instance:
```
import a_ridiculously_long_module_name
```
...so that it has an alias of 'short\_name'. | ```
import a_ridiculously_long_module_name as short_name
```
also works for
```
import module.submodule.subsubmodule as short_name
``` | [Check here](http://docs.python.org/reference/simple_stmts.html#import)
```
import module as name
```
or
```
from relative_module import identifier as name
``` | Can you define aliases for imported modules in Python? | [
"python",
"module",
"alias",
"python-import"
] |
I've used delegates when designing win forms in .NET... i.e. drag/drop a button, double click, and fill in the myButton\_click event. I want to understand how to create and use user-defined delegates in C#.
How are user-defined delegates used and created in C# ? | I suggest reading a tutorial on the topic.
Basically, you declare a delegate type:
```
public delegate void MyDelegate(string message);
```
Then you can either assign and call it directly:
```
MyDelegate del = SomeFunction;
del("Hello, bunny");
```
Or you create an event:
```
public event MyDelegate MyEvent;
```
Then you can add an event handler from outside like this:
```
SomeObject.MyEvent += SomeFunction;
```
Visual Studio is helpful with this. After you entered the +=, just press tab-tab and it will create the handler for you.
Then you can fire the event from inside the object:
```
if (MyEvent != null) {
MyEvent("Hello, bunny");
}
```
That's the basic usage. | ```
public delegate void testDelegate(string s, int i);
private void callDelegate()
{
testDelegate td = new testDelegate(Test);
td.Invoke("my text", 1);
}
private void Test(string s, int i)
{
Console.WriteLine(s);
Console.WriteLine(i.ToString());
}
``` | How are user-defined delegates used and created in C#? | [
"",
"c#",
"delegates",
""
] |
Is there a difference between Enumeration<? extends ZipEntry> and Enumeration<ZipEntry>? If so, what is the difference? | There's no practical difference in terms of what you can do when you've got one of them, because the type parameter is only used in an "output" position. On the other hand, there's a big difference in terms of what you can use *as* one of them.
Suppose you had an `Enumeration<JarEntry>` - you couldn't pass this to a method which took `Enumeration<ZipEntry>` as one of its arguments. You *could* pass it to a method taking `Enumeration<? extends ZipEntry>` though.
It's more interesting when you've got a type which uses the type parameter in both input and output positions - `List<T>` being the most obvious example. Here are three examples of methods with variations on a parameter. In each case we'll try to get an item from the list, and add another one.
```
// Very strict - only a genuine List<T> will do
public void Foo(List<T> list)
{
T element = list.get(0); // Valid
list.add(element); // Valid
}
// Lax in one way: allows any List that's a List of a type
// derived from T.
public void Foo(List<? extends T> list)
{
T element = list.get(0); // Valid
// Invalid - this could be a list of a different type.
// We don't want to add an Object to a List<String>
list.add(element);
}
// Lax in the other way: allows any List that's a List of a type
// upwards in T's inheritance hierarchy
public void Foo(List<? super T> list)
{
// Invalid - we could be asking a List<Object> for a String.
T element = list.get(0);
// Valid (assuming we get the element from somewhere)
// the list must accept a new element of type T
list.add(element);
}
```
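The producer/consumer distinction above can also be seen in a compilable Java sketch. The class and method names here (`WildcardDemo`, `sum`, `fill`) are invented for illustration:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class WildcardDemo {

    // "? extends": callers may pass List<Integer>, List<Double>, ...
    // Inside the method we can only *read* elements, as Number.
    static double sum(List<? extends Number> numbers) {
        double total = 0;
        for (Number n : numbers) {
            total += n.doubleValue();
        }
        return total;
    }

    // "? super": callers may pass List<Integer>, List<Number>, List<Object>.
    // Inside the method we can only *write* Integers into it.
    static void fill(List<? super Integer> target) {
        for (int i = 1; i <= 5; i++) {
            target.add(i);
        }
    }

    public static void main(String[] args) {
        List<Integer> ints = new ArrayList<Integer>();
        fill(ints);                    // OK: Integer fits "? super Integer"
        System.out.println(sum(ints)); // OK: Integer fits "? extends Number"
        System.out.println(sum(Arrays.asList(1.5, 2.5))); // a List<Double> also works
    }
}
```

Neither call would compile against a plain `List<Number>` parameter; that flexibility is what the wildcard buys you.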
For more details, read:
* [The Java language guide to generics](http://java.sun.com/j2se/1.5.0/docs/guide/language/generics.html)
* [The Java generics tutorial (PDF)](http://java.sun.com/j2se/1.5/pdf/generics-tutorial.pdf)
* [The Java generics FAQ](http://www.angelikalanger.com/GenericsFAQ/JavaGenericsFAQ.html) - particularly [the section on wildcards](http://www.angelikalanger.com/GenericsFAQ/FAQSections/ParameterizedTypes.html#WIldcard%20Instantiations) | Yes, straight from one of the [sun generics tutorials](http://java.sun.com/developer/technicalArticles/J2SE/generics/):
> Here Shape is an abstract class with
> three subclasses: Circle, Rectangle,
> and Triangle.
>
> ```
> public void draw(List<Shape> shape) {
> for(Shape s: shape) {
> s.draw(this);
> }
> }
> ```
>
> It is worth noting that the draw()
> method can only be called on lists of
> Shape and cannot be called on a list
> of Circle, Rectangle, and Triangle for
> example. In order to have the method
> accept any kind of shape, it should be
> written as follows:
>
> ```
> public void draw(List<? extends Shape> shape) {
> // rest of the code is the same
> }
> ``` | Difference between Enumeration<? extends ZipEntry> and Enumeration<ZipEntry>? | [
"",
"java",
"generics",
""
] |
How can I override operators to be used on builtin types like String, arrays etc.? For example: I wish to override the meaning of the + operator for arrays. | You can't overload operators for existing types, as that could potentially break any other code that uses the types.
You can make your own class that encapsulates an array, expose the methods and properties that you need from the array, and overload any operators that makes sense.
Example:
```
public class AddableArray<T> : IEnumerable<T> {
private T[] _array;
public AddableArray(int len) {
_array = new T[len];
}
public AddableArray(params T[] values) : this((IEnumerable<T>)values) {}
public AddableArray(IEnumerable<T> values) {
int len;
if (values is ICollection<T>) {
len = ((ICollection<T>)values).Count;
} else {
len = values.Count();
}
_array = new T[len];
int pos = 0;
foreach (T value in values) {
_array[pos] = value;
pos++;
}
}
public int Length { get { return _array.Length; } }
public T this[int index] {
get { return _array[index]; }
set { _array[index] = value; }
}
public static AddableArray<T> operator +(AddableArray<T> a1, AddableArray<T> a2) {
int len1 = a1.Length;
int len2 = a2.Length;
AddableArray<T> result = new AddableArray<T>(len1 + len2);
for (int i = 0; i < len1; i++) {
result[i] = a1[i];
}
for (int i = 0; i < len2; i++) {
result[len1 + i] = a2[i];
}
return result;
}
public IEnumerator<T> GetEnumerator() {
foreach (T value in _array) {
yield return value;
}
}
IEnumerator System.Collections.IEnumerable.GetEnumerator() {
return _array.GetEnumerator();
}
}
```
Usage:
```
// create two arrays
AddableArray<int> a1 = new AddableArray<int>(1, 2, 3);
AddableArray<int> a2 = new AddableArray<int>(4, 5, 6);
// add them
AddableArray<int> result = a1 + a2;
// display the result
Console.WriteLine(string.Join(", ", result.Select(n=>n.ToString()).ToArray()));
```
(Note that as the class implements `IEnumerable<T>`, you can use extension methods like `Select` on it.) | Basically you can't.
You can use extension methods to add functionality like this (note that extension methods must be `static` members of a `static` class):
```
public static class ArrayExtensions {
    public static void CustomAdd( this Array input, Array addTo ) {
        ...
    }
}
```
But this doesn't work with operators. | Operator Overloading For Builtin Types | [
"",
"c#",
""
] |
I have a C# application that I want to use [Flickr's API](http://www.flickr.com/services/api/). I received my API key and shared secret, but when receiving the key it explicitly mentions not giving out the API key we were provided.
Since the app is open source and also easily viewed with Reflector, I don't think storing it as a string is really secure.
I could encrypt it with a symmetric key, but that is just obfuscating it since the password for the key will be provided in the code.
My question is, is securing the API Key from Flickr actually necessary? If so, are there any recommendations for properly securing the key? | I agree that a random developer's third-party key isn't worth anything. I use a Flickr API Key in my third-party library and simply store it as a string. Flickr's own key to their API would be worth something if dirtied by other use, but someone who wants to get their hands on any API key could look to someone else's open-source project. | I used hashing for this. | Flickr API Key storage | [
"",
"c#",
".net",
"flickr",
"api-key",
""
] |
I'm trying to split the innerText of a div on the newlines in the source for it. This works fine with:
```
$('div').text().split(/\r\n|\r|\n/)
```
But this fails in IE 7, due to the newlines apparently being stripped out. jQuery 1.3.2 doesn't appear to be at fault since i also tried both:
```
$('div')[0].innerText.split(/\r\n|\r|\n/)
```
and
```
$('div')[0].innerHTML.split(/\r\n|\r|\n/)
```
None of the above work if I change the div to a pre, either.
It appears that the newlines are lost once the source is parsed into the DOM. :( Is this so? Is there no way to get at them with javascript? | Newlines are whitespace, and are not generally preserved. They mean the same thing as a space does. | Try splitting on `"\n"` instead of `"\r\n"`.
To do both, consider splitting on the pattern `"\r?\n"`. | Losing newlines in div content in IE 7 | [
"",
"javascript",
"internet-explorer",
"split",
"newline",
"innerhtml",
""
] |
Is there a way to disable caching of serialized object in Java?
I have this scenario:
1. I have an object which is Serializable, I am serializing it, deserializing it, values are OK.
2. On the same object, I change some value, serialize it, deserialize it, and the values are NOT OK; they are the same as the ones initially loaded.
It seems like the serializer is caching the values; is that the case?
Thanks
Copied this example from "fredrik" and adopted to my case:
```
public class SerialDeserial {
public static void main(String[] args) {
try {
ChangingObject obj = new ChangingObject();
obj.foo=1;
// Write it
ObjectOutputStream os = new ObjectOutputStream(new FileOutputStream("test.foo"));
os.writeObject(obj);
os.flush();os.close();
// Read the object
ObjectInputStream is = new ObjectInputStream(new FileInputStream("test.foo"));
ChangingObject objDummy = (ChangingObject)is.readObject();
System.out.println("objDummy.foo is "+objDummy.foo);
// Change it
obj.foo=2;
// Write it
os = new ObjectOutputStream(new FileOutputStream("test.foo"));
os.writeObject(obj);
os.flush();os.close();
// Read the object
is = new ObjectInputStream(new FileInputStream("test.foo"));
objDummy = (ChangingObject)is.readObject();
System.out.println("objDummy.foo is "+objDummy.foo); // this returns "1" insted of "2"
} catch (Exception e) {
e.printStackTrace();
}
}
}
class ChangingObject implements Serializable {
public int foo;
}
``` | [`ObjectOutputStream.reset`](http://java.sun.com/javase/6/docs/api/java/io/ObjectOutputStream.html#reset()).
You can also write the object with [`writeUnshared`](http://java.sun.com/javase/6/docs/api/java/io/ObjectOutputStream.html#writeUnshared(java.lang.Object)), but that is shallow so referenced objects will still be shared.
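Here is a minimal, self-contained sketch of the single-stream handle-table behaviour that `reset` addresses. The class and variable names (`ResetDemo`, `Holder`) are invented, and an in-memory stream replaces the question's file:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class ResetDemo {
    static class Holder implements Serializable {
        int foo;
    }

    static String demo() throws Exception {
        Holder h = new Holder();
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(buf);

        h.foo = 1;
        out.writeObject(h);   // h enters the stream's handle table

        h.foo = 2;
        out.writeObject(h);   // only a back-reference is written: foo=2 is lost

        out.reset();          // forget all previously written objects
        out.writeObject(h);   // a full copy with foo=2 is written this time
        out.close();

        ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(buf.toByteArray()));
        Holder a = (Holder) in.readObject();
        Holder b = (Holder) in.readObject();  // same instance as a, foo still 1
        Holder c = (Holder) in.readObject();  // separate instance, foo == 2
        in.close();
        return a.foo + " " + b.foo + " " + (a == b) + " " + c.foo;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // prints "1 1 true 2"
    }
}
```

Note that this caching only applies while writing repeatedly to the *same* `ObjectOutputStream`; reopening the file with a fresh stream, as in the question's code, starts with an empty handle table.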
Of course, immutable objects are a win as usual. | stepancheg is right; are you sure you're not rereading the first serialized object?
The following sample works. If you can create something similar that doesn't, please post it here.
```
import java.io.*;
public class SerialDeserial {
public static void main(String[] args) {
try {
ChangingObject obj = new ChangingObject();
obj.foo=1;
// Write it
ObjectOutputStream os = new ObjectOutputStream(new FileOutputStream("test.foo"));
os.writeObject(obj);
os.flush();os.close();
// Read the object
ObjectInputStream is = new ObjectInputStream(new FileInputStream("test.foo"));
obj = (ChangingObject)is.readObject();
System.out.println("obj.foo is "+obj.foo);
// Change it
obj.foo=2;
// Write it
os = new ObjectOutputStream(new FileOutputStream("test.foo"));
os.writeObject(obj);
os.flush();os.close();
// Read the object
is = new ObjectInputStream(new FileInputStream("test.foo"));
obj = (ChangingObject)is.readObject();
System.out.println("obj.foo is "+obj.foo);
} catch (Exception e) {
e.printStackTrace();
}
}
}
class ChangingObject implements Serializable {
public int foo;
}
```
**Edit:** If I take your altered (well, not really) example I still get correct output (with 2 instead of 1 as the second output). Your suggestion to call reset should make no difference, as you are reopening the file and starting from the beginning both when reading and writing. I think your problem lies elsewhere (what OS are you using? Which file system? etc.)
It should just work, sorry. | Disable serialization cache | [
"",
"java",
"serialization",
""
] |
I have a data that looks like this:
```
AAA 0.3 1.00 foo chr1,100
AAC 0.1 2.00 bar chr2,33
AAT 3.3 2.11 chr3,45
AAG 1.3 3.11 qux chr1,88
ACA 2.3 1.33 chr8,13
ACT 2.3 7.00 bux chr5,122
```
Note that the lines above are tab-separated. Moreover,
a line may contain either 5 fields or 4 fields.
What I want to do is capture the 4th field into a variable as "" if it doesn't contain any value.
I have the following code, but somehow it reads the 5th field as the 4th field
when the 4th is empty.
What's the correct way to do it?
```
#include <iostream>
#include <vector>
#include <fstream>
#include <sstream>
using namespace std;
int main ( int arg_count, char *arg_vec[] ) {
string line;
ifstream myfile (arg_vec[1]);
if (myfile.is_open())
{
while (getline(myfile,line) )
{
stringstream ss(line);
string Tag;
double Val1;
double Val2;
double Field4;
double Field5;
ss >> Tag >> Val1 >> Val2 >> Field4 >> Field5;
cout << Field4 << endl;
//cout << Tag << "," << Val1 << "," << Val2 << "," << Field4 << "," << Field5 << endl;
}
myfile.close();
}
else { cout << "Unable to open file"; }
return 0;
}
``` | Another C++ only version that just uses the fact that istream must set the failbit if operator>> fails to parse.
```
while(getline(ss, line))
{
stringstream sl(line);
sl >> tag >> v1 >> v2 >> v3 >> v4;
if(sl.fail()) // failed to parse all 5 fields? (EOF while reading also sets failbit)
{
sl.clear();
sl.seekg(0, ios::beg);
sl >> tag >> v1 >> v2 >> v4; // do it again with 4
v3 = "EMPTY"; // just a default value
}
cout << "tag: " << tag <<std::endl
<< "v1: " << v1 << std::endl
<< "v2: " << v2 << std::endl
<< "v3: " << v3 << std::endl
<< "v4: " << v4 << std::endl << std::endl;
}
``` | Tokenize the line into a vector of strings and then do conversion to an appropriate data type depending on the number of tokens.
If you can use [Boost.Spirit](http://spirit.sourceforge.net/), this reduces to a simple problem of defining an appropriate grammar. | How to Parse Lines With Differing Number of Fields in C++ | [
"",
"c++",
"parsing",
"stringstream",
""
] |
I think it's a 5am brain drain, but I'm having trouble with understanding this.
```
obj = ['a','b'];
alert( obj.prototype ); //returns "undefined"
```
Why isn't `obj.prototype` returning function `Array(){ }` as the prototype? It does reference `Array` as the constructor. | Because the instance doesn't have a prototype, the class\* does.
Possibly you want `obj.constructor.prototype` or alternatively `obj.constructor==Array`
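As a quick runnable check of the relationships just described (a sketch; `Object.getPrototypeOf` is the standardised ES5 way to reach the actual prototype link):

```javascript
const obj = ['a', 'b'];

console.log(obj.prototype);                                  // undefined: instances have no .prototype
console.log(obj.constructor === Array);                      // true
console.log(obj.constructor.prototype === Array.prototype);  // true
console.log(Object.getPrototypeOf(obj) === Array.prototype); // true: the real prototype link
```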
\* *to be more accurate, the **constructor** has the prototype, but of course in JS functions = classes = constructors* | According to the ECMA spec, an object's prototype link isn't visible, but most modern browsers (firefox, safari, chrome) let you see it via the `__proto__` property, so try:
```
obj = ['a','b'];
alert( obj.__proto__ );
```
An object also has the `constructor` property set on construction, so you can try:
```
obj = ['a','b'];
alert( obj.constructor.prototype );
```
However, `obj.constructor` can be changed after an object's contruction, as can `obj.constructor.prototype`, without changing the actual prototype pointer of obj. | Javascript - Getting 'undefined' when trying to get array's prototype | [
"",
"javascript",
"prototype",
""
] |
I have an XML structure like the following
```
<root>
<person>
<name>James</name>
<description xsi:type="me:age">12</description>
<description xsi:type="me:height">6 foot</description>
...
```
Which I have to pull out of a table like ...
## Person
Name , Age , Height
I'm trying to use the FOR XML path stuff in SQL 2005 with a query like
```
SELECT
Name as 'name',
Age as 'description xsi:type="me:age"',
Height as 'description xsi:type="me:height"'
FOR XML PATH('person')
```
But it gives me an error about the 'description xsi' namespace being missing. Is there any way to achieve this using FOR XML PATH. The actual query is rather more complex than this example and would take a lot of effort to change.
Thanks | FOR XML PATH is a little difficult at times (at least from what I know). This may get you there:
```
WITH XMLNAMESPACES('uri' as xsi)
SELECT
'me:age' AS 'description/@xsi:type'
,age AS 'description'
,name AS 'name'
,'me:height' AS 'description/@xsi:type'
,height AS 'description'
FROM #test
FOR XML PATH('person')
```
Produces:
```
<person xmlns:xsi="uri">
<description xsi:type="me:age">32</description>
<name>Alice</name>
<description xsi:type="me:height">6 Foot</description>
</person>
<person xmlns:xsi="uri">
<description xsi:type="me:age">24</description>
<name>Bob</name>
<description xsi:type="me:height">5 Feet 5 Inches</description>
</person>
``` | I don't think it's possible to deal with sibling nodes with the same name using `FOR XML PATH`.
I was able to generate your schema using `FOR XML EXPLICIT`.
The output isn't valid XML as it doesn't include a definition for the `xsi` namespace, but it does match your spec:
```
create table #test
(id int identity
,name varchar(50)
,age int
,height varchar(20))
insert #test (name,age,height)
select 'Alice',32,'6 feet one inch'
union select 'Bob',30,'5 feet 10 inches'
union select 'Charles',23,'6 feet two inch'
SELECT 1 AS Tag
,NULL AS Parent
,'' AS [root!1]
,null AS [person!2!name!ELEMENT]
,null AS [description!3]
,null AS [description!3!xsi:type]
,null AS [description!4]
,null AS [description!4!xsi:type]
UNION ALL
SELECT 2 AS Tag
,1 AS Parent
,null
,name
,null
,null
,null
,null
FROM #test
UNION ALL
SELECT 3 AS Tag
,2 AS Parent
,null
,name
,age
,'me:age'
,null
,null
FROM #test
UNION ALL
SELECT 4 AS Tag
,2 AS Parent
,null
,name
,null
,null
,height
,'me:height'
FROM #test
order by [person!2!name!ELEMENT],Tag
FOR XML EXPLICIT
``` | SQL Server 2005 "FOR XML PATH" Multiple tags with same name | [
"",
"sql",
"sql-server",
"xml",
""
] |
I'm trying to write a very simple parser in C#.
I need a lexer -- something that lets me associate regular expressions with tokens, so it reads in regexs and gives me back symbols.
It seems like I ought to be able to use Regex to do the actual heavy lifting, but I can't see an easy way to do it. For one thing, Regex only seems to work on strings, not streams (why is that!?!?).
Basically, I want an implementation of the following interface:
```
interface ILexer : IDisposable
{
/// <summary>
/// Return true if there are more tokens to read
/// </summary>
bool HasMoreTokens { get; }
/// <summary>
/// The actual contents that matched the token
/// </summary>
string TokenContents { get; }
/// <summary>
/// The particular token in "tokenDefinitions" that was matched (e.g. "STRING", "NUMBER", "OPEN PARENS", "CLOSE PARENS")
/// </summary>
object Token { get; }
/// <summary>
/// Move to the next token
/// </summary>
void Next();
}
interface ILexerFactory
{
/// <summary>
/// Create a Lexer for converting a stream of characters into tokens
/// </summary>
/// <param name="reader">TextReader that supplies the underlying stream</param>
/// <param name="tokenDefinitions">A dictionary from regular expressions to their "token identifers"</param>
/// <returns>The lexer</returns>
ILexer CreateLexer(TextReader reader, IDictionary<string, object> tokenDefinitions);
}
```
So, pluz send the codz...
No, seriously, I am about to start writing an implementation of the above interface yet I find it hard to believe that there isn't some simple way of doing this in .NET (2.0) already.
So, any suggestions for a simple way to do the above? (Also, I don't want any "code generators". Performance is not important for this thing and I don't want to introduce any complexity into the build process.) | The original version I posted here as an answer had a problem in that it only worked while there was more than one "Regex" that matched the current expression. That is, as soon as only one Regex matched, it would return a token - whereas most people want the Regex to be "greedy". This was especially the case for things such as "quoted strings".
The only solution that sits on top of Regex is to read the input line-by-line (which means you cannot have tokens that span multiple lines). I can live with this - it is, after all, a poor man's lexer! Besides, it's usually useful to get line number information out of the Lexer in any case.
So, here's a new version that addresses these issues. Credit also goes to [this](http://sites.google.com/site/fredm/linguist-parsing-system)
```
public interface IMatcher
{
/// <summary>
/// Return the number of characters that this "regex" or equivalent
/// matches.
/// </summary>
/// <param name="text">The text to be matched</param>
/// <returns>The number of characters that matched</returns>
int Match(string text);
}
sealed class RegexMatcher : IMatcher
{
private readonly Regex regex;
public RegexMatcher(string regex) => this.regex = new Regex(string.Format("^{0}", regex));
public int Match(string text)
{
var m = regex.Match(text);
return m.Success ? m.Length : 0;
}
public override string ToString() => regex.ToString();
}
public sealed class TokenDefinition
{
public readonly IMatcher Matcher;
public readonly object Token;
public TokenDefinition(string regex, object token)
{
this.Matcher = new RegexMatcher(regex);
this.Token = token;
}
}
public sealed class Lexer : IDisposable
{
private readonly TextReader reader;
private readonly TokenDefinition[] tokenDefinitions;
private string lineRemaining;
public Lexer(TextReader reader, TokenDefinition[] tokenDefinitions)
{
this.reader = reader;
this.tokenDefinitions = tokenDefinitions;
nextLine();
}
private void nextLine()
{
do
{
lineRemaining = reader.ReadLine();
++LineNumber;
Position = 0;
} while (lineRemaining != null && lineRemaining.Length == 0);
}
public bool Next()
{
if (lineRemaining == null)
return false;
foreach (var def in tokenDefinitions)
{
var matched = def.Matcher.Match(lineRemaining);
if (matched > 0)
{
Position += matched;
Token = def.Token;
TokenContents = lineRemaining.Substring(0, matched);
lineRemaining = lineRemaining.Substring(matched);
if (lineRemaining.Length == 0)
nextLine();
return true;
}
}
throw new Exception(string.Format("Unable to match against any tokens at line {0} position {1} \"{2}\"",
LineNumber, Position, lineRemaining));
}
public string TokenContents { get; private set; }
public object Token { get; private set; }
public int LineNumber { get; private set; }
public int Position { get; private set; }
public void Dispose() => reader.Dispose();
}
```
Example program:
```
string sample = @"( one (two 456 -43.2 "" \"" quoted"" ))";
var defs = new TokenDefinition[]
{
// Thanks to [steven levithan][2] for this great quoted string
// regex
new TokenDefinition(@"([""'])(?:\\\1|.)*?\1", "QUOTED-STRING"),
// Thanks to http://www.regular-expressions.info/floatingpoint.html
new TokenDefinition(@"[-+]?\d*\.\d+([eE][-+]?\d+)?", "FLOAT"),
new TokenDefinition(@"[-+]?\d+", "INT"),
new TokenDefinition(@"#t", "TRUE"),
new TokenDefinition(@"#f", "FALSE"),
new TokenDefinition(@"[*<>\?\-+/A-Za-z->!]+", "SYMBOL"),
new TokenDefinition(@"\.", "DOT"),
new TokenDefinition(@"\(", "LEFT"),
new TokenDefinition(@"\)", "RIGHT"),
new TokenDefinition(@"\s", "SPACE")
};
TextReader r = new StringReader(sample);
Lexer l = new Lexer(r, defs);
while (l.Next())
Console.WriteLine("Token: {0} Contents: {1}", l.Token, l.TokenContents);
```
Output:
```
Token: LEFT Contents: (
Token: SPACE Contents:
Token: SYMBOL Contents: one
Token: SPACE Contents:
Token: LEFT Contents: (
Token: SYMBOL Contents: two
Token: SPACE Contents:
Token: INT Contents: 456
Token: SPACE Contents:
Token: FLOAT Contents: -43.2
Token: SPACE Contents:
Token: QUOTED-STRING Contents: " \" quoted"
Token: SPACE Contents:
Token: RIGHT Contents: )
Token: RIGHT Contents: )
``` | It may be overkill, but have a look at [Irony](http://www.codeplex.com/irony) on CodePlex.
*Irony is a development kit for implementing languages on .NET platform. It uses the flexibility and power of c# language and .NET Framework 3.5 to implement a completely new and streamlined technology of compiler construction.
Unlike most existing yacc/lex-style solutions Irony does not employ any scanner or parser code generation from grammar specifications written in a specialized meta-language. In Irony the target language grammar is coded directly in c# using operator overloading to express grammar constructs. Irony's scanner and parser modules use the grammar encoded as c# class to control the parsing process. See the expression grammar sample for an example of grammar definition in c# class, and using it in a working parser.* | Poor man's "lexer" for C# | [
"",
"c#",
"regex",
"lexer",
""
] |
While debugging something, I saw the STL vector::empty() implementation:
```
bool empty() const
{return (size() == 0); }
```
I believe, whenever we are probing the emptiness of vector it is always recommended to use empty over size(). But seeing that implementation, I am wondering, what is the benefit of doing so? Instead, there is a function call overhead in calling empty as it internally calls size()==0.
I thought empty() might be helpful in the case of list, as size() doesn't guarantee constant time for a list. To verify my assumption, I checked the list implementation and, surprisingly, found the same implementation there as well:
```
return (size() == 0);
```
I am a bit confused now. If empty() internally uses size(), then why should we prefer empty() over size()? | You would need to write the condition out every time you used `size()`; it's more convenient to use `empty()`. This is, of course, provided you don't switch containers. As others have pointed out, it is up to the implementation whether `empty()` uses `size()` or not. However, the standard does guarantee that `empty()` is a constant-time operation for all
standard containers. | Because if you switch from std::vector to std::list or another container, it may be different.
For example some implementations of `std::list::size` take `O(n)` and not `O(1)`. | size() Vs empty() in vector - why empty() is preferred? | [
"",
"c++",
"stl",
"size",
""
] |
I have a question about the std::vector.
I have a very memory-intensive algorithm where I foresee that predicting vector sizes and reserving enough memory for the vectors in advance will help me a lot with reducing memory usage.
Which of the following is better:
```
for ( ... ) {
std::vector<Type> my_vector;
my_vector.reserve(stuff_count);
// Do stuff , and append stuff to my_vector.
}
```
Or this:
```
std::vector<Type> my_vector;
for ( ... ) {
my_vector.clear();
my_vector.reserve(stuff_count);
// Do stuff , and append stuff to my_vector.
}
```
Please tell me which is best, or if there is an even better way of doing stuff.
Thank you very much in advance! | With the first variant you reallocate the vector's buffer on each iteration – that's usually quite costly. With the second variant you only reallocate occasionally. The second variant is better since speed is a priority for you.
It's unclear from your question where the number of elements is known from. Maybe you can even quickly calculate the maximum number of elements across all iterations, set the buffer to that size, and have no reallocation at all. | > I foresee that predicting vector sizes and reserving enough memory for the vectors in advance will help me a lot with reducing memory usage.
Try and act like an engineer, not a fortune teller. Create a test, and measure the difference. | Clean vector every loop iteration. What is the most memory efficient way? | [
"",
"c++",
"vector",
"loops",
"memory-management",
"memory-optimization",
""
] |
Consider a following code:
```
struct X {
void MethodX() {
...
}
};
struct Y {
void MethodY() {
...
}
};
void test () {
X x;
Y y;
Dispatcher d;
d.Register("x", x, &X::MethodX);
d.Register("y", y, &Y::MethodY);
d.Call("x");
d.Call("y");
}
```
The question is how to implement Dispatcher.
I don't mind X and Y may inheriting from something, but Dispatcher should allow further clients (not only X and Y).
And I would like to avoid void\* pointers if possible :) | Take a look at [boost::function](http://www.boost.org/doc/libs/1_38_0/doc/html/function.html), it does this. | To avoid boost usage and implement your own you can take a look at the book
<http://en.wikipedia.org/wiki/Modern_C%2B%2B_Design>
There is a Loki library described in the book, with a good explanation of how to make a functor smart enough for your needs. (Note that the sample below nevertheless uses `boost::function` and `boost::bind` for brevity.)
```
class Dispatcher
{
public:
typedef boost::function< void ( void ) > FunctionT;
void Register( const std::string& name, FunctionT function )
{
registered_[ name ] = function;
}
void Call( const std::string& name )
{
RegisterdFunctionsT::const_iterator it =
registered_.find( name );
if ( it == registered_.end() )
throw std::logic_error( "Function is not registered" );
(it->second)();
}
private:
typedef std::map< std::string, FunctionT > RegisterdFunctionsT;
RegisterdFunctionsT registered_;
};
int main()
{
X x;
Y y;
Dispatcher d;
d.Register( "x", boost::bind( &X::MethodX, &x ) );
d.Register( "y", boost::bind( &Y::MethodY, &y ) );
d.Call( "x" );
d.Call( "y" );
return 0;
}
``` | Is it possible to create method call dispatcher in C++? | [
"",
"c++",
"pointers",
"methods",
"types",
"dispatcher",
""
] |
> **Possible Duplicate:**
> [How costly is .NET reflection?](https://stackoverflow.com/questions/25458/how-costly-is-net-reflection)
The "elegant" solution to a [problem](https://stackoverflow.com/questions/616447/what-is-the-best-way-to-enforce-properties-that-must-be-implemented-at-each-subcl) I am having is to use attributes to associate a class and its properties with another's. The problem is, to convert it to the other, I'd have to use reflection. I am considering it for a server-side app that will be hosted on the cloud.
I've heard many rumblings of "reflection is slow, don't use it," how slow is slow? Is it so CPU intensive that it'll multiply my CPU time so much that I'll literally be paying for my decision to use reflection at the bottom of my architecture on the cloud? | Just in case you don't see the update on the original question: when you are reflecting to find all the types that support a certain attribute, you have a perfect opportunity to use caching. That means you don't have to use reflection more than once at runtime.
To answer the general question, reflection is slower than raw compiled method calls, but it's much, much faster than accessing a database or the file system, and practically all web servers do those things all the time. | It's many times faster than filesystem access.
It's many many times faster than database access across the network.
It's many many many times faster than sending an HTTP response to the browser. | Is reflection really THAT slow that I shouldn't use it when it makes sense to? | [
"",
"c#",
".net",
"vb.net",
"oop",
"reflection",
""
] |
How can I calculate the logarithm of a BigDecimal? Does anyone know of any algorithms I can use?
My googling so far has come up with the (useless) idea of just converting to a double and using Math.log.
I will provide the precision of the answer required.
edit: any base will do. If it's easier in base x, I'll do that. | [Java Number Cruncher: The Java Programmer's Guide to Numerical Computing](http://books.google.com.au/books?id=h0d8hVA5HyQC&dq=books+Java+Number+Cruncher:+The+Java+Programmer%27s+Guide+to+Numerical+Computing&printsec=frontcover&source=bn&hl=en&ei=MOLjSaPQBdaSkAX4w-nbCw&sa=X&oi=book_result&ct=result&resnum=4) provides a solution using [Newton's Method](http://en.wikipedia.org/wiki/Newton%27s_method). Source code from the book is available [here](http://www.apropos-logic.com/nc/download.html). The following has been taken from chapter *12.5 Big Decimal Functions* (p330 & p331):
```
/**
* Compute the natural logarithm of x to a given scale, x > 0.
*/
public static BigDecimal ln(BigDecimal x, int scale)
{
// Check that x > 0.
if (x.signum() <= 0) {
throw new IllegalArgumentException("x <= 0");
}
// The number of digits to the left of the decimal point.
int magnitude = x.toString().length() - x.scale() - 1;
if (magnitude < 3) {
return lnNewton(x, scale);
}
// Compute magnitude*ln(x^(1/magnitude)).
else {
// x^(1/magnitude)
BigDecimal root = intRoot(x, magnitude, scale);
// ln(x^(1/magnitude))
BigDecimal lnRoot = lnNewton(root, scale);
// magnitude*ln(x^(1/magnitude))
return BigDecimal.valueOf(magnitude).multiply(lnRoot)
.setScale(scale, BigDecimal.ROUND_HALF_EVEN);
}
}
/**
* Compute the natural logarithm of x to a given scale, x > 0.
* Use Newton's algorithm.
*/
private static BigDecimal lnNewton(BigDecimal x, int scale)
{
int sp1 = scale + 1;
BigDecimal n = x;
BigDecimal term;
// Convergence tolerance = 5*(10^-(scale+1))
BigDecimal tolerance = BigDecimal.valueOf(5)
.movePointLeft(sp1);
// Loop until the approximations converge
// (two successive approximations are within the tolerance).
do {
// e^x
BigDecimal eToX = exp(x, sp1);
// (e^x - n)/e^x
term = eToX.subtract(n)
.divide(eToX, sp1, BigDecimal.ROUND_DOWN);
// x - (e^x - n)/e^x
x = x.subtract(term);
Thread.yield();
} while (term.compareTo(tolerance) > 0);
return x.setScale(scale, BigDecimal.ROUND_HALF_EVEN);
}
/**
* Compute the integral root of x to a given scale, x >= 0.
* Use Newton's algorithm.
* @param x the value of x
* @param index the integral root value
* @param scale the desired scale of the result
* @return the result value
*/
public static BigDecimal intRoot(BigDecimal x, long index,
int scale)
{
// Check that x >= 0.
if (x.signum() < 0) {
throw new IllegalArgumentException("x < 0");
}
int sp1 = scale + 1;
BigDecimal n = x;
BigDecimal i = BigDecimal.valueOf(index);
BigDecimal im1 = BigDecimal.valueOf(index-1);
BigDecimal tolerance = BigDecimal.valueOf(5)
.movePointLeft(sp1);
BigDecimal xPrev;
// The initial approximation is x/index.
x = x.divide(i, scale, BigDecimal.ROUND_HALF_EVEN);
// Loop until the approximations converge
// (two successive approximations are equal after rounding).
do {
// x^(index-1)
BigDecimal xToIm1 = intPower(x, index-1, sp1);
// x^index
BigDecimal xToI =
x.multiply(xToIm1)
.setScale(sp1, BigDecimal.ROUND_HALF_EVEN);
// n + (index-1)*(x^index)
BigDecimal numerator =
n.add(im1.multiply(xToI))
.setScale(sp1, BigDecimal.ROUND_HALF_EVEN);
// (index*(x^(index-1))
BigDecimal denominator =
i.multiply(xToIm1)
.setScale(sp1, BigDecimal.ROUND_HALF_EVEN);
// x = (n + (index-1)*(x^index)) / (index*(x^(index-1)))
xPrev = x;
x = numerator
.divide(denominator, sp1, BigDecimal.ROUND_DOWN);
Thread.yield();
} while (x.subtract(xPrev).abs().compareTo(tolerance) > 0);
return x;
}
/**
* Compute e^x to a given scale.
* Break x into its whole and fraction parts and
* compute (e^(1 + fraction/whole))^whole using Taylor's formula.
* @param x the value of x
* @param scale the desired scale of the result
* @return the result value
*/
public static BigDecimal exp(BigDecimal x, int scale)
{
// e^0 = 1
if (x.signum() == 0) {
return BigDecimal.valueOf(1);
}
// If x is negative, return 1/(e^-x).
else if (x.signum() == -1) {
return BigDecimal.valueOf(1)
.divide(exp(x.negate(), scale), scale,
BigDecimal.ROUND_HALF_EVEN);
}
// Compute the whole part of x.
BigDecimal xWhole = x.setScale(0, BigDecimal.ROUND_DOWN);
// If there isn't a whole part, compute and return e^x.
if (xWhole.signum() == 0) return expTaylor(x, scale);
// Compute the fraction part of x.
BigDecimal xFraction = x.subtract(xWhole);
// z = 1 + fraction/whole
BigDecimal z = BigDecimal.valueOf(1)
.add(xFraction.divide(
xWhole, scale,
BigDecimal.ROUND_HALF_EVEN));
// t = e^z
BigDecimal t = expTaylor(z, scale);
BigDecimal maxLong = BigDecimal.valueOf(Long.MAX_VALUE);
BigDecimal result = BigDecimal.valueOf(1);
// Compute and return t^whole using intPower().
// If whole > Long.MAX_VALUE, then first compute products
// of e^Long.MAX_VALUE.
while (xWhole.compareTo(maxLong) >= 0) {
result = result.multiply(
intPower(t, Long.MAX_VALUE, scale))
.setScale(scale, BigDecimal.ROUND_HALF_EVEN);
xWhole = xWhole.subtract(maxLong);
Thread.yield();
}
return result.multiply(intPower(t, xWhole.longValue(), scale))
.setScale(scale, BigDecimal.ROUND_HALF_EVEN);
}
``` | A hacky little algorithm that works great for large numbers uses the relation `log(AB) = log(A) + log(B)`. Here's how to do it in base 10 (which you can trivially convert to any other logarithm base):
1. Count the number of decimal digits in the answer. That's the integral part of your logarithm, *plus one*. Example: `floor(log10(123456)) + 1` is 6, since 123456 has 6 digits.
2. You can stop here if all you need is the integer part of the logarithm: just subtract 1 from the result of step 1.
3. To get the fractional part of the logarithm, divide the number by `10^(number of digits)`, then compute the log of that using `math.log10()` (or whatever; use a simple series approximation if nothing else is available), and add it to the integer part. Example: to get the fractional part of `log10(123456)`, compute `math.log10(0.123456) = -0.908...`, and add it to the result of step 1: `6 + -0.908 = 5.092`, which is `log10(123456)`. Note that you're basically just tacking on a decimal point to the front of the large number; there is probably a nice way to optimize this in your use case, and for really big numbers you don't even need to bother with grabbing all of the digits -- `log10(0.123)` is a great approximation to `log10(0.123456789)`. | Logarithm of a BigDecimal | [
"java",
"bigdecimal",
"logarithm"
] |
I've inherited a less-than-ideal table structure, and I'm trying to improve it as much as I can without tearing down and rebuilding. There are currently at least two levels of data for everything, the legacy data and the marketing override data. I'm trying to find all the records within the legacy data that don't yet have a marketing override.
So far, this is what I have:
```
SELECT DISTINCT
old.STYLE_NBR, old.COLOR_NBR
FROM
LEGACY_PRODUCT_TABLE old
INNER JOIN
MARKETING_PRODUCT_TABLE new
ON old.STYLE_NBR <> new.style_number AND old.COLOR_NBR <> new.colour_number
```
This seems to work, but it takes a few minutes to run. If at all possible, I'd like a more efficient way of doing this.
Other info:
* There are about 60,000 records in the legacy table
* There are about 7,000 in the marketing table
* Both `STYLE_NBR` and `COLOR_NBR` are char(5) and, when combined, make a unique ID. | You should use a LEFT OUTER JOIN and change your lookup:
```
SELECT DISTINCT
old.STYLE_NBR, old.COLOR_NBR
FROM
LEGACY_PRODUCT_TABLE old
LEFT OUTER JOIN MARKETING_PRODUCT_TABLE new
ON (old.STYLE_NBR + old.COLOR_NBR) = (new.style_number + new.Colour_number)
WHERE (new.style_number + new.Colour_number) IS NULL
``` | What you currently have is incorrect because it will return a row for every row that *doesn't* match, so potentially 6999 rows in the result per legacy-table row if there is a marketing override, or 7000 if there isn't. The DISTINCT will then discard the duplicates, but the result will still be wrong: even if a legacy row has a matching marketing row, the non-matching marketing rows ensure it still ends up in the result set.
Try this instead:
```
select distinct style_nbr, color_nbr
from legacy_product_table L
where not exists
(
select * from marketing_product_table m
where m.style_nbr = L.style_nbr and m.color_nbr = L.color_nbr
)
```
Make sure the product table has an index on (style\_nbr,color\_nbr). | SELECT All that are not in another table | [
"sql",
"sql-server"
] |
Duplicate:
**[C++: undefined reference to static class member](https://stackoverflow.com/questions/272900/c-undefined-reference-to-static-class-member)**
If I have a class/struct like this
```
// header file
class Foo
{
public:
static int bar;
int baz;
int adder();
};
// implementation
int Foo::adder()
{
return baz + bar;
}
```
This doesn't work. I get an "undefined reference to `Foo::bar'" error. How do I access static class variables in C++? | You must add the following line in the *implementation* file:
```
int Foo::bar = your_initial_value_here;
```
This definition is required so that storage is allocated for the static member; without it the linker reports an undefined reference. | It's the correct syntax, however, `Foo::bar` must be defined separately, outside of the header. In one of your `.cpp` files, say this:
```
int Foo::bar = 0; // or whatever value you want
``` | Accessing static class variables in C++? | [
"c++",
"class",
"static"
] |
I have a 'mapwrap' div set to 400px x 400px and inside that I have a Google 'map' set to 100% x 100%. So the map loads at 400 x 400px, then with JavaScript I resize the 'mapwrap' to 100% x 100% of the screen - the google map resizes to the whole screen as I expected but tiles start disappearing before the right hand edge of the page.
Is there a simple function I can call to cause the Google map to re-adjust to the larger size 'mapwrap' div? | If you're using Google Maps v2, call `checkResize()` on your map after resizing the container. [link](http://code.google.com/apis/maps/documentation/reference.html#GMap2)
**UPDATE**
Google Maps JavaScript API v2 was deprecated in 2011. It is not available anymore. | for Google Maps v3, you need to trigger the resize event differently:
```
google.maps.event.trigger(map, "resize");
```
See the documentation for the resize event (you'll need to search for the word 'resize'): <http://code.google.com/apis/maps/documentation/v3/reference.html#event>
---
**Update**
This answer has been here a long time, so a little demo might be worthwhile & although it uses jQuery, there's no real need to do so.
```
$(function() {
var mapOptions = {
zoom: 8,
center: new google.maps.LatLng(-34.397, 150.644)
};
var map = new google.maps.Map($("#map-canvas")[0], mapOptions);
// listen for the window resize event & trigger Google Maps to update too
$(window).resize(function() {
// (the 'map' here is the result of the created 'var map = ...' above)
google.maps.event.trigger(map, "resize");
});
});
```
```
html,
body {
height: 100%;
}
#map-canvas {
min-width: 200px;
width: 50%;
min-height: 200px;
height: 80%;
border: 1px solid blue;
}
```
```
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<script src="https://maps.googleapis.com/maps/api/js?v=3.exp&dummy=.js"></script>
Google Maps resize demo
<div id="map-canvas"></div>
```
**UPDATE 2018-05-22**
With the new renderer released in version 3.32 of the Maps JavaScript API, the resize event is no longer part of the `Map` class.
The documentation states
> When the map is resized, the map center is fixed
>
> * The full-screen control now preserves center.
> * **There is no longer any need to trigger the resize event manually.**
source: <https://developers.google.com/maps/documentation/javascript/new-renderer>
`google.maps.event.trigger(map, "resize");` doesn't have any effect starting from version 3.32 | How do I resize a Google Map with JavaScript after it has loaded? | [
"javascript",
"css",
"google-maps"
] |
I've spent 4 years developing C++ using Visual Studio 2008 for a commercial company; it's now time for me to upgrade my development process.
Here's the problem: I don't have one-button build automation. I also don't have a CI server that automatically builds when a commit happens and emails me whether a build is broken or not. Worse, we don't even have a single unit test!
Can someone please point to me how I can get started?
I have looked at many many tools and I think I might go with:
[Visual Build](http://www.kinook.com/VisBuildPro) (for build automation) (Note: I also considered Final Builder)
[Cruise](http://studios.thoughtworks.com/cruise-continuous-integration) (for CI server)
I'm also just now starting to practice TDD... so I will want to automate my unit tests as well. I chose Google Test/Mock for their extensive documentation. (Can't go wrong with the Google brand, can I? =p)
Price is not the issue, I want what's best and easiest to get started.
Can people who use real CI/automation tools for **unmanaged MSVC++** tell me their tools and how I can go about getting started?
Our source control is Subversion.
Last point: I'm also considering project management/tracking tool that integrates right into VSTD ..and thinking about using OnTime. VSTS costs too much. I tried FogBugz, but I think it's too simple. Any others? | I would take some time to seriously consider [TeamCity](http://www.jetbrains.com/teamcity/). We used CruiseControl.NET for a while and TeamCity completely demolishes it. Plus it has built-in plugins for Boost and CppUnit, so your unit testing will come for free.
Best of all, the tool is free for < 20 users and gives you three build agents.
I just finished implementing our C++ product at work and it was fairly simple. We did it with msbuild and basically use the msbuild task to compile the solution. Other targets can be used to copy files, run unit tests, etc. | The last time I worked on an unmanaged MSVC++ project (which was moderately sized I might add), we used FinalBuilder to do the automated build & versioning (and even executing PCLint and other profiling tools as well).
Having said that, if you're willing to invest the time, MSBuild (or nAnt perhaps?) can do everything you need - even for unmanaged solutions.
Which brings us to the trade-off: Tools like Visual Build Pro and Final Builder get you up and running quickly. If you want something which offers a greater range of customization, you'll probably be spending a decent amount of time learning and understanding it - i.e. MSBuild, CIFactory, nAnt etc are no cake walk.
So if price isn't an issue - is time an issue? If time is at a premium, I'd investigate the GUI driven tools, they'll get you to where you want to go quickly. If you know you're going to need to extend on the simple one button build + unit tests + deploy scenario (which happens a lot!) then decide if you can invest the time into the more complex tools like MSBuild? | Continuous Integration: Unmanaged C++ on Visual Studio 2008 | [
"c++",
"visual-studio",
"unit-testing",
"continuous-integration",
"build-automation"
] |
I have two database tables, Categories and SuperCategories, for an inventory control system I'm working on:
Categories: ID\_Category, CategoryName
SuperCategories: ID\_SuperCategory, CategoryID, SuperCategoryID
I'm putting category-subcategory relationships into the SuperCategories table. I'm putting all categories into the Categories table.
Here is an example:
```
Categories:
ID_Category CategoryName
1 Box
2 Red Box
3 Blue Box
4 Blue Plastic Box
5 Can
6 Tin Can
```
```
SuperCategories:
ID_Super CategoryID SuperCategoryID
1 2 1
2 3 1
3 4 3
4 6 5
```
CategoryID and SuperCategoryID relate back to the primary key ID\_Category in the Categories table.
What I would like is a query that returns all of the category names that are not parents of any other categories:
Red Box
Blue Plastic Box
Tin Can
This amounts to finding all values of ID\_Category that do not show up in the SuperCategoryID column (2, 4, and 6), but I'm having trouble writing the SQL.
I'm using VB6 to query an Access 2000 database.
Any help is appreciated. Thanks!
**EDIT:** I voted up everyone's answer that gave me something that worked. I accepted the answer that I felt was the most instructive. Thanks again for your help! | Mike Pone's answer works, because he joins the "Categories" table with the "SuperCategories" table as a "LEFT OUTER JOIN" - this will take all entries from "Categories" and add columns from "SuperCategories" to those where the link exists - where it does not exist (e.g. where there is no entry in "SuperCategories"), you'll get NULLs for the SuperCategories columns - and that's exactly what Mike's query then checks for.
If you would write the query like so:
```
SELECT c.CategoryName, s.ID_Super
FROM Categories c
LEFT OUTER JOIN SuperCategories s ON c.ID_Category = s.SuperCategoryID
```
you would get something like this:
```
CategoryName ID_Super
Box 1
Box 2
Red Box NULL
Blue Box 3
Blue Plastic Box NULL
Can 4
Tin Can NULL
```
So this basically gives you your answer - all the rows where the ID\_Super on the LEFT OUTER JOIN is NULL are those who don't have any entries in the SuperCategories table. All clear? :-)
Marc | ```
SELECT
CAT.ID_Category,
CAT.CategoryName
FROM
Categories CAT
WHERE
NOT EXISTS
(
SELECT
*
FROM
SuperCategories SC
WHERE
SC.SuperCategoryID = CAT.ID_Category
)
```
Or
```
SELECT
CAT.ID_Category,
CAT.CategoryName
FROM
Categories CAT
LEFT OUTER JOIN SuperCategories SC ON
SC.SuperCategoryID = CAT.ID_Category
WHERE
SC.ID_Super IS NULL
```
I'll also make the suggestion that your naming standards could probably use some work. They seem all over the place and difficult to work with. | SQL query on two tables - return rows in one table that don't have entries in the other | [
"sql"
] |
I have a web application that I'm working on (ASP.NET 2.0 with C#, using Visual Studio 2005). Everything was working fine, and all of a sudden I get the error:
> Error 1 The name 'Label1' does not exist in the current context
and 43 other errors of the same sort, one for each time I used a control in my code-behind page.
This is only happening for one page, and it's as if the code-behind page isn't recognizing the controls. Another interesting thing is that IntelliSense isn't picking up any of the controls either.
I have tried to clean the solution file, delete the obj file, exclude the files from the project then re-add them, close Visual Studio and restart it, and even restart my computer, but none of these have worked. | Check your code behind file name and Inherits property on the @Page directive, make sure they both match. | I know this is an old question, but I had a similar problem and wanted to post my solution in case it could benefit someone else. I encountered the problem while learning to use:
* ASP.NET 3.5
* C#
* VS2008
I was trying to create an AJAX-enabled page (look into a tutorial about using the ScriptManager object if you aren't familiar with this). I tried to access the HTML elements in the page via the C# code, and I was getting an error stating that the identifier for the HTML ID value "does not exist in the current context."
To solve it, I had to do the following:
**1. Run at server**
To access the HTML element as a variable in the C# code, the following value must be placed in the HTML element tag in the aspx file:
```
runat="server"
```
Some objects in the Toolbox in the Visual Studio IDE do not automatically include this value when added to the page.
**2. Regenerate the auto-generated C# file:**
* In the Solution Explorer, under the aspx file there should be two files: \*.aspx.cs and \*.aspx.designer.cs. The designer file is auto-generated.
* Delete the existing \*.aspx.designer.cs file. Make sure you only delete the ***designer*** file. Do not delete the other one, because it contains your C# code for the page.
* Right-click on the parent aspx file or Project menu. In the pop-up menu, select **Convert to Web Application**.
Now the element should be accessible in the C# code file. | The name 'controlname' does not exist in the current context | [
"c#",
"asp.net",
"visual-studio-2005",
"code-behind"
] |
Will the initialization list always be processed before the constructor code?
In other words, will the following code always print `<unknown>`, and the constructed class will have "known" as value for `source_` (if the global variable `something` is `true`)?
```
class Foo {
std::string source_;
public:
Foo() : source_("<unknown>") {
std::cout << source_ << std::endl;
if(something){
source_ = "known";
}
}
};
``` | Yes, it will, as per `C++11: 12.6.2 /10` (same section in `C++14`, `15.6.2 /13` in `C++17`):
---
In a non-delegating constructor, initialization proceeds in the following order (my bold):
* First, and only for the constructor of the most derived class (1.8), virtual base classes are initialized in the order they appear on a depth-first left-to-right traversal of the directed acyclic graph of base classes, where “left-to-right” is the order of appearance of the base classes in the derived class base-specifier-list.
* Then, direct base classes are initialized in declaration order as they appear in the base-specifier-list (regardless of the order of the mem-initializers).
* Then, non-static data members are initialized in the order they were declared in the class definition (again regardless of the order of the mem-initializers).
* ***Finally,*** the compound-statement of the constructor body is executed.
---
The main reason for using init-lists is to help the compiler with optimisation. Init-lists for non-basic types (i.e., class objects rather than `int`, `float`, etc.) can generally be constructed in-place.
If you create the object then assign to it in the constructor, this generally results in the creation and destruction of temporary objects, which is inefficient.
Init-lists can avoid this (if the compiler is up to it, of course but most of them should be).
The following complete program will output 7 but this is for a specific compiler (CygWin g++) so it doesn't guarantee that behaviour any more than the sample in the original question.
However, as per the citation in the first paragraph above, the standard *does* actually guarantee it.
```
#include <iostream>
class Foo {
int x;
public:
Foo(): x(7) {
std::cout << x << std::endl;
}
};
int main (void) {
Foo foo;
return 0;
}
``` | Yes, C++ constructs all the members before calling the constructor code. | Will the initialization list always be processed before the constructor code? | [
"c++",
"constructor",
"initialization"
] |
I develop commercial unmanaged C++ app on Visual Studio 2008, and I want to add a static-code analysis tool.
Any recommendations?
I think it would be really nice if the tool could be integrated into MSVC.
I'm thinking about [PC-Lint](http://www.gimpel.com/) + [Visual Lint](http://www.riverblade.co.uk/products/visual_lint/index.html)
However, I have been taking a hard look at [Coverity](http://www.coverity.com/), [Understand](http://www.scitools.com), and [Klocwork](http://www.klocwork.com/) as well.
Price isn't really the issue. I want opinions from people who have actually used the tool for **unmanaged C++** on MSVC and absolutely loved it.
Lastly, VSTS and Intel Parallel Studio now also offer static code analysis. Nice~
Note: related [post](https://stackoverflow.com/questions/639694/are-c-static-code-analyis-tools-worth-it) suggests Coverity is the best (?) (see last 2 posts) | I work for RedLizard building [Goanna](http://redlizards.com), a C++ static analysis plugin for Visual Studio. Its focus is on desktop use by a programmer. You can run it on individual files, just as you do the compiler, and it can give you results quickly.
There is a [trial available](http://redlizards.com/download.html). Right-click a file, select *Run Goanna*, and the results appear in the Visual Studio warnings list. | Beyond all those you mentioned, VS Team Developer edition comes bundled with a nice static analysis tool called prefast. It's (obviously..) well integrated into the IDE, and accessible via the menus.
It's in fact a public release of an MS internal tool - a thin version of a tool called Prefix they run on their builds. Personally, when I faced the same decision, prefast sufficed. | Static-code analyzer: unmanaged C++ Visual Studio 2008 | [
"c++",
"visual-studio",
"visual-c++",
"static-analysis"
] |
I have a 384MB text file with 50 million lines. Each line contains 2 space-separated integers: a key and a value. The file is sorted by key. I need an efficient way of looking up the values of a list of about 200 keys in Python.
My current approach is included below. It takes 30 seconds. There must be more efficient Python foo to get this down to a reasonable efficiency of a couple of seconds at most.
```
# list contains a sorted list of the keys we need to lookup
# there is a sentinel at the end of list to simplify the code
# we use pointer to iterate through the list of keys
for line in fin:
line = map(int, line.split())
while line[0] == list[pointer].key:
list[pointer].value = line[1]
pointer += 1
while line[0] > list[pointer].key:
pointer += 1
if pointer >= len(list) - 1:
break # end of list; -1 is due to sentinel
```
Coded binary search + seek solution (thanks kigurai!):
```
entries = 24935502 # number of entries
width = 18 # fixed width of an entry in the file padded with spaces
# at the end of each line
for i, search in enumerate(list): # list contains the list of search keys
left, right = 0, entries-1
key = None
while key != search and left <= right:
mid = (left + right) / 2
fin.seek(mid * width)
key, value = map(int, fin.readline().split())
if search > key:
left = mid + 1
else:
right = mid - 1
if key != search:
value = None # for when search key is not found
search.result = value # store the result of the search
``` | If you only need 200 of 50 million lines, then reading all of it into memory is a waste. I would sort the list of search keys and then apply binary search to the file using seek() or something similar. This way you would not read the entire file into memory, which I think should speed things up. | Slight optimization of S. Lott's answer:
```
from collections import defaultdict
keyValues= defaultdict(list)
targetKeys= # some list of keys as strings
for line in fin:
key, value = line.split()
if key in targetKeys:
keyValues[key].append( value )
```
Since we're using a dictionary rather than a list, the keys don't have to be numbers. This saves the map() operation and a string to integer conversion for each line. If you want the keys to be numbers, do the conversion at the end, when you only have to do it once for each key, rather than for each of 50 million lines. | Reading Huge File in Python | [
"python",
"performance",
"file-io",
"large-files"
] |
I'm using `__init__()` like this in some SQLAlchemy ORM classes that have many parameters (upto 20).
```
def __init__(self, **kwargs):
for k, v in kwargs.iteritems():
setattr(self, k, v)
```
Is it "pythonic" to set attributes like this? | Yes. Another way to do this is.
```
def __init__(self, **kwargs):
self.__dict__.update( kwargs )
``` | Yes, if there's not a "nicer" way of supplying the arguments.
For example, using your ORM classes you mention, perhaps it would be more Python'y to allow..
```
col = Varchar()
col.index = True
col.length = 255
```
..rather than..
```
col = Varchar(index = True, length = 255)
```
Okay that's not the best example, since the `**kwargs` method would actually be nicer.. but my point is you should always consider alternative methods of achieving something, before using sometimes-discouraged things like `**kwargs`..
Another thing to keep in mind is you might lose behaviour a user expects, such as raising a TypeError if the user supplies an invalid keyword arg, which could be worked around like..
```
def __init__(self, **kwargs):
valid_kwargs = ['x', 'y', 'z']
for k, v in kwargs.iteritems():
if k not in valid_kwargs:
raise TypeError("Invalid keyword argument %s" % k)
setattr(self, k, v)
```
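For reference, here is a runnable Python 3 version of the same validation idea (`iteritems()` became `items()` in Python 3; the `Point` class and its allowed keys are my own illustration, not part of the original answer):

```python
class Point:
    VALID_KWARGS = ('x', 'y', 'z')

    def __init__(self, **kwargs):
        for k, v in kwargs.items():  # Python 3 spelling of iteritems()
            if k not in self.VALID_KWARGS:
                raise TypeError("Invalid keyword argument %s" % k)
            setattr(self, k, v)

p = Point(x=1, y=2)
assert (p.x, p.y) == (1, 2)

raised = False
try:
    Point(w=5)
except TypeError:
    raised = True
assert raised  # unknown keywords are rejected, matching normal __init__ behaviour
```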
A final thing to consider:
```
class Hmm:
def __init__(self, **kwargs):
for k, v in kwargs.iteritems():
setattr(self, k, v)
def mymethod(self):
print "mymethod should print this message.."
x = Hmm(mymethod = None)
x.mymethod() # raises TypeError: 'NoneType' object is not callable
``` | setattr with kwargs, pythonic or not? | [
"python",
"initialization"
] |
Many methods that used to return lists in Python 2.x now seem to return iterators in Py3k
Are iterators also generator expressions? Lazy evaluation?
Thus, with this the memory footprint of python is going to reduce drastically. Isn't it?
What about for the programs converted from 2to3 using the builtin script?
Does the builtin tool explicitly convert all the returned iterators into lists, for compatibility? If so then the lower memory footprint benefit of Py3k is not really apparent in the converted programs. Is it? | Many of them are not exactly iterators, but special view objects. For instance range() now returns something similar to the old xrange object - it can still be indexed, but lazily constructs the integers as needed.
Similarly dict.keys() gives a dict\_keys object implementing a view on the dict, rather than creating a new list with a copy of the keys.
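A quick Python 3 illustration of both lazy behaviours just described (my own example, not part of the original answer):

```python
r = range(1_000_000_000)           # no billion-element list is ever built
assert len(r) == 1_000_000_000
assert r[123] == 123               # still indexable; values are computed on demand

d = {'a': 1}
keys = d.keys()                    # a live view of the dict, not a snapshot list
d['b'] = 2
assert sorted(keys) == ['a', 'b']  # the view reflects the later insertion
```

Both objects take roughly constant extra memory no matter how large the range or dict they describe becomes.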
How this affects memory footprints probably depends on the program. Certainly there's more of an emphasis towards using iterators unless you really need lists, whereas using lists was generally the default case in python2. That will cause the average program to probably be more memory efficient. Cases where there are really big savings are probably going to already be implemented as iterators in python2 programs however, as really large memory usage will stand out, and is more likely to be already addressed. (eg. the file iterator is already much more memory efficient than the older `file.readlines()` method)
Converting is done by the 2to3 tool, and will generally convert things like range() to iterators where it can safely determine a real list isn't needed, so code like:
```
for x in range(10): print x
```
will switch to the new range() object, no longer creating a list, and so will obtain the reduced memory benefit, but code like:
```
x = range(20)
```
will be converted as:
```
x = list(range(20))
```
as the converter can't know if the code expects a *real* list object in x. | > Are iterators also generator expressions? Lazy evaluation?
An iterator is just an object with a next method. What the documentation means most of the time when saying that a function returns an iterator is that its result is lazily loaded.
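For illustration (my sketch, not the answerer's), the protocol in Python 3 terms, where the `next` method is spelled `__next__`:

```python
class Countdown:
    """A minimal hand-rolled iterator: __iter__ returns self, __next__ yields lazily."""
    def __init__(self, n):
        self.n = n

    def __iter__(self):
        return self

    def __next__(self):
        if self.n <= 0:
            raise StopIteration
        self.n -= 1
        return self.n + 1

assert list(Countdown(3)) == [3, 2, 1]  # values are produced one at a time
```

Nothing is computed until something asks for the next value, which is exactly the lazy loading described above.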
> Thus, with this the memory footprint of python is going to reduce drastically. Isn't it?
It depends. I'd guess that the average program wouldn't notice a *huge* difference though. The performance advantages of iterators over lists is really only significant if you have a large dataset. You may want to see [this question](https://stackoverflow.com/questions/628903/performance-advantages-to-iterators). | Py3k memory conservation by returning iterators rather than lists | [
"python",
"memory",
"list",
"iterator",
"python-3.x"
] |
Wondering how much effort I should go to forcing useful debugging information when creating exception messages, or should I just trust the user to supply the right info, or defer the information gathering to an exception handler?
I see a lot of people doing their exceptions like:
```
throw new RuntimeException('MyObject is not an array')
```
or extending the default exceptions with custom exceptions that don't do much but change the name of the exception:
```
throw new WrongTypeException('MyObject is not an array')
```
But this doesn't supply much debugging info... and doesn't enforce any kind of formatting with the error message. So you could end up with exactly the same error producing two different error messages... eg "Database connection failed" vs "Could not connect to db"
Sure, if it bubbles to the top, it'll print the stack trace, which is useful, but it doesn't always tell me everything I need to know and usually I end up having to start shooting off var\_dump() statements to discover what went wrong and where... though this could be somewhat offset with a decent exception handler.
I'm starting to think about something like the code below, where I *require* the thrower of the exception to supply necessary args to produce the correct error message. I'm thinking this might be the way to go in that:
* Minimum level of useful information must be supplied
* Produces somewhat consistent error messages
* Templates for exception messages all in the one location (exception classes), so easier to update the messages...
But I see the downside being that they are harder to use (requires you look up exception definition), and thus might discourage other programmers from using supplied exceptions...
I'd like some comment on this idea, & best practices for a consistent, flexible exception message framework.
```
/**
* @package MyExceptions
* MyWrongTypeException occurs when an object or
* datastructure is of the incorrect datatype.
* Program defensively!
* @param $objectName string name of object, eg "\$myObject"
* @param $object object object of the wrong type
* @param $expected string expected type of object eg 'integer'
* @param $message any additional human readable info.
* @param $code error code.
* @return Informative exception error message.
* @author secoif
*/
class MyWrongTypeException extends RuntimeException {
public function __construct($objectName, $object, $expected, $message = '', $code = 0) {
$receivedType = gettype($object);
$message = "Wrong Type: $objectName. Expected $expected, received $receivedType";
debug_dump($message, $object);
return parent::__construct($message, $code);
}
}
```
....
```
/**
* If we are in debug mode, append the var_dump of $object to $message
*/
function debug_dump(&$message, &$object) {
if (App::get_mode() == 'debug') {
ob_start();
var_dump($object);
$message = $message . "Debug Info: " . ob_get_clean();
}
}
```
Then used like:
```
// Hypothetical, supposed to return an array of user objects
$users = get_users(); // but instead returns the string 'bad'
// Ideally the $users model object would provide a validate() but for the sake
// of the example
if (!is_array($users)) {
throw new MyWrongTypeException('$users', $users, 'array');
// returns
//"Wrong Type: $users. Expected array, received string
}
```
and we might do something like a nl2br in a custom exception handler to make things nice for html output.
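For what it's worth, that "format in one central handler" idea can be sketched like this (Python used purely for illustration; `set_exception_handler` would be the PHP counterpart, and `htmlize`/`html_excepthook` are invented names, not part of the question's code):

```python
import sys

def htmlize(message):
    # The nl2br analog: the presentation concern lives in one place.
    return message.replace("\n", "<br>\n")

def html_excepthook(exc_type, exc_value, tb):
    # A top-level handler formats any uncaught exception for HTML output,
    # so the exception classes themselves stay presentation-free.
    sys.stderr.write(htmlize(str(exc_value)) + "\n")

# Installing the handler is one line:
sys.excepthook = html_excepthook
```

The exception classes then only carry data; how the message looks is decided once, at the top.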
Been reading:
<http://msdn.microsoft.com/en-us/library/cc511859.aspx#>
And there is no mention of anything like this, so maybe it's a bad idea... | I strongly recommend the advice on [Krzysztof's blog](https://learn.microsoft.com/en-us/archive/blogs/kcwalina/how-to-design-exception-hierarchies) and would note that in your case you seem to be trying to deal with what he calls Usage Errors.
In this case what is required is not a new type to indicate it, but a better error message about what caused it. As such, a helper function to either:
1. generate the textual string to place into the exception
2. generate the whole exception and message
Is what is required.
Approach 1 is clearer but may lead to slightly more verbose usage; approach 2 is the opposite, trading a terser syntax for less clarity.
Note that the functions must be extremely safe (they should never, ever cause an unrelated exception themselves) and not force the provision of data that is optional in certain reasonable uses.
By using either of these approaches you make it easier to internationalise the error message later if required.
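The two helper styles might look like this (an illustrative Python sketch; the function names are invented, not part of the answer):

```python
def wrong_type_message(name, value, expected):
    # Approach 1: the helper only generates the text; the caller raises.
    return "Wrong Type: {0}. Expected {1}, received {2}".format(
        name, expected, type(value).__name__)

def wrong_type_error(name, value, expected):
    # Approach 2: the helper builds the whole exception; terser call sites,
    # but the raise site is slightly less explicit.
    return TypeError(wrong_type_message(name, value, expected))

# Approach 1, explicit:  raise TypeError(wrong_type_message("users", "bad", "list"))
# Approach 2, terse:     raise wrong_type_error("users", "bad", "list")
```

Either way, the message template lives in exactly one place, which also keeps later internationalisation cheap.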
A stack trace at a minimum gives you the function, and possibly the line number, thus you should focus on supplying information that is not easy to work out from that. | I won't detract from the advice regarding Krzysztof's blog, but here is a dead-easy way to create custom exceptions.
### Updated for PHP 8.1
```
class ResponseSendResponse extends Exception
{
/**
* Construct the exception. Note: The message is NOT binary safe.
* @link https://php.net/manual/en/exception.construct.php
* @param string $message [optional] The Exception message to throw.
* @param int $code [optional] The Exception code.
*/
    public function __construct(string $message = '', int $code = 422) {
        parent::__construct($message, $code);
    }
}
```
### PHP 5-7
Example:
```
<?php
require_once "CustomException.php";
class SqlProxyException extends CustomException {}
throw new SqlProxyException($errorMsg, mysql_errno());
?>
```
The code behind that (which I borrowed somewhere, apologies to whomever that was)
```
<?php
interface IException
{
/* Protected methods inherited from Exception class */
public function getMessage(); // Exception message
public function getCode(); // User-defined Exception code
public function getFile(); // Source filename
public function getLine(); // Source line
public function getTrace(); // An array of the backtrace()
    public function getTraceAsString();       // Formatted string of trace
/* Overrideable methods inherited from Exception class */
    public function __toString();             // formatted string for display
public function __construct($message = null, $code = 0);
}
abstract class CustomException extends Exception implements IException
{
protected $message = 'Unknown exception'; // Exception message
private $string; // Unknown
protected $code = 0; // User-defined exception code
protected $file; // Source filename of exception
protected $line; // Source line of exception
private $trace; // Unknown
public function __construct($message = null, $code = 0)
{
if (!$message) {
throw new $this('Unknown '. get_class($this));
}
parent::__construct($message, $code);
}
public function __toString()
{
return get_class($this) . " '{$this->message}' in {$this->file}({$this->line})\n"
. "{$this->getTraceAsString()}";
}
}
``` | Custom Exception Messages: Best practices | [
"",
"php",
"exception",
""
] |
If a HANDLE is an output parameter, is it necessary to use a reference to the HANDLE or
use HANDLE directly?
```
bool fn(HANDLE h_result);
```
or
```
bool fn(HANDLE& h_result);
``` | To return values you can:
1. Use a reference as a parameter
2. Use a pointer as a parameter
3. Actually return the value via `return`
When you specify the following:
```
bool fn(HANDLE h_result);
```
You are making a copy of h\_result. So if you change it in your function, you are changing the copy, not the original.
When dealing with pointers, the concept is exactly the same. Just remember that a pointer is simply a variable that holds a memory address. It doesn't matter what's in that memory address. If you want to return a memory address via a parameter, then you need to use a pointer to a pointer or a pointer reference. | In your case, I would rather do something like:
```
HANDLE fn()
```
and return an invalid (or NULL) handle in case of failure and a valid handle otherwise. Of course, I am assuming that the HANDLE type is somehow similar to what the Win32 API uses as a handle.
As for your question, pass output parameters by reference or by pointer, so either:
```
bool fn( HANDLE& out)
```
or
```
bool fn( HANDLE* out)
```
In the second example, you should make sure that the passed pointer is not NULL, usually with an assert:
```
bool fn( HANDLE* out)
{
assert( NULL != out);
// ... rest of the code
}
```
Another disadvantage of passing the output parameters as pointers is that you can't know for sure if the passed pointer is valid or not (was initialized or not).
Passing by reference is sometimes less readable when it comes to output parameters. Calling `bool result = fn( myHandle);` doesn't give the developer a hint that myHandle is an output parameter, and some subtle problems may occur due to reduced code readability.
In conclusion, I recommend avoiding output parameters as much as possible for the sake of code readability, but if you really have no choice, use references instead of pointers if the output parameter is mandatory. | Do I need to use reference parameters for returning values? | [
"",
"c++",
"winapi",
"api",
""
] |
How do I make two decorators in Python that would do the following?
```
@make_bold
@make_italic
def say():
return "Hello"
```
Calling `say()` should return:
```
"<b><i>Hello</i></b>"
``` | Check out [the documentation](http://docs.python.org/reference/compound_stmts.html#function) to see how decorators work. Here is what you asked for:
```
from functools import wraps
def makebold(fn):
@wraps(fn)
def wrapper(*args, **kwargs):
return "<b>" + fn(*args, **kwargs) + "</b>"
return wrapper
def makeitalic(fn):
@wraps(fn)
def wrapper(*args, **kwargs):
return "<i>" + fn(*args, **kwargs) + "</i>"
return wrapper
@makebold
@makeitalic
def hello():
return "hello world"
@makebold
@makeitalic
def log(s):
return s
print hello() # returns "<b><i>hello world</i></b>"
print hello.__name__ # with functools.wraps() this returns "hello"
print log('hello') # returns "<b><i>hello</i></b>"
``` | If you are not into long explanations, see [Paolo Bergantino’s answer](https://stackoverflow.com/questions/739654/understanding-python-decorators#answer-739665).
# Decorator Basics
## Python’s functions are objects
To understand decorators, you must first understand that functions are objects in Python. This has important consequences. Let's see why with a simple example:
```
def shout(word="yes"):
return word.capitalize()+"!"
print(shout())
# outputs : 'Yes!'
# As an object, you can assign the function to a variable like any other object
scream = shout
# Notice we don't use parentheses: we are not calling the function,
# we are putting the function "shout" into the variable "scream".
# It means you can then call "shout" from "scream":
print(scream())
# outputs : 'Yes!'
# More than that, it means you can remove the old name 'shout',
# and the function will still be accessible from 'scream'
del shout
try:
print(shout())
except NameError as e:
print(e)
#outputs: "name 'shout' is not defined"
print(scream())
# outputs: 'Yes!'
```
Keep this in mind. We’ll circle back to it shortly.
Another interesting property of Python functions is they can be defined inside another function!
```
def talk():
# You can define a function on the fly in "talk" ...
def whisper(word="yes"):
return word.lower()+"..."
# ... and use it right away!
print(whisper())
# You call "talk", that defines "whisper" EVERY TIME you call it, then
# "whisper" is called in "talk".
talk()
# outputs:
# "yes..."
# But "whisper" DOES NOT EXIST outside "talk":
try:
print(whisper())
except NameError as e:
print(e)
#outputs : "name 'whisper' is not defined"*
#Python's functions are objects
```
## Functions references
Okay, still here? Now the fun part...
You’ve seen that functions are objects. Therefore, functions:
* can be assigned to a variable
* can be defined in another function
That means that **a function can `return` another function**.
```
def getTalk(kind="shout"):
# We define functions on the fly
def shout(word="yes"):
return word.capitalize()+"!"
def whisper(word="yes") :
return word.lower()+"..."
# Then we return one of them
if kind == "shout":
# We don't use "()", we are not calling the function,
# we are returning the function object
return shout
else:
return whisper
# How do you use this strange beast?
# Get the function and assign it to a variable
talk = getTalk()
# You can see that "talk" is here a function object:
print(talk)
#outputs : <function shout at 0xb7ea817c>
# The object is the one returned by the function:
print(talk())
#outputs : Yes!
# And you can even use it directly if you feel wild:
print(getTalk("whisper")())
#outputs : yes...
```
There’s more!
If you can `return` a function, you can pass one as a parameter:
```
def doSomethingBefore(func):
print("I do something before then I call the function you gave me")
print(func())
doSomethingBefore(scream)
#outputs:
#I do something before then I call the function you gave me
#Yes!
```
Well, you just have everything needed to understand decorators. You see, decorators are “wrappers”, which means that **they let you execute code before and after the function they decorate** without modifying the function itself.
## Handcrafted decorators
How you’d do it manually:
```
# A decorator is a function that expects ANOTHER function as parameter
def my_shiny_new_decorator(a_function_to_decorate):
# Inside, the decorator defines a function on the fly: the wrapper.
# This function is going to be wrapped around the original function
# so it can execute code before and after it.
def the_wrapper_around_the_original_function():
# Put here the code you want to be executed BEFORE the original function is called
print("Before the function runs")
# Call the function here (using parentheses)
a_function_to_decorate()
# Put here the code you want to be executed AFTER the original function is called
print("After the function runs")
# At this point, "a_function_to_decorate" HAS NEVER BEEN EXECUTED.
# We return the wrapper function we have just created.
# The wrapper contains the function and the code to execute before and after. It’s ready to use!
return the_wrapper_around_the_original_function
# Now imagine you create a function you don't want to ever touch again.
def a_stand_alone_function():
print("I am a stand alone function, don't you dare modify me")
a_stand_alone_function()
#outputs: I am a stand alone function, don't you dare modify me
# Well, you can decorate it to extend its behavior.
# Just pass it to the decorator, it will wrap it dynamically in
# any code you want and return you a new function ready to be used:
a_stand_alone_function_decorated = my_shiny_new_decorator(a_stand_alone_function)
a_stand_alone_function_decorated()
#outputs:
#Before the function runs
#I am a stand alone function, don't you dare modify me
#After the function runs
```
Now, you probably want that every time you call `a_stand_alone_function`, `a_stand_alone_function_decorated` is called instead. That’s easy, just overwrite `a_stand_alone_function` with the function returned by `my_shiny_new_decorator`:
```
a_stand_alone_function = my_shiny_new_decorator(a_stand_alone_function)
a_stand_alone_function()
#outputs:
#Before the function runs
#I am a stand alone function, don't you dare modify me
#After the function runs
# That’s EXACTLY what decorators do!
```
## Decorators demystified
The previous example, using the decorator syntax:
```
@my_shiny_new_decorator
def another_stand_alone_function():
print("Leave me alone")
another_stand_alone_function()
#outputs:
#Before the function runs
#Leave me alone
#After the function runs
```
Yes, that’s all, it’s that simple. `@decorator` is just a shortcut to:
```
another_stand_alone_function = my_shiny_new_decorator(another_stand_alone_function)
```
Decorators are just a pythonic variant of the [decorator design pattern](http://en.wikipedia.org/wiki/Decorator_pattern). There are several classic design patterns embedded in Python to ease development (like iterators).
Of course, you can accumulate decorators:
```
def bread(func):
def wrapper():
print("</''''''\>")
func()
print("<\______/>")
return wrapper
def ingredients(func):
def wrapper():
print("#tomatoes#")
func()
print("~salad~")
return wrapper
def sandwich(food="--ham--"):
print(food)
sandwich()
#outputs: --ham--
sandwich = bread(ingredients(sandwich))
sandwich()
#outputs:
#</''''''\>
# #tomatoes#
# --ham--
# ~salad~
#<\______/>
```
Using the Python decorator syntax:
```
@bread
@ingredients
def sandwich(food="--ham--"):
print(food)
sandwich()
#outputs:
#</''''''\>
# #tomatoes#
# --ham--
# ~salad~
#<\______/>
```
The order you set the decorators MATTERS:
```
@ingredients
@bread
def strange_sandwich(food="--ham--"):
print(food)
strange_sandwich()
#outputs:
##tomatoes#
#</''''''\>
# --ham--
#<\______/>
# ~salad~
```
---
# Now: to answer the question...
As a conclusion, you can easily see how to answer the question:
```
# The decorator to make it bold
def makebold(fn):
# The new function the decorator returns
def wrapper():
# Insertion of some code before and after
return "<b>" + fn() + "</b>"
return wrapper
# The decorator to make it italic
def makeitalic(fn):
# The new function the decorator returns
def wrapper():
# Insertion of some code before and after
return "<i>" + fn() + "</i>"
return wrapper
@makebold
@makeitalic
def say():
return "hello"
print(say())
#outputs: <b><i>hello</i></b>
# This is the exact equivalent to
def say():
return "hello"
say = makebold(makeitalic(say))
print(say())
#outputs: <b><i>hello</i></b>
```
You can now just leave happy, or burn your brain a little bit more and see advanced uses of decorators.
---
# Taking decorators to the next level
## Passing arguments to the decorated function
```
# It’s not black magic, you just have to let the wrapper
# pass the argument:
def a_decorator_passing_arguments(function_to_decorate):
def a_wrapper_accepting_arguments(arg1, arg2):
print("I got args! Look: {0}, {1}".format(arg1, arg2))
function_to_decorate(arg1, arg2)
return a_wrapper_accepting_arguments
# Since when you are calling the function returned by the decorator, you are
# calling the wrapper, passing arguments to the wrapper will let it pass them to
# the decorated function
@a_decorator_passing_arguments
def print_full_name(first_name, last_name):
print("My name is {0} {1}".format(first_name, last_name))
print_full_name("Peter", "Venkman")
# outputs:
#I got args! Look: Peter Venkman
#My name is Peter Venkman
```
## Decorating methods
One nifty thing about Python is that methods and functions are really the same. The only difference is that methods expect that their first argument is a reference to the current object (`self`).
That means you can build a decorator for methods the same way! Just remember to take `self` into consideration:
```
def method_friendly_decorator(method_to_decorate):
def wrapper(self, lie):
lie = lie - 3 # very friendly, decrease age even more :-)
return method_to_decorate(self, lie)
return wrapper
class Lucy(object):
def __init__(self):
self.age = 32
@method_friendly_decorator
def sayYourAge(self, lie):
print("I am {0}, what did you think?".format(self.age + lie))
l = Lucy()
l.sayYourAge(-3)
#outputs: I am 26, what did you think?
```
If you're making a general-purpose decorator--one you'll apply to any function or method, no matter its arguments--then just use `*args, **kwargs`:
```
def a_decorator_passing_arbitrary_arguments(function_to_decorate):
# The wrapper accepts any arguments
def a_wrapper_accepting_arbitrary_arguments(*args, **kwargs):
print("Do I have args?:")
print(args)
print(kwargs)
# Then you unpack the arguments, here *args, **kwargs
# If you are not familiar with unpacking, check:
# http://www.saltycrane.com/blog/2008/01/how-to-use-args-and-kwargs-in-python/
function_to_decorate(*args, **kwargs)
return a_wrapper_accepting_arbitrary_arguments
@a_decorator_passing_arbitrary_arguments
def function_with_no_argument():
print("Python is cool, no argument here.")
function_with_no_argument()
#outputs
#Do I have args?:
#()
#{}
#Python is cool, no argument here.
@a_decorator_passing_arbitrary_arguments
def function_with_arguments(a, b, c):
print(a, b, c)
function_with_arguments(1,2,3)
#outputs
#Do I have args?:
#(1, 2, 3)
#{}
#1 2 3
@a_decorator_passing_arbitrary_arguments
def function_with_named_arguments(a, b, c, platypus="Why not ?"):
print("Do {0}, {1} and {2} like platypus? {3}".format(a, b, c, platypus))
function_with_named_arguments("Bill", "Linus", "Steve", platypus="Indeed!")
#outputs
#Do I have args ? :
#('Bill', 'Linus', 'Steve')
#{'platypus': 'Indeed!'}
#Do Bill, Linus and Steve like platypus? Indeed!
class Mary(object):
def __init__(self):
self.age = 31
@a_decorator_passing_arbitrary_arguments
def sayYourAge(self, lie=-3): # You can now add a default value
print("I am {0}, what did you think?".format(self.age + lie))
m = Mary()
m.sayYourAge()
#outputs
# Do I have args?:
#(<__main__.Mary object at 0xb7d303ac>,)
#{}
#I am 28, what did you think?
```
## Passing arguments to the decorator
Great, now what would you say about passing arguments to the decorator itself?
This can get somewhat twisted, since a decorator must accept a function as an argument. Therefore, you cannot pass the decorated function’s arguments directly to the decorator.
Before rushing to the solution, let’s write a little reminder:
```
# Decorators are ORDINARY functions
def my_decorator(func):
print("I am an ordinary function")
def wrapper():
print("I am function returned by the decorator")
func()
return wrapper
# Therefore, you can call it without any "@"
def lazy_function():
print("zzzzzzzz")
decorated_function = my_decorator(lazy_function)
#outputs: I am an ordinary function
# It outputs "I am an ordinary function", because that’s just what you do:
# calling a function. Nothing magic.
@my_decorator
def lazy_function():
print("zzzzzzzz")
#outputs: I am an ordinary function
```
It’s exactly the same. "`my_decorator`" is called. So when you `@my_decorator`, you are telling Python to call the function 'labelled by the variable "`my_decorator`"'.
This is important! The label you give can point directly to the decorator—**or not**.
Let’s get evil. ☺
```
def decorator_maker():
print("I make decorators! I am executed only once: "
"when you make me create a decorator.")
def my_decorator(func):
print("I am a decorator! I am executed only when you decorate a function.")
def wrapped():
print("I am the wrapper around the decorated function. "
"I am called when you call the decorated function. "
"As the wrapper, I return the RESULT of the decorated function.")
return func()
print("As the decorator, I return the wrapped function.")
return wrapped
print("As a decorator maker, I return a decorator")
return my_decorator
# Let’s create a decorator. It’s just a new function after all.
new_decorator = decorator_maker()
#outputs:
#I make decorators! I am executed only once: when you make me create a decorator.
#As a decorator maker, I return a decorator
# Then we decorate the function
def decorated_function():
print("I am the decorated function.")
decorated_function = new_decorator(decorated_function)
#outputs:
#I am a decorator! I am executed only when you decorate a function.
#As the decorator, I return the wrapped function
# Let’s call the function:
decorated_function()
#outputs:
#I am the wrapper around the decorated function. I am called when you call the decorated function.
#As the wrapper, I return the RESULT of the decorated function.
#I am the decorated function.
```
No surprise here.
Let’s do EXACTLY the same thing, but skip all the pesky intermediate variables:
```
def decorated_function():
print("I am the decorated function.")
decorated_function = decorator_maker()(decorated_function)
#outputs:
#I make decorators! I am executed only once: when you make me create a decorator.
#As a decorator maker, I return a decorator
#I am a decorator! I am executed only when you decorate a function.
#As the decorator, I return the wrapped function.
# Finally:
decorated_function()
#outputs:
#I am the wrapper around the decorated function. I am called when you call the decorated function.
#As the wrapper, I return the RESULT of the decorated function.
#I am the decorated function.
```
Let’s make it *even shorter*:
```
@decorator_maker()
def decorated_function():
print("I am the decorated function.")
#outputs:
#I make decorators! I am executed only once: when you make me create a decorator.
#As a decorator maker, I return a decorator
#I am a decorator! I am executed only when you decorate a function.
#As the decorator, I return the wrapped function.
#Eventually:
decorated_function()
#outputs:
#I am the wrapper around the decorated function. I am called when you call the decorated function.
#As the wrapper, I return the RESULT of the decorated function.
#I am the decorated function.
```
Hey, did you see that? We used a function call with the "`@`" syntax! :-)
So, back to decorators with arguments. If we can use functions to generate the decorator on the fly, we can pass arguments to that function, right?
```
def decorator_maker_with_arguments(decorator_arg1, decorator_arg2):
print("I make decorators! And I accept arguments: {0}, {1}".format(decorator_arg1, decorator_arg2))
def my_decorator(func):
# The ability to pass arguments here is a gift from closures.
# If you are not comfortable with closures, you can assume it’s ok,
# or read: https://stackoverflow.com/questions/13857/can-you-explain-closures-as-they-relate-to-python
print("I am the decorator. Somehow you passed me arguments: {0}, {1}".format(decorator_arg1, decorator_arg2))
# Don't confuse decorator arguments and function arguments!
def wrapped(function_arg1, function_arg2) :
print("I am the wrapper around the decorated function.\n"
"I can access all the variables\n"
"\t- from the decorator: {0} {1}\n"
"\t- from the function call: {2} {3}\n"
"Then I can pass them to the decorated function"
.format(decorator_arg1, decorator_arg2,
function_arg1, function_arg2))
return func(function_arg1, function_arg2)
return wrapped
return my_decorator
@decorator_maker_with_arguments("Leonard", "Sheldon")
def decorated_function_with_arguments(function_arg1, function_arg2):
print("I am the decorated function and only knows about my arguments: {0}"
" {1}".format(function_arg1, function_arg2))
decorated_function_with_arguments("Rajesh", "Howard")
#outputs:
#I make decorators! And I accept arguments: Leonard Sheldon
#I am the decorator. Somehow you passed me arguments: Leonard Sheldon
#I am the wrapper around the decorated function.
#I can access all the variables
# - from the decorator: Leonard Sheldon
# - from the function call: Rajesh Howard
#Then I can pass them to the decorated function
#I am the decorated function and only knows about my arguments: Rajesh Howard
```
Here it is: a decorator with arguments. Arguments can be set as variable:
```
c1 = "Penny"
c2 = "Leslie"
@decorator_maker_with_arguments("Leonard", c1)
def decorated_function_with_arguments(function_arg1, function_arg2):
print("I am the decorated function and only knows about my arguments:"
" {0} {1}".format(function_arg1, function_arg2))
decorated_function_with_arguments(c2, "Howard")
#outputs:
#I make decorators! And I accept arguments: Leonard Penny
#I am the decorator. Somehow you passed me arguments: Leonard Penny
#I am the wrapper around the decorated function.
#I can access all the variables
# - from the decorator: Leonard Penny
# - from the function call: Leslie Howard
#Then I can pass them to the decorated function
#I am the decorated function and only know about my arguments: Leslie Howard
```
As you can see, you can pass arguments to the decorator like any function using this trick. You can even use `*args, **kwargs` if you wish. But remember decorators are called **only once**. Just when Python imports the script. You can't dynamically set the arguments afterwards. When you do "import x", **the function is already decorated**, so you can't change anything.
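A tiny sketch of that "decorated once, at import time" point (illustrative names only):

```python
applied = []

def register(func):
    # This body runs ONCE, when the "def" statement below is executed
    # (i.e. at import time), not on every call.
    applied.append(func.__name__)
    return func

@register
def greet():
    return "hi"

# The decorator has already run before any call: applied == ['greet'].
greet()
greet()
# Two calls later, applied is STILL ['greet']; the decorator itself never ran again.
```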
---
# Let’s practice: decorating a decorator
Okay, as a bonus, I'll give you a snippet to make any decorator generically accept any argument. After all, in order to accept arguments, we created our decorator using another function.
We wrapped the decorator.
Anything else we saw recently that wrapped function?
Oh yes, decorators!
Let’s have some fun and write a decorator for the decorators:
```
def decorator_with_args(decorator_to_enhance):
"""
This function is supposed to be used as a decorator.
It must decorate an other function, that is intended to be used as a decorator.
Take a cup of coffee.
It will allow any decorator to accept an arbitrary number of arguments,
saving you the headache to remember how to do that every time.
"""
# We use the same trick we did to pass arguments
def decorator_maker(*args, **kwargs):
# We create on the fly a decorator that accepts only a function
# but keeps the passed arguments from the maker.
def decorator_wrapper(func):
# We return the result of the original decorator, which, after all,
# IS JUST AN ORDINARY FUNCTION (which returns a function).
# Only pitfall: the decorator must have this specific signature or it won't work:
return decorator_to_enhance(func, *args, **kwargs)
return decorator_wrapper
return decorator_maker
```
It can be used as follows:
```
# You create the function you will use as a decorator. And stick a decorator on it :-)
# Don't forget, the signature is "decorator(func, *args, **kwargs)"
@decorator_with_args
def decorated_decorator(func, *args, **kwargs):
def wrapper(function_arg1, function_arg2):
print("Decorated with {0} {1}".format(args, kwargs))
return func(function_arg1, function_arg2)
return wrapper
# Then you decorate the functions you wish with your brand new decorated decorator.
@decorated_decorator(42, 404, 1024)
def decorated_function(function_arg1, function_arg2):
print("Hello {0} {1}".format(function_arg1, function_arg2))
decorated_function("Universe and", "everything")
#outputs:
#Decorated with (42, 404, 1024) {}
#Hello Universe and everything
# Whoooot!
```
I know, the last time you had this feeling, it was after listening to a guy saying: "before understanding recursion, you must first understand recursion". But now, don't you feel good about mastering this?
---
# Best practices: decorators
* Decorators were introduced in Python 2.4, so be sure your code will be run on >= 2.4.
* Decorators slow down the function call. Keep that in mind.
* **You cannot un-decorate a function.** (There *are* hacks to create decorators that can be removed, but nobody uses them.) So once a function is decorated, it’s decorated *for all the code*.
* Decorators wrap functions, which can make them hard to debug. (This gets better from Python >= 2.5; see below.)
The `functools` module was introduced in Python 2.5. It includes the function `functools.wraps()`, which copies the name, module, and docstring of the decorated function to its wrapper.
(Fun fact: `functools.wraps()` is a decorator! ☺)
```
# For debugging, the stacktrace prints you the function __name__
def foo():
print("foo")
print(foo.__name__)
#outputs: foo
# With a decorator, it gets messy
def bar(func):
def wrapper():
print("bar")
return func()
return wrapper
@bar
def foo():
print("foo")
print(foo.__name__)
#outputs: wrapper
# "functools" can help for that
import functools
def bar(func):
# We say that "wrapper", is wrapping "func"
# and the magic begins
@functools.wraps(func)
def wrapper():
print("bar")
return func()
return wrapper
@bar
def foo():
print("foo")
print(foo.__name__)
#outputs: foo
```
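As a footnote to the "you cannot un-decorate" bullet above: when the wrapper is built with `functools.wraps`, Python 3.2+ also stores the original function on `__wrapped__`, which is the closest thing to an escape hatch (an aside, not a true un-decoration):

```python
import functools

def shout(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs).upper() + "!"
    return wrapper

@shout
def greet():
    return "hello"

print(greet())              # the decorated behavior: HELLO!
print(greet.__wrapped__())  # the original, stashed by functools.wraps: hello
```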
---
# How can the decorators be useful?
**Now the big question:** What can I use decorators for?
They seem cool and powerful, but a practical example would be great. Well, there are 1000 possibilities. Classic uses are extending a function's behavior from an external lib (you can't modify it), or debugging (you don't want to modify it because it's temporary).
You can use them to extend several functions in a DRY way, like so:
```
def benchmark(func):
"""
A decorator that prints the time a function takes
to execute.
"""
import time
def wrapper(*args, **kwargs):
t = time.clock()
res = func(*args, **kwargs)
print("{0} {1}".format(func.__name__, time.clock()-t))
return res
return wrapper
def logging(func):
"""
A decorator that logs the activity of the script.
(it actually just prints it, but it could be logging!)
"""
def wrapper(*args, **kwargs):
res = func(*args, **kwargs)
print("{0} {1} {2}".format(func.__name__, args, kwargs))
return res
return wrapper
def counter(func):
"""
A decorator that counts and prints the number of times a function has been executed
"""
def wrapper(*args, **kwargs):
wrapper.count = wrapper.count + 1
res = func(*args, **kwargs)
print("{0} has been used: {1}x".format(func.__name__, wrapper.count))
return res
wrapper.count = 0
return wrapper
@counter
@benchmark
@logging
def reverse_string(string):
return str(reversed(string))
print(reverse_string("Able was I ere I saw Elba"))
print(reverse_string("A man, a plan, a canoe, pasta, heros, rajahs, a coloratura, maps, snipe, percale, macaroni, a gag, a banana bag, a tan, a tag, a banana bag again (or a camel), a crepe, pins, Spam, a rut, a Rolo, cash, a jar, sore hats, a peon, a canal: Panama!"))
#outputs:
#reverse_string ('Able was I ere I saw Elba',) {}
#wrapper 0.0
#wrapper has been used: 1x
#ablE was I ere I saw elbA
#reverse_string ('A man, a plan, a canoe, pasta, heros, rajahs, a coloratura, maps, snipe, percale, macaroni, a gag, a banana bag, a tan, a tag, a banana bag again (or a camel), a crepe, pins, Spam, a rut, a Rolo, cash, a jar, sore hats, a peon, a canal: Panama!',) {}
#wrapper 0.0
#wrapper has been used: 2x
#!amanaP :lanac a ,noep a ,stah eros ,raj a ,hsac ,oloR a ,tur a ,mapS ,snip ,eperc a ,)lemac a ro( niaga gab ananab a ,gat a ,nat a ,gab ananab a ,gag a ,inoracam ,elacrep ,epins ,spam ,arutaroloc a ,shajar ,soreh ,atsap ,eonac a ,nalp a ,nam A
```
Of course the good thing with decorators is that you can use them right away on almost anything without rewriting. DRY, I said:
```
@counter
@benchmark
@logging
def get_random_futurama_quote():
from urllib import urlopen
result = urlopen("http://subfusion.net/cgi-bin/quote.pl?quote=futurama").read()
try:
value = result.split("<br><b><hr><br>")[1].split("<br><br><hr>")[0]
return value.strip()
except:
return "No, I'm ... doesn't!"
print(get_random_futurama_quote())
print(get_random_futurama_quote())
#outputs:
#get_random_futurama_quote () {}
#wrapper 0.02
#wrapper has been used: 1x
#The laws of science be a harsh mistress.
#get_random_futurama_quote () {}
#wrapper 0.01
#wrapper has been used: 2x
#Curse you, merciful Poseidon!
```
Python itself provides several decorators: `property`, `staticmethod`, etc.
* Django uses decorators to manage caching and view permissions.
* Twisted uses them to fake inlining asynchronous function calls.
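For instance, the built-in `property` decorator turns a method into an attribute-style accessor:

```python
class Circle(object):
    def __init__(self, radius):
        self.radius = radius

    @property
    def diameter(self):
        # Computed on every access; no parentheses at the call site.
        return self.radius * 2

c = Circle(3)
print(c.diameter)  # 6 -- property did the wrapping for us
```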
This really is a large playground. | How do I make function decorators and chain them together? | [
"",
"python",
"function",
"decorator",
"python-decorators",
"chain",
""
] |
WHen i try to connect to SQL Express 2005 from Visual Web Developer Express 2008, i was getting errors like 'Could not load file or assembly Microsoft.SqlServer.Management.Sdk.Sfc' .
I read some posts which advised me to download and install 3 applications to address above issue ( SharedManagementObjects.msi, sqlncli.msi,SQLSysClrTypes.msi ).
I did that, and now I get a different connection error:
'A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) '.
Does anyone know a way of overcoming this?
Thanks. | How are you trying to connect ? Just a regular ADO.NET connection? If so, what's your connection string?
Are you trying to use SMO (SQL Mgmt Objects)? This sounds like some of your SMO objects aren't available for some reason - you might want to download and reinstall those SMO components from [here](http://www.microsoft.com/downloads/details.aspx?FamilyID=d09c1d60-a13c-4479-9b91-9e8b9d835cdc&displaylang=en). The files you want are quite a bit down the page, under the heading of "**Microsoft SQL Server 2005 Management Objects Collection**" - files are called "SQLServer2005\_XMO.msi" (with some x64 variations).
Does installing that MSI help at all?
Marc
PS: okay, thanks, based on your comment - can you try this connection string here:
```
string connectionString = @"Server=.\SQLEXPRESS2005;Database=DBName;Integrated Security=SSPI";
``` | If you are running SQL Server Express locally, another thing you may want to do is check in the Services console (WinKey, Run, 'services.msc') and verify that you have SQL Server and SQL Server Agent entries and that they are both running. | Problem connecting VWD Express 2008 to SQL Server Express 2005 | [
"",
"sql",
"sql-server-express",
""
] |
I am writing a django app that keeps track of which email addresses are allowed to post content to a user's account. The user can whitelist and blacklist addresses as they like.
Any addresses that aren't specified can either be handled per message or just default to whitelist or blacklist (again user specified).
Here are the Django models I wrote... do you think this is a good way to do it? Or should I add a whitelist and blacklist field to each user's profile model?
```
class knownEmail(models.Model):
# The user who set this address' permission, NOT
# the user who the address belongs to...
relatedUser = models.ManyToManyField(User)
email = models.EmailField()
class whiteList(knownEmail):
pass
class blackList(knownEmail):
pass
```
Then I could do something like:
```
def checkPermission(user, emailAddress):
"Check if 'emailAddress' is allowed to post content to 'user's profile"
if whiteList.objects.filter(relatedUser=user, email=emailAddress):
return True
elif blackList.objects.filter(relatedUser=user, email=emailAddress):
return False
else:
return None
```
Is there a better way? | I would restructure it so both lists were contained in one model.
```
class PermissionList(models.Model):
setter = models.ManyToManyField(User)
email = models.EmailField(unique=True) #don't want conflicting results
permission = models.BooleanField()
```
Then, your lists would just be:
```
# whitelist
PermissionList.objects.filter(permission=True)
# blacklist
PermissionList.objects.filter(permission=False)
```
To check a particular user, you just add a couple functions to the model:
```
class PermissionList(...):
...
@classmethod
def is_on_whitelist(cls, email):
return PermissionList.objects.filter(email=email, permission=True).count() > 0
@classmethod
def is_on_blacklist(cls, email):
return PermissionList.objects.filter(email=email, permission=False).count() > 0
@classmethod
def has_permission(cls, email):
if PermissionList.is_on_whitelist(email):
return True
if PermissionList.is_on_blacklist(email):
return False
return None
```
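The tri-state result (True / False / None) can also be sketched without Django, using a plain dict in place of the model; the names here are illustrative only:

```python
# {email: permission} stands in for PermissionList rows
permissions = {
    "friend@example.com": True,    # whitelisted
    "spammer@example.com": False,  # blacklisted
}

def has_permission(email):
    # dict.get returns None for unknown addresses,
    # mirroring the "handle per message" case.
    return permissions.get(email)

print(has_permission("friend@example.com"))    # True
print(has_permission("stranger@example.com"))  # None
```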
Having everything in one place is a lot simpler, and you can make more interesting queries with less work. | [Please start All Class Names With Upper Case Letters.]
Your code doesn't make use of your class distinction very well.
Specifically, your classes don't have any different behavior. Since both classes have all the same methods, it isn't clear *why* these are two different classes in the first place. If they have different methods, then your solution is good.
If, however, they don't have different methods, you might want to look at providing a customized [manager](http://docs.djangoproject.com/en/dev/topics/db/managers/#managers) for each of the two subsets of `KnownEmail`
```
class WhiteList( models.Manager ):
def get_query_set( self ):
return super( WhiteList, self ).get_query_set().filter( status='W' )
class BlackList( models.Manager ):
def get_query_set( self ):
return super( BlackList, self ).get_query_set().filter( status='B' )
class KnownEmail( models.Model ):
relatedUser = models.ForeignKey(User)
email = models.EmailField()
status = models.CharField( max_length=1, choices=LIST_CHOICES )
objects = models.Manager() # default manager shows all lists
whiteList= WhiteList() # KnownEmail.whiteList.all() is whitelist subset
blackList= BlackList() # KnownEmail.blackList.all() is blackList subset
``` | email whitelist/blacklist in python/django | [
"",
"python",
"django",
"email",
"whitelist",
"blacklist",
""
] |
Are the locks from the `threading` module interchangeable with those from the `multiprocessing` module? | You can typically use the two interchangeably, but you need to be cognizant of the differences. For example, multiprocessing.Event is backed by a named semaphore, which is sensitive to the platform under the application.
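As a quick illustration of why they can often be swapped, both lock types expose the same acquire/release and context-manager protocol (a minimal sketch; it only exercises the shared API and does not demonstrate cross-process sharing):

```python
import threading
import multiprocessing

def exercise(lock):
    # Identical protocol for both lock flavors:
    # context manager plus explicit acquire/release.
    with lock:
        pass
    got = lock.acquire(False)  # non-blocking acquire
    if got:
        lock.release()
    return got

print(exercise(threading.Lock()))
print(exercise(multiprocessing.Lock()))
```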
Multiprocessing.Lock is backed by Multiprocessing.SemLock - so it needs named semaphores. In essence, you can use them interchangeably, but using multiprocessing's locks introduces some platform requirements on the application (namely, it doesn't run on BSD :)) | I don't think so. Threading locks are within the same process, while the multiprocessing lock would likely be in shared memory.
Last time I checked, multiprocessing doesn't allow you to share the lock in a Queue, which is a threading lock. | Python: Locks from `threading` and `multiprocessing` interchangable? | [
"",
"python",
"multithreading",
"locking",
"multiprocessing",
""
] |
Say I have a very simple Java object that only has some getXXX and setXXX properties. This object is used only to handle values, basically a record or a type-safe (and performant) map. I often need to convert this object to key-value pairs (either strings or type-safe) or convert from key-value pairs to this object.
Other than reflection or manually writing code to do this conversion, what is the best way to achieve this?
An example might be sending this object over jms, without using the ObjectMessage type (or converting an incoming message to the right kind of object). | There is always apache [commons beanutils](http://commons.apache.org/beanutils/) but of course it uses reflection under the hood | Lots of potential solutions, but let's add just one more. Use [Jackson](https://github.com/FasterXML/jackson) (JSON processing lib) to do "json-less" conversion, like:
```
ObjectMapper m = new ObjectMapper();
Map<String,Object> props = m.convertValue(myBean, Map.class);
MyBean anotherBean = m.convertValue(props, MyBean.class);
```
([this blog entry](http://www.cowtowncoder.com/blog/archives/2009/12/entry_336.html) has some more examples)
You can basically convert any compatible types: compatible meaning that if you did convert from type to JSON, and from that JSON to result type, entries would match (if configured properly can also just ignore unrecognized ones).
Works well for cases one would expect, including Maps, Lists, arrays, primitives, bean-like POJOs. | How to convert a Java object (bean) to key-value pairs (and vice versa)? | [
"",
"java",
"reflection",
"collections",
"javabeans",
""
] |
I have been trying to clean up the naming and organisation conventions of our projects' unit and integration tests. We are using C#, NUnit, Visual Studio and Resharper.
It seems that if there is one best practice for unit test organisation it is that the layout and naming of the test classes and namespaces should mirror those of the code under test. As a consequence the file structure will also be replicated.
Wouldn't it be nice to have a tool that could automatically enforce and help refactor code to conform to these conventions? Does a tool like this exist?
For example, it would be simple for Resharper to recognise that OrderTest is a test fixture for the Order class. It could then ensure that the Order class exists and that it is in the same namespace. If the Order class underwent refactoring and was renamed, the OrderTest class should also be renamed. | I have often wanted that too. I looked at the Resharper bug tracker and found the enhancement request of this issue. It is scheduled for Resharper 5.0. Here is its description: <http://www.jetbrains.net/jira/browse/RSRP-64050>.
As for the naming and organization of unit tests, standards are changing quite a bit, especially with some of the bdd styles that are popping up. [Here](http://elegantcode.com/2008/09/08/bdd-test-naming-experiment/) is a good post on that naming style. I think it's best for tools to just let a team adapt and create its own standards. With collective code ownership, there should be just one standard after a while. | Ours are currently Test01, Test02, Test03, etc... It's like a steering wheel in a pirate's pants.
I much prefer, and am switching to, this style of naming: [Behavior Driven Development: As Human As Possible](http://secretgeek.net/behaviorDdAhap.asp) | Tools for keeping unit tests organised and named correctly | [
"",
"c#",
"unit-testing",
"naming",
"refactoring",
""
] |
Besides updating GUI controls from background threads, are there any other operations that should be avoided on background threads in WinForms? | Oh, there are plenty of traps. BGW doesn't do much to protect you from the usual hazards of threaded programming. And it adds some of its own:
* Variables that are accessed from both the BGW and the UI thread must be protected by a lock.
* Do not update a bound data source in a BGW. The control updates will be done on the BGW thread but will not generate an exception.
* Don't call ReportProgress() too often. Doing it more than about 1000 times per second will freeze the UI thread. About 25 times/sec is enough.
* Careful with the userstate argument you can pass to ReportProgress(). It must not be modified by the BGW after the call.
* Don't loop on the IsBusy property in the UI thread, it will deadlock.
* The BGW thread will be aborted when the main form closes. Watch out for necessary cleanup.
* Be sure to inspect the Error property in the RunWorkerCompleted event, it tells you when something went wrong. | This is sort of broad. Don't do anything in a background thread if you don't need to; that is, don't thread out some code just because you feel like it. Use threads where it is appropriate, such as for long running tasks that you do not want to interrupt the GUI and so forth. Also, if you end up just calling Application.DoEvents() from your main UI thread just waiting on a task from another thread, you might think about keeping one thread and doing the work in small pieces in a loop where you would repaint the GUI with DoEvents() calls. This is just a suggestion; however, of course, many times you *do* need to create multiple threads.
Perhaps you can ask about particular situations? | what should i avoid doing on background thread in winforms | [
"",
"c#",
"winforms",
""
] |
foo.py :
```
i = 10
def fi():
global i
i = 99
```
bar.py :
```
import foo
from foo import i
print i, foo.i
foo.fi()
print i, foo.i
```
This is problematic. Why does `i` not change when `foo.i` changes? | What `import` does in `bar.py` is set up an identifier called `i` in the `bar.py` module namespace that points to the same address as the identifier called `i` in the `foo.py` module namespace.
This is an important distinction... `bar.i` is not pointing to `foo.i`, but rather to the same space in memory where the object `10` is held that `foo.i` happens to point to at the same time. In python, the variable names are not the memory space... they are the identifier that points to a memory space. When you import in bar, you are setting up a local namespace identifier.
Your code behaves as expected until `foo.fi()` is called, when the identifier `i` in the foo.py namespace is changed to point to the literal 99, which is an object in memory obviously at a different place than 10. Now the module-level namespace dict for `foo` has `i` identifying a different object in memory than the identifier `i` in bar.py.
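The same rebinding-versus-mutation distinction can be shown without modules at all (a small illustrative sketch):

```python
# Two names bound to the same mutable object:
a = [10]
b = a
a[0] = 99      # mutation is seen through every name
print(b[0])    # 99

# Two names bound to the same immutable object:
i = 10
j = i
i = 99         # rebinding only moves the name i
print(j)       # 10 -- j still points at the old object
```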
Shane and rossfabricant have good suggestions on how to adjust your modules to achieve what you want. | What Ross is saying is to restructure foo like so:
```
_i = 10
def getI():
return _i
def fi():
global _i
_i = 99
```
Then you will see it works the way you want:
```
>>> import foo
>>> print foo.getI()
10
>>> foo.fi()
>>> print foo.getI()
99
```
It is also 'better' in the sense that you avoid exporting a global, but still provide read access to it. | Imports in python are static, any solution? | [
"",
"python",
""
] |
I am getting this error
```
error: Access.Core may be used uninitialized in this function
```
And this is my code:
```
static int FirstTime = 1;
MyStruct Access;
if (FirstTime) {
FirstTime = 0;
Access = Implementation();
DoSomething(Access);
}
if(Other_Variable) {
Access = Implementation2();
DoSomething(Access);
}
// The Other_Variable will be set to 1 and to 0 by other parts of the code
```
My code is like that because I want to call the function Implementation only the first time. On every call the Access variable is going to be updated, so it does not make much sense to make it static.
If I make Access static it works, but I do not like making it static because on every call Access is going to be updated. Is there any way to avoid the problem without making it static?
Also, any better options for executing a function only once instead of using a static variable are welcome. | Make `Access` like this (and remove `FirstTime` and the `if`):
```
static MyStruct Access = Implementation(this_b);
```
The reason you get this warning is because static variables survive one function call. Their value is retained across all function calls (*without* regard to which thread calls that function). So, `FirstTime` will control whether you initialize `Access`. The first time you call the function that code is in will correctly initialize the `Access` variable. But with every further function call, `FirstTime` is zero, and you will *not* initialize `Access` anymore, and thus will use an uninitialized variable down the code.
**Edit:** Now, with your updated information, you say that you have two `Implementation` functions. The first time you want to use one, and all the other times you want to use another function. How about this then:
```
// static will be false/zero by default
static bool AlreadyCalled;
MyStruct Access;
if (!AlreadyCalled) {
Access = Implementation();
AlreadyCalled = true;
} else {
Access = Implementation2();
}
```
Depending on your actual use case, there may be better ways to handle this, though. For example, why not update the state of `Access`, like this:
```
// let the default constructor initialize it
// to a plausible state
static MyStruct Access;
// use RAII to update the state of Access when this
// function returns.
MyUpdater updater(Access);
// now, do whatever the function does.
```
Something like this for `MyUpdater`:
```
struct MyUpdater {
MyStruct &s;
MyUpdater(MyStruct &s):s(s) { }
~MyUpdater() {
s.ChangeState();
}
};
```
That pattern is called `RAII`: You associate some useful action with the constructor and destructor of a locally allocated object. | @litb's answer is interesting.
An equivalent program follows.
The code compiles and works as stated in C++, but doesn't compile in C.
```
#include <stdio.h>
static int newval(void) { return 3; }
void inc(void)
{
static int a = newval();
a++;
printf("%d\n", a);
}
int main(void)
{
int i;
for (i = 0; i < 10; i++)
inc();
return(0);
}
```
gcc says:
```
x.c: In function 'inc':
x.c:7: error: initializer element is not constant
```
g++ is quite happy with it.
That is a difference between C and C++ that I was not aware of (but this wouldn't fit into 300 characters so I can't make it a comment, easily).
---
@Eduardo asked one of the questions in the comments: "Why does C not allow this and C++ allow it?". *Since the answer is more than 300 characters...*
As @litb said in the comments, in C you can only use constants for initializers of static variables. This is in part because the values are set before main() is called, and no user-defined functions are called before main() is called. C++, by contrast, allows global and static variables to be initialized by (user-defined) constructors before main() is called, so there's no reason not to allow other user-defined functions to be called too, so the initialization is reasonable. With C89, you are limited in the initializers you can use with automatic (local) variables; in C99, you can use pretty much any expression to initialize any local variable. | error: X may be used uninitialized in this function in C | [
"",
"c++",
"initialization",
""
] |
I am having a hard time understanding JAAS. It all seems more complicated than it should be (especially the Sun tutorials). I need a simple tutorial or example of how to implement security (authentication + authorization) in a Java application based on Struts + Spring + Hibernate with a custom user repository. It can be implemented using ACEGI. | Here are some of the links I used to help understand JAAS:
<http://www.owasp.org/index.php/JAAS_Tomcat_Login_Module>
<http://www.javaworld.com/jw-09-2002/jw-0913-jaas.html>
<http://jaasbook.wordpress.com/>
<http://roneiv.wordpress.com/2008/02/18/jaas-authentication-mechanism-is-it-possible-to-force-j_security_check-to-go-to-a-specific-page/>
Also have a look at the Apache tomcat realms configuration how-to:
<http://tomcat.apache.org/tomcat-6.0-doc/realm-howto.html> | Other users have provided some very useful links above, so I am not going to bother with links. I have done similar research on JAAS for web applications and ran into a "mind roadblock" until I finally realized JAAS is a framework tackling security at a different "layer" than web applications in the Java world. It is built to tackle security issues in Java SE, not Java EE.
JAAS is a security framework built for securing things at a much lower level than web applications. Some examples of these things are code and resources available at the JVM level, hence the ability to set policy files at the JVM level.
However, since Java EE is built on top of Java SE, a few modules from JAAS were reused in Java EE security such as the LoginModules and Callbacks.
Note that in addition to Java EE security, there is also Spring Security (formerly known as Acegi), which, similar to native Java EE security, tackles a much higher "layer" of the web-application security problem. It is a separate security implementation and is not built on top of standard Java EE security, although it behaves similarly in many regards.
To summarize, unless you are looking to secure resources at the Java SE level (classes, system resources), I don't see any real use of JAAS other than using the common classes and interfaces. Just focus on using Spring Security or plain old Java EE security, which both solve a lot of common web application security problems. | JAAS for human beings | [
"",
"java",
"security",
"spring",
"spring-security",
"jaas",
""
] |
I'm looking for a way to generate a (fairly) unique (non auto-incrementing) 4-digit string using the numbers 0 - 9 for each digit using C#. I can validate uniqueness and generate another number if a dup is found. I had thought about basing the number somehow on the DateTime object's Ticks property but am having a difficult time putting the pieces together.
Any thoughts or expertise would be much appreciated. | If it doesn't increment itself, how is it going to be unique the second time?
Is what you're saying that you want to generate a random 4-digit string from the set of all possible unused 4-digit strings?
If so, the correct approach is usually to generate all possible 4-digit strings, and shuffle them into a random order. Then take them in order as you need new ones.
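The shuffle-all-candidates idea is language-agnostic; it is sketched in Python here for brevity (illustrative only, not the asker's C#):

```python
import random

# All 10,000 candidate strings "0000".."9999", shuffled once:
pool = ["%04d" % n for n in range(10000)]
random.shuffle(pool)

# Hand them out in order; each one is unique by construction.
first, second = pool[0], pool[1]
print(first, second)
```

Persist the shuffled pool (and an index into it) and you never need a check-and-retry loop at all.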
**CLARIFICATION:** Other answers suggest simply generating a random 4-digit string and leaving it at that. Presumably you would then check to see whether it was already used, and generate another one if it's used. This has the potential of having extremely suboptimal performance. Suppose you have already used 9,999 (all but one) of the possible 4-digit strings ranging from 0000 to 9999. To generate the last one, this method may take many, many tries. | Create an array of all 10000 values, using the short type, and then [shuffle](http://dotnetperls.com/Content/Shuffle-Array.aspx) it. | How To Generate A Unique 4-Digit String | [
"",
"c#",
""
] |
I am trying to become a good programming citizen through learning more about Dependency Injection / IoC and other best-practices methods. For this I have a project where I am trying to make the right choices and design everything in the "proper" way, whatever that might mean. Ninject, Moq and ASP.NET MVC help with testability and getting the app "out the door".
However, I have a question about how to design an entity base class for the objects that my application consists of. I have a simple class library which the web app is built on top of. This library exposes a IRepository interface, and the default implementation (the one that the app uses) uses Linq-to-SQL under the covers (the DataContext etc. is not exposed to the web app) and simply contains ways to fetch these entities. The repository basically looks like this (simplified):
```
public interface IRepository
{
IEnumerable<T> FindAll<T>() where T : Entity
T Get<T>(int id) where T : Entity
}
```
The default implementation uses the GetTable() method on the DataContext to provide the correct data.
However, this requires that the base class "Entity" has some features. It is easy enough to get my objects to inherit from it by creating a partial class of the same name as the mapped object that Linq-to-SQL gives me, but what is the "correct" way to do this?
For instance, the interface above has a function for getting an Entity by its id - all the different kinds of classes that derive from Entity do indeed have an "id" field of type int (mapped from the primary key of their respective tables), but how can I specify this in a way that lets me implement IRepository like this?
```
public class ConcreteRepository : IRepository
{
private SomeDataContext db = new SomeDataContext();
public IEnumerable<T> FindAll<T>() where T : Entity
{
return db.GetTable<T>().ToList();
}
public T Get<T>(int id) where T : Entity
{
return db.GetTable<T>().Where(ent => ent.id == id).FirstOrDefault();
}
}
```
I am doing this from memory on a compiler-less PC so forgive any errors, you hopefully get my intention.
The trick here is of course that for this to compile, it has to be known for sure that Entity promises that every class that derives from it has an id field.
And I can make an abstract field, or a normal field that is "hidden" by the id field that Linq-to-SQL sticks in the generated classes.
But this all feels kind of like a cheat, and even gives compiler warnings.
Should "Entity" really be "IEntity" (an interface instead), and should I try to have the id field defined in a way that Linq-to-SQL will fulfill? This would also make it easy to specify other interfaces that Entity implementors need to implement.
Or should "Entity" be an abstract base class with an abstract id field, and should it also implement needed interfaces in an abstract way for others to override?
I don't know C# well enough to see an elegant solution to this, so I would love to have more experienced system designers with some base class experience weigh in on this.
Thanks!
**EDIT April 10th:**
I see I left out something important here. My IRepository has two concrete implementations - one that is the "real" one using LINQ to SQL, the other one is a InMemoryRepository that just uses a List for now, which is used for unit testing etc.
The two solutions added will both work for one of these situations, not for the other. So if possible, I would need a way to define that everything that inherits from Entity will have the "id" property, and that this will work with the LINQ to SQL DataContext without "cheating". | If you use "Id" as a primary keys in all tables so all the generated classes would have a public property like:
```
public int Id { get; set; }
```
Then create an interface
```
public interface IIdentifiable {
int Id { get; }
}
```
Then the most tedious part :(, for all entities create a partial class and make it implement IIdentifiable.
The repository class can then look like:
```
public class Repository<T> : IRepository where T : IIdentifiable {
}
```
And the following code will work:
```
db.GetTable<T>().SingleOrDefault(ent => ent.Id.Equals(id));
```
If you do not use the generated classes and make your own, it is even simpler from this point of view.
**EDIT:**
Instead of ent => ent.Id == id use ent => ent.Id.Equals(id). Just tested, the following is a complete working example:
```
public interface IIdentifiable {
int Id { get; }
}
public class Repository<T> where T : class, IIdentifiable {
TestDataContext dataContext = new TestDataContext();
public T GetById(int id) {
T t = dataContext.GetTable<T>().SingleOrDefault(elem => elem.Id.Equals(id));
return t;
}
}
public partial class Item : IIdentifiable {
}
class Program {
static void Main(string[] args) {
Repository<Item> itemRepository = new Repository<Item>();
Item item = itemRepository.GetById(1);
Console.WriteLine(item.Text);
}
}
``` | The "FindAll" is a bad idea, and will force it to fetch all data. Returning `IQueryable<T>` would be better, but still not ideal, IMO.
Re finding by ID - see [here for a LINQ-to-SQL answer](https://stackoverflow.com/questions/440474/can-i-make-a-generic-compiledquery-that-accepts-multiple-tables-and-generic-types/440608#440608); this uses LINQ-to-SQL's meta-model to locate the primary key, so that you don't have to. It also works for both attributed and resource-based models.
I wouldn't force a base class. That wasn't very popular with Entity Framework, and it isn't likely to get any more popular... | Entity base class design in C# for repository model | [
"",
"c#",
"linq-to-sql",
"oop",
""
] |
I'm planning on writing an RPC server in Java. The server needs to accept incoming RPCs - probably over HTTP - and answer them. Fairly basic stuff. Support for 'long polling' or 'hanging' RPCs isn't necessary, so a thread-per-request model ought to be perfectly adequate.
If I were writing this in Python, I would probably use a framework like twisted. In C, something like glibc. In each case, the framework provides an implementation of the general 'select loop' core of handling IO and invoking higher level constructs that deal with it, eventually leading to my application being called for events such as receiving an RPC.
It's a long time since I wrote anything substantial in Java, though, so I don't know what the state of the art or the suggested solutions are for this sort of thing. Possibly there's even parts of the standard library I can easily use to do this. Hence my question to StackOverflow: What frameworks are there out there that would be suitable for a task like this?
Note that although I may use HTTP for the RPCs, this is emphatically not a web application - and as such, a web framework is not appropriate. | [Apache MINA](http://mina.apache.org) is a very well designed asynchronous non-blocking networking framework. It provides byte-oriented access to read and write packet data. Building atop that it has a filter system where additional layers can be added, providing things like line-oriented text parsing, encryption (via TLS), compression, etc.
PS: The 2.0 release series is highly recommended, even though it's still in "milestone" form, it's proven very stable and is nearing a final release. | You have multiple choice:
* Roll your own solution with the existing SDK for socket programming.
* Java RMI, the remote method invocation framework.
* Java CORBA bindings, which are no longer considered current.
* Java Web Service frameworks, which are quite complex. Look at Apache CXF and the different J2EE products.
Then you have different systems running above the HTTP transport, like JSON/XML-RPC, where you need a Web server, even though you rule them out. | Framework for Java RPC server | [
"",
"java",
"http",
"network-programming",
"rpc",
""
] |
I'm trying to read input from the terminal. For this, I'm using a BufferedReader. Here's my code.
```
BufferedReader reader = new BufferedReader(new InputStreamReader(System.in));
String[] args;
do {
System.out.println(stateManager.currentState().toString());
System.out.print("> ");
args = reader.readLine().split(" ");
// user inputs "Hello World"
} while (args[0].equals(""));
```
Somewhere else in my code I have a `HashTable` where the Key and Values are both Strings. Here's the problem:
I want to get a value from the `HashTable`, where the key I'm using for the lookup is one of the args elements. These args are weird. If the user enters two arguments (the first one is a command, the second is what the user wants to look up), I can't find a match.
For example, if the HashTable contains the following values:
```
[ {"Hello":"World"}, {"One":"1"}, {"Two":"2"} ]
```
and the user enters:
```
get Hello
```
My code doesn't return `"World"`.
So I used the debugger (using Eclipse) to see what's inside of `args`. I found that `args[1]` contains `"Hello"` but inside `args[1]` there is a field named `value` which has the value `['g','e','t',' ','H','e','l','l','o']`.
The same goes for `args[0]`. It contains `"get"` but the field `value` contains `['g','e','t',' ','H','e','l','l','o']`!
What the hell!!!
However, if I check my `HashTable`, wherever the key is `"Hello"`, the value=`['H','e','l','l','o']`.
Any ideas?
Thank you very much.
---
EDIT:
Here's a code sample. The same is still happening.
```
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.Hashtable;
public class InputTest
{
public static void main(String[] args)
{
BufferedReader reader = new BufferedReader(new InputStreamReader( System.in));
Hashtable<String, String> EngToSpa = new Hashtable<String, String>();
// Adding some elements to the Eng-Spa dictionary
EngToSpa.put("Hello", "Hola");
EngToSpa.put("Science", "Ciencia");
EngToSpa.put("Red", "Rojo");
// Reads input. We are interested in everything after the first argument
do
{
System.out.print("> ");
try
{
args = reader.readLine().trim().split("\\s");
} catch (IOException e)
{
e.printStackTrace();
}
} while (args[0].equals("") || args.length < 2);
// ^ We don't want empty input or less than 2 args.
// Lets go get something in the dictionary
System.out.println("Testing arguments");
if (!EngToSpa.contains(args[1]))
System.out.println("Word not found!");
else
System.out.println(EngToSpa.get(args[1]));
// Now we are testing the word "Hello" directly
System.out.println("Testing 'Hello'");
if (!EngToSpa.contains("Hello"))
System.out.println("Word not found!");
else
System.out.println(EngToSpa.get("Hello"));
}
}
```
The same is still happening. I must be misunderstanding Hash Tables. Ideas where things are going wrong? | I just noticed my mistake. I have to use *containsKey()* instead of *contains()*.
I'd like to thank everyone for helping.
As a bonus, I also learned about what the 'value' field is. Nice! | Don't worry about the value field - that's just saying that there's a single char array containing the text of "get Hello", and both args[0] and args[1] *refer* to that char array, but they'll have different offsets and counts. args[0] will have an offset of 0 and a count of 3; args[1] will have an offset of 4 and a count of 5.
I've no idea why your hash map wouldn't be working though... can you provide a short but complete example program? | Trouble handling input in Java | [
"",
"java",
"string",
"input",
""
] |
Is there a way to disable the exit button on a windows form without having to import some external .dll's? I disable the exit button by importing dll's using the following code, but I don't like it. Is there a simpler (built-in) way?
```
public Form1()
{
InitializeComponent();
hMenu = GetSystemMenu(this.Handle, false);
}
private const uint SC_CLOSE = 0xf060;
private const uint MF_GRAYED = 0x01;
private IntPtr hMenu;
[DllImport("user32.dll")]
private static extern IntPtr GetSystemMenu(IntPtr hWnd, bool bRevert);
[DllImport("user32.dll")]
private static extern int EnableMenuItem(IntPtr hMenu, uint wIDEnableItem, uint wEnable);
// handle the form's Paint and Resize events
private void Form1_Paint(object sender, PaintEventArgs e)
{
EnableMenuItem(hMenu, SC_CLOSE, MF_GRAYED);
}
private void Form1_Resize(object sender, EventArgs e)
{
EnableMenuItem(hMenu, SC_CLOSE, MF_GRAYED);
}
``` | A little poking around found this handy helper class:
[Disable Close Button and Prevent Form Being Moved (C# version)](http://www.vbforums.com/showthread.php?t=544965)
It actually does more than what you're looking for, but essentially does it very nearly the same way you do in your sample code. The helper class hooks into the load/resize events for you so you don't have to remember to do it yourself. | To disable the close button on the form just write the below code on the form closing event.
```
private void Form1_FormClosing(object sender, FormClosingEventArgs e)
{
e.Cancel = true;
}
``` | Disabling the exit button on a windows form? | [
"",
"c#",
".net",
"winforms",
""
] |
Here's my code:
```
// Not all headers are relevant to the code snippet.
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netdb.h>
#include <cstdlib>
#include <cstring>
#include <unistd.h>
char *buffer;
stringstream readStream;
bool readData = true;
while (readData)
{
cout << "Receiving chunk... ";
// Read a bit at a time, eventually "end" string will be received.
bzero(buffer, BUFFER_SIZE);
int readResult = read(socketFileDescriptor, buffer, BUFFER_SIZE);
if (readResult < 0)
{
THROW_VIMRID_EX("Could not read from socket.");
}
// Concatenate the received data to the existing data.
readStream << buffer;
// Continue reading while end is not found.
readData = readStream.str().find("end;") == string::npos;
cout << "Done (length: " << readStream.str().length() << ")" << endl;
}
```
It's a little bit of C and C++ as you can tell. The BUFFER\_SIZE is 256 - should I just increase the size? If so, what to? Does it matter?
I know that if "end" is not received for what ever reason, this will be an endless loop, which is bad - so if you could suggest a better way, please also do so. | Without knowing your full application it is hard to say what the best way to approach the problem is, but a common technique is to use a header which starts with a fixed length field, which denotes the length of the rest of your message.
Assume that your header consists only of a 4-byte integer which denotes the length of the rest of your message. Then simply do the following.
```
// This assumes buffer is at least x bytes long,
// and that the socket is blocking.
void ReadXBytes(int socket, unsigned int x, void* buffer)
{
int bytesRead = 0;
int result;
while (bytesRead < x)
{
result = read(socket, (char*)buffer + bytesRead, x - bytesRead); // cast: pointer arithmetic on void* is not standard C++
if (result < 1 )
{
// Throw your error.
}
bytesRead += result;
}
}
```
Then later in the code
```
unsigned int length = 0;
char* buffer = 0;
// we assume that sizeof(length) will return 4 here.
ReadXBytes(socketFileDescriptor, sizeof(length), (void*)(&length));
buffer = new char[length];
ReadXBytes(socketFileDescriptor, length, (void*)buffer);
// Then process the data as needed.
delete [] buffer;
```
This makes a few assumptions:
* ints are the same size on the sender and receiver.
* Endianess is the same on both the sender and receiver.
* You have control of the protocol on both sides
* When you send a message you can calculate the length up front.
Since it is common to want to explicitly know the size of the integer you are sending across the network, define them in a header file and use them explicitly, such as:
```
// These typedefs will vary across different platforms
// such as linux, win32, OS/X etc, but the idea
// is that a Int8 is always 8 bits, and a UInt32 is always
// 32 bits regardless of the platform you are on.
// These vary from compiler to compiler, so you have to
// look them up in the compiler documentation.
typedef char Int8;
typedef short int Int16;
typedef int Int32;
typedef unsigned char UInt8;
typedef unsigned short int UInt16;
typedef unsigned int UInt32;
```
This would change the above to:
```
UInt32 length = 0;
char* buffer = 0;
ReadXBytes(socketFileDescriptor, sizeof(length), (void*)(&length));
buffer = new char[length];
ReadXBytes(socketFileDescriptor, length, (void*)buffer);
// process
delete [] buffer;
```
I hope this helps. | Several pointers:
You need to handle a return value of 0, which tells you that the remote host closed the socket.
For nonblocking sockets, you also need to check an error return value (-1) and make sure that errno isn't EINPROGRESS, which is expected.
You definitely need better error handling - you're potentially leaking the buffer pointed to by 'buffer'. Which, I noticed, you don't allocate anywhere in this code snippet.
Someone else made a good point about how your buffer isn't a null terminated C string if your read() fills the entire buffer. That is indeed a problem, and a serious one.
Your buffer size is a bit small, but should work as long as you don't try to read more than 256 bytes, or whatever you allocate for it.
If you're worried about getting into an infinite loop when the remote host sends you a malformed message (a potential denial of service attack) then you should use select() with a timeout on the socket to check for readability, and only read if data is available, and bail out if select() times out.
Something like this might work for you:
```
fd_set read_set;
struct timeval timeout;
timeout.tv_sec = 60; // Time out after a minute
timeout.tv_usec = 0;
FD_ZERO(&read_set);
FD_SET(socketFileDescriptor, &read_set);
int r=select(socketFileDescriptor+1, &read_set, NULL, NULL, &timeout);
if( r<0 ) {
// Handle the error
}
if( r==0 ) {
// Timeout - handle that. You could try waiting again, close the socket...
}
if( r>0 ) {
// The socket is ready for reading - call read() on it.
}
```
Depending on the volume of data you expect to receive, the way you scan the entire message repeatedly for the "end;" token is very inefficient. This is better done with a state machine (the states being 'e'->'n'->'d'->';') so that you only look at each incoming character once.
And seriously, you should consider finding a library to do all this for you. It's not easy getting it right. | What is the correct way of reading from a TCP socket in C/C++? | [
"",
"c++",
"c",
"tcp",
""
] |
I have to do an exercise using arrays. The user must enter 3 inputs (each time, information about items) and the inputs will be inserted in the array. Then I must display the array.
However, I am having a difficult time increasing the array's length without changing the information within it, and I'm not sure how to allow the user to enter another set of input. This is what I have so far:
```
public string stockNum;
public string itemName;
public string price;
string[] items = new string[3];
public string [] addItem(string[] items)
{
System.Console.WriteLine("Please Sir Enter the stock number");
stockNum = Console.ReadLine();
items.SetValue(stockNum, 0);
System.Console.WriteLine("Please Sir Enter the price");
price = Console.ReadLine();
items.SetValue(price, 1);
System.Console.WriteLine("Please Sir Enter the item name");
itemName = Console.ReadLine();
items.SetValue(itemName, 2);
Array.Sort(items);
return items;
}
public void ShowItem()
{
addItem(items);
Console.WriteLine("The stock Number is " + items[0]);
Console.WriteLine("The Item name is " + items[2]);
Console.WriteLine("The price " + items[1]);
}
static void Main(string[] args)
{
DepartmentStore depart = new DepartmentStore();
string[] ar = new string[3];
// depart.addItem(ar);
depart.ShowItem();
}
```
So my question boils down to:
1. How can I allow the user to enter more than one batch of input?
For example, the first time the user will enter the information about an item (stock number, price and name), but I need to allow the user to enter information about another item.
2. How can I display the stock number, price and name for each item in the array, based on the assumption that I have more than one item in the array?
Per definition an default static array has a fix size, so you can reference an element within by using the index. ( <http://en.wikipedia.org/wiki/Array_data_structure#Array_resizing> )
But there you can read about dynamic array. In C# it will be the System.Collections.ArrayList-Object. ( <http://en.wikipedia.org/wiki/Dynamic_array> )
So what you need is either the ArrayList-Definition or a normal list or generic list. (System.Collections.Generic.List)
```
string[] items = new string[3] { "input1", "input2", "input3" };
string[] moreItems = new string[] { "input4", "input5" }; // size must match the initializer count
// array to list
List<string> itemsList = items.ToList<string>();
itemsList.Add("newItem");
// or merge an other array to the list
itemsList.AddRange(moreItems);
// list to array
string[] newArray = itemsList.ToArray();
``` | Starting with .NET Framework 3.5, for one-dimensional arrays you can use `Array.Resize<T>` method:
```
int[] myArray = { 1, 2, 3 };
// Extend
Array.Resize(ref myArray, 5); // { 1, 2, 3, 0, 0 }
// Shrink
Array.Resize(ref myArray, 2); // { 1, 2 }
```
Microsoft documentation link is [here](https://learn.microsoft.com/en-us/dotnet/api/system.array.resize).
> This method allocates a new array with the specified size, copies elements from the old array to the new one, and then replaces the old array with the new one. `array` must be a one-dimensional array. | How to extend arrays in C# | [
"",
"c#",
"arrays",
"input",
""
] |
I have a set of strings (~80 000) I can only access sequentially by the hits.Doc(int).Get("fieldName") method.
```
List<string> idStrings = new List<string>();
int count = hits.Length();
for (int i = 0; i < count; i++)
{
string idString = hits.Doc(i).Get("id");
if (!idStrings.Contains(idString))
idStrings.Add(idString);
}
```
The strings will later on have to be int.TryParse()'d.
I think there should be a faster way to do this. Any suggestions? | First of all, use a `HashSet<string>` instead of a list; its Contains method is much faster:
```
HashSet<string> idStrings = new HashSet<string>();
int count = hits.Length();
```
EDIT: You don't have to call "Contains" if you use a HashSet, as it can't contain duplicate items. Just use Add; it will simply ignore duplicate values. | Use a `Dictionary` instead of a `List`. The `Dictionary.ContainsKey` method is much faster than the `List.Contains` method.
```
Dictionary<string, int> idStrings = new Dictionary<string, int>();
int count = hits.Length();
for (int i = 0; i < count; i++) {
string idString = hits.Doc(i).Get("id");
if (!idStrings.ContainsKey(idString)) {
idStrings.Add(idString, 1);
}
}
```
If you use framework 3.5 you can use a `HashSet` instead of a `Dictionary`:
```
HashSet<string> idStrings = new HashSet<string>();
int count = hits.Length();
for (int i = 0; i < count; i++) {
string idString = hits.Doc(i).Get("id");
idStrings.Add(idString);
}
``` | Fastest way to create a list of unique strings from within a loop? | [
"",
"c#",
".net",
"optimization",
""
] |
We are writing a custom tool using TFS client APIs, to connect to TFS, to fetch work items for a project etc.
---
We are querying the work item store, using WIQL.
Given a fully qualified filename, what is the easiest way to get a list of work items that have change sets which contain the specified file? | I'm not sure that there is an easy way to do the query that you are requesting using the TFS API. I know that you definitely cannot do it using WIQL. I think, using the API, you would have to iterate over all the work items - get the changeset links in them and then look in each changeset for the file path that you are after. This is obviously not much use.
You could get that data using the TFS Data Warehouse database however. This information will lag behind the live operational store information because the warehouse only gets updated periodically - but will allow you to track things by the folder/file dimension pretty easily. | This little code snippet will get you a collection of work items given a TFS server name and a project.. It also filters out work items in the deleted state.
```
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;
namespace EngineTFSAutomation
{
class TFSHelper
{
static public WorkItemCollection QueryWorkItems(string server, string projectname)
{
TeamFoundationServer tfs = TeamFoundationServerFactory.GetServer(server);
WorkItemStore workItemStore = (WorkItemStore)tfs.GetService(typeof(WorkItemStore));
Project p = workItemStore.Projects[projectname];
string wiqlQuery = "Select * from Issue where [System.TeamProject] = '"+projectname+"'";
wiqlQuery += " and [System.State] <> 'Deleted'";
wiqlQuery+= " order by ID";
WorkItemCollection witCollection = workItemStore.Query(wiqlQuery);
return witCollection;
}
}
}
``` | TFS Client API - Query to get work items linked to a specific file? | [
"",
"c#",
".net",
"visual-studio-2008",
"tfs",
""
] |
The following catch() is not called:
```
void test(void)
{
int i=1,j=0,k;
try
{
k = i/j;
}
catch(...)
{
...handle it...
}
}
```
Is there a way to catch this kind of exception? | The code below implements the \_\_try/\_\_except effect found in Visual Studio C++; in other words, it shows how to simulate \_\_try/\_\_except with gcc or g++:
```
#include <stdio.h>
#include <signal.h>
#include <setjmp.h>
__thread jmp_buf * gThreadData; //thread local storage variable declare
void FPE_ExceptionHandler(int signal)
{
printf("exception handler signalid=%d\n", signal);
//jmp to setjmp_return and rc will equal to non zero
longjmp(*gThreadData, 10001);
}
int main(int argc, char *argv[])
{
//setup a callback function for access violation exception
signal(SIGSEGV, (__sighandler_t)FPE_ExceptionHandler);
//allocate a jmp_buf struct and assign it to thread local storage pointer
gThreadData = (jmp_buf *)(new jmp_buf);
//setjmp save current thread context
int rc = setjmp(*gThreadData);
//setjmp_return
//first time, run to here rc will equal to 0
if (rc == 0) {
*(int*)0 = 1; //generate a exception
}
printf("return from exception\n");
delete (jmp_buf *)gThreadData;
}
``` | Please check
<http://linux.die.net/man/1/gcc>
there is a compiler option, -mcheck-zero-division, to handle this.
Alternatively, installing a SIGFPE handler might be an option. A float divide by 0 would then generate an 'FPE\_ZERODIVIDE':
```
signal(SIGFPE, (fptr) FPE_ExceptionHandler);
void FPE_ExceptionHandler(int nSig,int nErrType,int */*pnReglist*/)
{
switch(nErrType)
{
case FPE_ZERODIVIDE: /* ??? */ break;
}
}
```
This is because most floating point systems are based on the IEEE standard, which allows division by 0. This returns either positive infinity or negative infinity as appropriate, based on the signs of the numbers. (Except 0/0 returns the undefined NAN--again not an exceptional case.) This tends to be useful for scientific and mathematical applications. The NANs effectively signal a case where calculations were not possible, but allow calculations to continue. The continued calculations will not produce new results but will continue to return NANs. This allows long chains of calculations to be performed without error checking within the calculations. The error checks only need to be performed at the very end of the work. This makes the code much simpler and also faster. It can also be more useful at times, as for some applications infinity is a "useful" result, not really a sign of problems. | How do I catch system-level exceptions in Linux C++? | [
"",
"c++",
"linux",
"exception",
"try-catch",
""
] |
Say I have a list of edges, each containing two nodes (to and from). What is the best way to find out whether two given nodes are connected? Note that nodes may appear in multiple edges.
Say I have edge in this format:
> 1 <-> 5
>
> 3 <-> 7
>
> 5 <-> 6
>
> 2 <-> 6
Then query such as 1 5 will return **true**.
Then query such as 5 2 will return **true** because 5 connects to 6 and 6 connects to 2.
Then query such as 1 7 will return **false**.
Then query such as 7 4 will return **false** since 4 doesn't exist; it is an edge-less node. | It sounds to me like you are just asking if a path exists between two vertices in an undirected graph, but not necessarily what that path might be. This is the same as asking if the two vertices are in the same connected component of the graph.
If you really do only need to know if the two vertices are in the same connected component, then there is a simple and efficient algorithm using a [Disjoint-set data structure](http://en.wikipedia.org/wiki/Disjoint-set_data_structure).
```
initialize the disjoint set structure (DSS)
for each edge:
for each vertex in edge:
if the vertex does not exist in the DSS:
create a new subset in the DSS containing only the vertex
merge the subsets of the two vertices
```
To determine if a path exists between two vertices after processing all the edges, just check if the two vertices are in the same subset. If they are, then some path exists between them.
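A minimal Java sketch of that disjoint-set approach (class and method names invented for the example), using a map so vertices can be created on demand and path compression to keep finds cheap:

```java
import java.util.HashMap;
import java.util.Map;

class DisjointSet {
    // parent pointer per vertex; a root points to itself
    private final Map<Integer, Integer> parent = new HashMap<>();

    private int find(int x) {
        parent.putIfAbsent(x, x);      // create a singleton set on demand
        int p = parent.get(x);
        if (p != x) {
            p = find(p);               // path compression
            parent.put(x, p);
        }
        return p;
    }

    void union(int a, int b) { parent.put(find(a), find(b)); }

    boolean connected(int a, int b) { return find(a) == find(b); }

    public static void main(String[] args) {
        DisjointSet ds = new DisjointSet();
        ds.union(1, 5); ds.union(3, 7); ds.union(5, 6); ds.union(2, 6);
        System.out.println(ds.connected(5, 2)); // true
        System.out.println(ds.connected(1, 7)); // false
        System.out.println(ds.connected(7, 4)); // false: 4 is edge-less
    }
}
```

Feeding in the edges from the question reproduces the expected answers: 5 2 is reachable, while 1 7 and 7 4 are not.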
With an efficient implementation of the DSS, this algorithm achieves just slightly worse than linear time, and even with a simple linked-list implementation of the DSS it's O(n`*`log(n)). As j`_`random`_`hacker mentions, Floyd-Warshall is O(n^3) time and O(n^2) storage no matter if you are only calculating transitive closure or not, and using Dijkstra's algorithm requires an O(n`*`log(n)) calculation for each query. | You are basically looking to test whether a given pair of nodes has a path between them. This is a general case of the shortest path problem. Note, however, it suffices if we can find a shortest path between the pair of nodes in question. Use whatever representation suits you (adjacency matrix, adjacency list, edge-sets, union-find ...) and go ahead with a BFS/Dijkstra implementation for all pairs of nodes. It is then only a matter of servicing queries. Or, you can run Dijkstra/BFS on a lazy basis (and cache past computations in an incremental manner). | How can I search a graph for a path? | [
"",
"java",
"algorithm",
"data-structures",
""
] |
I am developing an email queue module for my PHP application. All emails (user notifications, password reminders ...) will be put into the queue and sent by a cron process based on the priority of the emails. But I will have a newsletter module soon, so my question is: should I keep the newsletter in a separate queue, or can I use the centralized queue, since I have a priority attribute for all emails?
Thanks. | Word of caution: Do you have experience building email senders? It's a hairy adventure and you're almost always better off outsourcing the task. Email deliverability is not easy or predictable.
You can stick with one queue, but be sure to have the ability to specify which IP address a particular email can be sent from. You'll want to have different IP addresses for sending newsletters, signups, invoices, etc. And even further, you'll want to have an IP for sending newsletters to trusted addresses and untrusted addresses. | If you can do them with the same module, I'd consider that preferable since there's less code to worry about.
The only *potential* problem I can see is the differing nature of the two email types. User notifications and password reminders would tend to have one recipient. Newsletters would be emailed to all of your users at once.
If this doesn't cause a problem (and you can't see any other problems), I'd stick with the one-mailer-to-rule-them-all approach. | Email queue system for massive mailing | [
"",
"php",
"email",
""
] |
So I made myself a little HTML/CSS template. Now I'm trying to actually use it with some PHP code; however, it only renders text. The images and CSS aren't there. Everything is in the templates/Default directory. Do I have to do something funky with my paths in the template? | Use [firebug](http://getfirebug.com/) and see why the images don't show up. You need to activate the "NET" tab and then you see all the requests being made when your browser requests the page. My guess is that the paths to CSS and images are incorrect. | Wow, that's a weird question... :)
Just check what URLs the browser requests when looking for the stylesheets/images. I think you will have to adjust paths that you use in your smarty template. | Smarty not rendering images and css | [
"",
"php",
"html",
"css",
"smarty",
""
] |
In Python, what happens when two modules attempt to `import` each other? More generally, what happens if multiple modules attempt to `import` in a cycle?
---
See also [What can I do about "ImportError: Cannot import name X" or "AttributeError: ... (most likely due to a circular import)"?](https://stackoverflow.com/questions/9252543/) for the common problem that may result, and advice on how to rewrite code to avoid such imports. See [Why do circular imports seemingly work further up in the call stack but then raise an ImportError further down?](https://stackoverflow.com/questions/22187279) for technical details on **why and how** the problem occurs. | There was a really good discussion on this over at [comp.lang.python](http://groups.google.com/group/comp.lang.python/browse_thread/thread/1d80a1c6db2b867c) last year. It answers your question pretty thoroughly.
> Imports are pretty straightforward really. Just remember the following:
>
> 'import' and 'from xxx import yyy' are executable statements. They execute
> when the running program reaches that line.
>
> If a module is not in sys.modules, then an import creates the new module
> entry in sys.modules and then executes the code in the module. It does not
> return control to the calling module until the execution has completed.
>
> If a module does exist in sys.modules then an import simply returns that
> module whether or not it has completed executing. That is the reason why
> cyclic imports may return modules which appear to be partly empty.
>
> Finally, the executing script runs in a module named \_\_main\_\_, importing
> the script under its own name will create a new module unrelated to
> \_\_main\_\_.
>
> Take that lot together and you shouldn't get any surprises when importing
> modules. | If you do `import foo` (inside `bar.py`) and `import bar` (inside `foo.py`), it will work fine. By the time anything actually runs, both modules will be fully loaded and will have references to each other.
The problem is when instead you do `from foo import abc` (inside `bar.py`) and `from bar import xyz` (inside `foo.py`). Because now each module requires the other module to already be imported (so that the name we are importing exists) before it can be imported.
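A runnable sketch of the working plain-`import` cycle described here (the two module names are invented for the demo; they are written to a temp directory and then imported):

```python
import importlib
import os
import sys
import tempfile

tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "foo_demo.py"), "w") as f:
    f.write("import bar_demo\n\ndef abc():\n    return 'abc'\n")
with open(os.path.join(tmp, "bar_demo.py"), "w") as f:
    f.write("import foo_demo\n\ndef xyz():\n    return foo_demo.abc()\n")

sys.path.insert(0, tmp)
importlib.invalidate_caches()

bar = importlib.import_module("bar_demo")
# The cycle resolves: by the time xyz() actually runs, foo_demo is fully
# loaded, even though it was only partially initialized at the moment
# bar_demo's top-level `import foo_demo` executed.
print(bar.xyz())  # abc
```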
## Examples of working circular imports in Python 2 and Python 3
The article [When are Python circular imports fatal?](https://gist.github.com/Mark24Code/2073470277437f2241033c2003f98358) gives several examples where circular imports are, for the reason explained above, nonfatal.
### Top of module; no from; Python 2 only
```
# lib/foo.py # lib/bar.py
import bar import foo
def abc(): def xyz():
print(bar.xyz.__name__) print(foo.abc.__name__)
```
### Top of module; from ok; relative ok; Python 3 only
```
# lib/foo.py # lib/bar.py
from . import bar from . import foo
def abc(): def xyz():
print(bar.xyz.__name__) print(abc.__name__)
```
### Top of module; no from; no relative
```
# lib/foo.py # lib/bar.py
import lib.bar import lib.foo
def abc(): def xyz():
print(lib.bar.xyz.__name__) print(lib.foo.abc.__name__)
```
### Bottom of module; import attribute, not module; from okay
```
# lib/foo.py # lib/bar.py
def abc(): def xyz():
print(xyz.__name__) print(abc.__name__)
from .bar import xyz from .foo import abc
```
### Top of function; from okay
```
# lib/foo.py # lib/bar.py
def abc(): def xyz():
from . import bar from . import foo
print(bar.xyz.__name__) print(foo.abc.__name__)
```
## Additional examples
The article cited above does not discuss star imports. | What happens when using mutual or circular (cyclic) imports? | [
"",
"python",
"python-import",
"circular-dependency",
"cyclic-reference",
""
] |
I'd like to be able to allow community members to supply their own javascript code for others to use, because the users' imaginations are collectively far greater than anything I could think of.
But this raises the inherent question of security, particularly when the purpose is to *allow* external code to run.
So, can I just ban `eval()` from submissions and be done with it? Or are there other ways to evaluate code or cause mass panic in javascript?
There are other things to disallow, but my main concern is that unless I can prevent strings being executed, whatever other filters I put in for specific methods can be circumvented. Doable, or do I have to resort to demanding the author supplies a web service interface? | > Or are there other ways to evaluate code
You can't filter out calls to `eval()` at a script-parsing level because JavaScript is a Turing-complete language in which it is possible to obfuscate calls, e.g. see svinto's workaround. You could hide `window.eval` by overwriting it with a null value, but there are indeed other ways to evaluate code, including (just off the top of my head):
* new Function('code')()
* document.write('%3Cscript>code%3C/script>')
* document.createElement('script').appendChild(document.createTextNode('code'))
* window.setTimeout('code', 0);
* window.open(...).eval('code')
* location.href='javascript:code'
* in IE, style/node.setExpression('someproperty', 'code')
* in some browsers, node.onsomeevent= 'code';
* in older browsers, Object.prototype.eval('code')
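As a concrete illustration of why blacklisting the literal string `eval` fails, the `Function` constructor path above compiles an arbitrary string into a callable (this sketch runs in Node as well as a browser; the variable names are invented):

```javascript
// Even with window.eval removed or filtered out of the source text,
// the Function constructor still turns a string into executable code.
const payload = "return a + b;"; // imagine this arrived from user input
const compiled = new Function("a", "b", payload);
console.log(compiled(2, 3)); // 5

// Obfuscation makes string-level filtering even less reliable:
const name = "Fun" + "ction";
const ctor = globalThis[name];
console.log(ctor("return 6 * 7;")()); // 42
```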
> or cause mass panic in javascript?
Well createElement('iframe').src='http://evil.iframeexploitz.ru/aff=2345' is one of the worse attacks you can expect... but really, when a script has control, it can do anything a user can on your site. It can make them post “I'm a big old paedophile!” a thousand times on your forums and then delete their own account. For example.
> do I have to resort to demanding the author supplies a web service interface?
Yes, or:
* do nothing and let users who want this functionality download GreaseMonkey
* vet every script submission yourself
* use your own (potentially JavaScript-like) mini-language over which you actually have control
an example of the latter that may interest you is [Google Caja](http://code.google.com/p/google-caja/). I'm not entirely sure I'd trust it; it's a hard job and they've certainly had some security holes so far, but it's about the best there is if you really must take this approach. | Since [HTML5](http://en.wikipedia.org/wiki/HTML5) has now become available you can use a [sandbox](http://www.html5rocks.com/en/tutorials/security/sandboxed-iframes/) for untrusted JavaScript code.
The [OWASP HTML5 Security Cheat Sheet](https://www.owasp.org/index.php/HTML5_Security_Cheat_Sheet) comments on [Sandboxed frames](https://www.owasp.org/index.php/HTML5_Security_Cheat_Sheet#Sandboxed_frames):
> * Use the sandbox attribute of an iframe for untrusted content.
> * The sandbox attribute of an iframe enables restrictions on content within a `iframe`. The following restrictions are active when the sandbox attribute is set:
>
> 1. All markup is treated as being from a unique origin.
> 2. All forms and scripts are disabled.
> 3. All links are prevented from targeting other browsing contexts.
> 4. All features that triggers automatically are blocked.
> 5. All plugins are disabled.
>
> It is possible to have a fine-grained control over `iframe` capabilities using the value of the `sandbox` attribute.
> * In old versions of user agents where this feature is not supported, this attribute will be ignored. Use this feature as an additional layer of protection or check if the browser supports sandboxed frames and only show the untrusted content if supported.
> * Apart from this attribute, to prevent Clickjacking attacks and unsolicited framing it is encouraged to use the header `X-Frame-Options` which supports the `deny` and `same-origin` values. Other solutions like framebusting `if(window!== window.top) { window.top.location = location; }` are not recommended.
You can allow scripts to run while keeping the other restrictions in place. However, you should make sure that scripts run from a different domain than your main content in order to prevent [XSS](https://www.owasp.org/index.php/Cross-site_Scripting_(XSS)) attacks by an attacker redirecting a user to load the page directly (i.e. not via your IFrame).
This will restrict scripts from using `eval` to attack your main domain, but it may be that this would also prevent the scripts from actually being powerful enough for your needs. Any interaction with your main domain would have to be via [`Window.postMessage`](https://developer.mozilla.org/en-US/docs/Web/API/Window.postMessage). If this is too restrictive then [@bobince's answer](https://stackoverflow.com/a/739937/413180) still has the best suggestions for workarounds.
Please see [my other answer](https://stackoverflow.com/a/21244925/413180) for details of how a sandbox can be safely implemented. | How can I make external code 'safe' to run? Just ban eval()? | [
"",
"javascript",
"security",
"xss",
""
] |
I'm currently working on databinding some of my existing Windows Forms, and I've run into an issue figuring out the proper way of databinding a group of radio button controls within a group box.
My business object has an integer property which I want to databind against 4 radiobuttons (where each of them represents the values 0 - 3).
I'm currently binding against a presenter object which works as the binder between the form and the business object, and the way I've done it now is to have 4 separate properties which each binds against each of these values (I do use INotifyPropertyChanged, but not including that here):
```
private int _propValue;
public bool PropIsValue0
{
get { return _propValue == 0; }
set
{
if (value)
_propValue = 0;
}
}
public bool PropIsValue1 { // As above, but with value == 1 }
public bool PropIsValue2 { // As above, but with value == 2 }
public bool PropIsValue3 { // As above, but with value == 3 }
```
And I then bind each of the radiobuttons to their respective property as above.
This does not seem right to me, so any advice is highly appreciated. | Following is a generic RadioGroupBox implementation in the spirit of ArielBH's suggestion (some code borrowed from Jay Andrew Allen's [RadioPanel](http://www.codeproject.com/KB/combobox/RadioPanel.aspx)). Just add RadioButtons to it, set their tags to different integers and bind to the 'Selected' property.
```
public class RadioGroupBox : GroupBox
{
public event EventHandler SelectedChanged = delegate { };
int _selected;
public int Selected
{
get
{
return _selected;
}
set
{
int val = 0;
var radioButton = this.Controls.OfType<RadioButton>()
.FirstOrDefault(radio =>
radio.Tag != null
&& int.TryParse(radio.Tag.ToString(), out val) && val == value);
if (radioButton != null)
{
radioButton.Checked = true;
_selected = val;
}
}
}
protected override void OnControlAdded(ControlEventArgs e)
{
base.OnControlAdded(e);
var radioButton = e.Control as RadioButton;
if (radioButton != null)
radioButton.CheckedChanged += radioButton_CheckedChanged;
}
void radioButton_CheckedChanged(object sender, EventArgs e)
{
var radio = (RadioButton)sender;
int val = 0;
if (radio.Checked && radio.Tag != null
&& int.TryParse(radio.Tag.ToString(), out val))
{
_selected = val;
SelectedChanged(this, new EventArgs());
}
}
}
```
Note that you can't bind to the 'Selected' property via the designer due to initialization order problems in InitializeComponent (the binding is performed before the radio buttons are initialized, so their tag is null in the first assignment). So just bind yourself like so:
```
public Form1()
{
InitializeComponent();
//Assuming selected1 and selected2 are defined as integer application settings
radioGroup1.DataBindings.Add("Selected", Properties.Settings.Default, "selected1");
radioGroup2.DataBindings.Add("Selected", Properties.Settings.Default, "selected2");
}
``` | I know this post is old but in my search for an answer for this same problem I came across this post and it didn't solve my problem. I ended up having a lightbulb go off randomly just a minute ago and wanted to share my solution.
I have three radio buttons in a group box. I'm using a List<> of a custom class object as the data source.
Class Object:
```
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
namespace BAL
{
class ProductItem
{
// Global Variable to store the value of which radio button should be checked
private int glbTaxStatus;
// Public variable to set initial value passed from
// database query and get value to save to database
public int TaxStatus
{
get { return glbTaxStatus; }
set { glbTaxStatus = value; }
}
// Get/Set for 1st Radio button
public bool Resale
{
// If the Global Variable = 1 return true, else return false
get
{
if (glbTaxStatus.Equals(1))
{
return true;
}
else
{
return false;
}
}
// If the value being passed in = 1 set the Global Variable = 1, else do nothing
set
{
if (value.Equals(true))
{
glbTaxStatus = 1;
}
}
}
// Get/Set for 2nd Radio button
public bool NeverTax
{
// If the Global Variable = 2 return true, else return false
get
{
if (glbTaxStatus.Equals(2))
{
return true;
}
else
{
return false;
}
}
// If the value being passed in = 2 set the Global Variable = 2, else do nothing
set
{
if (value.Equals(true))
{
glbTaxStatus = 2;
}
}
}
// Get/Set for 3rd Radio button
public bool AlwaysTax
{
// If the Global Variable = 3 return true, else return false
get
{
if (glbTaxStatus.Equals(3))
{
return true;
}
else
{
return false;
}
}
// If the value being passed in = 3 set the Global Variable = 3, else do nothing
set
{
if (value.Equals(true))
{
glbTaxStatus = 3;
}
}
}
// More code ...
```
Three separate public variables with get/set accessors, all backed by the same global variable.
In the code behind, I have a function called during the Page\_Load() setting all the controls databindings. For each radio button I add its own databinging.
```
radResale.DataBindings.Add("Checked", glbProductList, "Resale", true, DataSourceUpdateMode.OnPropertyChanged, false);
radNeverTax.DataBindings.Add("Checked", glbProductList, "NeverTax", true, DataSourceUpdateMode.OnPropertyChanged, false);
radAlwaysTax.DataBindings.Add("Checked", glbProductList, "AlwaysTax", true, DataSourceUpdateMode.OnPropertyChanged, false);
```
I hope this helps someone!! | Best way to databind a group of radiobuttons in WinForms | [
"",
"c#",
"winforms",
"data-binding",
"radio-button",
""
] |
How can I create a function that will have a dynamic return type based on the parameter type?
Like
```
protected DynamicType Test(DynamicType type)
{
return ;
}
``` | You'd have to use generics for this. For example,
```
protected T Test<T>(T parameter)
{
}
```
In this example, the '`<T>`' tells the compiler that it represents the name of a type, but you don't know what that is in the context of creating this function. So you'd end up calling it like...
```
int foo;
int bar = Test<int>(foo);
``` | Whilst the accepted answer is good, it has been over two years since it was written, so, I should add that you can use:
```
protected dynamic methodname(dynamic input)
{
return input;
}
```
Input will be returned as the same type, and you do not need to call the method as a generic.
**Reference:**
<https://msdn.microsoft.com/en-us/library/dd264736.aspx> | dynamic return type of a function | [
"",
"c#",
""
] |
Given a class that keeps a registry of its Objects:
```
class Person(object):
__registry = []
def __init__(self, name):
self.__registry.append(self)
self.name = name
```
How would I make the following code work (without using Person.\_\_registry):
```
for personobject in Person:
print personobject
```
While researching I found a hint that one could go for a `__metaclass__` with a `__getitem__`-method. Any ideas how this would look like? | You can make your class object iterable with a simple metaclass.
```
class IterRegistry(type):
def __iter__(cls):
return iter(cls._registry)
class Person(object):
__metaclass__ = IterRegistry
_registry = []
def __init__(self, name):
self._registry.append(self)
self.name = name
```
(I have also changed `__registry` to `_registry` to make it easier to access from the metaclass).
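A side note not in the original answer: the snippet above is Python 2 syntax. Under Python 3 the `__metaclass__` attribute is ignored and the metaclass is passed as a keyword argument instead; a minimal sketch of the same idea:

```python
class IterRegistry(type):
    def __iter__(cls):
        # iterating over the class delegates to its registry list
        return iter(cls._registry)

class Person(metaclass=IterRegistry):  # Python 3 spelling of __metaclass__
    _registry = []

    def __init__(self, name):
        self._registry.append(self)
        self.name = name

Person('John')
Person('Mary')
print([p.name for p in Person])  # ['John', 'Mary']
```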
Then,
```
>>> p = Person('John')
>>> p2 = Person('Mary')
>>> for personobject in Person:
... print personobject
...
<person.Person object at 0x70410>
<person.Person object at 0x70250>
``` | First, do not use double `__` names. They're reserved for use by Python. If you want "private" use single `_`.
Second, keep this kind of thing as simple as possible. Don't waste a lot of time and energy on something complex. This is a simple problem, keep the code as simple as possible to get the job done.
```
class Person(object):
_registry = []
def __init__(self, name):
self._registry.append(self)
self.name = name
for p in Person._registry:
print p
``` | Iterating over object instances of a given class in Python | [
"",
"python",
"oop",
""
] |
I'm trying to write a test application for serial [I/O](http://en.wikipedia.org/wiki/Input/output) ([RS-232](http://en.wikipedia.org/wiki/RS-232)) with multiple units in C# and I'm running into an issue with my lack of threading experience so I'm soliciting feedback for a best known method.
I've got a pool of COM ports COM1-16 each of which can read/write at any time and I need to be able to manage them simultaneously. Is this a situation for a thread pool? What is some guidance on structuring this applet?
Edit:
Upon review I was wondering if I really even need to do asynchronous threads here; I could just maintain state for each COM port and do flow logic (i.e., a state machine) for each COM port individually. | Much of the difficulty surrounding the serial port is centered around an assumption. The assumption is that the serial port receives data in chunks that are convenient, and what is expected.
Here is an example. I know my GPS receiver sends lines (ends with CRLF). This is an example of one of the NMEA sentences:
$GPGSV,3,1,11,10,75,053,29,29,52,311,32,24,50,298,30,02,39,073,30\*77
However, the serial port's DataReceived event handler might (and on my PC usually does) fire several times with chunks of that data.
event fire - data
```
1 $
2 GPGSV,3,1,11,10
3 ,75,053,29,29,52,311,32,24,50,298,30,02,39,073,30*77
```
Instead of fighting this I created some routines that receive data whenever the event fires, and queue it up. When I need the data I call some other routines that put the data back together in chunk sizes **I want**. So, using my example, the first and second time I call readline (my own readline) I get back an empty answer.
The third time I get the entire NMEA sentence back.
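That buffering idea is language-independent. A minimal Python sketch of the same reassembly, with made-up chunk values standing in for real serial traffic:

```python
class LineBuffer:
    """Accumulate arbitrary chunks; hand back only complete CRLF-terminated lines."""

    def __init__(self):
        self._buf = ''

    def feed(self, chunk):
        # called from the data-received event, however the chunks arrive
        self._buf += chunk

    def readline(self):
        # returns '' until a whole line has accumulated
        line, sep, rest = self._buf.partition('\r\n')
        if not sep:
            return ''
        self._buf = rest
        return line

buf = LineBuffer()
for chunk in ['$', 'GPGSV,3,1,11,10', ',75,053,29*77\r\n']:
    buf.feed(chunk)
    print(repr(buf.readline()))  # '', '', then the whole sentence
```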
The bad news is that I don't know C#. The code is here [SerialPort](http://www.vbforums.com/showthread.php?t=530790)
Depending on the speed of the ports, delegates may not be a good choice. I tested my routines at near 1 Mbps using delegates, and not using delegates. At those speeds, not using delegates was a better choice.
Here are some tips from those in the know
[Kim Hamilton](http://blogs.msdn.com/bclteam/archive/2006/10/10/Top-5-SerialPort-Tips-_5B00_Kim-Hamilton_5D00_.aspx) | If you use the backgroundworker you won't be grabbing small
pieces of the data, you'll get the entire string. You need
something like:
```
private void backgroundWorker1_DoWork(object sender, DoWorkEventArgs e)
{
    try
    {
        serialPort1.Open();
        if (serialPort1.IsOpen)
        {
            while (keepReading)
            {
                backgroundWorker1.ReportProgress(0, serialPort1.ReadLine());
                //backgroundWorker1.ReportProgress(0, sentence.Split(','));
                // split_gps_data();
            }
        }
    }
    catch (Exception ex)
    {
        // a try block needs a catch (or finally) to compile
        backgroundWorker1.ReportProgress(0, ex.Message);
    }
}
``` | Multiple port asynchronous I/O over serialport in C# | [
"",
"c#",
".net",
"serial-port",
""
] |
Is it possible to use **Toplink Essentials** without an application server, just in a desktop application? | Yes, but for certain behaviour you may have to use the JVM startup options to enable the TopLink agent. I believe this is for lazy loading or something similar.
Do you mean TopLink or TopLink Essentials? The first is a commercial Oracle product. The latter is the reference implementation of JPA 1.0, which is a somewhat cut down version.
Honestly I don't think there's any reason to use anything other than EclipseLink (which will be the JPA 2.0 reference implementation) or Hibernate.
To actually do this:
* [Hibernate for Java SE](http://www.onjava.com/pub/a/onjava/2005/11/16/hibernate-for-java-se.html);
* [Eclipselink in J2SE RCP Applications](http://www.weheartcode.com/2008/08/27/eclipselink-in-j2se-rcp-applications/); and
* [Using the Java Persistence API in Desktop Applications](http://java.sun.com/developer/technicalArticles/J2SE/Desktop/persistenceapi/). | Yes it is, as long as you don't need JNDI naming services. Toplink and Hibernate can both run without an app server. You just need a relational database and a JDBC driver. | Toplink without application server | [
"",
"java",
"database",
"toplink",
""
] |
I am having fun with WPF and ran into a problem. I googled and found this [website](http://social.msdn.microsoft.com/Forums/en-US/wpf/thread/878ea631-c76e-49e8-9e25-81e76a3c4fe3/) describing the same problem as mine, but without any working solution.
The problem is that I have a button that does some processing of data (around 30 sec). I want the button to be disabled and log lines written to a text box... the problem is that it doesn't disable and doesn't write anything to the textbox until the processing is completely done.
Any idea?
```
private void button1_Click(object sender, RoutedEventArgs e)
{
this.button1.IsEnabled = false;
//Long stuff here
txtLog.AppendText(Environment.NewLine + "Blabla");
//End long stuff here
this.button1.IsEnabled = true;
}
``` | As others have said, use the [`BackgroundWorker`](http://msdn.microsoft.com/en-us/library/system.componentmodel.backgroundworker.aspx) or some other method of doing work asychronously.
You can declare it under your `Window`, initialize it somewhere like the `Loaded` event, and use it in the `Click` event. Here's your method, modified to use `BackgroundWorker`, assuming you've declared it under the `Window` as `_bw`:
```
private void Window_Loaded(object sender, RoutedEventArgs e)
{
_bw = new BackgroundWorker();
_bw.DoWork += new DoWorkEventHandler((o, args) =>
{
//Long stuff here
this.Dispatcher.Invoke((Action)(() => txtLog.AppendText(Environment.NewLine + "Blabla")));
});
_bw.RunWorkerCompleted += new RunWorkerCompletedEventHandler((o, args) =>
{
//End long stuff here
this.Dispatcher.Invoke((Action)(() => this.button1.IsEnabled = true));
});
}
private void button1_Click(object sender, RoutedEventArgs e)
{
this.button1.IsEnabled = false;
_bw.RunWorkerAsync();
}
```
Note that anything that modifies your UI from another thread must be done within a [`Dispatcher.Invoke`](http://msdn.microsoft.com/en-us/library/system.windows.threading.dispatcher.invoke.aspx) or [`Dispatcher.BeginInvoke`](http://msdn.microsoft.com/en-us/library/system.windows.threading.dispatcher.begininvoke.aspx) call, WPF does not allow you to get or set `DependencyProperty` values from any thread but the one where the object was created (more about this [here](http://msdn.microsoft.com/en-us/library/system.windows.threading.dispatcherobject.checkaccess.aspx)).
If you wanted to read from `txtLog` instead of modifying it, the code would be the same:
```
//Long stuff here
this.Dispatcher.Invoke((Action)(() =>
{
string myLogText = txtLog.Text;
myLogText = myLogText + Environment.NewLine + "Blabla";
txtLog.Text = myLogText;
}));
``` | do it async. create a backgroundworker process to handle the data and the application will continue to respond. [MSDN Resources on the Class](http://msdn.microsoft.com/en-us/library/system.componentmodel.backgroundworker.aspx). Since WPF is using C# (or VB.net) you can still use the same types of threading objects. I've used the background worker successfully in a WPF app myself. | WPF application in a loop, how to not have the whole application freeze? | [
"",
"c#",
".net",
"wpf",
"multithreading",
""
] |
I am using the PHP function mkdir($path, 0777) to create a new directory on an Apache server (running on Windows).
When I create this and view the folders, Windows has set the dir as read only. Is there a way of changing this without using exec() to cacls.exe? The PHP documentation states that chmod() doesn't work on Windows but doesn't say anything about a replacement method.
Google seems to be a little unhelpful on this one.
EDIT: After several minutes talking to administrators at my two-bit educational institution, it seems that their test PHP server for students is being run off of a USB drive. I have a feeling that this will have something to do with it *slaps head* | The following:
```
<?php
mkdir( "foo" )
?>
```
(note no mode specified) works for me on Win2K - it creates a folder with no attributes set.
**Edit:** Just had a thought - could it be something to do with permissions? Try running the above code from the command line & see what happens:
```
C:> c:\php\php.exe
<?php
mkdir( "foo" )
?>
^Z
I am using PHP and CakePHP on Windows with `mkdir("my dir name", 0777, true);`
and it is working fine for me.
"",
"php",
"windows",
"directory",
"chmod",
""
] |
I have a local class with a method used to build a list of strings, and I'm finding that when I hit this method (in a for loop, 1000 times) it often doesn't return the amount I request.
I have a global variable:
```
string[] cachedKeys
```
A parameter passed to the method:
```
int requestedNumberToGet
```
The method looks similar to this:
```
List<string> keysToReturn = new List<string>();
int numberPossibleToGet = (cachedKeys.Length <= requestedNumberToGet) ?
cachedKeys.Length : requestedNumberToGet;
Random rand = new Random();
DateTime breakoutTime = DateTime.Now.AddMilliseconds(5);
//Do we have enough to fill the request within the time? otherwise give
//however many we currently have
while (DateTime.Now < breakoutTime
&& keysToReturn.Count < numberPossibleToGet
&& cachedKeys.Length >= numberPossibleToGet)
{
string randomKey = cachedKeys[rand.Next(0, cachedKeys.Length)];
if (!keysToReturn.Contains(randomKey))
keysToReturn.Add(randomKey);
}
if (keysToReturn.Count != numberPossibleToGet)
Debugger.Break();
```
I have approximately 40 strings in cachedKeys none exceeding 15 characters in length.
I'm no expert with threading so I'm literally just calling this method 1000 times in a loop and consistently hitting that debug there.
The machine this is running on is a fairly beefy desktop so I would expect the breakout time to be realistic, in fact it randomly breaks at any point of the loop (I've seen 20s, 100s, 200s, 300s).
Anyone have any ideas where I'm going wrong with this?
**Edit: Limited to .NET 2.0**
**Edit: The purpose of the breakout is so that if the method is taking too long to execute, the client (several web servers using the data for XML feeds) won't have to wait while the other project dependencies initialise, they'll just be given 0 results.**
**Edit: Thought I'd post the performance stats**
**Original**
* '0.0042477465711424217323710136' - 10
* '0.0479597267250446634977350473' - 100
* '0.4721072091564710039963179678' - 1000
**Skeet**
* '0.0007076318358897569383818334' - 10
* '0.007256508857969378789762386' - 100
* '0.0749829936486341141122684587' - 1000
**Freddy Rios**
* '0.0003765841748043396576939248' - 10
* '0.0046003053460705201359390649' - 100
* '0.0417058592642360970458535931' - 1000 | The main issue is the use of retries in a random scenario to ensure you get unique values. This quickly gets out of control, especially if the number of items requested is close to the number of items available; increasing the number of keys makes the issue show up less often, but it can be avoided entirely.
The following method does it by keeping a list of the keys remaining.
```
List<string> GetSomeKeys(string[] cachedKeys, int requestedNumberToGet)
{
int numberPossibleToGet = Math.Min(cachedKeys.Length, requestedNumberToGet);
List<string> keysRemaining = new List<string>(cachedKeys);
List<string> keysToReturn = new List<string>(numberPossibleToGet);
Random rand = new Random();
for (int i = 0; i < numberPossibleToGet; i++)
{
int randomIndex = rand.Next(keysRemaining.Count);
keysToReturn.Add(keysRemaining[randomIndex]);
keysRemaining.RemoveAt(randomIndex);
}
return keysToReturn;
}
```
The timeout was necessary in your version as you could potentially keep retrying to get a value for a long time, especially when you wanted to retrieve the whole list, in which case you would almost certainly get a failure with the version that relies on retries.
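As an illustrative aside (not part of the original answer): the retry approach is essentially the coupon-collector problem, whose expected number of draws to see all n keys grows like n·H(n) — about 171 draws for 40 keys. A quick Python simulation of that blow-up:

```python
import random

def draws_until_all_seen(n, rng=random):
    """Count random draws until every one of n distinct keys has appeared."""
    seen, draws = set(), 0
    while len(seen) < n:
        seen.add(rng.randrange(n))
        draws += 1
    return draws

random.seed(42)
trials = [draws_until_all_seen(40) for _ in range(200)]
print(sum(trials) / len(trials))  # averages roughly 40 * H(40), i.e. ~171
```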
**Update:** The above performs better than these variations:
```
List<string> GetSomeKeysSwapping(string[] cachedKeys, int requestedNumberToGet)
{
int numberPossibleToGet = Math.Min(cachedKeys.Length, requestedNumberToGet);
List<string> keys = new List<string>(cachedKeys);
List<string> keysToReturn = new List<string>(numberPossibleToGet);
Random rand = new Random();
for (int i = 0; i < numberPossibleToGet; i++)
{
int index = rand.Next(numberPossibleToGet - i) + i;
keysToReturn.Add(keys[index]);
keys[index] = keys[i];
}
return keysToReturn;
}
List<string> GetSomeKeysEnumerable(string[] cachedKeys, int requestedNumberToGet)
{
Random rand = new Random();
return TakeRandom(cachedKeys, requestedNumberToGet, rand).ToList();
}
```
Some numbers with 10.000 iterations:
```
Function Name Elapsed Inclusive Time Number of Calls
GetSomeKeys 6,190.66 10,000
GetSomeKeysEnumerable 15,617.04 10,000
GetSomeKeysSwapping 8,293.64 10,000
``` | Why not just take a copy of the list - O(n) - shuffle it, also O(n) - and then return the number of keys that have been requested. In fact, the shuffle only needs to be O(nRequested). Keep swapping a random member of the unshuffled bit of the list with the very start of the unshuffled bit, then expand the shuffled bit by 1 (just a notional counter).
EDIT: Here's some code which yields the results as an `IEnumerable<T>`. Note that it uses deferred execution, so if you change the source that's passed in before you first start iterating through the results, you'll see those changes. After the first result is fetched, the elements will have been cached.
```
static IEnumerable<T> TakeRandom<T>(IEnumerable<T> source,
int sizeRequired,
Random rng)
{
List<T> list = new List<T>(source);
sizeRequired = Math.Min(sizeRequired, list.Count);
for (int i=0; i < sizeRequired; i++)
{
int index = rng.Next(list.Count-i);
T selected = list[i + index];
list[i + index] = list[i];
list[i] = selected;
yield return selected;
}
}
```
The idea is that at any point after you've fetched `n` elements, the first `n` elements of the list will be those elements - so we make sure that we don't pick those again. We then pick a random element from "the rest", swap it to the right position and yield it.
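The same partial-shuffle trick sketched in Python for illustration (not the original C#; `rng` defaults to the `random` module):

```python
import random

def take_random(source, size_required, rng=random):
    """Partial Fisher-Yates: after i picks, items[:i] holds exactly those picks,
    so an element can never be drawn twice."""
    items = list(source)
    for i in range(min(size_required, len(items))):
        j = rng.randrange(i, len(items))  # random index within the unshuffled tail
        items[i], items[j] = items[j], items[i]
        yield items[i]

print(list(take_random(range(10), 4)))  # four distinct values drawn from 0..9
```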
Hope this helps. If you're using C# 3 you might want to make this an extension method by putting "this" in front of the first parameter. | What's wrong in terms of performance with this code? List.Contains, random usage, threading? | [
"",
"c#",
"multithreading",
"loops",
"list",
"performance",
""
] |
I build a list of Django model objects by making several queries. Then I want to remove any duplicates (all of these objects are of the same type with an auto\_increment int PK), but I can't use set() because they aren't hashable.
Is there a quick and easy way to do this? I'm considering using a dict instead of a list with the id as the key. | > Is there a quick and easy way to do this? I'm considering using a dict instead of a list with the id as the key.
That's exactly what I would do if you were locked into your current structure of making several queries. Then a simple `dictionary.values()` call will return your list.
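A minimal pure-Python sketch of that dict-keyed-by-id idea — the `Obj` class below is a hypothetical stand-in for a model instance, not Django code, and the order-preserving `dict` assumes Python 3.7+:

```python
def dedupe(objects):
    """Keep the first object seen for each primary key, preserving order."""
    by_pk = {}
    for obj in objects:
        by_pk.setdefault(obj.pk, obj)
    return list(by_pk.values())

class Obj:  # hypothetical stand-in for a Django model instance
    def __init__(self, pk):
        self.pk = pk

rows = [Obj(1), Obj(2), Obj(1), Obj(3)]
print([o.pk for o in dedupe(rows)])  # [1, 2, 3]
```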
If you have a little more flexibility, why not use `Q` objects? Instead of actually making the queries, store each query in a `Q` object and use a bitwise or ("|") to execute a single query. This will achieve your goal and save database hits.
[Django Q objects](http://docs.djangoproject.com/en/dev/topics/db/queries/#complex-lookups-with-q-objects) | In general it's better to combine all your queries into a single query if possible. Ie.
```
q = Model.objects.filter(Q(field1=f1)|Q(field2=f2))
```
instead of
```
q1 = Models.object.filter(field1=f1)
q2 = Models.object.filter(field2=f2)
```
If the first query is returning duplicated Models then use distinct()
```
q = Model.objects.filter(Q(field1=f1)|Q(field2=f2)).distinct()
```
If your query really is impossible to execute with a single command, then you'll have to resort to using a dict or other technique recommended in the other answers. It might be helpful if you posted the exact query on SO and we could see if it would be possible to combine into a single query. In my experience, most queries can be done with a single queryset. | Django models - how to filter out duplicate values by PK after the fact? | [
"",
"python",
"django",
"data-structures",
"set",
"unique",
""
] |
I would like to apply a function to a Java collection, in this particular case a map. Is there a nice way to do this? I have a map and would like to just run trim() on all the values in the map and have the map reflect the updates. | With Java 8's lambdas, this is a one liner:
```
map.replaceAll((k, v) -> v.trim());
```
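(An aside not in the original answer: the equivalent one-pass value rewrite in Python, for comparison — the null-check mirrors the Java version's, assuming non-string values should pass through untouched.)

```python
m = {'a': '  x ', 'b': None, 'c': ' y'}
# rebuild the dict, trimming every string value in one pass
m = {k: (v.strip() if isinstance(v, str) else v) for k, v in m.items()}
print(m)  # {'a': 'x', 'b': None, 'c': 'y'}
```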
For the sake of history, here's a version without lambdas:
```
public void trimValues(Map<?, String> map) {
for (Map.Entry<?, String> e : map.entrySet()) {
String val = e.getValue();
if (val != null)
e.setValue(val.trim());
}
}
```
Or, more generally:
```
interface Function<T> {
T operate(T val);
}
public static <T> void replaceValues(Map<?, T> map, Function<T> f)
{
for (Map.Entry<?, T> e : map.entrySet())
e.setValue(f.operate(e.getValue()));
}
Util.replaceValues(myMap, new Function<String>() {
public String operate(String val)
{
return (val == null) ? null : val.trim();
}
});
``` | I don't know a way to do that with the JDK libraries other than your accepted response, however Google Collections lets you do the following thing, with the classes `com.google.collect.Maps` and `com.google.common.base.Function`:
```
Map<?, String> trimmedMap = Maps.transformValues(untrimmedMap, new Function<String, String>() {
    public String apply(String from) {
        if (from != null)
            return from.trim();
        return null;
    }
});
```
The biggest difference of that method with the proposed one is that it provides a view to your original map, which means that, while it is always in sync with your original map, the `apply` method could be invoked many times if you are manipulating said map heavily.
A similar `Collections2.transform(Collection<F>,Function<F,T>)` method exists for collections. | Java collection/map apply method equivalent? | [
"",
"java",
"collections",
"apply",
""
] |
I am trying to wrap my jar as an exe using launch4j. However I am using the lwjgl library and having trouble linking the native dll's. My directory structure is as follows:
I have a top directory which contains the following path: top/lib/lwjgl/native/win32 where my lwjgl dll's are contained.
There is also a dist directory that contains my jar top/dist/myapp.jar
I can run my program from the commandline within the dist dir using the following command:
java -ea -Djava.library.path=../lib/lwjgl/native/win32 -jar app.jar
and it works perfectly. Now I defined my launch4j xml file to reside within the dist dir with a commandline option of -ea -Djava.library.path=../lib/lwjgl/native/win32
However, when I try to run the exe file I get an unsatisfied link error. (Meaning it cannot find my lwjgl dlls).
I have tried defining this in multiple different ways: I defined the changedir as ../ and used -ea -Djava.library.path=lib/lwjgl/native/win32, and I also tried moving my exe to the top directory and using /dist/app.jar with the lib path, but nothing seems to work.
Has anyone had a problem similar to this before? How can I get launch4j to recognize my dll path?
thanks. | I bypass this problem by copying all native (.dll) files to the "top" dir next to the game exe. | On the launch4j forums this was asked in relation to SWT which requires dll's. See [this](http://sourceforge.net/forum/forum.php?thread_id=1489745&forum_id=332684) or [this](http://sourceforge.net/forum/forum.php?thread_id=1596083&forum_id=332684) or search for how others dealt with SWT issues. | How to link lwjgl dll with launch4j | [
"",
"java",
"launch4j",
""
] |
I have a module whose purpose is to define a class called "nib" (and a few related classes too). What should I call the module itself? "nib"? "nibmodule"? Anything else? | Just nib. Name the class Nib, with a capital N. For more on naming conventions and other style advice, see [PEP 8](https://www.python.org/dev/peps/pep-0008/#package-and-module-names), the Python style guide.
In a larger python project I'm working on, we have lots of modules defining basically one important class. Classes are named beginning with a capital letter. The modules are named like the class in lowercase. This leads to imports like the following:
```
from nib import Nib
from foo import Foo
from spam.eggs import Eggs, FriedEggs
```
It's a bit like emulating the Java way: one class per file. But with the added flexibility that you can always add another class to a single file if it makes sense. | Python naming conventions for modules | [
"",
"python",
"naming-conventions",
""
] |
I want to pass an Array to a template and afterwards use it via JavaScript.
In my *views.py* I have:
```
arry1 = ['Str',500,20]
return render_to_response('test.html', {'array1': arry1})
```
And in my template:
```
var array1 = {{ array1 }};
```
but when I visit the website it outputs:
```
var array1 = [&#39;Str&#39;, 500, 20];
```
What do I have to change? | Try using `{{ array1|safe }}` and see if that makes any difference. | As mentioned, you could use the `|safe` filter so Django doesn't sanitize the array and leaves it as is.
Another option, and probably the better one for the long term, is to use the `simplejson` module (it's included with Django) to format your Python list into a JSON string which you can spit back to the JavaScript. You can loop through the JSON object just like you would any array, really.
```
from django.utils import simplejson
values = [1, 2, 3, 'String1']  # avoid shadowing the built-in name "list"
json_list = simplejson.dumps(values)
render_to_response(template_name, {'json_list': json_list})
```
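(An aside: `django.utils.simplejson` was removed in later Django versions; the standard-library `json` module does the same job. A sketch:)

```python
import json

values = [1, 2, 3, 'String1']
json_list = json.dumps(values)  # a string that is also a valid JS literal
print(json_list)  # [1, 2, 3, "String1"]
```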
And in your Javascript, just `{{ json_list }}` | How to pass an array in Django to a template and use it with JavaScript | [
"",
"javascript",
"django",
"django-templates",
""
] |
Error:
error C2678: binary '==' : no operator found which takes a left-hand operand of type 'const entry' (or there is no acceptable conversion)
The function:
```
template <class T, int maxSize>
int indexList<T, maxSize>::search(const T& target) const
{
for (int i = 0; i < maxSize; i++)
if (elements[i] == target) //ERROR???
return i; // target found at position i
// target not found
return -1;
}
```
[indexList.h](http://www2.cs.uregina.ca/%7Emouhoubm/=teaching/=cs170/=indexList/indexList.h)
[indexList.cpp](http://www2.cs.uregina.ca/%7Emouhoubm/=teaching/=cs170/=indexList/indexList.cpp)
Is this supposed to be an overloaded operator? Since this is a template class, I am not sure I understand the error.
Solution:
The overloaded function in the class is now declared const:
```
//Operators
bool entry::operator == (const entry& dE) const <--
{
return (name ==dE.name);
}
``` | Start by reading the error text exactly as it is:
> binary '==' : no operator found which takes a left-hand operand of type 'const entry'
It means it can't find any `==` operator that accepts an `entry` type as its left operand. This code isn't valid:
```
entry const e;
if (e == foo)
```
You've showed us the code for your list class, but that's not what the error is about. The error is about a lack of operators for the `entry` type, whatever that is. Either give the class an `operator==` function, or declare a standalone `operator==` function that accepts a `const entry&` as its first parameter.
```
struct entry {
bool operator==(const entry& other) const;
};
// or
bool operator==(const entry& lhs, const entry& rhs);
```
I think the latter is the preferred style. | The problem refers to the type T that is being used in this instance not having an operator== defined. I would guess from your question that this is a class 'entry'.
It might also be that the 'entry' class does not have the operator== defined correctly to take a const entry& as a parameter. | C++ template class error with operator == | [
"",
"c++",
"operators",
"equals-operator",
""
] |
Help,
I have this class
```
var jMath = {
pi2: Math.PI,
foo: function() {
return this.pi2;
}
}
```
I want to make the pi2 constant and I want jMath to inherit from the Math object. How do I do that? | Oh amusing, scratch all that, this is the correct version:
```
function JMath() {
this.foo = function() {
return this.PI;
}
}
JMath.prototype = Math;
var jMath = new JMath();
alert(jMath.foo());
```
(which matches what the other answer is here)
(I originally tried to set the prototype using "JMath.prototype = new Math()" which is how I've seen it other places, but the above works)
## Edit
Here's one way to do it as a singleton
```
// Execute an inline anon function to keep
// symbols out of global scope
var jMath = (function()
{
// Define the JMath "class"
function JMath() {
this.foo = function() {
return this.PI;
}
}
JMath.prototype = Math;
// return singleton
return new JMath();
})();
// test it
alert( jMath.PI );
// prove that JMath no longer exists
alert( JMath );
``` | Consider using `prototype`:
```
function JMath() {};
JMath.prototype = {
pi2: Math.PI,
foo: function() {
return this.pi2;
}
}
var j = new JMath();
j.pi2=44; j.foo(); // returns 44
delete j.pi2; j.foo(); // now returns Math.PI
```
The difference between this and @altCognito's answer is that here the fields of the object are shared and all point to the same things. If you don't use prototypes, you create new and unlinked instances in the constructor. You can override the prototype's value on a per-instance basis, and if you override it and then decide you don't like the override value and want to restore the original, use `delete` to remove the override which merely "shadows" the prototype's value.
Edit: if you want to inherit all the methods and fields of the Math object itself, but override some things without affecting the Math object, do something like this (change the name "Constructor1" to your liking):
```
function Constructor1() {};
Constructor1.prototype = Math;
function JMath() {};
JMath.prototype = new Constructor1();
JMath.prototype.pi2 = JMath.prototype.PI;
JMath.prototype.foo = function() { return this.pi2; }
var j = new JMath();
j.cos(j.foo()); // returns -1
```
edit 3: explanation for the Constructor1 function: This creates the following prototype chain:
```
j -> JMath.prototype -> Math
```
j is an instance of JMath. JMath's prototype is an instance of Constructor1. Constructor1's prototype is Math. JMath.prototype is where the overridden stuff "lives". If you're only implementing a few instances of JMath, you could make the overridden stuff be instance variables that are set up by the constructor JMath, and point directly to Math, like @altCognito's answer does. (j is an instance of JMath and JMath's prototype is Math)
There are 2 downsides of augmenting-an-object-in-the-constructor. (Not actually downsides necessarily) One is that declaring instance fields/methods in the constructor creates separate values for each instance. If you create a lot of instances of JMath, each instance's JMath.foo function will be a separate object taking up additional memory. If the JMath.foo function comes from its prototype, then all the instances share one object.
In addition, you can change JMath.prototype.foo after the fact and the instances will update accordingly. If you make the foo function in the constructor as a per-instance method, then once JMath objects are created, they are independent and the only way to change the foo function is by changing each one.
---
edit 2: as far as read-only properties go, you can't really implement them from within JavaScript itself; you need to muck around under the surface. However you can declare so-called "[getters](http://ejohn.org/blog/javascript-getters-and-setters/)" which effectively act as constants:
```
JMath.prototype.__defineGetter__("pi2", function() { return Math.PI; });
JMath.prototype.__defineSetter__("pi2", function(){}); // NOP
var j = new JMath();
j.pi2 = 77; // gee, that's nice
// (if the setter is not defined, you'll get an exception)
j.pi2; // evaluates as Math.PI by calling the getter function
```
Warning: The syntax for defining getters/setters [apparently is not something that IE implements nicely](http://annevankesteren.nl/2009/01/gettters-setters).
"",
"javascript",
"inheritance",
"constants",
""
] |
I'm trying to read from a text file to input data into my Java program. However, Eclipse continuously gives me a "Source not found" error no matter where I put the file.
I've made an additional sources folder in the project directory; the file in question is in both it and the project's bin folder, and Eclipse still can't find it.
I even put a copy of it on my desktop and tried pointing Eclipse there when it asked me to browse for the source lookup path.
No matter what I do it can't find the file.
here's my code in case it's pertinent:
```
System.out.println(System.getProperty("user.dir"));
File file = new File("file.txt");
Scanner scanner = new Scanner(file);
```
In addition, it says the user directory is the project directory, and there is a copy of the file there too.
I have no clue what to do.
Thanks,
Alex
After attempting the suggestion below and refreshing again, I was greeted by a host of errors.
```
FileNotFoundException(Throwable).<init>(String) line: 195
FileNotFoundException(Exception).<init>(String) line: not available
FileNotFoundException(IOException).<init>(String) line: not available
FileNotFoundException.<init>(String) line: not available
URLClassPath$JarLoader.getJarFile(URL) line: not available
URLClassPath$JarLoader.access$600(URLClassPath$JarLoader, URL) line: not available
URLClassPath$JarLoader$1.run() line: not available
AccessController.doPrivileged(PrivilegedExceptionAction<T>) line: not available [native method]
URLClassPath$JarLoader.ensureOpen() line: not available
URLClassPath$JarLoader.<init>(URL, URLStreamHandler, HashMap) line: not available
URLClassPath$3.run() line: not available
AccessController.doPrivileged(PrivilegedExceptionAction<T>) line: not available [native method]
URLClassPath.getLoader(URL) line: not available
URLClassPath.getLoader(int) line: not available
URLClassPath.access$000(URLClassPath, int) line: not available
URLClassPath$2.next() line: not available
URLClassPath$2.hasMoreElements() line: not available
ClassLoader$2.hasMoreElements() line: not available
CompoundEnumeration<E>.next() line: not available
CompoundEnumeration<E>.hasMoreElements() line: not available
ServiceLoader$LazyIterator.hasNext() line: not available
ServiceLoader$1.hasNext() line: not available
LocaleServiceProviderPool$1.run() line: not available
AccessController.doPrivileged(PrivilegedExceptionAction<T>) line: not available [native method]
LocaleServiceProviderPool.<init>(Class<LocaleServiceProvider>) line: not available
LocaleServiceProviderPool.getPool(Class<LocaleServiceProvider>) line: not available
NumberFormat.getInstance(Locale, int) line: not available
NumberFormat.getNumberInstance(Locale) line: not available
Scanner.useLocale(Locale) line: not available
Scanner.<init>(Readable, Pattern) line: not available
Scanner.<init>(ReadableByteChannel) line: not available
Scanner.<init>(File) line: not available
```
code used:
```
System.out.println(System.getProperty("user.dir"));
File file = new File(System.getProperty("user.dir") + "/file.txt");
Scanner scanner = new Scanner(file);
``` | Did you try refreshing (right click -> refresh) the project folder after copying the file in there? That will SYNC your file system with Eclipse's internal file system.
When you run Eclipse projects, the CWD (current working directory) is the project's root directory. Not bin's directory. Not src's directory, but the root dir.
Also, if you're in Linux, remember that its file systems are usually case sensitive. | Have you tried using an absolute path:
```
File file = new File(System.getProperty("user.dir") + "/file.txt");
``` | Read from file in eclipse | [
"",
"java",
"eclipse",
"file-io",
""
] |
I'm in the final stages of implementing a CodeIgniter site which requires a really simple login system. One user, and one password to protect the admin area. I think I will be using one of the many CodeIgniter authentication libraries, which should allow me to ensure that people can't read from the database without being logged in as admin (hopefully).
1. Are there any glaring holes in a setup like this?
2. Should I take further measures beyond using a library like this to
ensure the security of the data in the MySQL database?
It will be hosted on a shared server, meaning little or no server config will be possible.
The latest version of Redux seems to be the favorite.
<http://code.google.com/p/reduxauth/> | I have used in the past (with excellent results) DX Auth 1.0.6.
You can find it at <http://codeigniter.com/forums/viewthread/98465/> | Codeigniter security - mysql database | [
"",
"php",
"mysql",
"security",
"codeigniter",
"admin",
""
] |
I want a Python program to import a list of words from a text file and print out the content of the text file as two lists. The data in the text file is in this form:
```
A Alfa
B Betta
C Charlie
```
I want a Python program to print out one list with A,B,C and one with Alfa, Betta, Charlie.
This is what I've written:
```
english2german = open('english2german.txt', 'r')
englist = []
gerlist = []
for i, line in enumerate(english2german):
englist[i:], gerlist[i:] = line.split()
```
This is making two lists, but will only print out the first letter in each word.
How can I make my code print out the whole word?
```
english2german = open("english2german.txt")
englist = []
gerlist = []
for line in english2german:
(e, g) = line.split()
englist.append(e)
gerlist.append(g)
```
The problem with your code before is that `englist[i:]` is actually a *slice* of a list, not just a single index. A string is also iterable, so you were basically stuffing a single letter into several indices. In other words, something like `gerlist[0:] = "alfa"` actually results in `gerlist = ['a', 'l', 'f', 'a']`. | And even shorter than [amo-ej1's answer](https://stackoverflow.com/questions/743248/something-wrong-with-output-from-list-in-python/743274#743274), and likely faster:
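To make the slice-assignment pitfall concrete, here is a minimal, self-contained demonstration (no file needed):

```python
# A string assigned to a list slice is iterated character by
# character: this is why only single letters showed up.
gerlist = []
gerlist[0:] = "alfa"
print(gerlist)  # ['a', 'l', 'f', 'a']

# append() stores the whole word as one element.
englist = []
englist.append("alfa")
print(englist)  # ['alfa']
```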
```
In [1]: english2german = open('english2german.txt')
In [2]: eng, ger = zip(*( line.split() for line in english2german ))
In [3]: eng
Out[3]: ('A', 'B', 'C')
In [4]: ger
Out[4]: ('Alfa', 'Betta', 'Charlie')
```
If you're using Python 3.0 or `from future_builtins import zip`, this is memory-efficient too. Otherwise replace `zip` with `izip` from `itertools` if `english2german` is very long. | Something wrong with output from list in Python | [
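If you want to try the `zip(*...)` trick without the file on disk, the same one-liner works on any iterable of lines (sample data taken from the question):

```python
lines = ["A Alfa", "B Betta", "C Charlie"]
# zip(*...) transposes the (english, german) pairs into two tuples
eng, ger = zip(*(line.split() for line in lines))
print(eng)  # ('A', 'B', 'C')
print(ger)  # ('Alfa', 'Betta', 'Charlie')
```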
"",
"python",
"list",
"text",
""
] |
I am parsing through a 2000+ record Excel file, and need to be able to grab the formula itself from the cells and not just the value. For example, one cell 'Contributions' may be a single number like *1500* or it can be a few numbers *1500+500-200*, but I don't want the total; I want the actual formula so I can get at the different numbers.
I was just parsing with an OleDbConnection, and that only seems to get me the total. How would I go about getting the formula? Is it possible (I can get the file in either XLS or XLSX)?
Thank you. | Using OLE Automation, you can access the formula in a cell as follows:
```
formula = worksheet.Cells(1, 1).Formula
```
EDIT:
To use OLE Automation in your C# project, see the following [Microsoft KB article](http://support.microsoft.com/kb/302084): | A quick google found the following: <http://codingsense.wordpress.com/2009/03/01/get-all-formula-from-excel-cell/> | how can I access the formula in an excel sheet with c#? | [
"",
"c#",
"excel",
""
] |
For some reason or other, Linq2SQL generates the following on one of my tables for a delete:
```
DELETE FROM [dbo].[Tag] WHERE ([TagId] = @p0) AND ([Type] = @p1)
-- @p0: Input UniqueIdentifier (Size = 0; Prec = 0; Scale = 0)
[fb538481-562d-45f2-bb33-3296cd7d0b28]
-- @p1: Input TinyInt (Size = 1; Prec = 0; Scale = 0) [1]
-- @p2: Input TinyInt (Size = 1; Prec = 0; Scale = 0) [0]
-- @p3: Input TinyInt (Size = 1; Prec = 0; Scale = 0) [7]
-- @p4: Input TinyInt (Size = 1; Prec = 0; Scale = 0) [5]
-- @p5: Input TinyInt (Size = 1; Prec = 0; Scale = 0) [8]
-- @p6: Input TinyInt (Size = 1; Prec = 0; Scale = 0) [4]
-- @p7: Input TinyInt (Size = 1; Prec = 0; Scale = 0) [3]
-- @p8: Input TinyInt (Size = 1; Prec = 0; Scale = 0) [9]
-- @p9: Input TinyInt (Size = 1; Prec = 0; Scale = 0) [6]
-- @p10: Input TinyInt (Size = 1; Prec = 0; Scale = 0) [1]
-- @p11: Input TinyInt (Size = 1; Prec = 0; Scale = 0) [2]
-- Context: SqlProvider(Sql2008) Model: AttributedMetaModel
Build: 3.5.30729.1
```
As one can see, the first 2 parameters (@p0 and @p1) are correct, but then it generates a randomized set of the unique numbers from 0 to 9.
Now this does not affect the query/behaviour in any way; I am just interested in what's going on here.
**UPDATE:**
Tag is a base class for Linq2SQL inheritence. It seems the extra parameters are the integer values of the discriminator (Type) of all the inherited types. If I remove inherited types, the extra parameters goes down.
**UPDATE 2:**
I have noticed this happens for SELECT's too.
```
SELECT
(CASE
WHEN EXISTS(
SELECT NULL AS [EMPTY]
FROM [Tag] AS [t0]
WHERE ([t0].[TagId] = @p0) AND ([t0].[TagType] = @p1)
) THEN 1
ELSE 0
END) AS [value]
-- @p0: Input Guid (Size = 0; Prec = 0; Scale = 0)
[60000000-0000-0000-0000-fe0000000025]
-- @p1: Input Byte (Size = 0; Prec = 0; Scale = 0) [25]
-- @p2: Input Byte (Size = 0; Prec = 0; Scale = 0) [0]
-- @p3: Input Byte (Size = 0; Prec = 0; Scale = 0) [10]
-- @p4: Input Byte (Size = 0; Prec = 0; Scale = 0) [28]
-- @p5: Input Byte (Size = 0; Prec = 0; Scale = 0) [13]
-- @p6: Input Byte (Size = 0; Prec = 0; Scale = 0) [27]
-- @p7: Input Byte (Size = 0; Prec = 0; Scale = 0) [1]
-- @p8: Input Byte (Size = 0; Prec = 0; Scale = 0) [2]
-- @p9: Input Byte (Size = 0; Prec = 0; Scale = 0) [3]
-- @p10: Input Byte (Size = 0; Prec = 0; Scale = 0) [4]
-- @p11: Input Byte (Size = 0; Prec = 0; Scale = 0) [5]
-- @p12: Input Byte (Size = 0; Prec = 0; Scale = 0) [6]
-- @p13: Input Byte (Size = 0; Prec = 0; Scale = 0) [7]
-- @p14: Input Byte (Size = 0; Prec = 0; Scale = 0) [8]
-- @p15: Input Byte (Size = 0; Prec = 0; Scale = 0) [9]
-- @p16: Input Byte (Size = 0; Prec = 0; Scale = 0) [11]
-- @p17: Input Byte (Size = 0; Prec = 0; Scale = 0) [12]
-- @p18: Input Byte (Size = 0; Prec = 0; Scale = 0) [14]
-- @p19: Input Byte (Size = 0; Prec = 0; Scale = 0) [15]
-- @p20: Input Byte (Size = 0; Prec = 0; Scale = 0) [16]
-- @p21: Input Byte (Size = 0; Prec = 0; Scale = 0) [17]
-- @p22: Input Byte (Size = 0; Prec = 0; Scale = 0) [18]
-- @p23: Input Byte (Size = 0; Prec = 0; Scale = 0) [19]
-- @p24: Input Byte (Size = 0; Prec = 0; Scale = 0) [20]
-- @p25: Input Byte (Size = 0; Prec = 0; Scale = 0) [21]
-- @p26: Input Byte (Size = 0; Prec = 0; Scale = 0) [22]
-- @p27: Input Byte (Size = 0; Prec = 0; Scale = 0) [23]
-- @p28: Input Byte (Size = 0; Prec = 0; Scale = 0) [24]
-- @p29: Input Byte (Size = 0; Prec = 0; Scale = 0) [26]
-- Context: SqlProvider(SqlCE) Model: AttributedMetaModel
Build: 3.5.30729.1
```
Just to note. I am using an enum for the discriminator, as per the MS examples.
**UPDATE 3**
Now I am running a SQL profiler, and I am seeing the following, and it's pretty worrying :(
```
exec sp_executesql N'SELECT
(CASE
WHEN EXISTS(
SELECT NULL AS [EMPTY]
FROM [Tag] AS [t0]
WHERE ([t0].[TagId] = @p0) AND ([t0].[TagType] = @p1)
) THEN 1
ELSE 0
END) AS [value]',
N'@p0 uniqueidentifier,
@p1 tinyint,
@p2 tinyint,
@p3 tinyint,
@p4 tinyint,
@p5 tinyint,
@p6 tinyint,
@p7 tinyint,
@p8 tinyint,
@p9 tinyint,
@p10 tinyint,
@p11 tinyint,
@p12 tinyint,
@p13 tinyint,
@p14 tinyint,
@p15 tinyint,
@p16 tinyint,
@p17 tinyint,
@p18 tinyint,
@p19 tinyint,
@p20 tinyint,
@p21 tinyint,
@p22 tinyint,
@p23 tinyint,
@p24 tinyint,
@p25 tinyint,
@p26 tinyint,
@p27 tinyint,
@p28 tinyint,
@p29 tinyint,
@p30 tinyint,
@p31 tinyint,
@p32 tinyint',
@p0='60000000-0000-0000-0000-FE0000000025',
@p1=25,@p2=0,@p3=23,@p4=29,@p5=31,@p6=16,@p7=14,@p8=1,@p9=2,
@p10=3,@p11=4,@p12=5,@p13=6,@p14=7,@p15=8,@p16=9,@p17=10,@p18=11,
@p19=12,@p20=13,@p21=22,@p22=15,@p23=17,@p24=18,@p25=19,@p26=20,
@p27=21,@p28=27,@p29=24,@p30=26,@p31=30,@p32=28
```
Thanks
leppie | This sounds like a bug somewhere in Linq2Sql. I think the extra parameters shouldn't be generated.
I think you should file a bug at [Microsoft Connect](http://connect.microsoft.com) on this. | Very interesting - it looks like it couldn't decide whether it could use the primary key or not... what does the object model look like? No hand-crafted inheritance or anything? | Weird Select/Delete query generation in Linq2SQL | [
"",
"c#",
".net",
"linq-to-sql",
"inheritance",
""
] |
I have the following table schema:
```
create table SerialNo2(
IncarnationID_UID counter primary key,
Mark integer
);
```
Now I want to add a new column in the table, let's call it `degree`. Given that I want to apply an update on production table, I will have to populate the `degree` column based on the `Mark` value for all the existing data in the `SerialNo2` table. The rule for this is that
```
if Mark>90 then degree='A',
else degree='B'
```
How do I write a conditional SQL statement that will accomplish the above?
```
UPDATE SerialNo2 SET SerialNo2.Degree = IIf([Mark]>90,"A","B")
``` | Define the degree column as being nullable at first, then do an UPDATE:
```
UPDATE SerialNo2
SET Degree = CASE WHEN Mark > 90 THEN 'A' ELSE 'B' END
``` | Conditional Inserts SQL | [
"",
"sql",
"ms-access",
""
] |
1. What do you do if your error logging code fails?
2. How do you make sure that its currently working?
3. How do you know if its not working?
4. How do you test that its working in a production environment?
5. Should I throw an exception if all else fails?
The code below uses Microsoft's Enterprise Library Logging Application Block. How do you make it "better"?
```
using Microsoft.Practices.EnterpriseLibrary.Logging;
class Program
{
static void Main(string[] args)
{
try
{
// Trying to write some data to the DB
...
}
catch (Exception ex)
{
LogHelper.LogException(ex, "Trying to write to the DB");
}
}
}
public class LogHelper
{
public static void LogException(Exception ex, string exceptionType)
{
try
{
// Simplified version, only logging the message
Logger.Write(exceptionType);
}
catch
{
// What do you do here???
}
}
}
``` | See the answers in my [related question](https://stackoverflow.com/questions/313839/what-is-the-best-way-to-write-event-log-entries):
If everything else fails, have a 'last resort logging' in your catch block. Log the exception to a text file in a location where this is highly unlikely to fail. If this last resort logging fails by throwing another exception, you can either swallow that exception, or terminate your app and display the error as a message box.
In this specific (exceptional) case, swallowing the exception is the only way to not terminate the app. | My apps usually do two things on an error. The first is to write it to a local log file (even if it uses some type of database logging as well). Then it sends an e-mail of the error to a distribution list set up for support.
So if the database log write fails, the error is still in the local log file and also sent via e-mail.
If the e-mail fails, the error is still logged so you can troubleshoot the problem later.
Getting more fault tolerant than that would only be worth the effort for extremely mission critical applications, IMHO. | What do you do if your error logging fails, and how do you test that its working in production? | [
"",
"c#",
"asp.net",
"error-handling",
"enterprise-library",
"error-logging",
""
] |
Does the completed event of the BackgroundWorker component come back on the GUI thread, or do I have to marshal it back?
Also, does the progress event come back on the GUI thread, or do I have to marshal it back?
You've probably been here, but see also: [BackgroundWorker Events](http://msdn.microsoft.com/en-us/library/system.componentmodel.backgroundworker_events.aspx) | BackgroundWorker completed event - which thread? | [
"",
"c#",
"multithreading",
"background",
""
] |
My Java standalone application gets a URL (which points to a file) from the user and I need to hit it and download it. The problem I am facing is that I am not able to encode the HTTP URL address properly...
Example:
```
URL: http://search.barnesandnoble.com/booksearch/first book.pdf
java.net.URLEncoder.encode(url.toString(), "ISO-8859-1");
```
returns me:
```
http%3A%2F%2Fsearch.barnesandnoble.com%2Fbooksearch%2Ffirst+book.pdf
```
But, what I want is
```
http://search.barnesandnoble.com/booksearch/first%20book.pdf
```
(space replaced by %20)
I guess `URLEncoder` is not designed to encode HTTP URLs... The JavaDoc says "Utility class for HTML form encoding"... Is there any other way to do this? | The [java.net.URI](http://java.sun.com/javase/6/docs/api/java/net/URI.html) class can help; in the documentation of URL you find
> Note, the URI class does perform escaping of its component fields in certain circumstances. The recommended way to manage the encoding and decoding of URLs is to use an URI
Use one of the constructors with more than one argument, like:
```
URI uri = new URI(
"http",
"search.barnesandnoble.com",
"/booksearch/first book.pdf",
null);
URL url = uri.toURL();
//or String request = uri.toString();
```
*(the single-argument constructor of URI does NOT escape illegal characters)*
---
Only illegal characters get escaped by above code - it does NOT escape non-ASCII characters (see fatih's comment).
The `toASCIIString` method can be used to get a String only with US-ASCII characters:
```
URI uri = new URI(
"http",
"search.barnesandnoble.com",
"/booksearch/é",
null);
String request = uri.toASCIIString();
```
---
For an URL with a query like `http://www.google.com/ig/api?weather=São Paulo`, use the 5-parameter version of the constructor:
```
URI uri = new URI(
"http",
"www.google.com",
"/ig/api",
"weather=São Paulo",
null);
String request = uri.toASCIIString();
``` | Please be warned that most of the answers above are INCORRECT.
The `URLEncoder` class, despite is name, is NOT what needs to be here. It's unfortunate that Sun named this class so annoyingly. `URLEncoder` is meant for passing data as parameters, not for encoding the URL itself.
In other words, `"http://search.barnesandnoble.com/booksearch/first book.pdf"` is the URL. Parameters would be, for example, `"http://search.barnesandnoble.com/booksearch/first book.pdf?parameter1=this¶m2=that"`. The parameters are what you would use `URLEncoder` for.
The following two examples highlights the differences between the two.
The following produces the wrong parameters, according to the HTTP standard. Note the ampersand (&) and plus (+) are encoded incorrectly.
```
uri = new URI("http", null, "www.google.com", 80,
"/help/me/book name+me/", "MY CRZY QUERY! +&+ :)", null);
// URI: http://www.google.com:80/help/me/book%20name+me/?MY%20CRZY%20QUERY!%20+&+%20:)
```
The following will produce the correct parameters, with the query properly encoded. Note the spaces, ampersands, and plus marks.
```
uri = new URI("http", null, "www.google.com", 80, "/help/me/book name+me/", URLEncoder.encode("MY CRZY QUERY! +&+ :)", "UTF-8"), null);
// URI: http://www.google.com:80/help/me/book%20name+me/?MY+CRZY+QUERY%2521+%252B%2526%252B+%253A%2529
``` | HTTP URL Address Encoding in Java | [
"",
"java",
"http",
"urlencode",
""
] |
how can I trim `<br>&nbsp;` from the end of a string?
$Output = preg_replace('/'.preg_quote('<br>&'.'nbsp;').'$/i', '', $String);
```
Where `$String` is the input and `$Output` is the result. | Slightly faster, if what you are trimming is constant:
```
$haystack = "blah blah blah <br>&"."nbsp;";
$needle = "<br>&"."nbsp;";
echo substr($haystack, 0, strrpos($haystack, $needle));
``` | php: trim <br> from the end of a string? | [
"",
"php",
""
] |
I have an interesting problem which I've been looking into and would appreciate some advice:
I'm trying to create a tool which mimics the basic capabilities of a requirements management tool as part of a company project.
The basic design is a Windows Explorer-like setting of folders and documents. Documents can be opened in a GUI, edited, and saved.
The document itself contains a hierarchical spreadsheet (think of Excel with Chapters, if that makes any sense). Each chapter contains rows, which are really just some requirements text + some other values which complement it. When displayed, the requirement text and attribute values show up as independent columns (much like Excel), with filtering capabilities.
Representing the user/permissions/folder hierarchy/etc for this type of program is pretty straightforward, but where I get hung up is on the document content itself...
My biggest concern is size and how it relates to performance: As part of this tool, I not only intended to store the current state of each document, but also the entire list of changes that have been made since day 1 (much like SVN), and then provide fast access to the history of changes.
On average, I expect ~500 documents in the repo; Each document will probably have ~20,000 active rows; Over the course of a year, it's not unreasonable to assume ~20,000 edits (meaning each document itself will acquire an additional 20,000 rows year-in and year-out).
Multiplied by the number of documents, that amounts to nearly 10,000,000 rows (with an additional 10,000,000 the next year, and the next year, and so on). Old histories can be purged, but it would only be performed by an admin (and it's not preferable that he/she do so).
As I see it, there are two ways for me to handle this situation:
* I can try to represent a list of all rows of all documents in a single table (much like how phpBB stores all posts of all forums in a single table), or...
* I can try to store the rows of each document in a uniquely named table (meaning each document has its own table); the table would have to be given a unique name, and a master table would contain the list of all documents and the table names that correspond to each.
So my question: Which really is preferable? Are neither really good options? Can anyone offer advice on which approach you would find more appropriate, given the needs? | If you are creating and/or destroying tables programmatically during the normal day-to-day operation of your application, I would say this is a very bad sign that something in the database design is wrong.
Database systems can and do handle tables with that many rows. To make any meaningful sorts of queries on that number of rows, you really do have to choose your indexes carefully and frugally. I mean, you really have to know intimately how the table will be queried.
However, I dare say it would be a good deal less complicated to implement than the approach you proposed of creating new tables arbitrarily based on IDs or numbers alone. And, with less complication comes greater ease of maintenance, and less chance that you'll introduce nasty bugs that are hard to debug.
If you are really keen on splitting into multiple tables, then I suggest that you look into how other people do [data partitioning](http://en.wikipedia.org/wiki/Partition_(database)). Rather than creating tables dynamically, create a fixed number of tables from the start, based on how many you think you are likely to need, and allocate records to those tables based not on some arbitrary thing like how many records are in the tables at the time, but on something predictable - the user's ZIP code is an example given, or the category the document is in, or the domain name or country of the user who created it, or something logical that you can use to easily determine where a record ended up and it will be reasonably spread out.
One of the benefits of data partitioning this way, where you create all the partitions to start with, is that if you need to in future it's relatively easy to move to multiple database servers. If you are creating and destroying tables dynamically, that's going to make that less attainable. | A few points to consider with multiple tables approach:
* Would it be necessary to look up information across all the documents? If yes, you will need to search all the tables, which is not so simple to achieve.
* If the schema changes, it is not simple to update the database, as all the tables representing the same type of entity would need to change
* Tracking information about user edits is also not very simple, as the data is split across multiple tables (e.g. consider the scenario 'which documents did the user modify')
Have you considered alternative approaches to storing data? Is it necessary to store each Excel row in the database as a table row? Could you store the data as XML and only save indexes in the database? Or maybe store only the tracked modifications and the document versions? Could the application take part of the database burden and do the filtering?
"",
"sql",
"mysql",
"database",
"postgresql",
""
] |
How do I get a list of Python modules installed on my computer? | ## Solution
# Do not use with pip > 10.0!
My 50 cents for getting a `pip freeze`-like list from a Python script:
```
import pip
installed_packages = pip.get_installed_distributions()
installed_packages_list = sorted(["%s==%s" % (i.key, i.version)
for i in installed_packages])
print(installed_packages_list)
```
As a (too long) one liner:
```
sorted(["%s==%s" % (i.key, i.version) for i in pip.get_installed_distributions()])
```
Giving:
```
['behave==1.2.4', 'enum34==1.0', 'flask==0.10.1', 'itsdangerous==0.24',
'jinja2==2.7.2', 'jsonschema==2.3.0', 'markupsafe==0.23', 'nose==1.3.3',
'parse-type==0.3.4', 'parse==1.6.4', 'prettytable==0.7.2', 'requests==2.3.0',
'six==1.6.1', 'vioozer-metadata==0.1', 'vioozer-users-server==0.1',
'werkzeug==0.9.4']
```
## Scope
This solution applies to the system scope or to a virtual environment scope, and covers packages installed by `setuptools`, `pip` and ([god forbid](https://stackoverflow.com/questions/3220404/why-use-pip-over-easy-install)) `easy_install`.
## My use case
I added the result of this call to my Flask server, so when I call it with `http://example.com/exampleServer/environment` I get the list of packages installed on the server's virtualenv. It makes debugging a whole lot easier.
## Caveats
I have noticed a strange behaviour of this technique - when the Python interpreter is invoked in the same directory as a `setup.py` file, it does not list the package installed by `setup.py`.
### Steps to reproduce:
#### Create a virtual environment
```
$ cd /tmp
$ virtualenv test_env
New python executable in test_env/bin/python
Installing setuptools, pip...done.
$ source test_env/bin/activate
(test_env) $
```
#### Clone a Git repository with `setup.py`
```
(test_env) $ git clone https://github.com/behave/behave.git
Cloning into 'behave'...
remote: Reusing existing pack: 4350, done.
remote: Total 4350 (delta 0), reused 0 (delta 0)
Receiving objects: 100% (4350/4350), 1.85 MiB | 418.00 KiB/s, done.
Resolving deltas: 100% (2388/2388), done.
Checking connectivity... done.
```
We have behave's `setup.py` in `/tmp/behave`:
```
(test_env) $ ls /tmp/behave/setup.py
/tmp/behave/setup.py
```
#### Install the Python package from the Git repository
```
(test_env) $ cd /tmp/behave && pip install .
running install
...
Installed /private/tmp/test_env/lib/python2.7/site-packages/enum34-1.0-py2.7.egg
Finished processing dependencies for behave==1.2.5a1
```
### If we run the aforementioned solution from `/tmp`
```
>>> import pip
>>> sorted(["%s==%s" % (i.key, i.version) for i in pip.get_installed_distributions()])
['behave==1.2.5a1', 'enum34==1.0', 'parse-type==0.3.4', 'parse==1.6.4', 'six==1.6.1']
>>> import os
>>> os.getcwd()
'/private/tmp'
```
### If we run the aforementioned solution from `/tmp/behave`
```
>>> import pip
>>> sorted(["%s==%s" % (i.key, i.version) for i in pip.get_installed_distributions()])
['enum34==1.0', 'parse-type==0.3.4', 'parse==1.6.4', 'six==1.6.1']
>>> import os
>>> os.getcwd()
'/private/tmp/behave'
```
`behave==1.2.5a1` is missing from the second example, because the working directory contains `behave`'s `setup.py` file.
I could not find any reference to this issue in the documentation. Perhaps I shall open a bug for it. | ```
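(Note for newer environments: as the warning above says, `pip.get_installed_distributions()` is gone in pip > 10.0. On Python 3.8+ the standard library's `importlib.metadata` gives a comparable `pip freeze`-like list without importing `pip` at all; this is a sketch, not part of the original answer:)

```python
import importlib.metadata

# Build a pip-freeze-like sorted list of "name==version" strings
# from every distribution visible on sys.path.
installed_packages_list = sorted(
    "%s==%s" % (dist.metadata["Name"], dist.version)
    for dist in importlib.metadata.distributions()
)
print(installed_packages_list)
```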
help('modules')
```
in a Python shell/prompt. | How do I get a list of locally installed Python modules? | [
"",
"python",
"module",
"pip",
""
] |
I have a table called tblAssetsInUse with the following structure:
```
intPK intAssetID datCheckedOut datCheckedIn
1 450 1/5/2009 10/5/2009
2 300 2/5/2009 <NULL>
3 200 2/5/2009 <NULL>
4 450 12/5/2009 5/7/2009
```
and I have a SP that receives a scanned Asset ID then either Inserts, or Updates to the table for assets being checked Out or In respectively. As you can see datCheckedIn may be Null which is used to work out which assets are currently in use. This procedure works perfectly. I wish to be able to determine what the last asset to be scanned was and also what the last operation to the table was (i.e. Check In or Out). I have some SQL code which finds the row with the most recent date (regardless of which column) and I then use this to join to a separate Assets View, which also works. I just need to be able to work out if the most recent date was in the Checked Out or Checked In column somehow.
```
SELECT TOP (1) allDates.intPK, MAX(allDates.datLastAction) AS datLastScan, dbo.viwAssets.strFriendlyName, tblAssetsInUse_join.intAssetID
FROM (SELECT intPK, MAX(datCheckedOut) AS datLastAction
FROM dbo.tblAssetsInUse AS tblAssetsInUse_out
GROUP BY intPK
UNION ALL
SELECT intPK, MAX(datCheckedIn) AS datLastAction
FROM dbo.tblAssetsInUse AS tblAssetsInUse_in
GROUP BY intPK) AS allDates
INNER JOIN
dbo.tblAssetsInUse AS tblAssetsInUse_join ON allDates.intPK = tblAssetsInUse_join.intPK
INNER JOIN
dbo.viwAssets ON tblAssetsInUse_join.intAssetID = dbo.viwAssets.intPK
GROUP BY allDates.intPK, dbo.viwAssets.strFriendlyName, tblAssetsInUse_join.intAssetID
ORDER BY datLastScan DESC
```
Is there a literal value of some kind I can add in so that it flags a bit value in the results perhaps?
Thanks for your help,
Paul Reynolds | Beside from getting the type of operation performed, I think that the whole query could be simplified (and optimized) this way:
```
SELECT TOP 1 allDates.intPK, allDates.datLastAction AS datLastScan, allDates.operation, dbo.viwAssets.strFriendlyName, tblAssetsInUse_join.intAssetID
FROM (SELECT TOP 1 intPK, intAssetID, datCheckedOut AS datLastAction, 'Out' AS operation
FROM dbo.tblAssetsInUse AS tblAssetsInUse_out
ORDER BY datCheckedOut DESC
UNION ALL
SELECT TOP 1 intPK, intAssetID, datCheckedIn AS datLastAction, 'In' AS operation
FROM dbo.tblAssetsInUse AS tblAssetsInUse_in
ORDER BY datCheckedIn DESC) AS allDates
INNER JOIN
dbo.viwAssets ON allDates.intAssetID = dbo.viwAssets.intPK
ORDER BY datLastScan DESC
```
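The string-literal `'Out'`/`'In'` column used in the query above can be tried out with an in-memory SQLite database; a small self-contained sketch with sample rows modeled on the question (dates written as ISO strings so they sort correctly):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tblAssetsInUse (
    intPK INTEGER PRIMARY KEY,
    intAssetID INTEGER,
    datCheckedOut TEXT,
    datCheckedIn TEXT
);
INSERT INTO tblAssetsInUse VALUES
    (1, 450, '2009-05-01', '2009-05-10'),
    (2, 300, '2009-05-02', NULL),
    (3, 200, '2009-05-02', NULL),
    (4, 450, '2009-05-12', '2009-07-05');
""")

# A string literal in each SELECT list tags the row with its operation;
# ordering the whole UNION by date yields the most recent scan.
row = conn.execute("""
SELECT intAssetID, datCheckedOut AS datLastAction, 'Out' AS operation
  FROM tblAssetsInUse
UNION ALL
SELECT intAssetID, datCheckedIn, 'In'
  FROM tblAssetsInUse
 WHERE datCheckedIn IS NOT NULL
 ORDER BY datLastAction DESC
 LIMIT 1
""").fetchone()
print(row)  # (450, '2009-07-05', 'In'): the last scan checked asset 450 back in
```

The join to `viwAssets` is omitted here; the point is only that a constant column survives the UNION and identifies which branch the winning row came from.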
EDIT: also note that by removing the TOP 1 in the first line you get both the last Asset checked in and the last checked out. | Try this as your inner query:
```
SELECT intPK, 'Out', MAX(datCheckedOut)
FROM dbo.tblAssetsInUse
GROUP BY intPK
UNION
SELECT intPK, 'In', MAX(datCheckedIn)
FROM dbo.tblAssetsInUse
GROUP BY intPK
```
Edited to add:
A slicker solution would be to create a Greater user defined function that compares two dates. | SQL Find most recent date from 2 columns, AND identify which column it was in | [
"",
"sql",
"aggregate",
""
] |
Have you ever been frustrated by Visual Studio not re-evaluating certain watch expressions when stepping through code with the debugger?
I have. And does anyone here care about pure methods — methods with no side effects? There are so many great things about C# that I love, but I can't write pure functions, that is, static methods with no side effects.
What if you were to build your own C# compiler where you could write something like this:
```
function int func(readonly SomeRefType a, int x, int y) { return /*...*/; }
```
Not only is the above a free function (which is why I don't call it a method), it is also guaranteed not to have any side effects. The C# keyword `readonly` could here be used to indicate just that, and provide a contract for pure functions. These kinds of functions can always be evaluated without causing side effects. This would be a way for Visual Studio to always evaluate my functions in the watch window, despite the faulty assumption that all method calls and user operators have side effects. A method where all the parameters are copied by value can never have side effects, yet Visual Studio fails to recognize this.
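One practical payoff of such a purity guarantee, sketched here in Python for illustration (Python cannot enforce purity either, so this is an analogy, not the proposed feature): a side-effect-free function can be memoized or re-evaluated, e.g. by a debugger's watch window, any number of times without changing program behavior.

```python
from functools import lru_cache

# Pure function: the result depends only on its arguments, so caching
# or repeated evaluation can never alter the program's behavior.
@lru_cache(maxsize=None)
def func(x, y):
    return x * x + y

assert func(2, 3) == 7
assert func(2, 3) == 7  # safe to evaluate again: no side effects
print(func.cache_info().hits)  # the second call was a cache hit
```

A compiler that could verify the purity contract would make such caching safe automatically, rather than leaving it to programmer discipline.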
I love C++ for what you can do at compile time and I miss these things in C#. I think C# is dumbing things down a bit and basically not allowing certain expressiveness, hurting programmers. Many of these things relate to what you can do at compile time; I'd like to see more meta-programs, which are programs run by the compiler to compile your original program.
e.g. while C# has proper booleans and doesn't allow things like `if (var a = obj as MyRefType)`, it still doesn't generate the appropriate code. I did some digging around and noticed how C# fails to generate appropriate IL for branchless conditionals; for example, for `x > y ? 1 : 0` there's an IL instruction for just that which the C# compiler doesn't use.
Would you want, or be interested in, an open-source .NET compiler? One which looks like C# but is something entirely different — more expressive, more flexible, and totally whack in terms of what you can do with it?
* F# for a functional bent
* Boo for a DSL helper with custom compilation stages
The chances of a "design by committee" (or even "design by single amateur language designer") language ending up as well thought out as C# are pretty slim, IMO.
Would it be nice to be able to express a few more things? Absolutely.
Is it rather handy having hundreds of thousands of people who understand the same language, built-in Visual Studio integration from the people who really know it, etc? Absolutely!
For me, "looks like C# but is something entirely different" sounds like a problem, not a solution. | If you really want a mix of purity but you can still go with imperative aspect of OOP, go with F#. It's from Microsoft and it'll be productized and integrated into VS 2010.
If you really want a real pure functional programming that you can be sure that you have pure functions in your programming constructs and clearly differentiate side effects including IOs, Exceptions, you can go with Haskell.
For starter, you can download GHC, an open source Haskell compiler from Simon Peyton Jones, a researcher from Microsoft. You can also download Visual Haskell, a plugin for VS 2005 that functions as IDE for Haskell.
For more info:
* <http://haskell.org/ghc/>
* <http://www.haskell.org/visualhaskell/> | Purity of methods in C#, evaluation and custom compiler? | [
"",
"c#",
".net",
"compiler-construction",
""
] |
I am using ASP.NET (2.0) with VB.NET.
I have a User registration form.
On that form the user supplies all his contact details and he can upload an image with the normal file upload control in ASP.NET.
**This is my problem.**
If anything goes wrong on the page then I give the user an error message saying what he left out or what went wrong. But the page refreshes when that happens. Now the link to the image the user selected is gone. Now when the user fixes his error he thinks that he is uploading a picture, but he never does, because when the page reloaded it removed the link to his image inside the file upload control.
Note: the user doesn't have to upload an image, so there will be no error when the field is blank.
Anyone have an idea what I must do? | That behaviour is by design. It is a security restriction imposed by browsers so that all files are uploaded from users' computers only by their explicit action.
If something does go wrong during form submission, you should prompt the user to re-upload his/her file. That is the right way to do it. Think of it as a transaction (all or nothing). | > Anyone have an idea what i must do?
Two usual approaches:
1. Add client-side (JavaScript) form validation so that for the majority of people, an error will pop up *before* they've submitted the form rather than later, on the server side, by which time the upload will have been lost.
2. Give each form instance a unique ID. When a file is uploaded on a form with an error, store that file on the server and record in the database that the form with that ID has an attached file. On the second-chance form, include the ID as a hidden field, and visually indicate that the uploaded file has been remembered. On submission, drag out the remembered file and attach it with the new submission. | File Upload link clears when page reloads, why? | [
"",
"c#",
"asp.net",
"html",
"vb.net",
"file-upload",
""
] |
I'm deserializing a fair amount of data through Boost.Serialization (one for each frame). However, when I output how long the deserialization takes, it varies wildly. It is not unusably slow at the moment, but it would be nice to make it faster. The data represents the same classes, arrays, maps and vectors but merely with different values.
Looking at the memory spiking as each deserialization takes place, I have to believe there's a better way of doing this than continually allocating and deallocating all this memory.
Here's a few of the read times as an example:
```
Deserialization - 390 milliseconds
Deserialization - 422 milliseconds
Deserialization - 422 milliseconds
Deserialization - 422 milliseconds
Deserialization - 438 milliseconds
Deserialization - 2156 milliseconds
Deserialization - 1797 milliseconds
Deserialization - 1656 milliseconds
Deserialization - 1328 milliseconds
Deserialization - 1219 milliseconds
Deserialization - 1078 milliseconds
Deserialization - 1078 milliseconds
```
Is there a way of writing a custom deserialization function for the same data that uses Boost.Serialization so I can specify to allocate the memory at the beginning, and then just change their values for each frame?
Update: I realised that a minor issue with the optimization flags I was using was causing the serialization data to be written incorrectly, which resulted in the inconsistency of deserialization times. After fixing this, it is now consistently at 750 - 780 milliseconds each frame.
However, my original question still stands, as currently I am serializing and deserializing an entire STL container, when I really want to serialize only the contents (as the size and indexing of the container will remain exactly the same). I'm not sure of the best way to go about doing this though.
From your description it seems as though you are serializing/deserializing an entire STL container very frequently. This shouldn't be required. [Serialization](http://en.wikipedia.org/wiki/Serialization) shouldn't be used unless the data needs to be persisted so that it can be re-built later or by someone else.
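One way to avoid re-doing all of the work each frame is to serialize per item and skip items that did not change since the previous frame. A minimal Python sketch of the idea (names are hypothetical; `pickle` stands in for whatever serializer is used):

```python
import copy
import pickle

# Sketch of "only re-serialize what changed": cache the serialized
# bytes per item key, and re-pickle only items whose value differs
# from what was seen on the previous frame.
class IncrementalSerializer:
    def __init__(self):
        self._last = {}    # key -> deep copy of last seen value
        self._blobs = {}   # key -> serialized bytes

    def serialize_frame(self, items):
        dirty = 0
        for key, value in items.items():
            if key not in self._last or self._last[key] != value:
                self._blobs[key] = pickle.dumps(value)
                self._last[key] = copy.deepcopy(value)
                dirty += 1
        return dirty  # number of items actually re-serialized

s = IncrementalSerializer()
print(s.serialize_frame({"a": [1, 2], "b": [3, 4]}))  # 2: both items are new
print(s.serialize_frame({"a": [1, 2], "b": [3, 5]}))  # 1: only "b" changed
```

A comparable scheme with Boost.Serialization would keep a per-item dirty flag and write only the dirty items' archives each frame.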
If serialization is required for your application you might consider serializing each item in the container separately and then only re-serializing when an item changes. This way you won't be re-doing all of the work unnecessarily. | Boost serialization provides templated save methods for STL collections, e.g. for a set:
```
template<class Archive, class Key, class Compare, class Allocator >
inline void save(
Archive & ar,
const std::set<Key, Compare, Allocator> &t,
const unsigned int /* file_version */
){
boost::serialization::stl::save_collection<
Archive, std::set<Key, Compare, Allocator>
>(ar, t);
}
```
which just delegates to save\_collection. I guess you could define a specialisation for your collection type which does the serialization in a manner of your choosing, e.g.:
```
namespace boost {
namespace serialization {
template<class Archive>
inline void save(
Archive & ar,
const std::set<MyKey, MyCompare, MyAlloc> &t,
const unsigned int /* file_version */
){
// ...
}
}
}
```
You could take a copy of save\_collection implementation (from collections\_save\_imp.hpp) as a starting point, and optimise it to fit your requirements. E.g. use a class which remembers the collection size from the previous invocation, and if it hasn't changed then re-use the same buffer.
You may need a specialisation for your member type as well. At which point it's questionable if you're getting any value from the boost serialization.
I know that's a bit vague but it's difficult to be more specific without knowing what collection type you're using, what the member type is etc. | Boost Deserialization Optimizations? | [
"",
"c++",
"optimization",
"serialization",
"boost",
""
] |