I used to use the implicit call of `toString()` when I wanted some debug info about an object, because if the object is null it does not throw an exception.
For instance:
```
System.out.println("obj: "+obj);
```
instead of:
```
System.out.println("obj: "+obj.toString());
```
Is there any difference apart from the null case?
Can the latter case work, when the former does not?
Edit:
What exactly is done in the case of the implicit call?
|
There's little difference. Use the one that's shorter and works more often.
If you actually want to get the string value of an object for other reasons, and want it to be null friendly, do this:
```
String s = String.valueOf(obj);
```
**Edit**: The question was extended, so I'll extend my answer.
In both cases, they compile to something like the following:
```
System.out.println(new StringBuilder().append("obj: ").append(obj).toString());
```
When your `toString()` is implicit, you'll see that in the second append.
If you look at the source code to java, you'll see that `StringBuilder.append(Object)` looks like this:
```
public StringBuilder append(Object obj) {
return append(String.valueOf(obj));
}
```
where `String.valueOf` looks like this:
```
public static String valueOf(Object obj) {
return (obj == null) ? "null" : obj.toString();
}
```
Now, if you `toString()` yourself, you bypass a null check and a stack frame and go straight to this in `StringBuilder`:
```
public StringBuilder append(String str) {
super.append(str);
return this;
}
```
So... very similar things happen in both cases. One just does a little more work.
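The null-safety difference is easy to demonstrate (a minimal sketch; the class and method names are illustrative):

```java
// Demonstrates why implicit string conversion is null-safe
// while an explicit toString() call is not.
public class ToStringDemo {

    static String implicitConcat(Object obj) {
        return "obj: " + obj;            // goes through String.valueOf(obj)
    }

    static String explicitConcat(Object obj) {
        return "obj: " + obj.toString(); // NullPointerException if obj is null
    }

    public static void main(String[] args) {
        System.out.println(implicitConcat(null)); // prints: obj: null
        try {
            explicitConcat(null);
        } catch (NullPointerException e) {
            System.out.println("explicit call threw NPE");
        }
    }
}
```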
|
As others have said - use the `"" + obj` method.
According to The [Java Language Spec](http://java.sun.com/docs/books/jls/third_edition/html/expressions.html#15.18.1):
* If the term is null, use `"null"`
* Primitive types are converted through their wrapper types (e.g. `new Boolean(x)`)
* `toString()` is invoked (or the equivalent)
* If the *result* of `toString()` is `null`, use `"null"`
* Concatenate the strings.
|
Explicit vs implicit call of toString
|
[
"",
"java",
"tostring",
""
] |
I have a desktop application with heavyweight components (JxBrowser) in a JFrame. How can I take a snapshot of the GUI and save it to, for example, a PNG file?
Note: The method using Graphics2d and Component.paint()/paintAll()/print/printAll works only for lightweight components.
Any answers appreciated!
**EDIT**
I have already tried this:
```
robot.createScreenCapture(new Rectangle(Toolkit.getDefaultToolkit().getScreenSize()));
```
But I want the inner component to be captured...
**EDIT**
The situation seems to converge to this solution: if I have a heavyweight component in my JFrame that is bigger than the visible area, so it is rendered in a JScrollPane, then there is no way to get a snapshot of it programmatically other than scrolling it and capturing each part with screen capture?
|
My name is Roman and I'm a developer at TeamDev.
The JxBrowser component is a heavyweight component that embeds a native Mozilla window to display web pages. To get a screenshot of a full web page from the JxBrowser component you can indeed use the Java Robot functionality with web page scrolling. For small web pages this solution is fine, but capturing a large web page will take a lot of time. Also please note that Java Robot can only capture the screen, so if some window appears over the JxBrowser component, it will be captured too.
We have already started working on this functionality within the JxBrowser project. The solution, which will allow capturing a full web page including its invisible part, is based on Mozilla internals. You will be able to capture a full web page and save it as an image very easily.
We are going to add this functionality in one of the next releases of the JxBrowser library, but the release date is not defined yet. If you wish, you can subscribe to TeamDev's RSS feed: <http://support.teamdev.com/blogs/feeds/tags/company_news>
Or just let me know and I will inform you when this functionality becomes available. Please let me know if you have any further questions. I will be happy to help.
Regards,
Roman
|
You mean programmatically?
What about
```
Point p = yourAwtComponent.getLocationOnScreen();
int w = yourAwtComponent.getWidth();
int h = yourAwtComponent.getHeight();
Rectangle rectangle = new Rectangle( p.x, p.y, w, h );
BufferedImage image = robot.createScreenCapture(rectangle); // ImageIO.write needs a RenderedImage
```
And then something like this:
```
ImageIO.write( image, "png", file );
```
|
Capturing heavyweight java components?
|
[
"",
"java",
"user-interface",
"screenshot",
"jxbrowser",
""
] |
I have some jQuery/JavaScript code that I want to run only when there is a hash (`#`) anchor link in a URL. How can you check for this character using JavaScript? I need a simple catch-all test that would detect URLs like these:
* `example.com/page.html#anchor`
* `example.com/page.html#anotheranchor`
Basically something along the lines of:
```
if (thereIsAHashInTheUrl) {
do this;
} else {
do this;
}
```
|
Simple use of [location hash](https://developer.mozilla.org/en-US/docs/Web/API/Location/hash):
```
if(window.location.hash) {
// Fragment exists
} else {
// Fragment doesn't exist
}
```
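The same check can be exercised outside the browser with the WHATWG `URL` class, which exposes the same `.hash` property as `window.location` (a small sketch using the URLs from the question; the `https://` scheme is added because `URL` requires absolute URLs):

```javascript
// location.hash and URL#hash share the same semantics:
// an empty string when there is no fragment, "#..." when there is.
const urls = [
  "https://example.com/page.html#anchor",
  "https://example.com/page.html#anotheranchor",
  "https://example.com/page.html",
];

for (const u of urls) {
  const { hash } = new URL(u);
  console.log(u, "->", hash ? `fragment ${hash}` : "no fragment");
}
```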
|
```
if(window.location.hash) {
var hash = window.location.hash.substring(1); //Puts hash in variable, and removes the # character
alert (hash);
// hash found
} else {
// No hash found
}
```
|
How can you check for a #hash in a URL using JavaScript?
|
[
"",
"javascript",
"jquery",
"anchor",
"fragment-identifier",
""
] |
When I write code like this in VS 2008:
```
.h
struct Patterns {
string ptCreate;
string ptDelete;
string ptDrop;
string ptUpdate;
string ptInsert;
string ptSelect;
};
class QueryValidate {
string query;
string pattern;
static Patterns pts;
public:
friend class Query;
QueryValidate(const string& qr, const string& ptn):
query(qr), pattern(ptn) {}
bool validate() {
boost::regex rg(pattern);
return boost::regex_match(query, rg);
}
virtual ~QueryValidate() {}
};
```
I then initialize my structure like this:
```
.cpp
string QueryValidate::pts::ptCreate = "something";
string QueryValidate::pts::ptDelete = "something";
//...
```
The compiler gives the following errors:
> 'Patterns': the symbol to the left of a '::' must be a type
>
> 'ptSelect': is not a member of 'QueryValidate'
What am I doing wrong? Is this a problem with Visual Studio or with my code? I know that static members except for const ones must be defined outside the class they were declared in.
|
You're trying to define a non-static member (`ptCreate`) of the static member `pts` on its own, which isn't valid.
You have two options: either use an aggregate initializer for the whole `Patterns` object:
```
Patterns QueryValidate::pts = {"CREATE", "DELETE"}; // etc. for every string
```
Or, much safer (and better in my opinion), provide a constructor in Patterns and call that one.
```
struct Patterns {
Patterns() { /*...*/ }
/* ... */
};
```
On a side note, your code wouldn't work in any C++ compiler; it's not a Visual Studio issue.
|
You can only initialize the structure as a whole, as in:
```
Patterns QueryValidate::pts = { "something", "something", ... };
```
|
Why can't I use static members, for example static structures, in my classes in VS2008?
|
[
"",
"c++",
"visual-studio-2008",
"definition",
"static-members",
""
] |
I'm working with a MySQL query that writes into an outfile. I run this query once every day or two and so I want to be able to remove the outfile without having to resort to su or sudo. The only way I can think of making that happen is to have the outfile written as owned by someone other than the mysql user. Is this possible?
Edit: I am not redirecting output to a file, I am using the INTO OUTFILE part of a select query to output to a file.
If it helps:
```
mysql --version
mysql Ver 14.12 Distrib 5.0.32, for pc-linux-gnu (x86_64) using readline 5.2
```
|
The output file is created by the mysqld process, not by your client process. Therefore the output file must be owned by the uid and gid of the mysqld process.
You can avoid having to sudo to access the file if you access it from a process under a uid or gid that can access the file. In other words, if mysqld creates files owned by uid and gid "mysql"/"mysql", then add your own account to group "mysql". Then you should be able to access the file, provided the file's permission mode includes group access.
**Edit:**
You are deleting a file in /tmp, with a directory permission mode of rwxrwxrwt. The sticky bit ('t') means you can remove files only if your uid is the same as the owner of the file, regardless of permissions on the file or the directory.
If you save your output file in another directory that doesn't have the sticky bit set, you should be able to remove the file normally.
Read this excerpt from the man page for sticky(8):
**STICKY DIRECTORIES**
A directory whose 'sticky bit' is set becomes an append-only directory, or, more accurately, a directory in which the deletion of files is restricted. A file in a sticky directory may only be removed or renamed by a user if the user has write permission for the directory and the user is the owner of the file, the owner of the directory, or the super-user. This feature is usefully applied to directories such as /tmp which must be publicly writable but should deny users the license to arbitrarily delete or rename each others' files.
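You can see this on most Linux systems (a hedged sketch; /tmp's mode is typical but not guaranteed, and the demo directory name is illustrative):

```shell
# The sticky bit shows up as a trailing 't' in the mode string.
ls -ld /tmp                    # typically: drwxrwxrwt ... /tmp

# test -k checks for the sticky bit directly.
[ -k /tmp ] && echo "/tmp is sticky"

# Mode 1777 makes a directory world-writable while restricting
# deletion/renaming to each file's owner (plus the directory owner and root).
mkdir -p "$HOME/dropbox_demo"
chmod 1777 "$HOME/dropbox_demo"
ls -ld "$HOME/dropbox_demo"    # drwxrwxrwt ...
```

So writing the outfile to a directory without the sticky bit (or to one whose files you own) lets you delete it normally.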
|
Not using the "SELECT...INTO OUTFILE" syntax, no.
You need to run the query (i.e. the client) as another user and redirect the output. For example, edit your crontab to run the following command whenever you want:
```
mysql db_schema -e 'SELECT col,... FROM table' > /tmp/outfile.txt
```
That will create /tmp/outfile.txt as the user whose crontab you've added the command to.
|
How can I have MySQL write outfiles as a different user?
|
[
"",
"mysql",
"sql",
"into-outfile",
""
] |
I have a fairly huge database with a master table with a single column GUID (custom GUID like algorithm) as primary key and 8 child tables that have foreign key relationships with this GUID column. All the tables have approximately 3-8 million records. None of these tables have any BLOB/CLOB/TEXT or any other fancy data types just normal numbers, varchars, dates, and timestamps (about 15-45 columns in each table). No partitions or other indexes other than the primary and foreign keys.
Now, the custom GUID algorithm has changed and though there are no collisions I would like to migrate all the old data to use GUIDs generated using the new algorithm. No other columns need to be changed. Number one priority is data integrity and performance is secondary.
Some of the possible solutions that I could think of were (as you will probably notice they all revolve around one idea only)
1. add new column ngu\_id and populate with new gu\_id; disable constraints; update child tables with ngu\_id as gu\_id; rename ngu\_id->gu\_id; re-enable constraints
2. read one master record and its dependent child records from child tables; insert into the same table with new gu\_id; remove all records with old gu\_ids
3. drop constraints; add a trigger to the master table such that all the child tables are updated; start updating old gu\_id's with new new gu\_ids; re-enable constraints
4. add a trigger to the master table such that all the child tables are updated; start updating old gu\_id's with new new gu\_ids
5. create new column ngu\_ids on all master and child tables; create foreign key constraints on ngu\_id columns; add update trigger to the master table to cascade values to child tables; insert new gu\_id values into ngu\_id column; remove old foreign key constraints based on gu\_id; remove gu\_id column and rename ngu\_id to gu\_id; recreate constraints if necessary;
6. use `on update cascade` if available?
My questions are:
1. Is there a better way? (Can't bury my head in the sand, gotta do this)
2. What is the most suitable way to do this? (I've to do this in Oracle, SQL server and mysql4 so, vendor-specific hacks are welcome)
3. What are the typical points of failure for such an exercise and how to minimize them?
If you are with me so far, thank you and hope you can help :)
|
Your ideas should work. The first is probably the one I would use. Some cautions and things to think about when doing this:
Do not do this unless you have a current backup.
I would leave both values in the main table. That way if you ever have to figure out from some old paperwork which record you need to access, you can do it.
Take the database down for maintenance while you do this and put it in single user mode. The very last thing you need while doing something like this is a user attempting to make changes while you are in midstream. Of course, the first action once in single user mode is the above-mentioned backup. You probably should schedule the downtime for some time when the usage is lightest.
Test on dev first! This should also give you an idea as to how long you will need to close production for. Also, you can try several methods to see which is the fastest.
Be sure to communicate in advance to users that the database will be going down at the scheduled time for maintenance and when they can expect to have it be available again. Make sure the timing is ok. It really makes people mad when they plan to stay late to run the quarterly reports and the database is not available and they didn't know it.
Since there are a fairly large number of records, you might want to run the updates of the child tables in batches (one reason not to use cascading updates). This can be faster than trying to update 5 million records with one update statement. However, don't try to update one record at a time or you will still be here next year doing this task.
Drop indexes on the GUID field in all the tables and recreate after you are done. This should improve the performance of the change.
|
It's difficult to say what the "best" or "most suitable" approach is as you have not described what you are looking for in a solution. For example, do the tables need to be available for query while you are migrating to new IDs? Do they need to be available for concurrent modification? Is it important to complete the migration as fast as possible? Is it important to minimize the space used for migration?
Having said that, I would prefer #1 over your other ideas, assuming they all met your requirements.
Anything that involves a trigger to update the child tables seems error-prone and over complicated and likely will not perform as well as #1.
Is it safe to assume that new IDs will never collide with old IDs? If not, solutions based on updating the IDs one at a time will have to worry about collisions -- this will get messy in a hurry.
Have you considered using `CREATE TABLE AS SELECT` (CTAS) to populate new tables with the new IDs? You'll be making a copy of your existing tables and this will require additional space, however it is likely to be faster than updating the existing tables in place. The idea is: (i) use CTAS to create new tables with new IDs in place of the old, (ii) create indexes and constraints as appropriate on the new tables, (iii) drop the old tables, (iv) rename the new tables to the old names.
|
updating primary key of master and child tables for large tables
|
[
"",
"sql",
"sql-server",
"oracle",
"migration",
"mysql4",
""
] |
I'm getting a strange error from `g++` 3.3 in the following code:
```
#include <bitset>
#include <string>
using namespace std;
template <int N, int M>
bitset<N> slice_bitset(const bitset<M> &original, size_t start) {
string str = original.to_string<char, char_traits<char>, allocator<char> >();
string newstr = str.substr(start, N);
return bitset<N>(newstr);
}
int main() {
bitset<128> test;
bitset<12> result = slice_bitset<12, 128>(test, 0);
return 0;
}
```
The error is as follows:
```
In function `std::bitset slice_bitset(const std::bitset&, unsigned int)':
syntax error before `,' token
`char_traits' specified as declarator-id
two or more data types in declaration of `char_traits'
`allocator' specified as declarator-id
two or more data types in declaration of `allocator'
syntax error before `>' token
```
It has to be something really silly, but I've already told it to my rubber duck and a friend to no avail.
Thanks, Lazyweb.
|
The selected answer from [CAdaker](https://stackoverflow.com/questions/231868/c-two-or-more-data-types-in-declaration#231904) solves the problem, but does not explain **why** it solves the problem.
When a function template is being parsed, lookup does not take place in dependent types. As a result, constructs such as the following can be parsed:
```
template <typename T>
class B;
template <typename T>
void foo (B<T> & b) {
// Use 'b' here, even though 'B' not defined
}
template <typename T>
class B
{
// Define 'B' here.
};
```
However, this "feature" has a cost, and in this case it is that the definition of 'foo' requires hints on the contents of the template 'B'. If 'foo' uses a nested type of 'B', then the `typename` keyword is required to tell the compiler that the name is a type:
```
template <typename T>
void foo (B<T> & b)
{
typename B<T>::X t1; // 'X' is a type - this declares t1
B<T>::Y * t1; // 'Y' is an object - this is multiplication
}
```
Without 'typename' in the above the compiler will assume that `X` is an object (or function).
Similarly, if a member function is called and the call has explicit template arguments then the compiler needs to know to treat the `<` as the start of a template argument list and not the less than operator:
```
template <typename T>
void foo (B<T> & b)
{
b.template bar<int> (0); // 'bar' is a template, '<' is start of arg list
b.Y < 10; // 'Y' is an object, '<' is less than operator
}
```
Without `template`, the compiler assumes that `<` is the less than operator, and so generates the syntax error when it sees `int>` since that is not an expression.
These hints are required *even* when the definition of the template is visible. The reason is that an explicit specialization might later change the definition that is actually chosen:
```
template <typename T>
class B
{
template <typename S>
void a();
};
template <typename T>
void foo (B<T> & b)
{
b.a < 10; // 'B<int>::a' is a member object
}
template <>
class B<int>
{
int a;
};
```
|
Use either just
```
original.to_string();
```
or, if you really need the type specifiers,
```
original.template to_string<char, char_traits<char>, allocator<char> >();
```
|
C++ two or more data types in declaration
|
[
"",
"c++",
"templates",
"g++",
""
] |
I am trying to set the innerxml of a xmldoc but get the exception: Reference to undeclared entity
```
XmlDocument xmldoc = new XmlDocument();
string text = "Hello, I am text &alpha; &ndash; &mdash;";
xmldoc.InnerXml = "<p>" + text + "</p>";
```
This throws the exception:
> Reference to undeclared entity 'alpha'. Line 2, position 2..
How would I go about solving this problem?
|
XML, unlike HTML, does not predefine named entities (i.e. named references to Unicode characters), so `&alpha;`, `&mdash;`, etc. are not translated to their corresponding characters. You must use the numerical character references instead. The only named entities predefined in XML are `&lt;`, `&gt;`, `&amp;`, `&quot;`, and `&apos;`.
If you want to create HTML, use an HtmlDocument instead.
|
In .Net, you can use the `System.Xml.XmlConvert` class:
```
string text = XmlConvert.EncodeName("Hello α");
```
Alternatively, you can declare the entities locally by putting the declarations between square brackets in a DOCTYPE declaration. Add the following header to your xml:
```
<!DOCTYPE documentElement[
<!ENTITY alpha "&#945;">
<!ENTITY ndash "&#8211;">
<!ENTITY mdash "&#8212;">
]>
```
Google "HTML character entities" for the entity definitions.
|
Reference to undeclared entity exception while working with XML
|
[
"",
"c#",
"xml",
"dtd",
""
] |
I have a collection of domain objects that I need to convert into another type for use by the .NET framework. What is the best practice for doing such a transformation?
Specifically, I have a type called ContentEntry and I need to convert it into a SyndicationItem for use in putting into a SyndicationFeed. The conversion itself is straightforward, but I'm looking for a good pattern. Do I create a method on ContentEntry called CreateSyndicationItem() or perhaps a separate converter object? Perhaps an extension method?
This will be somewhat subjective, but I'd appreciate some ideas.
> Edit Note: I'd like to note that I don't control SyndicationItem. It is built into the .NET Framework. Also, I'd really like to convert several ContentEntry objects into SyndicationItems at one time.
|
As you cannot modify SyndicationItem's constructors, I'd suggest you use the [factory pattern](http://en.wikipedia.org/wiki/Factory_method_pattern). Create a SyndicationItemFactory class that has the method CreateSyndicationItem(). This method returns a SyndicationItem object. In this case, you'll only need one version of the method, which will take a ContentEntry object.
When you say you'd like to convert several ContentEntry objects at once, I'm assuming you have, say, an array of ContentEntry objects. Another method (say CreateSyndicationItems()) might take an array of ContentEntry objects as its parameter and return an array of SyndicationItems.
|
I'd suggest looking at [Adapter Pattern](http://en.wikipedia.org/wiki/Adapter_pattern)
> In computer programming, the adapter
> design pattern (often referred to as
> the wrapper pattern or simply a
> wrapper) translates one interface for
> a class into a compatible interface.
|
Best OO practice to adapt one type to another?
|
[
"",
"c#",
".net",
"oop",
""
] |
I'm using Tomcat 5.5 as my servlet container. My web application deploys via .jar and has some resource files (textual files with strings and configuration parameters) located under its WEB-INF directory. Tomcat 5.5 runs on ubuntu linux. The resource file is read with a file reader:
`fr = new FileReader("messages.properties");`
The problem is that sometimes the servlet can't find the resource file, but if I restart it a couple of times it works; then again, after some time, it stops working.
Can someone suggest what's the best way of reading resource strings from a servlet? Or a workaround for this problem?
Putting the resource files under WEB-INF/classes doesn't help either.
|
I'm guessing the problem is you're trying to use a relative path to access the file.
Using absolute path should help (i.e. "/home/tomcat5/properties/messages.properties").
However, the usual solution to this problem is to use the getResourceAsStream method of the ClassLoader. Deploying the properties file to "WEB-INF/classes" will make it available to the class loader and you'll be able to access the properties stream.
Untested proto-code:
```
Properties props = new Properties();
InputStream is =
    getClass().getClassLoader().getResourceAsStream("messages.properties");
if (is != null) { // null if the resource is not on the classpath
    try {
        props.load(is);
    } finally {
        is.close();
    }
}
```
|
If you are trying to access this file from a Servlet-aware class, such as a ContextListener or other lifecycle listener, you can use the ServletContext object to get the path to a resource.
These three are roughly equivalent. (Don't confuse this `getResourceAsStream` with the one provided by the `ClassLoader` class; they behave very differently.)
```
void myFunc(ServletContext context) {
//returns full path. Ex: C:\tomcat\5.5\webapps\myapp\web-inf\message.properties
String fullCanonicalPath = context.getRealPath("/WEB-INF/message.properties");
//Returns a URL to the file. Ex: file://c:/tomcat..../message.properties
URL urlToFile = context.getResource("/WEB-INF/message.properties");
//Returns an input stream. Like calling getResource().openStream();
InputStream inputStream = context.getResourceAsStream("/WEB-INF/message.properties");
//do something
}
```
|
tomcat 5.5 - problem with reading resource files
|
[
"",
"java",
"tomcat",
"servlets",
"resources",
""
] |
I want to execute a PHP script from PHP that will use different constants and different versions of classes that are already defined.
Is there a sandbox php\_module where i could just:
```
sandbox('script.php'); // run in a new php environment
```
instead of
```
include('script.php'); // run in the same environment
```
Or is [proc\_open()](http://php.net/proc_open) the only option?
PS: The script isn't accessible through the web, so `fopen('http://host/script.php')` is not an option.
|
There is [runkit](http://www.php.net/runkit), but you may find it simpler to just call the script over the command line (Use [shell\_exec](http://www.php.net/shell_exec)), if you don't need any interaction between the master and child processes.
|
There is a class on GitHub that may help; it's at an early stage but looks promising.
<https://github.com/fregster/PHPSandbox>
|
Is there a way to execute php code in a sandbox from within php
|
[
"",
"php",
"module",
"sandbox",
""
] |
I'm using [Interop.Domino.dll](http://www.codeproject.com/KB/cs/lotusnoteintegrator.aspx) to retrieve E-mails from a Lotus "Database" (Term used loosely). I'm having some difficulty in retrieving certain fields and wonder how to do this properly. I've been using `NotesDocument.GetFirstItem` to retrieve Subject, From and Body.
My issues in this regard are thus:
1. How do I retrieve Reply-To address? Is there a list of "Items" to get somewhere? I can't find it.
2. How do I retrieve friendly names for From and Reply-To addresses?
3. When I retrieve Body this way, it's formatted weirdly, with square bracket pairs ([]) interspersed randomly across the message body, and parts of the text aren't where I expect them.
Related code:
```
string
ActualSubject = nDoc.GetFirstItem("Subject").Text,
ActualFrom = nDoc.GetFirstItem("From").Text,
ActualBody = nDoc.GetFirstItem("Body").Text;
```
|
Hah, got it!
```
Object[] ni = (Object[])nDoc.Items;
string names_values = "";
for (int x = 0; x < ni.Length; x++)
{
NotesItem item = (NotesItem)ni[x];
if (!string.IsNullOrEmpty(item.Name)) names_values += x.ToString() + ": " + item.Name + "\t\t" + item.Text + "\r\n";
}
```
This returned a list of indices, names, and values:
```
0: Received from example.com ([192.168.0.1]) by host.example.com (Lotus Domino Release 6.5.4 HF182) with ESMTP id 2008111917343129-205078 ; Wed, 19 Nov 2008 17:34:31 -0500
1: Received from example.com ([192.168.0.2]) by host2.example.com (Lotus Domino Release 6.5.4 HF182) with ESMTP id 2008111917343129-205078 ; Wed, 19 Nov 2008 17:34:31 -0500
2: X_PGRTRKID 130057945714t
3: X_PGRSRC IE
4: ReplyTo "example" <name@email.example.com>
5: Principal "example" <customerservice@email.example.com>
6: From "IE130057945714t"<service@test.email.example.com>
7: SendTo me@example.com
8: Subject (Message subject redacted)
9: PostedDate 11/19/2008 03:34:15 PM
10: MIME_Version 1.0
11: $Mailer SMTP DirectMail
12: $MIMETrack Itemize by SMTP Server on xxxPT02-CORP/example(Release 6.5.4 HF182|May 31, 2005) at 11/19/2008 05:34:31 PM;Serialize by Router on xxxPT02-CORP/example(Release 6.5.4 HF182|May 31, 2005) at 11/19/2008 05:34:32 PM;Serialize complete at 11/19/2008 05:34:32 PM;MIME-CD by Router on xxxPT02-CORP/example(Release 6.5.4 HF182|May 31, 2005) at 11/19/2008 05:34:32 PM;MIME-CD complete at 11/19/2008 05:34:32 PM;Itemize by Router on camp-db-05/example(Release 7.0.2 HF76|November 03, 2006) at 11/19/2008 05:34:32 PM;MIME-CD by Notes Client on MyName/Guest/example(Release 6.5.6|March 06, 2007) at 11/20/2008 12:46:25 PM;MIME-CD complete at 11/20/2008 12:46:25 PM
13: Form Memo
14: $UpdatedBy ;CN=xxxPT02-CORP/O=example
15: $ExportHeadersConverted 1
16: $MessageID <redacted@LocalDomain>
17: RouteServers CN=xxxPT02-CORP/O=example;CN=camp-db-05/O=example
18: RouteTimes 11/19/2008 03:34:31 PM-11/19/2008 03:34:32 PM;11/19/2008 03:34:32 PM-11/19/2008 03:34:32 PM
19: $Orig 958F2E4E4B666AB585257506007C02A7
20: Categories
21: $Revisions
22: DeliveredDate 11/19/2008 03:34:32 PM
23: Body []exampleexample
```
Now, who can tell me why the Body keeps getting messed up?
|
The Body item is a NotesRichTextItem, not a regular NotesItem. They are a different type of object in the Lotus Notes world (and often the source of much developer frustration!)
I don't have much experience with using COM to connect to Domino, and I know there are differences in what you have access to, but the Domino Designer Help should give you lots of information on the classes, such as NotesRichTextItem.
Perhaps the method "GetFormattedText" would work better for you than accessing the item's Text property.
Here's an example of the method (taken from Domino Designer Help)
```
Dim doc As NotesDocument
Dim rtitem As Variant
Dim plainText As String
Dim fileNum As Integer
'...set value of doc...
Set rtitem = doc.GetFirstItem( "Body" )
If ( rtitem.Type = RICHTEXT ) Then
plainText = rtitem.GetFormattedText( False, 0 )
End If
' get a file number for the file
fileNum = Freefile
' open the file for writing
Open "c:\plane.txt" For Output As fileNum
' write the formatted text to the file
Print #fileNum, plainText
' close the file
Close #fileNum
```
|
C#, Lotus Interop: Getting Message Information
|
[
"",
"c#",
"lotus-notes",
"interop-domino",
""
] |
In PHP 5.2 there was a nice security function added called "input\_filter", so instead of saying:
```
$name = $_GET['name'];
```
you can now say:
```
$name = filter_input (INPUT_GET, 'name', FILTER_SANITIZE_STRING);
```
and it automatically sanitizes your string, there is also:
* `FILTER_SANITIZE_ENCODED`
* `FILTER_SANITIZE_NUMBER_INT`
* `FILTER_SANITIZE_EMAIL`
* `FILTER_SANITIZE_URL`
etc.
so this is a very convenient security feature to use and I want to switch over to it completely.
**The problem is...** I often manipulate the `$_GET` and `$_POST` arrays before processing them, like this:
`$_GET['name'] = '(default name)';`
but it seems that `filter_input` does not have access to the changes in `$_GET`, since it reads `INPUT_GET`, which is of type int (?). It would be nice if I could get `filter_input` to read `$_GET` instead, but:
```
$name = filter_input ( $_GET, 'name', FILTER_SANITIZE_STRING );
```
gives me the error:
```
Warning: filter_input() expects parameter 1 to be long, array given.
```
Can anyone think of a way that I could:
* manipulate the source of `INPUT_GET` (wherever it is) so that I can change its values before filter\_input can read them
* get filter\_input to read `$_GET`
`ADDENDUM:`
---
Rich asked: "Why are you changing the arrays anyway, surely you want them to be an input, rather than something you've programmatically inserted."
It is just a very convenient place to preprocess variables coming in, e.g. in order to:
* set defaults (if $\_GET['state'] = '' then $\_GET['state'] = 'AL')
* do manual processing (delete all spaces, etc.)
* security (some of which will be done by filter\_input now)
Then I know by the time I get the incoming variable, it is secure and valid. Of course I could copy the $\_GET array to another array and process THAT array, but that is just an unnecessary step, since $\_GET is already a functioning array, so it makes sense to do it with these system arrays that already exist.
|
You could manually force it to read the arrays again by using [filter\_var](https://www.php.net/manual/en/function.filter-var.php) and [filter\_var\_array](https://www.php.net/manual/en/function.filter-var-array.php)
```
$name = filter_var ( $_GET['name'], FILTER_SANITIZE_STRING );
```
|
A handy way to do this without modifying the global array:
```
if (!($name = filter_input(INPUT_GET, 'name'))) {
$name = 'default_value';
}
```
Or using the ternary operator:
```
$name = ($name = filter_input(INPUT_GET, 'name')) ? $name : 'default_value';
```
|
PHP's new input_filter does not read $_GET or $_POST arrays
|
[
"",
"php",
"security",
"filter-input",
""
] |
I have already visited [Preferred Python unit-testing framework](https://stackoverflow.com/questions/191673/preferred-python-unit-testing-framework). I am not just looking at Python Unit Testing Framework, but also code coverage with respect to unit tests. So far I have only come across [coverage.py](http://nedbatchelder.com/code/modules/coverage.html). Is there any better option?
An interesting option for me is to integrate [cpython](http://www.python.org/), unit testing of Python code and code coverage of Python code with Visual Studio 2008 through plugins (something similar to [IronPython Studio](http://www.codeplex.com/IronPythonStudio)). What can be done to achieve this? I look forward to suggestions.
|
PyDev seems to allow code coverage from within Eclipse.
I've yet to find how to integrate that with my own (rather complex) build process, so I use Ned Batchelder's coverage.py at the command line.
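For anyone wiring coverage.py into their own build, the command-line workflow only needs a runnable test suite. A minimal sketch (the `cube` function and class names are purely illustrative; coverage.py itself is assumed installed but is not imported here):

```python
# A tiny, self-contained test run. Saving this as, say, test_cube.py and
# running it under coverage.py:
#     coverage run test_cube.py
#     coverage report
# would show which lines the tests exercised.
import unittest

def cube(n):
    return n * n * n

class CubeTest(unittest.TestCase):
    def test_cube(self):
        self.assertEqual(cube(3), 27)

# Run the suite programmatically instead of unittest.main(), so the script
# doesn't call sys.exit() and can be embedded in a larger build step.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(CubeTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())
```

The same module runs identically with or without coverage measurement, which is what makes the command-line integration straightforward.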
|
We use this [Django coverage integration](http://www.djangosnippets.org/snippets/705/), but instead of using the default coverage.py reporting, we generate some simple HTML:
[Colorize Python source using the built-in tokenizer](http://code.activestate.com/recipes/52298/).
|
Code Coverage and Unit Testing of Python Code
|
[
"",
"python",
"visual-studio-2008",
"unit-testing",
"code-coverage",
""
] |
Informed opinions needed about the merits of flat file databases. I'm considering using a flat file database scheme to manage data for a custom blog. It would be deployed on a Linux OS variant and written in Java.
What are the possible negatives or positives regarding performance for reading and writing of both articles and comments?
Would article retrieval crap out because of it being a flat file rather than a RDBMS if it were to get slash-doted? (Wishful thinking)
I'm not against using a RDBMS, just asking the community their opinion on the viability of such a software architecture scheme.
**Follow Up:**
In the case of this question I would see "Flat file == file system–based." For example, each blog entry and its accompanying metadata would be in a single file, making for many files organized by the date structure of the file folders (blogs\testblog2\2008\12\01 == 12/01/2008).
|
Flat file databases have their place and are quite workable for the right domain.
Mail servers and NNTP servers of the past really pushed the limits of how far you can really take these things (which is actually quite far -- file systems can have millions of files and directories).
Flat file DBs' two biggest weaknesses are indexing and atomic updates, but if the domain is suitable these may not be an issue.
But you can, for example, with proper locking, do an "atomic" index update using basic file system commands, at least on Unix.
A simple case is having the indexing process running through the data to create the new index file under a temporary name. Then, when you are done, you simply rename (either the system call rename(2) or the shell mv command) the old file over the new file. Rename and mv are atomic operations on a Unix system (i.e. it either works or it doesn't and there's never a missing "in between state").
Same with creating new entries. Basically write the file fully to a temp file, then rename or mv it in to its final place. Then you never have an "intermediate" file in the "DB". Otherwise, you might have a race condition (such as a process reading a file that is still being written, and may get to the end before the writing process is complete -- ugly race condition).
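As a sketch of that write-then-rename idiom in Java (file and directory names here are made up; `java.nio.file`'s `ATOMIC_MOVE` is how you request an atomic rename on platforms that support it):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class Main {
    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("flatdb");
        Path tmp = dir.resolve("entry.txt.tmp"); // temporary name, invisible to readers
        Path fin = dir.resolve("entry.txt");     // final name in the "DB"

        // Write the entry fully under the temporary name first...
        Files.write(tmp, "post body".getBytes());

        // ...then rename it into place in one atomic step, so no reader
        // ever sees a half-written entry.
        Files.move(tmp, fin, StandardCopyOption.ATOMIC_MOVE);

        System.out.println(new String(Files.readAllBytes(fin)));
    }
}
```

A reader either sees no entry at all or the complete one; there is never an intermediate state to race against.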
If your primary indexing works well with directory names, then that works just fine. You can use a hashing scheme, for example, to create directories and subdirectories to locate new files.
Finding a file using the file name and directory structure is very fast as most filesystems today index their directories.
If you're putting a million files in a directory, there may well be tuning issues you'll want to look into, but out of the box most will handle tens of thousands easily. Just remember that if you need to SCAN the directory, there's going to be a lot of files to scan. Partitioning via directories helps prevent that.
But that all depends on your indexing and searching techniques.
Effectively, a stock off-the-shelf web server serving up static content is a large, flat file database, and the model works pretty well.
Finally, of course, you have the plethora of free Unix file system level tools at your disposal, but all of them have issues with zillions of files (forking grep 1000000 times to find something in a file will have performance tradeoffs -- the overhead simply adds up).
If all of your files are on the same file system, then hard links also give you options (since they, too, are atomic) in terms of putting the same file in different places (basically for indexing).
For example, you could have a "today" directory, a "yesterday" directory, a "java" directory, and the actual message directory.
So, a post could be linked in the "today" directory, the "java" directory (because the post is tagged with "java", say), and in its final place (say /articles/2008/12/01/my\_java\_post.txt). Then, at midnight, you run two processes. The first one takes all files in the "today" directory, checks their create date to make sure they're not "today" (since the process can take several seconds and a new file might sneak in), and renames those files to "yesterday". Next, you do the same thing for the "yesterday" directory, only here you simply delete them if they're out of date.
Meanwhile, the file is still in the "java" and the ".../12/01" directory. Since you're using a Unix file system, and hard links, the "file" only exists once, these are all just pointers to the file. None of them are "the" file, they're all the same.
You can see that while each individual file move is atomic, the bulk is not. For example, while the "today" script is running, the "yesterday" directory can well contain files from both "yesterday" and "the day before" because the "yesterday" script had not yet run.
In a transactional DB, you would do that all at once.
But, simply, it is a tried and true method. Unix, in particular, works VERY well with that idiom, and modern file systems support it quite well, too.
|
*(answer copied and modified from [here](https://stackoverflow.com/questions/326203/alternatives-to-mysql#326210))*
I would advise against using a flat file for anything besides read-only access, because then you'd have to deal with concurrency issues like making sure only one process is writing to the file at once. Instead, I recommend [SQLite](http://www.sqlite.org/), a fully functional SQL database that's stored in a file. SQLite already has built-in concurrency, so you don't have to worry about things like file locking, and it's really fast for reads.
If, however, you are doing lots of database changes, it's best to do them all at once inside a [transaction](http://www.sqlite.org/lang_transaction.html). This will only write the changes to the file once, as opposed to every time a change query is issued. This dramatically increases the speed of doing multiple changes.
When a change query is issued, whether it's inside a transaction or not, the whole database is locked until that query finishes. This means that extremely large transactions could adversely affect the performance of other processes because they must wait for the transaction to finish before they can access the database. In practice, I haven't found this to be that noticeable, but it's always good practice to try to minimize the number of database-modifying queries you issue, and it's certainly faster than trying to use a flat file.
|
Are flat file databases any good?
|
[
"",
"java",
"database",
"linux",
"architecture",
""
] |
I'm creating a multi-part web form in ASP.NET that uses Panels for the different steps, making only the Panel for the current step visible. On Step 1, I have a drop-down list that uses a Javascript function to reconfigure some of the fields in the same Panel via "onchange". Obviously, since the client-side script is only affecting the DOM, when I go to Step 2 and then back up to Step 1, the fields in Step 1 are back to their original configuration even though the same drop-down choice is selected.
What is a good method for storing the visual state of the Panels between steps? I considered calling the drop-down's onchange function on page load, but that seemed clunky. Thanks!
--
Thanks for the quick answers - I think I'll try out the Wizard, but the AJAX solution also sounds like fun.
|
You might consider an ASP.Net Wizard control for this; it will automate a lot of what you're trying to do.
|
I suggest you use the MultiView control, which is, let's say, semantically more appropriate. Then store this data in ViewState. I have written something similar and it rocks.
|
Good way to maintain ASP.NET Panel states?
|
[
"",
"c#",
"panel",
"state",
""
] |
How do you get a list of files within a directory so each can be processed?
|
`boost::filesystem` can do that: <http://www.boost.org/doc/libs/1_37_0/libs/filesystem/example/simple_ls.cpp>
|
Here's what I use:
```
/* Returns a list of files in a directory (except the ones that begin with a dot) */
#include <string>
#include <vector>
#ifdef WINDOWS
#include <windows.h>
#else
#include <dirent.h>
#include <sys/stat.h>
#endif

void GetFilesInDirectory(std::vector<std::string> &out, const std::string &directory)
{
#ifdef WINDOWS
    HANDLE dir;
    WIN32_FIND_DATA file_data;
    if ((dir = FindFirstFile((directory + "/*").c_str(), &file_data)) == INVALID_HANDLE_VALUE)
        return; /* No files found */
    do {
        const std::string file_name = file_data.cFileName;
        const std::string full_file_name = directory + "/" + file_name;
        const bool is_directory = (file_data.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY) != 0;
        if (file_name[0] == '.')
            continue;
        if (is_directory)
            continue;
        out.push_back(full_file_name);
    } while (FindNextFile(dir, &file_data));
    FindClose(dir);
#else
    DIR *dir;
    struct dirent *ent;
    struct stat st;
    if ((dir = opendir(directory.c_str())) == NULL) /* note: opendir takes a C string */
        return; /* Directory could not be opened */
    while ((ent = readdir(dir)) != NULL) {
        const std::string file_name = ent->d_name;
        const std::string full_file_name = directory + "/" + file_name;
        if (file_name[0] == '.')
            continue;
        if (stat(full_file_name.c_str(), &st) == -1)
            continue;
        const bool is_directory = (st.st_mode & S_IFDIR) != 0;
        if (is_directory)
            continue;
        out.push_back(full_file_name);
    }
    closedir(dir);
#endif
} // GetFilesInDirectory
```
|
How do I get a list of files in a directory in C++?
|
[
"",
"c++",
"file",
"directory",
""
] |
How can I parse integers passed to an application as command line arguments if the app is unicode?
Unicode apps have a main like this:
```
int _tmain(int argc, _TCHAR* argv[])
```
argv[?] is a wchar\_t\*. That means I can't use atoi. How can I convert it to an integer? Is stringstream the best option?
|
If you have a TCHAR array or a pointer to the beginning of it, you can use `std::basic_istringstream` to work with it:
```
std::basic_istringstream<_TCHAR> ss(argv[x]);
int number;
ss >> number;
```
Now, `number` is the converted number. This will work in ANSI mode (\_TCHAR is typedef'ed to `char`) and in Unicode mode (\_TCHAR is typedef'ed to `wchar_t`, as you say).
|
Dry coded and I don't develop on Windows, but using [`TCLAP`](http://tclap.sf.net/), this should get you running with wide character `argv` values:
```
#include <iostream>
#ifdef WINDOWS
# define TCLAP_NAMESTARTSTRING "~~"
# define TCLAP_FLAGSTARTSTRING "/"
#endif
#include "tclap/CmdLine.h"
int main(int argc, _TCHAR *argv[]) {
int myInt = -1;
try {
    TCLAP::ValueArg<int> intArg("i", "int", "an integer argument", false, 0, "integer");
TCLAP::CmdLine cmd("this is a message", ' ', "0.99" );
cmd.add(intArg);
cmd.parse(argc, argv);
if (intArg.isSet())
myInt = intArg.getValue();
} catch (TCLAP::ArgException& e) {
    std::cout << "ERROR: " << e.error() << " " << e.argId() << std::endl;
}
std::cout << "My Int: " << myInt << std::endl;
return 0;
}
```
|
Parsing command line arguments in a unicode C++ application
|
[
"",
"c++",
"command-line",
"unicode",
""
] |
So I am currently learning C++ and decided to make a program that tests the skills I have learned so far. Now in my code I want to check if the value that the user enters is a double; if it is not a double, I will loop and ask them to re-enter it. The problem I have is how do I go about checking what type of value the user enters — e.g., if a user enters a char or string, I can output an error message. Here is my code:
```
//cubes a user entered number
#include <iostream>
using namespace std;
double cube(double n); //function prototype
int main()
{
cout << "Enter the number you want to cube: "; //ask user to input number
double user;
cin >> user; //user entering the number
cout << "The cube of " << user << " is " << cube(user) << "." << endl; //displaying the cubed number
return 0;
}
double cube (double n) //function that cubes the number
{
return n*n*n; // cubing the number and returning it
}
```
Edit: I would have to say I just started and don't have the slightest clue about your code, but I will check out your link. By the way, I haven't learned how to work with templates yet; I am learning about dealing with data, only Chapter 3 in my C++ Primer Plus 5th edition.
|
There is no suitable way to check if a string *really* contains a double within the standard library. You probably want to use [Boost](http://www.boost.org/). The following solution is inspired by recipe 3.3 in [C++ Cookbook](https://rads.stackoverflow.com/amzn/click/com/0596007612):
```
#include <iostream>
#include <boost/lexical_cast.hpp>
using namespace std;
using namespace boost;
double cube(double n);
int main()
{
while(true)
{
cout << "Enter the number you want to cube: ";
string user;
cin >> user;
try
{
// The following instruction tries to parse a double from the 'user' string.
// If the parsing fails, it raises an exception of type bad_lexical_cast.
// If an exception is raised within a try{ } block, the execution proceeds
// with one of the following catch() blocks
double d = lexical_cast <double> (user);
cout << "The cube of " << d << " is " << cube(d) << "." << endl;
break;
}
catch(bad_lexical_cast &e)
{
// This code is executed if the lexical_cast raised an exception; We
// put an error message and continue with the loop
cout << "The inserted string was not a valid double!" << endl;
}
}
return 0;
}
double cube (double n)
{
return n*n*n;
}
```
|
## Safe C++ Way
You can define a function for this using `std::istringstream`:
```
#include <sstream>
bool is_double(std::string const& str) {
std::istringstream ss(str);
// always keep the scope of variables as close as possible. we see
// 'd' only within the following block.
{
double d;
ss >> d;
}
/* eat up trailing whitespace if there was a double read, and ensure
* there is no character left. the eof bit is set in the case that
* `std::ws` tried to read beyond the stream. */
return (ss && (ss >> std::ws).eof());
}
```
To assist you in figuring out what it does (some points are simplified):
* Creation of a input-stringstream initialized with the string given
* Reading a double value out of it using `operator>>`. This means skipping whitespace and trying to read a double.
* If no double could be read, as in `abc` the stream sets the *fail*-bit. Note that cases like `3abc` will succeed and will *not* set the fail-bit.
* If the fail-bit is set, `ss` evaluates to a zero value, which means *false*.
* If a double was read, we skip trailing whitespace. If we then are at the end of the stream (note that `eof()` will return *true* if we tried to read past the end. `std::ws` does exactly that), `eof` will return true. Note this check makes sure that `3abc` will not pass our check.
* If both sides, right and left of the `&&`, evaluate to *true*, we return true to the caller, signaling the given string is a double.
Similarly, you check for `int` and other types. If you know how to work with templates, you know how to generalize this for other types as well. Incidentally, this is exactly what `boost::lexical_cast` provides for you. Check it out: <http://www.boost.org/doc/libs/1_37_0/libs/conversion/lexical_cast.htm>.
## C Way One
This way has advantages (being fast) but also major disadvantages (it can't be generalized using a template, and you need to work with raw pointers):
```
#include <cstdlib>
#include <cctype>
bool is_double(std::string const& s) {
char * endptr;
std::strtod(s.c_str(), &endptr);
if(endptr != s.c_str()) // skip trailing whitespace
while(std::isspace(*endptr)) endptr++;
return (endptr != s.c_str() && *endptr == '\0');
}
```
`strtod` will set `endptr` to the last character processed. Which is in our case the terminating null character. If no conversion was performed, endptr is set to the value of the string given to `strtod`.
## C Way Two
One might think that `std::sscanf` does the trick. But it's easy to overlook something. Here is the correct way to do it:
```
#include <cstdio>
bool is_double(std::string const& s) {
int n;
double d;
return (std::sscanf(s.c_str(), "%lf %n", &d, &n) >= 1 &&
n == static_cast<int>(s.size()));
}
```
`std::sscanf` will return the items converted. Although the Standard specifies that `%n` is not included in that count, several sources contradict each other. It's the best to compare `>=` to get it right (see the manpage of `sscanf`). `n` will be set to the amount of the processed characters. It is compared to the size of the string. The space between the two format specifiers accounts for optional trailing whitespace.
## Conclusion
If you are a beginner, read into `std::stringstream` and do it the C++ way. Best not mess with pointers until you feel good with the general concept of C++.
|
Check variable type in C++
|
[
"",
"c++",
"variables",
"double",
""
] |
Is it possible to have a WPF window/element detect the drag'n'dropping of a file from windows explorer in C# .Net 3.5? I've found solutions for WinForms, but none for WPF.
|
Unfortunately, TextBox, RichTextBox, and FlowDocument viewers always mark drag-and-drop events as handled, which prevents them from bubbling up to your handlers. You can restore drag-and-drop events being intercepted by these controls by force-handling the drag-and-drop events (use UIElement.AddHandler and set handledEventsToo to true) and setting e.Handled to false in your handler.
|
Try the following :
```
private void MessageTextBox_Drop(object sender, DragEventArgs e)
{
if (e.Data is DataObject && ((DataObject)e.Data).ContainsFileDropList())
{
foreach (string filePath in ((DataObject)e.Data).GetFileDropList())
{
// Processing here
}
}
}
private void MessageTextBox_PreviewDragEnter(object sender, DragEventArgs e)
{
var dropPossible = e.Data != null && ((DataObject)e.Data).ContainsFileDropList();
if (dropPossible)
{
e.Effects = DragDropEffects.Copy;
}
}
private void MessageTextBox_PreviewDragOver(object sender, DragEventArgs e)
{
e.Handled = true;
}
```
|
Detect Drag'n'Drop file in WPF?
|
[
"",
"c#",
"wpf",
".net-3.5",
"drag-and-drop",
""
] |
Obviously it gets updated during a write operation, but are there any non-destructive operations that also force an update? Basically looking to be able to do the equivalent of the \*nix touch command, but in C# programmatically.
|
Use the function SetFileTime (C++) or File.SetLastWriteTime (C#) to set the last write time to the current time.
|
[System.IO.File.SetLastWriteTime(string path, DateTime lastWriteTime);](http://msdn.microsoft.com/en-us/library/system.io.file.setlastwritetime.aspx)
|
What's required for Windows to update the "file modified" timestamp?
|
[
"",
"c#",
"windows",
"file",
"timestamp",
""
] |
I recently started building a console version of a web application. I copied my custom sections from my web.config to my app.config. When I go to get config information I get this error:
An error occurred creating the configuration section handler for x/y: Could not load type 'x' from assembly 'System.Configuration
The line that it is not liking is:
return ConfigurationManager.GetSection("X/Y") as Z;
Anyone run into something like this?
I was able to add
```
<add key="IsReadable" value="0"/>
```
in the appSettings and read it.
Addition:
I do actually have this defined about the custom section:
```
<configSections>
<sectionGroup name="x">
<section name="y" type="zzzzz"/>
</sectionGroup>
</configSections>
```
|
It sounds like your config-section handler is not defined:
```
<configSections>
<section
name="YOUR_CLASS_NAME_HERE"
type="YOUR.NAMESPACE.CLASSNAME, YOUR.NAMESPACE, Version=1.1.0.0, Culture=neutral, PublicKeyToken=PUBLIC_TOKEN_ID_FROM_ASSEMBLY"
allowLocation="true"
allowDefinition="Everywhere"
/>
</configSections>
```
|
I had this identical issue recently. I created a custom sectiongroup for a web application(ran just fine), but when I ported this layer to a console app, the sectiongroup was failing.
You were correct in your question regarding how much of the "type" is required in your section definition. I've modified your configuration section with an example below:
```
<configSections>
<section
name="yourClassName"
type="your.namespace.className, your.assembly"
allowLocation="true"
allowDefinition="Everywhere" />
</configSections>
```
You'll notice the type now has the class name followed by the assembly name. This is required for interaction outside of a web environment.
NOTE: Assembly name does not necessarily equal your namespace(for a given section).
|
App.config - custom section not working
|
[
"",
"c#",
""
] |
Is it possible to have a file belong to multiple subpackages? For example:
```
/**
* Name
*
* Desc
*
* @package Core
* @subpackage Sub1
* @subpackage Sub2
*/
```
Thanks!
|
It appears that PHPDoc does not allow you to do it for namespacing reasons. From the [PHPDoc Docs](http://manual.phpdoc.org/HTMLSmartyConverter/HandS/phpDocumentor/tutorial_tags.subpackage.pkg.html):
> NOTE: The @subpackage tag is intended to help categorize the elements that are in an actual @package value. Since PHP itself doesn't allow you to have two functions with the same name in the same script, PhpDocumentor also requires all names in an @package to be unique... meaning, @subpackage does not allow further "naming separation" inside that @package. What it does do is allow a level of visual grouping/separation of the elements inside that @package.
It seems that PHPDoc's @package is a way to pseudo-namespace your functions and classes, while @subpackage is simply for categorization. Have you tried adding more than one subpackage? If so, what were the results?
|
You can use the following to do this:
```
* @package popcap\system\cache
```
This will create a hierarchy of packages when you compile the PHPdocs.
|
multiple subpackages for phpdoc
|
[
"",
"php",
"phpdoc",
""
] |
How do you safely encode a URL using JavaScript such that it can be put into a GET string?
```
var myUrl = "http://example.com/index.html?param=1&anotherParam=2";
var myOtherUrl = "http://example.com/index.html?url=" + myUrl;
```
I assume that you need to encode the `myUrl` variable on that second line?
|
Check out the built-in functions [encodeURIComponent(str)](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/encodeURIComponent) and [encodeURI(str)](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/encodeURI).
In your case, this should work:
```
var myOtherUrl =
"http://example.com/index.html?url=" + encodeURIComponent(myUrl);
```
|
You have three options:
* `escape()` will not encode: `@*/+`
* `encodeURI()` will not encode: `~!@#$&*()=:/,;?+'`
* `encodeURIComponent()` will not encode: `~!*()'`
But in your case, if you want to pass a [URL](http://en.wikipedia.org/wiki/Uniform_Resource_Locator) into a `GET` parameter of other page, you should use `escape` or `encodeURIComponent`, but not `encodeURI`.
See Stack Overflow question *[Best practice: escape, or encodeURI / encodeURIComponent](https://stackoverflow.com/questions/75980/best-practice-escape-or-encodeuri-encodeuricomponent)* for further discussion.
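A quick round trip shows why `encodeURIComponent` is the right choice for a parameter value: the `&` and `=` inside the inner URL are escaped, so they survive being embedded and can be recovered exactly (the `split` here is just a crude way to pull the parameter back out for the demo):

```javascript
const myUrl = "http://example.com/index.html?param=1&anotherParam=2";

// Encode the whole inner URL so its '?', '&', and '=' don't break the outer query string.
const wrapped = "http://example.com/index.html?url=" + encodeURIComponent(myUrl);

// Pull the parameter back out and decode it.
const encoded = wrapped.split("?url=")[1];
const restored = decodeURIComponent(encoded);

console.log(restored === myUrl); // true
```

Had `encodeURI` been used instead, the inner `&` and `=` would have been left alone and the receiving page would have seen `anotherParam` as a separate top-level parameter.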
|
Encode URL in JavaScript
|
[
"",
"javascript",
"url",
"encode",
"urlencode",
""
] |
I just don't get it. Tried on VC++ 2008 and G++ 4.3.2
```
#include <map>
class A : public std::multimap<int, bool>
{
public:
size_type erase(int k, bool v)
{
return erase(k); // <- this fails; had to change to __super::erase(k)
}
};
int main()
{
A a;
a.erase(0, false);
a.erase(0); // <- fails. can't find base class' function?!
return 0;
}
```
|
When you declare a function in a class with the same name but different signature from a superclass, then the name resolution rules state that the compiler should *stop looking* for the function you are trying to call once it finds the first match. After finding the function by name, *then* it applies the overload resolution rules.
So what is happening is the compiler finds your implementation of `erase(int, bool)` when you call `erase(0)`, and then decides that the arguments don't match.
|
1: You need to be *extremely* careful when deriving from C++ standard library containers. It can be done, but because they don't have virtual destructors and other such niceties, it is usually the wrong approach.
2: Overload rules are a bit quirky here. The compiler first looks in the derived class, and if it finds *any* overload with the same name, it stops looking there. It only looks in the base class if no overloads were found in the derived class.
A simple solution to that is to introduce the functions you need from the base class into the derived class' namespace:
```
class A : public std::multimap<int, bool>
{
public:
using std::multimap<int, bool>::erase; // Any erase function found in the base class should be injected into the derived class namespace as well
size_type erase(int k, bool v)
{
return erase(k);
}
};
```
Alternatively, of course, you could simply write a small helper function in the derived class redirecting to the base class function
|
What is wrong with this inheritance?
|
[
"",
"c++",
"templates",
""
] |
I have a layout similar to this:
```
<div id="..."><img src="..."></div>
```
and would like to use a jQuery selector to select the child `img` inside the `div` on click.
To get the `div`, I've got this selector:
```
$(this)
```
How can I get the child `img` using a selector?
|
The jQuery constructor accepts a 2nd parameter called [`context`](http://api.jquery.com/jQuery/#jQuery-selector-context) which can be used to override the context of the selection.
```
jQuery("img", this);
```
Which is the same as using [`.find()`](http://api.jquery.com/find) like this:
```
jQuery(this).find("img");
```
If the imgs you desire are **only** direct descendants of the clicked element, you can also use [`.children()`](http://api.jquery.com/children):
```
jQuery(this).children("img");
```
|
You could also use
```
$(this).find('img');
```
which would return all `img`s that are descendants of the `div`
|
How to get the children of the $(this) selector?
|
[
"",
"javascript",
"jquery",
"jquery-selectors",
""
] |
I'm trying to get the contents from another file with `file_get_contents` (don't ask why).
I have two files: *test1.php* and *test2.php*. *test1.php* returns a string, bases on the user that is logged in.
*test2.php* tries to get the contents of *test1.php* and is being executed by the browser, thus getting the cookies.
To send the cookies with `file_get_contents`, I create a streaming context:
```
$opts = array('http' => array('header'=> 'Cookie: ' . $_SERVER['HTTP_COOKIE']."\r\n"));
```
I'm retrieving the contents with:
```
$contents = file_get_contents("http://www.example.com/test1.php", false, $opts);
```
But now I get the error:
> Warning: file\_get\_contents(<http://www.example.com/test1.php>) [function.file-get-contents]: failed to open stream: HTTP request failed! HTTP/1.1 404 Not Found
Does somebody know what I'm doing wrong here?
edit:
forgot to mention: Without the *streaming\_context*, the page just loads. But without the cookies I don't get the info I need.
|
First, this is probably just a typo in your question, but the third argument to file\_get\_contents() needs to be your streaming context, NOT the array of options. I ran a quick test with something like this, and everything worked as expected:
```
$opts = array('http' => array('header'=> 'Cookie: ' . $_SERVER['HTTP_COOKIE']."\r\n"));
$context = stream_context_create($opts);
$contents = file_get_contents('http://example.com/test1.txt', false, $context);
echo $contents;
```
The error indicates the server is returning a 404. Try fetching the URL **from the machine PHP is running on** and **not** from your workstation/desktop/laptop. It may be that your web server is having trouble reaching the site, your local machine has a cached copy, or some other network screwiness.
Be sure you repeat your exact request when running this test, including the cookie you're sending (command line curl is good for this). It's entirely possible that the page in question may load fine in a browser without the cookie, but when you send the cookie the site actually is returning a 404.
Make sure that $\_SERVER['HTTP\_COOKIE'] has the raw cookie you think it does.
If you're screen scraping, download Firefox and a copy of the LiveHTTPHeaders extension. Perform all the necessary steps to reach whatever page it is you want in Firefox. Then, using the output from LiveHTTPHeaders, recreate the exact same request requence. Include every header, **not** just the cookies.
Finally, [PHP Curl](http://us.php.net/curl) exists for a reason. If at all possible, (I'm not asking!) use it instead. :)
|
Just to share this information.
When using `session_start()`, the session file is locked by PHP. Thus the current script is the only one that can access the session file. If you try to access it via `fsockopen()` or `file_get_contents()` you can wait a long time, since you are trying to open a file that has been locked.
One way to solve this problem is to use `session_write_close()` to unlock the file, and re-lock it afterwards with `session_start()`.
Example:
```
<?php
$opts = array('http' => array('header'=> 'Cookie: ' . $_SERVER['HTTP_COOKIE']."\r\n"));
$context = stream_context_create($opts);
session_write_close(); // unlock the file
$contents = file_get_contents('http://120.0.0.1/controler.php?c=test_session', false, $context);
session_start(); // Lock the file
echo $contents;
?>
```
Since `file_get_contents()` is a blocking function, the two scripts won't try to modify the session file concurrently.
But I'm sure this is not the best way to manipulate sessions over an extended connection.
By the way, it's faster than **cURL** and `fsockopen()`.
Let me know if you find something better.
|
How to send cookies with file_get_contents
|
[
"",
"php",
"session",
"cookies",
"file-get-contents",
""
] |
I am trying to deserialize a stream but I always get this error "End of Stream encountered before parsing was completed"?
Here is the code:
```
//Some code here
BinaryFormatter b = new BinaryFormatter();
return (myObject)b.Deserialize(s); // s is a Stream object that was filled with data a few lines earlier
```
Any one have ideas?
|
Try setting your stream's position back to 0 before deserializing, and cast the result to your object's type rather than using your object:
```
BinaryFormatter b = new BinaryFormatter();
s.Position = 0;
return (YourObjectType)b.Deserialize(s);
```
|
Make sure the serialization completed, and that the serialization type matches the de-serialization type (i.e., make sure you're serializing with a BinaryFormatter if you're de-serializing with one). Also, make sure that the stream you serialized to really finished serializing, with a Stream.Flush() or something to that effect.
|
End of Stream encountered before parsing was completed?
|
[
"",
"c#",
".net",
"serialization",
".net-2.0",
"c#-2.0",
""
] |
I am porting an existing application to C# and want to improve performance wherever possible. Many existing loop counters and array references are defined as System.UInt32, instead of the Int32 I would have used.
Is there any significant performance difference for using UInt32 vs Int32?
|
I don't think there are any performance considerations, other than a possible difference between signed and unsigned arithmetic at the processor level, but at that point I think the differences are moot.
The bigger difference is in the CLS compliance as the unsigned types are not CLS compliant as not all languages support them.
|
The short answer is "No. Any performance impact will be negligible".
The correct answer is "It depends."
A better question is, "Should I use uint when I'm certain I don't need a sign?"
The reason you cannot give a definitive "yes" or "no" with regards to performance is because the target platform will ultimately determine performance. That is, the performance is dictated by whatever processor is going to be executing the code, and the instructions available. Your .NET code compiles down to [Intermediate Language](https://en.wikipedia.org/wiki/Common_Intermediate_Language "Intermediate Language") (IL or Bytecode). These instructions are then compiled to the target platform by the [Just-In-Time](https://msdn.microsoft.com/en-us/library/ht8ecch6(v=vs.90).aspx "Just-In-Time") (JIT) compiler as part of the [Common Language Runtime](https://msdn.microsoft.com/en-us/library/ht8ecch6(v=vs.90).aspx "Common Language Runtime") (CLR). You can't control or predict what code will be generated for every user.
So knowing that the hardware is the final arbiter of performance, the question becomes, "How different is the code .NET generates for a signed versus unsigned integer?" and "Does the difference impact my application and my target platforms?"
The best way to answer these questions is to run a test.
```
class Program
{
static void Main(string[] args)
{
const int iterations = 100;
Console.WriteLine($"Signed: {Iterate(TestSigned, iterations)}");
Console.WriteLine($"Unsigned: {Iterate(TestUnsigned, iterations)}");
Console.Read();
}
private static void TestUnsigned()
{
uint accumulator = 0;
var max = (uint)Int32.MaxValue;
for (uint i = 0; i < max; i++) ++accumulator;
}
static void TestSigned()
{
int accumulator = 0;
var max = Int32.MaxValue;
for (int i = 0; i < max; i++) ++accumulator;
}
static TimeSpan Iterate(Action action, int count)
{
var elapsed = TimeSpan.Zero;
for (int i = 0; i < count; i++)
elapsed += Time(action);
return new TimeSpan(elapsed.Ticks / count);
}
static TimeSpan Time(Action action)
{
var sw = new Stopwatch();
sw.Start();
action();
sw.Stop();
return sw.Elapsed;
}
}
```
The two test methods, **TestSigned** and **TestUnsigned**, each perform ~2 billion iterations (Int32.MaxValue) of a simple increment on a signed and unsigned integer, respectively. The test code runs 100 iterations of each test and averages the results. This should weed out any potential inconsistencies. The results on my i7-5960X compiled for x64 were:
```
Signed: 00:00:00.5066966
Unsigned: 00:00:00.5052279
```
These results are nearly identical, but to get a definitive answer, we really need to look at the bytecode generated for the program. We can use [ILDASM](https://msdn.microsoft.com/en-us/library/f7dy01k1(v=vs.110).aspx "ILDASM") as part of the .NET SDK to inspect the code in the assembly generated by the compiler.
[](https://i.stack.imgur.com/sMA1r.png)
Here, we can see that the C# compiler favors signed integers and actually performs most operations natively as signed integers and only ever treats the value in-memory as unsigned when comparing for the branch (a.k.a jump or if). Despite the fact that we're using an unsigned integer for both the iterator AND the accumulator in **TestUnsigned**, the code is nearly identical to the **TestSigned** method except for a single instruction: **IL\_0016**. A quick glance at the [ECMA spec](https://en.wikipedia.org/wiki/List_of_CIL_instructions "ECMA spec") describes the difference:
> blt.un.s :
> Branch to target if less than (unsigned or unordered), short form.
>
> blt.s :
> Branch to target if less than, short form.
Being such a common instruction, it's safe to assume that most modern high-power processors will have hardware instructions for both operations and they'll very likely execute in the same number of cycles, but **this is not guaranteed**. A low-power processor may have fewer instructions and not have a branch for unsigned int. In this case, the JIT compiler may have to emit multiple hardware instructions (A conversion first, then a branch, for instance) to execute the **blt.un.s** IL instruction. Even if this is the case, these additional instructions would be basic and probably wouldn't impact the performance significantly.
So in terms of performance, the long answer is "It is unlikely that there will be a performance difference at all between using a signed or an unsigned integer. If there is a difference, it is likely to be negligible."
So then if the performance is identical, the next logical question is, "Should I use an unsigned value when I'm certain I don't *need* a sign?"
There are two things to consider here: first, unsigned integers are NOT [CLS-compliant](https://msdn.microsoft.com/library/bhc3fa7f(v=vs.100).aspx "CLS-Compliant"), meaning that you may run into issues if you're exposing an unsigned integer as part of an API that another program will consume (such as if you're distributing a reusable library). Second, most operations in .NET, including the method signatures exposed by the BCL (for the reason above), use a signed integer. So if you plan on actually using your unsigned integer, you'll likely find yourself casting it quite a bit. This is going to have a very small performance hit and will make your code a little messier. In the end, it's probably not worth it.
**TLDR;** back in my C++ days, I'd say "Use whatever is most appropriate and let the compiler sort the rest out." C# is not quite as cut-and-dry, so I would say this for .NET: There's really no performance difference between a signed and unsigned integer on x86/x64, but most operations require a signed integer, so unless you really NEED to restrict the values to positive ONLY or you really NEED the extra range that the sign bit eats, stick with a signed integer. Your code will be cleaner in the end.
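The "extra range that the sign bit eats" is just a reinterpretation of the same 32 bits. A Python illustration using `struct` as a stand-in for the C# types (not from the original answer):

```python
import struct

raw = struct.pack("<i", -1)                 # the 32-bit pattern 0xFFFFFFFF
as_signed = struct.unpack("<i", raw)[0]     # read the same bytes back as Int32
as_unsigned = struct.unpack("<I", raw)[0]   # read the same bytes back as UInt32

print(as_signed, as_unsigned)               # -1 4294967295
```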
|
In C# is there any significant performance difference for using UInt32 vs Int32
|
[
"",
"c#",
"performance",
"int32",
"uint32",
""
] |
In C++ I often do something like this:
```
typedef map<int, vector<int> > MyIndexType;
```
Where I then use it like this:
```
MyIndexType myIndex;
for( ... some loop ...)
{
myIndex[someId].push_back(someVal);
}
```
If there was no entry in the map the code will insert a new empty vector and then append to it.
In Python it would look like this:
```
myIndex = {}
for (someId,someVal) in collection:
try:
myIndex[someId].append(someVal)
except KeyError:
myIndex[someId] = [someVal]
```
The try/except is a bit ugly here. Is there a way, at dictionary declaration time, to tell the dictionary what object type to insert when a KeyError is encountered?
|
You want to use:
```
from collections import defaultdict
myIndex = defaultdict(list)
myIndex[someId].append(someVal)
```
Standard Library [`defaultdict` objects](http://docs.python.org/library/collections.html#id3).
Example usage from the Python documentation:
```
>>> s = [('yellow', 1), ('blue', 2), ('yellow', 3), ('blue', 4), ('red', 1)]
>>> d = defaultdict(list)
>>> for k, v in s:
...     d[k].append(v)
>>> d.items()
[('blue', [2, 4]), ('red', [1]), ('yellow', [1, 3])]
```
|
Something like this perhaps:
```
myIndex = {}
for (someId,someVal) in collection:
myIndex.setdefault(someId, []).append(someVal)
```
|
What is the equivalent of map<int, vector<int> > in Python?
|
[
"",
"python",
"dictionary",
""
] |
In Java, there are three levels of access:
* Public - Open to the world
* Private - Open only to the class
* Protected - Open only to the class and its subclasses (inheritance).
So why does the Java compiler allow this to happen?
TestBlah.java:
```
public class TestBlah {
public static void main(String[] args) {
Blah a = new Blah("Blah");
Bloo b = new Bloo("Bloo");
System.out.println(a.getMessage());
System.out.println(b.getMessage()); //Works
System.out.println(a.testing);
System.out.println(b.testing); //Works
}
}
```
Blah.java:
```
public class Blah {
protected String message;
public Blah(String msg) {
this.message = msg;
}
protected String getMessage(){
return(this.message);
}
}
```
Bloo.java:
```
public class Bloo extends Blah {
public Bloo(String testing) {
super(testing);
}
}
```
|
Actually it should be:
> Open only to [**classes in the same package**](http://java.sun.com/docs/books/tutorial/java/javaOO/accesscontrol.html) and to subclasses (inheritance)

That's why this compiles: `TestBlah`, `Blah`, and `Bloo` all live in the same (default) package, so the `protected` members are accessible.
|
Because protected means subclass *or* other classes in the same package.
And there's actually a fourth "default" level of access, when the modifier is omitted, which provides access to other classes in the same package.
So `protected` is between default and `public` access.
|
Java Protected Access Not Working
|
[
"",
"java",
"inheritance",
"visibility",
"protected",
"access-levels",
""
] |
Is it faster to do the following:
```
if ($var != 'test1' && $var != 'test2' && $var != 'test3' && $var != 'test4') { ... }
```
Or:
```
if (!in_array($var, array('test1', 'test2', 'test3', 'test4'))) { ... }
```
Is there a number of values at which point it's faster to do one or the other?
(In this case, the array used in the second option doesn't already exist.)
|
I'd strongly suggest just using `in_array()`; any speed difference would be negligible, but the readability of testing each variable separately is horrible.
Just for fun, here's a test I ran:
```
$array = array('test1', 'test2', 'test3', 'test4');
$var = 'test';
$iterations = 1000000;
$start = microtime(true);
for($i = 0; $i < $iterations; ++$i) {
if ($var != 'test1' && $var != 'test2' && $var != 'test3' && $var != 'test4') {}
}
$end = microtime(true);
print "Time1: ". ($end - $start)."<br />";
$start2 = microtime(true);
for($i = 0; $i < $iterations; ++$i) {
if (!in_array($var, $array) ) {}
}
$end2 = microtime(true);
print "Time2: ".($end2 - $start2)."<br />";
// Time1: 1.12536692619
// Time2: 1.57462596893
```
A minor note to watch for: if `$var` is not set, method 1 takes much longer (depending on how many conditions you test).
## Update for newer PHP versions:
Martijn: I've extended the array to **five** elements and look for `test3`, as sort of an average case.
**PHP5.6**
```
Time1: 0.20484399795532
Time2: 0.29854393005371
```
**PHP7.1**
```
Time1: 0.064045906066895
Time2: 0.056781053543091
```
**PHP7.4**
```
Time1: 0.048759937286377
Time2: 0.049691915512085
```
**PHP8.0**
```
Time1: 0.045055150985718
Time2: 0.049431085586548
```
Conclusion: the original test wasn't the best, and in PHP 7+ it has become a matter of preference.
|
Note that if you're looking to replace a bunch of `!==` statements, you should pass the third parameter to [`in_array`](http://uk.php.net/manual/en/function.in-array.php) as `true`, which enforces type checking on the items in the array.
Ordinary `!=` doesn't require this, obviously.
|
Which is faster: in_array() or a bunch of expressions in PHP?
|
[
"",
"php",
"arrays",
"if-statement",
""
] |
How can I retrieve raw time-series data from a Proficy Historian/iHistorian?
Ideally, I would ask for data for a particular tag between two dates.
|
There are several different sampling modes you can experiment with.
* Raw
* Interpolated
* Lab
* Trend
* Calculated
These modes are available using all of the following APIs.
* User API (ihuapi.dll)
* SDK (ihsdk.dll)
* OLEDB (iholedb.dll)
* Client Acess API (Proficy.Historian.ClientAccess.API)
Of these the trend sampling mode is probably what you want since it is specifically designed for charting/trending. Though, lab and interpolated may be useful as well.
Read the electronic book for more information on each sampling mode. On my machine it is stored as `C:\Program Files\GE Fanuc\Proficy Historian\Docs\iHistorian.chm` and I have version 3.5 installed. Pay particular attention to the following sections.
* Using the Historian OLE DB Provider
* Advanced Topics | Retrieval
Here is how you can construct an OLEDB to do trend sampling.
```
set
SamplingMode = 'Trend',
StartTime = '2010-07-01 00:00:00',
EndTime = '2010-07-02 00:00:00',
IntervalMilliseconds = 1h
select
timestamp,
value,
quality
from
ihRawData
where
tagname = 'YOUR_TAG'
```
Showing the equivalent methods using the User API and the SDK are complex (more so with the User API) since they require a lot of plumbing in the code to get setup. The Client Access API is newer and uses WCF behind the scenes.
By the way, there are a few limitations with the OLEDB method though.
* Despite what the documentation says I have *never* been able to get native query parameters to work. That is a showstopper if you want to use it with SQL Server Reporting Services for example.
* You cannot write samples into the archive or in any way make changes to the Historian configuration including adding/changing tags, writing messages, etc.
* It can be a little slow in some cases.
* It has no provision for crosstabbing multiple tagnames into the columns and then carrying forward samples so that a value exists for each timestamp and tag combination. The trend sampling mode gets you halfway there, but still does not crosstab and does not actually load raw samples. Then again the User API and SDK cannot do this either.
|
A coworker of mine put this together:
In web.config:
```
<add name="HistorianConnectionString"
providerName="ihOLEDB.iHistorian.1"
connectionString="
Provider=ihOLEDB.iHistorian;
User Id=;
Password=;
Data Source=localhost;"
/>
```
In the data layer:
```
public DataTable GetProficyData(string tagName, DateTime startDate, DateTime endDate)
{
using (System.Data.OleDb.OleDbConnection cn = new System.Data.OleDb.OleDbConnection())
{
cn.ConnectionString = webConfig.ConnectionStrings.ConnectionStrings["HistorianConnectionString"];
cn.Open();
string queryString = string.Format(
"set samplingmode = rawbytime\n select value as theValue,Timestamp from ihrawdata where tagname = '{0}' AND timestamp between '{1}' and '{2}' and value > 0 order by timestamp",
tagName.Replace("'", "\""), startDate, endDate);
System.Data.OleDb.OleDbDataAdapter adp = new System.Data.OleDb.OleDbDataAdapter(queryString, cn);
DataSet ds = new DataSet();
adp.Fill(ds);
return ds.Tables[0];
}
}
```
---
**Update:**
This worked well but we ran into an issue with tags that don't update very often. If the tag didn't update near the start or end of the requested startDate and endDate, the trends would look bad. Worse, still were cases where there were no explicit points during the window requested--we'd get no data back.
I resolved this by making three queries:
1. The previous value *before* the start-date
2. The points between startDate and endDate
3. The next value *after* the endDate
This is a potentially inefficient way to do it but It Works:
```
public DataTable GetProficyData(string tagName, DateTime startDate, DateTime endDate)
{
DataSet ds = new DataSet();
string queryString;
System.Data.OleDb.OleDbDataAdapter adp;
using (System.Data.OleDb.OleDbConnection cn = new System.Data.OleDb.OleDbConnection())
{
cn.ConnectionString = proficyConn.ConnectionString;
cn.Open();
// always get a start value
queryString = string.Format(
"set samplingmode = lab\nselect value as theValue,Timestamp from ihrawdata where tagname = '{0}' AND timestamp between '{1}' and '{2}' order by timestamp",
tagName.Replace("'", "\""), startDate.AddMinutes(-1), startDate);
adp = new System.Data.OleDb.OleDbDataAdapter(queryString, cn);
adp.Fill(ds);
// get the range
queryString = string.Format(
"set samplingmode = rawbytime\nselect value as theValue,Timestamp from ihrawdata where tagname = '{0}' AND timestamp between '{1}' and '{2}' order by timestamp",
tagName.Replace("'", "\""), startDate, endDate);
adp = new System.Data.OleDb.OleDbDataAdapter(queryString, cn);
adp.Fill(ds);
// always get an end value
queryString = string.Format(
"set samplingmode = lab\nselect value as theValue,Timestamp from ihrawdata where tagname = '{0}' AND timestamp between '{1}' and '{2}' order by timestamp",
tagName.Replace("'", "\""), endDate.AddMinutes(-1), endDate);
adp = new System.Data.OleDb.OleDbDataAdapter(queryString, cn);
adp.Fill(ds);
return ds.Tables[0];
}
}
```
And yes, I know, those queries should be parameterized.
|
How do I query raw data from a Proficy Historian?
|
[
"",
"c#",
"oledb",
"proficy",
"historian",
""
] |
I always have this notion that writing SQL queries in the code behind is not good compared to writing it using a SqlDataSource
```
SqlDataAdapter ad = new SqlDataAdapter("SELECT * FROM Categories", myConnection);
DataSet ds = new DataSet();
ad.Fill(ds, "Categories");
myGridView.DataSource = ds;
myGridView.DataBind();
```
vs.
```
<asp:SqlDataSource ID="SqlDataSource1" runat="server"
ConnectionString="<%$ ConnectionStrings:myConnection %>"
SelectCommand="SELECT * FROM Categories" />
```
I feel using SqlDataSource is secure, easy to maintain.
Is my concern true? Please justify.
|
SQL queries in the code-behind and SQL queries in a SqlDataSource are pretty much equivalent.
they're both about the same security-wise; as for easier to maintain, SqlDataSource may be a bit easier in most cases.
A data-access layer is preferred, but SqlDataSource is sometimes a good-enough expediency. I wouldn't hit you with a rolled-up newspaper for using one if you didn't already have a data-access layer and it was for something simple/a one-off.
|
I wouldn't write SQL queries in code behind full stop. How about a data access layer?
What happens if you want to change your backing store? You're going to have to re-write all your code-behind.
What happens where you need to use the data in more than one place? You duplicate code.
You need think seriously about how you're architecting your solution before writing SQL queries in your code behind. Think about separation and maintainability long before you question the 'security' of SqlDataSource objects. Seriously.
|
Writing queries in code behind vs. SqlDataSource
|
[
"",
"asp.net",
"sql",
""
] |
I'm having an issue dragging a file from Windows Explorer on to a Windows Forms application.
It works fine when I drag text, but for some reason it is not recognizing the file. Here is my test code:
```
namespace WindowsFormsApplication1
{
public partial class Form1 : Form
{
public Form1()
{
InitializeComponent();
}
private void Form1_DragDrop(object sender, DragEventArgs e)
{
}
private void Form1_DragEnter(object sender, DragEventArgs e)
{
if (e.Data.GetDataPresent(DataFormats.Text))
{
e.Effect = DragDropEffects.Copy;
}
else if (e.Data.GetDataPresent(DataFormats.FileDrop))
{
e.Effect = DragDropEffects.Copy;
}
else
{
e.Effect = DragDropEffects.None;
}
}
}
}
```
AllowDrop is set to true on Form1, and as I mentioned, it works if I drag text on to the form, just not an actual file.
I'm using Vista 64-bit ... not sure if that is part of the problem.
|
The problem comes from Vista's [UAC](http://en.wikipedia.org/wiki/User_Account_Control). DevStudio is running as administrator, but explorer is running as a regular user. When you drag a file from explorer and drop it on your DevStudio hosted application, that is the same as a non-privileged user trying to communicate with a privileged user. It's not allowed.
This will probably not show up when you run the app outside of the debugger. Unless you run it as an administrator there (or if Vista auto-detects that it's an installer/setup app).
You could also [run explorer as an admin](http://www.neowin.net/forum/lofiversion/index.php/t575104.html), at least for testing. Or disable UAC (which I would not recommend, since you really want to catch these issues during development, not during deployment!)
|
I added the code that [arul](https://stackoverflow.com/questions/281706/drag-and-drop-from-windows-file-explorer-onto-a-windows-form-is-not-working#281770) mentioned and things still didn't work, but it got me thinking.
I started thinking it might be a Vista issue so I sent it to a friend that had Windows XP and it worked great! I then tried running it outside of the Release folder in the bin directory and what do you know it worked!
The only time it does not work is when I am running it inside the Visual Studio 2008 IDE ... that's just weird.
|
Drag and drop from Windows File Explorer onto a Windows Form is not working
|
[
"",
"c#",
"winforms",
"drag-and-drop",
"vista64",
""
] |
I am in the process of writing an application that will need multiple forms of authentication.
The application will need to support authentication to Active Directory, but be able to fail back to a SQL Membership Provider if the user is not in Active Directory. We can handle the failing to the SQL Provider in code based on the username provided because the username will be a different format than the Active Directory username.
Is this even possible? What I mean is, can I use membership and use both ActiveDirectoryMembershipProvider and SqlMembershipProvider together or will I have to roll my own?
Another additional added complexity is that I would like to automatically authenticate my internal users based of Windows Authentication back to AD, but use Forms Authentication for users not on our internal network, or users that are using the SQL Provider.
These will most likely be separate servers, one internal, and the other external so I have a lot of planning to do to figure out the data replication, and how I will authenticate the AD users if they hit the outside server etc.
I am wondering what thoughts are out there as I start down this road. Is what I am wanting to do even possible without me rolling my own, or is there a way to mesh these together?
---
Thanks for the reply.
The reason I asked originally was because I was able to get this specific senerio working about 7 years ago using IIS to authenticate and then passing back the credentials to a Lotus Domino Server Web App. If the user was not authenticated via the Windows Authentication/ISS then Domino would handle the authentication. This was what I was looking to do here, but really couldn't think of a way to make it work in IIS.
As for the rest of your reply, I think you are on to the way that I will need to take. I have thought this through and tossed it around in my head a lot. The application will be somewhat different on the two servers anyway since there is going to be limited access to the data on the external server anyway. The fact that so much is going to be different already I may just treat these as two applications, thus negating the need to use two types of authentication in the same application anyway.
I am playing around with the idea already of writing my own authentication/login window for the external server, and if the user trys to log in with their AD credentials on the external server I will be able to detect that and redirect them to the internal server. If they are not on the local network or VPN'd in they will simply not get access. This part still has some thought process to go though so I am not sure.
As an additional thought - is there a way to pull just enough of AD into a SQL database to allow me to authenticate users to the SQL database from the external server using their AD credentials, without creating any security issues? I hope I am clearly typing what I am thinking....
Thanks again!
Tim
|
This is the way I've handled a similar situation based on [this info](http://beensoft.blogspot.com/2008/06/mixing-forms-and-windows-authentication.html):
1. Configured the application to use Forms authentication.
2. Set the LoginUrl to a page called WinLogin.aspx.
3. In WinLogin.aspx, use Request.ServerVariables["LOGON\_USER"] to get the username then call FormsAuthentication.RedirectFromLoginPage( authorizedUserName, false ) to log them in. I guess you can manually check Active Directory as this point as well.
4. Create an html page that redirects to a page called Login.aspx
5. Login.aspx is your standard username/password login.
6. In IIS, Enable Integrated Authentication and Anonymous on the entire site, but deny anonymous access to WinLogin.aspx.
7. In IIS, set your 401 errors to the page created in step 3.
What basically happens is that when an unauthenicated user hits the site, they're redirected to WinLogin.aspx. Since anonymous is turned off, integrated security makes a check. If that passes, your custom code in WinLogin can run. If the integrated security check fails, a 401 error occurs. Your custom 401 page redirects to Login.aspx where the user can log in using their username and password with the SQL provider.
|
As far as I know, Web Applications are configured to use either Windows Authentication or Forms Authentication, but not both. Therefore, I do not believe it is possible to automatically authenticate internal users while requiring others to enter a username / password.
You could authenticate to Active Directory or a SQL user store via Forms authentication by using a custom provider. However, the AD users would still need to enter their username and password. Although I've never combined these two methods, I have used Forms authentication to authenticate against both sources at one time or another.
With that said, I think you may want to consider reducing the "flexibility" of your system. If you have an external facing server and an internal facing server, you could simply change the provider configuration on each copy of the application to go against a different source. Then, you could configure the internal one to use Windows (automatic) authentication and the external one to use Forms authentication.
IMHO, I believe that internal users should not be using the external server to access the application. If they are, they should have a user account stored in SQL, completely separated from their AD account. Basically, when someone accesses the application externally, they are acting as an external user, irregardless of their physical location.
|
ASP.NET Application to authenticate to Active Directory or SQL via Windows Authentication or Forms Authentication
|
[
"",
"asp.net",
"sql",
"active-directory",
"asp.net-membership",
"membership",
""
] |
I've been preaching both to my colleagues and here on SO about the goodness of using parameters in SQL queries, especially in .NET applications. I've even gone so far as to promise them as giving immunity against SQL injection attacks.
But I'm starting to wonder if this really is true. Are there any known SQL injection attacks that will be successfull against a parameterized query? Can you for example send a string that causes a buffer overflow on the server?
There are of course other considerations to make to ensure that a web application is safe (like sanitizing user input and all that stuff) but now I am thinking of SQL injections. I'm especially interested in attacks against MsSQL 2005 and 2008 since they are my primary databases, but all databases are interesting.
Edit: To clarify what I mean by parameters and parameterized queries. By using parameters I mean using "variables" instead of building the sql query in a string.
So instead of doing this:
```
SELECT * FROM Table WHERE Name = 'a name'
```
We do this:
```
SELECT * FROM Table WHERE Name = @Name
```
and then set the value of the @Name parameter on the query / command object.
|
**Placeholders** are enough to prevent injections. You might still be open to buffer overflows, but that is a completely different flavor of attack from an SQL injection (the attack vector would not be SQL syntax but binary). Since the parameters passed will all be escaped properly, there isn't any way for an attacker to pass data that will be treated like "live" SQL.
You can't use functions inside placeholders, and you can't use placeholders as column or table names, because they are escaped and quoted as string literals.
However, if you use **parameters** as part of a **string concatenation** inside your dynamic query, you are still vulnerable to injection, because your strings will not be escaped but will be literal. Using other types for parameters (such as integer) is safe.
That said, if you're using use input to set the value of something like `security_level`, then someone could just make themselves administrators in your system and have a free-for-all. But that's just basic input validation, and has nothing to do with SQL injection.
|
No, there is still risk of SQL injection any time you interpolate unvalidated data into an SQL query.
Query parameters help to avoid this risk by separating literal values from the SQL syntax.
```
'SELECT * FROM mytable WHERE colname = ?'
```
That's fine, but there are other purposes of interpolating data into a dynamic SQL query that cannot use query parameters, because it's not an SQL value but instead a table name, column name, expression, or some other syntax.
```
'SELECT * FROM ' + @tablename + ' WHERE colname IN (' + @comma_list + ')'
' ORDER BY ' + @colname'
```
It doesn't matter whether you're using stored procedures or executing dynamic SQL queries directly from application code. The risk is still there.
The remedy in these cases is to employ **FIEO** as needed:
* **Filter Input:** validate that the data look like legitimate integers, table names, column names, etc. before you interpolate them.
* **Escape Output:** in this case "output" means putting data into a SQL query. We use functions to transform variables used as string literals in an SQL expression, so that quote marks and other special characters inside the string are escaped. We should also use functions to transform variables that would be used as table names, column names, etc. As for other syntax, like writing whole SQL expressions dynamically, that's a more complex problem.
|
Are Parameters really enough to prevent Sql injections?
|
[
"",
"asp.net",
"sql",
"database",
"sql-injection",
""
] |
I'm searching for an easy to handle python native module to create python object representation from xml.
I found several modules via google (one of them is [XMLObject](http://freenet.mcnabhosting.com/python/xmlobject/)) but didn't want to try out all of them.
What do you think is the best way to do such things?
**EDIT:** I missed to mention that the XML I'd like to read is not generated by me. It's an existing XML file in a structure of which I have no control over.
|
You say you want an *object* representation, which I would interpret to mean that nodes become objects, and the attributes and children of the node are represented as attributes of the object (possibly according to some Schema). This is what XMLObject does, I believe.
There are some packages that I know of. [4Suite](https://pypi.python.org/pypi/4Suite-XML/1.0.2) includes some tools to do this, and I believe [Amara](http://xml3k.org/Amara) specifically implements this (built on top of 4Suite). You can also use [lxml.objectify](http://lxml.de/objectify.html), which was inspired by Amara and [gnosis.xml.objectify](http://gnosis.cx/download/).
Of course a third option is, given a concrete representation of the XML (using ElementTree or lxml) you can build your own custom model around that. [lxml.html](http://lxml.de/lxmlhtml.html) is an example of that, extending the base interface of lxml with some HTML-specific functionality.
|
I second the suggestion of xml.etree.ElementTree, mostly because it's now in the stdlib.
There is also a faster implementation, xml.etree.cElementTree available too.
If you really need performance, I would suggest lxml
<http://www.ibm.com/developerworks//xml/library/x-hiperfparse/>
|
module to create python object representation from xml
|
[
"",
"python",
"xml",
"pickle",
""
] |
I have a 'reference' SQL Server 2005 database that is used as our global standard. We're all set up for keeping general table schema and data properly synchronized, but don't yet have a good solution for other objects like views, stored procedures, and user-defined functions.
I'm aware of products like [Redgate's SQL Compare](http://www.red-gate.com/products/SQL_Compare/index.htm), but we don't really want to rely on (any further) 3rd-party tools right now.
Is there a way to ensure that a given stored procedure or view on the reference database, for example, is up to date on the target databases? Can this be scripted?
Edit for clarification: when I say 'scripted', I mean running a script that pushes out any changes to the target servers. Not running the same CREATE/ALTER script multiple times on multiple servers.
Any advice/experience on how to approach this would be much appreciated.
|
You can use the system tables to do this.
For example,
```
select * from sys.syscomments
```
The "text" column will give you all of the code for the store procedures (plus other data).
It is well worth looking at all the system tables and procedures. In fact, I suspect this is what RedGate's software and other tools do under the hood.
I have just begun experimenting with this myself, so I can't really be specific about all the gotchas and what other system tables you need to query, but this should get you started.
Also see:
[Query to list SQL Server stored procedures along with lines of code for each procedure](https://stackoverflow.com/questions/291574/query-to-list-sql-server-stored-procedures-along-with-lines-of-code-for-each-proc)
which is slightly different question than yours, but related.
|
1) Keep all your views, triggers, functions, stored procedures, table schemas etc in Source Control and use that as the master.
2) Failing that, use your reference DB as the master and script out views and stored procedures etc: Right click DB, Tasks->Generate Scripts and choose your objects.
3) You could even use transactional replication between Reference and Target DBs.
I strongly believe the best way is to have **everything** scripted and placed in Source Control.
|
How can I synchronize views and stored procedures between SQL Server databases?
|
[
"",
"sql",
"sql-server",
""
] |
I was working on some code recently and came across a method that had 3 for-loops that worked on 2 different arrays.
Basically, what was happening was a foreach loop would walk through a vector and convert a DateTime from an object, and then another foreach loop would convert a long value from an object. Each of these loops would store the converted value into lists.
The final loop would go through these two lists and store those values into yet another list because one final conversion needed to be done for the date.
Then after all that is said and done, The final two lists are converted to an array using ToArray().
Ok, bear with me, I'm finally getting to my question.
So, I decided to make a single for loop to replace the first two foreach loops and convert the values in one fell swoop (the third loop is quasi-necessary, although, I'm sure with some working I could also put it into the single loop).
But then I read the article "What your computer does while you wait" by Gustav Duarte and started thinking about memory management and what the data was doing while it's being accessed in the for-loop where two lists are being accessed simultaneously.
So my question is, what is the best approach for something like this? Try to condense the for-loops so it happens in as few loops as possible, causing multiple data accesses for the different lists? Or allow the multiple loops and let the system bring in data it's anticipating? These lists and arrays can be potentially large, and looping through 3 lists, perhaps 4 depending on how ToArray() is implemented, can get very costly (O(n^3)??). But from what I understood in said article and from my CS classes, having to fetch data can be expensive too.
Would anyone like to provide any insight? Or have I completely gone off my rocker and need to relearn what I have unlearned?
Thank you
|
Well, you've got complications if the two vectors are of different sizes. As has already been pointed out, this doesn't increase the overall complexity of the issue, so I'd stick with the simplest code - which is probably 2 loops, rather than 1 loop with complicated test conditions re the two different lengths.
Actually, these length tests could easily make the two loops *quicker* than a single loop. You might also get better memory fetch performance with 2 loops - i.e. you are looking at contiguous memory - i.e. A[0],A[1],A[2]... B[0],B[1],B[2]..., rather than A[0],B[0],A[1],B[1],A[2],B[2]...
So in every way, I'd go with 2 separate loops ;-p
|
The best approach? Write the most readable code, work out its complexity, and work out if that's actually a problem.
If each of your loops is O(n), then you've still only got an O(n) operation.
Having said that, it does sound like a LINQ approach would be more readable... and quite possibly more efficient as well. Admittedly we haven't seen the code, but I suspect it's the kind of thing which is *ideal* for LINQ.
|
Back to basics; for-loops, arrays/vectors/lists, and optimization
|
[
"",
"c#",
"arrays",
"optimization",
"list",
"performance",
""
] |
I'm having some trouble navigating Java's rule for inferring generic type parameters. Consider the following class, which has an optional list parameter:
```
import java.util.Collections;
import java.util.List;
public class Person {
private String name;
private List<String> nicknames;
public Person(String name) {
this(name, Collections.emptyList());
}
public Person(String name, List<String> nicknames) {
this.name = name;
this.nicknames = nicknames;
}
}
```
My Java compiler gives the following error:
```
Person.java:9: The constructor Person(String, List<Object>) is undefined
```
But `Collections.emptyList()` returns type `<T> List<T>`, not `List<Object>`. Adding a cast doesn't help
```
public Person(String name) {
this(name,(List<String>)Collections.emptyList());
}
```
yields
```
Person.java:9: inconvertible types
```
Using `EMPTY_LIST` instead of `emptyList()`
```
public Person(String name) {
this(name, Collections.EMPTY_LIST);
}
```
yields
```
Person.java:9: warning: [unchecked] unchecked conversion
```
Whereas the following change makes the error go away:
```
public Person(String name) {
this.name = name;
this.nicknames = Collections.emptyList();
}
```
Can anyone explain what type-checking rule I'm running up against here, and the best way to work around it? In this example, the final code example is satisfactory, but with larger classes, I'd like to be able to write methods following this "optional parameter" pattern without duplicating code.
For extra credit: when is it appropriate to use `EMPTY_LIST` as opposed to `emptyList()`?
|
The issue you're encountering is that even though the method `emptyList()` returns `List<T>`, you haven't provided it with the type, so it defaults to returning `List<Object>`. You can supply the type parameter, and have your code behave as expected, like this:
```
public Person(String name) {
this(name,Collections.<String>emptyList());
}
```
Now when you're doing straight assignment, the compiler can figure out the generic type parameters for you. It's called type inference. For example, if you did this:
```
public Person(String name) {
    // In an assignment context the compiler infers List<String> from the
    // target type. (Note: a this(...) call must be the first statement in
    // a constructor, so the fields are assigned directly here.)
    List<String> emptyList = Collections.emptyList();
    this.name = name;
    this.nicknames = emptyList;
}
```
then the `emptyList()` call would correctly return a `List<String>`.
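For completeness, a runnable sketch contrasting the two call forms; the class and the `describe` helper are illustrative, not part of the original code:

```java
import java.util.Collections;
import java.util.List;

public class EmptyListDemo {
    // Illustrative helper: the call site must supply a List<String>.
    static String describe(List<String> names) {
        return "got " + names.size() + " nicknames";
    }

    public static void main(String[] args) {
        // Explicit type witness: T is fixed to String at the call site.
        System.out.println(describe(Collections.<String>emptyList()));

        // Assignment context: T is inferred from the declared target type.
        List<String> empty = Collections.emptyList();
        System.out.println(describe(empty));
    }
}
```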
|
You want to use:
```
Collections.<String>emptyList();
```
If you look at the source for `emptyList`, you will see that it actually just does a
```
return (List<T>)EMPTY_LIST;
```
|
Collections.emptyList() returns a List<Object>?
|
[
"",
"java",
"generics",
"type-inference",
""
] |
I have a script which logs on to a remote server and tries to rename files, using PHP.
The code currently looks something like this example from the php.net website:
```
if (ftp_rename($conn_id, $old_file, $new_file)) {
echo "successfully renamed $old_file to $new_file\n";
} else {
echo "There was a problem while renaming $old_file to $new_file\n";
}
```
but ... what was the error? Permissions, no such directory, disk full?
How can I get PHP to return the FTP error? Something like this:
```
echo "There was a problem while renaming $old_file to $new_file:
the server says $error_message\n";
```
|
Looking at the FTP API here:
<http://us.php.net/manual/en/function.ftp-rename.php>
There doesn't seem to be any way to get anything but true or false.
However, you could use ftp\_raw to send the raw rename commands (RNFR and RNTO), and then parse the returned messages.
|
You could use error\_get\_last() if the return value is false.
|
How to get the FTP error when using PHP
|
[
"",
"php",
"error-handling",
"ftp",
""
] |
```
$sourcePath = 'images/'; // Path of original image
$sourceUrl = '';
$sourceName = 'photo1.jpg'; // Name of original image
$thumbPath = 'thumbs/'; // Writeable thumb path
$thumbUrl = 'thumbs/';
$thumbName = "test_thumb.jpg"; // Tip: Name dynamically
$thumbWidth = 100; // Intended dimension of thumb
// Beyond this point is simply code.
$sourceImage = imagecreatefromjpeg("$sourcePath/$sourceName");
$sourceWidth = imagesx($sourceImage);
$sourceHeight = imagesy($sourceImage);
$targetImage = imagecreate($thumbWidth,$thumbWidth);
imagecopyresized($targetImage,$sourceImage,0,0,0,0,$thumbWidth,$thumbWidth,imagesx($sourceImage),imagesy($sourceImage));
imagejpeg($targetImage, "$thumbPath/$thumbName");
// By now, the thumbnail is copied into the $thumbpath
// as the file name specified in $thumbName, so display
echo "<img src='$thumbUrl$thumbName' alt=''>";
```
The above code gives me a thumbnail and that's great, but the image quality is awful. It looks like the image has had the colours inverted, and it looks like it has been squashed. I've had headaches all day doing this. Anyone have any ideas?
|
Use [imagecreatetruecolor](http://pl2.php.net/manual/en/function.imagecreatetruecolor.php) instead of imagecreate and [imagecopyresampled](http://pl2.php.net/manual/en/function.imagecopyresampled.php) instead of imagecopyresized.
|
The third parameter is worth including as Dominic points out. It specifies the jpeg quality.
On the issue of "and it looks like it has been squashed", remember, you're making a square thumbnail from a source image which itself may or may not be square.
One way to get around this is to work with the source dimensions to work out a full width or full height (depending on whether the image is portrait or landscape) square to copy from the source. This means replacing the "0,0,0,0" in your arguments to imagecopyresized() with something dynamically calculated.
(EDIT: example)
```
function makeSquareThumb($srcImage, $destSize, $destImage = null) {
//I'm sure there's a better way than this, but it works...
//I don't like my folder and file checking in the middle, but need to illustrate the need for this.
$srcFolder = dirname($srcImage); //source folder
$srcName = basename($srcImage); //original image filename
//the IF ELSEIF ELSE below is NOT comprehensive - eg: what if the dest folder is the same as the source?
//writeable nature of the destination is not checked!
if(!$destImage) {
$destFolder = $srcFolder.'/thumbs/';
if(!is_dir($destFolder)) {
//make the thumbs folder if there isn't one!
mkdir($destFolder);
}
$destImage = $destFolder.$srcName;
} elseif (is_dir($destImage)) {
$destFolder = $destImage;
$destImage = $destFolder.'/'.$srcName;
} else {
$destFolder = dirname($destImage);
}
//Now make it!
$srcCanvas = imagecreatefromjpeg($srcImage);
$srcWidth = imagesx($srcCanvas);
$srcHeight = imagesy($srcCanvas);
//this let's us easily sample a square from the middle, regardless of apsect ratio.
$shortSide = array($srcWidth,$srcHeight);
sort($shortSide);
$src_x = $srcWidth/2 - $shortSide[0]/2;
$src_y = $srcHeight/2 - $shortSide[0]/2;
//do it!
$destCanvas = imagecreatetruecolor($destSize, $destSize);
imagecopyresampled($destCanvas,$srcCanvas,0,0,$src_x,$src_y,$destSize,$destSize,$shortSide[0],$shortSide[0]);
imagejpeg($destCanvas, $destImage);
}
```
|
Generating thumbnails using PHP causes bad image quality
|
[
"",
"php",
"gdlib",
""
] |
I have a Freemarker template which contains a bunch of placeholders for which values are supplied when the template is processed. I want to conditionally include part of the template if the userName variable is supplied, something like:
```
[#if_exists userName]
Hi ${userName}, How are you?
[/#if_exists]
```
However, the FreeMarker manual seems to indicate that if\_exists is deprecated, but I can't find another way to achieve this. Of course, I could simply provide an additional boolean variable isUserName and use that like this:
```
[#if isUserName]
Hi ${userName}, How are you?
[/#if]
```
But if there's a way of checking whether userName exists then I can avoid adding this extra variable.
|
To check if the value exists:
```
[#if userName??]
Hi ${userName}, How are you?
[/#if]
```
Or with the standard freemarker syntax:
```
<#if userName??>
Hi ${userName}, How are you?
</#if>
```
To check if the value exists and is not empty:
```
<#if userName?has_content>
Hi ${userName}, How are you?
</#if>
```
|
This one seems to be a better fit:
```
<#if userName?has_content>
... do something
</#if>
```
<http://freemarker.sourceforge.net/docs/ref_builtins_expert.html>
|
How to check if a variable exists in a FreeMarker template?
|
[
"",
"java",
"templates",
"freemarker",
""
] |
I have an abstract base class and I want to declare a field or a property that will have a different value in each class that inherits from this parent class.
I want to define it in the baseclass so I can reference it in a base class method - for example overriding ToString to say "This object is of type *property/field*".
I have got three ways that I can see of doing this, but I was wondering - what is the best or accepted way of doing this? Newbie question, sorry.
**Option 1:**
Use an abstract property and override it in the inherited classes. This benefits from being enforced (you have to override it) and it is clean. But it feels slightly wrong to return a hard-coded value rather than encapsulate a field, and it is a few lines of code instead of just one. I also have to declare a body for "set", but that is less important (and there is probably a way to avoid that which I am not aware of).
```
abstract class Father
{
abstract public int MyInt { get; set;}
}
class Son : Father
{
public override int MyInt
{
get { return 1; }
set { }
}
}
```
**Option 2**
I can declare a public field (or a protected field) and explicitly override it in the inherited class. The example below will give me a warning to use "new" and I can probably do that, but it feels wrong and it breaks the polymorphism, which was the whole point. Doesn't seem like a good idea...
```
abstract class Mother
{
public int MyInt = 0;
}
class Daughter : Mother
{
public int MyInt = 1;
}
```
**Option 3**
I can use a protected field and set the value in the constructor. This seems pretty tidy but relies on me ensuring the constructor always sets this and with multiple overloaded constructors there is always a chance some code path won't set the value.
```
abstract class Aunt
{
protected int MyInt;
}
class Niece : Aunt
{
public Niece()
{
MyInt = 1;
}
}
```
It's a bit of a theoretical question and I guess the answer has to be option 1 as it is the only *safe* option but I am just getting to grips with C# and wanted to ask this of people with more experience.
|
Of the three solutions only **Option 1** is **polymorphic**.
Fields by themselves cannot be overridden. Which is exactly why **Option 2** returns the ***new*** keyword warning.
The solution to the warning is not to append the “new” keyword, but to implement Option 1.
**If you need your field to be polymorphic you need to wrap it in a Property.**
**Option 3** is OK if you don't need polymorphic behavior. You should remember, though, that when the property MyInt is accessed at runtime, the derived class has no control over the value returned. The base class by itself is capable of returning this value.
This is how a truly polymorphic implementation of your property might look, allowing the derived classes to be in *control*.
```
abstract class Parent
{
abstract public int MyInt { get; }
}
class Father : Parent
{
public override int MyInt
{
get { /* Apply formula "X" and return a value */ }
}
}
class Mother : Parent
{
public override int MyInt
{
get { /* Apply formula "Y" and return a value */ }
}
}
```
|
Option 2 is a non-starter - you can't *override* fields, you can only *hide* them.
Personally, I'd go for option 1 every time. I try to keep fields private at all times. That's if you really need to be able to override the property at all, of course. Another option is to have a read-only property in the base class which is set from a constructor parameter:
```
abstract class Mother
{
private readonly int myInt;
public int MyInt { get { return myInt; } }
protected Mother(int myInt)
{
this.myInt = myInt;
}
}
class Daughter : Mother
{
public Daughter() : base(1)
{
}
}
```
That's probably the most appropriate approach if the value doesn't change over the lifetime of the instance.
|
Overriding fields or properties in subclasses
|
[
"",
"c#",
"properties",
"field",
""
] |
Due to a weird request, I can't put `null` in a database if there is no value. I'm wondering what I can put in the stored procedure for nothing instead of `null`.
For example:
```
insert into blah (blah1) values (null)
```
Is there something like nothing or empty for "blah1" instead of using `null`?
|
I would push back on this bizarre request. That's exactly what `NULL` is for in SQL, to denote a missing or inapplicable value in a column.
Is the requester experiencing grief over SQL logic with `NULL`?
**edit:** Okay, I've read your reply with the extra detail about this job assignment (btw, generally you should edit your original question instead of posting more information in an answer).
You'll have to declare all columns as `NOT NULL` and designate a special value in the domain of that column's data type to signify "no value." The appropriate value to choose might be different on a case by case basis, i.e. zero may signify nothing in a `person_age` column, but it might have significance in an `items_in_stock` column.
You should document the no-value value for each column. But I suppose they don't believe in documentation either. :-(
|
Depends on the data type of the column. For numbers (integers, etc) it could be zero (0) but if varchar then it can be an empty string ("").
I agree with other responses that NULL is best suited for this because it transcends all data types denoting the absence of a value. Therefore, zero and empty string might serve as a workaround/hack but they are fundamentally still actual values themselves that might have business domain meaning other than "not a value".
(If only the SQL language supported [a "Not Applicable" (N/A) value type that would serve as an alternative to NULL](http://en.wikipedia.org/wiki/Null_(SQL)#Criticisms)...)
|
TSQL: No value instead of Null
|
[
"",
"sql",
"sql-server",
"t-sql",
"null",
""
] |
## How is it possible to call a client side javascript method after a *specific* update panel has been loaded?
**`Sys.WebForms.PageRequestManager.getInstance().add_endRequest(EndRequestHandler)`** does not work for me because this will fire after ANY update panel finishes loading, and I can find no client side way to find which is the one
**`ScriptManager.GetCurrent(Page).AsyncPostBackSourceElementID`** does not work for me as this is a server-side object, and I want a client-side solution
The ClientSide .Net framework must know which UpdatePanel it is updating in order to update the correct content. Surely there is a way to hook into this event?
Any help would be appreciated.
|
You can hook the [PageRequestManager.beginRequest](http://msdn.microsoft.com/en-us/library/bb397432.aspx) event and inspect the [BeginRequestEventArgs.postBackElement](http://msdn.microsoft.com/en-us/library/bb397485.aspx) property.
Note that it doesn't *really* give you the UpdatePanel, but the control inside of the UpdatePanel. That should be good enough, though.
Edit: Even better, the [PageRequestManager.pageLoaded](http://msdn.microsoft.com/en-us/library/bb397523.aspx) event gives you [PageLoadedEventArgs.panelsUpdated](http://msdn.microsoft.com/en-us/library/bb384016.aspx) (and [panelsCreated](http://msdn.microsoft.com/en-us/library/bb397552.aspx)) properties.
|
Thanks - both good answers. I went with the client-side "pageLoaded" event in the end. That is a fairly buried method that Google did not reveal to me. For those who are interested, this code works with Firebug to give a good demo of the pageLoaded event finding the updated panels:
```
<script type="text/javascript">
$(document).ready(function() {
panelsLoaded = 1;
Sys.WebForms.PageRequestManager.getInstance().add_pageLoaded(PageLoaded)
});
function PageLoaded(sender, args) {
console.log("I have occured " + panelsLoaded++ + " times!");
var panelsCreated = args.get_panelsCreated();
for (var i = 0; i < panelsCreated.length; i++) {
console.log("Panels Updating: " + panelsCreated[i].id);
}
var panelsUpdated = args.get_panelsUpdated();
for (var i = 0; i < panelsUpdated.length; i++) {
console.log("Panels Updating: " + panelsUpdated[i].id);
}
}
</script>
```
|
How to call a client side javascript function after a specific UpdatePanel has been loaded
|
[
"",
"asp.net",
"javascript",
"ajax",
"updatepanel",
""
] |
The C++ standard dictates that member variables inside a single access section must be layed out in memory in the same order they were declared in. At the same time, compilers are free to choose the mutual ordering of the access sections themselves. This freedom makes it impossible in theory to link binaries created by different compilers. So what are the remaining reasons for the strict in-section ordering? And does the ~~upcoming C++09~~ new C++11 standard provide a way to fully determine object layouts "by hand"?
|
> This freedom makes it impossible in theory to link binaries created by different compilers.
It's impossible for a number of reasons, and structure layout is the most minor. vtables, implementations of `operator new` and `delete`, data type sizes...
> So what are the remaining reasons for the strict in-section ordering?
C compatibility, I would have thought, so that a struct defined in C packs the same way it does in C++ *for a given compiler set*.
> And does the new C++~~09~~11 standard provide a way to fully determine object layouts "by hand"?
No, no more than the current standard does.
For a `class` or `struct` with no vtable and entirely private (or public) fields, though, it's already possible if you use the `[u]int[8|16|32|64]_t` types. What use case do you have for more than this?
|
[edit]
I learnt something new today! found the following standard quote:
> Nonstatic data members of a
> (non-union) class declared without an
> intervening access-specifier are
> allocated so that later members have
> higher addresses within a class
> object. The order of allocation of
> nonstatic data members separated by an
> access-specifier is unspecified
> (11.1). Implementation alignment
> requirements might cause two adjacent
> members not to be allocated
> immediately after each other; so might
> requirements for space for managing
> virtual functions (10.3) and virtual
> base classes (10.1).
Interesting - I have no idea why this degree of freedom is given. Continuing to the rest of my previous reply...
---
As mentioned, the reason for preserving the ordering is C compatibility, and back then I guess no one thought of the benefits of reordering members, while memory layout was typically done by hand anyway. Also, what would now be considered "ugly tricks" (like zeroing selected members with memset, or having two structs with the same layout) were quite common.
The standard does not give you a way to enforce a given layout, but most compilers provide measures to control padding, e.g. #pragma pack on MSVC compilers.
The reason for automatic padding is platform portability: different architectures have different alignment requirements, e.g. some architectures throw on misaligned ints (and these were the simple cases back then).
|
Class layout in C++: Why are members sometimes ordered?
|
[
"",
"c++",
""
] |
How do I check if the timestamp date of a record is before midnight today?
datediff is driving me nuts...
|
Here is how to get hour 0 (midnight) of today in SQL:
```
SELECT (CAST(FLOOR(CAST(GETDATE() as FLOAT)) AS DateTime))
```
Just compare your time against that.
Don't use varchar casts since they are slow.
[Check this list](https://stackoverflow.com/questions/202243/custom-datetime-formatting-in-sql-server#202288) for more date time help.
|
Try:
```
WHERE dtColumn < DATEADD(dd, 0, DATEDIFF(dd, 0, GETDATE()))
```
|
SQL date comparison
|
[
"",
"sql",
"t-sql",
""
] |
I encountered a problem when running some old code that was handed down to me. It works 99% of the time, but once in a while, I notice it throwing a "Violation reading location" exception. I have a variable number of threads potentially executing this code throughout the lifetime of the process. The low occurrence frequency may be indicative of a race condition, but I don't know why one would cause an exception in this case. Here is the code in question:
```
MyClass::Dostuff()
{
static map<char, int> mappedChars;
if (mappedChars.empty())
{
for (char c = '0'; c <= '9'; ++c)
{
mappedChars[c] = c - '0';
}
}
// More code here, but mappedChars in not changed.
}
```
The exception is thrown in the map's operator[] implementation, on the very first call to the operator[] (Using the VS2005 implementation of STL.)
```
mapped_type& operator[](const key_type& _Keyval)
{
iterator _Where = this->lower_bound(_Keyval); //exception thrown on the first line
// More code here
}
```
I already tried freezing threads in operator[] and trying to get them to all run through it at the same time, but I was unable to reproduce the exception using that methodology.
Can you think of any reason why that would throw, and only some of the time?
(Yes, I know STL is not thread-safe and I'll need to make changes here. I am mostly curious as to WHY I'm seeing the behavior I described above.)
As requested, here some further details about the exception:
Unhandled exception at 0x00639a1c (app.exe) in app15-51-02-0944\_2008-10-23.mdmp: 0xC0000005: Access violation reading location 0x00000004.
Thanks to everyone suggesting solutions to multithreading issues, but this isn't what this question is meant to address. Yes, I understand the presented code is not protected correctly and is overkill in what it's trying to accomplish. I already have the fix for it implemented. I'm just trying to get a better understanding of why this exception was thrown to begin with.
|
Given an address of "4", Likely the "this" pointer is null or the iterator is bad. You should be able to see this in the debugger. If this is null, then the problem isn't in that function but who ever is calling that function. If the iterator is bad, then it's the race condition you alluded to. Most iterators can't tolerate the list being updated.
Okay, wait - no magic here. Statics are initialized on first use. The code that does this is not multi-thread safe: one thread is doing the initialization while the second thinks it's already been done, but it's still in progress. The result is that it uses an uninitialized variable. You can see this in the assembly below:
```
static x y;
004113ED mov eax,dword ptr [$S1 (418164h)]
004113F2 and eax,1
004113F5 jne wmain+6Ch (41141Ch)
004113F7 mov eax,dword ptr [$S1 (418164h)]
004113FC or eax,1
004113FF mov dword ptr [$S1 (418164h)],eax
00411404 mov dword ptr [ebp-4],0
0041140B mov ecx,offset y (418160h)
00411410 call x::x (4111A4h)
00411415 mov dword ptr [ebp-4],0FFFFFFFFh
```
The $S1 is set to 1 when it init's. If set, (004113F5) it jumps over the init code - freezing the threads in the fnc won't help because this check is done on entry to a function. This isn't null, but one of the members is.
Fix this by moving the map out of the method and into the class as a static member. Then it will be initialized on startup. Otherwise, you have to put a critical section around the calls to DoStuff(). You can protect against the remaining MT issues by placing a critical section around the use of the map itself (e.g. where DoStuff uses operator[]).
|
mappedChars is static so it's shared by all the threads that execute DoStuff(). That alone could be your problem.
If you have to use a static map, then you may need to protect it with a mutex or critical section.
Personally, I think using a map for this purpose is overkill. I would write a helper function that takes a char and subtracts '0' from it. You won't have any thread safety issues with a function.
|
Violation reading location in std::map operator[]
|
[
"",
"c++",
"multithreading",
"exception",
"stl",
""
] |
I have the following entity class (in Groovy):
```
import javax.persistence.Entity
import javax.persistence.Id
import javax.persistence.GeneratedValue
import javax.persistence.GenerationType
@Entity
public class ServerNode {
@Id
@GeneratedValue(strategy = GenerationType.AUTO)
Long id
String firstName
String lastName
}
```
and my persistence.xml:
```
<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="1.0">
<persistence-unit name="NewPersistenceUnit">
<provider>org.hibernate.ejb.HibernatePersistence</provider>
<properties>
<property name="hibernate.connection.url" value="jdbc:mysql://localhost:3306/Icarus"/>
<property name="hibernate.connection.driver_class" value="com.mysql.jdbc.Driver"/>
<property name="hibernate.connection.username" value="root"/>
<property name="hibernate.connection.password" value=""/>
<property name="hibernate.archive.autodetection" value="class"/>
<property name="hibernate.show_sql" value="true"/>
<property name="hibernate.format_sql" value="true"/>
<property name="hbm2ddl.auto" value="create"/>
</properties>
<class>net.interaxia.icarus.data.models.ServerNode</class>
</persistence-unit>
</persistence>
```
and the script:
```
import javax.persistence.EntityManager
import javax.persistence.EntityManagerFactory
import javax.persistence.Persistence
import net.interaxia.icarus.data.models.ServerNode
def factory = Persistence.createEntityManagerFactory("NewPersistenceUnit")
def manager = factory.createEntityManager()
manager.getTransaction().begin()
manager.persist new ServerNode(firstName: "Test", lastName: "Server")
manager.getTransaction().commit()
```
the database *Icarus* exists, but currently has no tables. I would like Hibernate to automatically create and/or update the tables based on the entity classes. How would I accomplish this?
|
I don't know if leaving `hibernate` off the front makes a difference.
The [reference](http://docs.jboss.org/hibernate/core/3.3/reference/en/html/session-configuration.html#configuration-misc-properties) suggests it should be `hibernate.hbm2ddl.auto`
A value of `create` will create your tables at sessionFactory creation, and leave them intact.
A value of `create-drop` will create your tables, and then drop them when you close the sessionFactory.
Perhaps you should set the [`javax.persistence.Table`](http://java.sun.com/javaee/5/docs/api/javax/persistence/Table.html) annotation explicitly?
|
You might try changing this line in your persistence.xml from
```
<property name="hbm2ddl.auto" value="create"/>
```
to:
```
<property name="hibernate.hbm2ddl.auto" value="update"/>
```
This is supposed to keep the schema in sync with any changes you make to the model each time you run the app.
Got this from [JavaRanch](https://coderanch.com/forums/search/search/-1?match_type=all&sort_by=time&q=hibernate.hbm2ddl.auto&groupByTopic=true)
|
Hibernate: Automatically creating/updating the db tables based on entity classes
|
[
"",
"java",
"mysql",
"hibernate",
"jpa",
"groovy",
""
] |
I have a Queue object that I need to ensure is thread-safe. Would it be better to use a lock object like this:
```
lock(myLockObject)
{
//do stuff with the queue
}
```
Or is it recommended to use Queue.Synchronized like this:
```
Queue.Synchronized(myQueue).whatever_i_want_to_do();
```
From reading the MSDN docs it says I should use Queue.Synchronized to make it thread-safe, but then it gives an example using a lock object. From the MSDN article:
> To guarantee the thread safety of the
> Queue, all operations must be done
> through this wrapper only.
>
> Enumerating through a collection is
> intrinsically not a thread-safe
> procedure. Even when a collection is
> synchronized, other threads can still
> modify the collection, which causes
> the enumerator to throw an exception.
> To guarantee thread safety during
> enumeration, you can either lock the
> collection during the entire
> enumeration or catch the exceptions
> resulting from changes made by other
> threads.
If calling Synchronized() doesn't ensure thread-safety what's the point of it? Am I missing something here?
|
Personally I always prefer locking. It means that *you* get to decide the granularity. If you just rely on the Synchronized wrapper, each individual operation is synchronized but if you ever need to do more than one thing (e.g. iterating over the whole collection) you need to lock anyway. In the interests of simplicity, I prefer to just have one thing to remember - lock appropriately!
EDIT: As noted in comments, if you *can* use higher level abstractions, that's great. And if you *do* use locking, be careful with it - document what you expect to be locked where, and acquire/release locks for as short a period as possible (more for correctness than performance). Avoid calling into unknown code while holding a lock, avoid nested locks etc.
In .NET 4 there's a *lot* more support for higher-level abstractions (including lock-free code). Either way, I still wouldn't recommend using the synchronized wrappers.
|
There's a major problem with the `Synchronized` methods in the old collection library, in that they synchronize at too low a level of granularity (per method rather than per unit-of-work).
There's a classic race condition with a synchronized queue, shown below where you check the `Count` to see if it is safe to dequeue, but then the `Dequeue` method throws an exception indicating the queue is empty. This occurs because each individual operation is thread-safe, but the value of `Count` can change between when you query it and when you use the value.
```
object item;
if (queue.Count > 0)
{
// at this point another thread dequeues the last item, and then
// the next line will throw an InvalidOperationException...
item = queue.Dequeue();
}
```
You can safely write this using a manual lock around the entire unit-of-work (i.e. checking the count *and* dequeueing the item) as follows:
```
object item;
lock (queue)
{
if (queue.Count > 0)
{
item = queue.Dequeue();
}
}
```
So as you can't safely dequeue anything from a synchronized queue, I wouldn't bother with it and would just use manual locking.
.NET 4.0 should have a whole bunch of properly implemented thread-safe collections, but that's still nearly a year away unfortunately.
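The unit-of-work pattern itself is language-agnostic; as an illustrative sketch (in Java, purely for comparison), the check and the dequeue happen under a single lock so no other thread can empty the queue in between:

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class SafeDequeue {
    private final Queue<String> queue = new ArrayDeque<>();
    private final Object lock = new Object();

    public void enqueue(String item) {
        synchronized (lock) {
            queue.add(item);
        }
    }

    // Check and dequeue under one lock: the emptiness test and the
    // removal form a single atomic unit of work.
    public String tryDequeue() {
        synchronized (lock) {
            return queue.isEmpty() ? null : queue.remove();
        }
    }
}
```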
|
In C# would it be better to use Queue.Synchronized or lock() for thread safety?
|
[
"",
"c#",
"multithreading",
"queue",
""
] |
I have seen this syntax in MSDN: [`yield break`](https://msdn.microsoft.com/en-us/library/9k7k7cf0.aspx), but I don't know what it does. Does anyone know?
|
It specifies that an iterator has come to an end. You can think of `yield break` as a `return` statement which does not return a value.
For example, if you define a function as an iterator, the body of the function may look like this:
```
for (int i = 0; i < 5; i++)
{
yield return i;
}
Console.Out.WriteLine("You will see me");
```
Note that after the loop has completed all its cycles, the last line gets executed and you will see the message in your console app.
Or like this with `yield break`:
```
int i = 0;
while (true)
{
if (i < 5)
{
yield return i;
}
else
{
// note that i++ will not be executed after this
yield break;
}
i++;
}
Console.Out.WriteLine("Won't see me");
```
In this case the last statement is never executed because we left the function early.
|
Ends an iterator block (e.g. says there are no more elements in the IEnumerable).
|
What does "yield break;" do in C#?
|
[
"",
"c#",
".net",
"yield",
""
] |
For example:
```
public class A : A.B
{
public class B { }
}
```
Which generates this error from the compiler:
> Circular base class dependency
> involving 'A' and 'A.B'
I always figured a nested class behaved just like a regular class except with special rules concerning accessing the outer class's private members, but I guess there's some implicit inheritance occurring between the two classes?
|
There's no implicit inheritance involved as far as I can tell. I would have expected this to be okay - although I can imagine weirdness if A and B were generic.
It's specified in section 10.1.4 of the spec:
> When a class B derives from a class A,
> it is a compile-time error for A to
> depend on B. A class directly depends
> on its direct base class (if any) and
> **directly depends on the class within
> which it is immediately nested** (if
> any). Given this definition, the
> complete set of classes upon which a
> class depends is the transitive
> closure of the directly depends on
> relationship.
I've highlighted the relevant section.
That explains why the compiler is rejecting it, but not why the language prohibits it. I wonder if there's a CLI restriction...
EDIT: Okay, I've had a response from Eric Lippert. Basically, it would be technically possible (there's nothing in the CLI to prohibit it), but:
* Allowing it would be difficult in the compiler, invalidating various current assumptions around ordering and cycles
* It's a pretty odd design decision which is easier to prohibit than to support
It was also noted on the email thread that it would make this kind of thing valid:
```
A.B x = new A.B.B.B.B.B.B.B.B.B.B.B.B();
```
... but that would already (as noted by Tinister) be valid if B derived from A.
Nesting + inheritance = oddness...
|
This is not a C# thing as much as it is a compiler thing. One of the jobs of a compiler is to lay out a class in memory, that is, as a bunch of basic data types, pointers, function pointers and other classes.
It can't construct the layout for class A until it knows what the layout of class B is. It can't know what the layout of class B is until it finished with the layout of class A. Circular dependency.
|
Why can't a class extend its own nested class in C#?
|
[
"",
"c#",
"nested-class",
""
] |
This is not to be confused with ["How to tell if a DOM element is visible?"](https://stackoverflow.com/questions/123999/how-to-tell-if-a-dom-element-is-visible)
I want to determine if a given DOM element is visible on the page.
E.g. if the element is a child of a parent which has `display:none;` set, then it won't be visible.
(This has nothing to do with whether the element is in the viewport or not)
I could iterate through each parent of the element, checking the `display` style, but I'd like to know if there is a more direct way?
|
From a quick test in Firefox, it looks like the size and position properties (clientWidth, offsetTop etc.) all return 0 when an element is hidden by a parent being `display:none`.
|
Using [Prototype](http://prototypejs.org):
```
if ($('someDiv').visible()) {...}
```
|
How to tell if a DOM element is displayed?
|
[
"",
"javascript",
"html",
"css",
"dom",
""
] |
Can I write to the end of a 5GB file in Java? This question came up in my office and no one is sure what the answer is.
|
This should be possible fairly easily using a [RandomAccessFile](http://java.sun.com/javase/6/docs/api/java/io/RandomAccessFile.html). Something like the following should work:
```
String filename;
RandomAccessFile myFile = new RandomAccessFile(filename, "rw");
// Set write pointer to the end of the file
myFile.seek(myFile.length());
// Write to end of file here
```
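A complete, runnable version of the above (the file name is invented for the example):

```java
import java.io.File;
import java.io.RandomAccessFile;

public class AppendDemo {
    public static void main(String[] args) throws Exception {
        File f = new File("big.log"); // hypothetical file name
        try (RandomAccessFile file = new RandomAccessFile(f, "rw")) {
            file.seek(file.length());           // move past the last existing byte
            file.writeBytes("appended line\n"); // writes now happen at the end
        }
    }
}
```

Since `RandomAccessFile` uses a `long` file pointer, the 2GB `int` limit on array indexes doesn't apply; a 5GB offset is fine as long as the underlying filesystem supports large files.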
|
Yes. Take a look at the `RandomAccessFile` documentation:
<http://java.sun.com/javase/6/docs/api/java/io/RandomAccessFile.html#seek(long)>
That is, you open the file, set the position to the end of the file, and start writing from there.
Tell us how it went.
|
Can I write to the end of a 5GB file in Java?
|
[
"",
"java",
"file-io",
""
] |
The thing is, I've been using the [lock statement](http://msdn.microsoft.com/en-us/library/c5kehkcz.aspx) to protect a critical part of my code, but now I realize I could allow concurrent execution of that critical code if some conditions are met.
Is there a way to condition the lock?
|
```
Action doThatThing = someMethod;
if (condition)
{
lock(thatThing)
{
doThatThing();
}
}
else
{
doThatThing();
}
```
|
```
bool locked = false;
if (condition) {
Monitor.Enter(lockObject);
locked = true;
}
try {
// possibly critical section
}
finally {
if (locked) Monitor.Exit(lockObject);
}
```
EDIT: yes, there is a race condition unless you can assure that the condition is constant while threads are entering.
|
How can I write a conditional lock in C#?
|
[
"",
"c#",
".net",
"multithreading",
"locking",
""
] |
I want to load 52 images (a deck of cards) in GIF format from my resource folder into an `Image[]` in C#. Any ideas?
Thanks,
Jon
|
You can read a Bitmap from a file like this;
```
public static Bitmap GetBitmap( string filename )
{
Bitmap retBitmap = null;
string path = String.Concat( BitmapDir, filename );
if ( File.Exists( path ) )
{
try
{
retBitmap = new Bitmap( path, true );
}
catch { }
}
return retBitmap;
}
```
You can get a list of files in a resource directory like this;
```
string[] files = Directory.GetFiles( BitmapDir, "*.gif" );
```
Just iterate through files calling GetBitmap( file ) and populate your array. BitmapDir is obviously the directory where your GIF files live.
|
Assuming that you have the images in a folder on your local file system and that you are running under .NET 3.5:
```
Image[] cards = Directory.GetFiles(cardsFolder).Select(f => Image.FromFile(f)).ToArray();
```
One-liners are always nice :-)
|
How do I get images located in my resource folder into an array?
|
[
"",
"c#",
""
] |
I have a MySQL table containing domain names:
```
+----+---------------+
| id | domain |
+----+---------------+
| 1 | amazon.com |
| 2 | google.com |
| 3 | microsoft.com |
| | ... |
+----+---------------+
```
I'd like to be able to search through this table for a full hostname (i.e. 'www.google.com'). If it were the other way round where the table contained the full URL I'd use:
```
SELECT * FROM table WHERE domain LIKE '%google.com%'
```
But the inverse is not so straightforward. My current thinking is to search for the full hostname, then progressively strip off each part of the domain, and search again. (i.e. search for 'www.google.com' then 'google.com')
This is not particularly efficient or clever; there must be a better way. I am sure it is a common problem, and no doubt easy to solve!
|
You can use the column on the right of the like too:
```
SELECT domain FROM table WHERE 'www.google.com' LIKE CONCAT('%', domain);
```
or
```
SELECT domain FROM table WHERE 'www.google.com' LIKE CONCAT('%', domain, '%');
```
It's not particularly efficient but it works.
|
In `mysql` you can use regular expressions (`RLIKE`) to perform matches. Given this ability you could do something like this:
```
SELECT * FROM table WHERE 'www.google.com' RLIKE domain;
```
It appears that the way `RLIKE` has been implemented it is even smart enough to treat the dot in that field (normally a wildcard in `regex`) as a literal dot.
`MySQL's` inclusion of regular expressions gives you a very powerful ability to parse and search strings. If you would like to know more about regular expressions, just google "`regex`". You can also use one of these links:
<http://en.wikipedia.org/wiki/Regular_expression>
<http://www.regular-expressions.info/>
<http://www.codeproject.com/KB/string/re.aspx>
|
Inverse of SQL LIKE '%value%'
|
[
"",
"sql",
"mysql",
"database",
""
] |
Currently I have the function CreateLog() for creating a log4net Log named after the constructing instance's class.
Typically used as in:
```
class MessageReceiver
{
protected ILog Log = Util.CreateLog();
...
}
```
If we remove lots of error handling the implementation boils down to:
[EDIT: Please read the longer version of CreateLog further on in this post.]
```
public ILog CreateLog()
{
System.Diagnostics.StackFrame stackFrame = new System.Diagnostics.StackFrame(1);
System.Reflection.MethodBase method = stackFrame.GetMethod();
return CreateLogWithName(method.DeclaringType.FullName);
}
```
Problem is that if we inherit MessageReceiver into sub classes, the log will still take its name from MessageReceiver, since this is the declaring class of the method (constructor) which calls CreateLog.
```
class IMReceiver : MessageReceiver
{ ... }
class EmailReceiver : MessageReceiver
{ ... }
```
Instances of both these classes would get Logs with name "MessageReceiver" while I would like them to be given names "IMReceiver" and "EmailReceiver".
I know this can easily be done (and is done) by passing a reference to the object being created when calling CreateLog, since the GetType() method on object does what I want.
There are some minor reasons to prefer not adding the parameter and personally I feel disturbed by not finding a solution with no extra argument.
Is there anyone who can show me how to implement a zero argument CreateLog() that gets the name from the subclass and not the declaring class?
EDIT:
The CreateLog function does more than mentioned above. The reason for having one log per instance is to be able to distinguish between different instances in the logfile. This is enforced by the CreateLog/CreateLogWithName pair.
Expanding on the functionality of CreateLog() to motivate its existence.
```
public ILog CreateLog()
{
System.Diagnostics.StackFrame stackFrame = new System.Diagnostics.StackFrame(1);
System.Reflection.MethodBase method = stackFrame.GetMethod();
Type type = method.DeclaringType;
if (method.IsStatic)
{
return CreateLogWithName(type.FullName);
}
else
{
return CreateLogWithName(type.FullName + "-" + GetAndInstanceCountFor(type));
}
}
```
Also I prefer writing ILog Log = Util.CreateLog(); rather than copying in some long cryptic line from an other file whenever I write a new class. I am aware that the reflection used in Util.CreateLog is not guaranteed to work though - is System.Reflection.MethodBase.GetCurrentMethod() guaranteed to work?
|
> Is there anyone who can show me how to implement a zero argument CreateLog() that gets the name from the subclass and not the declaring class?
I don't think you'll be able to do it by looking at the stack frame.
While your class is `IMReceiver`, the call to `CreateLog` method is in the `MessageReceiver` class. The stack frame must tell you where the method is being called from, or it wouldn't be any use, so it's always going to say `MessageReceiver`
If you called `CreateLog` explicitly in your `IMReceiver` and other classes, then it works, as the stack frame shows the method being called in the derived class (because it actually is).
Here's the best thing I can come up with:
```
class BaseClass{
public Log log = Utils.CreateLog();
}
class DerivedClass : BaseClass {
public DerivedClass() {
log = Utils.CreateLog();
}
}
```
If we trace creation of logs, we get this:
```
new BaseClass();
# Log created for BaseClass
new DerivedClass();
# Log created for BaseClass
# Log created for DerivedClass
```
The second 'log created for derived class' overwrites the instance variable, so your code will behave correctly, you'll just be creating a BaseClass log which immediately gets thrown away. This seems hacky and bad to me, I'd just go with specifying the type parameter in the constructor or using a generic.
IMHO specifying the type is cleaner than poking around in the stack frame anyway
If you can get it without looking at the stack frame, your options expand considerably
|
Normally, [MethodBase.ReflectedType](http://msdn.microsoft.com/en-us/library/system.reflection.memberinfo.reflectedtype.aspx) would have your info. But, according to MSDN [StackFrame.GetMethod](http://msdn.microsoft.com/en-us/library/system.diagnostics.stackframe.getmethod.aspx):
> The method that is currently executing may be inherited from a base class, although it is called in a derived class. In this case, the ReflectedType property of the MethodBase object that is returned by GetMethod identifies the base class, not the derived class.
which means you're probably out of luck.
|
How do I find the type of the object instance of the caller of the current function?
|
[
"",
"c#",
".net",
"reflection",
"logging",
""
] |
At every company I have worked at, I have found that people are still writing their SQL queries in the ANSI-89 standard:
```
select a.id, b.id, b.address_1
from person a, address b
where a.id = b.id
```
rather than the ANSI-92 standard:
```
select a.id, b.id, b.address_1
from person a
inner join address b
on a.id = b.id
```
For an extremely simple query like this, there's not a big difference in readability, but for large queries I find that having my join criteria grouped in with the table listing makes it much easier to see where I might have issues in my join, and lets me keep all my filtering in my WHERE clause. Not to mention that I feel outer joins are much more intuitive than the (+) syntax in Oracle.
As I try to evangelize ANSI-92 to people, are there any concrete performance benefits in using ANSI-92 over ANSI-89? I would try it on my own, but the Oracle setups we have here don't allow us to use EXPLAIN PLAN - wouldn't want people to try to optimize their code, would ya?
|
According to "SQL Performance Tuning" by Peter Gulutzan and Trudy Pelzer, of the six or eight RDBMS brands they tested, there was no difference in optimization or performance of SQL-89 versus SQL-92 style joins. One can assume that most RDBMS engines transform the syntax into an internal representation before optimizing or executing the query, so the human-readable syntax makes no difference.
I also try to evangelize the SQL-92 syntax. Sixteen years after it was approved, it's about time people start using it! And all brands of SQL database now support it, so there's no reason to continue to use the nonstandard `(+)` Oracle syntax or `*=` Microsoft/Sybase syntax.
As for why it's so hard to break the developer community of the SQL-89 habit, I can only assume that there's a large "base of the pyramid" of programmers who code by copy & paste, using ancient examples from books, magazine articles, or another code base, and these people don't learn new syntax abstractly. Some people pattern-match, and some people learn by rote.
I am gradually seeing people using SQL-92 syntax more frequently than I used to, though. I've been answering SQL questions online since 1994.
|
Well the ANSI092 standard includes some pretty heinous syntax. [Natural Joins](http://en.wikipedia.org/wiki/Join_(SQL)#Natural_join) are one and the USING Clause is another. IMHO, the addition of a column to a table shouldn't break code but a NATURAL JOIN breaks in a most egregious fashion. The "best" way to break is by compilation error. For example if you SELECT \* somewhere, the addition of a column **could** fail to compile. The next best way to fail would be a run time error. It's worse because your users may see it, but it still gives you a nice warning that you've broken something. If you use ANSI92 and write queries with NATURAL joins, it won't break at compile time and it won't break at run time, the query will just suddenly start producing wrong results. These types of bugs are insidious. Reports go wrong, potentially financial disclosure are incorrect.
For those unfamiliar with NATURAL Joins. They join two tables on every column name that exists in both tables. Which is really cool when you have a 4 column key and you're sick of typing it. The problem comes in when Table1 has a pre-existing column named DESCRIPTION and you add a new column to Table2 named, oh I don't know, something innocuous like, mmm, DESCRIPTION and now you're joining the two tables on a VARCHAR2(1000) field that is free form.
The USING clause can lead to total ambiguity in addition to the problem described above. In another [SO post](https://stackoverflow.com/q/228424/111424), someone showed this ANSI-92 SQL and asked for help reading it.
```
SELECT c.*
FROM companies AS c
JOIN users AS u USING(companyid)
JOIN jobs AS j USING(userid)
JOIN useraccounts AS us USING(userid)
WHERE j.jobid = 123
```
This is completely ambiguous. I put a UserID column in both Companies and user tables and there's no complaint. What if the UserID column in companies is the ID of the last person to modify that row?
**I'm serious, Can anyone explain why such ambiguity was necessary? Why is it built straight into the standard?**
I think Bill is correct that there is a large base of developers who copy/paste their way through coding. In fact, I can admit that I'm kind of one when it comes to ANSI-92. Every example I ever saw showed multiple joins being nested in parentheses. Honestly, that makes picking out the tables in the SQL difficult at best. But then an SQL92 evangelist explained that it would actually force a join order. JESUS... all those copy/pasters I've seen are now actually forcing a join order - a job that's 95% of the time better left to optimizers, *especially* for a copy/paster.
Tomalak got it right when he said,
> people don't switch to new syntax just
> because it is there
It has to give me something and I don't see an upside. And if there is an upside, the negatives are an albatross too big to be ignored.
|
Why isn't SQL ANSI-92 standard better adopted over ANSI-89?
|
[
"",
"sql",
"join",
"ansi-sql",
"ansi-92",
""
] |
I have a Google Map that suddenly stopped working for no apparent reason (I hadn't touched the code for months, but the wrapper code from our CMS may have changed without Corporate telling me).
<http://www.democratandchronicle.com/section/builder>
(sorry about the nasty HTML outside the map, most of that comes from our corporate parent...)
I've narrowed it down to this part of my `drawMarker` function:
```
GEvent.addListener(marker, 'click', function() {
marker.openInfoWindowHtml(html, { maxWidth: 500 });
});
```
Of note:
* `alert(html);` displays the correct HTML for the infowindow.
* The HTML in the html variable is indeed valid.
* The click event is firing (confirmed by `alert('test');` within it)
* Another map I host on the same site [works fine](http://www.democratandchronicle.com/garagesales), despite similar code.
* No JavaScript errors in Firebug or IE that I can see.
I've been bashing my head against this for a while. What am I missing?
|
This resolved itself. I suspect an update to the API broke something for a version or two.
|
I've had random problems with the Google Maps API at times, and more than once it has been fixed by going back an API version. I.e., if your Google Maps API JavaScript inclusion string is like this: `http://maps.google.com/maps?file=api&v=2.x&key=XXXXX`
change the **2.x** to something a few versions back (back when it was working) like **2.132** or something
|
Google Maps API - GMarker.openInfoWindowHtml() stopped working
|
[
"",
"javascript",
"google-maps",
"mapping",
"openinfowindowhtml",
""
] |
Our site has multiple "wizards" where various data is collected over several pages, and cannot be committed to the database until the last step.
What is the best/correct way to make a wizard like this with ASP.Net MVC
edit: My boss is now saying "no javascript" - any thoughts on how to get around that restriction?
|
I don't believe there is a best/correct way, but the way I'd do it is...
Each wizard gets its own page. Each step gets its own div. All steps are in the same form.
The previous/next buttons would essentially hide/show the div in each step of the process. The last step's submit button submits the entire form. It would be pretty trivial to implement this using jQuery, and it would be easy to maintain as all the wizard steps are in a single ViewPage.
On the controller side, you'd have two controller methods, the HttpVerbs.Get version that would prepare the form for viewing and the HttpVerbs.Post version that would take a FormsResult and parse it to get out the information required to submit the user's answers to storage/other processes.
---
Wow, your boss stinks.
This answer almost gracefully works for those \*\*\*\*\*\* who have javascript disabled (yes, both of them). You can tweak it to hide the next-previous buttons via CSS and unhide them in your javascript code. This way people with javascript see the wizard and people without javascript will see the entire form (without next/prev buttons).
The other option is to create a view for each step in the wizard. You can store the intermediate results of the form in the Session. This way would cost plenty of time and effort to implement, which means you could probably squeeze some overtime out of your boss when you demonstrate, in about twenty minutes of effort you spend during lunch, how easy the javascript route is to implement.
|
If you can't use JavaScript, then make each step a view, with a method in the controller, and keep your data in session until ready to submit to the database.
You can make your Next and Prev buttons using the ActionLink HtmlHelper method.
|
how to make a wizard with ASP.Net MVC
|
[
"",
"c#",
".net",
"asp.net-mvc",
"wizard",
""
] |
I am looking for good ideas for implementing a generic way to have a single line (or anonymous delegate) of code execute with a timeout.
```
TemperamentalClass tc = new TemperamentalClass();
tc.DoSomething(); // normally runs in 30 sec. Want to error at 1 min
```
I'm looking for a solution that can elegantly be implemented in many places where my code interacts with temperamental code (that I can't change).
In addition, I would like to have the offending "timed out" code stopped from executing further if possible.
|
The really tricky part here was killing the long-running task by passing the executing thread from the Action back to a place where it could be aborted. I accomplished this with a wrapped delegate that passes the thread to kill out into a local variable in the method that created the lambda.
I submit this example, for your enjoyment. The method you are really interested in is CallWithTimeout. **This will cancel the long running thread by aborting it, and swallowing the ThreadAbortException**:
Usage:
```
class Program
{
static void Main(string[] args)
{
//try the five second method with a 6 second timeout
CallWithTimeout(FiveSecondMethod, 6000);
//try the five second method with a 4 second timeout
//this will throw a timeout exception
CallWithTimeout(FiveSecondMethod, 4000);
}
static void FiveSecondMethod()
{
Thread.Sleep(5000);
}
```
The static method doing the work:
```
static void CallWithTimeout(Action action, int timeoutMilliseconds)
{
Thread threadToKill = null;
Action wrappedAction = () =>
{
threadToKill = Thread.CurrentThread;
try
{
action();
}
catch(ThreadAbortException ex){
            Thread.ResetAbort(); // cancel the hard abort and let the action finish nicely.
}
};
IAsyncResult result = wrappedAction.BeginInvoke(null, null);
if (result.AsyncWaitHandle.WaitOne(timeoutMilliseconds))
{
wrappedAction.EndInvoke(result);
}
else
{
threadToKill.Abort();
throw new TimeoutException();
}
}
}
```
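For comparison, the same call-with-timeout shape on the JVM (an illustrative Java sketch, not a translation of the C# above) can lean on `Future.get` with a timeout:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class TimeoutDemo {
    // Runs the action on a worker thread, throwing TimeoutException
    // if it does not complete within the given number of milliseconds.
    static <T> T callWithTimeout(Callable<T> action, long millis) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            Future<T> future = pool.submit(action);
            return future.get(millis, TimeUnit.MILLISECONDS);
        } finally {
            pool.shutdownNow(); // interrupts the task if it is still running
        }
    }
}
```

Note that `shutdownNow` only *interrupts* the still-running task; unlike `Thread.Abort` it cannot forcibly stop code that ignores interruption.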
|
**We are using code like this heavily in production**:
```
var result = WaitFor<Result>.Run(1.Minutes(), () => service.GetSomeFragileResult());
```
Implementation is open-sourced, works efficiently even in parallel computing scenarios and is available as a part of [Lokad Shared Libraries](https://github.com/Lokad/lokad-shared-libraries)
```
/// <summary>
/// Helper class for invoking tasks with timeout. Overhead is 0.005 ms.
/// </summary>
/// <typeparam name="TResult">The type of the result.</typeparam>
[Immutable]
public sealed class WaitFor<TResult>
{
readonly TimeSpan _timeout;
/// <summary>
/// Initializes a new instance of the <see cref="WaitFor{T}"/> class,
/// using the specified timeout for all operations.
/// </summary>
/// <param name="timeout">The timeout.</param>
public WaitFor(TimeSpan timeout)
{
_timeout = timeout;
}
/// <summary>
    /// Executes the specified function within the current thread, aborting it
/// if it does not complete within the specified timeout interval.
/// </summary>
/// <param name="function">The function.</param>
/// <returns>result of the function</returns>
/// <remarks>
/// The performance trick is that we do not interrupt the current
/// running thread. Instead, we just create a watcher that will sleep
/// until the originating thread terminates or until the timeout is
/// elapsed.
/// </remarks>
/// <exception cref="ArgumentNullException">if function is null</exception>
/// <exception cref="TimeoutException">if the function does not finish in time </exception>
public TResult Run(Func<TResult> function)
{
if (function == null) throw new ArgumentNullException("function");
var sync = new object();
var isCompleted = false;
WaitCallback watcher = obj =>
{
var watchedThread = obj as Thread;
lock (sync)
{
if (!isCompleted)
{
Monitor.Wait(sync, _timeout);
}
}
// CAUTION: the call to Abort() can be blocking in rare situations
// http://msdn.microsoft.com/en-us/library/ty8d3wta.aspx
// Hence, it should not be called with the 'lock' as it could deadlock
// with the 'finally' block below.
if (!isCompleted)
{
watchedThread.Abort();
}
};
try
{
ThreadPool.QueueUserWorkItem(watcher, Thread.CurrentThread);
return function();
}
catch (ThreadAbortException)
{
// This is our own exception.
Thread.ResetAbort();
throw new TimeoutException(string.Format("The operation has timed out after {0}.", _timeout));
}
finally
{
lock (sync)
{
isCompleted = true;
Monitor.Pulse(sync);
}
}
}
/// <summary>
    /// Executes the specified function within the current thread, aborting it
/// if it does not complete within the specified timeout interval.
/// </summary>
/// <param name="timeout">The timeout.</param>
/// <param name="function">The function.</param>
/// <returns>result of the function</returns>
/// <remarks>
/// The performance trick is that we do not interrupt the current
/// running thread. Instead, we just create a watcher that will sleep
/// until the originating thread terminates or until the timeout is
/// elapsed.
/// </remarks>
/// <exception cref="ArgumentNullException">if function is null</exception>
/// <exception cref="TimeoutException">if the function does not finish in time </exception>
public static TResult Run(TimeSpan timeout, Func<TResult> function)
{
return new WaitFor<TResult>(timeout).Run(function);
}
}
```
---
This code is still buggy, you can try with this small test program:
```
static void Main(string[] args) {
// Use a sb instead of Console.WriteLine() that is modifying how synchronous object are working
var sb = new StringBuilder();
for (var j = 1; j < 10; j++) // do the experiment 10 times to have chances to see the ThreadAbortException
for (var ii = 8; ii < 15; ii++) {
int i = ii;
try {
Debug.WriteLine(i);
try {
WaitFor<int>.Run(TimeSpan.FromMilliseconds(10), () => {
Thread.Sleep(i);
sb.Append("Processed " + i + "\r\n");
return i;
});
}
catch (TimeoutException) {
sb.Append("Time out for " + i + "\r\n");
}
Thread.Sleep(10); // Here to wait until we get the abort procedure
}
catch (ThreadAbortException) {
Thread.ResetAbort();
sb.Append(" *** ThreadAbortException on " + i + " *** \r\n");
}
}
Console.WriteLine(sb.ToString());
}
}
```
There is a race condition: it is clearly possible for a ThreadAbortException to be raised after the call to `WaitFor<int>.Run()` has completed. I didn't find a reliable way to fix this; however, with the same test I cannot reproduce any problem with *TheSoftwareJedi*'s accepted answer.
[](https://i.stack.imgur.com/iX0X8.png)
|
Implement C# Generic Timeout
|
[
"",
"c#",
"multithreading",
"c#-3.0",
"asynchronous",
"timeout",
""
] |
We have 50+ Java batch processes that run at different times of the day. They run on a Solaris box, and are started via cron. Currently, the only way we know if they succeed or fail is by an email generated at the end of each batch process. We have a support team that monitors these emails. Recently, we've had issues with emails not being received, even though the batches are running. There must be a better way.
Without having to reinvent the wheel, are there any open source batch monitoring applications?
And a more general question, what is the best way to monitor batch processes?
|
Is there currently some batch management system in-place? Or are the jobs run through the OS scheduler? (ie, Windows `Schedule Tasks` or \*nix `cron`)
[Quartz](http://www.opensymphony.com/quartz) is an Open Source (Apache License) java-based job scheduler that has infrastructure in place for *listeners* that can be used for notification purposes, but there would be some code development involved.
|
I don't know about open source batch monitoring applications but there is a new sub project of Spring: Spring-Batch that provides a batch processing framework. I've used it successfully in a few new projects.
When you kick off a batch job you can wire up a job execution listener. In my case, when the job fails with an exception, my execution listener intercepts this and sends out an error email with the pertinent stack trace to a well known email list. I use a Tasklet at the end of the batch to send an email to indicate normal completion.
Of course, if there is an error in the email subsystem (and the message doesn't get sent) all bets are off...
|
Java batch monitoring
|
[
"",
"java",
"frameworks",
"monitoring",
"batch-processing",
""
] |
I understand that using the "===" compares type, so running the following code results in "not equal" because it's comparing a number type to a string type.
```
var a = 20;
var b = "20";
if (a === b) {
alert("They are equal");
} else {
alert("They are not equal");
}
```
But I don't understand how using "==" to compare only the values results in the "They are equal" message.
```
var a = 20;
var b = "20";
if (a == b) {
alert("They are equal");
} else {
alert("They are not equal");
}
```
How are the values equal? Isn't the string "20" stored as the ASCII characters 50 and 48 (0110010 and 0110000 in binary) and 20 stored as the actual binary number 0010100?
EDIT: Thanks everyone! I think all the responses are great and have helped me understand this much better.
|
The `==` operator compares only the values of the variables. If the types are different, a conversion is performed: here the string "20" is converted to the number 20, and then the results are compared.
The `===` operator compares not only the values, but also the types, so no conversion is performed. In this case `"20" !== 20`
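The same distinction can be sketched outside JavaScript; in the Python snippet below, the explicit `int("20")` plays the role of the conversion that JavaScript's `==` performs implicitly (Python's own `==` never coerces, so it behaves like `===` here):

```python
# "20" is stored as character codes, just as the asker suspects:
print(ord("2"), ord("0"))  # 50 48

# Python's == does not coerce, so this mirrors JavaScript's ===:
print("20" == 20)          # False

# Converting first mirrors what JavaScript's == does behind the scenes:
print(int("20") == 20)     # True
```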
|
The JavaScript engine sees `a` as a number and converts `b` to a number before the evaluation.
|
How is "20" and 20 considered equal in JavaScript?
|
[
"",
"javascript",
"comparison",
"binary",
"operators",
"ascii",
""
] |
Do you have any ideas how to call DoEvents from a C# DLL
|
Don't. It's sketchy enough when you are the app controlling the event loop; pumping messages from a DLL just adds to the risk that you'll end up hitting code a naive programmer didn't make safe for re-entrancy.
|
Do you mean `System.Windows.Forms.Application.DoEvents()`?
|
DoEvents In a DLL
|
[
"",
"c#",
"dll",
""
] |
Python is installed in a local directory.
My directory tree looks like this:
```
(local directory)/site-packages/toolkit/interface.py
```
My code is in here:
```
(local directory)/site-packages/toolkit/examples/mountain.py
```
To run the example, I write `python mountain.py`, and in the code I have:
```
from toolkit.interface import interface
```
And I get the error:
```
Traceback (most recent call last):
File "mountain.py", line 28, in ?
from toolkit.interface import interface
ImportError: No module named toolkit.interface
```
I have already checked `sys.path` and there I have the directory `/site-packages`. Also, I have the file `__init__.py.bin` in the toolkit folder to indicate to Python that this is a package. I also have a `__init__.py.bin` in the examples directory.
I do not know why Python cannot find the file when it is in `sys.path`. Any ideas? Can it be a permissions problem? Do I need some execution permission?
|
Based on your comments to orip's post, I guess this is what happened:
1. You edited `__init__.py` on windows.
2. The windows editor added something non-printing, perhaps a carriage-return (end-of-line in Windows is CR/LF; in unix it is LF only), or perhaps a CTRL-Z (windows end-of-file).
3. You used WinSCP to copy the file to your unix box.
4. WinSCP thought: "This has something that's not basic text; I'll put a .bin extension to indicate binary data."
5. The missing `__init__.py` (now called `__init__.py.bin`) means python doesn't understand toolkit as a package.
6. You create `__init__.py` in the appropriate directory and everything works... ?
|
I ran into something very similar when I did this exercise in LPTHW; I could never get Python to recognise that I had files in the directory I was calling from. But I was able to get it to work in the end. What I did, and what I recommend, is to try this:
(NOTE: From your initial post, I am assuming you are using an \*NIX-based machine and are running things from the command line, so this advice is tailored to that. Since I run Ubuntu, this is what I did)
1. Change directory (cd) to the directory *above* the package your files are in. In this case, you're trying to run the `mountain.py` file, which imports the `toolkit.interface` module, and the two live in separate directories. For `import toolkit.interface` to resolve, Python needs a path entry that *contains* the `toolkit` package, which here is the `site-packages` directory.
2. When you are in that directory, enter this line of code on your command line:
`export PYTHONPATH=.`
This sets your PYTHONPATH to ".", which basically means that Python will also search for imported packages in the directory you are currently in, so it will find the `toolkit` package there and, through it, the modules inside it.
3. After you've set your PYTHONPATH in the step above, run your module from that same directory. Python should now find and load the modules you specified.
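As a minimal sketch of the package mechanics discussed above (the file names mirror the question, and `sys.path.insert` has the same effect as putting a directory on `PYTHONPATH`), building a throwaway layout shows the import succeeding once `__init__.py` is present and the *parent* of the package is on the path:

```python
import os
import sys
import tempfile

# Build a throwaway layout: <root>/toolkit/{__init__.py, interface.py}
root = tempfile.mkdtemp()
pkg = os.path.join(root, "toolkit")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()  # marks "toolkit" as a package
with open(os.path.join(pkg, "interface.py"), "w") as f:
    f.write("interface = 'ok'\n")

# The path entry must be the directory *containing* the package.
sys.path.insert(0, root)

from toolkit.interface import interface
print(interface)  # ok
```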
|
Python error "ImportError: No module named"
|
[
"",
"python",
"importerror",
"python-import",
""
] |
I'm trying to convert an Excel document into a table in SQL 2005. I found the link below and am wondering if it looks like a solution. If so, what would the @excel_full_file_name syntax be and where would the path be relative to?
<http://www.siccolo.com/Articles/SQLScripts/how-to-create-sql-to-convert-Excel_to_table.html>
|
You can use the BULK INSERT T-SQL command if you just want a pure sql solution. You have to save the file as csv/text first.
```
BULK
INSERT YourDestinationTable
FROM 'D:\YourFile.csv'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
)
GO
```
Alternatively, you can try OPENROWSET, again a pure T-SQL solution.
```
SELECT * FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
'Excel 8.0;DATABASE=D:\YourExcelFile.xls', 'Select * from YourExcelFile')
```
It really depends on how much control and flexibility you want, the SSIS route will have benefits over these methods.
|
Glancing over the code, I would expect it to be the full path name of the excel document:
For example: c:\path\to\my\excel\document.xls
I haven't installed the procedure though or run it, so I could be wrong - but that's what it appears to be at first glance.
|
Creating a SQL table from a xls (Excel) file
|
[
"",
"sql",
"excel",
"sql-server-2005",
"database-table",
""
] |
I'll try to explain my scenario as best i can;
At each application *tick* I query the current state of the keyboard and mouse and wrap them in individual classes and data structures. For the keyboard it's an array of my *Keys* enum (one item for each of the keys that are currently pressed), and for the mouse it's a class containing coordinate deltas and bools for each button pressed.
I also have a rudimentary state machine managed via a state manager class which maintains the stack and marshalls the states.
All I want to know is, how best to pass the input (snapshots) to the individual states my app can be in at any one time?
I would like to process as much of the input as possible away from the individual states, so as to reduce repeating logic within the states.
Or would it be better to keep the input snapshots as pure as possible and pass them to the states so they can keep their input-specific logic hidden?
**Note**
This structure is similiar to how I imagine a game would work, and although this application is not a game it does need to be processed as quickly as possible.
|
Why are you querying the state of the keyboard and mouse with each tick? A much better and traditional solution would be to capture events fired from the keyboard and mouse. That way you only need to update the state when you HAVE to.
|
If you simply query your keyboard and mouse every tick, I can guarantee you'll run into problems. When you query every tick, you'll find that you miss inputs that occur quickly (within the time domain of a single tick). Imagine a situation where the user presses and releases a key/button between two ticks (it will happen more than you think) - you'll completely miss the input.
Particularly in C#, which is *completely* geared for input events like this, you should be using events.
|
Passing input to a state machine (c#)
|
[
"",
"c#",
"user-input",
"state-machine",
""
] |
I have a user table in my mysql database that has a password column. Currently, I use the MD5 algorithm to hash the users' passwords for storage in the database. Now I like to think that I am a security-conscious person. I noticed while reading the MySQL docs that they don't recommend MD5 or the SHA/SHA1 hashing methods, but don't offer an alternative.
What would be the best way to hash my passwords in MySQL? A function that is natively supported in both PHP and MySQL would be ideal and necessary with my current implementation.
Thanks!
|
It's not necessarily that you shouldn't use MD5, as much it's that you shouldn't use *just* MD5, as this leaves you vulnerable to rainbow-table attacks (a rainbow table is a table of precomputed hash values - if your password is even remotely common or simple, the attacker needs merely to look up the hash and he knows your plaintext password.)
At the very least you should add a salt to every password so that any existing rainbow table is useless, forcing the attacker to generate an entire new rainbow table just for your database of passwords.
Better still is to use a different salt for every password in your database, say the username it's associated with, so that an attacker can't even generate a rainbow table for your whole database and has to crack each entry separately.
MD5 is also a very fast algorithm. Speed is the enemy when it comes to cracking - the longer it takes to generate a hash, the longer it takes for each attempt a hacker makes. Something simple like hashing the plaintext 100 times with a new additional salt each time would be barely perceptible (if at all) to a user logging in to your site, but it would increase the time it takes to brute-force a password by the same 100 times.
Far, far more detail here: <http://www.codinghorror.com/blog/archives/000953.html>
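To illustrate the two ideas above (a unique salt per password, plus deliberately slow, iterated hashing), here is a sketch in Python rather than PHP/MySQL; PBKDF2 bundles the salting and the repeated hashing into one call, and the function names here are just for illustration:

```python
import hashlib
import os

def hash_password(password, salt=None, rounds=100_000):
    # A fresh random salt per password defeats precomputed rainbow tables.
    if salt is None:
        salt = os.urandom(16)
    # PBKDF2 applies the hash `rounds` times, making brute force proportionally slower.
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, rounds)
    return salt, digest

def check_password(password, salt, expected, rounds=100_000):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, rounds)
    return candidate == expected

salt, stored = hash_password("s3cret")
print(check_password("s3cret", salt, stored))  # True
print(check_password("wrong", salt, stored))   # False
```

You would store both the salt and the digest alongside the user row; verification simply repeats the derivation with the stored salt.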
|
MD5 is considered to be weak by today's standards. It would still take some work to crack a hash made with MD5, but it's several times easier than guessing the password by brute-force. Ideally, cracking a hash should not be easier than brute-force.
SHA1 is also considered easier to crack than guessing the password by brute-force.
I actually contributed a patch to MySQL to surface the SHA224, SHA256, SHA384, and SHA512 functions from OpenSSL. These are recommended by NIST for password hashing (actually SHA256 and higher).
My patch was finished by MySQL engineers, and is included in MySQL 6.0.5 and later, if I recall.
If you use an earlier version of MySQL (and who doesn't), then you can probably use an implementation of strong hashing functions in your host language. PHP has the `hash()` function for example. You can do the hashing in your application and save the resulting message string to the database.
Don't forget to do [salting](http://en.wikipedia.org/wiki/Salting_(cryptography)), too!
|
What function to use to hash passwords in MySQL?
|
[
"",
"php",
"mysql",
"hash",
"passwords",
""
] |
I have the following shell script registered in my "Login Items" preferences but it does not seem to have any effect. It is meant to launch the moinmoin wiki but only works when it is run by hand from a terminal window, after which it runs until the machine is next shut down.
```
#!/bin/bash
cd /Users/stuartcw/Documents/Wiki/moin-1.7.2
/usr/bin/python wikiserver.py >> logs/`date +"%d%b%Y"`.log 2>&1 &
```
I would really like the Wiki to be available after restarting so any help in understanding this would be appreciated.
|
launchd is one of the best parts of MacOS X, and it causes me great pain to not be able to find it on other systems.
Edit and place this in `/Library/LaunchDaemons` as `com.you.wiki.plist`
```
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.you.wiki</string>
    <key>LowPriorityIO</key>
    <true/>
    <key>KeepAlive</key>
    <true/>
    <key>RunAtLoad</key>
    <true/>
    <key>Nice</key>
    <integer>1</integer>
    <key>WorkingDirectory</key>
    <string>/Users/stuartcw/Documents/Wiki/moin-1.7.2</string>
    <key>UserName</key>
    <string>user to run this as</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/bin/python</string>
        <string>wikiserver.py</string>
    </array>
</dict>
</plist>
```
|
Try using launchd. More info at <http://www.macgeekery.com/tips/all_about_launchd_items_and_how_to_make_one_yourself>
|
Shell Script doesn't run automatically though it is registered in Mac OS X Login Items
|
[
"",
"python",
"macos",
"bash",
"launchd",
"moinmoin",
""
] |
I am trying to write my first real Python function that does something real. What I want to accomplish is searching a given folder, then opening all the images in it and merging them together so that they make a filmstrip image. Imagine 5 images stacked on top of each other in one image.
I have this code now, which should be pretty much OK, but probably needs some modification:
```
import os
import Image

def filmstripOfImages():
    imgpath = '/path/here/'
    files = glob.glob(imgpath + '*.jpg')
    imgwidth = files[0].size[0]
    imgheight = files[0].size[1]
    totalheight = imgheight * len(files)
    filename = 'filmstrip.jpg'
    filmstrip_url = imgpath + filename

    # Create the new image. The background doesn't have to be white
    white = (255,255,255)
    filmtripimage = Image.new('RGB',(imgwidth, totalheight),white)
    row = 0
    for file in files:
        img = Image.open(file)
        left = 0
        right = left + imgwidth
        upper = row*imgheight
        lower = upper + imgheight
        box = (left,upper,right,lower)
        row += 1
        filmstripimage.paste(img, box)
    try:
        filmstripimage.save(filename, 'jpg', quality=90, optimize=1)
    except:
        filmstripimage.save(miniature_filename, 'jpg', quality=90)
```
How do I modify this so that it saves the new filmstrip.jpg in the same directory as I loaded the images from? And it probably has some things that are missing or wrong; anybody got a clue?
Related question: [How to generate a filmstrip image in python from a folder of images?](https://stackoverflow.com/questions/334827/how-to-generate-a-filmstrip-image-in-python-from-a-folder-of-images)
|
It is not an answer to your question, but it might be helpful:
```
#!/usr/bin/env python
from PIL import Image

def makefilmstrip(images, mode='RGB', color='white'):
    """Return a combined (filmstripped, each on top of the other) image of the images.
    """
    width = max(img.size[0] for img in images)
    height = sum(img.size[1] for img in images)
    image = Image.new(mode, (width, height), color)
    left, upper = 0, 0
    for img in images:
        image.paste(img, (left, upper))
        upper += img.size[1]
    return image

if __name__=='__main__':
    # Here's how it could be used:
    from glob import glob
    from optparse import OptionParser

    # process command-line args
    parser = OptionParser()
    parser.add_option("-o", "--output", dest="file",
                      help="write combined image to OUTPUT")
    options, filepatterns = parser.parse_args()
    outfilename = options.file

    filenames = []
    for files in map(glob, filepatterns):
        if files:
            filenames += files

    # construct image
    images = map(Image.open, filenames)
    img = makefilmstrip(images)
    img.save(outfilename)
```
Example:
```
$ python filmstrip.py -o output.jpg *.jpg
```
|
I think if you change your `try` section to this:
```
filmstripimage.save(filmstrip_url, 'jpg', quality=90, optimize=1)
```
|
How to complete this python function to save in the same folder?
|
[
"",
"python",
"image-processing",
"python-imaging-library",
""
] |
Why does nvarchar(256) seem to be the standard for user names in SQL Server?
Any system functions that return a user name return nvarchar(256), the ASP Membership provider uses nvarchar(256)
256 seems like an odd number (yes, I know it's even...) - 255 I could understand (1-byte address) but 256 doesn't make sense to me.
Can anyone explain?
|
As programmers we automatically count starting at 0, but in this case nvarchar(0) would mean no characters. Turns out that 256 is your nice round number 2^8.
|
2^8 is 256, not 255.
Many times you will see numbering schemes from 0-255 which is 256 numbers when you include the 0.
|
nvarchar(256)....?
|
[
"",
"sql",
"sql-server",
""
] |
I'm looking at some GXT code for GWT and I ran across this use of Generics that I can't find another example of in the Java tutorials. The class name is [`com.extjs.gxt.ui.client.data.BaseModelData`](http://extjs.com/deploy/gxtdocs/com/extjs/gxt/ui/client/data/BaseModelData.html) if you want to look at all of the code. Here are the important parts:
```
private RpcMap map;

public <X> X get(String property) {
    if (allowNestedValues && NestedModelUtil.isNestedProperty(property)) {
        return (X) NestedModelUtil.getNestedValue(this, property);
    }
    return map == null ? null : (X) map.get(property);
}
```
`X` is defined nowhere else in the class or anywhere in the hierarchy, and when I hit "go to declaration" in eclipse it just goes to the `<X>` in the public method signature.
I've tried to call this method with the following two examples to see what happens:
```
public Date getExpiredate() {
    return get("expiredate");
}

public String getSubject() {
    return get("subject");
}
```
They compile and show no errors or warnings. I would think at the very least I would have to do a cast to get this to work.
Does this mean that Generics allow a magic return value that can be anything and will just blow up at runtime? This seems counter to what generics are supposed to do. Can anyone explain this to me and possibly give me a link to some documentation that explains this a little better? I've gone through Sun's 23-page PDF on generics, and every example of a return value is defined either at the class level or in one of the parameters passed in.
|
The method returns a type of whatever you expect it to be (`<X>` is defined in the method and is absolutely unbounded).
This is very, very dangerous as no provision is made that the return type actually matches the returned value.
The only advantage this has is that you don't have to cast the return value of such generic lookup methods that can return any type.
I'd say: use such constructs with care, because you lose pretty much all type-safety and gain only that you don't have to write an explicit cast at each call to `get()`.
And yes: this pretty much is black magic that blows up at runtime and breaks the entire idea of what generics should achieve.
|
The type is declared on the method. That's what "`<X>`" means. The type is scoped to just the method and is relevant to a particular call. The reason your test code compiles is that the compiler tries to determine the type and will complain only if it can't. There are cases where you have to be explicit.
For example, the declaration for [`Collections.emptySet()`](http://java.sun.com/javase/6/docs/api/java/util/Collections.html#emptySet()) is
```
public static final <T> Set<T> emptySet()
```
In this case, the compiler can guess:
```
Set<String> s = Collections.emptySet();
```
But if it can't, you must type:
```
Collections.<String>emptySet();
```
|
Java Generics: Generic type defined as return type only
|
[
"",
"java",
"generics",
"java-5",
""
] |
This originally was a problem I ran into at work, but is now something I'm just trying to solve for my own curiosity.
I want to find out if int 'a' contains the int 'b' in the most efficient way possible. I wrote some code, but it seems no matter what I write, parsing it into a string and then using indexOf is twice as fast as doing it mathematically.
Memory is not an issue (within reason), just sheer processing speed.
This is the code I have written to do it mathematically:
```
private static int[] exponents = {10, 100, 1000, 10000, 100000, 1000000, 10000000, 100000000, 1000000000 };

private static boolean findMatch(int a, int b) {
    if (b > a) return false;
    if (a == b) return true;

    int needleLength = getLength(b);
    int exponent = exponents[needleLength];

    int subNum;
    while (a >= 1) {
        subNum = a % exponent;
        if (subNum == b)
            return true;
        a /= 10;
    }
    return false;
}

private static int getLength(int b) {
    int len = 0;
    while (b >= 1) {
        len++;
        b /= 10;
    }
    return len;
}
```
Here's the string method I'm using, which seems to trump the mathematical method above:
```
private static boolean findStringMatch(int a, int b) {
    return String.valueOf(a).indexOf(String.valueOf(b)) != -1;
}
```
So although this isn't really required for me to complete my work, I was just wondering if anyone could think of any way to further optimize my way of doing it mathematically, or an entirely new approach altogether. Again memory is no problem, I am just shooting for sheer speed.
I'm really interested to see or hear anything anyone has to offer on this.
**EDIT:** When I say contains I mean can be anywhere, so for example, findMatch(1234, 23) == true
**EDIT:** For everyone saying that this crap is unreadable and unnecessary: you're missing the point. The point was to get to geek out on an interesting problem, not come up with an answer to be used in production code.
|
This is along Kibbee's line, but I got a little intrigued by this before he posted and worked this out:
```
long mask ( long n ) {
    long m = n % 10;
    long n_d = n;
    long div = 10;
    int shl = 0;
    while ( n_d >= 10 ) {
        n_d /= 10;
        long t = n_d % 10;
        m |= ( t << ( shl += 4 ));
    }
    return m;
}

boolean findMatch( int a, int b ) {
    if ( b < a ) return false;
    if ( a == b ) return true;

    long m_a = mask( a ); // set up mask O(n)
    long m_b = mask( b ); // set up mask O(m)

    while ( m_a < m_b ) {
        if (( m_a & m_b ) == m_a ) return true;
        m_a <<= 4; // shift - fast!
        if ( m_a == m_b ) return true;
    } // O(p)
    return false;
}

void testContains( int a, int b ) {
    print( "findMatch( " + a + ", " + b + " )=" + findMatch( a, b ));
}

testContains( 12, 120 );
testContains( 12, 125 );
testContains( 123, 551241238 );
testContains( 131, 1214124 );
testContains( 131, 1314124 );
```
---
Since 300 characters is far too little to make an argument in, I'm editing this main post to respond to Pyrolistical.
Unlike the OP, I wasn't that surprised that a native compiled indexOf was faster than Java code with primitives. So my goal was not to find something I thought was faster than a native method called zillions of times all over Java code.
The OP made it clear that this was not a production problem and more along the lines of an idle curiosity, so my answer solves that curiosity. My guess was that speed was an issue, when he was trying to solve it in production, but as an idle curiosity, "This method will be called millions and millions of times" no longer applies. As he had to explain to one poster, it's no longer pursued as production code, so the complexity no longer matters.
Plus it provides the only implementation on the page that manages to find the "123" in "551241238", so unless correctness is an extraneous concern, it provides that. Also the solution space of "an algorithm that solves the problem mathematically using Java primitives but beats optimized native code" might be *EMPTY*.
Plus, it's not clear from your comment whether or not you compared apples to apples. The functional spec is f( int, int )-> boolean, not f( String, String )-> boolean (which is kind of the domain of `indexOf`) . So unless you tested something like this (which could still beat mine, and I wouldn't be awfully surprised.) the additional overhead *might* eat up some of that excess 40%.
```
boolean findMatch( int a, int b ) {
String s_a = "" + a;
String s_b = "" + b;
return s_a.indexOf( s_b ) > -1;
}
```
It does the same basic steps. log10( a ) encoding + log10( b ) encoding + actually finding the match, which is as well O(*n*) where *n* is the largest logarithm.
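The same nibble-mask idea can be sketched in Python; this is an illustrative translation, not the code above, and here `a` is the haystack and `b` is the needle, matching the original question's argument order:

```python
def mask(n):
    # Pack the decimal digits of n into 4-bit nibbles, least significant digit first.
    m, shift = 0, 0
    while True:
        m |= (n % 10) << shift
        n //= 10
        shift += 4
        if n == 0:
            return m

def find_match(a, b):
    # True if the decimal digits of b appear contiguously in the digits of a.
    if a == b:
        return True
    m_a, m_b = mask(a), mask(b)
    window = (1 << (4 * len(str(b)))) - 1  # covers len(str(b)) nibbles
    while m_a >= m_b and m_a > 0:
        if (m_a & window) == m_b:  # compare the lowest nibbles of the haystack
            return True
        m_a >>= 4                  # then slide: drop the haystack's lowest digit
    return False

print(find_match(551241238, 123))  # True
print(find_match(1234, 23))        # True
print(find_match(1214124, 131))    # False
```

Because each digit occupies exactly one nibble, the windowed comparison is an exact digit-substring test rather than a bitwise approximation.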
|
The string way *should* be faster, because your problem is textual, not mathematical. Notice that your "contains" relationship says nothing about the numbers; it only says something about their *decimal* representations.
Notice also that the function you want to write will be unreadable - another developer will never understand what you are doing. (See what trouble you had with that here.) The string version, on the other hand, is perfectly clear.
|
Finding numerical substrings mathematically, without string comparison
|
[
"",
"java",
"performance",
"integer",
"substring",
"contains",
""
] |
I traditionally deploy a set of web pages which allow for manual validation of core application functionality. One example is LoggerTest.aspx which generates and logs a test exception. I've always chosen to raise a DivideByZeroException using an approach similar to the following code snippet:
```
try
{
    int zero = 0;
    int result = 100 / zero;
}
catch (DivideByZeroException ex)
{
    LogHelper.Error("TEST EXCEPTION", ex);
}
```
The code works just fine but I feel like there must be a more elegant solution. Is there a best way to raise an exception in C#?
|
```
try
{
    throw new DivideByZeroException();
}
catch (DivideByZeroException ex)
{
    LogHelper.Error("TEST EXCEPTION", ex);
}
```
|
Short answer:
```
throw new Exception("Test Exception");
```
You will need
```
using System;
```
|
What's the best way to raise an exception in C#?
|
[
"",
"c#",
"exception",
""
] |
Based on their work, how do you distinguish a great SQL developer?
Examples might include:
- Seldom uses CURSORs, and tries to refactor them away.
- Seldom uses temporary tables, and tries to refactor them away.
- Handles NULL values in OUTER JOINs with confidence.
- Avoids SQL extensions that are not widely implemented.
- Knows how to indent with elegance.
|
I've found that a great SQL developer is usually also a great database designer, and will prefer to be involved in both the design and implementation of the database. That's because a bad database design can frustrate and hold back even the best developer - good SQL instincts don't always work right in the face of pathological designs, or systems where RI is poor or non-existent. So, one way to tell a great SQL developer is to test them on data modeling.
Also, a great DB developer has to have complex join logic down cold, and know exactly what the results of various multi-way joins will be in different situations. Lack of comfort with joins is the #1 cause of bad SQL code (and bad SQL design, for that matter).
As for specific syntax things, I'd hesitate at directives like:
> Does not use CURSORs.
>
> Does not use temporary tables.
Use of those techniques might allow you to tell the difference between a dangerously amateur SQL programmer (who uses them when simple relational predicates would be far better) and a decent starting SQL programmer (who knows how to do most stuff without them). However, there are many situations in real world usage where temp tables and cursors are perfectly adequate ways (sometimes, the only ways) to accomplish things (short of moving to another layer to do the processing, which is sometimes better anyway).
So, use of advanced concepts like these isn't forbidden, but unless you're clearly dealing with a SQL expert working on a really tough problem that, for some reason, doesn't lend itself to a relational solution ... yeah, they're probably warning signs.
|
I don't think that cursors, temporary tables or other SQL practices are inherently bad or that their usage is a clear sign of how good a database programmer is.
I think there is the right tool for every type of problem. Sure, if you only have a hammer, everything looks like a nail. I think a great SQL programmer or database developer is a person who knows which tool is the right one in a specific situation. IMHO you can't generalize excluding specific patterns.
But a rule of thumb may be: a great database developer will find a more **short and elegant solution** for complex situations than the average programmer.
|
Signs of a great SQL developer
|
[
"",
"sql",
""
] |
I have a php script which accesses an MSSQL2005 database, reads some data from it and sends the results in a mail.
There are special characters in both some of the column names and in the fields themselves.
When I access the script through my browser (webserver iis), the query is executed correctly and the contents of the mail are correctly (for my audience) encoded.
However, when I execute php from the console, the query fails (due to the special characters in the column names). If I replace the special characters in the query with calls to chr() and the character code in latin-1, the query gets executed correctly, but the results are also encoded in latin-1 and therefore not displayed correctly in the mail.
Why is PHP/the MSSQL driver/… using a different encoding in the two scenarios? Is there a way around it?
If you wonder, I need the console because I want to schedule the script using SQLAgent (or taskmanager or whatever).
|
Depending on the type of characters you have in your database, it might be a console limitation I guess. If you type `chcp` in the console, you'll see what is the active code page, which might something like [CP437](http://en.wikipedia.org/wiki/Code_page_437) also known as Extended ASCII. If you have characters out of this code page, like in UTF8, you might run into problems. You can change the current active code page by typing `chcp 65001` to switch to UTF8.
You might also want to change the default Raster font to Lucida Console depending on the required characters as not all fonts support extended characters (right click on command prompt window's title, properties, font).
As already said, PHP's unicode support is not ideal, but you can manage in PHP 5 with a few well-placed calls to [utf8\_decode](http://php.net/utf8_decode). The secret of character encoding is to understand the current encoding of **all** the tools you are using: database, database connection, current bytes in your PHP variable, your output to the console screen, your email's body encoding, your email client, and so on...
For everything that have special characters, in our modern days, something like UTF8 is often recommended. Make sure everything along the way is set to UTF8 and convert only where necessary.
|
PHP's poor support for the non-English world is well known. I've never used a database with characters outside the basic ASCII realm, but obviously you already have a workaround, and it seems you just have to live with it.
If you wanted to take it a step further, you could:
1. Write an array that contains all the special chars and their CHR equivalents
2. foreach the array and str\_replace on the query
But if the query is hardcoded, I guess what you have is fine. Also, make sure you are using the latest PHP, at least 4.4.x; there's always a chance this was fixed, but I skimmed the 4.x.x release notes and I don't see anything that relates to your problem.
|
PHP, MSSQL2005 and Codepages
|
[
"",
"php",
"sql-server",
"encoding",
"codepages",
""
] |
What is the difference between `HAVING` and `WHERE` in an `SQL SELECT` statement?
EDIT: I have marked Steven's answer as the correct one as it contained the key bit of information on the link:
> When `GROUP BY` is not used, `HAVING` behaves like a `WHERE` clause
The situation I had seen the `WHERE` in did not have `GROUP BY` and is where my confusion started. Of course, until you know this you can't specify it in the question.
|
> HAVING specifies a search condition for a
> group or an aggregate function used in SELECT statement.
[Source](http://blog.sqlauthority.com/2007/07/04/sql-server-definition-comparison-and-difference-between-having-and-where-clause/)
|
HAVING: is used to check conditions *after* the aggregation takes place.
WHERE: is used to check conditions *before* the aggregation takes place.
This code:
```
select City, CNT=Count(1)
From Address
Where State = 'MA'
Group By City
```
Gives you a table of all cities in MA and the number of addresses in each city.
This code:
```
select City, CNT=Count(1)
From Address
Where State = 'MA'
Group By City
Having Count(1)>5
```
Gives you a table of cities in MA with more than 5 addresses and the number of addresses in each city.
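To make the before/after distinction concrete, here is the same pattern run against SQLite from Python — SQLite is a stand-in for SQL Server purely so the example is runnable; the city names and counts are made-up sample data, and the SQL itself is the point:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Address (City TEXT, State TEXT)")
rows = [("Boston", "MA")] * 6 + [("Salem", "MA")] * 2 + [("Albany", "NY")] * 3
conn.executemany("INSERT INTO Address VALUES (?, ?)", rows)

# WHERE filters individual rows *before* grouping...
all_ma = conn.execute(
    "SELECT City, COUNT(1) FROM Address "
    "WHERE State = 'MA' GROUP BY City").fetchall()

# ...HAVING filters whole groups *after* aggregation.
big_ma = conn.execute(
    "SELECT City, COUNT(1) FROM Address "
    "WHERE State = 'MA' GROUP BY City HAVING COUNT(1) > 5").fetchall()

print(all_ma)  # every MA city with its address count
print(big_ma)  # only cities with more than 5 addresses
```

Note that the `CNT=Count(1)` alias syntax above is T-SQL-specific; standard SQL (and SQLite) spells it `COUNT(1) AS CNT`.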
|
What is the difference between HAVING and WHERE in SQL?
|
[
"",
"sql",
"where-clause",
"having",
""
] |
I have a C# application that is a client to a web service. One of my requirements is to allow capturing the SOAP that I send, so that if there is a problem, I can either fix the bug, or demonstrate that the problem is in the service I am calling.
My WebReference proxy service class derives from `System.Web.Services.Protocols.SoapHttpClientProtocol` as usual. If I had a magic wand, I would make this base class implement an event `OnPost` that I could handle to write the SOAP into my logs and continue.
Short of running a packet sniffer like WireShark, is there an easy way to get this level of logging?
|
I think what you are looking for is addressed in this question:
[Getting RAW Soap Data from a Web Reference Client running in ASP.net](https://stackoverflow.com/questions/300674/getting-raw-soap-data-from-a-web-reference-client-running-in-aspnet)
It looks like a lot of code though.
|
If the application is running on your local box and the web service isn't doing anything funky, you can use Fiddler. Fire up IE, run Fiddler, and you'll see your web service calls go through Fiddler's proxy too.
I just used this this morning to do almost the same thing. I had to prove the data my web service was sending wasn't messed up.
|
In C#, how would I capture the SOAP used in a web service call?
|
[
"",
"c#",
"web-services",
"soap",
""
] |
I am using the SharpZipLib open source .net library from [www.icsharpcode.net](http://www.icsharpcode.net)
My goal is to unzip an xml file and read it into a dataset. However I get the following error reading the file into a dataset: "Data at the root level is invalid. Line 1, position 1."
I believe what is happening is the unzipping code is not releasing the file for the following reasons.
1.) If I unzip the file and exit the application. When I restart the app I CAN read the unzipped file into a dataset.
2.) If I read in the xml file right after writing it out (no zipping) then it works fine.
3.) If I write the dataset to xml, zip it up, unzip it, then attempt to read it back in I get the exception.
The code below is pretty straightforward. UnZipFile will return the name of the file just unzipped. Right below this call is the call to read it into a dataset. The variable fileToRead is the full path to the newly unzipped xml file.
```
string fileToRead = UnZipFile(filepath, DOViewerUploadStoreArea);
ds.ReadXml(fileToRead )
private string UnZipFile(string file, string dirToUnzipTo)
{
string unzippedfile = "";
try
{
ZipInputStream s = new ZipInputStream(File.OpenRead(file));
ZipEntry myEntry;
string tmpEntry = String.Empty;
while ((myEntry = s.GetNextEntry()) != null)
{
string directoryName = dirToUnzipTo;
string fileName = Path.GetFileName(myEntry.Name);
string fileWDir = directoryName + fileName;
unzippedfile = fileWDir;
FileStream streamWriter = File.Create(fileWDir);
int size = 4096;
byte[] data = new byte[4096];
while (true)
{
size = s.Read(data, 0, data.Length);
if (size > 0) { streamWriter.Write(data, 0, size); }
else { break; }
}
streamWriter.Close();
}
s.Close();
}
catch (Exception ex)
{
LogStatus.WriteErrorLog(ex, "ERROR", "DOViewer.UnZipFile");
}
return (unzippedfile);
}
```
|
Well, what does the final file look like? (compared to the original). You don't show the zipping code, which might be part of the puzzle, especially as you are partially swallowing the exception.
I would also try ensuring everything `IDisposable` is `Dispose()`d, ideally via `using`; also - in case the problem is with path construction, use `Path.Combine`. And note that if `myEntry.Name` contains sub-directories, you will need to create them manually.
Here's what I have - it works for unzipping ICSharpCode.SharpZipLib.dll:
```
private string UnZipFile(string file, string dirToUnzipTo)
{
string unzippedfile = "";
try
{
using(Stream inStream = File.OpenRead(file))
using (ZipInputStream s = new ZipInputStream(inStream))
{
ZipEntry myEntry;
byte[] data = new byte[4096];
while ((myEntry = s.GetNextEntry()) != null)
{
string fileWDir = Path.Combine(dirToUnzipTo, myEntry.Name);
string dir = Path.GetDirectoryName(fileWDir);
// note only supports a single level of sub-directories...
if (!Directory.Exists(dir)) Directory.CreateDirectory(dir);
unzippedfile = fileWDir; // note; returns last file if multiple
using (FileStream outStream = File.Create(fileWDir))
{
int size;
while ((size = s.Read(data, 0, data.Length)) > 0)
{
outStream.Write(data, 0, size);
}
outStream.Close();
}
}
s.Close();
}
}
catch (Exception ex)
{
Console.WriteLine(ex);
}
return (unzippedfile);
}
```
It could also be that the problem is either in the code that writes the zip, or the code that reads the generated file.
|
I compared the original with the final using TextPad and they are identical.
Also I rewrote the code to take advantage of the using. Here is the code.
My issue seems to be centered around file locking or something. If I unzip the file, quit the application, then start it up, it will read fine.
```
private string UnZipFile(string file, string dirToUnzipTo)
{
string unzippedfile = "";
try
{
using (ZipInputStream s = new ZipInputStream(File.OpenRead(file)))
{
ZipEntry theEntry;
while ((theEntry = s.GetNextEntry()) != null)
{
string directoryName = dirToUnzipTo;
string fileName = Path.GetFileName(theEntry.Name);
string fileWDir = directoryName + fileName;
unzippedfile = fileWDir;
if (fileName != String.Empty)
{
using (FileStream streamWriter = File.Create(fileWDir))
{
int size = 2048;
byte[] data = new byte[2048];
while (true)
{
size = s.Read(data, 0, data.Length);
if (size > 0)
{
streamWriter.Write(data, 0, size);
}
else
{
break;
}
}
}
}
}
}
}
catch (Exception ex)
{
LogStatus.WriteErrorLog(ex, "ERROR", "DOViewer.UnZipFile");
}
return (unzippedfile);
}
```
|
Unzipping a file error
|
[
"",
"c#",
"xml",
""
] |
I have a WCF service which will be hosted under IIS. Now I have some resources (connections) that I create within the service constructor. I need to free up those resources when the IIS instance hosting the service shuts down or resets. These are not resources that I will clear out every time a client disconnects; they are application-wide resources which should only be freed when the service itself or IIS shuts down.
I saw another post where someone mentioned that he has created a listener to capture the events of `ServiceHost`. Can someone help me with figuring out how to get a handle of the `ServiceHost` and capture the IIS reset or IIS shutdown event and free up my resources in WCF service?
|
Well, I'm out of ideas, but I think that [this article](http://msdn.microsoft.com/en-us/library/bb332338.aspx) contains your answer in the chapter: "Accessing ServiceHost in IIS". It seems you need to build your own HostFactory because out of the box IIS uses the standard HostFactory and practically controls the creation and destruction of Hosts. By providing your own HostFactory you can add your own code to control the initialization and destruction...
|
You can use the IDisposable pattern with a finalizer on the class that holds the resources.
On unload of the AppDomain, all objects are finalized, and if the object that holds references to the resources (such as connections) has a finalizer, it will be called and you can close/dispose the resources at that point.
|
How to Create a Listener for WCF ServiceHost events when service is hosted under IIS?
|
[
"",
"c#",
"wcf",
"iis",
""
] |
I have a stored procedure that creates and opens some cursors. It closes them at the end, but if it hits an error those cursors are left open! Then subsequent runs fail when it tries to create cursors since a cursor with the name already exists.
Is there a way I can query which cursors exists and if they are open or not so I can close and deallocate them? I feel like this is better than blindly trying to close and swallow errors.
|
This seems to work for me:
```
CREATE PROCEDURE dbo.p_cleanUpCursor @cursorName varchar(255) AS
BEGIN
DECLARE @cursorStatus int
SET @cursorStatus = (SELECT cursor_status('global',@cursorName))
DECLARE @sql varchar(255)
SET @sql = ''
IF @cursorStatus > 0
SET @sql = 'CLOSE '+@cursorName
IF @cursorStatus > -3
SET @sql = @sql+' DEALLOCATE '+@cursorName
IF @sql <> ''
exec(@sql)
END
```
|
[Look here](http://msdn.microsoft.com/en-us/library/aa172595(SQL.80).aspx) for info on how to find cursors. I have never used any of them because I could figure out a way to get it done without going Row By Agonizing Row.
You should rebuild the sp to either
* not use cursors (we can help - there is almost always a way to avoid RBAR)
* build it in a transaction and roll it back if there is a failure or if you detect an error. Here are some excellent articles on this: [part 1](http://www.sommarskog.se/error-handling-I.html) and [part 2](http://www.sommarskog.se/error-handling-II.html)
If you have SQL2005, you can also use [try catch](http://technet.microsoft.com/en-us/library/ms175976.aspx)
EDIT (in response to your post): Ideally, data generation is best handled at the application level, as applications are better suited for non-set-based operations.
Red Gate has a [SQL Data generator](http://www.red-gate.com/products/SQL_Data_Generator/index.htm) that I have used before (its great for single tables, but takes some configuring if you have lots of FK or a wide [normalized] database).
|
Is there any way to get a list of open/allocated cursors in SQL server?
|
[
"",
"sql",
"sql-server",
"sql-server-2000",
"database-cursor",
""
] |
The C++ standard imposes an ordering on class member variables in memory. It says that the addresses of member variables have to increase in the order of declaration, but only inside one access section. Very specifically, this does not seem to prevent compilers from laying out access sections in an interleaved way. For example:
```
class X {
public:
int i;
int j;
private:
int k;
int n;
};
```
Does the standard allow compilers to lay out the data members in the order i, k, j, n? This would give compilers some (limited) freedom in optimizing object layout without violating the standard.
|
I checked out the C++ standard. In section 9.2, paragraph (or clause or whatever) 12, it says "The order of allocation of nonstatic data members separated by an access-specifier is unspecified." "Unspecified" means implementation-dependent behavior that need not be documented.
Therefore, the standard is explicitly saying nothing about the allocation, except that i must precede j and k must precede n. Therefore, a compiler is allowed to allocate in the order i, k, j, n, and nothing about the ordering need be documented.
|
And no, I think he is NOT trying to spam. This is a valid question and quite interesting, I think.
OK, now I think compilers can do that. The standard says in 9.2 p12:
> Implementation alignment requirements might cause two adjacent members not to be allocated immediately after each other; so might requirements for space for managing virtual functions (10.3) and virtual base classes (10.1).
|
Can C++ access sections be interleaved?
|
[
"",
"c++",
""
] |
Python has a [number of soap stacks](https://stackoverflow.com/questions/206154/whats-the-best-soap-client-library-for-python-and-where-is-the-documentation-fo); as near as I can tell, all have substantial defects.
Has anyone had luck consuming *and* using WSDL for S3, EC2, and SQS in python?
My experience is that suds fails when constructing a Client object; after some wrangling, ZSI generates client code that doesn't work; etc.
Finally, I'm aware of [boto](http://code.google.com/p/boto/) but as it is a hand-rolled wrapper around AWS, it is (1) incomplete and (2) never up-to-date with the latest AWS WSDL.
|
The REST or "Query" APIs are definitely easier to use than SOAP, but unfortunately at least one service (EC2) doesn't provide any alternatives to SOAP. As you've already discovered, Python's existing SOAP implementations are woefully inadequate for most purposes; one workaround approach is to just generate the XML for the SOAP envelope/body directly, instead of going through an intermediate SOAP layer. If you're somewhat familiar with XML / SOAP, this isn't too hard to do in most cases, and allows you to work around any particular idiosyncrasies with the SOAP implementation on the other end; this can be quite important, as just about every SOAP stack out there has its own flavour of bugginess / weirdness to contend with.
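As a sketch of the "build the envelope yourself" approach, the standard library is enough. The service namespace and operation names below are made-up placeholders, not real AWS values — the point is only the shape of a SOAP 1.1 envelope:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
ACTION_NS = "http://example.com/my-service"  # hypothetical service namespace

# Build Envelope/Body/<operation> by hand with qualified names.
envelope = ET.Element(ET.QName(SOAP_NS, "Envelope"))
body = ET.SubElement(envelope, ET.QName(SOAP_NS, "Body"))
request = ET.SubElement(body, ET.QName(ACTION_NS, "DescribeThings"))
ET.SubElement(request, ET.QName(ACTION_NS, "MaxResults")).text = "10"

xml_bytes = ET.tostring(envelope)  # ready to POST with urllib or similar
print(xml_bytes.decode())
```

You trade the convenience of a generated client for full control over the bytes on the wire, which is exactly what you need when the SOAP stack on either end is buggy.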
|
If I'm not mistaken, you can consume Amazon Web Services via REST as well as SOAP. Using REST with Python would be *much* easier.
|
What's the best python soap stack for consuming Amazon Web Services WSDL?
|
[
"",
"python",
"amazon-web-services",
"soap",
"wsdl",
""
] |
I have an application that uses `Ajax.Request` and its `onSuccess` event handler in lots of places.
I need to call a function (that will check the response) before all these `onSuccess` events fire. I tried using `Ajax.Responders.register` with `onComplete` event but it fires after `Ajax.Request`'s `onSuccess` event. Any suggestions?
|
This might be a little late, but for the benefit of anyone else wondering about the same problem I will propose this solution:
You can use Prototypes own implementation of aspect-oriented programming to do this. Granted you will have to modify all your onSuccess-parameters, but it can be done with a simple search-and-replace, instead of updating all your callback functions. Here is an example Ajax.Request creation:
```
new Ajax.Request('example.html', {
parameters: {action: 'update'},
onSuccess: this.updateSuccessful
});
```
Say you have similar code snippets spread all over your code, and you want to precede them all with a certain function that validates the response before the actual function is run (or even prevents it from running at all). By using Function.wrap supplied in Prototype we can do this by extending the code above:
```
new Ajax.Request('example.html', {
parameters: {action: 'update'},
onSuccess: this.updateSuccessful.wrap(validateResponse)
});
```
Where 'validateResponse' is a function similar to this:
```
// If you use the X-JSON-header of the response for JSON, add the third param
function validateResponse(originalFn, transport /*, json */) {
// Validate the transport
if (someConditionMet) {
originalFn(transport /*, json */);
}
}
```
Thus you have extended your onSuccess-functions in one central place with just a quick search for onSuccess and pasting in 'wrap(validateResponse)'. This also gives you the option of having several wrapper-functions depending on the needs of the particular Ajax-request.
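For reference, here is a minimal re-implementation of what Prototype's `Function#wrap` does under the hood — in a real page you would of course use the version Prototype already defines; this sketch just shows the mechanics of the wrapper receiving the original function as its first argument:

```javascript
// Minimal stand-in for Prototype's Function#wrap.
Function.prototype.wrap = function (wrapper) {
  var __method = this;
  return function () {
    // Prepend the (bound) original function to the argument list.
    var args = [__method.bind(this)].concat(Array.prototype.slice.call(arguments));
    return wrapper.apply(this, args);
  };
};

function onSuccess(transport) { return "handled:" + transport; }

// The wrapper decides whether to invoke the original callback.
function validateResponse(originalFn, transport) {
  if (transport !== "bad") {
    return originalFn(transport);
  }
  return "blocked";
}

var wrapped = onSuccess.wrap(validateResponse);
console.log(wrapped("ok"));
console.log(wrapped("bad"));
```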
|
Similar to Aleksander Krzywinski's answer, but I believe this would prevent you from having to sprinkle the use of "wrap" everywhere, by consolidating it in the onCreate Responder.
```
Ajax.Responders.register({
onCreate: function(request) {
request.options['onSuccess'] = request.options['onSuccess'].wrap(validateResponse);
}
});
```
|
Extending every Ajax.Request onSuccess event (Javascript Prototype Framework)
|
[
"",
"javascript",
"ajax",
"prototypejs",
""
] |
I am trying to create a Key Listener in java however when I try
```
KeyListener listener = new KeyListener();
```
Netbeans is telling me that KeyListener is abstract; cannot be instantiated. I know that I am missing some other piece of this key listener, but since this is my first time using a key listener I am unsure of what else I need. Why is it telling me this?
Thanks,
Tomek
|
`KeyListener` is an interface - it has to be implemented by something. So you could do:
```
KeyListener listener = new SomeKeyListenerImplementation();
```
but you can't instantiate it directly. You *could* use an anonymous inner class:
```
KeyListener listener = new KeyListener()
{
public void keyPressed(KeyEvent e) { /* ... */ }
public void keyReleased(KeyEvent e) { /* ... */ }
public void keyTyped(KeyEvent e) { /* ... */ }
};
```
It depends on what you want to do, basically.
|
KeyListener is an interface, so you must write a class that implements it to use it. As Jon suggested, you could create an anonymous class that implements it inline, but there's a class called KeyAdapter that is an abstract class implementing KeyListener, with empty methods for each interface method. If you subclass KeyAdapter, then you only need to override those methods you care about, not every method. Thus, if you only cared about keyPressed, you could do this
```
KeyListener listener = new KeyAdapter()
{
public void keyPressed(KeyEvent e) { /* ... */ }
};
```
This could save you a bit of work.
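A runnable sketch of the `KeyAdapter` approach. No GUI is involved: invoking `keyPressed` directly (with a `null` event, which this override ignores) is only to show that the anonymous subclass is a perfectly usable `KeyListener`:

```java
import java.awt.event.KeyAdapter;
import java.awt.event.KeyEvent;
import java.awt.event.KeyListener;

public class Main {
    public static void main(String[] args) {
        // KeyListener is an interface, so it cannot be instantiated directly;
        // KeyAdapter supplies empty bodies, and we override only keyPressed.
        final StringBuilder log = new StringBuilder();
        KeyListener listener = new KeyAdapter() {
            @Override
            public void keyPressed(KeyEvent e) {
                log.append("pressed");
            }
        };
        // In real code you'd register this with component.addKeyListener(listener).
        listener.keyPressed(null);
        System.out.println(log);
    }
}
```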
|
KeyListener in Java is abstract; cannot be instantiated?
|
[
"",
"java",
"instantiation",
"keylistener",
""
] |
In the application there is a dialog where only numeric string entries are valid. Therefore I would like to set the numeric keyboard layout.
Does anyone know how to simulate key press on the keyboard or any other method to change the keyboard layout?
Thanks!
|
You don't need to.
Just like full Windows, you can set the edit control to be numeric input only. You can either do it [manually](http://msdn.microsoft.com/en-us/library/bb761655(VS.85).aspx) or in the dialog editor in the properties for the edit control.
The SIP should automatically display the numeric keyboard when the numeric only edit control goes into focus.
|
There is only one way to do this (edit: this is referring to the SIP in non-smartphone Windows Mobile, so I'm not sure it's relevant to your question), and it does involve simulating a mouse click on the 123 button. This is only half the problem, however, since you also need to know whether the keyboard is already in numeric mode or not. The way to do this is to peek at a pixel near the upper left corner of the keypad - if you look at how the 123 button works, you'll see that it's system text on windows background, and then inverted in numeric mode (so the pixel will be the system text color only when in numeric mode). There's one more bit of weirdness you have to do to guarantee it will work on all devices (you have to draw a pixel on the keyboard, too).
Lucky for you, I have an easy-to-use code sample that does all this. Unlucky for you, it's in C#, but I think it should at least point you in the right direction.
|
Changing Keyboard Layout on Windows Mobile
|
[
"",
"c++",
"windows-mobile",
""
] |
I'm developing an application with a team in .Net (C++) and provide a COM interface to interact with python and other languages.
What we've found is that pushing data through COM turns out to be pretty slow.
I've considered several alternatives:
* dumping data to a file and sending the file path through com
* Shared Memory via [mmap](http://docs.python.org/library/mmap.html?highlight=shared%20memory#module-mmap)?
* Stream data through a socket directly?
From your experience what's the best way to pass around data?
|
Staying within the Windows interprocess communication mechanisms, we had a positive experience using *Windows named pipes*, with overlapped I/O and the `win32pipe` module from [pywin32](http://pywin32.sourceforge.net/).
You can learn much about win32 and python in the [Python Programming On Win32](http://oreilly.com/catalog/9781565926219/index.html) book.
The sending part simply writes to `r'\\.\pipe\mypipe'`.
A listener (`ovpipe`) object holds an event handle; waiting for a message, possibly along with other events, involves calling `win32event.WaitForMultipleObjects`.
```
rc = win32event.WaitForMultipleObjects(
eventlist, # Objects to wait for.
0, # Wait for one object
timeout) # timeout in milli-seconds.
```
Here is part of the python overlapped listener class:
```
import win32event
import pywintypes
import win32file
import win32pipe
class ovpipe:
"Overlapped I/O named pipe class"
def __init__(self):
self.over=pywintypes.OVERLAPPED()
evt=win32event.CreateEvent(None,1,0,None)
self.over.hEvent=evt
self.pname='mypipe'
self.hpipe = win32pipe.CreateNamedPipe(
r'\\.\pipe\mypipe', # pipe name
win32pipe.PIPE_ACCESS_DUPLEX| # read/write access
win32file.FILE_FLAG_OVERLAPPED,
win32pipe.PIPE_TYPE_MESSAGE| # message-type pipe
win32pipe.PIPE_WAIT, # blocking mode
1, # number of instances
512, # output buffer size
512, # input buffer size
2000, # client time-out
None) # no security attributes
self.buffer = win32file.AllocateReadBuffer(512)
self.state='noconnected'
self.chstate()
def execmsg(self):
"Translate the received message"
pass
def chstate(self):
"Change the state of the pipe depending on current state"
if self.state=='noconnected':
win32pipe.ConnectNamedPipe(self.hpipe,self.over)
self.state='connectwait'
return -6
elif self.state=='connectwait':
j,self.strbuf=win32file.ReadFile(self.hpipe,self.buffer,self.over)
self.state='readwait'
return -6
elif self.state=='readwait':
size=win32file.GetOverlappedResult(self.hpipe,self.over,1)
self.msg=self.strbuf[:size]
ret=self.execmsg()
self.state = 'noconnected'
win32pipe.DisconnectNamedPipe(self.hpipe)
return ret
```
|
XML/JSON and either a Web Service or directly through a socket. It is also language- and platform-independent, so if you decide you want to host the python portion on UNIX you can, or if you want to suddenly use Java or PHP or pretty much any other language you can.
As a general rule proprietary protocols/architectures like COM offer more restrictions than they do benefits. This is why the open specifications appeared in the first place.
HTH
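A hedged sketch of the socket route on the Python side: `socketpair()` stands in for a real client/server connection, and the 4-byte length prefix is one common framing convention (neither COM nor your C++ side dictates it — the C++ end just has to agree on the same framing):

```python
import json
import socket
import struct

def send_msg(sock, obj):
    payload = json.dumps(obj).encode("utf-8")
    # Length-prefix the payload so the reader knows where the message ends.
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def _recv_exact(sock, n):
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        buf += chunk
    return buf

def recv_msg(sock):
    (size,) = struct.unpack("!I", _recv_exact(sock, 4))
    return json.loads(_recv_exact(sock, size).decode("utf-8"))

# socketpair() stands in for a real connect()/accept() pair.
a, b = socket.socketpair()
send_msg(a, {"op": "update", "values": [1, 2, 3]})
print(recv_msg(b))
```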
|
What's the best way to transfer data from python to another application in windows?
|
[
"",
"python",
"winapi",
"com",
"data-transfer",
""
] |
Basically I need to insert a bunch of data to an Excel file. Creating an OleDB connection appears to be the fastest way, but I seem to have run into memory issues. The memory used by the process seems to keep growing as I execute INSERT queries. I've narrowed them down to only happen when I output to the Excel file (the memory holds steady without the output to Excel). I close and reopen the connection in between each worksheet, but this doesn't seem to have an effect on the memory usage (and neither does Dispose()). The data is written successfully, as I can verify with relatively small data sets. If anyone has insight, it would be appreciated.
*initializeADOConn()* is called in the constructor
*initADOConnInsertComm()* creates the insert parameterized insert query
*writeRecord()* is called whenever a new record is written. New worksheets are created as needed.
```
public bool initializeADOConn()
{
/* Set up the connection string and connect.*/
string connectionString = @"Provider=Microsoft.Jet.OLEDB.4.0;" +
"Data Source=" + this.destination + ";Extended Properties=\"Excel 8.0;HDR=YES;\"";
//DbProviderFactory factory =
//DbProviderFactories.GetFactory("System.Data.OleDb");
conn = new OleDbConnection(connectionString);
conn.ConnectionString = connectionString;
conn.Open();
/* Intialize the insert command. */
initADOConnInsertComm();
return true;
}
public override bool writeRecord(FileListerFileInfo file)
{
/* If all available sheets are full, make a new one. */
if (numWritten % EXCEL_MAX_ROWS == 0)
{
conn.Close();
conn.Open();
createNextSheet();
}
/* Count this record as written. */
numWritten++;
/* Get all of the properties of the FileListerFileInfo record and add
* them to the parameters of the insert query. */
PropertyInfo[] properties = typeof(FileListerFileInfo).GetProperties();
for (int i = 0; i < insertComm.Parameters.Count; i++)
insertComm.Parameters[i].Value = properties[i].GetValue(file, null);
/* Add the record. */
insertComm.ExecuteNonQuery();
return true;
}
```
EDIT:
No, I do not use Excel at all. I'm intentionally avoiding Interop.Excel due to its poor performance (at least from my dabbles with it).
|
The answer is **Yes**, the formula you describe *does* equal a bad time.
If you have a database handy (SQL Server or Access are good for this), you can do all of your inserts into a database table, and then export the table all at once into an Excel spreadsheet.
Generally speaking, databases are good at handling lots of inserts, while spreadsheets aren't.
|
Here are a couple of ideas:
Is the target workbook open? There is a bug ([Memory leak occurs when you query an open Excel worksheet by using ActiveX Data Objects](http://support.microsoft.com/kb/319998/en-us)) which IIRC is actually in the OLE DB provider for Jet (which you are using) although this isn't confirmed in the above article.
Regardless, bulk insert would seem to be the way to go.
You could use the same Jet OLE DB provider to do this: all you need is a one row table. You could even fabricate one on the fly. To create a new Excel workbook, execute `CREATE TABLE` DDL using a non-existent xls file in the connection string and the provider will create the workbook for you with a worksheet to represent the table. You have a connection to your Excel workbook so you could execute this:
```
CREATE TABLE [EXCEL 8.0;DATABASE=C:\MyFabricatedWorkbook;HDR=YES].OneRowTable
(
x FLOAT
);
```
(Even better IMO would be to fabricate a Jet database i.e. .mdb file).
Use `INSERT` to create a dummy row:
```
INSERT INTO [EXCEL 8.0;DATABASE=C:\MyFabricatedWorkbook;HDR=YES].OneRowTable (x)
VALUES (0);
```
Then, still using your connection to your target workbook, you could use something similar to the following to create a derived table (DT1) of your values to `INSERT` in one hit:
```
INSERT INTO MyExcelTable (key_col, data_col)
SELECT DT1.key_col, DT1.data_col
FROM (
SELECT 22 AS key_col, 'abc' AS data_col
FROM [EXCEL 8.0;DATABASE=C:\MyFabricatedWorkbook;HDR=YES].OneRowTable
UNION ALL
SELECT 55 AS key_col, 'xyz' AS data_col
FROM [EXCEL 8.0;DATABASE=C:\MyFabricatedWorkbook;HDR=YES].OneRowTable
UNION ALL
SELECT 99 AS key_col, 'efg' AS data_col
FROM [EXCEL 8.0;DATABASE=C:\MyFabricatedWorkbook;HDR=YES].OneRowTable
) AS DT1;
```
|
Does ADO.NET + massive INSERTs + Excel + C# = "A bad time"?
|
[
"",
"c#",
"excel",
"ado.net",
"memory-management",
"memory-leaks",
""
] |
How do I pad a numeric string with zeroes to the left, so that the string has a specific length?
|
To pad strings:
```
>>> n = '4'
>>> print(n.zfill(3))
004
```
To pad numbers:
```
>>> n = 4
>>> print(f'{n:03}') # Preferred method, python >= 3.6
004
>>> print('%03d' % n)
004
>>> print(format(n, '03')) # python >= 2.6
004
>>> print('{0:03d}'.format(n)) # python >= 2.6 + python 3
004
>>> print('{foo:03d}'.format(foo=n)) # python >= 2.6 + python 3
004
>>> print('{:03d}'.format(n)) # python >= 2.7 + python3
004
```
[String formatting documentation](https://docs.python.org/3/library/string.html#format-string-syntax).
|
Just use the [`rjust`](https://docs.python.org/3/library/stdtypes.html#str.rjust) method of the string object.
This example creates a 10-character length string, padding as necessary:
```
>>> s = 'test'
>>> s.rjust(10, '0')
'000000test'
```
|
How do I pad a string with zeros?
|
[
"",
"python",
"string",
"zero-padding",
""
] |
Today my colleagues and I had a discussion about the usage of the `final` keyword in Java to improve garbage collection.
For example, if you write a method like:
```
public Double doCalc(final Double value)
{
final Double maxWeight = 1000.0;
final Double totalWeight = maxWeight * value;
return totalWeight;
}
```
Declaring the variables in the method `final` would help the garbage collection to clean up the memory from the unused variables in the method after the method exits.
Is this true?
|
Here's a slightly different example, one with final reference-type fields rather than final value-type local variables:
```
public class MyClass {
public final MyOtherObject obj;
}
```
Every time you create an instance of MyClass, you'll be creating an outgoing reference to a MyOtherObject instance, and the GC will have to follow that link to look for live objects.
The JVM uses a mark-sweep GC algorithm, which has to examine all the live references in the GC "root" locations (like all the objects in the current call stack). Each live object is "marked" as being alive, and any object referred to by a live object is also marked as being alive.
After the completion of the mark phase, the GC sweeps through the heap, freeing memory for all unmarked objects (and compacting the memory for the remaining live objects).
Also, it's important to recognize that the Java heap memory is partitioned into a "young generation" and an "old generation". All objects are initially allocated in the young generation (sometimes referred to as "the nursery"). Since most objects are short-lived, the GC is more aggressive about freeing recent garbage from the young generation. If an object survives a collection cycle of the young generation, it gets moved into the old generation (sometimes referred to as the "tenured generation"), which is processed less frequently.
So, off the top of my head, I'm going to say "no, the 'final' modifier doesn't help the GC reduce its workload".
In my opinion, the best strategy for optimizing your memory-management in Java is to eliminate spurious references as quickly as possible. You could do that by assigning "null" to an object reference as soon as you're done using it.
Or, better yet, minimize the size of each declaration scope. For example, if you declare an object at the beginning of a 1000-line method, and if the object stays alive until the close of that method's scope (the last closing curly brace), then the object might stay alive for much longer than is actually necessary.
If you use small methods, with only a dozen or so lines of code, then the objects declared within that method will fall out of scope more quickly, and the GC will be able to do most of its work within the much-more-efficient young generation. You don't want objects being moved into the older generation unless absolutely necessary.
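A tiny illustration of the scope-narrowing advice above — this changes nothing about `final` or GC semantics; it just shows how an inner block keeps a reference from outliving its use:

```java
public class Main {
    public static void main(String[] args) {
        long sum = 0;
        // The inner block limits how long `buffer` is reachable: once the
        // block exits, nothing references the array and it is eligible for
        // collection, instead of pinning memory until the end of main().
        {
            int[] buffer = new int[1000];
            for (int i = 0; i < buffer.length; i++) buffer[i] = i;
            for (int v : buffer) sum += v;
        }
        System.out.println(sum);
    }
}
```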
|
Declaring a local variable `final` will not affect garbage collection; it only means you cannot reassign the variable. (The example above compiles fine, since `totalWeight` is only assigned once.) On the other hand, declaring a primitive (`double` instead of `Double`) `final` allows that value to be inlined into the calling code, so that could cause some memory and performance improvement. This is used when you have a number of `public static final String`s in a class.
In general, the compiler and runtime will optimize where it can. It is best to write the code appropriately and not try to be too tricky. Use `final` when you do not want the variable to be modified. Assume that any easy optimizations will be performed by the compiler, and if you are worried about performance or memory use, use a profiler to determine the real problem.
|
Does using final for variables in Java improve garbage collection?
|
[
"",
"java",
"garbage-collection",
"final",
""
] |
I want to make this specialized w/o changing main. Is it possible to specialize something based on its base class? I hope so.
-edit-
I'll have several classes that inherit from SomeTag. I don't want to write the same specialization for each of them.
```
class SomeTag {};
class InheritSomeTag : public SomeTag {};
template <class T, class Tag=T>
struct MyClass
{
};
template <class T>
struct MyClass<T, SomeTag>
{
typedef int isSpecialized;
};
int main()
{
MyClass<SomeTag>::isSpecialized test1; //ok
MyClass<InheritSomeTag>::isSpecialized test2; //how do i make this specialized w/o changing main()
return 0;
}
```
|
This article describes a neat trick: <http://www.gotw.ca/publications/mxc++-item-4.htm>
Here's the basic idea. You first need an IsDerivedFrom class (this provides runtime and compile-time checking):
```
template<typename D, typename B>
class IsDerivedFrom
{
class No { };
class Yes { No no[3]; };
static Yes Test( B* ); // not defined
static No Test( ... ); // not defined
static void Constraints(D* p) { B* pb = p; pb = p; }
public:
enum { Is = sizeof(Test(static_cast<D*>(0))) == sizeof(Yes) };
IsDerivedFrom() { void(*p)(D*) = Constraints; }
};
```
Then your MyClass needs an implementation that's potentially specialized:
```
template<typename T, int>
class MyClassImpl
{
// general case: T is not derived from SomeTag
};
template<typename T>
class MyClassImpl<T, 1>
{
// T is derived from SomeTag
public:
typedef int isSpecialized;
};
```
and MyClass actually looks like:
```
template<typename T>
class MyClass: public MyClassImpl<T, IsDerivedFrom<T, SomeTag>::Is>
{
};
```
Then your main will be fine the way it is:
```
int main()
{
MyClass<SomeTag>::isSpecialized test1; //ok
MyClass<InheritSomeTag>::isSpecialized test2; //ok also
return 0;
}
```
|
Update for concepts, using C++-20:
```
#include <concepts>
struct NotSomeTag { };
struct SomeTag { };
struct InheritSomeTag : SomeTag { };
template<typename T>
concept ConceptSomeTag = std::is_base_of_v<SomeTag, T>;
template<class T>
struct MyClass {
};
// Specialization.
template<ConceptSomeTag ST>
struct MyClass<ST> {
using isSpecialized = int;
};
int main() {
MyClass<SomeTag>::isSpecialized test1; /* ok */
MyClass<InheritSomeTag>::isSpecialized test2; /* ok */
MyClass<NotSomeTag>::isSpecialized test3; /* fail */
}
```
My post from 2014, using C++-11:
```
#include <type_traits>
struct SomeTag { };
struct InheritSomeTag : SomeTag { };
template<typename T, bool = std::is_base_of<SomeTag, T>::value>
struct MyClass { };
template<typename T>
struct MyClass<T, true> {
typedef int isSpecialized;
};
int main() {
MyClass<SomeTag>::isSpecialized test1; /* ok */
MyClass<InheritSomeTag>::isSpecialized test2; /* ok */
}
```
|
Template specialization based on inherit class
|
[
"",
"c++",
"templates",
"specialization",
""
] |
I'm working on a PyGTK/glade application that currently has 16 windows/dialogs and is about 130KB, and will eventually have around 25 windows/dialogs and be around 200KB. Currently, I'm storing all the windows in one monolithic glade file. When I run a window I call it like...
```
self.wTree = gtk.glade.XML("interface.glade", "WindowXYZ")
```
I wonder if it would be a better idea to split each window into its own glade file. Instead of one glade file with 25 windows/dialogs I'd have 25 glade files with one window/dialog each and call it like so:
```
self.wTree = gtk.glade.XML("windowxyz.glade")
```
What do you guys think is the best way to do this? Is one method more resource intensive than another? One thing that would be nice about going to individual glade files is that naming widgets would be easier. For example, I name all my OK buttons "windowxyz\_ok", but I could change it to simply "ok" instead. Makes things simpler. The downside is that it may be a bit less convenient to make changes to different windows.
I'm open to any and all arguments. Thanks!
|
In my projects, I always have one window per glade file. I'd recommend the same for your project.
The following are the two main reasons:
* It will be faster and use less memory, since each call to gtk.glade.XML() parses the whole thing. Sure you can pass in the root argument to avoid creating the widget tree for all windows, but you'd still have to *parse* all the XML, even if you're not interested in it.
* Conceptually it's easier to understand if you have one toplevel per window. You can tell at a glance which file a given dialog/window lives in just by looking at the filename.
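The one-file-per-window layout also makes lazy loading and caching straightforward. Here is a plain-Python sketch of that pattern — the `loader` callable stands in for `gtk.glade.XML` (an assumption, to keep the sketch self-contained and testable):

```python
# Sketch of lazy, cached per-file loading. In a real PyGTK app you would
# inject gtk.glade.XML as the loader; here it is just any callable.
class WindowFactory(object):
    def __init__(self, loader):
        self._loader = loader
        self._cache = {}

    def get(self, name):
        # Parse "name.glade" the first time only; reuse the tree afterwards.
        if name not in self._cache:
            self._cache[name] = self._loader("%s.glade" % name)
        return self._cache[name]

# Usage in a real app (assumption):
#   factory = WindowFactory(gtk.glade.XML)
#   wTree = factory.get("windowxyz")
```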
|
Did you take some timings to find out whether it makes a difference?
The problem is that, as far as I understand it, Glade always creates all widgets when it parses an XML file, so if you open the XML file and only read a single widget, you are wasting a lot of resources.
The other problem is that you need to re-read the file if you want to have another instance of that widget.
The way I did it before was to put all widgets that were created only once (like the about window, the main window etc) into one glade file, and separate glade files for widgets that needed to be created several times.
|
How to handle a glade project with many windows
|
[
"",
"python",
"gtk",
"pygtk",
"glade",
""
] |
A puzzler from a coworker that I cannot figure out...
```
update btd.dbo.tblpayroll
set empname = ( select b.Legal_Name
from ( SELECT Legal_Name,
Employee_ID
FROM Com.dbo.Workers
WHERE isnumeric(Employee_ID) = 1
) b
where b.Employee_ID = empnum
and b.Legal_name is not NULL
)
where empname is NULL
```
---
Msg 245, Level 16, State 1, Line 1
Conversion failed when converting the varchar value 'N0007 ' to data type int.

The table alias b would actually be a view.
The value 'N0007 ' is in the Workers table. I don't see why it is not being filtered from the results that are being joined.
EDIT:
The alias does, in fact, return the correct rows - so isNumeric is doing the job.
|
I suspect that the optimizer is attempting to apply the where clause of the outer select before the inner select. Presumably it thinks it would be able to do an index lookup on Employee\_ID resulting in a faster query in this case. Try:
```
update btd.dbo.tblpayroll
set empname = ( select Legal_Name
from Com.dbo.Workers
where isnumeric(Employee_ID) = 1
and convert(varchar,Employee_ID)
= convert(varchar,empnum)
and Legal_name is not NULL)
where empname is NULL
```
Converting them all to varchar should take care of it. I don't think it's much less efficient than what you wanted originally, since the isnumeric, if done first, would have forced a table scan anyway.
|
Maybe 'N' is being treated as a currency symbol? You can try replacing IsNumeric with
```
LIKE REPLICATE('[0-9]',/*length of Employee_ID*/)
```
or just
```
LIKE '[0-9]%'
```
if a letter cannot appear in the middle
|
Data not filtering before a join
|
[
"",
"sql",
"sql-server",
"sql-server-2005",
"t-sql",
""
] |
I have a class `Page` that creates an instance of `DB`, which is named `$db`.
In the `__construct()` of `Page`, I create the new `$db` object and I pull a bunch of config data from a file.
Now the DB class has a method `_connectToDB()` which (attempts) to connect to the database.
Is there a way in the DB class to call the parent class's config array? I don't want to make global variables if I don't have to and I don't want to grab the config data twice.
Pseudo code might look something like this...
```
$dbUsername = get_calling_class_vars(configArray['dbUserName']);
```
|
I find that it's often easier to initialise all the "important" objects close to whatever variables they need to know. You could try it this way:
```
/* Code to get config variables here */
$DB = new DB($config);
/* You might want to delete the database password from $config here */
$Page = new Page($config, $DB);
```
Doing it this way means you can also do type-checking on the database object if you want to:
```
class Page {
function __construct(array $config, DBClass $db) { }
}
```
|
You can use `static` if you want to share variables without passing these:
```
class page{
static $configArray = [];
function doWhatever(){
$db = new DB();
$db->connectToDB();
}
}
class DB{
function connectToDB(){
$dbUsername = page::$configArray['dbUserName'];
}
}
```
In this case it makes sense to keep that data `static`, because even if you have multiple instances of the page class, the config parameters should always be the same.
Anyway, I think it could be better to construct and keep the $configArray only in the DB class. If you need it from outside, you can use the same `static` logic.
|
Can you get a calling class's variables?
|
[
"",
"php",
"oop",
""
] |
When would I implement IDisposable on a class as opposed to a destructor? I read [this article](http://www.dotnetspider.com/resources/1382-Understanding-IDisposable-pattern.aspx), but I'm still missing the point.
My assumption is that if I implement IDisposable on an object, I can explicitly 'destruct' it as opposed to waiting for the garbage collector to do it. Is this correct?
Does that mean I should always explicitly call Dispose on an object? What are some common examples of this?
|
A finalizer (aka destructor) is part of garbage collection (GC) - it is indeterminate when (or even if) this happens, as GC mainly happens as a result of memory pressure (i.e. need more space). Finalizers are usually only used for cleaning up *unmanaged* resources, since managed resources will have their own collection/disposal.
Hence `IDisposable` is used to *deterministically* clean up objects, i.e. now. It doesn't collect the object's memory (that still belongs to GC) - but is used for example to close files, database connections, etc.
There are lots of previous topics on this:
* [deterministic finalization](https://stackoverflow.com/questions/188473/how-do-you-manage-deterministic-finalization-in-c)
* [disposing objects](https://stackoverflow.com/questions/245856/when-should-i-dispose-my-objects-in-net)
* [using block](https://stackoverflow.com/questions/212198/what-is-the-c-using-block-and-why-should-i-use-it)
* [resources](https://stackoverflow.com/questions/117135/resources-that-have-to-be-manually-cleaned-up-in-c)
Finally, note that it is not uncommon for an `IDisposable` object to also have a finalizer; in this case, `Dispose()` usually calls `GC.SuppressFinalize(this)`, meaning that GC doesn't run the finalizer - it simply throws the memory away (much cheaper). The finalizer still runs if you forget to `Dispose()` the object.
|
The role of the `Finalize()` method is to ensure that a .NET object can clean up unmanaged resources **when garbage collected**. However, objects such as database connections or file handles should be released as soon as possible, instead of relying on garbage collection. For that you should implement the `IDisposable` interface, and release your resources in the `Dispose()` method.
|
What is the difference between using IDisposable vs a destructor in C#?
|
[
"",
"c#",
".net",
"dispose",
"destructor",
""
] |
I have run into a bit of a tricky problem in some C++ code, which is most easily described using code. I have classes that are something like:
```
class MyVarBase
{
}
class MyVar : public MyVarBase
{
int Foo();
}
class MyBase
{
public:
MyBase(MyVarBase* v) : m_var(v) {}
virtual MyVarBase* GetVar() { return m_var; }
private:
MyVarBase* m_var;
}
```
I also have a subclass of MyBase that needs to have a member of type MyVar because it needs to call Foo. Moving the Foo function into MyVarBase is not an option. Does it make sense to do this:
```
class MyClass : public MyBase
{
public:
MyClass(MyVar* v) : MyBase(v), m_var(v) {}
MyVar* GetVar() { return m_var; }
private:
MyVar* m_var;
}
```
This seems to work but looks really bad and I'm not sure if it's going to cause a memory leak or break a copy constructor. My other options might be to name the MyVar variable in MyClass something else but have it be equal to the m\_var pointer in the base, or to templatise MyBase on the MyVar type.
All these options don't seem ideal so I wanted to know if anyone else has run into a situation like this and if there is a good way to make it work.
|
The correct way to do this is to have the variable only in the base class. As the derived class knows it must be of dynamic type `MyVar`, this is totally reasonable:
```
class MyClass : public MyBase
{
public:
MyClass(MyVar* v) : MyBase(v) {}
MyVar* GetVar() { return static_cast<MyVar*>(MyBase::GetVar()); }
}
```
Since MyVar is derived from MyVarBase, the different return-types of `GetVar` would still work if GetVar was virtual (as is the case here). Note that with that method, there must be no function in `MyBase` that can reset the pointer to something different, obviously.
Note that `static_cast` is the right cast in that case. Using `dynamic_cast`, as proposed by one commenter, would tell the readers and users of GetVar that `MyBase::GetVar()` could return a pointer to an object not of type MyVar. But that doesn't reflect our intention, as you only ever pass in a MyVar. Consistency is one of the most important things in software development. What you *could* do is assert that the cast is valid. It will abort at runtime with an error message in debug builds of your project:
```
MyVar* GetVar() {
assert(dynamic_cast<MyVar*>(MyBase::GetVar()) != 0);
return static_cast<MyVar*>(MyBase::GetVar());
}
```
|
Without knowing more of the context, it's hard to say for sure, but I'd reconsider whether you need this class hierarchy in the first place. Do you really need the MyVarBase and MyBase classes? Could you get away with composition instead of inheritance? Maybe you don't need the class hierarchy at all, if you template the functions and classes that work with the above classes. Maybe CRTP can help too. There are plenty of alternatives to using virtual functions (and to some extent, inheritance in general) in C++.
Anyway, your own suggested solution certainly shouldn't leak any memory (since it doesn't allocate any), and wouldn't break the default copy constructor either (since it just performs a member-wise copy. Now you just have an extra member in the derived class, but that gets copied too).
Of course, you get the additional problem of keeping the two variables in sync, so I still don't like that solution much.
Alternatively, you may be able to remove the variable from the base class. Make the GetVar() function pure virtual, so it is only implemented in derived classes, which can define the member variable in any way they like.
|
Overriding a member variable in C++
|
[
"",
"c++",
"oop",
""
] |
First of all, I'm not looking for miracle... I know how PHP works and that there's not really way to hide my code from the clients without using encryption. But that comes with the cost of an extension to be installed on the running server.
I'm looking for something different though... I'm not looking to encrypt my code or even obfuscate it. There are many PHP scripts without encrypted/obfuscated code but they are commercial applications. For instance, vBulletin and/or IP.Board forum applications.
I just want to know what approach do these guys use for their applications...
I'm also open to any other suggestions.
Please note that I'm a single person and not working for a company. My product is also very specific, it won't sell that much. I just want you guys to know that I can't afford to consult a legal professional either to sue someone or prepare a commercial license. I'm just looking for a simple way to protect my simple product, if it's indeed possible, somehow...
|
Obfuscating things can only inconvenience your legitimate, law-abiding customers, while the people who would rip you off are not your target paying customers anyway. (edited out other thoughts about obfuscation)
Another suggestion for protecting your software: create a business model in which the code is an incomplete part of the value of your offering. For example, sell product licenses along with access to some data you manage on your site, or license the product on a subscription model or with customer support.
Designing a EULA is a legal matter, not a coding matter. You can start by reading some EULA text for products and websites you use. You might find some interesting details!
Creating a proprietary license is highly flexible, and probably a subject beyond the intended scope of StackOverflow, since it's not strictly about coding.
Some parts of a EULA that come to mind:
* Limiting your liability if the product has bugs or causes damage.
* Spelling out how the customer can use their licensed software, for how long, on how many machines, with or without redistribution rights, etc.
* Giving you rights to audit their site, so you can enforce the licenses.
* What happens if they violate the EULA, e.g. they lose their privilege to use your software.
You should consult a legal professional to prepare a commercial EULA.
**edit:** If this project can't justify the expense of a lawyer, check out these resources:
* "[EULA advice](http://discuss.joelonsoftware.com/default.asp?biz.5.293027.14)" on joelonsoftware
* "[How to Write an End User License Agreement](http://www.avangate.com/articles/eula-software_75.htm)"
|
You need to consider your objectives:
1) **Are you trying to prevent people from reading/modifying your code?** If yes, you'll need an obfuscation/encryption tool. I've used [Zend Guard](http://www.zend.com/en/products/guard/) with good success.
2) **Are you trying to prevent unauthorized redistribution of your code?** A EULA/proprietary license will give you the legal power to prevent that, but won't actually stop it. A key/activation scheme will allow you to actively monitor usage, but can be removed unless you also encrypt your code. Zend Guard also has capabilities to lock a particular script to a particular customer machine and/or create time-limited versions of the code if that's what you want to do.
I'm not familiar with vBulletin and the like, but they'd either need to encrypt/obfuscate or trust their users to do the right thing. In the latter case they have the protection of having a EULA which prohibits the behaviors they find undesirable, and the legal system to back up breaches of the EULA.
If you're not prepared/able to take legal action to protect your software and you don't want to encrypt/obfuscate, your options are a) release it with a EULA so you have a legal option if you ever need it and hope for the best, or b) consider whether an open source license might be more appropriate and just allow redistribution.
|
Best solution to protect PHP code without encryption
|
[
"",
"php",
"encryption",
"obfuscation",
""
] |