Prompt stringlengths 10 31k | Chosen stringlengths 3 29.4k | Rejected stringlengths 3 51.1k | Title stringlengths 9 150 | Tags listlengths 3 7 |
|---|---|---|---|---|
I wonder if anyone can help improve my understanding of JOINs in SQL. [If it is significant to the problem, I am thinking MS SQL Server specifically.]
Take 3 tables A, B [A related to B by some A.AId], and C [B related to C by some B.BId]
If I compose a query, e.g.
```
SELECT *
FROM A JOIN B
ON A.AId = B.AId
```
All good - I'm sweet with how this works.
What happens when table C (or some other D, E, ...) gets added?
In the situation
```
SELECT *
FROM A JOIN B
ON A.AId = B.AId
JOIN C ON C.BId = B.BId
```
What is C joining to? - is it that B table (and the values therein)?
Or is it some other temporary result set that is the result of the A+B Join that the C table is joined to?
[The implication being not all values that are in the B table will necessarily be in the temporary result set A+B based on the join condition for A,B]
A specific (and fairly contrived) example of why I am asking is because I am trying to understand behaviour I am seeing in the following:
```
Tables
Account (AccountId, AccountBalanceDate, OpeningBalanceId, ClosingBalanceId)
Balance (BalanceId)
BalanceToken (BalanceId, TokenAmount)
Where:
Account->Opening, and Closing Balances are NULLABLE
(may have opening balance, closing balance, or none)
Balance->BalanceToken is 1:m - a balance could consist of many tokens
```
Conceptually, the closing balance of a date would be tomorrow's opening balance.
If I was trying to find a list of all the opening and closing balances for an account
I might do something like
```
SELECT AccountId
, AccountBalanceDate
, Sum (openingBalanceAmounts.TokenAmount) AS OpeningBalance
, Sum (closingBalanceAmounts.TokenAmount) AS ClosingBalance
FROM Account A
LEFT JOIN BALANCE OpeningBal
ON A.OpeningBalanceId = OpeningBal.BalanceId
LEFT JOIN BALANCE ClosingBal
ON A.ClosingBalanceId = ClosingBal.BalanceId
LEFT JOIN BalanceToken openingBalanceAmounts
ON openingBalanceAmounts.BalanceId = OpeningBal.BalanceId
LEFT JOIN BalanceToken closingBalanceAmounts
ON closingBalanceAmounts.BalanceId = ClosingBal.BalanceId
GROUP BY AccountId, AccountBalanceDate
```
Things work as I would expect until the last JOIN brings in the closing balance tokens - where I end up with duplicates in the result.
[I can fix with a DISTINCT - but I am trying to understand why what is happening is happening]
I have been told the problem is because the relationship between Balance, and BalanceToken is 1:M - and that when I bring in the last JOIN I am getting duplicates because the 3rd JOIN has already brought in BalanceIds multiple times into the (I assume) temporary result set.
I know that the example tables do not conform to good DB design
Apologies for the essay, thanks for any enlightenment :)
Edit in response to question by Marc
Conceptually, for an account there should not be duplicates in BalanceToken (per AccountingDate) - I think the problem comes about because one Account/AccountingDate's closing balance is that Account's opening balance for the next day - so when self-joining to Balance and BalanceToken multiple times to get opening and closing balances, I think Balances (BalanceIds) are being brought into the 'result mix' multiple times. If it helps to clarify the second example, think of it as a daily reconciliation - hence the left joins - an opening (and/or) closing balance may not have been calculated for a given account / accountingdate combination. | *Conceptually* here is what happens when you join three tables together.
1. The optimizer comes up with a plan, which includes a join order. It could be A, B, C, or C, B, A or any of the combinations
2. The query execution engine applies any predicates (`WHERE` clause) to the first table that doesn't involve any of the other tables. It selects out the columns mentioned in the `JOIN` conditions or the `SELECT` list or the `ORDER BY` list. Call this result A
3. It joins this result set to the second table. For each row it joins to the second table, applying any predicates that may apply to the second table. This results in another temporary resultset.
4. Then it joins in the final table and applies the `ORDER BY`
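These steps can be made concrete with a small sketch. This uses SQLite via Python's `sqlite3` purely for demonstration (the question concerns MS SQL Server, but the logical join semantics are the same), with hypothetical miniature tables mirroring the A/B/C setup: the A-B join filters out unmatched rows of A, and a subsequent join to a 1:m table multiplies the surviving rows — which is exactly the duplication the asker is seeing.

```python
import sqlite3

# Hypothetical miniature tables A, B, C: A-B joined on AId,
# C related to B 1:m via BId.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE A (AId INTEGER);
    CREATE TABLE B (BId INTEGER, AId INTEGER);
    CREATE TABLE C (CId INTEGER, BId INTEGER);
    INSERT INTO A VALUES (1), (2);
    INSERT INTO B VALUES (10, 1);               -- only AId 1 has a B row
    INSERT INTO C VALUES (100, 10), (101, 10);  -- 1:m, two C rows per B row
""")
rows = con.execute("""
    SELECT A.AId, B.BId, C.CId
    FROM A JOIN B ON A.AId = B.AId
           JOIN C ON C.BId = B.BId
    ORDER BY C.CId
""").fetchall()
# AId 2 never matched B, so it is absent from the intermediate result set;
# the one surviving A-B row is duplicated once per matching C row.
print(rows)  # [(1, 10, 100), (1, 10, 101)]
```

Whatever physical join order the optimizer picks, the result is as if each join consumed the previous intermediate result set.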
This is conceptually what happens. In fact there are many possible optimizations along the way. The advantage of the relational model is that the sound mathematical basis makes various transformations of the plan possible while not changing the correctness.
For example, there is really no need to generate the full result sets along the way. The `ORDER BY` may instead be done via accessing the data using an index in the first place. There are lots of types of joins that can be done as well. | We know that the data from `B` is going to be filtered by the (inner) join to `A` (the data in `A` is also filtered). So if we (inner) join from `B` to `C`, thus the set `C` is **also** filtered by the relationship to `A`. And note also that any duplicates from the join **will be included**.
However; what order this happens in is up to the optimizer; it could decide to do the `B`/`C` join first then introduce `A`, or any other sequence (probably based on the estimated number of rows from each join and the appropriate indexes).
---
HOWEVER; in your later example you use a `LEFT OUTER` join; so `Account` is not filtered *at all*, and may well be duplicated if any of the other tables have multiple matches.
Are there duplicates (per account) in `BalanceToken`? | Understanding how JOIN works when 3 or more tables are involved. [SQL] | [
"",
"sql",
"join",
""
] |
I need a very quick introduction to localization in a class library
I am not interested in pulling the locale from the user context, rather I have users stored in the db, and their locale is also setup in the db....
my functions in the class library can already pull the locale code from the user profile in the db... now I want to use resx files depending on locale...
I need a few steps to do this correctly...
And yeah - I have already googled this, and some research, but all the tutorials I can find are way too complex for my needs. | Unfortunately, this subject is way too complicated. ;) I know, I've done the research as well.
To get you started though,
1. create a Resources directory in your assembly.
2. Start with English and add a "Resources File" (.resx) to that directory. Name it something like "text.resx". In the event that the localized resource can't be found, the app will default to pulling out of this file.
3. Add your text resources.
4. Add another resources file. Name this one something like "text.es.resx". Note the "es" part of the file name - in this case, that defines Spanish. Note that each language has its own culture code. Look that up.
5. Add your Spanish resources to it.
Now that we have resource files to work from, let's try to implement.
In order to set the culture, pull that from your database record. Then do the following:
```
String culture = "es-MX"; // defines Mexican Spanish culture
Thread.CurrentThread.CurrentCulture = CultureInfo.CreateSpecificCulture(culture);
Thread.CurrentThread.CurrentUICulture = new CultureInfo(culture);
```
This could happen in the app that has loaded your assembly OR in the assembly initialization itself. You pick.
To utilize the resource, all you have to do is something like the following within your assembly:
```
public string TestMessage() {
return Resources.Text.SomeTextValue;
}
```
Ta Da. Resources made easy. Things can get a little more complicated if you need to change usercontrols or do something directly in an aspx page. Update your question if you need more info.
Note that you could have resource files named like "text.es-mx.resx". That would be specific to Mexican Spanish. However, that's not always necessary because "es-mx" will fall back to "es" before it falls back to the default. Only you will know how specific your resources need to be. | Name your resxes with the culture in them (e.g. resource\_en-GB.resx) and select which resource to query based on the culture. | C# Class Library Localization | [
"",
"c#",
"localization",
""
] |
Let's say we have a system like this:
```
______
{ application instances ---network--- (______)
{ application instances ---network--- | |
requests ---> load balancer { application instances ---network--- | data |
{ application instances ---network--- | base |
{ application instances ---network--- \______/
```
A request comes in, a load balancer sends it to an application server instance, and the app server instances talk to a database (elsewhere on the LAN). The application instances can either be separate processes or separate threads. Just to cover all the bases, let's say there are several identical processes, each with a pool of identical application service threads.
If the database is performing slowly, or the network gets bogged down, clearly the throughput of request servicing is going to get worse.
Now, in all my pre-Python experience, this would be accompanied by a corresponding *drop* in CPU usage by the application instances -- they'd be spending more time blocking on I/O and less time doing CPU-intensive things.
However, I'm being told that with Python, this is not the case -- under certain Python circumstances, this situation can cause Python's CPU usage to go *up*, perhaps all the way to 100%. Something about the Global Interpreter Lock and the multiple threads supposedly causes Python to spend all its time switching between threads, checking to see if any of them have an answer yet from the database. "Hence the rise in single-process event-driven libraries of late."
Is that correct? Do Python application service threads actually use *more* CPU when their I/O latency *increases?* | In theory, no; in practice, it's possible; it depends on what you're doing.
There's a full [hour-long video](http://blip.tv/file/2232410) and [pdf about it](http://www.dabeaz.com/python/GIL.pdf), but essentially it boils down to some unforeseen consequences of the GIL with CPU vs IO bound threads with multicores. Basically, a thread waiting on IO needs to wake up, so Python begins "pre-empting" other threads every Python "tick" (instead of every 100 ticks). The IO thread then has trouble taking the GIL from the CPU thread, causing the cycle to repeat.
That's grossly oversimplified, but that's the gist of it. The video and slides have more information. It manifests itself as a larger problem on multi-core machines. It could also occur if the process receives signals from the OS (since that triggers the thread-switching code, too).
Of course, as other posters have said, this goes away if each has its own process.
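As a rough sketch of the CPU-thread-vs-I/O-thread interaction (hypothetical demo code, not from the talk; note that Python 3.2+ replaced the tick-counting scheme described above with a timed switch interval, which mitigates but does not eliminate the effect):

```python
import sys
import threading
import time

# Python 3.2+ switches threads on a timed interval (typically 5 ms)
# rather than every N interpreter "ticks" as in the GIL talk above.
print(sys.getswitchinterval())  # typically 0.005

stop = threading.Event()

def cpu_bound():
    # Busy loop that never releases the GIL voluntarily.
    x = 0
    while not stop.is_set():
        x += 1

def io_bound(results):
    start = time.perf_counter()
    time.sleep(0.1)  # sleep releases the GIL, like waiting on a socket
    results.append(time.perf_counter() - start)

results = []
threading.Thread(target=cpu_bound, daemon=True).start()
t = threading.Thread(target=io_bound, args=(results,))
t.start()
t.join()
stop.set()
# The sleep takes at least 0.1 s; any excess is largely time spent
# re-acquiring the GIL from the CPU-bound thread after waking.
print(results[0])
```

On a multi-core machine the measured excess tends to grow with the number of CPU-bound threads, which is the convoy effect the talk describes.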
Coincidentally, the slides and video explain why you can't CTRL+C in Python sometimes. | The key is to launch the application instances in separate processes. Otherwise multi-threading issues seem to be likely to follow. | Can a slow network cause a Python app to use *more* CPU? | [
"",
"python",
"multithreading",
""
] |
How do I get the current [stack trace](http://en.wikipedia.org/wiki/Stack_trace) in Java, like how in .NET you can do [`Environment.StackTrace`](http://msdn.microsoft.com/en-us/library/system.environment.stacktrace.aspx?ppud=4)?
I found `Thread.dumpStack()` but it is not what I want - I want to get the stack trace back, not print it out. | You can use `Thread.currentThread().getStackTrace()`.
That returns an array of [`StackTraceElement`](http://docs.oracle.com/javase/7/docs/api/java/lang/StackTraceElement.html)s that represent the current stack trace of a program. | ```
StackTraceElement[] st = Thread.currentThread().getStackTrace();
```
is fine if you don't care what the first element of the stack is.
```
StackTraceElement[] st = new Throwable().getStackTrace();
```
will have a defined position for your current method, if that matters. | How can I get the current stack trace in Java? | [
"",
"stack-trace",
"java",
""
] |
I know that I can insert text into a `<div>` tag by:
```
<script type="text/javascript">
function doSomething(){
var lbl = document.getElementById('messageLabel');
lbl.innerHTML = "I just did something.";
}
</script>
</head>
<body>
<div>
<div id="messageLabel"></div>
<input type="button" value="Click Me!" onclick="doSomething();" />
</div>
```
**My question:** how can I append text to a link?
Examples not working [1](http://pastebin.com/m61d3df96) and [2](http://pastebin.com/m13a66ac9). | From the example you posted above, try using the code below instead. I changed the id of the div tag to be different from the link you're trying to change, and changed the code to modify the href of the anchor.
```
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" >
<head>
<title>Javascript Debugging with Firebug</title>
<script type="text/javascript">
function addLink(){
var anchor = document.getElementById('link');
anchor.href += "Dsi7x-A89Mw";
}
</script>
</head>
<body>
<div>
<div id="linkdiv"></div>
<input type="button" value="Click Me!" onclick="addLink();" />
<a id="link" href="http://www.youtube.com/watch?v=">Click here</a>
</div>
</body>
</html>
``` | ```
var anchor = document.getElementById('anchorID');
anchor.innerHTML = anchor.innerHTML + " I just did something.";
```
This should append "I just did something." to your current anchor text.
"",
"javascript",
"append",
""
] |
I have an application written in C# that uses Outlook Interop to open a new mail message pre-filled with details the user can edit before manually sending it.
```
var newMail = (Outlook.MailItem)outlookApplication.CreateItem(
Outlook.OlItemType.olMailItem);
newMail.To = "example@exam.ple";
newMail.Subject = "Example";
newMail.BodyFormat = Outlook.OlBodyFormat.olFormatHTML;
newMail.HTMLBody = "<p>Dear Example,</p><p>Example example.</p>";
newMail.Display(false);
```
When the same user creates a new message manually the font is set to *Calibri* or whichever font the user has set as their default. The problem is that the text in the automatic email appears in *Times New Roman* font which we do not want.
If I view source of one of the delivered emails I can see that Outlook has explicitly set the font in the email source:
```
// Automated
p.MsoNormal, li.MsoNormal, div.MsoNormal
{
margin:0cm;
margin-bottom:.0001pt;
font-size:12.0pt;
font-family:"Times New Roman";
}
// Manual
p.MsoNormal, li.MsoNormal, div.MsoNormal
{
margin:0cm;
margin-bottom:.0001pt;
font-size:11.0pt;
font-family:"Calibri","sans-serif";
}
```
Why are the formats different and how can I get the automated email to use the users default settings? I am using version 11 of the interop assemblies as there is a mix of Outlook 2003 and 2007 installed. | Since it is an HTML email, you can easily embed whatever styling you want into the actual HTML body. I suspect that is what Outlook is doing when you create a message from the Outlook GUI.
I don't actually know how to get the user settings. I looked through the Outlook API (it is a strange beast), but didn't see anything that would provide access to the default message properties. | It's totally frustrating.
It doesn't help that when you Google the problem there are countless answers telling you to simply *style* the text in CSS. Yeeessss, fine, if you're generating the full email and can style/control the entire text. But in your case (and ours) the intention is to launch the email with some initial text and the end user adds his own text. It's his additional text which is unfailingly rendered in Times New Roman.
The solution we found is to approach the problem from another direction. And that is to fix the base/underlying default in Outlook to be our selected font instead of Times New Roman.
That can be done by:
1. Open Word (yes, not Outlook)
2. Go to Options -> Advanced -> Web Options
3. Change the default font in the Fonts tab
Video here:
<http://www.youtube.com/watch?v=IC2RvfoMFz8>
I appreciate this doesn't help if you *need* to control or vary the font programmatically. But for those working with customers who simply want the base email font to **not** be Times New Roman whenever emails are generated from code, this may help. | Outlook Interop, Mail Formatting | [
"",
"c#",
"email",
"interop",
"outlook",
"stylesheet",
""
] |
In a Master page, I have this....
```
<ul id="productList">
<li id="product_a" class="active">
<a href="">Product A</a>
</li>
<li id="product_b">
<a href="">Product B</a>
</li>
<li id="product_c">
<a href="">Product C</a>
</li>
</ul>
```
I need to change the class of the selected list item...
When 'Product A' is clicked, it gets the 'active' class, and the others get none. When 'Product B' is clicked it gets the 'active' class, and the others get none.
I am trying to do it in the Home Controller, but I am having a hard time gaining a reference to the page or any of its elements. | You could have an id field in the model that corresponds to the id of the list item. Check if the current model matches the id of the list item. If it does, then set the active class. | > I am having a hard time gaining a reference to the page or any of its elements.
Sounds like you're not really getting MVC. Your controller should not have a reference to the html elements of the view. You need to create a Model (probably a View Model in this case) that contains the list of Products and indicates which one is selected. Your view then simply displays the contents of the View Model as HTML. It would probably include a loop over the products in the model with a check for an Active property. | How to set the style/class of an HTML element in MVC? | [
"",
"c#",
"html",
"css",
"model-view-controller",
""
] |
We've been waiting forever to see if it's going to become a full-fledged language, and yet there doesn't seem to be a release of the formal definition. Just committees and discussions and revising.
Does anyone know of a planned deadline for C++0x, or are we going to have to start calling it C++1x? | Well the committee is currently very busy working on the next revision - every meeting is prefaced by many papers, that are a good indicator of the effort that is going into the new standard: <http://www.open-std.org/jtc1/sc22/wg21/>
What is a little concerning (but reassuring in the sense that they will not rush publishing a standard just to assuage the public, yet do sense the urgency involved) is that Stroustrup just put out a paper saying that we need to take a second look at concepts and make sure that they are as simple as can be - and has proposed a reasonable solution.
[Edit] For those who are interested, this paper is available at: <http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2009/n2906.pdf>.
C++0x will be a huge improvement upon C++ in many regards, and while I do not speak for the committee - my hope is that it will happen by late 2010.
[Edit] As underscored by one of the commenters, it is worth appreciating that there is significant concern amongst a few committee members that either the quality of the standard or the schedule (late 2010) will have to suffer if concepts are included: <http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2009/n2893.pdf>. But whether these concerns will be substantiated is worth being patient about - we will have more information about this once the committee concludes its meeting in Frankfurt this july (the post-meeting mailing can be expected in late-july, early august).
Personally, I sense that it would not be a huge loss to get the standard out without concepts (maintain the late 2010 schedule), and then add them as a TR - versus rushing them through even when there is palpable uneasiness amongst the more seasoned committee members (about concepts) - but I will defer to the committee here - while they have never claimed or been described as perfect, the majority of them are far more qualified to make these decisions than I am and deserve some of our confidence if history is any indicator - I would err on the side of trusting their instincts (over mine) assuming there was some reasonable consensus amongst them.
For some perspective, and so that one does not despair about these obstacles too much, compare this to what happened within the ecmascript community - Brendan Eich, the creator, had some very different design goals for the next revision of ecmascript (es5) from some of the other similarly talented wizards in the ecmascript community - they had multiple meetings and after much discussion (some of it heated ;) formulated a very very reasonable compromise followed by a frenzy of activity that has resulted in ecmascript 5 (all in the span of 1-2 years, including the conflict) which will be an excellent and pragmatic, yet much more conservative than Eich had initially proposed, revision of javascript. I have similar hopes for C++ (acknowledging that C++ is a much much much larger language - but then much more effort has been put in ;) | FTR: C++11 [has been approved by ISO on 12 August 2011](http://herbsutter.com/2011/08/12/we-have-an-international-standard-c0x-is-unanimously-approved/). | When is the planned date for C++0x to be released into the wild? | [
"",
"c++",
"c++11",
""
] |
I am having an error:
Error 2 'int[]' does not contain a definition for 'Contains' and the best extension method overload 'System.Linq.Enumerable.Contains(System.Collections.Generic.IEnumerable, TSource)' has some invalid arguments
This is my code:
```
public partial class mymymy : System.Web.UI.Page
{
int[] validType = { 2, 3, 4, 5, 6, 8, 13, 14, 16, 22 };
protected void Page_Load(object sender, EventArgs e)
{
}
protected void LinqDataSource_Selecting(object sender, LinqDataSourceSelectEventArgs e)
{
using (var dc = new soxMainDataContext())
{
var qry = from item in dc.LeaveRequests
where validType.Contains(item.Type)
&& item.MgtApproval == null
select item;
e.Result = qry;
}
}
}
``` | I strongly suspect that `item.Type` isn't an `int`. Is it an enum? If so, try explicitly casting:
```
var qry = from item in dc.LeaveRequests
where validType.Contains((int) item.Type)
&& item.MgtApproval == null
select item;
```
Alternatively, as dot notation:
```
var query = dc.LeaveRequests.Where(item => validType.Contains((int) item.Type)
&& item.MgtApproval == null);
``` | `item.Type` isn't an `int`. | Linq-to-sql error: 'int[]' does not contain a definition for 'Contains' | [
"",
"c#",
"linq-to-sql",
""
] |
I'm using .Net 3.5 (C#) and I've heard the performance of C# `List<T>.ToArray` is "bad", since it memory-copies all elements to form a new array. Is that true? | No, that's not true. Performance is good since all it does is memory copy all elements (\*) to form a new array.
Of course it depends on what you define as "good" or "bad" performance.
(\*) references for reference types, values for value types.
**EDIT**
In response to your comment, using Reflector is a good way to check the implementation (see below). Or just think for a couple of minutes about how you would implement it, and take it on trust that Microsoft's engineers won't come up with a worse solution.
```
public T[] ToArray()
{
T[] destinationArray = new T[this._size];
Array.Copy(this._items, 0, destinationArray, 0, this._size);
return destinationArray;
}
```
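The "(\*) references for reference types" point is easy to demonstrate in any garbage-collected language. Here is a quick Python sketch (standing in for C# purely for illustration): copying a container of objects copies only the references, so no objects are reconstructed.

```python
class Box:
    def __init__(self, v):
        self.v = v

src = [Box(1), Box(2)]
arr = list(src)            # shallow copy, analogous to ToArray()
assert arr[0] is src[0]    # same objects: only references were copied
arr[0].v = 99              # mutate through the copy...
print(src[0].v)  # 99 - ...and the change is visible via the original
```

The copy cost is therefore proportional to the element count, not to the size of the objects the elements refer to.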
Of course, "good" or "bad" performance only has a meaning relative to some alternative. If in your specific case, there is an alternative technique to achieve your goal that is measurably faster, then you can consider performance to be "bad". If there is no such alternative, then performance is "good" (or "good enough").
**EDIT 2**
In response to the comment: "No re-construction of objects?" :
No reconstruction for reference types. For value types the values are copied, which could loosely be described as reconstruction. | Reasons to call `ToArray()`
* If the returned value is not meant to be modified, returning it as an array makes that fact a bit clearer.
* If the caller is expected to perform many non-sequential accesses to the data, there can be a performance benefit to an array over a `List<>`.
* If you know you will need to pass the returned value to a third-party function that expects an array.
* Compatibility with calling functions that need to work with .NET version 1 or 1.1. These versions don't have the `List<>` type (or any generic types, for that matter).
Reasons not to call `ToArray()`
* If the caller ever does need to add or remove elements, a `List<>` is absolutely required.
* The performance benefits are not necessarily guaranteed, especially if the caller is accessing the data in a sequential fashion. There is also the additional step of converting from `List<>` to array, which takes processing time.
* The caller can always convert the list to an array themselves.
Taken from [here](http://dotnetperls.com/convert-list-array). | C# List<T>.ToArray performance is bad? | [
"",
"c#",
".net",
"performance",
"memory",
""
] |
We provide a popular open source Java FTP library called [edtFTPj](http://www.enterprisedt.com/products/edtftpj/overview.html).
We would like to drop support for JRE 1.3 - this would clean up the code base and also allow us to more easily use JRE 1.4 features (without resorting to reflection etc). The JRE 1.3 is over 7 years old now!
Is anyone still using JRE 1.3 out there? Is anyone aware of any surveys that give an idea of what percentage of users are still using 1.3? | Sun allows you to buy [support packages for depreciated software](http://www.sun.com/software/javaseforbusiness/perspectives.jsp) such as JRE 1.4. For banks and some other organizations, paying $100,000 per year for support of an outdated product is cheaper than upgrading. I would suggest only offering paid support for JRE 1.3. If anyone needs support for this, they can pay for a hefty support package. You would then shelve your current 1.3 code base, and if a customer with a support contract requires a bug fix, then you could fix the 1.3 version for them, which would likely just mean selectively applying a patch from a more recent version. | Even JDK 1.4 reached the end of its support life in Oct 2008. I think you're safe.
But don't take it from me. The people that you really need to ask are your customers. Maybe putting a survey up on your download page and soliciting feedback will help. If no one asks in three months, drop it. | Dropping support for JRE 1.3 | [
"",
"java",
""
] |
I am extending the Visual Studio 2003 debugger using autoexp.dat and a DLL to improve the way it displays data in the watch window. The main reason I am using a DLL rather than just the basic autoexp.dat functionality is that I want to be able to display things conditionally. e.g. I want to be able to say "If the name member is not an empty string, display name, otherwise display [some other member]"
I'm quite new to OOP and haven't got any experience with the STL. So it might be that I'm missing the obvious.
I'm having trouble displaying vector members because I don't know how to get the pointer to the memory the actual values are stored in.
Am I right in thinking the values are stored in a contiguous block of memory? And is there any way to get access to the pointer to that memory?
Thanks!
[edit:] To clarify my problem (I hope):
In my DLL, which is called by the debugger, I use a function called ReadDebuggeeMemory which makes a copy of the memory used by an object. It doesn't copy the memory the object points to. So I need to know the actual address value of the internal pointer in order to be able to call ReadDebuggeeMemory on that as well. At the moment, the usual methods of getting the vector contents are returning garbage because that memory hasn't been copied yet.
[update:]
I was getting garbage, even when I was looking at the correct pointer \_Myfirst because I was creating an extra copy of the vector, when I should have been using a pointer to a vector. So the question then becomes: how do you get access to the pointer to the vector's memory via a pointer to the vector? Does that make sense? | The elements in a standard vector are allocated as one contiguous memory chunk.
You can get a pointer to the memory by taking the address of the first element, which can be done in a few ways:
```
std::vector<int> vec;
/* populate vec, e.g.: vec.resize(100); */
int* arr = vec.data(); // Method 1, C++11 and beyond.
int* arr = &vec[0]; // Method 2, the common way pre-C++11.
int* arr = &vec.front(); // Method 3, alternative to method 2.
```
However unless you need to pass the underlying array around to some old interfaces, generally you can just use the operators on vector directly.
Note that you can only access up to `vec.size()` elements of the returned value. Accessing beyond that is undefined behavior (even if you think there is capacity reserved for it).
If you had a pointer to a vector, you can do the same thing above just by dereferencing:
```
std::vector<int>* vecptr = &vec; // e.g. pointing at the vector populated above
int* arr = vecptr->data(); // Method 1, C++11 and beyond.
int* arr = &(*vecptr)[0]; // Method 2, the common way pre-C++11.
int* arr = &vecptr->front(); // Method 3, alternative to method 2.
```
Better yet though, try to get a reference to it.
### About your solution
You came up with the solution:
```
int* vMem = vec->_Myfirst;
```
The only time this will work is on that specific implementation of that specific compiler version. This is not standard, so this isn't guaranteed to work between compilers, or even different versions of your compiler.
It might seem okay if you're only developing on that single platform & compiler, but it's better to do it the standard way given the choice.
However, be aware that operations which change the size of the vector (eg push\_back) can cause the vector to be reallocated, which means the memory may move, invalidating your pointer. The same happens if you use iterators.
```
vector<int> v;
v.push_back(1);
int* fred = &v[0];
for (int i=0; i<100; ++i)
v.push_back(i);
assert(fred == &v[0]); // this assert test MAY fail
``` | How does the C++ STL vector template store its objects in the Visual Studio compiler implementation? | [
"",
"c++",
"visual-studio",
"stl",
"vector",
""
] |
I've inherited some code on a system that I didn't setup, and I'm running into a problem tracking down where the PHP include path is being set.
I have a php.ini file with the following include\_path
```
include_path = ".:/usr/local/lib/php"
```
I have a PHP file in the webroot named test.php with the following phpinfo call
```
<?php
phpinfo();
```
When I take a look at the phpinfo call, the local value for the include\_path is being overridden
```
Local Value Master Value
include_path .;C:\Program Files\Apache Software Foundation\ .:/usr/local/lib/php
Apache2.2\pdc_forecasting\classes
```
Additionally, the php.ini file indicates no additional .ini files are being loaded
```
Configuration File (php.ini) Path /usr/local/lib
Loaded Configuration File /usr/local/lib/php.ini
Scan this dir for additional .ini files (none)
additional .ini files parsed (none)
```
So, my question is, what else in a standard PHP system (including some PEAR libraries) could be overriding the include\_path between php.ini and the actual PHP code being interpreted/executed? | Outside of the PHP ways
```
ini_set( 'include_path', 'new/path' );
// or
set_include_path( 'new/path' );
```
Which could be loaded in a PHP file via `auto_prepend_file`; an `.htaccess` file can do it as well:
```
php_value include_path new/path
``` | There are several reasons why you are getting these weird results.
* include\_path overridden somewhere in your php code. Check whether your code contains a `set_include_path()` call. With this function you can customise the include path. If you want to retain the current path, just concatenate the string `. PATH_SEPARATOR . get_include_path()`
* include\_path overridden in `.htaccess` file. Check if there are any `php_value` or `php_flag` directives adding dodgy paths
* non-standard configuration file in the PHP interpreter. It is very unlikely, however possible, that your PHP process has been started with a custom `php.ini` file. Check your web server setup and/or PHP distribution to see what the expected location of `php.ini` is. Maybe you are looking at the wrong one. | What can change the include_path between php.ini and a PHP file | [
"",
"apache",
"include",
"php",
""
] |
**Overview**
I'm using a ListField class to display a set of information vertically. Each row of that ListField takes up two-fifths of the screen height.
As such, when scrolling to the next item (especially when displaying an item partially obscured by the constraints of the screen height), the whole scroll/focus action is very jumpy.
I would like to fix this jumpiness by implementing smooth scrolling between scroll/focus actions. Is this possible with the ListField class?
**Example**
Below is a screenshot displaying the issue at hand.
[](https://i.stack.imgur.com/lBTJE.jpg)
(source: [perkmobile.com](http://perkmobile.com/smooth_scroll.jpg))
Once the user scrolls down to the ListFieldTHREE row, this row is "scrolled" into view in a very jumpy manner, with no smooth scrolling. I know making the row height smaller will mitigate this issue, but I don't want to go that way.
**Main Question**
How do I do smooth scrolling in a ListField? | Assuming you want the behavior that the user scrolls down 1 'click' of the trackball, and the next item is then highlighted but instead of an immediate scroll jump you get a smooth scroll to make the new item visible (like in Google's Gmail app for BlackBerry), you'll have to roll your own component.
The basic idea is to subclass VerticalFieldManager, then on a scroll (key off the moveFocus method) you have a separate Thread update a vertical position variable, and invalidate the manager multiple times.
The thread is necessary because if you think about it you're driving an animation off of a user event - the smooth scroll is really an animation on the BlackBerry, as it lasts longer than the event that triggered it.
I've been a bit vague on details, and this isn't a really easy thing to do, so hopefully this helps a bit. | There isn't an official API way of doing this, as far as I know, but it can probably be fudged through a clever use of NullField(Field.FOCUSABLE), which is how many custom BlackBerry UIs implement forced focus behavior.
One approach would be to derive each "list item" from a class that interlaces focusable NullFields with the visible contents of the list item itself -- this would essentially force the scrolling system to "jump" at smaller intervals rather than the large intervals dictated by the natural divisions between the list items, and would have the side benefit of not modifying the visible positioning of the contents of the list item. | Blackberry - How to implement ListField Smooth Scrolling? | [
"",
"java",
"blackberry",
"scroll",
"listfield",
"smooth-scrolling",
""
] |
I'm trying to write a restful web service in java that will take a few string params and a binary file (pdf) param.
I understand how to do the strings but I'm getting hung up on the binary file. Any ideas / examples?
Here's what I have so far
```
@GET
@ConsumeMime("multipart/form-data")
@ProduceMime("text/plain")
@Path("submit/{client_id}/{doc_id}/{html}/{password}")
public Response submit(@PathParam("client_id") String clientID,
@PathParam("doc_id") String docID,
@PathParam("html") String html,
@PathParam("password") String password,
@PathParam("pdf") File pdf) {
return Response.ok("true").build();
}
```
Since I posted this, the link that had the answer has been removed, so here is my implementation.
```
@POST
@Consumes(MediaType.MULTIPART_FORM_DATA)
@Produces(MediaType.TEXT_PLAIN)
@Path("submit")
public Response submit(@FormDataParam("clientID") String clientID,
@FormDataParam("html") String html,
@FormDataParam("pdf") InputStream pdfStream) {
    try {
        byte[] pdfByteArray = DocUtils.convertInputStreamToByteArray(pdfStream);
        return Response.ok("true").build(); // the success path was missing a return
    } catch (Exception ex) {
        return Response.status(600).entity(ex.getMessage()).build();
    }
}
...
public static byte[] convertInputStreamToByteArray(InputStream in) throws IOException {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    final int BUF_SIZE = 1024;
    byte[] buffer = new byte[BUF_SIZE];
    int bytesRead;
    while ((bytesRead = in.read(buffer)) > -1) {
        out.write(buffer, 0, bytesRead);
    }
    in.close();
    return out.toByteArray();
}
``` | ```
@POST
@Consumes(MediaType.MULTIPART_FORM_DATA)
@Produces(MediaType.TEXT_PLAIN)
@Path("submit")
public Response submit(@FormDataParam("clientID") String clientID,
@FormDataParam("html") String html,
@FormDataParam("pdf") InputStream pdfStream) {
    try {
        byte[] pdfByteArray = DocUtils.convertInputStreamToByteArray(pdfStream);
        return Response.ok("true").build(); // the success path was missing a return
    } catch (Exception ex) {
        return Response.status(600).entity(ex.getMessage()).build();
    }
}
...
public static byte[] convertInputStreamToByteArray(InputStream in) throws IOException {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    final int BUF_SIZE = 1024;
    byte[] buffer = new byte[BUF_SIZE];
    int bytesRead;
    while ((bytesRead = in.read(buffer)) > -1) {
        out.write(buffer, 0, bytesRead);
    }
    in.close();
    return out.toByteArray();
}
``` | You could store the binary attachment in the body of the request instead. Alternatively, check out this mailing list archive here:
<http://markmail.org/message/dvl6qrzdqstrdtfk>
It suggests using Commons FileUpload to take the file and upload it appropriately.
Another alternative here using the MIME multipart API:
<http://n2.nabble.com/File-upload-with-Jersey-td2377844.html> | How do I write a restful web service that accepts a binary file (pdf) | [
"",
"java",
"web-services",
"rest",
"attachment",
""
] |
> **Possible Duplicate:**
> [When do you use Java’s @Override annotation and why?](https://stackoverflow.com/questions/94361/when-do-you-use-javas-override-annotation-and-why)
From the javadoc for the @Override annotation:
> Indicates that a method declaration is
> intended to override a method
> declaration in a superclass. If a
> method is annotated with this
> annotation type but does not override
> a superclass method, compilers are
> required to generate an error message.
I've tended to use the @Override annotation in testing, when I want to test a specific method of a type and replace the behaviour of some other method called by my test subject. One of my colleagues is firmly of the opinion that this is not a valid use but is not sure why. Can anyone suggest why it is to be avoided?
I've added an example below to illustrate my usage.
For the test subject Foo:
```
public class Foo {
/**
* params set elsewhere
*/
private Map<String, String> params;
public String doExecute(Map<String, String> params) {
// TODO Auto-generated method stub
return null;
}
public String execute() {
return doExecute(params);
}
}
```
I would define a test like this:
```
public void testDoExecute() {
final Map<String, String> expectedParams = new HashMap<String, String>();
final String expectedResult= "expectedResult";
Foo foo = new Foo() {
@Override
public String doExecute(Map<String, String> params) {
assertEquals(expectedParams, params);
return expectedResult;
}
};
assertEquals(expectedResult, foo.execute());
}
```
Then if my doExecute() signature changes I'll get a compile error on my test, rather than a confusing execution failure. | Using the @Override annotation in that kind of test is perfectly valid, but the annotation has no specific relationship to testing at all; it can (and should) be used all over production code as well. | The purpose of @Override is that you declare your intent to override a method, so that if you make a mistake (e.g. wrong spelling of the method name, wrong argument type) the compiler can complain and you find your mistake early.
So this is a perfectly valid use. | When to use @Override in java | [
"",
"java",
"unit-testing",
"annotations",
""
] |
I am trying to call `NetUserChangePassword` to change the passwords on a remote computer. I am able to change the password when I log in to the machine, but I can't do it via code. The return value is 2245, which equates to the password being too short.
I read this link: <http://support.microsoft.com/default.aspx?scid=kb;en-us;131226> but nothing in the link was helpful to me. (My code did not seem to fit any of the issues indicated.)
If you have any ideas how to fix this error or have a different way to programmatically change a users password on a remote (Windows 2003) machine I would be grateful to hear it.
I am running the code on a windows xp machine.
Here is my current code in case it is helpful (it also shows my create-user code, which works just fine).
```
public partial class Form1 : Form
{
[DllImport("netapi32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
private static extern int NetUserAdd(
[MarshalAs(UnmanagedType.LPWStr)] string servername,
UInt32 level,
ref USER_INFO_1 userinfo,
out UInt32 parm_err);
[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
public struct USER_INFO_1
{
[MarshalAs(UnmanagedType.LPWStr)]
public string sUsername;
[MarshalAs(UnmanagedType.LPWStr)]
public string sPassword;
public uint uiPasswordAge;
public uint uiPriv;
[MarshalAs(UnmanagedType.LPWStr)]
public string sHome_Dir;
[MarshalAs(UnmanagedType.LPWStr)]
public string sComment;
public uint uiFlags;
[MarshalAs(UnmanagedType.LPWStr)]
public string sScript_Path;
}
[DllImport("netapi32.dll", CharSet = CharSet.Unicode,
CallingConvention = CallingConvention.StdCall, SetLastError = true)]
static extern uint NetUserChangePassword(
[MarshalAs(UnmanagedType.LPWStr)] string domainname,
[MarshalAs(UnmanagedType.LPWStr)] string username,
[MarshalAs(UnmanagedType.LPWStr)] string oldpassword,
[MarshalAs(UnmanagedType.LPWStr)] string newpassword);
// Method to change a Password of a user on a remote machine.
private static uint ChangeUserPassword(string computer, string userName,
string oldPassword, string newPassword)
{
return NetUserChangePassword(computer, userName,
oldPassword, newPassword);
}
// Method used to create a new user on a Remote Machine
private static uint CreateUser(string computer, string userName,
string password)
{
const int UF_DONT_EXPIRE_PASSWD = 0x10000;
const int UF_ACCOUNTDISABLE = 0x000002;
const int USER_PRIV_GUEST = 0; // lmaccess.h:656
const int USER_PRIV_USER = 1; // lmaccess.h:657
const int USER_PRIV_ADMIN = 2; // lmaccess.h:658
USER_INFO_1 userinfo = new USER_INFO_1()
{
sComment = "Scan Track User",
sUsername = userName,
sPassword = password,
sHome_Dir = "",
sScript_Path = "",
uiPriv = USER_PRIV_USER,
uiFlags = UF_DONT_EXPIRE_PASSWD
};
uint output;
NetUserAdd(computer, 1, ref userinfo, out output);
return output;
}
private void button1_Click(object sender, EventArgs e)
{
string computer = "10.1.9.115";
string userName = "test2";
string psswrd = "ssssss";
string fullname = "";
uint output = CreateUser(computer, userName, psswrd);
MessageBox.Show(output.ToString());
}
private void button2_Click(object sender, EventArgs e)
{
string computer = "10.1.9.115";
string userName = "test";
string oldPassword = "!B3tt3rLuck!@!";
string newPassword = "!B3tt3r-Luck2";
uint result = ChangeUserPassword(computer, userName,
oldPassword, newPassword);
MessageBox.Show(result.ToString());
}
public Form1()
{
InitializeComponent();
}
}
``` | Error 2245 could also be a password history problem. Is the new password one that was used in the recent past?
Edit:
It looks like this function broke after Server 2003 SP 2. I got the same error when calling the function from C++ using the example in the documentation. You'll probably need to use NetUserSetInfo. | I was stumped by the same issue during initial development and testing, until I discovered an undocumented restriction of this API - the password that you are trying to change MUST ACTUALLY BE EXPIRED for the change to succeed! | Need a way to change a remote user password - NetUserChangePassword fails with 2245 | [
"",
"c#",
"winapi",
"pinvoke",
"netapi32",
""
] |
I'm working on an application. I have a servlet (writeDataBase.class) that writes some information to a database. This is working fine. My folder structure looks like webapps/HelloWord/WEB-INF/classes, and the file writeDataBase.class is placed in the 'classes' folder.
web.xml looks like:
```
<servlet>
<servlet-name>HelloWord</servlet-name>
<servlet-class>writeDataBase.writeDataBase</servlet-class>
</servlet>
<servlet-mapping>
<servlet-name>HelloWord</servlet-name>
<url-pattern>/write-data</url-pattern>
</servlet-mapping>
```
If I want to add a new servlet that will read data from the database, how should I do it? As a class in the same package? How should I modify the file structure and the web.xml file? | Perhaps:
```
<servlet>
<servlet-name>HelloExcel</servlet-name>
<servlet-class>writeDataBase.readDataBase</servlet-class>
</servlet>
<servlet>
<servlet-name>HelloWord</servlet-name>
<servlet-class>writeDataBase.writeDataBase</servlet-class>
</servlet>
<servlet-mapping>
<servlet-name>HelloExcel</servlet-name>
<url-pattern>/read-data</url-pattern>
</servlet-mapping>
<servlet-mapping>
<servlet-name>HelloWord</servlet-name>
<url-pattern>/write-data</url-pattern>
</servlet-mapping>
``` | Just add servlet mappings to the XML file. As for what package you put the classes in, the package declarations are up to you - but the classes must still be located in the WEB-INF/classes directory | adding new servlet class in web-xml and filesystem (tomcat) | [
"",
"java",
"servlets",
""
] |
I'm building a [Windows Forms](http://en.wikipedia.org/wiki/Windows_Forms) application where I have a `menuStrip` and a `toolStripMenuItem` called 'Preferences', that is self-explanatory for opening a preferences form.
The problem is that I have bound the shortcut key for that item to `Ctrl` + `P` which opens the printer dialog, which I assume is defaulted in all Windows Forms forms.
Is there any way to disable that the printer dialog overwrites my shortkey? | When naming the "Preferences" menu item, just put an ampersand (`&`) in front of the `P`. You'll end up with `Alt` instead of `Ctrl` for your shortcut key, but Visual Studio will wire everything up for you. | There is no default wiring in a Windows Forms application that intercepts the `Ctrl` + `P` shortcut and brings up a print dialog. If you have a print command in a menu on your form, that menu item might have the `Ctrl` + `P` configured as shortcut key and if that is the case, one of them will need to get some other shortcut key.
I do however question whether the user would bring up the preferences dialog so often that the command actually needs a shortcut key. I would probably just use access keys instead and let the print function keep `Ctrl` + `P`. That would conform to how many other applications work as well. | Window Forms shortcut key using C# | [
"",
"c#",
"winforms",
"keyboard-shortcuts",
""
] |
I have these two class(table)
```
@Entity
@Table(name = "course")
public class Course {
@Id
@Column(name = "courseid")
private String courseId;
@Column(name = "coursename")
private String courseName;
@Column(name = "vahed")
private int vahed;
@Column(name = "coursedep")
private int dep;
@ManyToMany(fetch = FetchType.LAZY)
@JoinTable(name = "student_course", joinColumns = @JoinColumn(name = "course_id"), inverseJoinColumns = @JoinColumn(name = "student_id"))
private Set<Student> student = new HashSet<Student>();
//Some setter and getter
```
and this one:
```
@Entity
@Table(name = "student")
public class Student {
@Id
@Column(name="studid")
private String stId;
@Column(nullable = false, name="studname")
private String studName;
@Column(name="stmajor")
private String stMajor;
@Column(name="stlevel", length=3)
private String stLevel;
@Column(name="stdep")
private int stdep;
@ManyToMany(fetch = FetchType.LAZY)
@JoinTable(name = "student_course"
,joinColumns = @JoinColumn(name = "student_id")
,inverseJoinColumns = @JoinColumn(name = "course_id")
)
private Set<Course> course = new HashSet<Course>();
//Some setter and getter
```
After running this code, an extra table (student\_course) was created in the database. Now I want to know how I can add extra fields to this table (the student\_course table), such as Grade, Date, and so on.
I have seen some solutions, but I don't like them, and I also have some problems with them:
[First Sample](http://boris.kirzner.info/blog/archives/2008/07/19/hibernate-annotations-the-many-to-many-association-with-composite-key/) | If you add extra fields on a linked table (STUDENT\_COURSE), you have to choose an approach according to skaffman's answer or another as shown bellow.
There is an approach where the linked table (STUDENT\_COURSE) behaves like a @Embeddable according to:
```
@Embeddable
public class JoinedStudentCourse {
// Lets suppose you have added this field
@Column(updatable=false)
private Date joinedDate;
@ManyToOne(fetch=FetchType.LAZY)
@JoinColumn(name="STUDENT_ID", insertable=false, updatable=false)
private Student student;
@ManyToOne(fetch=FetchType.LAZY)
@JoinColumn(name="COURSE_ID", insertable=false, updatable=false)
private Course course;
// getter's and setter's
public boolean equals(Object instance) {
if(instance == null)
return false;
if(!(instance instanceof JoinedStudentCourse))
return false;
JoinedStudentCourse other = (JoinedStudentCourse) instance;
        if (!student.getId().equals(other.getStudent().getId()))
            return false;
        if (!course.getId().equals(other.getCourse().getId()))
            return false;
        // ATT: use immutable fields like joinedDate in the equals() implementation
        if (!joinedDate.equals(other.getJoinedDate()))
            return false;
        return true;
    }

    public int hashCode() {
        // hashCode implementation
    }
}
```
So you will have in both Student and Course classes
```
public class Student {
@CollectionOfElements
@JoinTable(
table=@Table(name="STUDENT_COURSE"),
joinColumns=@JoinColumn(name="STUDENT_ID")
)
private Set<JoinedStudentCourse> joined = new HashSet<JoinedStudentCourse>();
}
public class Course {
@CollectionOfElements
@JoinTable(
table=@Table(name="STUDENT_COURSE"),
joinColumns=@JoinColumn(name="COURSE_ID")
)
private Set<JoinedStudentCourse> joined = new HashSet<JoinedStudentCourse>();
}
```
remember: @Embeddable class has its lifecycle bound to the owning entity class (Both Student and Course), so take care of it.
advice: the Hibernate team supports these two approaches (@OneToMany (skaffman's answer) or @CollectionsOfElements) due to some limitations in @ManyToMany mapping - cascade operation.
regards, | The student\_course table is there purely to record the association between the two entities. It is managed by hibernate, and can contain no other data.
The sort of data you want to record needs to be modelled as another entity. Perhaps you could add a one-to-many association between Course and StudentResult (which contains the grade, etc.), and then a many-to-one association between StudentResult and Student. | Can add extra field(s) to @ManyToMany Hibernate extra table? | [
"",
"java",
"hibernate",
""
] |
I was looking through the DXUTCore project that comes with the DirectX March 2009 SDK, and noticed that instead of making normal accessor methods, they used macros to create the generic accessors, similar to the following:
```
#define GET_ACCESSOR( x, y ) inline x Get##y() { DXUTLock l; return m_state.m_##y;};
...
GET_ACCESSOR( WCHAR*, WindowTitle );
```
It seems that the ## operator just inserts the text from the second argument into the macro to create a function operating on a variable using that text. Is this something that is standard in C++ (i.e. not Microsoft specific)? Is its use considered good practice? And, what is that operator called? | [Token-pasting operator](http://msdn.microsoft.com/en-us/library/09dwwt6y.aspx), used by the pre-processor to join two tokens into a single token. | This is also standard C++, contrary to what [Raldolpho](https://stackoverflow.com/questions/1121971/what-is-the-purpose-of-the-operator-in-c-and-what-is-it-called/1122005#1122005) stated.
Here is the relevant information:
> 16.3.3 The ## operator [cpp.concat]
>
> 1 A `##` preprocessing token shall not
> occur at the beginning or at the end
> of a replacement list for either form
> of macro definition.
>
> 2 If, in the
> replacement list, a parameter is
> immediately preceded or followed by a
> `##` preprocessing token, the parameter is replaced by the corresponding
> argument’s preprocessing token
> sequence.
>
> 3 For both object-like and
> function-like macro invocations,
> before the replacement list is
> reexamined for more macro names to
> replace, each instance of a `##`
> preprocessing token in the replacement
> list (not from an argument) is deleted
> and the preceding preprocessing token
> is concatenated with the following
> preprocessing token. If the result is
> not a valid preprocessing token, the
> behavior is undefined. The resulting
> token is available for further macro
> replacement. The order of evaluation
> of `##` operators is unspecified. | What is the purpose of the ## operator in C++, and what is it called? | [
"",
"c++",
"c",
"macros",
""
] |
It just crossed my mind that it would be extremely nice to be able to apply javascript code like you can apply css.
Imagine something like:
```
/* app.jss */
div.closeable : click {
this.remove();
}
table.highlightable td : hover {
previewPane.showDetailsFor(this);
}
form.protectform : submit { }
links.facebox : click {}
form.remote : submit {
postItUsingAjax()... }
```
I'm sure there are better examples.
You can do pretty similar stuff with dom:loaded -> $$(foo.bar).onClick (but this will only work for elements present at dom:loaded) ... etc. But having a jss file would be really cool.
Well, this has to be a question, not a braindump... so my question is: is there something like that?
**Appendum**
I know jQuery and Prototype allow you to do similar things with $$ and convenient helpers to catch events. But what I sometimes dislike about this variant is that the handler only gets installed on elements which were present when the site first loaded.
<http://docs.jquery.com/Events/live>
They basically will pick up new elements as they're created and add the appropriate handler code you've assigned. | You should look into [jQuery](http://jquery.com/). I don't have the luxury of using where I work, but it looks like this:
```
$("div.closeable").click(function () {
$(this).remove();
});
```
That's not too far removed from your first example. | Apply javascript to pages like we apply css... wouldn't it be nice? | [
"",
"javascript",
""
] |
I have a query regarding the directory returned from the Path.GetTempPath() function.
It returns "C:\Documents and Settings\USER\Local Settings\Temp" as the directory.
I am saving some temp files there and I am wondering when this folder is cleared, so I know how long they will exist, if it is cleared at all that is.
Is it every time I restart the computer? Or after a certain amount of time? Or when some amount of space is used up?
A nice easy one for someone to answer for me!
Thanks | It gets cleared whenever the computer gets "cleaned up". This could be done a number of ways: manually by a user, through the Disk Cleanup tool, etc. | It's never cleared (except by the user when he gets tired of all the files clogging up his machine). If you create a file in there, it's your responsibility to delete it once you're finished with it. It is for *temporary* files, after all. | When is the Documents and Settings\USER\Local Settings\Temp folder cleared? | [
"",
"c#",
"windows",
"settings",
""
] |
How do I check if a particular key exists in a JavaScript object or array?
If a key doesn't exist, and I try to access it, will it return false? Or throw an error? | Checking for undefined-ness is not an accurate way of testing whether a key exists. What if the key exists but the value is actually `undefined`?
```
var obj = { key: undefined };
console.log(obj["key"] !== undefined); // false, but the key exists!
```
You should instead use the `in` operator:
```
var obj = { key: undefined };
console.log("key" in obj); // true, regardless of the actual value
```
If you want to check if a key doesn't exist, remember to use parenthesis:
```
var obj = { not_key: undefined };
console.log(!("key" in obj)); // true if "key" doesn't exist in object
console.log(!"key" in obj); // Do not do this! It is equivalent to "false in obj"
```
Or, if you want to particularly test for properties of the object instance (and not inherited properties), use `hasOwnProperty`:
```
var obj = { key: undefined };
console.log(obj.hasOwnProperty("key")); // true
```
For performance comparison between the methods that are `in`, `hasOwnProperty` and key is `undefined`, see [this **benchmark**](https://jsben.ch/WqlIl):
[](https://i.stack.imgur.com/PEuZT.png) | # Quick Answer
> How do I check if a particular key exists in a JavaScript object or array?
> If a key doesn't exist and I try to access it, will it return false? Or throw an error?
Directly accessing a missing property using (associative) array style or object style will return the *undefined* constant.
## The slow and reliable *in* operator and *hasOwnProperty* method
As people have already mentioned here, you could have an object with a property associated with an "undefined" constant.
```
var bizzareObj = {valid_key: undefined};
```
In that case, you will have to use *hasOwnProperty* or the *in* operator to know if the key is really there. But *at what price?*
so, I tell you...
*in* operator and *hasOwnProperty* are "methods" that use the Property Descriptor mechanism in Javascript (similar to Java reflection in the Java language).
<http://www.ecma-international.org/ecma-262/5.1/#sec-8.10>
> The Property Descriptor type is used to explain the manipulation and reification of named property attributes. Values of the Property Descriptor type are records composed of named fields where each field’s name is an attribute name and its value is a corresponding attribute value as specified in 8.6.1. In addition, any field may be present or absent.
On the other hand, calling an object method or key will use the Javascript [[Get]] mechanism. That is far faster!
## Benchmark
<https://jsben.ch/HaHQt>
#### Using *in* operator
```
var result = "Impression" in array;
```
The result was
```
12,931,832 ±0.21% ops/sec 92% slower
```
#### Using hasOwnProperty
```
var result = array.hasOwnProperty("Impression")
```
The result was
```
16,021,758 ±0.45% ops/sec 91% slower
```
#### Accessing elements directly (brackets style)
```
var result = array["Impression"] === undefined
```
The result was
```
168,270,439 ±0.13 ops/sec 0.02% slower
```
#### Accessing elements directly (object style)
```
var result = array.Impression === undefined;
```
The result was
```
168,303,172 ±0.20% fastest
```
## EDIT: What is the reason to assign to a property the `undefined` value?
That question puzzles me. In Javascript, there are at least two references for absent objects to avoid problems like this: `null` and `undefined`.
`null` is the primitive value that represents the intentional absence of any object value, or in short terms, the **confirmed** lack of value. On the other hand, `undefined` is an unknown value (not defined). If there is a property that will be used later with a *proper* value consider use `null` reference instead of `undefined` because in the initial moment the property is **confirmed** to lack value.
Compare:
```
var a = {1: null};
console.log(a[1] === undefined); // output: false. I know the value at position 1 of a[] is absent and this was by design, i.e.: the value is defined.
console.log(a[0] === undefined); // output: true. I cannot say anything about a[0] value. In this case, the key 0 was not in a[].
```
## Advice
Avoid objects with `undefined` values. Check directly whenever possible and use `null` to initialize property values. Otherwise, use the slow `in` operator or `hasOwnProperty()` method.
## EDIT: 12/04/2018 - NOT RELEVANT ANYMORE
As people have commented, modern versions of the Javascript engines (with firefox exception) have changed the approach for access properties. The current implementation is slower than the previous one for this particular case but the difference between access key and object is neglectable. | Checking if a key exists in a JavaScript object? | [
"",
"javascript",
"arrays",
"object",
""
] |
I'm loading a Bitmap from a jpg file. If the image is not 24bit RGB, I'd like to convert it. The conversion should be fairly fast. The images I'm loading can be huge (9000\*9000 pixels, with a compressed size of 40-50 MB). How can this be done?
Btw: I don't want to use any external libraries if possible. But if you know of an open source utility class performing the most common imaging tasks, I'd be happy to hear about it. Thanks in advance. | The jpeg should start with 0xFF 0xD8. After that you will find various fields in the format:
1. Field identifier, 2 bytes.
2. Field length (excluding the field identifier), 2 bytes.
3. Variable data.
Parse through the fields. The identifier you will be looking for is 0xFF 0xC0. This is called SOF0, and contains height, width, bit depth, etc. 0xFF 0xC0 will be followed by two bytes for the field length. Immediately following that will be a single byte showing the bit depth, which will usually be 8. Then there will be two bytes for height, two for width, and a single byte for the number of components; this will usually be 1 (for greyscale) or 3 (for color). | This isn't something I've tried myself, but I think you might need to access the picture's EXIF information as a start.
Check out Scott Hanselman's blog-entry [on accessing EXIF information from pictures](http://www.hanselman.com/blog/AccessingEXIFPhotoDataFromJPEGsWithPowerShell.aspx). | How to read and modify the colorspace of an image in c# | [
"",
"c#",
"graphics",
"rgb",
"color-space",
""
] |
I have some concerns about unit-testing functions that contain the assert macro from [assert.h](http://en.wikipedia.org/wiki/Assert.h).
If the assert fails the test fails also.
This leaves me with some test cases that will never work.
For example, a function asserts instead of indicating failure (returning false or something similar).
Is there a solution for this (unit-testing functions containing assert)? | No, unit testing is what you do during development. Asserts are a run-time construct.
In my experience, most of the time asserts are turned off in production. But you should always be testing.
CppUnit is a fine test framework. It's part of the nUnit family for C++. | Maybe it's just me, but I would think that if you have assertion failures, you shouldn't even be thinking about higher-level unit testing until you get them fixed. The idea is that assertions should **never** fail under any circumstances, including unit tests, if the code is written properly. Or at least that's how I write my code. | Is assert and unit-testing incompatible? | [
"",
"c++",
"c",
"unit-testing",
"assertions",
""
] |
I'm seeking an open source QR codes image generator component in java (J2SE), but the open source licence mustn't be a GPL licence (needs to be included in a close source project).
BTW, I can't access the web from the project, so no Google API. | Mercer - no, there is an encoder in the library too. com.google.zxing.qrcode.encoder. We provide that in addition to an example web app using Google Chart APIs
It is released under the *The Apache License*, so it allows use of the source code for the development of proprietary software as well as free and open source software. | QR codes image generator in java (open source but no GPL) | [
"",
"java",
"encode",
"qr-code",
""
] |
A recent mention of PostSharp reminded me of this:
Last year where I worked, we were thinking of using PostSharp to inject instrumentation into our code. This was in a Team Foundation Server Team Build / Continuous Integration environment.
Thinking about it, I got a nagging feeling about the way PostSharp operates - it edits the IL that is generated by the compilers. This bothered me a bit.
I wasn't so much concerned that PostSharp would not do its job correctly; I was worried about the fact that this was the first time I could recall hearing about a tool like this. I was concerned that other tools might not take this into account.
Indeed, as we moved along, we *did* have some problems stemming from the fact that PostSharp was getting confused about which folder the original IL was in. This was breaking our builds. It appeared to be due to a conflict with the MSBUILD target that resolves project references. The conflict appeared to be due to the fact that PostSharp uses a temp directory to store the unmodified versions of the IL.
At any rate, I didn't have StackOverflow to refer to back then! Now that I do, I'd like to ask you all if you know of any other tools that edit IL as part of a build process; or whether Microsoft takes that sort of tool into account in Visual Studio, MSBUILD, Team Build, etc.
---
**Update:** Thanks for the answers.
The bottom line seems to be that, at least with VS 2010, Microsoft really *should* be aware that this sort of thing can happen. So if there are problems in this area in VS2010, then Microsoft may share the blame. | I do know that Dotfuscator, a code obfuscator, does modify the IL of assemblies, and it is used in many build processes.
The IL is modified not only for code obfuscation and protection but also to inject additional functionality into your applications (see our (PreEmptive's) blog posts on Runtime Intelligence [here](http://blogs.preemptive.com/post/What-Is-Runtime-Intelligence.aspx)).
In addition Microsoft's Common Compiler Infrastructure has the ability to read in assemblies, modify them and rewrite them. See [CodePlex](http://cciast.codeplex.com/) for the project. | I know about [Mono.Cecil](http://mono-project.com/Cecil), a framework library expanding the System.Reflection toolset, it's used by the [Lin](http://www.codeproject.com/KB/cs/LinFuPart1.aspx)[Fu](http://code.google.com/p/linfu/) project.
I'm not sure about build process support, you should check them out for size. | Which Tools Perform Post-Compile Modification of IL? | [
"",
"c#",
"postsharp",
"intermediate-language",
""
] |
My application is having memory problems, including copying lots of strings about, using the same strings as keys in lots of hashtables, etc. I'm looking for a base class for my strings that makes this very efficient.
I'm hoping for:
* String interning (multiple strings of the same value use the same memory),
* copy-on-write (I think this comes for free in nearly all std::string implementations),
* something with ropes would be a bonus (for O(1)-ish concatenation).
My platform is g++ on Linux (but that is unlikely to matter).
Do you know of such a library? | If most of your strings are immutable, the [Boost Flyweight](http://www.boost.org/doc/libs/1_39_0/libs/flyweight/doc/index.html) library might suit your needs.
It will do the string interning, but I don't believe it does copy-on-write. | > copy-on-write (I think this comes for free in nearly all std::string implementations)
I don't believe this is the case any longer. Copy-on-write causes problems when you modify the strings through iterators: in particular, this either causes unwanted results (i.e. no copy, and both strings get modified) or an unnecessary overhead (since the iterators cannot be implemented purely in terms of pointers: they need to perform additional checks when being dereferenced).
Additionally, all modern C++ compilers perform [NRVO](http://en.wikipedia.org/wiki/Return_value_optimization) and eliminate the need for copying return value strings in most cases. Since this has been one of the most common cases for copy-on-write semantics, it has been removed due to the aforementioned downsides. | Memory-efficient C++ strings (interning, ropes, copy-on-write, etc) | [
"",
"c++",
"string",
"performance",
"memory-management",
""
] |
I got this problem on my exam for the subject Principles of Programming Languages. I thought about it for a long time but I still did not understand the problem.
Problem:
Below is a C program, executed in an MSVC++ 6.0 environment on a PC with this configuration ~ CPU Intel 1.8GHz, RAM 512MB
```
#include <stdio.h>
#include <time.h>

#define M 10000
#define N 5000
int a[M][N];
int main() {
int i, j;
time_t start, stop;
// Part A
start = time(0);
for (i = 0; i < M; i++)
for (j = 0; j < N; j++)
a[i][j] = 0;
stop = time(0);
printf("%d\n", stop - start);
// Part B
start = time(0);
for (j = 0; j < N; j++)
for (i = 0; i < M; i++)
a[i][j] = 0;
stop = time(0);
printf("%d\n", stop - start);
}
```
Explain why does part A only execute in **1s**, but it took part B **8s** to finish? | Row-major order versus column-major order.
Recall first that all multi-dimensional arrays are represented in memory as a contiguous block of memory. Thus the multidimensional array A(m,n) might be represented in memory as
a00 a01 a02 ... a0n a10 a11 a12 ... a1n a20 ... amn
In the first loop, you run through this block of memory sequentially. Thus, you run through the array traversing the elements in the following order
```
a00 a01 a02 ... a0n a10 a11 a12 ... a1n a20 ... amn
1 2 3 n n+1 n+2 n+3 ... 2n 2n+1 mn
```
In the second loop, you skip around in memory and run through the array traversing the elements in the following order
```
a00 a10 a20 ... am0 a01 a11 a21 ... am1 a02 ... amn
```
or, perhaps more clearly,
```
a00 a01 a02 ... a10 a11 a12 ... a20 ... amn
1 m+1 2m+1 2 m+2 2m+2 3 mn
```
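The two traversal orders can be checked with plain index arithmetic (a quick sketch in Python, for illustration only, using small example dimensions): C stores `a[i][j]` at flat offset `i*N + j`, so the inner loop determines the stride.

```python
M, N = 4, 3  # small example dimensions, purely illustrative

def offset(i, j):
    """Flat memory offset of a[i][j] in a row-major (C) layout."""
    return i * N + j

# Part A order (inner loop over j): consecutive offsets, stride 1.
part_a = [offset(i, j) for i in range(M) for j in range(N)]
assert part_a == list(range(M * N))

# Part B order (inner loop over i): offsets jump by N on each step, stride N.
part_b = [offset(i, j) for j in range(N) for i in range(M)]
assert part_b[:5] == [0, 3, 6, 9, 1]
```

With the real dimensions (M=10000, N=5000), each stride-N step lands 20 KB away, so almost every access in Part B misses the cache.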
All that skipping around really hurts you because you don't gain advantages from caching. When you run through the array sequentially, neighboring elements are loaded into the cache. When you skip around through the array, you don't get these benefits and instead keep getting cache misses harming performance. | This has to do with how the array's memory is laid out and how it gets loaded into the cache and accessed: in version A, when accessing a cell of the array, the neighbors get loaded with it into the cache, and the code then immediately accesses those neighbors. In version B, one cell is accessed (and its neighbors loaded into the cache), but the next access is far away, on the next row, and so the whole cache line was loaded but only one value used, and another cache line must be filled for each access. Hence the speed difference. | Speed of C program execution | [
"",
"c++",
"programming-languages",
""
] |
I have an array of ints like this: [32,128,1024,2048,4096]
Given a specific value, I need to get the closest value in the array that is equal to, or higher than, the value.
I have the following code
```
private int GetNextValidSize(int size, int[] validSizes)
{
int returnValue = size;
for (int i = 0; i < validSizes.Length; i++)
{
if (validSizes[i] >= size)
{
returnValue = validSizes[i];
break;
}
}
return returnValue;
}
```
It works, but is there any better/faster way to do it? The array will never contain more than 5-10 elements.
**Clarification:** I actually want to return the original value/size if it is bigger than any of the valid sizes. The validSizes array can be considered to always be sorted, and it will always contain at least one value. | Given that you have only 5-10 elements I would consider this to be ok. | With only 5-10 elements, definitely go with the simplest solution. Getting a binary chop working would help with a larger array, but it's got at least the *potential* for off-by-one errors.
Rather than breaking, however, I would return directly from the loop to make it even simpler, and use foreach as well:
```
private int GetNextValidSize(int size, int[] validSizes)
{
    foreach (int validSize in validSizes)
    {
        if (validSize >= size)
        {
            return validSize;
}
}
// Nothing valid
return size;
}
```
You can make this even simpler with LINQ:
```
// Make sure we return "size" if none of the valid sizes are greater
return validSizes.Concat(new[] { size })
.First(validSize => validSize >= size);
```
It would be even simpler without the `Concat` step... or if there were a `Concat` method that just took a single element. That's easy to write, admittedly:
```
public static IEnumerable<T> Concat<T>(this IEnumerable<T> source,
T tail)
{
foreach (T element in source)
{
yield return element;
}
yield return tail;
}
```
then it's just:
```
return validSizes.Concat(size).First(validSize => validSize >= size);
```
Alternatively (and I realise I'm presenting way more options than are really needed here!) an overload for `FirstOrDefault` which took the default value to return:
```
public static T FirstOrDefault<T>(this IEnumerable<T> source,
Func<T, bool> predicate,
T defaultValue)
{
foreach (T element in source)
{
if (predicate(element))
{
return element;
}
}
return defaultValue;
}
```
Call it like this:
```
return validSizes.FirstOrDefault(validSize => validSize >= size, size);
```
Both of these are overkill for a single use, but if you're already building up a library of extra LINQ operators, it could be useful. | Round integer to nearest high number in array | [
"",
"c#",
"arrays",
""
] |
How do you take a couple of data tables and put them in a dataset and relate (that doesn't even sound like correct English) them?
I know how to create datatables. | Here is an example from one of my classes
```
// create the relationship between Booking and Booking_MNI
DataRelation relBookingMNI;
relBookingMNI = new DataRelation("BookingToBookingMNI",dsBooking.Tables["Booking"].Columns["Record_Id"],dsBooking.Tables["Booking_MNI"].Columns["booking_record_id"]);
dsBooking.Relations.Add(relBookingMNI);
```
dsBooking is my main dataset that contains 2 tables Booking and Booking\_MNI
Where the Record\_Id is the primary key and booking\_record\_id is the foreign key
I changed the code below to match my first example. But I think this is what you are looking for. In our production code this will produce the plus "+" symbol to the left of the row which would allow you to drill into the related table. Again I took production code and made it look like the first example so I don't know if it will compile but it should get you going in the right direction.
```
DataTable dtBooking = ds.Tables[0];
DataTable dtBooking_MNI = ds.Tables[1];
dtBooking.PrimaryKey = new DataColumn[] {dtBooking.Columns["Record_Id"]};
dtBooking_MNI.PrimaryKey = new DataColumn[] {dtBooking_MNI.Columns["booking_Record_Id"]};
/* Setup DataRelation between the DataTables */
DataColumn[] dcBookingColsArray = new DataColumn[1] {dtBooking.Columns["Record_Id"]};
DataColumn[] dcBookingMNIColsArray = new DataColumn[1] {dtBooking_MNI.Columns["booking_record_Id"]};
DataRelation relBooking_To_MNI = new DataRelation("Booking_To_MNI",dcBookingColsArray,dcBookingMNIColsArray);
ds.Relations.Add(relBooking_To_MNI);
// grid where you want to display the relationship
grdBooking.DataSource = ds;
``` | Look at the DataRelation class. It is what is used in a DataSet to relate two DataTables together. | .Net C# DataTables and DataSets, How to relate tables | [
"",
"c#",
"datatable",
"dataset",
""
] |
I am just a little curious about the Java memory model.
Here is what I thought.
If i have the following class
```
public class Test {
int[] numbers = new int[Integer.MAX_VALUE]; // kids dont try this at home
void increment(int ind){
numbers[ind]++;
}
int get(int ind){
return numbers[ind];
}
}
```
There are multiple reader (get()) threads and one writer (increment()) thread accessing this class.
The question is: is there actually any synchronization at all that I have to do in order to leave the class in a consistent state after each method call?
The reason I am asking is that I am curious whether the elements in the array are cached in some way by the JVM, or does this only apply to class members? If the elements inside the array could be cached, is there a way to define them as volatile?
Thanks
Roman | You will definitely have to use some sort of synchronization (either on your class or the underlying data structure) in order to ensure the data is left in a consistent state after method calls. Consider the following situations, with two Threads A and B, with the integer array initially containing all zero values.
* Thread A calls increment(0). The post-increment operation is not atomic; you can actually consider it to be broken down into at least three steps:
  + Read the current value.
  + Add one to the current value.
  + Store the value.
* Thread B also calls increment(0). If this happens soon after Thread A has done the same, they will both read the *same initial value* for the element at index 0 of the array.
* At this point, both Thread A and B have read a value of '0' for the element they want to increment. Both will increment the value to '1' and store it back in the first element of the array.
* Thus, only the work of the Thread that *last writes* to the array is seen.
The situation is similar if you had a `decrement()` method. If both `increment()` and `decrement()` were called at near-simultaneous times by two separate Threads, there is no telling what the outcome would be. The value would either be incremented by one or decremented by one, and the operations would not "cancel" each other out.
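The lost update described above can be simulated deterministically (a sketch in Python for illustration; the same read-modify-write steps apply to Java's `numbers[ind]++`):

```python
# Shared element, initially 0; two "threads" each perform one increment,
# but both read before either writes -- the interleaving described above.
value = 0

a_read = value        # Thread A: read the current value (0)
b_read = value        # Thread B: read the same value (0)
value = a_read + 1    # Thread A: add one and store -> 1
value = b_read + 1    # Thread B: add one and store -> 1, clobbering A's write

assert value == 1     # two increments happened, yet only one is visible
```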
**EDIT: Update to reflect Roman's (OP) comment below**
Sorry, I mis-read the post. I think I understand your question, which is along the lines of:
> "If I declare an array as `volatile`,
> does that mean access to its elements
> are treated as `volatile` as well?"
The quick answer is No: Please see [this article for more information](http://www.javamex.com/tutorials/volatile_arrays.shtml); the information in the previous answers here is also correct. | As an alternative to synchronizing those methods, you could also consider replacing the int[] with an array of [AtomicIntegers](http://java.sun.com/javase/6/docs/api/java/util/concurrent/atomic/AtomicInteger.html). This would have the benefit/downside (depending on your application) of allowing concurrent access to different elements in your list. | Caching of instances | [
"",
"java",
"multithreading",
""
] |
I have a question very similar to this:
> [How to know if a line intersects a plane in C#?](https://stackoverflow.com/q/30080/3924118)
I am searching for a method (in C#) that tells if a line is intersecting an arbitrary polygon.
I think the [algorithm by Chris Marasti-Georg](https://stackoverflow.com/a/30109/3924118) was very helpful, but it is missing the most important method, i.e. line-to-line intersection.
Does anyone know of a line intersection method to complete Chris Marasti-Georg's code or have anything similar?
Is there a built-in code for this in C#?
This method is for use with the Bing Maps algorithm enhanced with a forbidden area feature. The resulting path must not pass through the forbidden area (the arbitrary polygon). | There is no built-in code for edge detection in the .NET Framework.
Here's code (ported to C#) that does what you need (the actual algorithm is found at comp.graphics.algorithms on Google Groups):
```
public static PointF FindLineIntersection(PointF start1, PointF end1, PointF start2, PointF end2)
{
float denom = ((end1.X - start1.X) * (end2.Y - start2.Y)) - ((end1.Y - start1.Y) * (end2.X - start2.X));
// AB & CD are parallel
if (denom == 0)
return PointF.Empty;
float numer = ((start1.Y - start2.Y) * (end2.X - start2.X)) - ((start1.X - start2.X) * (end2.Y - start2.Y));
float r = numer / denom;
float numer2 = ((start1.Y - start2.Y) * (end1.X - start1.X)) - ((start1.X - start2.X) * (end1.Y - start1.Y));
float s = numer2 / denom;
if ((r < 0 || r > 1) || (s < 0 || s > 1))
return PointF.Empty;
// Find intersection point
PointF result = new PointF();
result.X = start1.X + (r * (end1.X - start1.X));
result.Y = start1.Y + (r * (end1.Y - start1.Y));
return result;
}
``` | Slightly off topic, but if the line is **infinite** I think there's a much simpler solution:
The line does not go through the polygon if all the points lie on the same *side* of the line.
With help from these two:
* [Using linq or otherwise, how do check if all list items have the same value and return it, or return an “otherValue” if they don’t?](https://stackoverflow.com/questions/4390406/using-linq-or-otherwise-how-do-check-if-all-list-items-have-the-same-value-and)
* [Determine which side of a line a point lies](https://stackoverflow.com/questions/3461453/determine-which-side-of-a-line-a-point-lies)
I got this little gem:
```
public class PointsAndLines
{
public static bool IsOutside(Point lineP1, Point lineP2, IEnumerable<Point> region)
{
if (region == null || !region.Any()) return true;
var side = GetSide(lineP1, lineP2, region.First());
return
side == 0
? false
: region.All(x => GetSide(lineP1, lineP2, x) == side);
}
public static int GetSide(Point lineP1, Point lineP2, Point queryP)
{
return Math.Sign((lineP2.X - lineP1.X) * (queryP.Y - lineP1.Y) - (lineP2.Y - lineP1.Y) * (queryP.X - lineP1.X));
}
}
``` | How to tell if a line intersects a polygon in C#? | [
"",
"c#",
"geometry",
"2d",
"bing-maps",
"computational-geometry",
""
] |
I have a textbox bound to a property. The property continuously gets updated from a timer. What I'm trying to do is to make it so that when I'm in the middle of typing something in the textbox, the textbox should stop updating itself from the property. Is there a way to do that?
Thanks! | I'm not a WPF or databinding expert, so there may be a better way, but I'd say you'll have to handle the GotFocus and LostFocus events and add/remove the databinding in those event handlers. | I'd do something in the order of:
```
public void Timer_Tick(object sender,EventArgs eArgs)
{
    if (!Textbox.IsFocused)  // GotFocus is an event, not a method; check the focus state instead
{
// Regular updating of textbox
}
}
``` | Problem with manually editing a textbox bound to a property that's continuously updated | [
"",
"c#",
"wpf",
"data-binding",
"binding",
""
] |
I can get the unique id, like className@2345, of my object by calling its toString() method, but after I override the toString() method, how can I get that unique id? | You can call System.identityHashCode() and pass your object as a parameter, then you will get it. | More precisely
```
obj.getClass().getName() + "@" + Integer.toHexString(System.identityHashCode(obj))
``` | How can I get the unique id of an Object after overriding its toString() method? | [
"",
"java",
""
] |
I'm interested in UI testing a client-only Java application. What is the most popular framework for doing so? What is your recommended framework? Also, why doesn't Selenium do this (or does it)? It seems to me that anything that can test a web app should be able to test a Windows app. | Try the [FEST framework](https://code.google.com/p/fest/).
This is what was previously known as Abbot, if I'm not mistaken. I use this for automated testing and it seems to be very simple and convenient. Simple things are made easy and complex things are not rocket science there. I considered UISpec4J, but it didn't suit me, because there is no technical possibility to test Drag'n'Drop, while that's a must. Besides, it's quite difficult to develop tests when you don't actually see what's happening (UISpec4J uses a fake, invisible look'n'feel).
The only thing that causes trouble in FEST (also by design) is that it grabs the mouse and doesn't let you do anything while it is running a test.
"",
"java",
"testing",
"selenium",
"gui-testing",
"ui-testing",
""
] |
I'm trying to write php script that would:
* connect to mysql
* create database
* create user and password
* add user to database
* import prepared sql file to database
The thing is that it'll be on a shared host (and I'm trying to make it universal so it works with different hosts). I guess the database and db user will be prefixed with my account name (what I use for FTP, to log in to the control panel, etc.) on shared hosts? Something like mylogin\_dbname and mylogin\_dbuser. It's visible for example in cPanel - when I add a database I enter its name, and after it's created cPanel shows it as mylogin\_somedb. How do I make my script work with this on multiple different shared hosts - add my prefix automatically depending on my main login?
I'm currently working with this code (I don't have a clue if it works; it's just what came to mind):
```
<?php
mysql_connect("host", "user", "password"); // Connection to MySQL
$query = "CREATE DATABASE somedb;
USE somedb;
SOURCE path/to/sqlfile.sql;
CREATE USER someuser IDENTIFIED BY PASSWORD 'somepass';
GRANT SELECT,INSERT,UPDATE,DELETE ON somedb.* TO 'someuser'@'host';";
$arr= explode( ';', $query );
foreach( $arr as $command )
{
mysql_query( $command );
}
mysql_close(); // Disconnection from MySQL
?>
``` | [KISS principle](http://en.wikipedia.org/wiki/KISS_principle): just use phpMyAdmin? It's almost certainly installed. If it's not, [install it](http://phpmyadmin.net/).
Its import capability is magnificent. If your database is by any chance too big, gzip it. If it's still too big, try splitting it up in a few pieces. I doubt you need to transfer it as a single big transaction. Do you?
---
After the explanation in the first comment, well, here goes. This is my very simplistic script, which does what you want - except it doesn't look at the separators: one query == one line.
```
<link rel="stylesheet" href="style/contents.css"/>
<?
function timesanitize($v) {
if ($v > 0)
return round($v, 4);
else
return 0;
}
$startmt = microtime();
include_once 'include/db.php';
$f = fopen("db.sql","r");
echo dbGetEngine() . "<br>";
echo "<ul>";
do {
$l = rtrim(fgets($f));
if (strlen($l) == 0)
continue;
if (substr($l, 0, 1) == '#')
continue;
$l = str_replace(
array("\\n"),
array("\n"),
$l);
if (dbGetEngine() == "pgsql")
$l = str_replace(
array("IF NOT EXISTS", "LONGBLOB"),
array("", "TEXT"),
$l);
try {
echo "<li>".nl2br(htmlspecialchars($l));
$mt = microtime();
$db->query($l);
echo "<ul><li>ok - " . timesanitize(microtime() - $mt) . "</ul>";
} catch (PDOException $e) {
echo "<ul><li>".$e->getMessage() . "</ul>";
}
} while (!feof($f));
fclose($f);
echo 'total: ' . timesanitize(microtime() - $startmt);
?>
```
It also outputs a small statistic of how long each query took. It's based around PDO; I believe PDO was introduced in PHP5.1 or PHP5.2. I think it should be trivial to modify it to work directly with `mysql_*()` functions, if for some reason you prefer that.
And once again: yes, I know it sucks. But as long as it Works For Me (tm), and possibly You... :-)
---
To complete the code, here are `include/db.php` and a sample `include/config.php`:
`include/db.php`:
```
<?
include_once 'include/config.php';
try {
$attribs =
array(
PDO::ATTR_PERSISTENT => $config['db']['persistent'],
PDO::ATTR_ERRMODE => $config['db']['errormode']
);
$db = new PDO(
$config['db']['uri'],
$config['db']['user'],
$config['db']['pass'],
$attribs
);
$db->query("SET NAMES 'utf8'");
$db->query("SET CHARACTER SET 'utf8'");
} catch (PDOException $e) {
print "Error!: " . $e->getMessage() . "<br/>";
die();
}
function dbGetEngine() {
global $config;
return substr($config['db']['uri'], 0, strpos($config['db']['uri'], ':'));
}
?>
```
`include/config.php`:
```
<?
//$config['db']['uri'] = 'sqlite:' . realpath('.') . '/site.db'; // PDO's database access URI
$config['db']['uri'] = 'mysql:host=localhost;dbname=sitedb'; // server should be : 195.78.32.7
//$config['db']['uri'] = 'pgsql:host=localhost;dbname=sitedb';
$config['db']['user'] = 'user_goes_here'; // database username
$config['db']['pass'] = 'pass_goes_here'; // database password
$config['db']['persistent'] = false; // should the connection be persistent
$config['db']['errormode'] = PDO::ERRMODE_EXCEPTION; // PDO's error mode
?>
```
Included are sample connection strings for SQLite, MySQL and PostgreSQL. | The fact that this is on a shared host shouldn't make that much of a difference, in my opinion. Whatever your environment, you'll have to use your username, and the database name either gets prefixed with it or doesn't, according to the hosting setup. That said, it is probably *smart* to adopt a naming convention with your username (or some other string) as a prefix.
As to what you're trying to do I have some suggestions:
1. You should run each statement separately (as you do) but add error checking between each step so you don't attempt to source the file if the database wasn't created for instance.
2. Maybe create separate functions for each kind of operation and send in parameters for the values. This would clean up the code in the longer run.
3. If you have a situation where something is automatically prefixed to the name you provide and you have to detect the final name, you can probably do a "`show databases;`" query and search for the name you gave in the results. Then use the complete name for the resulting queries. | Create and import mysql database on shared host in php | [
"",
"php",
"mysql",
""
] |
How do I set the source IP/interface with Python and urllib2? | Unfortunately the stack of standard library modules in use (urllib2, httplib, socket) is somewhat badly designed for the purpose -- at the key point in the operation, `HTTPConnection.connect` (in httplib) delegates to `socket.create_connection`, which in turn gives you no "hook" whatsoever between the creation of the socket instance `sock` and the `sock.connect` call, for you to insert the `sock.bind` just before `sock.connect` which is what you need to set the source IP (I'm evangelizing widely for NOT designing abstractions in such an airtight, excessively-encapsulated way -- I'll be speaking about that at OSCON this Thursday under the title "Zen and the Art of Abstraction Maintenance" -- but here your problem is how to deal with a stack of abstractions that WERE designed this way, sigh).
When you're facing such problems you only have two not-so-good solutions: either copy, paste and edit the misdesigned code into which you need to place a "hook" that the original designer didn't cater for; or, "monkey-patch" that code. Neither is GOOD, but both can work, so at least let's be thankful that we have such options (by using an open-source and dynamic language). In this case, I think I'd go for monkey-patching (which is bad, but copy and paste coding is even worse) -- a code fragment such as:
```
import socket
true_socket = socket.socket
def bound_socket(*a, **k):
sock = true_socket(*a, **k)
sock.bind((sourceIP, 0))
return sock
socket.socket = bound_socket
```
Depending on your exact needs (do you need all sockets to be bound to the same source IP, or...?) you could simply run this before using `urllib2` normally, or (in more complex ways of course) run it at need just for those outgoing sockets you DO need to bind in a certain way (then each time restore `socket.socket = true_socket` to get out of the way for future sockets yet to be created). The second alternative adds its own complications to orchestrate properly, so I'm waiting for you to clarify whether you do need such complications before explaining them all.
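One way to orchestrate the "restore `socket.socket = true_socket`" step mentioned above is a context manager; this is just an illustrative sketch of that idea, not code from the answer:

```python
import socket
from contextlib import contextmanager

true_socket = socket.socket

@contextmanager
def bound_sockets(source_ip):
    """Monkey-patch socket.socket so new sockets bind to source_ip,
    undoing the patch when the block exits."""
    def bound_socket(*a, **k):
        sock = true_socket(*a, **k)
        sock.bind((source_ip, 0))
        return sock
    socket.socket = bound_socket
    try:
        yield
    finally:
        socket.socket = true_socket  # always restore, even on error

# e.g.  with bound_sockets('192.0.2.10'):
#           ... urllib2 calls made here use the bound sockets ...
```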
AKX's good answer is a variant on the "copy / paste / edit" alternative so I don't need to expand much on that -- note however that it doesn't exactly reproduce `socket.create_connection` in its `connect` method, see the source [here](http://svn.python.org/view/python/trunk/Lib/socket.py?revision=73145&view=markup) (at the very end of the page) and decide what other functionality of the `create_connection` function you may want to embody in your copied/pasted/edited version if you decide to go that route. | This seems to work.
```
import urllib2, httplib, socket
class BindableHTTPConnection(httplib.HTTPConnection):
def connect(self):
"""Connect to the host and port specified in __init__."""
self.sock = socket.socket()
self.sock.bind((self.source_ip, 0))
if isinstance(self.timeout, float):
self.sock.settimeout(self.timeout)
self.sock.connect((self.host,self.port))
def BindableHTTPConnectionFactory(source_ip):
def _get(host, port=None, strict=None, timeout=0):
bhc=BindableHTTPConnection(host, port=port, strict=strict, timeout=timeout)
bhc.source_ip=source_ip
return bhc
return _get
class BindableHTTPHandler(urllib2.HTTPHandler):
def http_open(self, req):
return self.do_open(BindableHTTPConnectionFactory('127.0.0.1'), req)
opener = urllib2.build_opener(BindableHTTPHandler)
opener.open("http://google.com/").read() # Will fail, 127.0.0.1 can't reach google.com.
```
You'll need to figure out some way to parameterize "127.0.0.1" there, though. | Source interface with Python and urllib2 | [
"",
"python",
"urllib2",
""
] |
I'm trying to access UPS tracking info and, as per their example, I need to build a request like so:
```
<?xml version="1.0" ?>
<AccessRequest xml:lang='en-US'>
<AccessLicenseNumber>YOURACCESSLICENSENUMBER</AccessLicenseNumber>
<UserId>YOURUSERID</UserId>
<Password>YOURPASSWORD</Password>
</AccessRequest>
<?xml version="1.0" ?>
<TrackRequest>
<Request>
<TransactionReference>
<CustomerContext>guidlikesubstance</CustomerContext>
</TransactionReference>
<RequestAction>Track</RequestAction>
</Request>
<TrackingNumber>1Z9999999999999999</TrackingNumber>
</TrackRequest>
```
I'm having a problem creating this with 1 XmlDocument in C#. When I try to add the second:
`<?xml version="1.0" ?> or the <TrackRequest>`
it throws an error:
> System.InvalidOperationException: This
> document already has a
> 'DocumentElement' node.
I'm guessing this is because a standard XmlDocument would only have 1 root node. Any ideas?
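That guess is right: XML well-formedness requires exactly one root element, which is easy to confirm (an illustrative check, in Python rather than C#):

```python
import xml.etree.ElementTree as ET

# Two root elements in one document, as in the UPS example above.
two_roots = "<AccessRequest/><TrackRequest/>"

try:
    ET.fromstring(two_roots)
    well_formed = True
except ET.ParseError:  # "junk after document element"
    well_formed = False

assert not well_formed
```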
Here's my code so far:
```
XmlDocument xmlDoc = new XmlDocument();
XmlDeclaration xmlDeclaration = xmlDoc.CreateXmlDeclaration("1.0", "utf-8", null);
XmlElement rootNode = xmlDoc.CreateElement("AccessRequest");
rootNode.SetAttribute("xml:lang", "en-US");
xmlDoc.InsertBefore(xmlDeclaration, xmlDoc.DocumentElement);
xmlDoc.AppendChild(rootNode);
XmlElement licenseNode = xmlDoc.CreateElement("AccessLicenseNumber");
XmlElement userIDNode = xmlDoc.CreateElement("UserId");
XmlElement passwordNode = xmlDoc.CreateElement("Password");
XmlText licenseText = xmlDoc.CreateTextNode("mylicense");
XmlText userIDText = xmlDoc.CreateTextNode("myusername");
XmlText passwordText = xmlDoc.CreateTextNode("mypassword");
rootNode.AppendChild(licenseNode);
rootNode.AppendChild(userIDNode);
rootNode.AppendChild(passwordNode);
licenseNode.AppendChild(licenseText);
userIDNode.AppendChild(userIDText);
passwordNode.AppendChild(passwordText);
XmlElement rootNode2 = xmlDoc.CreateElement("TrackRequest");
xmlDoc.AppendChild(rootNode2);
``` | An XML document can only ever have one root node. Otherwise it's not well-formed. You will need to create 2 XML documents and join them together if you need to send both at once. | It's throwing an exception because you are trying to create invalid XML. XmlDocument will only generate well-formed XML.
You could do it using an XmlWriter and setting XmlWriterSettings.ConformanceLevel to Fragment, or you could create two XmlDocuments and write them out into the same stream. | C# XmlDocument Nodes | [
"",
"c#",
"xml",
"xmldocument",
""
] |
```
$to = "jijodasgupta@gmail.com";
$subject = "Hi!";
$body = "Hi,\n\nHow are you?";
if (mail($to, $subject, $body)) {
    echo("<p>Message successfully sent!</p>");
} else {
    echo("<p>Message delivery failed...</p>");
}
```
Wrote a basic php sendmail code that but it gives me the following error:
> Warning: mail() [function.mail]: "sendmail\_from" not set in php.ini or
> custom "From:" header missing in C:\xampp\htdocs\mail.php on line 5
> Message delivery failed...
I changed the `php.ini` file and put `sendmail_from=jijodasgupta@gmail.com` in it, but the problem still persists.
Writing the mail script for the first time.
Am I doing something wrong? Is there a better way to do this? | > additional\_headers (optional)
>
> String to be inserted at the end of
> the email header.
>
> This is typically used to add extra
> headers (From, Cc, and Bcc). Multiple
> extra headers should be separated with
> a CRLF (\r\n).
>
> Note: When sending mail, the mail must
> contain a From header. This can be set
> with the additional\_headers parameter,
> or a default can be set in php.ini.
> Failing to do this will result in an
> error message similar to Warning:
> mail(): "sendmail\_from" not set in
> php.ini or custom "From:" header
> missing. The From header sets also
> Return-Path under Windows.
I hope that helps. | First of all, check that you edited the correct php.ini - add `phpinfo();` to your script to output diagnostic information, including the location of php.ini. You should also be able to see the configured "sendmail\_from" value there too.
Failing that, provide a From header, as indicated by [usoban](https://stackoverflow.com/questions/1104555/php-send-mail-code-not-working/1104583#1104583)
```
$hdrs="From: jijodasgupta@gmail.com";
mail($to, $subject, $body, $hdrs);
``` | php send mail code not working | [
"",
"sendmail",
"php",
""
] |
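For comparison, here is how the same "always set an explicit From header" advice looks in Python's standard `email` library. This is purely an illustrative analog of the PHP fix above; actually sending the message would additionally require an SMTP server via `smtplib`.

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "jijodasgupta@gmail.com"  # the header mail() complained about
msg["To"] = "jijodasgupta@gmail.com"
msg["Subject"] = "Hi!"
msg.set_content("Hi,\n\nHow are you?")

# The From header is now explicit, so no server-side default is needed.
print(msg["From"])
```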
I have a string in PHP (came from some data source), which represents a formatted unsigned 32-bit integer.
I need to store it into a MySQL database as a *signed* 32-bit integer, so that later I can retrieve it from PHP and use it as a (possibly negative) signed integer constant (since PHP doesn't have unsigned integers).
So, what I need is a conversion method, either for PHP or MySQL. It shouldn't be platform-dependent (no endian / 32/64-bit issues).
I know how to convert a signed integer into unsigned using MySQL:
```
select CAST((-1062726980 & 0xFFFFFFFF) AS UNSIGNED INTEGER);
+------------------------------------------------------+
| CAST((-1062726980 & 0xFFFFFFFF) AS UNSIGNED INTEGER) |
+------------------------------------------------------+
| 3232240316 |
+------------------------------------------------------+
```
But I can't get it to work the other way around (note: MySQL uses 64-bit arithmetic when doing casts).
Thanks. | This:
```
$val = (bccomp("2147483647", $val) < 0) ? bcsub($val, "4294967296") : $val;
```
seems to work, even though it's somewhat slow. | If you just cast the number to an integer in PHP, it will do the trick.
```
echo (int)3232240316 . "\n";
```
gives
```
-1062726980
```
Note: if you want to cast a signed int to an unsigned int in PHP, just do this:
```
$number += 4294967296;
```
Example:
```
$number = -1062726980;
echo $number . "\n";
$number += 4294967296;
echo $number . "\n";
```
gives:
```
-1062726980
3232240316
``` | How to cast a 32-bit integer from unsigned to signed in MySQL or PHP? | [
"",
"php",
"mysql",
"casting",
"unsigned",
"signed",
""
] |
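The accepted `bccomp` trick is just a two's-complement reinterpretation. As a cross-check of the arithmetic, here is the same conversion in plain Python (illustration only; the PHP/MySQL versions in the answers are what the question is about):

```python
U32 = 1 << 32  # 4294967296

def to_signed32(u):
    """Reinterpret an unsigned 32-bit value as a signed 32-bit value."""
    return u - U32 if u >= U32 // 2 else u

def to_unsigned32(s):
    """Reinterpret a signed 32-bit value as unsigned (mod 2**32)."""
    return s & 0xFFFFFFFF

print(to_signed32(3232240316))    # -1062726980
print(to_unsigned32(-1062726980)) # 3232240316
```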
I need to strip leading spaces from a column in Oracle. I've Googled but haven't found any answers except to write my own function which I'd like to avoid.
What's the easiest way to accomplish this? | You can use the [LTRIM](http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/functions083.htm#i77875) Oracle function:
```
SQL> select ltrim(' hello world') from dual;
LTRIM('HELLOWORLD')
-------------------
hello world
```
For ending spaces you can use [RTRIM](http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/functions141.htm#i1018967). And for more options check out [TRIM](http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/functions199.htm#i79689). | Use the [trim](http://www.techonthenet.com/oracle/functions/trim.php) function; it removes all specified characters from either the beginning or the end of a string.
```
trim( [ leading | trailing | both [ trim_character ] ] string1 )
``` | How do you strip leading spaces in Oracle? | [
"",
"sql",
"oracle",
""
] |
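For readers comparing across languages, the LTRIM/RTRIM/TRIM trio maps directly onto Python's string methods (shown here only as an analogy to the Oracle functions above):

```python
s = "   hello world   "

print(s.lstrip())  # like LTRIM: strips leading whitespace only
print(s.rstrip())  # like RTRIM: strips trailing whitespace only
print(s.strip())   # like TRIM(BOTH ...): strips both ends
```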
Currently, I'm doing it in this fashion:
```
class Spam(object):
decorated = None
@classmethod
def decorate(cls, funct):
if cls.decorated is None:
cls.decorated = []
cls.decorated.append(funct)
return funct
class Eggs(Spam):
pass
@Eggs.decorate
def foo():
print "spam and eggs"
print Eggs.decorated # [<function foo at 0x...>]
print Spam.decorated # None
```
I **need** to be able to do this in a subclass as shown. The problem is that I can't seem to figure out how to make the `decorated` field not shared between instances. Right now I have a hackish solution by initially setting it to `None` and then checking it when the function is decorated, but that only works one way. In other words, if I subclass `Eggs` and then decorate something with the `Eggs.decorate` function, it affects all subclasses.
I guess my question is: is it possible to have mutable class fields that don't get shared between base and sub classes? | I figured it out through using metaclasses. Thanks for all who posted. Here is my solution if anybody comes across a similar problem:
```
class SpamMeta(type):
def __new__(cls, name, bases, dct):
SpamType = type.__new__(cls, name, bases, dct)
SpamType.decorated = []
return SpamType
class Spam(object):
__metaclass__ = SpamMeta
@classmethod
def decorate(cls, funct):
cls.decorated.append(funct)
return funct
class Eggs(Spam):
pass
@Eggs.decorate
def foo():
print "spam and eggs"
print Eggs.decorated # [<function foo at 0x...>]
print Spam.decorated # []
``` | I'm fairly sure you can't. I thought about doing this with property(), but unfortunately the class of the class itself--where a property would need to go--is ClassType itself.
You can write your decorator like this, but it changes the interface a little:
```
class Spam(object):
decorated = {}
@classmethod
def get_decorated_methods(cls):
return cls.decorated.setdefault(cls, [])
@classmethod
def decorate(cls, funct):
cls.get_decorated_methods().append(funct)
return funct
class Eggs(Spam):
pass
@Spam.decorate
def foo_and_spam():
print "spam"
@Eggs.decorate
def foo_and_eggs():
print "eggs"
print Eggs.get_decorated_methods() # [<function foo_and_eggs at 0x...>]
print Spam.get_decorated_methods() # [<function foo_and_spam at 0x...>]
``` | Creating a decorator in a class with access to the (current) class itself | [
"",
"python",
"inheritance",
"class",
"decorator",
""
] |
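On Python 3, the same per-subclass registry can be had without writing a metaclass at all, via `__init_subclass__`. This is a sketch of an alternative to the accepted metaclass solution, not a drop-in replacement for the Python 2 code above:

```python
class Spam:
    decorated = []  # Spam's own registry; subclasses get fresh ones below

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        cls.decorated = []  # each subclass gets its own list

    @classmethod
    def decorate(cls, funct):
        cls.decorated.append(funct)
        return funct


class Eggs(Spam):
    pass


@Eggs.decorate
def foo():
    print("spam and eggs")

print(Eggs.decorated)  # one entry: foo
print(Spam.decorated)  # still empty
```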
I've been looking around, and so far haven't managed to find a good way to do this. It's a common problem, I'm sure.
Suppose I have the following:
```
class SomeClass : IComparable
{
private int myVal;
public int MyVal
{
get { return myVal; }
set { myVal = value; }
}
public int CompareTo(object other) { /* implementation here */ }
}
class SortedCollection<T>
{
private T[] data;
public T Top { get { return data[0]; } }
/* rest of implementation here */
}
```
The idea being, I'm going to implement a binary heap, and rather than only support Insert() and DeleteMin() operations, I want to support "peeking" at the highest (or lowest, as the case may be) priority value on the stack. Never did like Heisenberg, and that whole "you can't look at things without changing them" Uncertainty Principle. Rubbish!
The problem, clearly, is that the above provides no means to prevent calling code from modifying MyVal (assuming SortedCollection) via the Top property, which operation has the distinct possibility of putting my heap in the wrong order. Is there any way to prevent modifications from being applied to the internal elements of the heap via the Top property? Or do I just use the code with a warning: "Only stable if you don't modify any instances between the time they're inserted and dequeue'd. YMMV." | To answer your question: **No**, there's no way to implement the kind of behavior you want - as long as T is of reference type (and possibly even with some value-types)
You can't really do much about it. As long as you provide a getter, calling code can modify the internal contents of your data depending on the accessibility of said data (i.e. on properties, fields, and methods).
```
class SomeClass : IComparable
{
private int myVal;
public int MyVal
{
get { return myVal; }
set { myVal = value; }
}
public int CompareTo(object other) { /* implementation here */ }
}
class SortedCollection<T>
{
private T[] data;
public T Top { get { return data[0]; } }
/* rest of implementation here */
}
//..
// calling code
SortedCollection<SomeClass> col;
col.Top.MyVal = 500; // you can't really prevent this
```
**NOTE** What I mean is you can't really prevent it *in the case of classes that you don't control*. In the example, like others have stated you can make MyVal's set private or omit it; but since SortedColleciton is a generic class, ***you can't do anything about other people's structures***.. | You can have a readonly property (that is, a property with only a getter):
```
private int myVal;
public int MyVal { get { return myVal; } }
```
But be careful: this may not always work how you expect. Consider:
```
private List<int> myVals;
public List<int> MyVals { get { return myVals; } }
```
In this case, you can't change which List the class uses, but you can still call that List's `.Add()`, `.Remove()`, etc methods. | Does C#/CLR contain a mechanism for marking the return values of properties as read-only / immutable? | [
"",
"c#",
"immutability",
""
] |
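The caveat in the second answer (a getter cannot stop callers from mutating the object it returns) is language-agnostic. Here is a small Python sketch of the same trap and the usual fix of handing out an immutable snapshot:

```python
class Holder:
    def __init__(self, items):
        self._items = list(items)

    @property
    def items(self):        # getter only: no setter is defined...
        return self._items  # ...but the caller can still mutate the list!

    @property
    def items_safe(self):
        return tuple(self._items)  # hand out an immutable snapshot instead


h = Holder([1, 2, 3])
h.items.append(4)   # mutates internal state despite the "read-only" property
print(h._items)     # [1, 2, 3, 4]
print(h.items_safe) # (1, 2, 3, 4)
```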
I have a web page. When this web page is loaded, I want to execute some JavaScript. This JavaScript uses JQuery. However, it appears that when the page is loaded, the jQuery library has not been fully loaded. Because of this, my JavaScript does not execute properly.
What are some best practices for ensuring that your jQuery library is loaded before executing your JavaScript? | The jQuery library should be included in the `<head>` part of your page:
```
<script src="/js/jquery-1.3.2.min.js" type="text/javascript"></script>
```
Your own code should come after this. JavaScript is loaded linearly and synchronously, so you don't have to worry about it as long as you include the scripts in order.
To make your code execute when the DOM has finished loading (because you can't really use much of the jQuery library until this happens), do this:
```
$(function () {
// Do stuff to the DOM
$('body').append('<p>Hello World!</p>');
});
```
If you're mixing JavaScript frameworks and they are interfering with each other, you may need to use jQuery like this:
```
jQuery(function ($) { // First argument is the jQuery object
// Do stuff to the DOM
$('body').append('<p>Hello World!</p>');
});
``` | There are several answers already, each of them work great but all of them depend on making sure jQuery is downloaded synchronously before any dependencies try to use it. This approach is very safe, but it can also present an inefficiency in that no other resources can be downloaded/run while the jQuery library script is being downloaded and parsed. If you put jQuery in the head using an embedded tag, the users browser will be sitting idle while downloading and processing jQuery when it could be doing other things like downloading images, downloading other scripts, etc.
If you are looking for speedy responsiveness on startup, every millisecond counts.
A huge help to me over the last year or so has been Steve Souders, first when he was at Yahoo, and now at Google. If you have time, check out his slides for [Even Faster Web Sites](http://sites.google.com/site/io/even-faster-web-sites "Even Faster Web Sites") and [High Performance Web Sites](http://stevesouders.com/hpws/rules.php "High Performance Web Sites"). Excellent. For a huge paraphrase, continue here.
When attaching javascript files to the DOM, it has been found that putting an inline <script> tag in the <head> of the document causes most browsers to block while that file is being downloaded and parsed. IE8 and Chrome are exceptions. To get around this, you can create a "script" element via document.createElement, set the appropriate attributes, and then attach it to the document head. This approach does not block in any major browser.
```
<script type='text/javascript'>
// Attaching jQuery the non-blocking way.
var elmScript = document.createElement('script');
elmScript.src = 'jQuery.js'; // change file to your jQuery library
elmScript.type = 'text/javascript';
document.getElementsByTagName('head')[0].appendChild( elmScript );
</script>
```
This ensures that jQuery, while downloading and parsing, is not blocking the downloading and parsing of any other images or javascript files. But this could present a problem if a script that depends on jQuery finishes its download before jQuery does. jQuery won't be there and you will get an error saying $ or jQuery is undefined.
So, to ensure that your code will run, you have to do some dependency checking. If your dependencies are not met, wait for a little while and try again.
```
<script type='text/javascript'>
runScript();
function runScript() {
// Script that does something and depends on jQuery being there.
if( window.$ ) {
// do your action that depends on jQuery.
} else {
// wait 50 milliseconds and try again.
window.setTimeout( runScript, 50 );
}
}
</script>
```
Like I said earlier, there are no problems with the approaches shown before; they are all valid and all work. But this method will ensure that you are not blocking the download of other items on the page while waiting for jQuery.
This approach will work for any file, so if you have to include 10 js files, you can attach each of them using this method. You just have to make sure your check function checks for every dependency before running. | JavaScript - How do I make sure a jQuery is loaded? | [
"",
"javascript",
"jquery",
""
] |
I am trying to get the mercurial revision number/id (it's a hash not a number) programmatically in python.
The reason is that I want to add it to the css/js files on our website like so:
```
<link rel="stylesheet" href="example.css?{% mercurial_revision "example.css" %}" />
```
So that whenever a change is made to the stylesheet, it will get a new url and no longer use the old cached version.
***OR*** if you know where to find good documentation for the mercurial **python module**, that would also be helpful. I can't seem to find it anywhere.
### My Solution
I ended up using subprocess to just run a command that gets the hg node. I chose this solution because the api is not guaranteed to stay the same, but the bash interface probably will:
```
import subprocess
def get_hg_rev(file_path):
pipe = subprocess.Popen(
["hg", "log", "-l", "1", "--template", "{node}", file_path],
stdout=subprocess.PIPE
)
return pipe.stdout.read()
```
example use:
```
> path_to_file = "/home/jim/workspace/lgr/pinax/projects/lgr/site_media/base.css"
> get_hg_rev(path_to_file)
'0ed525cf38a7b7f4f1321763d964a39327db97c4'
``` | It's true there's no official API, but you can get an idea about best practices by reading other extensions, particularly those bundled with hg. For this particular problem, I would do something like this:
```
from mercurial import ui, hg
from mercurial.node import hex
repo = hg.repository('/path/to/repo/root', ui.ui())
fctx = repo.filectx('/path/to/file', 'tip')
hexnode = hex(fctx.node())
```
**Update** At some point the parameter order changed; now it's like this:
```
repo = hg.repository(ui.ui(), '/path/to/repo/root' )
``` | Do you mean [this documentation](https://www.mercurial-scm.org/wiki/MercurialApi)?
Note that, as stated in that page, there is no *official* API, because they still reserve the right to change it at any time. But you can see the list of changes in the last few versions, it is not very extensive. | Mercurial scripting with python | [
"",
"python",
"mercurial",
"revision",
""
] |
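The subprocess approach from the question generalizes nicely to the modern `subprocess.run` API. A hedged sketch follows: the `hg` invocation mirrors the question's workaround, and the demo call uses the Python interpreter itself since Mercurial may not be installed where this runs.

```python
import subprocess
import sys

def run_capture(cmd):
    """Run a command and return its stdout as text, raising on failure."""
    result = subprocess.run(cmd, stdout=subprocess.PIPE, check=True, text=True)
    return result.stdout

def get_hg_rev(file_path):
    # Same idea as the question's workaround, via the modern run() API.
    return run_capture(["hg", "log", "-l", "1", "--template", "{node}", file_path])

# Demonstrate run_capture with the interpreter itself, since hg may be absent.
print(run_capture([sys.executable, "-c", "print('ok')"]).strip())
```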
This question is partly about delegates, and partly about generics.
Given the simplified code:
```
internal sealed class TypeDispatchProcessor
{
private readonly Dictionary<Type, Delegate> _actionByType
= new Dictionary<Type, Delegate>();
public void RegisterProcedure<T>(Action<T> action)
{
_actionByType[typeof(T)] = action;
}
public void ProcessItem(object item)
{
Delegate action;
if (_actionByType.TryGetValue(item.GetType(), out action))
{
// Can this call to DynamicInvoke be avoided?
action.DynamicInvoke(item);
}
}
}
```
I read [elsewhere on SO](https://stackoverflow.com/questions/1115453/upto-what-extent-reflection-should-be-used/1115634#1115634) that invoking a delegate directly (with parenthesis) is orders of magnitude faster than calling `DynamicInvoke`, which makes sense.
For the code sample above, I'm wondering whether I can perform the type checking and somehow improve performance.
*Some context:* I have a stream of objects that get farmed out to various handlers, and those handlers can be registered/unregistered at runtime. The above pattern functions perfectly for my purposes, but I'd like to make it snappier if possible.
One option would be to store `Action<object>` in the `Dictionary`, and wrap the `Action<T>` delegates with another delegate. I haven't yet compared the performance change that this second indirect call would affect. | I strongly suspect that wrapping the calls would be a lot more efficient than using `DynamicInvoke`. Your code would then be:
```
internal sealed class TypeDispatchProcessor
{
private readonly Dictionary<Type, Action<object>> _actionByType
= new Dictionary<Type, Action<object>>();
public void RegisterProcedure<T>(Action<T> action)
{
_actionByType[typeof(T)] = item => action((T) item);
}
public void ProcessItem(object item)
{
Action<object> action;
if (_actionByType.TryGetValue(item.GetType(), out action))
{
action(item);
}
}
}
```
It's worth benchmarking it, but I think you'll find this a lot more efficient. `DynamicInvoke` has to check all the arguments with reflection etc, instead of the simple cast in the wrapped delegate. | So I did some measurements on this.
```
var delegates = new List<Delegate>();
var actions = new List<Action<object>>();
const int dataCount = 100;
const int loopCount = 10000;
for (int i = 0; i < dataCount; i++)
{
Action<int> a = d => { };
delegates.Add(a);
actions.Add(o => a((int)o));
}
var sw = Stopwatch.StartNew();
for (int i = 0; i < loopCount; i++)
{
foreach (var action in actions)
action(i);
}
Console.Out.WriteLine("{0:#,##0} Action<object> calls in {1:#,##0.###} ms",
loopCount * dataCount, sw.Elapsed.TotalMilliseconds);
sw = Stopwatch.StartNew();
for (int i = 0; i < loopCount; i++)
{
foreach (var del in delegates)
del.DynamicInvoke(i);
}
Console.Out.WriteLine("{0:#,##0} DynamicInvoke calls in {1:#,##0.###} ms",
loopCount * dataCount, sw.Elapsed.TotalMilliseconds);
```
I created a number of items to indirectly invoke to avoid any kind of optimisation the JIT might perform.
The results are quite compelling!
```
1,000,000 Action calls in 47.172 ms
1,000,000 Delegate.DynamicInvoke calls in 12,035.943 ms
1,000,000 Action calls in 44.686 ms
1,000,000 Delegate.DynamicInvoke calls in 12,318.846 ms
```
So, in this case, substituting the call to `DynamicInvoke` for an extra indirect call and a cast was approximately **270 times faster**. All in a day's work. | Can Delegate.DynamicInvoke be avoided in this generic code? | [
"",
"c#",
".net",
"generics",
"delegates",
"dynamic-invoke",
""
] |
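The dictionary-of-handlers-keyed-by-type pattern in this question has a direct Python analog in `functools.singledispatch`, shown here purely for comparison: it performs the per-type dispatch and cast-free invocation that the accepted C# answer achieves with wrapping lambdas.

```python
from functools import singledispatch

@singledispatch
def process_item(item):
    # Fallback when no handler is registered for the item's type.
    raise TypeError("no handler registered for %r" % type(item))

@process_item.register(int)
def _(item):
    return "int handler: %d" % item

@process_item.register(str)
def _(item):
    return "str handler: %s" % item

print(process_item(42))
print(process_item("hi"))
```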
How do I format a java.sql.Timestamp to my liking? (to a string, for display purposes) | `java.sql.Timestamp` extends `java.util.Date`. You can do:
```
String s = new SimpleDateFormat("MM/dd/yyyy").format(myTimestamp);
```
Or to also include time:
```
String s = new SimpleDateFormat("MM/dd/yyyy HH:mm:ss").format(myTimestamp);
``` | Use String.format (or [java.util.Formatter](http://java.sun.com/javase/6/docs/api/java/util/Formatter.html)):
```
Timestamp timestamp = ...
String.format("%1$TD %1$TT", timestamp)
```
**EDIT:**
Please see the documentation of Formatter to know what TD and TT mean: click on [java.util.Formatter](http://java.sun.com/javase/6/docs/api/java/util/Formatter.html)
The first 'T' stands for:
```
't', 'T' date/time Prefix for date and time conversion characters.
```
and the character following that 'T':
```
'T' Time formatted for the 24-hour clock as "%tH:%tM:%tS".
'D' Date formatted as "%tm/%td/%ty".
``` | How to format a java.sql Timestamp for displaying? | [
"",
"java",
"datetime",
"formatting",
""
] |
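The `SimpleDateFormat` patterns in the accepted answer have close equivalents in most languages; for instance, the analogous formatting with Python's `datetime` (illustration only; the date value is arbitrary):

```python
from datetime import datetime

ts = datetime(2009, 7, 9, 13, 5, 42)  # stand-in for a java.sql.Timestamp value

print(ts.strftime("%m/%d/%Y"))           # 07/09/2009
print(ts.strftime("%m/%d/%Y %H:%M:%S"))  # 07/09/2009 13:05:42
```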
I'm having a problem getting a `change` event to register with the following code:
```
var map = function(){
function addMapTriggers(){
$("#map_form select").change(getDataWithinBounds);
}
return{
init: function(){
getDataWithinBounds();
addMapTriggers();
}
};
}();
```
and in a jQuery `document.ready`:
```
$(function(){
map.init();
});
```
So I get my initial data, then every time I change one of the selects, I get the map data again (`getDataWithinBounds` sends an ajax call). Problem is, it doesn't work, the change event is never added.
However, if I type `map.init();` in the console, it does work. This is weird; I don't understand how there is any difference whatsoever. What am I missing here? I'm testing this on Safari AND Firefox with the exact same behavior. | I'm so embarrassed right now and I apologize for wasting everyone's time. Of course the above code works; what's different about what I'm doing is that my site is using the jquery selectbox plugin, which styles the selects by creating a UL with click events to act like a select (not my choice, trust me). Unfortunately, I was calling init *before* applying the selectbox code, which itself alters the binding of the selects. Putting the selectbox call first, then calling map.init() completely fixed it.
Many apologies and thanks a lot (SolutionYogi in particular) for taking the time to help me out. | Is 'map\_form' the id of your select? If yes, then you should do
```
$("#map_form").change(getDataWithinBounds);
```
'#map\_form select' will try to hook up the change event to all the selects which are present in the element specified by #map\_form.
**EDIT:**
Noticed one more thing, map.init doesn't really exist.
In your code, map is a named function which is executed immediately which returns an object which has init method. This doesn't mean that 'map' var has the init method.
You want to try something like this,
```
var map = function()
{
function addMapTriggers()
{
$("#map_form select").change(getDataWithinBounds);
}
return {
init: function()
{
getDataWithinBounds();
addMapTriggers();
}
};
}();
```
In the above code, you run an anonymous function which returns an object with an init method, and assign the result to map.
Now the map object will have an init method and you can do this:
```
$(function()
{
map.init();
});
``` | Adding events on load | [
"",
"javascript",
"jquery",
"jquery-events",
""
] |
I was trying to upload a photo on facebook using a browser with no flash, and could see that it didn't work.
I am pretty confident in handling files and related issues using Php and have done some sites allowing users to upload and manage files (images, docs etc). But I never thought about the above flash approach. I googled a bit and saw that there are few scripts available on which I can look how it works using flash.
But my questions are, when I should decide to use flash for user uploads. What are the advantages of using this approach? or disadvantages?
Thanks | Depends on your audience: If you are pretty sure that your users have flash installed, there is nothing against it. It even gives some neat advantages, like:
* Upload of multiple files at once
* Progress bar while uploading
* Instant preview of uploaded media
* etc. | It isn't a choice between Flash and PHP. You need something on the client to send the data and something on the server to receive it.
Ask yourself the following question:
> Does Flash offer anything useful for my project?
The obvious things it lets you do are having a nice UI for selecting multiple files at once, and a simple method for seeing the progress of uploads.
If you decide that it does offer features, then implement it using [progressive enhancement](http://icant.co.uk/articles/pragmatic-progressive-enhancement/). [Flickr](http://www.flickr.com) is an example of this — with Flash you get the fancy uploader, without it you still get a series of regular file inputs. They aren't as nice to use, but they are functional and what would have been used if Flash wasn't an option in the first place. | Flash file upload vs php file upload - which one to choose? | [
"",
"php",
"flash",
"file-upload",
""
] |
I was wondering why MYSQL uses a single equals sign in conditional statements instead of the more typical two equals signs. Is there a technical/historical reason for this? Thanks. | > Hi, I was wondering why MYSQL uses a single equals sign in conditional statements instead of the more typical two equals signs. Is there a technical/historical reason for this? Thanks.
Comparison is much more common in `SQL` than assignment.
That's why `SQL` uses the shorter syntax for the more common operation.
In classical `SQL`, comparison can be distinguished from assignment by context (assignment can appear only in the `SET` clause of an `UPDATE` statement), which is why one operator can be used for both operations.
In `MySQL`'s extension to `SQL`, assignment to a session variable is denoted by `:=`. | More like historical.
It's SQL. It has used a single equals sign for comparison since the early '70s. | Single Equals in MYSQL | [
"",
"sql",
"mysql",
"conditional-statements",
"equals",
""
] |
I'm wondering if you can overload an operator and use it without changing the object's original values.
Edited code example:
```
class Rational{
public:
Rational(double n, double d):numerator_(n), denominator_(d){};
Rational(){}; // default constructor
double numerator() const { return numerator_; } // accessor
double denominator() const { return denominator_; } // accessor
private:
double numerator_;
double denominator_;
};
const Rational operator+(const Rational& a, const Rational& b)
{
Rational tmp;
tmp.denominator_ = (a.denominator() * b.denominator());
tmp.numerator_ = (a.numerator() * b.denominator());
tmp.numerator_ += (b.numerator() * a.denominator());
return tmp;
}
```
I made the accessors const methods, but I'm still getting a privacy error for every tmp.denominator\_ / numerator\_. | What you're looking for are the "binary" addition and subtraction operators:
```
const Rational operator+(const Rational& A, const Rational& B)
{
Rational result;
...
return result;
}
```
**update** (in response to new code and comments):
You are getting that error because your accessor functions are not declared as constant functions, so the compiler has to assume that they might modify the original object. Change your accessors as follows, and you should be good to go:
```
double numerator() const { return numerator_; }
double denominator() const { return denominator_; }
```
**update**
To properly handle privacy issues, you should declare the binary `operator+` function as a `friend` of the `Rational` class. Here is how it would look:
```
class Rational {
public:
Rational(double n, double d):numerator_(n), denominator_(d) {};
Rational() {}; // default constructor
double numerator() const { return numerator_; } // accessor
double denominator() const { return denominator_; } // accessor
friend Rational operator+(const Rational& A, const Rational& B);
private:
double numerator_;
double denominator_;
};
const Rational operator+(const Rational& a, const Rational& b)
{
Rational result;
result.denominator_ = (a.denominator_ * b.denominator_);
result.numerator_ = (a.numerator_ * b.denominator_);
result.numerator_ += (b.numerator_ * a.denominator_);
return result;
}
``` | Maybe I'm missing something, but why don't you just take out the code that modifies the arguments?
```
const Rational Rational::operator+(Rational& num)
{
Rational tmp;
tmp.denominator_ = (denominator_*num.denominator_);
//numerator_*=num.denominator_;
//num.numerator_*=denominator_;
tmp.numerator_ = (numerator_+num.numerator_);
return tmp;
}
```
This would be caught earlier by being [const-correct](http://www.parashift.com/c++-faq-lite/const-correctness.html).
That means your function signature should be this:
```
Rational Rational::operator+(const Rational& num) const
```
Then you will get errors because you are modifying const objects. The way your operators are written now is generally considered incorrect.
When you add `2 + 3`, neither 2 nor 3 changes: they are const.
### Edit
Sorry, I missed the actual math part. Here are a few things:
As a member function (what I have above), do this:
```
// Note that I use lhs and rhs to refer to the left-hand
// and right-hand sides of an operation. As a member function
// my left-hand side is implicitly `this`.
Rational Rational::operator+(const Rational& rhs) const
{
Rational temp;
temp.denominator_ = (denominator() * rhs.denominator());
temp.numerator_ = (numerator() * rhs.denominator());
temp.numerator_ += (denominator() * rhs.numerator());
return temp;
}
```
As a global function, do this:
```
Rational operator+(const Rational& lhs, const Rational& rhs)
{
Rational temp;
temp.denominator_ = (lhs.denominator() * rhs.denominator());
temp.numerator_ = (lhs.numerator() * rhs.denominator());
temp.numerator_ += (lhs.denominator() * rhs.numerator());
return temp;
}
```
The issue here is you'll get access violations: your member variables are private to the class. You need to let the compiler know that it is okay if this function handles your class's private variables by making the function a friend of the class:
```
class Rational
{
public:
friend Rational operator+(const Rational& lhs, const Rational& rhs);
}
``` | Way of overloading operator without changing original values? | [
"",
"c++",
""
] |
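The rule both answers converge on (the binary operator builds and returns a new value, leaving both operands untouched) looks like this in Python, where `__add__` plays the role of the free `operator+`. An illustrative analog only, not the C++ from the question:

```python
class Rational:
    def __init__(self, numerator, denominator):
        self.numerator = numerator
        self.denominator = denominator

    def __add__(self, other):
        # Return a brand-new Rational; neither operand is modified.
        return Rational(
            self.numerator * other.denominator + other.numerator * self.denominator,
            self.denominator * other.denominator,
        )

a = Rational(1, 2)
b = Rational(1, 3)
c = a + b
print(c.numerator, c.denominator)  # 5 6
print(a.numerator, a.denominator)  # 1 2  (unchanged)
```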
In a view in django I use `random.random()`. How often do I have to call `random.seed()`?
One time for every request?
One time for every session?
One time while the webserver is running? | Call `random.seed()` rarely if at all.
To be random, you must allow the random number generator to run without touching the seed. The sequence of numbers is what's random. If you change the seed, you start a new sequence. The seed values may not be very random, leading to problems.
Depending on how many numbers you need, you can consider resetting the seed from `/dev/random` periodically.
You should try to reset the seed just before you've used up the previous seed. You don't get the full 32 bits of randomness, so you might want to reset the seed after generating 2\*\*28 numbers. | Don't set the seed.
The only time you want to set the seed is if you want to make sure that the same events keep happening. For example, if you don't want to let players cheat in your game you can save the seed, and then set it when they load their game. Then no matter how many times they save + reload, it still gives the same outcomes. | Seeding random in django | [
"",
"python",
"django",
"random",
"django-views",
""
] |
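A minimal demonstration of why re-seeding restarts the sequence (the point of the accepted answer): seeding with the same value reproduces exactly the same stream of numbers, which is what you want for replayable games and exactly what you do not want for general randomness.

```python
import random

rng = random.Random()

rng.seed(42)
first = [rng.random() for _ in range(3)]

rng.seed(42)  # re-seeding restarts the exact same sequence
second = [rng.random() for _ in range(3)]

print(first == second)  # True: identical sequences
```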
I've recently been having some problems with my imports in Django (Python)... It's better to explain using a file diagram:
```
- project/
- application/
- file.py
- application2/
- file2.py
```
In `project/application/file.py` I have the following:
```
def test_method():
return "Working"
```
The problem occurs in `project/application2/file2.py`, when I try to import the method from above:
```
from application.file import test_method
```
Usually works, but sometimes not.
```
from project.application.file import test_method
```
Does work, but it goes against Django's portability guidelines as the project folder must always be called the same.
I wouldn't mind, but it's the fact that this issue is occurring inconsistently, most of the time omitting `project` is fine, but occasionally not (and as far as I can see, with no reason).
I can pretty much guarantee I'm doing something stupid, but has anyone experienced this? Would I be better just putting the `project` in front of all relevant imports to keep things consistent? Honestly, it's unlikely the `project` folder name will ever change; I just want things to stick with guidelines where possible. | For import to find a module, it needs to be somewhere in sys.path. Usually, this includes "", so it searches the current directory. If you load "application" from project, it'll find it, since it's in the current directory.
Okay, that's the obvious stuff. A confusing bit is that Python remembers which modules are loaded. If you load application, then you load application2 which imports application, the module "application" is already loaded. It doesn't need to find it on disk; it just uses the one that's already loaded. On the other hand, if you didn't happen to load application yet, it'll search for it--and not find it, since it's not in the same directory as what's loading it ("."), or anywhere else in the path.
That can lead to the weird case where importing sometimes works and sometimes doesn't; it only works if it's already loaded.
If you want to be able to load these modules as just "application", then you need to arrange for project/ to be appended to sys.path.
(Relative imports sound related, but it seems like application and application2 are separate packages--relative imports are used for importing within the same package.)
Finally, be sure to consistently treat the whole thing as a package, or to consistently treat each application as its own package. Do not mix and match. If package/ is in the path (e.g. sys.path includes package/..), then you can indeed do "from package.application import foo", but if you then also do "from application import foo", it's possible for Python to not realize these are the same thing--their names are different, and they're in different paths--and end up loading two distinct copies of it, which you definitely don't want. | If you dig into the Django philosophy, you will find that a project is a collection of apps. Some of these apps could depend on other apps, which is just fine. However, what you always want is to make your apps pluggable so you can move them to a different project and use them there as well. To do this, you need to strip everything from your code that is tied to a particular project, so when doing imports you would do:
```
from application.file import test_method
```
This would be the Django way of doing it. Glenn answered why you are getting your errors, so I won't go into that part. When you run the command to start a new project:
```
django-admin.py startproject myproject
```
This will create a folder with a bunch of files that Django needs (manage.py, settings.py, etc.), but it will do another thing for you: it will place the folder "myproject" on your Python path. In short, this means that whatever application you put in that folder, you will be able to import as shown above. You don't need to use django-admin.py to start a project as nothing magical happens; it's just a shortcut. So you can place your application folders anywhere really, you just need to have them on a Python path, so you can import from them directly and make your code project independent so it can easily be used in any future project, abiding by the DRY principle that Django is built upon. | Forced to use inconsistent file import paths in Python (/Django) | [
"",
"python",
"django",
"import",
"path",
""
] |
In java it's possible to dynamically implement an interface using a dynamic proxy, something like this:
```
public static <T> T createProxy(InvocationHandler invocationHandler, Class<T> anInterface) {
if (!anInterface.isInterface()) {
throw new IllegalArgumentException("Supplied interface must be an interface!");
}
return (T) Proxy.newProxyInstance(anInterface.getClassLoader(), new Class[]{anInterface}, invocationHandler);
}
```
Is there an equivalent in .Net? | There are several libraries that implement this in .NET. [Here's a list of them](http://kozmic.pl/archive/2009/03/18/comparing-execution-speed-of-.net-dynamic-proxy-frameworks.aspx), with a benchmark. | The most widely used one is the [Castle Project's Dynamic Proxy](http://www.castleproject.org/projects/dynamicproxy/), which is also used by several (or at least 1) mocking frameworks. Keep in mind that methods (and sugared-up methods like properties) are not virtual by default in dotnet, so that can create some headaches if you weren't anticipating it in your class design. | What is the .Net equivalent of Java's Dynamic Proxies? | [
"",
"java",
".net",
"dynamic",
"proxy",
""
] |
I am confused about how to make a Form visible. When we create a Windows Forms application, the default Form1 is automatically visible, even without explicit call to Show method. But if we want to show another Form and make it visible, we have to make it visible by calling Show.
Any ideas why there is such differences?
I am using VSTS 2008 + C# + .Net 2.0. | This is because Form1 will be the main form of the application. Specifically, it will be passed to the `Application.Run` method, which will create an `ApplicationContext` object with Form1 assigned as main form. When the application starts, it checks if the `ApplicationContext` has a main form and if so, the `Visible` property of that form will be set to `true`, which will cause the form to be displayed.
Or, expressed in code, this is `Application.Run`:
```
public static void Run(Form mainForm)
{
ThreadContext.FromCurrent().RunMessageLoop(-1, new ApplicationContext(mainForm));
}
```
`RunMessageLoop` will call another internal function to set up the message loop, and in that function we find the following:
```
if (this.applicationContext.MainForm != null)
{
this.applicationContext.MainForm.Visible = true;
}
```
This is what makes Form1 show.
This also gives a hint on how to act to prevent Form1 form showing automatically at startup. All we need to do is to find a way to start the application without having Form1 assigned as main form in the `ApplicationContext`:
```
static void Main()
{
Application.EnableVisualStyles();
Application.SetCompatibleTextRenderingDefault(false);
// create the form, but don't show it
Form1 form = new Form1();
// create an application context, without a main form
ApplicationContext context = new ApplicationContext();
// run the application
Application.Run(context);
}
``` | Take a look at the file "Program.cs" that VS generates for you.
```
static class Program {
/// <summary>
/// The main entry point for the application.
/// </summary>
[STAThread]
static void Main() {
Application.EnableVisualStyles();
Application.SetCompatibleTextRenderingDefault(false);
Application.Run(new Form1()); // and especially this line :)
}
}
``` | Windows Forms visibility issue | [
"",
"c#",
".net",
"forms",
"visual-studio-2008",
""
] |
I'm new to the OOP paradigm, so there's probably a simple explanation for this question...
Do you always need to declare public object-wide variables in a class? For example:
```
<?php
class TestClass
{
var $declaredVar;
function __construct()
{
$this->declaredVar = "I am a declared variable.";
$this->undeclaredVar = "I wasn't declared, but I still work.";
}
function display()
{
echo $this->declaredVar . "<br />";
echo $this->undeclaredVar;
echo "<br /><br />";
}
}
$test = new TestClass;
$test->display();
$test->declaredVar = "The declared variable was changed.";
$test->undeclaredVar = "The undeclared variable was changed.";
$test->display();
?>
```
In this code, even though `$declaredVar` is the only declared variable, `$undeclaredVar` is just as accessible and useable--it seems to act as if I had declared it as public.
If undeclared class variables are always accessible like that, what's the point of declaring them all up front? | That variable isn't uninitialized, it's just undeclared.
Declaring variables in a class definition is a point of style for readability.
Plus you can set accessibility (private or public).
Anyway, declaring variables explicitly has nothing to do with OOP, it's programming-language-specific. In Java you can't do that because variables must be declared explicitly. | If you declare a member inside the class you can set its accessibility e.g
```
private $varname;
``` | When should I declare variables in a PHP class? | [
"",
"php",
"oop",
""
] |
I have a class which has a delegate member.
I can set the delegate for each instantiated object of that class, but I have not found any way to save that object yet. | This is a pretty risky thing to do.
While it's true that you can serialize and deserialize a delegate just like any other object, the delegate is a pointer to a method inside the program that serialized it. If you deserialize the object in another program, you'll get a `SerializationException` - if you're lucky.
For instance, let's modify darin's program a bit:
```
class Program
{
[Serializable]
public class Foo
{
public Func<string> Del;
}
static void Main(string[] args)
{
Func<string> a = (() => "a");
Func<string> b = (() => "b");
Foo foo = new Foo();
foo.Del = a;
WriteFoo(foo);
Foo bar = ReadFoo();
Console.WriteLine(bar.Del());
Console.ReadKey();
}
public static void WriteFoo(Foo foo)
{
BinaryFormatter formatter = new BinaryFormatter();
using (var stream = new FileStream("test.bin", FileMode.Create, FileAccess.Write, FileShare.None))
{
formatter.Serialize(stream, foo);
}
}
public static Foo ReadFoo()
{
Foo foo;
BinaryFormatter formatter = new BinaryFormatter();
using (var stream = new FileStream("test.bin", FileMode.Open, FileAccess.Read, FileShare.Read))
{
foo = (Foo)formatter.Deserialize(stream);
}
return foo;
}
}
```
Run it, and you'll see that it creates the object, serializes it, deserializes it into a new object, and when you call `Del` on the new object it returns "a". Excellent. Okay, now comment out the call to `WriteFoo`, so that the program just deserializes the object. Run the program again and you get the same result.
Now swap the declaration of a and b and run the program. Yikes. Now the deserialized object is returning "b".
This is happening because what's actually being serialized is the name that the compiler is assigning to the lambda expression. And the compiler assigns names to lambda expressions in the order it finds them.
And that's what's risky about this: you're not serializing the delegate, you're serializing a symbol. It's the *value* of the symbol, and not what the symbol represents, that gets serialized. The behavior of the deserialized object depends on what the value of that symbol represents in the program that's deserializing it.
To a certain extent, this is true with all serialization. Deserialize an object into a program that implements the object's class differently than the serializing program did, and the fun begins. But serializing delegates couples the serialized object to the symbol table of the program that serialized it, not to the implementation of the object's class.
If it were me, I'd consider making this coupling explicit. I'd create a static property of `Foo` that was a `Dictionary<string, Func<string>>`, populate this with keys and functions, and store the key in each instance rather than the function. This makes the deserializing program responsible for populating the dictionary before it starts deserializing `Foo` objects. To an extent, this is exactly the same thing that using the `BinaryFormatter` to serialize a delegate is doing; the difference is that this approach makes the deserializing program's responsibility for assigning functions to the symbols a lot more apparent. | Actually you can with [BinaryFormatter](http://msdn.microsoft.com/en-us/library/system.runtime.serialization.formatters.binary.binaryformatter.aspx) as it preserves type information. And here's the proof:
```
class Program
{
[Serializable]
public class Foo
{
public Func<string> Del;
}
static void Main(string[] args)
{
Foo foo = new Foo();
foo.Del = Test;
BinaryFormatter formatter = new BinaryFormatter();
using (var stream = new FileStream("test.bin", FileMode.Create, FileAccess.Write, FileShare.None))
{
formatter.Serialize(stream, foo);
}
using (var stream = new FileStream("test.bin", FileMode.Open, FileAccess.Read, FileShare.Read))
{
foo = (Foo)formatter.Deserialize(stream);
Console.WriteLine(foo.Del());
}
}
public static string Test()
{
return "test";
}
}
```
An important thing you should be aware of if you decide to use BinaryFormatter is that its format is not well documented and the implementation could have breaking changes between .NET and/or CLR versions. | Could we save delegates in a file (C#) | [
"",
"c#",
"delegates",
""
] |
I am currently writing a plugin to a CAD style software. The plugin does calculations based on data read from the CAD model and a lot of table lookups (think *printed* tables in a calculation guide). I inherited this plugin and the current solution defines a class `Constant` which has a bunch of static struct members and two-dimensional arrays. These arrays are then indexed by enum values at runtime to find the appropriate data.
I'm not too happy with the solution, as the representation in the `Constant` class is kind of hard to read - the enum values used when retrieving data are of course not visible when *editing* the data (although that only ever happens manually and very seldom).
I'd prefer not to bundle a DB (and engine) with a small plugin, but would like similar semantics, for instance using LINQ to select values where some fields match etc.
What is your preferred solution to this problem?
* do you use a bunch of XML files and parse them at runtime?
* do you use a templating engine (t4?) to generate classes from XML files at compile time?
* do you store XML versions of datasets in the resources (read 'em in at runtime, LINQ to dataset...)
* would you just keep the `Constants` class, maybe add some documentation to the members (don't get me started about legacy code with no comments whatsoever...) | I'm doing that at the moment for a performance-critical application. The way I do it is by serializing the data in a flat file (as part of my release process), then (at launch) deserializing it into a class model, allowing LINQ-to-Objects querying.
In most scenarios xml would be reasonable, but by preference I'm using my own binary serializer - mainly for speed.
Since the constant data doesn't change, it is usually fine to cache it away in an instance you keep handy and re-use many times (rather than deserialize it per use).
To clarify: the data to use is stored in a standard database (they are good at that type of thing, and lots of tools exist for import / edit / query / etc). As part of my release process, I load the data into the object model (using the same classes) from the database, and serialize it:
```
// during release
MyDataModel data = new MyDataModel(); // wraps multiple data lists
data.Load(); // from database tables, using ORM
data.Save("data.bin"); // serialization
```
then I ship data.bin with the app (well, actually it is stored separately, but that is an aside...); and at runtime:
```
MyDataModel data = new MyDataModel();
data.Load("data.bin"); // deserialization
data.Freeze(); // make immutable
```
Note that in this case, the data is "popsicle immutable" - i.e. to preserve the "constant" nature (but while letting it edit the data during load), I have a cascading `Freeze` method that sets a flag on the items; once this flag is set, all edits (to items or to lists) throw an exception. This has zero performance impact on the running code: since you expect to treat it as constant data, it only does reads! | Unless performance is a constraint, I'd go with the XML-dataset-in-resources approach. It's fairly obvious (so whoever picks up the code from you in the future won't be befuzzled by its look), it's easy both to edit and to access, and there's no need to reinvent the wheel in form of custom parser of tabular XML data.
"",
"c#",
"visual-studio-2008",
"resources",
"constants",
""
] |
I have a SQL table with two columns `Date` and `User` and contains the following rows:
```
**Date User**
2009-02-20 Danny
2009-02-20 Matthew
2009-02-15 William
2009-02-15 Abner
2009-12-14 Joseph
1900-01-01 Adam
1900-01-01 Eve
```
Given a date, how should I write my WHERE clause to return a list of users for that date, or, if no users were found for that date, return a list of users for the next earlier date. For example, if the given date is '**2009-02-19**', the users list returned should be **William** and **Abner**. | ```
SELECT User
FROM MyTable
WHERE MyDate = (SELECT MAX(MyDate) FROM MyTable WHERE MyDate <= '2009-02-19')
``` | ```
SELECT [User] FROM myTable WHERE Date =
(SELECT Max(Date) FROM myTable WHERE Date <= '2009-02-19')
``` | How should I write my WHERE clause in SQL Server for the following case? | [
"",
"sql",
"sql-server",
""
] |
I've created a python script that's intended to be used from the command line. How do I go about packaging it? This is my first python package and I've read a bit about setuptools, but I'm still not sure the best way to do this.
---
# Solution
I ended up using [setup.py](http://peak.telecommunity.com/DevCenter/setuptools) with the key configurations noted below:
```
setup(
....
entry_points="""
[console_scripts]
mycommand = mypackage.mymodule:main
""",
....
)
```
Here's a good [example](http://github.com/jsmits/github-cli/blob/c5b4166976bbf94fc3f929cc369ce094bc02b88e/setup.py) in context. | @Zach, given your clarification in your comment to @soulmerge's answer, it looks like what you need is to write a setup.py as per the instructions regarding the [distutils](http://docs.python.org/distutils/index.html#distutils-index) -- [here](http://docs.python.org/distutils/packageindex.html) in particular is how you register on pypi, and [here](http://docs.python.org/distutils/uploading.html) on how to upload to pypi once you are registrered -- and possibly (if you need some extra functionality wrt what the distutils supply on their own) add setuptools, of which `easy_install` is part, via the instructions [here](http://peak.telecommunity.com/dist/ez_setup.py). | Rather than using setuptools non standard way of proceeding, it is possible to directly rely on `distutils` setup's function, using the `scripts` argument, as stated here: <http://docs.python.org/distutils/setupscript.html#installing-scripts>
```
from distutils.core import setup
setup(
...,
scripts=['path/to/your/script',],
...
)
```
It allows you to stay compatible a) with all python versions and b) not having to rely on a setuptools as an external dependency. | How to package a command line Python script | [
"",
"python",
"command-line",
"packaging",
""
] |
I am using eclipse 3.5 (cocoa build) on Macos 10.5 with Java 1.5.0.19.
I just have 3 Java files open:
one file is ~2000 lines,
the other 2 are ~700 lines.
But when I switch from 1 file tab to another, eclipse takes a long time (~ 20 seconds) to switch to another tab.
I have already change the eclipse.ini to
```
more eclipse.ini
-startup
../../../plugins/org.eclipse.equinox.launcher_1.0.200.v20090520.jar
--launcher.library
../../../plugins/org.eclipse.equinox.launcher.cocoa.macosx_1.0.0.v20090519
-product
org.eclipse.epp.package.jee.product
-showsplash
org.eclipse.platform
--launcher.XXMaxPermSize
256m
-vmargs
-Dosgi.requiredJavaVersion=1.5
-XstartOnFirstThread
-Dorg.eclipse.swt.internal.carbon.smallFonts
-XX:MaxPermSize=512m
-Xms128m
-Xmx1024m
-Xdock:icon=../Resources/Eclipse.icns
-XstartOnFirstThread
-Dorg.eclipse.swt.internal.carbon.smallFonts
```
Is there any way to make eclipse 3.5 more speedy?
Thank you. | I switched this line in the eclipse.ini file (found inside the eclipse application package):
```
-Dosgi.requiredJavaVersion=1.5
```
to
```
-Dosgi.requiredJavaVersion=1.6
```
and tab switching was speedy again. | Go with the 32-bit Cocoa release. The 64-bit won't help IMHO. It really works great on my 2.4 GHz MBP. I usually have about 30 files open, some fairly large, never experienced what you describe.
Try to get a new plain-vanilla 32-bit Cocoa distro, don't modify anything and check if there's an issue. It could be a rogue plugin, too. Do you have any installed?
Check your heap status. Open the Eclipse preferences, in the very first preferences page there's a "show heap status" option. You might be running low on memory. Check the swap status of your machine using the activity monitor - if it swaps a lot I'd recommend shutting down other applications. In general, I recommend 4 GB RAM for development machines.
"",
"java",
"eclipse",
"macos",
""
] |
I made a C# Windows application in Visual C# Express 2008. I executed the exe file separately on my PC and it worked perfectly. Now when I ran the same file on another computer running Vista, an error occurred: "WindowsFormapplication has stopped working". On my computer too it stops working after 5-10 min. So is the problem with my code or is it anything else?
I am a newbie here... Any help would be appreciated.
Edited.....
It uses timers:
```
System.Timers.Timer timer = new System.Timers.Timer();
timer.Elapsed += new ElapsedEventHandler(startup.onTimerEvent);
timer.AutoReset = true;
timer.Interval = 60000;
timer.Start();
Application.Run();
GC.KeepAlive(timer);
```
It also creates and sets a registry key.. | I'm guessing it is a permissions issue. On Vista the system is locked down tighter so if your program couldn't get the permission it needed it will throw an exception. If that happens at the start of the program that's why you get the error on start up.
On your own machine you may be waiting 5-10 minutes because it is a different bug that is causing the application to fail. | Could you be using something from a version of .NET that is not available on the second machine? | C# application stopped working | [
"",
"c#",
"windows-vista",
""
] |
In JavaScript, what is the difference between an object and a hash? How do you create one vs the other, and why would you care? Is there a difference between the following code examples?
```
var kid = {
name: "juni",
age: 1
}
```
And:
```
var kid = new Object();
kid.name = "juni";
kid.age = 1;
```
And:
```
var kid = new Object();
kid["name"] = "juni";
kid["age"] = 1;
```
Can you think of any other code example I should illustrate?
The *core question here* is *what is the difference between an object and a hash?* | There just isn't any. All three of those are literally equal. | They are different notation systems that you can use interchangeably. There are many situations where using the bracket syntax `[ ]` can be more appealing, an example would be when referencing an object with a variable.
```
var temp = "kid";
var obj = new Object();
obj[temp] = 5; // this is legal, and is equivalent to obj.kid
obj.temp = 5;  // this literally references obj.temp, not obj.kid
``` | Difference between an object and a hash? | [
"",
"javascript",
"json",
""
] |
I was researching how to get the memory offset of a member to a class in C++ and came across this on [wikipedia:](http://en.wikipedia.org/wiki/Offsetof)
> In C++ code, you can not use offsetof to access members of structures or classes that are not Plain Old Data Structures.
I tried it out and it seems to work fine.
```
#include <iostream>
#include <cstddef>  // for offsetof
using namespace std;

class Foo
{
private:
int z;
int func() {cout << "this is just filler" << endl; return 0;}
public:
int x;
int y;
Foo* f;
bool returnTrue() { return false; }
};
int main()
{
cout << offsetof(Foo, x) << " " << offsetof(Foo, y) << " " << offsetof(Foo, f);
return 0;
}
```
I got a few warnings, but it compiled and when run it gave reasonable output:
```
Laptop:test alex$ ./test
4 8 12
```
I think I'm either misunderstanding what a POD data structure is or I'm missing some other piece of the puzzle. I don't see what the problem is. | Short answer: offsetof is a feature that is only in the C++ standard for legacy C compatibility. Therefore it is basically restricted to the stuff than can be done in C. C++ supports only what it must for C compatibility.
As offsetof is basically a hack (implemented as macro) that relies on the simple memory-model supporting C, it would take a lot of freedom away from C++ compiler implementors how to organize class instance layout.
The effect is that offsetof will often work (depending on source code and compiler used) in C++ even where not backed by the standard - except where it doesn't. So you should be very careful with offsetof usage in C++, especially ~~since I do not know a single compiler that will generate a warning for non-POD use...~~ Modern GCC and Clang will emit a warning if `offsetof` is used outside the standard (`-Winvalid-offsetof`).
**Edit**: As you asked for example, the following might clarify the problem:
```
#include <iostream>
using namespace std;
struct A { int a; };
struct B : public virtual A { int b; };
struct C : public virtual A { int c; };
struct D : public B, public C { int d; };
#define offset_d(i,f) (long(&(i)->f) - long(i))
#define offset_s(t,f) offset_d((t*)1000, f)
#define dyn(inst,field) {\
cout << "Dynamic offset of " #field " in " #inst ": "; \
cout << offset_d(&i##inst, field) << endl; }
#define stat(type,field) {\
cout << "Static offset of " #field " in " #type ": "; \
cout.flush(); \
cout << offset_s(type, field) << endl; }
int main() {
A iA; B iB; C iC; D iD;
dyn(A, a); dyn(B, a); dyn(C, a); dyn(D, a);
stat(A, a); stat(B, a); stat(C, a); stat(D, a);
return 0;
}
```
This will crash when trying to locate the field `a` inside type `B` statically, while it works when an instance is available. This is because of the virtual inheritance, where the location of the base class is stored into a lookup table.
While this is a contrived example, an implementation could use a lookup table also to find the public, protected and private sections of a class instance. Or make the lookup completely dynamic (use a hash table for fields), etc.
The standard just leaves all possibilities open by restricting offsetof to POD (IOW: no way to use a hash table for POD structs... :)
Just another note: I had to reimplement offsetof (here: offset\_s) for this example as GCC actually errors out when I call offsetof for a field of a virtual base class. | Bluehorn's answer is correct, but for me it doesn't explain the reason for the problem in simplest terms. The way I understand it is as follows:
If NonPOD is a non-POD class, then when you do:
```
NonPOD np;
np.field;
```
the compiler does not necessarily access the field by adding some offset to the base pointer and dereferencing. For a POD class, the C++ Standard constrains it to do that(or something equivalent), but for a non-POD class it does not. The compiler might instead read a pointer out of the object, add an offset to *that* value to give the storage location of the field, and then dereference. This is a common mechanism with virtual inheritance if the field is a member of a virtual base of NonPOD. But it is not restricted to that case. The compiler can do pretty much anything it likes. It could call a hidden compiler-generated virtual member function if it wants.
In the complex cases, it is obviously not possible to represent the location of the field as an integer offset. So `offsetof` is not valid on non-POD classes.
In cases where your compiler just so happens to store the object in a simple way (such as single inheritance, and normally even non-virtual multiple inheritance, and normally fields defined right in the class that you're referencing the object by as opposed to in some base class), then it will just so happen to work. There are probably cases which just so happen to work on every single compiler there is. This doesn't make it valid.
### Appendix: how does virtual inheritance work?
With simple inheritance, if B is derived from A, the usual implementation is that a pointer to B is just a pointer to A, with B's additional data stuck on the end:
```
A* ---> field of A <--- B*
field of A
field of B
```
With simple multiple inheritance, you generally assume that B's base classes (call 'em A1 and A2) are arranged in some order peculiar to B. But the same trick with the pointers can't work:
```
A1* ---> field of A1
field of A1
A2* ---> field of A2
field of A2
```
A1 and A2 "know" nothing about the fact that they're both base classes of B. So if you cast a B\* to A1\*, it has to point to the fields of A1, and if you cast it to A2\* it has to point to the fields of A2. The pointer conversion operator applies an offset. So you might end up with this:
```
A1* ---> field of A1 <---- B*
field of A1
A2* ---> field of A2
field of A2
field of B
field of B
```
Then casting a B\* to A1\* doesn't change the pointer value, but casting it to A2\* adds `sizeof(A1)` bytes. This is the "other" reason why, in the absence of a virtual destructor, deleting B through a pointer to A2 goes wrong. It doesn't just fail to call the destructor of B and A1, it doesn't even free the right address.
Anyway, B "knows" where all its base classes are, they're always stored at the same offsets. So in this arrangement offsetof would still work. The standard doesn't require implementations to do multiple inheritance this way, but they often do (or something like it). So offsetof might work in this case on your implementation, but it is not guaranteed to.
Now, what about virtual inheritance? Suppose B1 and B2 both have A as a virtual base. This makes them single-inheritance classes, so you might think that the first trick will work again:
```
A* ---> field of A <--- B1* A* ---> field of A <--- B2*
field of A field of A
field of B1 field of B2
```
But hang on. What happens when C derives (non-virtually, for simplicity) from both B1 and B2? C must only contain 1 copy of the fields of A. Those fields can't immediately precede the fields of B1, and also immediately precede the fields of B2. We're in trouble.
So what implementations might do instead is:
```
// an instance of B1 looks like this, and B2 similar
A* ---> field of A
field of A
B1* ---> pointer to A
field of B1
```
Although I've indicated B1\* pointing to the first part of the object after the A subobject, I suspect (without bothering to check) the actual address won't be there, it'll be the start of A. It's just that unlike simple inheritance, the offsets between the actual address in the pointer, and the address I've indicated in the diagram, will *never* be used unless the compiler is certain of the dynamic type of the object. Instead, it will always go through the meta-information to reach A correctly. So my diagrams will point there, since that offset will always be applied for the uses we're interested in.
The "pointer" to A could be a pointer or an offset, it doesn't really matter. In an instance of B1, created as a B1, it points to `(char*)this - sizeof(A)`, and the same in an instance of B2. But if we create a C, it can look like this:
```
A* ---> field of A
field of A
B1* ---> pointer to A // points to (char*)(this) - sizeof(A) as before
field of B1
B2* ---> pointer to A // points to (char*)(this) - sizeof(A) - sizeof(B1)
field of B2
C* ----> pointer to A // points to (char*)(this) - sizeof(A) - sizeof(B1) - sizeof(B2)
field of C
field of C
```
So to access a field of A using a pointer or reference to B2 requires more than just applying an offset. We must read the "pointer to A" field of B2, follow it, and only then apply an offset, because depending what class B2 is a base of, that pointer will have different values. There is no such thing as `offsetof(B2,field of A)`: there can't be. offsetof will *never* work with virtual inheritance, on any implementation. | Why can't you use offsetof on non-POD structures in C++? | [
"",
"c++",
"offsetof",
""
] |
Are there any JavaScript libraries out there that allow you to create heatmaps using in-browser graphic rendering features such as `<canvas>` or SVG?
I know about HeatMapAPI.com, but their heat maps are generated on the server side. I think that in the era of `<canvas>` element we don't need that anymore!
If there is nothing like this yet, are there any volunteers to participate in creating such a tool? | I created a demo including a real-time heatmap with the `<canvas>` element and javascript. I also added the documented code next to the heatmap sample. The heatmap generation process is based on an alpha map in the canvas element which depends on the users mouse movement.
You can take a look at my demo right here:
<http://www.patrick-wied.at/static/heatmap/> | I created a heat map with the help of the Google Visualization API (<http://code.google.com/apis/visualization/documentation/>). It uses SVG & VML, and is also cross-browser compatible. Hope it'll help. | Creating heatmaps using <canvas> element? | [
"",
"javascript",
"html",
"canvas",
"heatmap",
""
] |
I have an Arduino connected to my computer running a loop, sending a value over the serial port back to the computer every 100 ms.
I want to make a Python script that will read from the serial port only every few seconds, so I want it to just see the last thing sent from the Arduino.
How do you do this in Pyserial?
Here's the code I tried, which doesn't work. It reads the lines sequentially.
```
import serial
import time
ser = serial.Serial('com4',9600,timeout=1)
while 1:
time.sleep(10)
print ser.readline() #How do I get the most recent line sent from the device?
``` | Perhaps I'm misunderstanding your question, but as it's a serial line, you'll have to read everything sent from the Arduino sequentially - it'll be buffered up in the Arduino until you read it.
If you want to have a status display which shows the latest thing sent - use a thread which incorporates the code in your question (minus the sleep), and keep the last complete line read as the latest line from the Arduino.
**Update:** `mtasic`'s example code is quite good, but if the Arduino has sent a partial line when `inWaiting()` is called, you'll get a truncated line. Instead, what you want to do is to put the last *complete* line into `last_received`, and keep the partial line in `buffer` so that it can be appended to the next time round the loop. Something like this:
```
def receiving(ser):
global last_received
buffer_string = ''
while True:
buffer_string = buffer_string + ser.read(ser.inWaiting())
if '\n' in buffer_string:
lines = buffer_string.split('\n') # Guaranteed to have at least 2 entries
last_received = lines[-2]
#If the Arduino sends lots of empty lines, you'll lose the
#last filled line, so you could make the above statement conditional
#like so: if lines[-2]: last_received = lines[-2]
buffer_string = lines[-1]
```
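As a quick, hardware-free sanity check of that buffering idea, the same logic can be exercised by feeding in arbitrary chunks by hand (the class name and data here are made up for illustration):

```python
# Hypothetical, hardware-free sketch of the "keep only the last complete
# line" idea: feed arbitrary chunks in; the tracker remembers the latest
# full line and holds any partial line back in a buffer.
class LastLineTracker:
    def __init__(self):
        self.buffer = ''
        self.last_received = None

    def feed(self, chunk):
        # Mirrors the loop body above: append the chunk, then split on
        # newlines; everything before the final '\n' is complete.
        self.buffer += chunk
        if '\n' in self.buffer:
            lines = self.buffer.split('\n')
            self.last_received = lines[-2]  # last *complete* line
            self.buffer = lines[-1]         # partial remainder
        return self.last_received

tracker = LastLineTracker()
tracker.feed('12')      # no newline yet, nothing complete
tracker.feed('3\n45')   # '123' is now the last complete line
tracker.feed('6\n789')  # '456' completes; '789' stays buffered
print(tracker.last_received)  # -> 456
```

In the real thread, `ser.read(ser.inWaiting())` plays the role of `feed()`'s chunk argument.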
Regarding the use of `readline()`: here's what the Pyserial documentation has to say (slightly edited for clarity, and with a mention of readlines()):
> Be careful when using "readline". Do
> specify a timeout when opening the
> serial port, otherwise it could block
> forever if no newline character is
> received. Also note that "readlines()"
> only works with a timeout. It
> depends on having a timeout and
> interprets that as EOF (end of file).
which seems quite reasonable to me! | ```
from serial import *
from threading import Thread
last_received = ''
def receiving(ser):
global last_received
buffer = ''
while True:
# last_received = ser.readline()
buffer += ser.read(ser.inWaiting())
if '\n' in buffer:
last_received, buffer = buffer.split('\n')[-2:]
if __name__ == '__main__':
ser = Serial(
port=None,
baudrate=9600,
bytesize=EIGHTBITS,
parity=PARITY_NONE,
stopbits=STOPBITS_ONE,
timeout=0.1,
xonxoff=0,
rtscts=0,
interCharTimeout=None
)
Thread(target=receiving, args=(ser,)).start()
``` | pyserial - How to read the last line sent from a serial device | [
"",
"python",
"serial-port",
"arduino",
"pyserial",
""
] |
So I want to make sure all of my database / network operations are not happening on the UI thread of my application. To do this I normally use the BeginInvoke function to make the call and then Invoke to do the actual update. I am not sure if I am doing things correctly compared to the way it should be done. Could anyone please provide comments on the following code:
```
private void folderTree_NodeExpandedChanged(object sender, RadTreeViewEventArgs e)
{
if (e.Node.Tag != null)
{
var path = (string) e.Node.Tag;
if (!string.IsNullOrEmpty(path))
{
if (Directory.Exists(path))
{
folderTree.BeginUpdate();
BeginInvoke(
new Action(() => GetDirectories(path, e.Node)));
folderTree.EndUpdate();
}
}
}
}
private void GetDirectories(string path, RadTreeNode parent)
{
var dirs = (new DirectoryInfo(path)).GetDirectories();
Array.ForEach(dirs, d => Invoke(new Action(
() => AddNode(d.Name, d.FullName, parent))));
}
``` | On a control/form, `BeginInvoke` pushes work to the UI thread, via the message pump (with a small overhead incurred for the privilege). So you've gone from the UI thread to the UI thread... I suspect you want a `BackgroundWorker` / `ThreadPool` or similar in here somewhere... (perhaps `ThreadPool` would be easiest from the current position).
Also, you probably don't want to do the update work synchronously, and you don't want lots of switches between the threads; I'd pass the entire array over (or mid-size chunks), not individual records. | As far as I can tell, the following call:
```
BeginInvoke(new Action(() => GetDirectories(path, e.Node)));
```
...is very little different to this one:
```
GetDirectories(path, e.Node);
```
It only adds an extra layer or two around the call, but there is no asynchronosity in there; it all happens on the same thread. You will need to include some mechanisms to push the work to another thread (`BackgroundWorker` control, use the [`ThreadPool`](http://msdn.microsoft.com/en-us/library/system.threading.threadpool.aspx) or something similar) | Async loading in C# - optimization / proper usage? | [
"",
"c#",
"optimization",
""
] |
My applicationContext.xml:
```
<bean id="studentService" class="com.coe.StudentService">
<property name="studentProfile" ref="studentProfile" />
</bean>
<bean id="studentProfile" class="com.coe.student.StudentProfile">
</bean>
```
My web.xml:
```
<listener>
<listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
</listener>
<context-param>
<param-name>contextConfigLocation</param-name>
<param-value>/WEB-INF/applicationContext.xml</param-value>
</context-param>
```
My classes:
```
class StudentService {
    private StudentProfile studentProfile;
    //has appropriate getters/setters
}

class StudentProfile {
    private String name;
    //has getter/setter
}
```
I have a JSP that calls studentService.studentProfile.name, and the error says that studentProfile is null.
My assumption is that when the server starts up, Spring instantiates all the declared beans, so when StudentService is used, would Spring not also have set its StudentProfile? | Not really an answer to your question, but a possible solution to your problem, if you're willing to work with annotations instead:
Web.xml
```
<!-- Spring -->
<listener>
<listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
</listener>
<context-param>
<param-name>contextConfigLocation</param-name>
<param-value>spring.xml</param-value>
</context-param>
```
Spring.xml
```
<context:component-scan base-package="com.coe" />
```
Java code
```
@Service
class StudentService {
    @Autowired
    private StudentProfile studentProfile;
}

@Repository //???
class StudentProfile {
    private String name;
}
```
That said I have a little trouble understanding why StudentProfile would be a bean (assuming every student has a profile) and StudentService would have reference to a single StudentProfile, but that might just be your terminology.. (Or my lack of understanding thereof) | Normally with Spring and a web-app, you would have a [DispatcherServlet](http://static.springsource.org/spring/docs/2.5.x/reference/mvc.html) and a bunch of controllers, that pass onto JSP views. These controllers would be managed by Spring.
If you want to go direct to a JSP without using DispatcherServlet, then you need some way to make the first injection into your pages (ContextLoaderListener doesn't do this). That is, you must explicitly lookup the initial bean using JSP initialization code such as
[Disclaimer: not tested]
```
<%@ page import="org.springframework.web.context.support.WebApplicationContextUtils" %>
<%!
private StudentService studentService;
public void jspInit() {
studentService = (StudentService) WebApplicationContextUtils.
getRequiredWebApplicationContext(getServletContext()).
getBean("studentService");
}
%>
``` | How can I force instantiation of all referenced beans during Spring start-up? | [
"",
"java",
"spring",
""
] |
This may have been answered elsewhere but I could not find a suitable response.
I have this code:
```
enum enumWizardPage
{
WP_NONE = 0x00,
WP_CMDID = 0x01,
WP_LEAGUES = 0x02,
WP_TEAMS = 0x04,
WP_COMP = 0x08,
WP_DIVISIONS = 0x10,
WP_FORMULAS = 0x20,
WP_FINISHED = 0x40,
};
```
Which is legacy and I have to modify it by adding a few new values.
The issue is each value must be a unique bit so they may be OR combined to a bitmap.
The values are set using the 0x## hex format, but I'm wondering if this is the max it can store?
What will be the effect, if any, if I change my code to
```
enum enumWizardPage
{
WP_NONE = 0x0000,
WP_CMDID = 0x0001,
WP_LEAGUES = 0x0002,
WP_TEAMS = 0x0004,
WP_COMP = 0x0008,
WP_DIVISIONS = 0x0010,
WP_FORMULAS = 0x0020,
WP_FINISHED = 0x0040,
};
``` | From [N4659 C++ 7.2/5](https://timsong-cpp.github.io/cppwp/n4659/dcl.enum#7):
> For an enumeration whose underlying type is not fixed, the underlying type is an integral type that can represent all the enumerator values defined in the enumeration. If no integral type can represent all the enumerator values, the enumeration is ill-formed. It is implementation-defined which integral type is used as the underlying type except that the underlying type shall not be larger than `int` unless the value of an enumerator cannot fit in an `int` or `unsigned int`. If the *enumerator-list* is empty, the underlying type is as if the enumeration had a single enumerator with value 0. | The type of a C++ enum is the enum itself. Its range is rather arbitrary, but in practical terms, its underlying type is an `int`.
It is implicitly cast to `int` wherever it's used, though.
### C++11 changes
This has changed since C++11, which introduced typed enums. An untyped `enum` now is defined as being at least the width of `int` (and wider if larger values are needed). However, given a typed `enum` defined as follows:
```
enum name : type {};
```
An enumeration of type `name` has an underlying type of `type`. For example, `enum : char` defines an `enum` the same width as `char` instead of `int`.
Further, an `enum` can be explicitly scoped as follows:
```
enum class name : type {
value = 0,
// ...
};
```
(Where `name` is required, but `type` is optional.) An `enum` declared this way will no longer implicitly cast to its underlying type (requiring a `static_cast<>`) and values must be referenced with a fully-qualified name. In this example, to assign `value` to an `enum` variable, you must refer to it as `name::value`. | What is the underlying type of a c++ enum? | [
"",
"c++",
"enums",
"size",
"types",
"hex",
""
] |
In Java you can do the following to locally scope variables within a method:
```
public void blah() {
{
int a = 1;
int b = 2;
}
{
int a = 3;
int b = 4;
}
}
```
I don't even know what the name for this technique is, I've only needed to use it once or twice. In my current situation, this may really come in handy with this SQL project.
Is this even possible in SQL?
---
I'm using MS SQL Server...
To give more context to the situation:
We have several sql scripts stored as files which help perform various operations on a database. These scripts are fairly large and were designed to be run independently. Sometimes, we need to run several of these scripts together and deliver in a single file.
Since most of these scripts have common variables, we run into conflicts when joining these scripts together. Of course, it is easy to move all variable declarations to the beginning of the file, but the goal is to have this process automated. | SQL is a declarative language (i.e. it focuses on the 'what' and not the 'how'). Hence its definition (ANSI) does not contain procedural constructs or blocks.
Most vendor implementations fill this void by embedding their own procedural language - T-SQL in the case of Microsoft, PL/SQL in the case of Oracle, etc. (not sure about MySQL and SQLite) - and even these don't do an exact feature match with Java (no anonymous code blocks).
Each SQL operation works on a set, and returns a set.
Thus "mytable" is a set, "select from mytable" is a set, and "select from (select from mytable)" is a set.
So it's not really possible to drill down to a lower level of granularity; perhaps "columns" or pseudo-columns are the 'variables'.
Oracle defines a "select from dual" operation to guarantee a 'loop' of 1 (i.e. a select statement that returns exactly 1 row). | In MySQL variable declarations must come at the beginning of the subroutine/stored procedure, like a C function's variables must be. | Locally enclose a code block in SQL | [
"",
"sql",
"sql-server",
""
] |
Suppose I have two tables that are linked (one has a foreign key to the other):
```
CREATE TABLE Document (
Id INT PRIMARY KEY,
Name VARCHAR 255
)
CREATE TABLE DocumentStats (
Id INT PRIMARY KEY,
DocumentId INT, -- this is a foreign key to table Document
NbViews INT
)
```
I know, this is not the smartest way of doing things, but this is the best example I could come up with.
Now, I want to get all documents that have more than 500 views. The two solutions that come to my mind are:
```
SELECT *
FROM Document, DocumentStats
WHERE DocumentStats.Id = Document.Id
AND DocumentStats.NbViews > 500
```
or:
```
SELECT *
FROM Document
INNER JOIN DocumentStats ON Document.Id = DocumentStats.Id
WHERE DocumentStats.NbViews > 500
```
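For what it's worth, a quick check on toy data (using Python's built-in sqlite3 rather than SQL Server, with the schema trimmed to just the joined columns) shows both forms returning identical rows — this only demonstrates result equivalence, not performance:

```python
import sqlite3

# Toy check that the implicit-join and INNER JOIN forms return the same
# rows. sqlite3 here, not SQL Server; schema trimmed for brevity.
conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE Document (Id INTEGER PRIMARY KEY, Name TEXT);
    CREATE TABLE DocumentStats (Id INTEGER PRIMARY KEY, NbViews INTEGER);
    INSERT INTO Document VALUES (1, 'popular'), (2, 'obscure');
    INSERT INTO DocumentStats VALUES (1, 900), (2, 10);
""")

implicit = conn.execute("""
    SELECT * FROM Document, DocumentStats
    WHERE DocumentStats.Id = Document.Id AND DocumentStats.NbViews > 500
""").fetchall()

explicit = conn.execute("""
    SELECT * FROM Document
    INNER JOIN DocumentStats ON Document.Id = DocumentStats.Id
    WHERE DocumentStats.NbViews > 500
""").fetchall()

print(implicit == explicit)  # -> True
print(implicit)              # -> [(1, 'popular', 1, 900)]
```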
Are both queries equivalent, or is there one way that is far better than the other? If so, why?
EDIT: as requested in the answers, this question was aimed at SQL Server, but I would be interested in knowing if it is different for other database engines (MySQL, etc...). | Theoretically, no, it shouldn't be any faster. The query optimizer should be able to generate an identical execution plan. However, some database engines can produce better execution plans for one of them (not likely to happen for such a simple query but for complex enough ones). You should test both and see (on your database engine). | Performance of "JOIN" versus "WHERE"... everything hinges on how well the database engine is able to optimize the query for you. It will take into account any indexes you might have on the columns being returned and consider that performance of WHERE and JOIN clauses also come down to the physical database file itself and its fragmentation level and even the storage technology you use to store the database files on.
SQL Server executes queries in the following logical order (this should give you an idea of the roles of the WHERE and JOIN clauses):
## Microsoft SQL Server query process order
the following is taken from the excellent series of books about Microsoft SQL Server, *Inside Microsoft SQL Server 2005: T-SQL Querying* which can be found [here](https://rads.stackoverflow.com/amzn/click/com/0735623139)
> (Step 8) SELECT (Step 9) DISTINCT (Step 11) <top\_specification> <select\_list>
> (Step 1) FROM *left\_table*
> (Step 3) *join\_type* JOIN *right\_table*
> (Step 2) ON *join\_condition*
> (Step 4) WHERE *where\_condition*
> (Step 5) GROUP BY *group\_by\_list*
> (Step 6) WITH [CUBE|ROLLUP]
> (Step 7) HAVING *having\_clause*
> (Step 10) ORDER BY *order\_by\_list* | Is a JOIN faster than a WHERE? | [
"",
"sql",
"performance",
"join",
"where-clause",
""
] |
I'm trying to write a jQuery plugin that will provide additional functions/methods to the object that calls it. All the tutorials I read online (have been browsing for the past 2 hours) include, at the most, how to add options, but not additional functions.
Here's what I am looking to do:
//format div to be a message container by calling the plugin for that div
```
$("#mydiv").messagePlugin();
$("#mydiv").messagePlugin().saySomething("hello");
```
or something along those lines.
Here's what it boils down to: I call the plugin, then I call a function associated with that plugin. I can't seem to find a way to do this, and I've seen many plugins do it before.
Here's what I have so far for the plugin:
```
jQuery.fn.messagePlugin = function() {
return this.each(function(){
alert(this);
});
//i tried to do this, but it does not seem to work
jQuery.fn.messagePlugin.saySomething = function(message){
$(this).html(message);
}
};
```
How can I achieve something like that?
Thank you!
---
Update Nov 18, 2013: I've changed the correct answer to that of Hari's following comments and upvotes. | According to the jQuery Plugin Authoring page (<http://docs.jquery.com/Plugins/Authoring>), it's best not to muddy up the jQuery and jQuery.fn namespaces. They suggest this method:
```
(function( $ ){
var methods = {
init : function(options) {
},
show : function( ) { },// IS
hide : function( ) { },// GOOD
update : function( content ) { }// !!!
};
$.fn.tooltip = function(methodOrOptions) {
if ( methods[methodOrOptions] ) {
return methods[ methodOrOptions ].apply( this, Array.prototype.slice.call( arguments, 1 ));
} else if ( typeof methodOrOptions === 'object' || ! methodOrOptions ) {
// Default to "init"
return methods.init.apply( this, arguments );
} else {
$.error( 'Method ' + methodOrOptions + ' does not exist on jQuery.tooltip' );
}
};
})( jQuery );
```
Basically you store your functions in an object (scoped to the wrapping function) and check for an entry if the parameter passed is a string, reverting to a default method ("init" here) if the parameter is an object (or null).
Then you can call the methods like so...
```
$('div').tooltip(); // calls the init method
$('div').tooltip({ // calls the init method
foo : 'bar'
});
$('div').tooltip('hide'); // calls the hide method
$('div').tooltip('update', 'This is the new tooltip content!'); // calls the update method
```
JavaScript's "arguments" variable is an array-like object holding all the arguments passed, so it works with arbitrary numbers of function parameters. | Here's the pattern I have used for creating plugins with additional methods. You would use it like:
```
$('selector').myplugin( { key: 'value' } );
```
or, to invoke a method directly,
```
$('selector').myplugin( 'mymethod1', 'argument' );
```
Example:
```
;(function($) {
$.fn.extend({
myplugin: function(options,arg) {
if (options && typeof(options) == 'object') {
options = $.extend( {}, $.myplugin.defaults, options );
}
// this creates a plugin for each element in
// the selector or runs the function once per
// selector. To have it do so for just the
// first element (once), return false after
// creating the plugin to stop the each iteration
this.each(function() {
new $.myplugin(this, options, arg );
});
return;
}
});
$.myplugin = function( elem, options, arg ) {
if (options && typeof(options) == 'string') {
if (options == 'mymethod1') {
myplugin_method1( arg );
}
else if (options == 'mymethod2') {
myplugin_method2( arg );
}
return;
}
...normal plugin actions...
function myplugin_method1(arg)
{
...do method1 with this and arg
}
function myplugin_method2(arg)
{
...do method2 with this and arg
}
};
$.myplugin.defaults = {
...
};
})(jQuery);
``` | How to create a jQuery plugin with methods? | [
"",
"javascript",
"jquery",
"jquery-plugins",
"javascript-framework",
""
] |
I created a vector out of a struct to store multiple types of values. However, I can't get input to work.
```
#include "std_lib_facilities.h"
struct People{
string name;
int age;
};
int main()
{
vector<People>nameage;
cout << "Enter name then age until done. Press enter, 0, enter to continue.:\n";
People name;
People age;
while(name != "0"){
cin >> name;
nameage.push_back(name);
cin >> age;
nameage.push_back(age);}
vector<People>::iterator i = (nameage.end()-1);
nameage.erase(i);
}
```
I've also tried having the name and age variables in the main function be string/int types, and while that fixes the operator issue, it leads to an issue with function calling in the push\_back line.
P.S. Is it possible to push\_back multiple inputs such as...
```
cin >> name >> age;
nameage.push_back(name,age);
```
? | Why not do:
```
People p;
cin >> p.name;
cin >> p.age;
nameage.push_back( p );
```
You can't just do `cin >> p`, as `istream` doesn't understand how to read a "People" object. So you can either define `operator>>` for `People`, or you can just read the individual fields into a `People` object.
Also, note, you need to `push_back` an object of type `People`, as that is what your `vector` is -- it is a `People` container. | One option is to define an `operator>>` for People.
```
struct People
{
friend std::istream & operator>> (std::istream & in, People & person);
};
std::istream & operator>> (std::istream & in, People & person)
{
in >> person.name >> person.age;
return in;
}
```
Then you could write:
```
People p;
cout << "Enter the person's name and age, separated by a space: ";
cin >> p;
nameage.push_back (p);
``` | Class doesn't support operators | [
"",
"c++",
"class",
"operators",
""
] |
A friend asked me for help on building a query that would show how many pieces of each model were sold on each day of the month, showing zeros when no pieces were sold for a particular model on a particular day, even if no items of any model are sold on that day. I came up with the query below, but it isn't working as expected. I'm only getting records for the models that have been sold, and I don't know why.
```
select days_of_months.`Date`,
m.NAME as "Model",
count(t.ID) as "Count"
from MODEL m
left join APPLIANCE_UNIT a on (m.ID = a.MODEL_FK and a.NUMBER_OF_UNITS > 0)
left join NEW_TICKET t on (a.NEW_TICKET_FK = t.ID and t.TYPE = 'SALES'
and t.SALES_ORDER_FK is not null)
right join (select date(concat(2009,'-',temp_months.id,'-',temp_days.id)) as "Date"
from temp_months
inner join temp_days on temp_days.id <= temp_months.last_day
where temp_months.id = 3 -- March
) days_of_months on date(t.CREATION_DATE_TIME) =
date(days_of_months.`Date`)
group by days_of_months.`Date`,
m.ID, m.NAME
```
I had created the temporary tables `temp_months` and `temp_days` in order to get all the days for any month. I am using MySQL 5.1, but I am trying to make the query ANSI-compliant. | You should `CROSS JOIN` your dates and models so that you have exactly one record for each day-model pair no matter what, and then `LEFT JOIN` other tables:
```
SELECT date, name, COUNT(t.id)
FROM (
SELECT ...
) AS days_of_months
CROSS JOIN
model m
LEFT JOIN
APPLIANCE_UNIT a
ON a.MODEL_FK = m.id
AND a.NUMBER_OF_UNITS > 0
LEFT JOIN
NEW_TICKET t
ON t.id = a.NEW_TICKET_FK
AND t.TYPE = 'SALES'
AND t.SALES_ORDER_FK IS NOT NULL
AND t.CREATION_DATE_TIME >= days_of_months.`Date`
AND t.CREATION_DATE_TIME < days_of_months.`Date` + INTERVAL 1 DAY
GROUP BY
date, name
```
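A scaled-down sketch of this pattern (toy table and column names, run against Python's built-in sqlite3 rather than MySQL) shows every day-model pair surviving, with zero counts where nothing was sold:

```python
import sqlite3

# Toy version of the CROSS JOIN + LEFT JOIN pattern: one row per
# (day, model) pair, with COUNT() falling back to 0 when no sale matches.
conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE days (d TEXT);
    CREATE TABLE model (id INTEGER, name TEXT);
    CREATE TABLE sale (model_id INTEGER, d TEXT);
    INSERT INTO days VALUES ('2009-03-01'), ('2009-03-02');
    INSERT INTO model VALUES (1, 'washer'), (2, 'dryer');
    INSERT INTO sale VALUES (1, '2009-03-01');  -- one washer sold on day 1
""")

rows = conn.execute("""
    SELECT days.d, model.name, COUNT(sale.model_id) AS cnt
    FROM days
    CROSS JOIN model
    LEFT JOIN sale ON sale.model_id = model.id AND sale.d = days.d
    GROUP BY days.d, model.name
    ORDER BY days.d, model.name
""").fetchall()

for row in rows:
    print(row)
# ('2009-03-01', 'dryer', 0)
# ('2009-03-01', 'washer', 1)
# ('2009-03-02', 'dryer', 0)
# ('2009-03-02', 'washer', 0)
```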
The way you do it now you get `NULL`'s in `model_id` for the days you have no sales, and they are grouped together.
Note the `JOIN` condition:
```
AND t.CREATION_DATE_TIME >= days_of_months.`Date`
AND t.CREATION_DATE_TIME < days_of_months.`Date` + INTERVAL 1 DAY
```
instead of
```
DATE(t.CREATION_DATE_TIME) = DATE(days_of_months.`Date`)
```
This will help make your query `sargable` (optimized by indexes) | You need to use outer joins, as they do not require each record in the two joined tables to have a matching record.
<http://dev.mysql.com/doc/refman/5.1/en/join.html> | Left and right joining in a query | [
"",
"sql",
"mysql",
"join",
""
] |
I am having categories as following,
* Fun
+ Jokes
+ Comedy
* Action
+ Movies
+ TV Shows
Now, one video can have multiple categories or subcategories; let's say VideoId 23 is present in categories `Fun, Fun->Comedy, Action->TV Shows` but not in the Action category. I can't figure out how I should maintain these categories in the database. Should I create just one column "`CategoryId AS VARCHAR`" in `Videos` and store the category ids as comma-separated values (1,3,4)? But then how would I fetch the records when someone is browsing the Jokes category?
Or should I create another table holding videoId and categoryId? In that case, if a video is present in 3 different categories, 3 rows will be added to that new table.
Please suggest a way to maintain categories for a particular record in the table.
Thanks | Your categories table could have a column called parentID that references another entry in the categories table. It would be a foreign key to itself. NULL would represent a top-level category. Something other than NULL would represent "I am a child category of this category". You could still assign a video to any category: top-level, child, or somewhere in between.
Also, use auto-increment NOT NULL integers for your primary keys, not varchar. It's a performance consideration.
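Both ideas together can be sketched like this (toy table and column names, with Python's built-in sqlite3 standing in for MySQL):

```python
import sqlite3

# Sketch: self-referencing category table (parent_id, NULL = top level)
# plus a video_category junction table with a compound primary key.
conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE category (
        id INTEGER PRIMARY KEY,
        parent_id INTEGER REFERENCES category(id),  -- NULL = top level
        name TEXT
    );
    CREATE TABLE video (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE video_category (
        video_id INTEGER,
        category_id INTEGER,
        PRIMARY KEY (video_id, category_id)         -- compound key
    );
    INSERT INTO category VALUES (1, NULL, 'Fun'), (2, 1, 'Comedy'),
                                (3, NULL, 'Action'), (4, 3, 'TV Shows');
    INSERT INTO video VALUES (23, 'some clip');
    -- video 23 is in Fun, Fun->Comedy and Action->TV Shows, not Action:
    INSERT INTO video_category VALUES (23, 1), (23, 2), (23, 4);
""")

# All categories (with their parent, if any) for video 23:
rows = conn.execute("""
    SELECT c.name, p.name AS parent
    FROM video_category vc
    JOIN category c ON c.id = vc.category_id
    LEFT JOIN category p ON p.id = c.parent_id
    WHERE vc.video_id = 23
    ORDER BY c.id
""").fetchall()
print(rows)  # -> [('Fun', None), ('Comedy', 'Fun'), ('TV Shows', 'Action')]
```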
---
To answer your comment:
3 tables: Videos, Categories, and Video\_Category
Video\_Category would have VideoID and CategoryID columns. The primary key would be a combination of the two columns (a compound primary key). | I suggest creating another table containing videoId and categoryId. Then you can use a SQL query like the following:
```
select a.*,GROUP_CONCAT(b.category_id) as category_ids
from table_video a
left join table_video_category b on a.video_id=b.video_id
group by a.video_id
``` | How to maintain subcategory in MYSQL? | [
"",
"sql",
"mysql",
"database",
""
] |
Is there a way to run JUnit tests from several projects quickly and conveniently in Eclipse?
The JUnit runner lets you define a package or a folder from which all contained tests are executed.
Is there a way to do this with tests from several projects inside Eclipse?
Preferably it should be via the JUnit runner. If there is some way to make it fast via an Ant job (one that doesn't depend on a complete Ant build beforehand), that would also be nice. | You can't do it through the UI. Looking at the extension-points the highest-level element JUnit will collect for is the Project. I suppose you could write a plugin to contribute an additional context item/shortcut for a working set, make working sets the top-level items in the package explorer and group the projects you want to test together below that working set.
The problems with doing this is you'd have trouble defining the context rules for enabling/disabling the "run as" contribution and I'm not sure the semantics extend to working sets. So you'd have to write some sort of wrapper to iterate the contained projects and collect their test types. This does seem an interesting little problem. I might have a play with it after school today.
Another (slightly less) hacky way would be to set up another project with project dependencies on all your target projects, then use linked resources to bring all the test types into the new project (I've posted an [answer](https://stackoverflow.com/questions/1130400/eclipse-classpath-entries-only-used-for-tests/1130437#1130437) before that describes how to link sources across projects).
Of course if you do this you will need to manage the dependencies of the test project as well.
If you create a TestSuite for each project and another uber TestSuite that references all the projects' suites, you have to check every test is included, which is error-prone.
If you don't fancy mucking about with plugins or linked-resources, you're probably best off using Ant. | It’s actually quite easy to perform JUnit tests across multiple projects from within Eclipse. Have a look at [Classpath Suite](http://johanneslink.net/projects/cpsuite.jsp). It’s not with the standard JUnit runner, but you did not mention where that requirement came from, so I’m not sure whether this will affect you.
All the usage information is on that page, but to summarize:
1. Create an Eclipse project depending on all the projects you want to test.
2. Write a simple test suite including everything:
```
@RunWith(ClasspathSuite.class)
public class MySuite {}
``` | Run Junit-Tests from several projects conveniently fast in Eclipse | [
"",
"java",
"eclipse",
"ant",
"junit",
"eclipse-3.5",
""
] |
I have a few large projects I am working on in my new place of work, which have a complicated set of statically linked library dependencies between them.
The libs number around 40-50, and it's really hard to determine what the structure was initially meant to be; there isn't clear documentation of the full dependency map.
What tools would anyone recommend to extract such data?
Presumably, in the simplest manner, if did the following:
1. define the set of paths which correspond to library units
2. set all .cpp/.h files within those to belong to those compilation units
3. capture the 1st order #include dependency tree
One would have enough information to compose a map - refactor - and recompose the map, until one has created some order.
I note that <http://www.ndepend.com> has something nice, but that's exclusively .NET, unfortunately.
I read something about Doxygen being able to accomplish some static dependency analysis with configuration; has anyone ever pressed it into service to accomplish such a task? | [This link](https://stackoverflow.com/questions/422745/does-any-tool-similar-to-ndepend-exist-for-unmanaged-c-code) leads to:
[CppDepend](http://www.cppdepend.com/) | <http://github.com/yuzhichang/cppdep> may be what you want.
I wrote it for analyzing dependencies among components/packages/package groups of a large C/C++ project.
It's a rewrite of dep\_utils(adep/cdep/ldep) which is provided by John Lakos' book Large-Scale C++ Software Design. | (Visual) C++ project dependency analysis | [
"",
"c++",
"visual-c++",
"refactoring",
""
] |
In my Java application I need to send mail to different email addresses. I am using the following piece of code, but for some reason it doesn't work.
```
public class main {
public static void main(String[] args) {
Properties props = new Properties();
props.put("mail.smtp.host", "mail.yahoo.com.");
props.put("mail.smtp.auth", "true");
props.put("mail.debug", "true");
Session session = Session.getInstance(props, new MyAuth());
try {
Message msg = new MimeMessage(session);
msg.setFrom(new InternetAddress("giginnho@yahoo.com"));
InternetAddress[] address = {new InternetAddress("rantravee@yahoo.com")};
msg.setRecipients(Message.RecipientType.TO, address);
msg.setSubject("subject ");
msg.setSentDate(new Date());
msg.setText("Message here ");
Transport.send(msg);
} catch (MessagingException e) {}
}
}
class MyAuth extends Authenticator {
protected PasswordAuthentication getPasswordAuthentication() {
return new PasswordAuthentication("my username","my password");
}
}
```
I get the following text from debugging it:
```
[TRANSPORT,smtp,com.sun.mail.smtp.SMTPTransport,Sun Microsystems, Inc]
DEBUG SMTP: useEhlo true, useAuth true
DEBUG SMTP: useEhlo true, useAuth true
DEBUG SMTP: trying to connect to host "smtp.mail.yahoo.com.au.", port 25, isSSL false
```
Could anyone tell me what I am doing wrong here? | I am not sure, but I faced the same problem when sending mail using a Gmail id (you are using Yahoo). The problem was that Gmail uses an SSL layer; I think the same is the case with Yahoo, so you need to use
```
mail.smtps.host instead of mail.smtp.host
```
and the same for the other properties too,
and set isSSL to true.
I can post a complete code snippet once I reach the office and can use the office machine. For now you can look at <http://www.rgagnon.com/javadetails/java-0570.html> | Yahoo! Mail SMTP server address: smtp.mail.yahoo.com
Yahoo! Mail SMTP user name: Your full Yahoo! Mail email address (including "@yahoo.com")
Yahoo! Mail SMTP password: Your Yahoo! Mail password
Yahoo! Mail SMTP port: 465
Yahoo! Mail SMTP TLS/SSL required: yes
Similar settings work with Gmail. For Yahoo you might need a Yahoo Plus account. | Unable to send mail from Java application | [
"",
"java",
"email",
""
] |
How can I do the following things in Python:
1. List all the IP interfaces on the current machine.
2. Receive updates about changes in network interfaces (goes up, goes down, changes IP address).
Any python package available in Ubuntu Hardy will do. | I think the best way to do this is via [dbus-python](http://dbus.freedesktop.org/releases/dbus-python/).
The [tutorial](http://dbus.freedesktop.org/doc/dbus-python/doc/tutorial.html) touches on network interfaces a little bit:
```
import dbus
bus = dbus.SystemBus()
proxy = bus.get_object('org.freedesktop.NetworkManager',
'/org/freedesktop/NetworkManager/Devices/eth0')
# proxy is a dbus.proxies.ProxyObject
``` | I have been using the following code,
```
temp = os.popen("ifconfig -a | awk '$2~/^Link/{_1=$1;getline;if($2~/^addr/){print _1\" \"}}'").read()  # os.popen captures the output; os.system only returns the exit status
```
it will give the 'up' network interfaces
e.g. eth0, eth2, wlan0 | Handling Multiple Network Interfaces in Python | [
"",
"python",
"linux",
"networking",
"ip",
""
] |
```
From : www.example.com/cut/456
To : www.example.com/cut/index.php?tag=456
```
I tried this and it doesn't work:
```
RewriteEngine On
RewriteRule ^([^/]*)$ /cut/index.php?tag=$1 [L]
``` | ```
RewriteEngine On
RewriteBase /cut/
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^([^/]*)$ /index.php?tag=$1 [L]
```
It works with this. | Unless your `.htaccess` is located in the `cut` directory, it would be better to use:
```
RewriteRule ^cut/(.+)/?$ /cut/index.php?tag=$1 [L]
```
otherwise
```
RewriteRule ^(.+)/?$ index.php?tag=$1 [L]
```
should do. It seems to be an issue with your substitution string; it's hard to say exactly what the cause is, since *doesn't work* is not a sufficient description of a problem.
"",
"php",
"apache",
"mod-rewrite",
""
] |
I have been stuck on a JavaScript method overriding problem for a while. The problem is that I have an onclick event handler on one of my controls, and I need to inject some processing before the event handler triggers the actual method.
Assume DGrid.Headerclik is actually assigned to the onclick event.
And this is what i did
```
DGrid.Headerclik = handleinLocal;
```
So whenever the user clicks the grid, control now comes to the handleinLocal method. Here I have to do some processing and then call the base Headerclik().
```
function handleinLocal(){
// here i need to call the DGrid.Headerclik() method (base)
}
```
But this is not working as expected. Calling DGrid.Headerclik() within handleinLocal() recursively calls the handleinLocal() method, whereas I need to call the base method...
Is there a way to achieve this in JavaScript?
```
(function() {
var oldHandler = DGrid.Headerclik;
DGrid.Headerclik = handleInLocal;
function handleInLocal() {
// ...
oldHandler();
// ...
}
})();
``` | Save the original callback in a variable before you reassign it to your own handler. Then after you have done what you need to do, invoke the original handler from the variable.
```
var callback = DGrid.Headerclik;
DGrid.Headerclik = handleinLocal;
function handleinLocal()
{
...your code...
callback(); // invoke original handler
}
``` | Javascript Overriding issue | [
"",
"javascript",
""
] |
I'm looking for a fluent way of determining if a number falls within a specified set of ranges. My current code looks something like this:
```
int x = 500; // Could be any number
if ( ( x > 4199 && x < 6800 ) ||
( x > 6999 && x < 8200 ) ||
( x > 9999 && x < 10100 ) ||
( x > 10999 && x < 11100 ) ||
( x > 11999 && x < 12100 ) )
{
// More awesome code
}
```
Is there a better way of doing this? | Extension methods?
```
// extension methods must be declared in a static class
static bool Between(this int value, int left, int right)
{
return value > left && value < right;
}
if(x.Between(4199, 6800) || x.Between(6999, 8200) || ...)
```
You can also do this awful hack:
```
static bool Between(this int value, params int[] values)
{
// Should be an even number of items (lower/upper bound pairs)
Debug.Assert(values.Length % 2 == 0);
for(int i = 0; i < values.Length; i += 2)
if(value.Between(values[i], values[i + 1]))
return true;
return false;
}
if(x.Between(4199, 6800, 6999, 8200, ...)
```
Awful hack, improved:
```
class Range
{
int Left { get; set; }
int Right { get; set; }
// Constructors, etc.
}
Range R(int left, int right)
{
return new Range(left, right)
}
static bool Between(this int value, params Range[] ranges)
{
for(int i = 0; i < ranges.Length; ++i)
if(value > ranges[i].Left && value < ranges[i].Right)
return true;
return false;
}
if(x.Between(R(4199, 6800), R(6999, 8200), ...))
```
Or, better yet (this does not allow duplicate lower bounds):
```
static bool Between(this int value, Dictionary<int, int> ranges)
{
// Basically iterate over Key-Value pairs and check if value falls within that range
}
if(x.Between({ { 4199, 6800 }, { 6999, 8200 }, ... }
``` | Define a Range type, then create a set of ranges and an extension method to see whether a value lies in any of the ranges. Then instead of hard-coding the values, you can create a collection of ranges and perhaps some individual ranges, giving them useful names to explain *why* you're interested in them:
```
static readonly Range InvalidUser = new Range(100, 200);
static readonly Range MilkTooHot = new Range (300, 400);
static readonly IEnumerable<Range> Errors =
new List<Range> { InvalidUser, MilkTooHot };
...
// Normal LINQ (where Range defines a Contains method)
if (Errors.Any(range => range.Contains(statusCode)))
// or (extension method on int)
if (statusCode.InAny(Errors))
// or (extension methods on IEnumerable<Range>)
if (Errors.Any(statusCode))
```
You may be interested in the generic `Range` type which is part of [MiscUtil](http://pobox.com/~skeet/csharp/miscutil). It allows for iteration in a simple way as well:
```
foreach (DateTime date in 19.June(1976).To(25.December(2005)).Step(1.Days()))
{
// etc
}
```
(Obviously that's also using some DateTime/TimeSpan-related extension methods, but you get the idea.) | Determine if a number falls within a specified set of ranges | [
"",
"c#",
"range",
""
] |
In legacy database tables we have numbered columns like C1, C2, C3, C100 or M1, M2, M3, M100.
This columns represent BLOB data.
It is not possible to change anything in this database.
By using JPA Embeddable we map each of the columns to a single field, and then during embedding we override the names using 100 override annotations.
Recently we have switched to Hibernate and I've found things like **UserCollectionType** and **CompositeUserType**. But I haven't found any use cases that are close to mine.
*Is it possible to implement some user type by using Hibernate to be able to map a bundle of columns to a collection **without additional querying**?*
**Edit:**
As you probably noticed, the names of the columns can differ from table to table. I want to create one type like "LegacyArray" with no need to specify all of the @Columns each time I use this type.
But instead I'd use
```
@Type(type = "LegacyArrayUserType",
parameters =
{
@Parameter(name = "prefix", value = "A"),
@Parameter(name = "size", value = "128")
})
List<Integer> legacyA;
@Type(type = "LegacyArrayUserType",
parameters =
{
@Parameter(name = "prefix", value = "B"),
@Parameter(name = "size", value = "64")
})
List<Integer> legacyB;
``` | I can think of a couple of ways that I would do this.
**1. Create views for the collection information that simulates a normalized table structure, and map it to Hibernate as a collection:**
Assuming your existing table is called `primaryentity`, I would create a view that's similar to the following:
```
-- untested SQL...
create view childentity as
(select primaryentity_id, c1 from primaryentity union
select primaryentity_id, c2 from primaryentity union
select primaryentity_id, c3 from primaryentity union
--...
select primaryentity_id, c100 from primaryentity)
```
Now from Hibernate's perspective, `childentity` is just a normalized table that has a foreign key to `primarykey`. Mapping this should be pretty straight forward, and is covered here:
* <http://docs.jboss.org/hibernate/stable/core/reference/en/html/collections.html>
The benefits of this approach:
* From Hibernate's point of view, the tables are normalized, it's a fairly simple mapping
* No updates to your existing tables
The drawbacks:
* Data is read-only, I don't think your view can be defined in an updatable manner (I could be wrong)
* Requires change to the database, you may need to create lots of views
Alternately, if your DBA won't even let you add a view to the database, or if you need to perform updates:
---
**2. Use Hibernate's [dynamic model mapping facility](https://www.hibernate.org/171.html) to map your C1, C2, C3 properties to a [Map](http://java.sun.com/javase/6/docs/api/java/util/Map.html), and have some code in your [DAO](http://en.wikipedia.org/wiki/Data_access_object) layer do the appropriate conversion between the Map and the Collection property:**
I have never done this myself, but I believe Hibernate does allow you to map tables to HashMaps. I'm not sure how dynamically Hibernate allows you to do this (i.e., Can you get away with simply specifying the table name, and having Hibernate automatically map all the columns?), but it's another way I can think of doing this.
If going with this approach though, be sure to use the [data access object](http://en.wikipedia.org/wiki/Data_access_object) pattern, and ensure that the internal implementation (use of HashMaps) is hidden from the client code. Also be sure to check before writing to the database that the size of your collection does not exceed the number of available columns.
The benefits of this approach:
* No change to the database at all
* Data is updatable
* O/R Mapping is relatively simple
The drawbacks:
* Lots of plumbing in the DAO layer to map the appropriate types
* Uses experimental Hibernate features that may change in the future | Personally, I think that design sounds like it breaks [first normal form](http://en.wikipedia.org/wiki/First_normal_form) for relational databases. What happens if you need C101 or M101? Change your schema again? I think it's very intrusive.
If you add Hibernate to the mix it's even worse. Adding C101 or M101 means having to alter your Java objects, your Hibernate mappings, everything.
If you have 1:m relationships with C and M tables, you'd be able to handle the cases I just cited by adding additional rows. Your Java objects contain Collection<C> or Collection<M>. Your Hibernate mappings are one-to-many and don't change.
Maybe the reason that you don't see any Hibernate examples to match your case because it's a design that's not recommended.
If you must, maybe you should look at [Hibernate Component Mapping](http://docs.jboss.org/hibernate/stable/core/reference/en/html/components.html).
UPDATE: The fact that this is legacy is duly noted. My point in bringing up first normal form is as much for others who might find this question in the future as it is for the person who posted the question. I would not want to answer the question in such a way that it silently asserted this design as "good".
Pointing out Hibernate component mapping is pertinent because knowing the name of what you're looking for can be the key when you're searching. Hibernate allows an object model to be finer grained than the relational model it maps. You are free to model a denormalized schema (e.g., Name and Address objects as part of a larger Person object). That's just the name they give such a technique. It might help find other examples as well. | Map database column1, column2, columnN to a collection of elements | [
"",
"java",
"hibernate",
"jpa",
"usertype",
"compositeusertype",
""
] |
Basically, my question is short and sweet: is the following a bad idea (encapsulating and rethrowing ex.InnerException instead of ex)?
(There's a similar question [here](https://stackoverflow.com/questions/57383/in-c-how-can-i-rethrow-innerexception-without-losing-stack-trace), but not quite... I want to reencapsulate the InnerException, so the stack trace is preserved without reflecting on internals)
```
public abstract class RpcProvider
{
public virtual object CallMethod(string methodName, params object[] parameters)
{
MethodInfo mi = this.GetType().GetMethod(methodName);
if (mi == null || mi.GetCustomAttributes(typeof(RpcCallAttribute), true).Length == 0)
{
throw new NotImplementedException("This method is not provided by this RPC provider.");
}
else
{
try
{
return mi.Invoke(this, parameters);
}
catch (TargetInvocationException ex)
{
throw new RpcException("There was an error in the RPC call. See the InnerException for details.", ex.InnerException);
}
}
}
}
```
The stacktrace below seems intact and fine (well, it's sans the internals of how reflection invokes the method), so is there an issue with this at all? For the stacktrace below to make sense, my inheritance hierarchy is:
```
-Oxide.Net.Rpc.RpcProvider
|-Oxide.Net.Rpc.XmlRpc
|-StartMenuSorter.DesktopMasters
(sanitised to protect the innocent, ie. me)
at Oxide.Net.Rpc.XmlRpc.DoRequest(Uri rpcConnectionUri, IXPathNavigable request, String userAgent) in \Projects\OxideLib\Oxide.Net\Rpc\XmlRpc.cs:line 243
at StartMenuSorter.DesktopMasters.GetIconInformation(IEnumerable`1 icons) in \Projects\StartMenuSorter\StartMenuSorter\DesktopMasters.cs:line 17
at Oxide.Net.Rpc.RpcProvider.CallMethod(String methodName, Object[] parameters) in \Projects\OxideLib\Oxide.Net\Rpc\RpcProvider.cs:line 52
at StartMenuSorter.Program.Main() in \Projects\StartMenuSorter\StartMenuSorter\Program.cs:line 36
at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args)
at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args)
at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
at System.Threading.ThreadHelper.ThreadStart()
``` | Looks completely reasonable: the stacktrace includes details of where the RpcProvider is calling the method, and hides gory and unnecessary reflection gubbins, so you're fine.
As with all error handling code, the ultimate consumer will be other developers, so it would be good to ask "do I have enough detail to debug this myself if something goes wrong?"
As there's not as much noise in the way as there would be if the outer exception were thrown, the technique does its job well.
"",
"c#",
"exception",
"reflection",
""
] |
I'm experimenting with the Entity Framework and I want to connect to an Access 2007 database.
The following code is inspired by <http://msdn.microsoft.com/en-us/library/system.data.entityclient.entityconnection.connectionstring.aspx>
I suspect that I've got the wrong end of the stick...
```
OleDbConnectionStringBuilder oledbConn = new OleDbConnectionStringBuilder();
oledbConn.DataSource = @"..\..\..\..\Pruebas.accdb"; //yep Access 2007!
EntityConnectionStringBuilder entityBuilder = new EntityConnectionStringBuilder ();
entityBuilder.Provider = "Microsoft.ACE.OLEDB.12.0";
entityBuilder.ConnectionString = oledbConn.ToString();
EntityConnection ec = new EntityConnection(entityBuilder.ToString());
ec.Open();
ec.Close();
```
The EntityConnectionStringBuilder tells me that it doesn't support the DataSource property. I can connect fine with ADO.net so I know that the path and the provider are correct.
Is this just the complete wrong approach? | The approach you are using to build the EF connection string is correct.
BUT...
The Entity Framework only works with Providers (e.g. SqlClient) that support something called provider services.
The OleDB provider doesn't support 'Provider Services' so you can't use EF with the OleDb (unless you can find a third party OleDb provider with support for EF).
Hope this helps
Alex
(Entity Framework Team, Microsoft) | I'm not sure you have either end of the stick. :)
Check out [this example](http://msdn.microsoft.com/en-us/library/system.data.entityclient.entityconnectionstringbuilder.aspx) instead. There might be other issues with your code, but it looks like you're setting the entity builder's ConnectionString property when you need to be setting its **ProviderConnectionString** property (among other properties).
It seems to me that for something called a "connection string builder", the ConnectionString property should be read-only (it's not). I guess it's intended to also double as a connection string parser.
Edit: I just looked at your code again, and I think all you have to do is change ConnectionString to ProviderConnectionString. You may have the stick after all! | Entity Framework with OleDB connection - am I just plain nuts? | [
"",
"c#",
"entity-framework",
"ms-access",
""
] |
I am currently writing a Comet application which requires me to send chunks of data at a time on a persistent connection. However, I'm having trouble flushing the message to the client before closing the connection. Is there any reason the PrintWriter.flush() method is not behaving like I think it should?
This is my Tomcat Comet implementation:
```
public void event(CometEvent event) throws IOException, ServletException {
HttpServletRequest request = event.getHttpServletRequest();
HttpServletResponse response = event.getHttpServletResponse();
if (event.getEventType() == EventType.BEGIN) {
request.setAttribute("org.apache.tomcat.comet.timeout", 300 * 1000);
PrintWriter out = response.getWriter();
out.println("BEGIN!");
out.flush();
System.out.println("EventType.BEGIN");
} else if (event.getEventType() == EventType.READ) {
InputStream is = request.getInputStream();
byte[] buf = new byte[512];
do {
int n = is.read(buf); //can throw an IOException
if (n > 0) {
System.out.println("Read " + n + " bytes: " + new String(buf, 0, n)
+ " for session: " + request.getSession(true).getId());
} else if (n < 0) {
return;
}
} while (is.available() > 0);
System.out.println("subtype: "+event.getEventSubType());
System.out.println("EventType.READ");
} else if (event.getEventType() == EventType.END) {
PrintWriter out = response.getWriter();
out.println("END!");
out.close();
System.out.println("checkError: "+out.checkError());
System.out.println(event.getEventSubType());
System.out.println("EventType.END");
//eventWorker.enqueue(new EndEvent(request, response));
} else if (event.getEventType() == EventType.ERROR) {
PrintWriter out = response.getWriter();
out.println("ERROR!");
out.flush();
System.out.println("checkError: "+out.checkError());
System.out.println("subtype: "+event.getEventSubType());
//response.getWriter().close();
System.out.println("EventType.ERROR");
} else {
(new ServletException("EXCEPTION")).printStackTrace();
}
}
```
So here I'm trying to send the message "BEGIN!" and keep the connection open afterwards so I can send more data. However, it seems that the message doesn't go through until the connection is closed.
This is my ajax code:
```
$.post('comet', function(data) { alert(data); });
```
After I run this code, Firebug tells me that this is the response header:
```
Server: Apache-Coyote/1.1
Transfer-Encoding: chunked
Date: Mon, 13 Jul 2009 21:16:29 GMT
```
This leads me to think that my browser received some data back, but how do I update something on the page before the connection is closed? | So it seems that the browser was receiving data this whole time, but because the connection wasn't closed, the JavaScript thought data was still being sent. This is why my jQuery callback function was not being called.
Looking at the W3C AJAX tutorial, I noticed that there are different ready states for the XMLHttpRequest object.
```
var xmlhttp;
if (window.XMLHttpRequest)
{
// code for IE7+, Firefox, Chrome, Opera, Safari
xmlhttp=new XMLHttpRequest();
}
else if (window.ActiveXObject)
{
// code for IE6, IE5
xmlhttp=new ActiveXObject("Microsoft.XMLHTTP");
}
else
{
alert("Your browser does not support XMLHTTP!");
}
xmlhttp.onreadystatechange=function()
{
if(xmlhttp.readyState==3) {
alert('process '+xmlhttp.responseText);
}
if(xmlhttp.readyState==4) {
alert('ready '+xmlhttp.responseText);
}
}
xmlhttp.open("GET","comet",true);
xmlhttp.send(null);
```
Traditionally, people only handle the 4th readyState, which means transfer is complete. In my case, however, I needed to read data before the transfer ends. Hence, I needed code to handle the 3rd readyState. | I remember there being an issue with servlets where if your page is redirected (like to an error page) the current response buffer, writer in this case, is thrown away without being written. | Java Servlets: why is PrintWriter.flush() not flushing? | [
"",
"java",
"tomcat",
"servlets",
"comet",
""
] |
Why isn't there a designer for native api forms in Visual Studio? Similar to Delphi?
If there exist any programs, tools, etc., please advise.
What is the best approach to design complex windows in pure API? | That's probably because there is no standard way of doing control layouts in WinAPI, you have to manage it by yourself. There is no base "Control" class in WinAPI - everything is a Window of some sort, so no way to support their differences with a common layout editor/designer.
You can however create your window layout in a dialog and make it resizable by yourself or using methods published on codeproject ([this](http://www.codeproject.com/KB/dialog/layoutmgr.aspx) or [this](http://www.codeproject.com/KB/dialog/resizabledialog.aspx) - both are MFC-related, but that's fairly easy to translate).
Or adapt [ScreenLib](http://blogs.msdn.com/windowsmobile/archive/2006/09/11/749467.aspx) to your desktop needs. | There's a resource editor for dialogs, and then there is code.
I've never really missed a visual design tool, though better support from the controls themselves would be nice.
**The core problem is the abstraction level:** using just Win32 controls, designing a complex GUI needs some forethought, and the controls all have slightly different oddities, capabilities and features. They don't have a common interface that can be used to build a designer on top.
WinForms was designed from the ground up with designer support in mind, and it shows. The main design concern of Win32 controls was memory footprint of code and data.
Even MFC (which still shows many signs of memory scarcity) doesn't abstract these oddities away well enough to warrant a decent forms designer.
All environments that come with a decent forms editor (I remember Watcom ++ / Optima, ZINC, and quite a few others whose names I've forgotten) also come with a decent forms library with a high abstraction level.
**Then, there's the problem of modifications.** What should be the designer's output? One could shoot for an XML data file, but that would add a dependency to some large libraries to your native app - doesn't make much sense. Or you create code, but C/C++ isn't well suited to that. Another binary format? You'd limit yourself to what the designer allows.
---
In the end, the designer would have to take care of each control separately, and still could not isolate you from knowing the controls and window mechanisms inside out. It was never undertaken when C++ was the first choice for large scale desktop development. Adding it *now*, when there are - arguably - better choices, would be a rather stupid move. | Native API window designer | [
"",
"c++",
"winapi",
"forms",
"form-designer",
""
] |
I'm trying to create a progress bar that will work asynchronously to the main process. I've created a new event and invoked it; however, every time I then try to perform operations on the progress bar I receive the following error:
"The calling thread cannot access this object because a different thread owns it"
The following code is an attempt to send an instance of the progress bar to the event as an object; it obviously failed, but it gives you an idea of what the code looks like.
```
private event EventHandler importing;
void MdbDataImport_importing(object sender, EventArgs e)
{
ProgressBar pb = (ProgressBar)sender;
while (true)
{
if (pb.Value >= 200)
pb.Value = 0;
pb.Value += 10;
}
}
private void btnImport_Click(object sender, RoutedEventArgs e)
{
importing += new EventHandler(MdbDataImport_importing);
IAsyncResult aResult = null;
aResult = importing.BeginInvoke(pbDataImport, null, null, null);
importing.EndInvoke(aResult);
}
```
Does anyone have ideas on how to do this?
Thanks in advance
SumGuy. | You should use something like this
```
pb.Dispatcher.Invoke(
System.Windows.Threading.DispatcherPriority.Normal,
new Action(
delegate()
{
if (pb.Value >= 200)
pb.Value = 0;
pb.Value += 10;
}
));
```
in your while loop | You need to delegate pbDataImport to the dispatcher. Only the GUI-dispatcher can make changes to GUI controls :) | WPF: Asynchronous progress bar | [
"",
"c#",
"wpf",
"events",
"asynchronous",
"event-handling",
""
] |
Hey guys, I've stripped away some of the complexities of my situation, down to the core of what I need to know.
I want to send a collection of Values to a method, and inside that method I want to test the Value against, say, a property of an Entity. The property will always be of the same Type as the Value.
I also want to test if the value is null, or the default value, obviously depending on whether the value type is a reference type, or a value type.
Now, if all the values sent to the method are of the same type, then I could do this using generics, quite easily, like this:
```
public static void testGenerics<TValueType>(List<TValueType> Values) {
//test null/default
foreach (TValueType v in Values) {
if (EqualityComparer<TValueType>.Default.Equals(v, default(TValueType))) {
//value is null or default for its type
} else {
//compare against another value of the same Type
if (EqualityComparer<TValueType>.Default.Equals(v, SomeOtherValueOfTValueType)) {
//value equals
} else {
//value doesn't equal
}
}
}
}
```
**My question is, how would I carry out the same function if my Collection contained values of different Types?**
My main concerns are successfully identifying null or default values, and successfully identifying if each value passed in, equals some other value of the same type.
**Can I achieve this by simply passing the type object?** I also can't really use the EqualityComparers as I can't use generics, because I'm passing in **an unknown number of different Types**.
is there a solution?
thanks
**UPDATE**
ok, searching around, could I use the following code to test for null/default successfully in my scenario (taken from [this SO answer](https://stackoverflow.com/questions/407337/net-get-default-value-for-a-reflected-propertyinfo/407345#407345)):
```
object defaultValue = type.IsValueType ? Activator.CreateInstance(type) : null;
```
I reckon this might work.
**Now, how can I compare two values of the same Type, without knowing their types,** reliably? | There is an `Object.Equals(object left, object right)` static method; it internally relies on the `Equals(object)` implementation available on one of the provided arguments. Why do you avoid using it?
The rules for implementing equality members are roughly the following:
1. Required: Override `Equals(object)` and `GetHashCode()` methods
2. Optional: Implement `IEquatable<T>` for your type (this is what `EqualityComparer.Default` relies on)
3. Optional: Implement == and != operators
So as you can see, if you rely on `object.Equals(object left, object right)`, this will be the best solution, relying only on the *strictly required part* of the equality implementation pattern.
Moreover, it will be the fastest option, since it relies only on virtual methods. Otherwise you'll inevitably involve some reflection.
```
public static void TestGenerics(IList values) {
foreach (object v in values) {
if (ReferenceEquals(null,v)) {
// v is null reference
}
else {
var type = v.GetType();
if (type.IsValueType && Equals(v, Activator.CreateInstance(type))) {
// v is default value of its value type
}
else {
// v is a non-default value of a value type, or a non-null value of a reference type
}
}
}
}
``` | The short answer is "yes", but the longer answer is that it's possible but will take a non-trivial amount of effort on your part and some assumptions in order to make it work. Your issue really comes when you have values that would be considered "equal" when compared in strongly-typed code, but do not have reference equality. Your biggest offenders will be value types, as a boxed `int` with a value of `1` won't have referential equality to another boxed `int` of the same value.
Given that, you have to go down the road of using things like the `IComparable` interface. If your types will *always* specifically match, then this is likely sufficient. If either of your values implements `IComparable` then you can cast to that interface and compare to the other instance to determine equality (`==0`). If neither implements it then you'll likely have to rely on referential equality. For reference types this will work unless there is custom comparison logic (an overloaded `==` operator on the type, for example).
Just bear in mind that the types would have to match EXACTLY. In other words, an `int` and an `short` won't necessarily compare like this, nor would an `int` and a `double`.
You could also go down the path of using reflection to dynamically invoke the `Default` property on the generic type determined at runtime by the supplied `Type` variable, but I wouldn't want to do that if I didn't have to for performance and compile-time safety (or lack thereof) reasons. | Working with an unknown number of unknown types - .NET | [
"",
"c#",
".net",
"generics",
"types",
""
] |
I know there are other naming convention discussions, but the ones I saw were all general guidelines, I'm asking more for advice on good practice regarding my specific case. However, if there is another question that directly covers my case, please link it and I will gladly delete this question.
I have a database for modeling electronic devices, which contains tables such as Model, Unit, Brand, Manufacturer, etc. I'm wondering what to call the column that represents the actual textual name of the instance of that entity - as in "Canon" or "HP" for brand. Obviously, I could adopt the convention to simply calling the column "name," which I did at first. But then during a select, you are forced to reference the table names to get the columns since "name" is not unique among the selected columns:
```
SELECT Model.name, Brand.name, FROM Model, Brand
```
Which is ok, but then I got to the MobilePhoneFormFactor table and the Year table, the latter which stores accumulated data on a specific year. If I force myself to use the same convention with these tables, I get:
```
SELECT Model.name, Brand.name, Year.name, MobilePhoneFormFactor.name FROM Model, Brand, MobilePhoneFormFactor, Year
```
It seems a bit weird to refer to the year as "Year.name" instead of just "year." I could make this case an exception, but then I'm already breaking the naming convention I just committed to. So I considered this alternative naming convention:
```
SELECT model, brand, formFactor, year FROM Model, Brand, MobilePhoneFormFactor, Year
```
Here the Brand table has a "brand" column that is the textual name of the brand, the Model table has a "model" column that is the textual name of the model... etc. As shown in this select statement, this approach also allows me to conveniently drop the table name prefixes for columns in most cases (if you kept the table names, you get the rather redundant "`SELECT Model.model, Brand.brand, ...`").
But then what if I want to have a description field for each table? The obvious implementation is once again Brand.desc, Year.desc, just like my original approach to names. So this makes me think it's safest and most consistent to simply revert back to the Brand.name, Year.name scheme, even though it's a little weird in some situations.
A third option would be to name the columns "brandName, modelName" etc. but you might as well use simply "name" and always reference the table name in queries with "Brand.name, Model.name" in that case.
What do you think will be the best for my case? I'm not a database veteran, so I'm wondering what things down the road will make me look back and say, "it would have been so much more intuitive if I had just gone with Brand.name, Year.name" for instance.
Note: regardless of what I end up calling it, the textual name column (the natural key) is being used as an alternate key with a unique constraint; I'm using an IDENTITY column for the primary key. | I'd go with `name`. I don't disagree that `Year.name` is a little weird, but IMAO it's worth it to always know what this column is called and not have to remember anything.
While we're exploring options, though, some people like to use names like `Model.ModelName`, `Brand.BrandName`, etc.
I, of course, think they're crazy. | This is a gray area: you will find that there is no right or wrong answer, but everyone has their own opinions and naming conventions.
Some general rules.
1. Be consistent.
2. Avoid repeating the table name in the columns.
3. Avoid SQL Server Key or Reserved words.
4. Be descriptive. If you and/or someone else cannot tell exactly what data is stored in the column based upon the name then it is not named correctly. | Column Naming Advice | [
"",
"sql",
"database-design",
"naming-conventions",
""
] |
While testing Python's time.clock() function on FreeBSD I've noticed it always returns the same value, around 0.156.
The time.time() function works properly, but I need something with a slightly higher resolution.
Does anyone know the C function it's bound to, and whether there is an alternative high-resolution timer?
I'm not profiling, so the timeit module is not really appropriate here. | time.clock() returns the processor time. That is, how much time the current process has used on the processor. So if you have a Python script called "clock.py" that does `import time;print time.clock()`, it will indeed print almost exactly the same value each time you run it, as a new process is started each time.
Here is a python console log that might explain it to you:
```
>>> import time
>>> time.clock()
0.11
>>> time.clock()
0.11
>>> time.clock()
0.11
>>> for x in xrange(100000000): pass
...
>>> time.clock()
7.7800000000000002
>>> time.clock()
7.7800000000000002
>>> time.clock()
7.7800000000000002
```
I hope this clarifies things. | Python's time.clock calls C function clock(3) -- `man clock` should confirm that it's supposed to work on BSD, so I don't know why it's not working for you. Maybe you can try working around this apparent bug in your Python port by using `ctypes` to call the clock function from the system C library directly (if you have said library as a .so/.dynlib/.dll or whatever dynamic shared libraries are called on FreeBSD)?
time.time is supposed to be very high resolution, BTW, as internally it calls gettimeofday (well, in a properly built Python, anyway) -- what resolution do you observe for it on your system?
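As a quick sanity check of that (the 0.05 s sleep here is just a stand-in workload, not from the original question):

```python
import time

# time.time() gives wall-clock seconds since the epoch as a float; on Unix it
# wraps gettimeofday(), so sub-millisecond resolution is typical.
start = time.time()
time.sleep(0.05)            # stand-in for the work being timed
elapsed = time.time() - start
print(elapsed)
```

On most Unix builds the measured value tracks the sleep closely, reflecting gettimeofday's microsecond resolution.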
**Edit**: here's `wat.c`, a BSD-specific extension (tested on my Mac only -- sorry but I have no other BSD flavor at hand right now) to work around this apparent FreeBSD port problem:
```
#include "Python.h"
#include <sys/time.h>
static PyObject *
wat_time(PyObject *self, PyObject *args)
{
struct timeval t;
if (gettimeofday(&t, (struct timezone *)NULL) == 0) {
double result = (double)t.tv_sec + t.tv_usec*0.000001;
return PyFloat_FromDouble(result);
}
return PyErr_SetFromErrno(PyExc_OSError);
}
static PyMethodDef wat_methods[] = {
{"time", wat_time, METH_VARARGS,
PyDoc_STR("time() -> microseconds since epoch")},
{NULL, NULL} /* sentinel */
};
PyDoc_STRVAR(wat_module_doc,
"Workaround for time.time issues on FreeBsd.");
PyMODINIT_FUNC
initwat(void)
{
Py_InitModule3("wat", wat_methods, wat_module_doc);
}
```
And here's the `setup.py` to put in the same directory:
```
from distutils.core import setup, Extension

setup(name = "wat",
      version = "0.1",
      maintainer = "Alex Martelli",
      maintainer_email = "aleaxit@gmail.com",
      url = "http://www.aleax.it/wat.zip",
      description = "WorkAround for Time in FreeBSD",
      ext_modules = [Extension('wat', sources=['wat.c'])],
      )
```
The URL is correct, so you can also get these two files zipped up [here](http://www.aleax.it/wat.zip).
To build & install this extension, `python setup.py install` (if you have permission to write in your Python's installation) or `python setup.py build_ext -i` to write wat.so in the very directory in which you put the sources (and then manually move it wherever you prefer to have it, but first try it out e.g. with `python -c'import wat; print repr(wat.time())'` in the same directory in which you've built it).
Please let me know how it works on FreeBSD (or any other Unix flavor with `gettimeofday`!-) -- if the C compiler complains about `gettimeofday`, you may be on a system which doesn't want to see its second argument, try without it!-). | Python clock function on FreeBSD | [
"",
"python",
"timer",
"freebsd",
"bsd",
"gettimeofday",
""
] |
I've just got an error.
When I try to assign an object like this:
```
$obj_md = new MDB2();
```
The error I get is "Assigning the return value of new by reference is deprecated". I've been looking for a solution, but the only one I've seen is to turn down the `error_reporting` level in php.ini. I've tried that too, but it didn't work. | In PHP 5 this idiom is deprecated:
```
$obj_md =& new MDB2();
```
You sure you've not missed an ampersand in your sample code? That would generate the warning you state, but it is not required and can be removed.
To see why this idiom was used in PHP4, see [this manual page](https://web.archive.org/web/20091020154940/http://www.php.net/manual/en/oop4.newref.php) (note that PHP4 is long dead and this link is to an archived version of the relevant page) | I recently moved a site using SimplePie (<http://simplepie.org/>) from a server that was using PHP 5.2.17 to one that is using PHP 5.3.2. It was after this move that I began receiving a list of error messages such as this one:
> Deprecated: Assigning the return value of new by reference is
> deprecated in .../php/simplepie.inc on line 738
After reviewing several discussions of this issue, I cleared things up by replacing all the instances of **=& new** with **= new** in the simplepie.inc file.
I'm not experienced enough to know if this will work in all instances where these error messages are received but it worked in this particular case and it may be worth trying. | Assigning the return value of new by reference is deprecated | [
"",
"php",
"php-5.2",
""
] |
I have a particular page that uses lots of widgets that have to be initialized by JavaScript routines (around 400; it's a complex one). This takes some time (even 20 secs on slower machines). Now I was thinking I could show a progress indicator (a simple text field with a percentage label) instead of the page, but I discovered that even if I update it continuously, the ongoing task blocks everything and I see no updates. So everything remains frozen until the heavy JavaScript task completes. I even tried the following experiment, but it freezes as well (and Firefox says that the script is taking too long to complete...):
```
function a(){
    for (var i = 0; i < 5000000000000000; i++){
        abb = i;
    }
}

var abb;

function c(){
    var ef = document.getElementById("pip");
    ef.innerHTML = abb;
}

function b(){
    setInterval(c, 50);
    setTimeout(a, 1000);
}
```
The only solution that comes to me is to break down the long job into pieces and update the label... but I was wondering if there is another solution! God, JS needs threads as soon as possible... :)
Any ideas? | Can I assume that the scripts are being executed from an onLoad event, so that the user isn't sitting at a blank page while it runs?
If yes, then I'd recommend breaking the scripts into separate functions, creating an array of those functions, iterating through that array and evaling the functions, and updating your progress indicator after every N functions.
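That batching could be sketched roughly like this (all names here are illustrative, not from the original page):

```javascript
// Run the initializer functions in small batches, yielding between batches so
// the browser can repaint the progress label before the next batch starts.
function runChunked(tasks, chunkSize, onProgress, onDone) {
  var i = 0;
  function step() {
    var end = Math.min(i + chunkSize, tasks.length);
    while (i < end) tasks[i++]();                    // run one batch of work
    onProgress(Math.round(100 * i / tasks.length));  // e.g. update a label
    if (i < tasks.length) setTimeout(step, 0);       // yield to the UI thread
    else if (onDone) onDone();
  }
  step();
}
```

On the page, `onProgress` could set `document.getElementById("pip").innerHTML`, and `tasks` would hold the ~400 widget initializers.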
And if it's a public-facing website, try to minimize the amount of JavaScript that's absolutely necessary to experience the page. And the amount of initialization that each widget needs. To paraphrase Agent Smith: what good are accordion-fold trees, if nobody waits to see them? | If you want threads you can use LiveConnect and create true Java Threads (if security manager allows it).
If don't you can use C's coroutines design pattern.
Regards. | Monitoring a long task in Javascript | [
"",
"javascript",
"concurrency",
"long-integer",
"task",
""
] |
Can anyone list the requirements (e.g. books, tutorials, libraries, etc.) for building an application in Java that can communicate with my MySQL database, which is running on the web?
I am running an online webstore built in PHP and MySQL. I would like to build a Java Application through which I can (CRUD) Products, Categories, Orders etc.
My Database is already built in MySQL and all the data is present in my online site. So I only need to work on a GUI App which can access my Store's data.
Here is an example application built in Delphi which acts as a Front End (Store Manager) for oscommerce shopping cart.

What technologies do I need for creating a Windows GUI in Java and a database application that communicates with my online store's database?
I have found some tutorials:
<https://web.archive.org/web/20120126005135/http://www.netbeans.org/kb/docs/java/gui-db.html>
<https://web.archive.org/web/20101125223754/http://www.netbeans.org/kb/docs/java/gui-db-custom.html>
<https://web.archive.org/web/20090228020221/http://www.netbeans.org/kb/articles/mysql-client.html>
<https://web.archive.org/web/20110923093326/http://www.netbeans.org/kb/docs/java/gui-db.html>
Are the above tutorials enough?
**UPDATE:**
**How about the following books**:
1). JDBC Practical Guide for Java Programmers
2). JDBC API Tutorial and Reference, 3rd Edition
Would these be enough for a beginner? | For the database communication you're going to need a [JDBC driver for MySql](http://dev.mysql.com/downloads/connector/j/5.1.html).
Check out the official [Sun JDBC tutorial](http://java.sun.com/docs/books/tutorial/jdbc/index.html) for details on how to use this to access your database. | also [Java Swing tutorial](http://java.sun.com/docs/books/tutorial/uiswing/) or check out [SWT](http://www.eclipse.org/swt/) if you prefer faster and more native looking UI | What is required for building Java Windows Application to access Online MySQL Database | [
"",
"java",
"mysql",
"user-interface",
"jdbc",
""
] |
Okay, so I've written a REST API implementation using mod\_rewrite and PHP. I'm accepting a query string via the body of HTTP DELETE requests (... collective groan?). Arguments about the wisdom of both previous statements aside, what I've found is that PHP doesn't automatically parse the request body of DELETE requests (i.e. $\_POST is empty despite a form-encoded query string appearing in the body of the request). This didn't particularly surprise me. What I did find surprising was that I've been unable to find a built-in PHP function for parsing a query string. Have I simply overlooked something? I can do something like:
```
public function parseQS($queryString, &$postArray){
    $queryArray = explode('&', $queryString);
    for ($i = 0; $i < count($queryArray); $i++) {
        $thisElement = explode('=', $queryArray[$i]);
        $postArray[$thisElement[0]] = htmlspecialchars(urldecode($thisElement[1]));
    }
}
```
... it just seems odd that there wouldn't be a PHP built-in to handle this. Also, I suspect I shouldn't be using htmlspecialchars & urldecode to scrub form-encoded values... it's a different kind of encoding, but I'm also having trouble discerning which PHP function I should be using to decode form-encoded data.
Any suggestions will be appreciated. | There's [parse\_str](http://www.php.net/manual/en/function.parse-str.php). Bad name, but it does what you want. And notice that it returns nothing; the second argument is passed by reference. | There is a function that does it - <http://php.net/parse_str>. Since PHP has to do this for itself, there's no reason not to also open it up for use in the API.
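A quick sketch of using it for the DELETE-body case (the query string below is made-up example data; in the real handler the raw body would come from `file_get_contents('php://input')`):

```php
<?php
// A literal string stands in for the raw DELETE body here.
$rawBody = 'id=42&mode=soft&tags[]=a&tags[]=b';
parse_str($rawBody, $params);   // $params is filled by reference
// $params is now array('id' => '42', 'mode' => 'soft', 'tags' => array('a', 'b'))
```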
> Parses the string into variables
>
> `void parse_str ( string $str [, array &$arr ] )`
>
> Parses str as if it were the query string passed via a URL and sets variables in the current scope.
```
<?php
$str = "first=value&arr[]=foo+bar&arr[]=baz";
parse_str($str, $output);
echo $output['first']; // value
echo $output['arr'][0]; // foo bar
echo $output['arr'][1]; // baz
``` | Decoding query strings in PHP | [
"",
"php",
"api",
"rest",
"query-string",
"formencode",
""
] |
I am a VC++ developer, but I spend most of my time learning C++. What are all the things I should know as a VC++ developer? | I don't understand why people here post things about WinAPI, .NET, MFC and ATL.
You really must know the language. Another benefit would be the cross platform libraries. C++ is not about GUI or Win32 programming. You can write Multi-Platform application with libraries like boost, QT, wxWidgets (may be some XML parser libs).
Visual C++ is a great IDE to develop C++ application and Microsoft is trying hard to make Visual C++ more standard conform. Learning standard language without dialects (MS dialect as well) will give you an advantage of Rapid Development Environment combined with multi-platform portability. There are many abstraction libraries out there, which work equally on Windows, Linux, Unix or Mac OS. Debugger is a great app in VC++ but not the first thing to start with. Try to write unit tests for your application. They will ensure on next modifications that you did not broke other part of tested (or may be debugged:) code.
Do not try to learn MFC or ATL from scratch, try to understand STL. MFC is old, and new version are more or less wrapper around ATL. ATL is some strange lib, which tries to marry STL-idioms (and sometimes STL itself) and WinAPI. But using ATL concepts without knowing what is behind, will make you unproductive as well. Some ATL idioms are very questionable and might be replaced by some better from boost or libs alike.
The most important things to learn are the language philosophy and concepts. I suggest you to dive into the language and read some serious books:
* [Design & Evolution of C++ by B. Stroustrup](https://rads.stackoverflow.com/amzn/click/com/0201543303)
* [Inside the C++ Object Model by S. Lippman](https://rads.stackoverflow.com/amzn/click/com/0201834545)
* [Design Patterns: Elements of Reusable Object-Oriented Software by GoF](https://rads.stackoverflow.com/amzn/click/com/0201633612)
* [C++ Gotchas: Avoiding Common Problems in Coding and Design by S. Dewhurst](https://rads.stackoverflow.com/amzn/click/com/0321125185)
* [Exceptional C++: 47 Engineering Puzzles, Programming Problems, and Solutions by. H. Sutter](https://rads.stackoverflow.com/amzn/click/com/0201615622)
* [More Exceptional C++: 40 New Engineering Puzzles, Programming Problems, and Solutions by H. Sutter](https://rads.stackoverflow.com/amzn/click/com/020170434X)
* [Exceptional C++ Style: 40 New Engineering Puzzles, Programming Problems, and Solutions by H. Sutter](https://rads.stackoverflow.com/amzn/click/com/0201760428)
By the time you get here, you will be a very advanced C++ developer.
The next books will make a guru out of you:
* [Modern C++ Design: Generic Programming and Design Patterns Applied by A. Alexandrescu](https://rads.stackoverflow.com/amzn/click/com/0201704315)
* [C++ Templates: The Complete Guide by by D. Vandevoorde, N. Josuttis](https://rads.stackoverflow.com/amzn/click/com/0201734842)
* [C++ Template Metaprogramming: Concepts, Tools, and Techniques from Boost and Beyond by D. Abrahams, A. Gurtovoy](https://rads.stackoverflow.com/amzn/click/com/0321227255)
* [Large-Scale C++ Software Design by J. Lakos](https://rads.stackoverflow.com/amzn/click/com/0201633620)
Remember one important rule: If you have a question, try to find an answer to it in the ISO C++ Standard (i.e. the Standard document) first. Doing so you will come across many other similar things, which will make you think about the language design.
Hope that book list helps you. Concepts from these books you will see in all well designed modern C++ frameworks.
With Kind Regards,
Ovanes | Most importantly, the Debugger.
And if you are into MFC/ATL Development, than those libraries off course.
Other things such as how to enable exceptions while debugging, how to load debugging symbols from disk paths etc are always of great help.
Actually, it really depends on what kind of projects you work on.
You could learn .NET Interoperability if you are doing some Mixed-mode Development.
You could learn ATL + COM if you are into developing COM Components.
There are several other frameworks but as I said, it really depends on what you are doing. | What are the concepts a vc++ developer should be familiar with? | [
"",
"c++",
"visual-c++",
""
] |
Imagine I have an enumeration such as this (just as an example):
```
public enum Direction{
    Horizontal = 0,
    Vertical = 1,
    Diagonal = 2
}
```
How can I write a routine to get these values into a System.Web.Mvc.SelectList, given that the contents of the enumeration are subject to change in the future? I want to get each enumeration member's name as the option text, and its value as the value text, like this:
```
<select>
    <option value="0">Horizontal</option>
    <option value="1">Vertical</option>
    <option value="2">Diagonal</option>
</select>
```
This is the best I can come up with so far:
```
public static SelectList GetDirectionSelectList()
{
    Array values = Enum.GetValues(typeof(Direction));
    List<ListItem> items = new List<ListItem>(values.Length);
    foreach (var i in values)
    {
        items.Add(new ListItem
        {
            Text = Enum.GetName(typeof(Direction), i),
            Value = i.ToString()
        });
    }
    return new SelectList(items);
}
```
However this always renders the option text as 'System.Web.Mvc.ListItem'. Debugging through this also shows me that Enum.GetValues() is returning 'Horizontal, Vertical' etc. instead of 0, 1 as I would've expected, which makes me wonder what the difference is between Enum.GetName() and Enum.GetValue(). | To get the value of an enum you need to cast the enum to its underlying type:
```
Value = ((int)i).ToString();
``` | It's been awhile since I've had to do this, but I think this should work.
```
var directions = from Direction d in Enum.GetValues(typeof(Direction))
                 select new { ID = (int)d, Name = d.ToString() };
return new SelectList(directions, "ID", "Name", someSelectedValue);
``` | How to get the values of an enum into a SelectList | [
"",
"c#",
"asp.net-mvc",
"enums",
""
] |
How would you extract the domain name from a URL, excluding any subdomains?
My initial simplistic attempt was:
```
'.'.join(urlparse.urlparse(url).netloc.split('.')[-2:])
```
This works for <http://www.foo.com>, but not <http://www.foo.com.au>.
Is there a way to do this properly without using special knowledge about valid TLDs (Top Level Domains) or country codes (because they change).
thanks | No, there is no "intrinsic" way of knowing that (e.g.) `zap.co.it` is a subdomain (because Italy's registrar DOES sell domains such as `co.it`) while `zap.co.uk` *isn't* (because the UK's registrar DOESN'T sell domains such as `co.uk`, but only like `zap.co.uk`).
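For illustration, distinguishing the two takes a lookup table of known suffixes; here is a tiny sketch (the suffix set is a made-up sample, NOT complete -- real code would load the full Public Suffix List):

```python
# Tiny illustrative suffix table; real code would load the Public Suffix List.
SUFFIXES = set(['com', 'net', 'co.uk', 'com.au'])

def registered_domain(host):
    parts = host.lower().split('.')
    # Scan left to right so the longest known suffix wins, then keep one
    # extra label in front of it (the registered domain).
    for i in range(len(parts) - 1):
        if '.'.join(parts[i + 1:]) in SUFFIXES:
            return '.'.join(parts[i:])
    return host

# registered_domain('www.foo.com')    -> 'foo.com'
# registered_domain('www.foo.com.au') -> 'foo.com.au'
```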
You'll just have to use an auxiliary table (or online source) to tell you which TLD's behave peculiarly like UK's and Australia's -- there's no way of divining that from just staring at the string without such extra semantic knowledge (of course it can change eventually, but if you can find a good online source that source will also change accordingly, one hopes!-). | Here's a great python module someone wrote to solve this problem after seeing this question:
<https://github.com/john-kurkowski/tldextract>
The module looks up TLDs in the [Public Suffix List](https://www.publicsuffix.org/), maintained by Mozilla volunteers.
Quote:
> `tldextract` on the other hand knows what all gTLDs [*Generic Top-Level Domains*]
> and ccTLDs [*Country Code Top-Level Domains*] look like
> by looking up the currently living ones according to the [Public Suffix
> List](https://www.publicsuffix.org/). So, given a URL, it knows its subdomain from its domain, and its
> domain from its country code. | How to extract top-level domain name (TLD) from URL | [
"",
"python",
"url",
"parsing",
"dns",
"extract",
""
] |