I have a Python script that is an HTTP server: <http://paste2.org/p/89701>. When benchmarking it with ApacheBench (ab) at a concurrency level (-c switch) lower than or equal to the value I specified in the socket.listen() call in the source code, everything works fine, but as soon as I put the concurrency level above the value in the socket.listen() call, performance drops through the floor. Some examples:
* socket.listen(**10**) and ab -n 50 -c **10** <http://localhost/> = **1200req/s**
* socket.listen(**10**) and ab -n 50 -c **11** <http://localhost/> = **40req/s**
* socket.listen(**100**) and ab -n 5000 -c **100** <http://localhost/> = **1000req/s**
* socket.listen(**100**) and ab -n 5000 -c **101** <http://localhost/> = **32req/s**
Nothing changes in the code between the two calls, I can’t figure out what is wrong - been at this problem for one day now. Also note that: The multiplexing version of the same code (I wrote to compare to the threaded version) works FINE no matter what socket.listen() is set to or what the concurrency (-c switch) in apache is set to.
*I've spent a day on IRC/python docs, posted on comp.lang.python and on my blog - I can't find ANYONE that even has an idea what could be wrong. Help me!*
|
I cannot confirm your results, and your server code looks fishy. I whipped up my own server and do not have this problem either. Let's move the discussion to a simpler level:
```
import thread, socket, Queue

connections = Queue.Queue()
num_threads = 10
backlog = 10

def request():
    while 1:
        conn = connections.get()
        data = ''
        while '\r\n\r\n' not in data:
            data += conn.recv(4048)
        conn.sendall('HTTP/1.1 200 OK\r\n\r\nHello World')
        conn.close()

if __name__ == '__main__':
    for _ in range(num_threads):
        thread.start_new_thread(request, ())
    acceptor = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    acceptor.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    acceptor.bind(('', 1234))
    acceptor.listen(backlog)
    while 1:
        conn, addr = acceptor.accept()
        connections.put(conn)
```
which on my machine does:
```
ab -n 10000 -c 10 http://127.0.0.1:1234/ --> 8695.03 [#/sec]
ab -n 10000 -c 11 http://127.0.0.1:1234/ --> 8529.41 [#/sec]
```
|
For the heck of it I also implemented an asynchronous version:
```
import socket, Queue, select

class Request(object):
    def __init__(self, conn):
        self.conn = conn
        self.fileno = conn.fileno
        self.perform = self._perform().next

    def _perform(self):
        data = self.conn.recv(4048)
        while '\r\n\r\n' not in data:
            msg = self.conn.recv(4048)
            if msg:
                data += msg
                yield
            else:
                break
        reading.remove(self)
        writing.append(self)
        data = 'HTTP/1.1 200 OK\r\n\r\nHello World'
        while data:
            sent = self.conn.send(data)
            data = data[sent:]
            yield
        writing.remove(self)
        self.conn.close()

class Acceptor:
    def __init__(self):
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(('', 1234))
        sock.listen(10)
        self.sock = sock
        self.fileno = sock.fileno

    def perform(self):
        conn, addr = self.sock.accept()
        reading.append(Request(conn))

if __name__ == '__main__':
    reading = [Acceptor()]
    writing = list()
    while 1:
        readable, writable, error = select.select(reading, writing, [])
        for action in readable + writable:
            try:
                action.perform()
            except StopIteration:
                pass
```
which performs:
```
ab -n 10000 -c 10 http://127.0.0.1:1234/ --> 16822.13 [#/sec]
ab -n 10000 -c 11 http://127.0.0.1:1234/ --> 15704.41 [#/sec]
```
|
I’m stunned: weird problem with python and sockets + threads
|
[
"",
"python",
"multithreading",
"apache",
"sockets",
""
] |
I used to use the standard mysql\_connect(), mysql\_query(), etc statements for doing MySQL stuff from PHP. Lately I've been switching over to using the wonderful MDB2 class. Along with it, I'm using prepared statements, so I don't have to worry about escaping my input and SQL injection attacks.
However, there's one problem I'm running into. I have a table with a few VARCHAR columns, that are specified as not-null (that is, do not allow NULL values). Using the old MySQL PHP commands, I could do things like this without any problem:
```
INSERT INTO mytable SET somevarchar = '';
```
Now, however, if I have a query like:
```
INSERT INTO mytable SET somevarchar = ?;
```
And then in PHP I have:
```
$value = "";
$prepared = $db->prepare($query, array('text'));
$result = $prepared->execute($value);
```
This will throw the error "`null value violates not-null constraint`"
As a temporary workaround, I check if `$value` is empty, and change it to `" "` (a single space), but that's a horrible hack and might cause other issues.
How am I supposed to insert empty strings with prepared statements, without it trying to instead insert a NULL?
**EDIT:** It's too big of a project to go through my entire codebase, find everywhere that uses an empty string "" and change it to use NULL instead. What I need to know is why standard MySQL queries treat "" and NULL as two separate things (as I think is correct), but prepared statements converts "" into NULL.
Note that "" and NULL are **not** the same thing. For Example, `SELECT NULL = "";` returns `NULL` instead of `1` as you'd expect.
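The distinction is easy to demonstrate outside PHP as well. Here is a quick sketch using Python's built-in sqlite3 module (a different engine than MySQL, used purely as a convenient harness; the NOT NULL semantics are the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (v TEXT NOT NULL)")

# An empty string satisfies the NOT NULL constraint...
conn.execute("INSERT INTO t VALUES (?)", ("",))

# ...but an actual NULL does not.
try:
    conn.execute("INSERT INTO t VALUES (?)", (None,))
except sqlite3.IntegrityError as e:
    print("rejected:", e)

# And NULL = '' evaluates to NULL (None here), not 1.
print(conn.execute("SELECT NULL = ''").fetchone()[0])
```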
|
This sounds like a problem with the MDB2 API fumbling PHP's duck-typing semantics. Because the empty string in PHP compares loosely equal to NULL, MDB2 is probably mistreating it as such. The ideal solution would be to find a workaround within its API, but I'm not overly familiar with it.
One thing that you should consider, though, is that an empty string in SQL is not a NULL value. You can insert them into rows declared 'NOT NULL' just fine:
```
mysql> CREATE TABLE tbl( row CHAR(128) NOT NULL );
Query OK, 0 rows affected (0.05 sec)
mysql> INSERT INTO tbl VALUES( 'not empty' ), ( '' );
Query OK, 2 rows affected (0.02 sec)
Records: 2 Duplicates: 0 Warnings: 0
mysql> SELECT row, row IS NULL FROM tbl;
+-----------+-------------+
| row | row IS NULL |
+-----------+-------------+
| not empty | 0 |
| | 0 |
+-----------+-------------+
2 rows in set (0.00 sec)
mysql> INSERT INTO tbl VALUES( NULL );
ERROR 1048 (23000): Column 'row' cannot be null
```
If you're unable to find (or implement) a workaround in the MDB2 API, one hackish solution (though slightly better than the one you're currently using) might be to define a [user variable](http://dev.mysql.com/doc/refman/5.0/en/user-variables.html) for the empty string --
```
SET @EMPTY_STRING = "";
UPDATE tbl SET row=@EMPTY_STRING;
```
Finally, if you need to use the empty string in an INSERT statement but find yourself unable to, the default value for string types in MySQL is an empty string. So you could simply omit the column from INSERT statement and it would automatically get set to the empty string (provided the column has a NOT NULL constraint).
|
Thanks to some of the answers, I realized that the problem may be in the MDB2 API, and not in the PHP or MYSQL commands themselves. Sure enough, I found this in the MDB2 FAQ:
> * Why do empty strings end up as NULL in the database? Why do I get a "NULL not allowed" error in NOT NULL text fields even though the default value is ""?
>   + The problem is that for some RDBMS (most notably Oracle) an empty string is NULL. Therefore MDB2 provides a portability option to enforce the same behaviour on all RDBMS.
>   + Since all portability options are enabled by default, you will have to disable the feature if you don't want this behaviour:
>     `$mdb2->setOption('portability', MDB2_PORTABILITY_ALL ^ MDB2_PORTABILITY_EMPTY_TO_NULL);`
Thanks to everyone who provided thoughtful answers.
|
Empty string in not-null column in MySQL?
|
[
"",
"php",
"mysql",
"mysqli",
"nullable",
"prepared-statement",
""
] |
I want to find in a vector of Object pointers for a matching object. Here's a sample code to illustrate my problem:
```
class A {
public:
    A(string a) : _a(a) {}
    bool operator==(const A& p) {
        return p._a == _a;
    }
private:
    string _a;
};

vector<A*> va;
va.push_back(new A("one"));
va.push_back(new A("two"));
va.push_back(new A("three"));
find(va.begin(), va.end(), new A("two"));
```
I want to find the second item pushed into the vector. But since the vector is defined as a collection of pointers, C++ does not use my overloaded operator, and instead compares the pointers themselves. What is the preferred C++ way of solving this?
|
Use find\_if with a functor:
```
template <typename T>
struct pointer_values_equal
{
    const T* to_find;

    bool operator()(const T* other) const
    {
        return *to_find == *other;
    }
};

// usage:
void test(const vector<A*>& va)
{
    A* to_find = new A("two");
    pointer_values_equal<A> eq = { to_find };
    find_if(va.begin(), va.end(), eq);
    // don't forget to delete A!
}
```
Note: your operator== for A ought to be const, or, better still, write it as a non-member friend function.
|
Either use std::find\_if and provide a suitable predicate yourself, see other answers for an example of this.
Or as an alternative have a look at [boost::ptr\_vector](http://www.boost.org/doc/libs/1_37_0/libs/ptr_container/doc/ptr_container.html), which provides transparent reference access to elements which are really stored as pointers (as an extra bonus, memory management is handled for you as well)
|
How to use find algorithm with a vector of pointers to objects in c++?
|
[
"",
"c++",
"algorithm",
"stl",
"pointers",
"vector",
""
] |
I am calling a SQL proc that has 3 OUTPUT params. After the call to the proc, one of the params does not return a value while the other two do. Profiler shows that all 3 values are being returned.
The params are declared as follows in the proc...
```
@UsrVariableID INT OUTPUT,
@OrganisationName NVARCHAR(256) OUTPUT,
@Visible bit OUTPUT
```
and the code that calls the proc is like this...
```
cm.Parameters.AddWithValue("@OrganisationName", name);
cm.Parameters["@OrganisationName"].Direction = ParameterDirection.Output;
cm.Parameters.AddWithValue("@Visible", visible);
cm.Parameters["@Visible"].Direction = ParameterDirection.Output;
cm.ExecuteNonQuery();
name = cm.Parameters["@OrganisationName"].Value.ToString();
visible = bool.Parse(cm.Parameters["@Visible"].Value.ToString());
id = int.Parse(cm.Parameters["@UsrVariableID"].Value.ToString());
```
The param that fails is @OrganisationName.
I'm wondering if it's because the param is of type string in the code but NVARCHAR in the proc.
Anyone got any ideas?
|
With output parameters that have variable length data types (nvarchar, varchar, etc), I've found that being more explicit leads to better results. In the case you've posted, a type is not specified on the C# side. I would probably change things to look something like the following:
```
SqlParameter theOrganizationNameParam = new SqlParameter( "@OrganisationName", SqlDbType.NVarChar, 256 );
theOrganizationNameParam.Direction = ParameterDirection.Output;
cm.Parameters.Add( theOrganizationNameParam );
cm.ExecuteNonQuery();
name = theOrganizationNameParam.Value.ToString();
```
With this you can guarantee the output parameter has the correct data type, and can therefore access the **Value** property without an exception being thrown.
Hope this sheds some light.
|
You could try declaring the parameters first (then set the values), and see if this make a difference.
```
cm.Parameters.Add("@OrganisationName", SqlDbType.NVarChar, 256).Direction = ParameterDirection.Output;
cm.Parameters["@OrganisationName"].Value = name;
```
But to me there doesn't seem to be anything wrong with what you have posted.
Incidentally, you shouldn't need the .Parse(.ToString()) combination; you only need to cast.
```
visible = bool.Parse(cm.Parameters["@Visible"].Value.ToString());
```
becomes
```
visible = (bool)cm.Parameters["@Visible"].Value;
```
|
Output Parameter not Returned from Stored Proc
|
[
"",
"c#",
".net",
"sql-server",
"stored-procedures",
"ado.net",
""
] |
In my database, in one of the table I have a GUID column with allow nulls. I have a method with a Guid? parameter that inserts a new data row in the table. However when I say `myNewRow.myGuidColumn = myGuid` I get the following error:
```
Cannot implicitly convert type 'System.Guid?' to 'System.Guid'.
```
|
The ADO.NET API has some problems when it comes to handling nullable value types (i.e. it simply doesn't work correctly). We've had no end of issues with it, and so have arrived at the conclusion that it's best to manually set the value to null, e.g.
```
myNewRow.myGuidColumn = myGuid == null ? (object)DBNull.Value : myGuid.Value
```
It's painful extra work that ADO.NET should handle, but it doesn't seem to do so reliably (even in 3.5 SP1). This at least works correctly.
We've also seen issues with passing nullable value types to SqlParameters where the generated SQL includes the keyword `DEFAULT` instead of `NULL` for the value so I'd recommend the same approach when building parameters.
|
OK; how is myGuidColumn defined, and how is myGuid defined?
If myGuid is `Guid?` and myGuidColumn is `Guid`, then the error is correct: you will need to use `myGuid.Value`, or `(Guid)myGuid` to get the value (which will throw if it is null), or perhaps `myGuid.GetValueOrDefault()` to return the zero guid if null.
If myGuid is `Guid` and myGuidColumn is `Guid?`, then it should work.
If myGuidColumn is `object`, you probably need `DBNull.Value` instead of the regular null.
Of course, if the column is truly nullable, you might simply want to ensure that it is `Guid?` in the C# code ;-p
|
"Cannot implicitly convert type 'System.Guid?' to 'System.Guid'." - Nullable GUID
|
[
"",
"c#",
".net",
"sql-server",
"guid",
"nullable",
""
] |
I have a custom class that implements `ICollection`, and this class is readonly, i.e. `IsReadOnly` returns true (as opposed to using the `readonly` keyword), and all functions that would normally modify the data in the collection throw `InvalidOperationException`s.
Now, given such a construct, and a quick skim over the thread-safety issues when implementing `ICollection` (specifically [`ICollection.IsSynchronized`](http://msdn.microsoft.com/en-us/library/system.collections.icollection.issynchronized.aspx) and friends), I came up with this quick and dirty solution.
```
bool ICollection.IsSynchronized { get{ return true; } }
object ICollection.SyncRoot { get{ return new Object(); } }
```
Now, given the examples in MSDN, this won't cause different threads to lock properly, because they are getting different objects from `SyncRoot`. Given that this is a readonly collection though, is this an issue? Are there memory/GC issues with returning `new Object()`? Any other issues you can see with this implementation?
|
Yes this is an issue in some cases. Even though the collection is read only and cannot be changed, the objects the collection references are not read only. Thus if the clients use the SyncRoot to perform locking they will not be thread safe when modifying the objects referenced by the collection.
I would recommend adding:
```
private readonly object syncRoot = new object();
```
to your class. Return this as the SyncRoot and you're good to go.
|
I guess the issue would be if clients used your sync root to achieve locking of not only your collection, but something else. Suppose they cached the size of the collection - or maybe "what subset of this collection matches a predicate" - they would reasonably assume that they could use your SyncRoot to guard both your collection and their other member.
Personally I hardly use SyncRoot at all, but I think it would be sensible to always return the same thing.
|
ICollection, readonly collections, and synchronisation. Is this right?
|
[
"",
"c#",
".net-2.0",
""
] |
How do I round any value to the nearest interval of 10 in C#? For example, if I have 11, I want it to return 10; if I have 136, I want it to return 140.
I can easily do it by hand
```
return ((int)(number / 10)) * 10;
```
But I am looking for a built-in algorithm to do this job, something like Math.Round(). The reason I don't want to do it by hand is that I don't want to write the same or similar piece of code all over my projects, even for something as simple as the above.
|
There is no built-in function in the class library that will do this. The closest is [System.Math.Round()](http://msdn.microsoft.com/en-us/library/system.math.round.aspx), which only rounds numbers of types Decimal and Double to the nearest integer value. However, you can wrap your statement up in an extension method, if you are working with .NET 3.5, which will allow you to use the function much more cleanly.
```
public static class ExtensionMethods
{
    public static int RoundOff(this int i)
    {
        return ((int)Math.Round(i / 10.0)) * 10;
    }
}

int roundedNumber = 236.RoundOff();  // returns 240
int roundedNumber2 = 11.RoundOff();  // returns 10
```
If you are programming against an older version of the .NET framework, just remove the "this" from the RoundOff function, and call the function like so:
```
int roundedNumber = ExtensionMethods.RoundOff(236); // returns 240
int roundedNumber2 = ExtensionMethods.RoundOff(11); // returns 10
```
|
Use Math.Ceiling to always round up.
```
int number = 236;
number = (int)(Math.Ceiling(number / 10.0d) * 10);
```
Modulus(%) gets the remainder, so you get:
```
// number = 236 + 10 - 6
```
Put that into an extension method
```
public static int roundupbyten(this int i)
{
    // return i + (10 - i % 10);  <-- logic error. Oops!
    return (int)(Math.Ceiling(i / 10.0d) * 10);  // fixed
}

// call like so:
int number = 236.roundupbyten();
```
above edited: I should've gone with my first instinct to use Math.Ceiling
I [blogged about this when calculating UPC check digits](http://atomiton.blogspot.com/2007/12/calculating-checkdigit-using.html).
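The edited-out modulus version has a corner case that the ceiling version avoids: for a number that is already a multiple of 10, it adds a full extra 10. A quick Python check of both formulas:

```python
import math

def roundup_mod(i):
    # the original modulus idea: add whatever is missing to reach the next 10
    return i + (10 - i % 10)

def roundup_ceil(i):
    # the fixed version: ceiling of i/10, scaled back up
    return int(math.ceil(i / 10.0) * 10)

print(roundup_mod(236), roundup_ceil(236))   # both give 240
print(roundup_mod(240), roundup_ceil(240))   # 250 vs 240 - the logic error
```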
|
Built in .Net algorithm to round value to the nearest 10 interval
|
[
"",
"c#",
"math",
"rounding",
""
] |
So... I used to think that when you access a file but specify the name without a path (CAISLog.csv in my case), .NET expects the file to reside at the same path as the running .exe.
This works when I'm stepping through a solution (C# .NET 2.\*, VS2K5), but when I run the app in normal mode (started by a WebSphere MQ trigger monitor and running in the background as a network service), instead of accessing the file at the path where the .exe is, it's being looked for at C:\WINDOWS\system32. If it matters, the parent task's .exe is in almost the same folder structure/path as my app.
I get a matching error: "*System.UnauthorizedAccessException: Access to the path 'C:\WINDOWS\system32\CAISLog.csv' is denied.*"
My workaround is to just fully qualify the location of my file. What I want to understand, however is **"What is the .NET rule that governs how a path is resolved when only the file name is specified during IO?"** I feel I'm missing some basic concept and it's bugging me bad.
edit - I'm not sure it's a .NET rule per se, but Schmuli seems to be explaining the concept a little more clearly. I will definitely try Rob Prouse's suggestions in the future, so +1 on that too.
If anyone has re-wording suggestions, please emphasize that I don't *really* care about finding the path to my .exe - I just didn't understand what was going on with relative path resolution (and I may still have my terminology screwed up)...
|
When an application (WinForms) starts up, the `Environment.CurrentDirectory` contains the path to the application folder (i.e. the folder that contains the .exe assembly). Using any of the File Dialogs, ex. `OpenFileDialog`, `SaveFileDialog`, etc. will cause the current directory to change (if a different folder was selected).
When running a Windows Service, the current directory is C:\Windows\System32, as that is the System folder and it is the system (i.e. the Operating System) that is actually running your Windows Service.
Note that specifying a relative path in most of the `System.IO` objects, will fall back to using the `Environment.CurrentDirectory` property.
As mentioned, there are several ways to obtain the path of the service executable, using `Assembly.GetEntryAssembly()` or `Assembly.GetExecutingAssembly()` and then using either the `Location` property or the `CodeBase` property (be aware that this is the file path, not directory, of the executable).
Another option is to use:
```
System.IO.Directory.SetCurrentDirectory( System.AppDomain.CurrentDomain.BaseDirectory );
```
Make the call in the Service's `OnStart` method, applying it to the whole application.
|
It is based on the current working directory which may or may not be the same as where your application resides, especially if started from a different program or a shortcut with a different working directory.
Rather than hard code the path, get the path to your program and use it. You can do this with something like this
```
Assembly ass = Assembly.GetEntryAssembly();
string dir = Path.GetDirectoryName(ass.Location);
string filename = Path.Combine( dir, "CAISLog.csv" );
```
This assumes that the entry assembly is where your file is. If not, you can change up getting the assembly for something like;
```
Assembly ass = Assembly.GetAssembly( typeof( AClassInYourAssembly ) );
```
|
How does default/relative path resolution work in .NET?
|
[
"",
"c#",
"visual-studio",
"file-io",
".net-2.0",
""
] |
I am creating a Python script where it does a bunch of tasks and one of those tasks is to launch and open an instance of Excel. What is the ideal way of accomplishing that in my script?
|
While the `Popen` answers are reasonable for the general case, I would recommend `win32com` for this specific case, if you want to do something useful with it:
It goes something like this:
```
from win32com.client import Dispatch
xl = Dispatch('Excel.Application')
wb = xl.Workbooks.Open('C:\\Documents and Settings\\GradeBook.xls')
xl.Visible = True # optional: if you want to see the spreadsheet
```
Taken from [a mailing list post](https://mail.python.org/pipermail/python-list/2005-June/315330.html) but there are plenty of examples around.
|
or
```
os.system("start excel.exe <path/to/file>")
```
(presuming it's on the PATH, and you're on Windows)
and also on Windows, just `start <filename>` works, too - if it's an associated extension already (as xls would be)
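In newer code, `subprocess` is usually preferred over `os.system`. A hedged sketch (the `launcher_command` helper is made up for illustration; `start` is a cmd.exe built-in, so it needs the `cmd /c` prefix, and the empty string after `start` is its window-title argument):

```python
import subprocess
import sys

def launcher_command(path, platform):
    # pick the per-OS helper that opens a file with its associated application
    if platform == "win32":
        return ["cmd", "/c", "start", "", path]   # "" = window title
    if platform == "darwin":
        return ["open", path]
    return ["xdg-open", path]

# subprocess.Popen(launcher_command("GradeBook.xls", sys.platform))
```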
|
How can I launch an instance of an application using Python?
|
[
"",
"python",
"scripting",
"ipc",
""
] |
I have been working on a childish little program: there are a bunch of little circles on the screen, of different colors and sizes. When a larger circle encounters a smaller circle it eats the smaller circle, and when a circle has eaten enough other circles it reproduces. It's kind of neat!
However, the way I have it implemented, the process of detecting nearby circles and checking them for edibility is done with a for loop that cycles through the entire living population of circles... which takes longer and longer as the population spikes into the 3000s before it starts to drop. The process doesn't slow my computer down; I can go off and play Dawn of War or whatever and there isn't any slowdown. It's just that checking every circle against every other circle for collisions takes a long time...
So what occurred to me, is that I could try to separate the application window into four quadrants, and have the circles in the quadrants do their checks simultaneously, since they would have almost no chance of interfering with each other: or something to that effect!
My question, then, is: how does one make for loops that run side by side? In Java, say.
|
Computers can usually execute only one instruction at a time per CPU or core.
However, as you have noticed, your operating system (and other programs) appear to run many tasks at the same time.
This is accomplished by splitting the work into processes, and each process can further implement concurrency by spawning threads. The operating system then switches between each process and thread very quickly to give the illusion of multitasking.
In your situation, your Java program is a single process, and you would need to create 4 threads, each running its own loop. It can get tricky, because threads need to synchronize their access to shared variables, to prevent one thread from editing a variable while another thread is reading it.
Because threading is a complex subject it would take far more explaining than I can do here.
However, you can read Sun's excellent tutorial on concurrency, which covers everything you need to know:
<http://java.sun.com/docs/books/tutorial/essential/concurrency/>
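As runnable code, the idea has this shape; Python is used here for brevity (`java.util.concurrent` gives you the same pieces in Java). Four threads each scan a quarter of the population, and a lock guards the shared total:

```python
import threading

circles = list(range(10000))        # stand-ins for the circle objects
total = 0
lock = threading.Lock()

def scan(chunk):
    global total
    hits = sum(1 for c in chunk if c % 7 == 0)   # stand-in for collision checks
    with lock:                                   # synchronized update of shared state
        total += hits

# partition the population four ways and run the loops side by side
threads = [threading.Thread(target=scan, args=(circles[i::4],)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(total)   # same count a single sequential loop would produce
```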
|
The problem you have here can actually be solved without threads.
What you need is a spatial data structure. A quad tree would be best, or, if the field in which the spheres move is fixed (I assume it is), you could use a simple grid. Here's the idea:
Divide the display area into a square grid where each cell is at least as big as your biggest circle. For each cell, keep a list (a linked list is best) of all the circles whose center is in that cell. Then, during the collision detection step, go through each cell and check each circle in that cell against all the other circles in that cell and the surrounding cells.
Technically you don't have to check all the cells around each one, as some of the pairs will have already been checked.
you can combine this technique with multithreading techniques to get even better performance.
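A minimal sketch of that grid in Python (the Java version has the same structure, with a `HashMap<Cell, List<Circle>>`); note the cell size here must be at least the largest *diameter*, so that touching circles always land in the same or adjacent cells:

```python
from collections import defaultdict
from itertools import combinations

def colliding_pairs(circles, cell):
    """circles: list of (x, y, r) tuples; cell: grid cell size, >= largest diameter."""
    grid = defaultdict(list)
    for i, (x, y, r) in enumerate(circles):
        grid[(int(x // cell), int(y // cell))].append(i)
    hits = set()
    for (cx, cy), members in grid.items():
        # candidates: circles in this cell plus its 8 neighbours
        nearby = set(members)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nearby.update(grid.get((cx + dx, cy + dy), ()))
        for i, j in combinations(sorted(nearby), 2):
            (x1, y1, r1), (x2, y2, r2) = circles[i], circles[j]
            if (x1 - x2) ** 2 + (y1 - y2) ** 2 <= (r1 + r2) ** 2:
                hits.add((i, j))
    return hits
```

Each circle is only ever compared against circles in its own neighbourhood, so the cost stays near-linear instead of quadratic in the population.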
|
How do I make for loops run side by side?
|
[
"",
"java",
"loops",
""
] |
I need to find two adjacent repeating digits in a string and replace them with a single one. How do I do this in Java? Some examples:
123345 should be 12345
77433211 should be 74321
|
Probably a `replaceAll("(\\d)\\1+", "$1")`
* `$1` in the replacement string refers back to the first capturing group.
* `+` allows a whole run of identical digits to be replaced at once; `(\\d)\\1` alone would only replace them in pairs: `777xx` => `77xx` (thank you [Ben Doom](https://stackoverflow.com/users/12267/ben-doom) for the remark)
So:
```
System.out.println("77433211".replaceAll("(\\d)\\1+", "$1"));
```
will return
```
74321
```
---
```
String java.lang.String.replaceAll(String regex, String replacement)
```
Replaces each substring of this string that matches the given regular expression with the given replacement.
An invocation of this method of the form `str.replaceAll(regex, repl)` yields exactly the same result as the expression
```
java.util.regex.Pattern.compile(regex).matcher(str).replaceAll(repl)
```
---
**Warning**: the `String.replaceAll()` function does not modify the String on which it is applied. It returns a modified String (or *a new String with the same content* if the pattern does not match anything).
So you need to assign the result of a `replaceAll()` call back to a variable to actually keep the regexp changes.
```
String aString = "77433211";
aString = aString.replaceAll("(\\d)\\1+", "$1");
```
|
I finally did it myself. For those who are looking for the solution, this is how I did it:
```
import java.util.regex.*;

public class RepetingDigits {
    public static void main(String[] args) {
        String word = "77433211";
        Pattern pattern = Pattern.compile("(\\d)\\1");
        Matcher matcher = pattern.matcher(word);
        word = matcher.replaceAll("$1");
        System.out.println(word);
    }
}
```
Even simpler:
```
word = word.replaceAll("(\\d)\\1", "$1");
```
|
How to find two adjacent repeating digits and replace them with a single digit in Java?
|
[
"",
"java",
"regex",
"string",
""
] |
If you have an STL vector which has been resized, is it safe to take the address of element 0 and assume the rest of the vector will follow in memory?
e.g.
```
vector<char> vc(100);
// do some stuff with vc
vc.resize(200);
char* p = &vc[0];
// do stuff with *p
```
|
**Yes, that is a valid assumption (\*).**
From the C++03 standard (23.2.4.1):
> The elements of a vector are stored
> contiguously, meaning that if v is a
> vector where T is some
> type other than bool, then it obeys
> the identity &v[n] == &v[0] + n for
> all 0 <= n < v.size().
(\*) ... but watch out for the array being reallocated (invalidating any pointers and iterators) after adding elements to it.
|
The C++03 standard added wording to make it clear that vector elements must be contiguous.
C++03 23.2.4 Paragraph 1 contains the following language which is *not* in the C++98 standard document:
> The elements of a `vector` are stored
> contiguously, meaning that if `v` is a
> `vector<T, Allocator>` where `T` is
> some type other than `bool`, then it
> obeys the identity `&v[n] == &v[0] +
> n` for all `0 <= n < v.size()`.
Herb Sutter talks about this change in one of his blog entries, [Cringe not: Vectors are guaranteed to be contiguous](http://herbsutter.wordpress.com/2008/04/07/cringe-not-vectors-are-guaranteed-to-be-contiguous/):
> ... contiguity is in fact part of the
> vector abstraction. It’s so important,
> in fact, that when it was discovered
> that the C++98 standard didn’t
> completely guarantee contiguity, the
> C++03 standard was amended to
> explicitly add the guarantee.
|
Is it safe to assume that STL vector storage is always contiguous?
|
[
"",
"c++",
"stl",
"vector",
""
] |
> **Possible Duplicate:**
> [How to convert UNIX timestamp to DateTime and vice versa?](https://stackoverflow.com/questions/249760/how-to-convert-unix-timestamp-to-datetime-and-vice-versa)
I've got the following class:
```
[DataContractAttribute]
public class TestClass
{
    [DataMemberAttribute]
    public DateTime MyDateTime { get; set; }
}
```
Here's the JSON:
```
{ "MyDateTime":"1221818565" }
```
The JSON is being returned from a PHP webservice.
What I need to do, is convert that epoch string into a valid C# DateTime. What's the best way of doing this?
I can do this:
```
[IgnoreDataMemberAttribute]
public DateTime MyDateTime { get; set; }

[DataMemberAttribute(Name = "MyDateTime")]
public Int32 MyDateTimeTicks
{
    get { return this.MyDateTime.Convert(...); }
    set { this.Created = new DateTime(...); }
}
```
But the trouble with this is, the MyDateTimeTicks is public (changing it to private causes an exception in the serialization process)
|
Finishing what you posted, AND making it private seemed to work fine for me.
```
[DataContract]
public class TestClass
{
    private static readonly DateTime unixEpoch = new DateTime(1970, 1, 1, 0, 0, 0, 0, DateTimeKind.Utc);

    [IgnoreDataMember]
    public DateTime MyDateTime { get; set; }

    [DataMember(Name = "MyDateTime")]
    private int MyDateTimeTicks
    {
        get { return (int)(this.MyDateTime - unixEpoch).TotalSeconds; }
        set { this.MyDateTime = unixEpoch.AddSeconds(Convert.ToInt32(value)); }
    }
}
```
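For reference, the same epoch arithmetic checked in Python, using the timestamp from the question's JSON:

```python
from datetime import datetime, timedelta

unix_epoch = datetime(1970, 1, 1)

# epoch seconds -> date/time
print(unix_epoch + timedelta(seconds=1221818565))   # 2008-09-19 10:02:45

# date/time -> epoch seconds
print(int((datetime(2008, 9, 19, 10, 2, 45) - unix_epoch).total_seconds()))
```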
|
```
private DateTime ConvertJsonStringToDateTime(string jsonTime)
{
    if (!string.IsNullOrEmpty(jsonTime) && jsonTime.IndexOf("Date") > -1)
    {
        string milis = jsonTime.Substring(jsonTime.IndexOf("(") + 1);
        string sign = milis.IndexOf("+") > -1 ? "+" : "-";
        string hours = milis.Substring(milis.IndexOf(sign));
        milis = milis.Substring(0, milis.IndexOf(sign));
        hours = hours.Substring(0, hours.IndexOf(")"));
        return new DateTime(1970, 1, 1, 0, 0, 0, 0, DateTimeKind.Utc).AddMilliseconds(Convert.ToInt64(milis)).AddHours(Convert.ToInt64(hours) / 100);
    }
    return DateTime.Now;
}
```
|
How to convert DateTime from JSON to C#?
|
[
"",
"c#",
"json",
"serialization",
"silverlight-2.0",
""
] |
I have a table of data, and I allow people to add meta data to that table.
I give them an interface that allows them to treat it as though they're adding extra columns to the table their data is stored in, but I'm actually storing the data in another table.
```
Data Table
    DataID
    Data

Meta Table
    DataID
    MetaName
    MetaData
```
So if they wanted a table that stored the data, the date, and a name, I'd have the data in the data table, one row in the meta table with "Date" in MetaName and the date in MetaData, and another row with "Name" in MetaName and the name in MetaData.
I now need a query that takes the information from these tables and presents it as though it came from a single table with the two additional columns "Date" and "Name", so to the customer it looks like there's a single table with their custom columns:
```
MyTable
Data
Date
Name
```
Or, in other words, how do I go from here:
```
Data Table
DataID Data
1 Testing!
2 Hello, World!
Meta Table
DataID MetaName MetaData
1 Date 20081020
1 Name adavis
2 Date 20081019
2 Name mdavis
```
To here:
```
MyTable
Data Date Name
Testing! 20081020 adavis
Hello, World! 20081019 mdavis
```
Years ago when I did this in MySQL using PHP, I did two queries, the first to get the extra meta data, the second to join them all together. I'm hoping that modern databases have alternate methods of dealing with this.
Related to option 3 of [this question](https://stackoverflow.com/questions/218848/design-decision-dynamically-adding-data-question#218872).
-Adam
|
You want to pivot each of the name-value pair rows in the Meta table into a column. Try this SQL:
```
DECLARE @Data TABLE (
DataID INT IDENTITY(1,1) PRIMARY KEY,
Data VARCHAR(MAX)
)
DECLARE @Meta TABLE (
DataID INT ,
MetaName VARCHAR(MAX),
MetaData VARCHAR(MAX)
)
INSERT INTO @Data
SELECT 'Data'
INSERT INTO @Meta
SELECT 1, 'Date', CAST(GetDate() as VARCHAR(20))
UNION
SELECT 1, 'Name', 'Joe Test'
SELECT * FROM @Data
SELECT * FROM @Meta
SELECT
D.DataID,
D.Data,
MAX(CASE MetaName WHEN 'Date' THEN MetaData ELSE NULL END) as Date,
MAX(CASE MetaName WHEN 'Name' THEN MetaData ELSE NULL END) as Name
FROM
@Meta M
JOIN @Data D ON M.DataID = D.DataID
GROUP BY
D.DataID,
D.Data
```
|
```
SELECT DataTable.Data AS Data, MetaTable.MetaData AS Date, MetaTable.MetaName AS Name
FROM DataTable, MetaTable
WHERE DataTable.DataID = MetaTable.DataID
```
You'll probably want to add an additional clause (AND Data = 'some value') to return the rows the user is interested in.
|
What SQL query or view will show "dynamic columns"
|
[
"",
"sql",
"database",
"metadata",
"dataview",
""
] |
I am looking for a lightweight messaging framework in Java. My task is to process events in a SEDA manner: I know that some stages of the processing can be completed quickly, and others not, and I would like to decouple these stages of processing.
Let's say I have components A and B, and the processing engine (be this a container or whatever else) invokes component A, which in turn invokes component B. I do not care if the execution time of component B is 2 s, but I do care that the execution time of component A stays below 50 ms, for example. Therefore, it seems most reasonable for component A to submit a message to B, which B will process at the desired time.
I am aware of the different JMS implementations and Apache ActiveMQ: they are too heavyweight for this. I searched for some lightweight messaging (with really basic features like message serialization and simple routing) to no avail.
Do you have anything to recommend?
|
Do you need any kind of persistence (e.g. if your JVM dies in between processing thousands of messages) and do you need messages to traverse to any other JVMs?
If it's all in a single JVM and you don't need to worry about transactions, recovery or message loss if a JVM dies - then as Chris says above, Executors are fine.
ActiveMQ is pretty lightweight; you can use it in a single JVM only, with no persistence, if you want to; you can then enable transactions / persistence / recovery / remoting (working with multiple JVMs) as and when you need them. But if you need none of these things then it's overkill - just use Executors.
Incidentally another option if you are not sure which steps might need persistence/reliability or load balancing to multiple JVMs would be to [hide the use of middleware completely](http://activemq.apache.org/camel/hiding-middleware.html) so you can switch between in memory SEDA queues with executors to JMS/ActiveMQ as and when you need to.
e.g. it might be that some steps need to be reliable & recoverable (so needing some kind of persistence) and other times you don't.
|
*Really* lightweight? [Executors](http://java.sun.com/javase/6/docs/technotes/guides/concurrency/overview.html). :-) So you set up an executor (B, in your description), and A simply submits tasks to the executor.
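For illustration, a minimal sketch of that setup (class and method names are mine, not from the question): component A hands work off to a single-threaded `ExecutorService`, which plays the role of component B's queue, and returns immediately.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class AsyncStage {
    // One worker thread draining a queue: the in-JVM equivalent of a SEDA stage.
    private final ExecutorService stage = Executors.newSingleThreadExecutor();

    // Component A calls this and returns right away; the slow work
    // (component B) runs later on the stage's own thread.
    public Future<String> handOff(final String input) {
        return stage.submit(() -> "processed:" + input); // stand-in for B's slow work
    }

    public void shutdown() {
        stage.shutdown();
    }
}
```

The returned `Future` is only blocked on when (and if) the caller actually needs the result.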
|
Lightweight messaging (async invocations) in Java
|
[
"",
"java",
"messaging",
""
] |
How can I launch an event that has accessors like this:
```
public event EventHandler CanExecuteChanged
{
add
{
CommandManager.RequerySuggested += value;
}
remove
{
CommandManager.RequerySuggested -= value;
}
}
```
If it were a normal event I would launch it by:
```
CanExecuteChanged(sender, EventArgs..).
```
But here it doesn't work - I can only do
```
CanExecuteChanged +=..
```
to attach a method to the event - but I can't launch it.
Also some documentation on the subject would be appreciated.
Thanks.
**EDIT**
The event is from a class implementing ICommand in WPF. There's nothing more to show :).
And no - the CommandManager.RequerySuggested(this, EventArgs.Empty); doesn't work.
**EDIT2** Not sure what to say - Jon's example should have worked, yet even though the add method is called correctly, when I try to raise the event it's null :|. I'll probably drop events with accessors.
|
That event is just subscribing to and unsubscribing from another event. If you want your subscribers (and *only* your subscribers - not separate ones to the other event) to be invoked, you'll need to keep hold of your subscribers separately. For instance, you could change the code to something like:
```
private EventHandler canExecuteChanged;
public event EventHandler CanExecuteChanged
{
add
{
CommandManager.RequerySuggested += value;
canExecuteChanged += value;
}
remove
{
CommandManager.RequerySuggested -= value;
canExecuteChanged -= value;
}
}
```
|
I think you have events confused with delegates. Only the class exposing the event can raise it... Others can only subscribe-unsubscribe to it. If you are invoking the event from within the class declaring the event, it should work like a regular delegate.
The best page I could find on [Events vs Delegates](http://blog.monstuff.com/archives/000040.html). Read up..
Can you post a bigger snippet.. something seems amiss..
# Killer Update
I think I finally see your problem and how to solve it.
Short Answer: **If you write your own accessors, the compiler does not know the name of the delegate to invoke. If you don't, the compiler adds a private delegate with a known name and hence is able to invoke it.**
This code snippet shows what I mean. This [MSDN article showed me the light](http://msdn.microsoft.com/en-us/magazine/cc163533.aspx). Great question dude.. I lost 30 mins. Upvoted :)
```
public class Hash1
{
private EventHandler myHomeMadeDelegate;
public event EventHandler FancyEvent
{
add
{
//myDelegate += value;
myHomeMadeDelegate = (EventHandler)Delegate.Combine(myHomeMadeDelegate, value);
}
remove
{
//myDelegate -= value;
myHomeMadeDelegate = (EventHandler)Delegate.Remove(myHomeMadeDelegate, value);
}
}
public event EventHandler PlainEvent;
public Hash1()
{
FancyEvent += new EventHandler(On_Hash1_FancyEvent);
PlainEvent += new EventHandler(On_Hash1_PlainEvent);
// FancyEvent(this, EventArgs.Empty); //won't work:What is the backing delegate called? I don't know
myHomeMadeDelegate(this, EventArgs.Empty); // Aha!
PlainEvent(this, EventArgs.Empty);
}
void On_Hash1_PlainEvent(object sender, EventArgs e)
{
Console.WriteLine("Bang Bang!");
}
void On_Hash1_FancyEvent(object sender, EventArgs e)
{
Console.WriteLine("Bang!");
}
}
```
|
Launch an event that has accessors
|
[
"",
"c#",
"events",
"accessor",
""
] |
I'm a bit confused that the argument to crypto functions is a string. Should I simply wrap non-string arguments with str() e.g.
```
hashlib.sha256(str(user_id)+str(expiry_time))
hmac.new(str(random.getrandbits(256)))
```
(ignore for the moment that random.getrandbits() might not be cryptographically good).
edit: I realise that the hmac example is silly because I'm not storing the key anywhere!
|
Well, usually hash-functions (and cryptographic functions generally) work on bytes. The Python strings are basically byte-strings. If you want to compute the hash of some object you have to convert it to a string representation. Just make sure to apply the same operation later if you want to check if the hash is correct. And make sure that your string representation doesn't contain any changing data that you don't want to be checked.
Edit: Due to popular request, a short reminder that Python unicode strings don't contain bytes but unicode code points. Each unicode code point is stored as *multiple* bytes (2 or 4, depending on how the Python interpreter was compiled). Python byte strings only contain bytes, so Python strings (type *str*) are the type most similar to an array of bytes.
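A sketch of that idea, assuming `user_id` and `expiry_time` are integers (function and key names are illustrative): build one explicit byte representation, with a separator so that e.g. `(12, 3)` and `(1, 23)` cannot collide, and keep the HMAC key as bytes too.

```python
import hashlib
import hmac

def auth_token(user_id, expiry_time, key):
    # Build an unambiguous byte representation instead of bare concatenation:
    # str(12) + str(3) and str(1) + str(23) would both give "123".
    msg = ('%d|%d' % (user_id, expiry_time)).encode('ascii')
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

print(auth_token(42, 1700000000, b'server-side-secret'))
```

The same representation must be rebuilt later, from the same inputs, when verifying the token.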
|
You can.
However, for the HMAC, you actually want to store the key somewhere. Without the key, there is no way for you to verify the hash value later. :-)
|
arguments to cryptographic functions
|
[
"",
"python",
"cryptography",
""
] |
I am creating an application in java which will be the part of an external application. My application contains a viewport which shows some polygons and stuff like that. The external application needs to get the image of the viewport in gif format. For that it calls a method in an interface (implemented by my application) and my application returns the image. The external application needs to store the image in database (or something related to it which I dont need to worry about).
My question is: what should be the data container type of the image when my application sends it to the external application? In other words, what should be the return type of the method?
Currently my gif encoder class returns a byte array. Is there any other 'better' option?
|
A byte array could be appropriate if you expect the GIFs to be small, but you might consider using an `OutputStream` so you can stream bits more efficiently.
Even if today you just return a fully-populated `ByteArrayOutputStream`, this would allow you to change your implementation in future without affecting client code.
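For example, a sketch of that design (the interface and class names here are illustrative, not from the question): the caller supplies the sink, so it decides where the bytes go, and the exporter can later switch from dumping a buffer to streaming while encoding without the interface changing.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Hypothetical names; the question doesn't show the real interface.
interface ViewportExporter {
    // Caller supplies the destination (database blob stream, file, HTTP
    // response, ...); the exporter never needs to know which it is.
    void writeGif(OutputStream out) throws IOException;
}

class BufferedGifExporter implements ViewportExporter {
    private final byte[] encoded; // output of the existing GIF encoder

    BufferedGifExporter(byte[] encoded) {
        this.encoded = encoded;
    }

    @Override
    public void writeGif(OutputStream out) throws IOException {
        out.write(encoded); // today: dump the buffer; later: stream while encoding
    }
}
```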
|
A more intuitive return type might be java.awt.Image.
Here are some examples:
<http://www.google.com/codesearch?q=java+gif+image&hl=en&btnG=Search+Code>
|
Returning a gif image from a method
|
[
"",
"java",
"image",
"gif",
""
] |
I want to detect whether a module has changed. Now, using inotify is simple; you just need to know the directory you want to get notifications from.
How do I retrieve a module's path in python?
|
```
import a_module
print(a_module.__file__)
```
Will actually give you the path to the .pyc file that was loaded, at least on Mac OS X. So I guess you can do:
```
import os
path = os.path.abspath(a_module.__file__)
```
You can also try:
```
path = os.path.dirname(a_module.__file__)
```
To get the module's directory.
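Combining those for the inotify use case in the question, a small sketch (the helper name is mine):

```python
import os

def watch_dir_for(module):
    # __file__ can be relative and may name the compiled .pyc,
    # so normalize before taking the directory.
    return os.path.dirname(os.path.abspath(module.__file__))

import json
print(watch_dir_for(json))  # directory you would register with inotify
```

Note that built-in modules (e.g. `sys`) have no `__file__` attribute, so this only works for modules loaded from disk, which is exactly the case where watching for changes makes sense.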
|
There is the `inspect` module in Python.
## [Official documentation](http://docs.python.org/library/inspect.html)
> The inspect module provides several useful functions to help get
> information about live objects such as modules, classes, methods,
> functions, tracebacks, frame objects, and code objects. For example,
> it can help you examine the contents of a class, retrieve the source
> code of a method, extract and format the argument list for a function,
> or get all the information you need to display a detailed traceback.
Example:
```
>>> import os
>>> import inspect
>>> inspect.getfile(os)
'/usr/lib64/python2.7/os.pyc'
>>> inspect.getfile(inspect)
'/usr/lib64/python2.7/inspect.pyc'
>>> os.path.dirname(inspect.getfile(inspect))
'/usr/lib64/python2.7'
```
|
How to retrieve a module's path?
|
[
"",
"python",
"module",
"inotify",
""
] |
```
a:3:{i:0;i:4;i:1;i:3;i:2;i:2;}
```
Am I right to say that this is an array of size 3 where the key value pairs are `0->4`, `1->3`, and `2->2`?
If so, I find this representation awfully confusing. At first, I thought it was a listing of values (or that the array contained `{0, 4, 1, 3, 2, 2}`), but I figured out that `a:3:` gave the size of the array. And if `3` was the size, then both the keys and values appeared in the brackets with no way of clearly identifying a key/value pair without counting off.
To clarify where I'm coming from:
Why did the PHP developers choose to serialize in this manner? What advantage does this have over, say, the way var\_dump and/or var\_export display their data?
|
```
$string="a:3:{i:0;i:4;i:1;i:3;i:2;i:2;}";
$array=unserialize($string);
print_r($array);
```
outputs:
```
Array
(
[0] => 4
[1] => 3
[2] => 2
)
```
I think the point is that PHP does not differentiate between integer-indexed arrays and string-indexed hashtables. The serialization format can be used for hashtables in exactly the same way: `a:<<size>>:{<<keytype>>:<<key>>;<<valuetype>>:<<value>>;...}`
As the format is not intended to be human readable but rather to provide a common format to represent all PHP variable types (with the exception of resources), I think it's simpler to use the given format because the underlying variable can be reconstructed by reading the string character by character.
|
Yes that's `array(4, 3, 2)`
`a` for array, `i` for integer as key then value. You would have to count to get to a specific one, but PHP always deserialises the whole lot, so it has a count anyway.
Edit: It's not too confusing when you get used to it, but it can be somewhat long-winded compared to, e.g. JSON
> Note: var\_export() does not handle
> circular references as it would be
> close to impossible to generate
> parsable PHP code for that. If you
> want to do something with the full
> representation of an array or object,
> use serialize().
|
How do you interpret a serialized associative array in PHP and why was this serialization method chosen?
|
[
"",
"php",
"arrays",
"serialization",
""
] |
I need to store a url in a MySQL table. What's the best practice for defining a field that will hold a URL with an undetermined length?
|
> 1. [Lowest common denominator max URL length among popular web browsers: **2,083** (Internet Explorer)](https://web.archive.org/web/20060217120846/http://www.boutell.com/newfaq/misc/urllength.html)
> 2. <http://dev.mysql.com/doc/refman/5.0/en/char.html>
> *Values in VARCHAR columns are variable-length strings. The length can be specified as a value from 0 to 255 before MySQL 5.0.3, and 0 to 65,535 in 5.0.3 and later versions. The effective maximum length of a VARCHAR in MySQL 5.0.3 and later is subject to the maximum row size (65,535 bytes, which is shared among all columns) and the character set used.*
> 3. So ...
> < MySQL 5.0.3 use **TEXT**
> or
> >= MySQL 5.0.3 use **VARCHAR(2083)**
|
`VARCHAR(512)` (or similar) should be sufficient. However, since you don't really know the maximum length of the URLs in question, I might just go direct to `TEXT`. The danger with this is of course loss of efficiency due to `CLOB`s being far slower than a simple string datatype like `VARCHAR`.
|
Best database field type for a URL
|
[
"",
"sql",
"mysql",
"database",
""
] |
When copying large files using `shutil.copy()`, you get no indication of how the operation is progressing.
I have put together something that works - it uses a simple ProgressBar class (which just returns a simple ASCII progress bar, as a string), and a loop of `open().read()` and `.write()` to do the actual copying. It displays the progress bar using `sys.stdout.write("\r%s\r" % (the_progress_bar))`, which is a little hackish, but it works.
You can see the code (in context) [on github here](http://github.com/dbr/checktveps/tree/1be8f4445fbf766eba25f98f78ec52e955571608/autoPathTv.py#L64-153)
Is there any built-in module that will do this better? Are there any improvements that can be made to this code?
|
Two things:
* I would make the default block size a *lot* larger than 512. I would start with 16384 and perhaps more.
* For modularity, it might be better to have the `copy_with_prog` function not output the progress bar itself, but call a callback function so the caller can decide how to display the progress.
Perhaps something like this:
```
def copy_with_prog(src, dest, callback = None):
while True:
# copy loop stuff
if callback:
callback(pos, total)
prog = ProgressBar(...)
copy_with_prog(src, dest, lambda pos, total: prog.update(pos, total))
```
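Filling in that skeleton, a runnable sketch combining the larger block size and the callback design (names and defaults are mine):

```python
import os

def copy_with_prog(src, dest, block_size=16384, callback=None):
    """Copy src to dest, reporting (bytes_copied, total_bytes) after each block."""
    total = os.path.getsize(src)
    copied = 0
    with open(src, 'rb') as fsrc, open(dest, 'wb') as fdst:
        while True:
            buf = fsrc.read(block_size)
            if not buf:
                break  # EOF
            fdst.write(buf)
            copied += len(buf)
            if callback:
                callback(copied, total)
    return copied
```

The caller can then plug in any display it likes, e.g. `copy_with_prog(src, dest, callback=lambda pos, total: prog.update(pos, total))`.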
|
Overkill? Perhaps. But on almost any system (Linux, Mac, and, with a quick wxWidgets install, Windows) you can have the real deal, with pause and cancel buttons in a GUI setup. Macs ship with wxWidgets these days, and it's a common package on Linux.
A single file is very quick (it will immediately finish and look broken) so you might consider creating a fileSet job that ticks along once per file instead of once per block. Enjoy!
-Jim Carroll
```
"""
Threaded Jobs.
Any class that does a long running process can inherit
from ThreadedJob. This enables running as a background
thread, progress notification, pause and cancel. The
time remaining is also calculated by the ThreadedJob class.
"""
import wx.lib.newevent
import thread
import exceptions
import time
(RunEvent, EVT_RUN) = wx.lib.newevent.NewEvent()
(CancelEvent, EVT_CANCEL) = wx.lib.newevent.NewEvent()
(DoneEvent, EVT_DONE) = wx.lib.newevent.NewEvent()
(ProgressStartEvent, EVT_PROGRESS_START) = wx.lib.newevent.NewEvent()
(ProgressEvent, EVT_PROGRESS) = wx.lib.newevent.NewEvent()
class InterruptedException(exceptions.Exception):
def __init__(self, args = None):
self.args = args
#
#
class ThreadedJob:
def __init__(self):
# tell them ten seconds at first
self.secondsRemaining = 10.0
self.lastTick = 0
# not running yet
self.isPaused = False
self.isRunning = False
self.keepGoing = True
def Start(self):
self.keepGoing = self.isRunning = True
thread.start_new_thread(self.Run, ())
self.isPaused = False
#
def Stop(self):
self.keepGoing = False
#
def WaitUntilStopped(self):
while self.isRunning:
time.sleep(0.1)
wx.SafeYield()
#
#
def IsRunning(self):
return self.isRunning
#
def Run(self):
# this is overridden by the
# concrete ThreadedJob
print "Run was not overloaded"
self.JobFinished()
pass
#
def Pause(self):
self.isPaused = True
pass
#
def Continue(self):
self.isPaused = False
pass
#
def PossibleStoppingPoint(self):
if not self.keepGoing:
raise InterruptedException("process interrupted.")
wx.SafeYield()
# allow cancel while paused
while self.isPaused:
if not self.keepGoing:
raise InterruptedException("process interrupted.")
# don't hog the CPU
time.sleep(0.1)
#
#
def SetProgressMessageWindow(self, win):
self.win = win
#
def JobBeginning(self, totalTicks):
self.lastIterationTime = time.time()
self.totalTicks = totalTicks
if hasattr(self, "win") and self.win:
wx.PostEvent(self.win, ProgressStartEvent(total=totalTicks))
#
#
def JobProgress(self, currentTick):
dt = time.time() - self.lastIterationTime
self.lastIterationTime = time.time()
dtick = currentTick - self.lastTick
self.lastTick = currentTick
alpha = 0.92
if currentTick > 1:
self.secondsPerTick = dt * (1.0 - alpha) + (self.secondsPerTick * alpha)
else:
self.secondsPerTick = dt
#
if dtick > 0:
self.secondsPerTick /= dtick
self.secondsRemaining = self.secondsPerTick * (self.totalTicks - 1 - currentTick) + 1
if hasattr(self, "win") and self.win:
wx.PostEvent(self.win, ProgressEvent(count=currentTick))
#
#
def SecondsRemaining(self):
return self.secondsRemaining
#
def TimeRemaining(self):
if 1: #self.secondsRemaining > 3:
minutes = self.secondsRemaining // 60
seconds = int(self.secondsRemaining % 60.0)
return "%i:%02i" % (minutes, seconds)
else:
return "a few"
#
def JobFinished(self):
if hasattr(self, "win") and self.win:
wx.PostEvent(self.win, DoneEvent())
#
# flag we're done before we post the all done message
self.isRunning = False
#
#
class EggTimerJob(ThreadedJob):
""" A sample Job that demonstrates the mechanisms and features of the Threaded Job"""
def __init__(self, duration):
self.duration = duration
ThreadedJob.__init__(self)
#
def Run(self):
""" This can either be run directly for synchronous use of the job,
or started as a thread when ThreadedJob.Start() is called.
It is responsible for calling JobBeginning, JobProgress, and JobFinished.
And as often as possible, calling PossibleStoppingPoint() which will
sleep if the user pauses, and raise an exception if the user cancels.
"""
self.time0 = time.clock()
self.JobBeginning(self.duration)
try:
for count in range(0, self.duration):
time.sleep(1.0)
self.JobProgress(count)
self.PossibleStoppingPoint()
#
except InterruptedException:
# clean up if user stops the Job early
print "canceled prematurely!"
#
# always signal the end of the job
self.JobFinished()
#
#
def __str__(self):
""" The job progress dialog expects the job to describe its current state."""
response = []
if self.isPaused:
response.append("Paused Counting")
elif not self.isRunning:
response.append("Will Count the seconds")
else:
response.append("Counting")
#
return " ".join(response)
#
#
class FileCopyJob(ThreadedJob):
""" A common file copy Job. """
def __init__(self, orig_filename, copy_filename, block_size=32*1024):
self.src = orig_filename
self.dest = copy_filename
self.block_size = block_size
ThreadedJob.__init__(self)
#
def Run(self):
""" This can either be run directly for synchronous use of the job,
or started as a thread when ThreadedJob.Start() is called.
It is responsible for calling JobBeginning, JobProgress, and JobFinished.
And as often as possible, calling PossibleStoppingPoint() which will
sleep if the user pauses, and raise an exception if the user cancels.
"""
self.time0 = time.clock()
try:
source = open(self.src, 'rb')
# how many blocks?
import os
(st_mode, st_ino, st_dev, st_nlink, st_uid, st_gid, st_size, st_atime, st_mtime, st_ctime) = os.stat(self.src)
num_blocks = st_size / self.block_size
current_block = 0
self.JobBeginning(num_blocks)
dest = open(self.dest, 'wb')
while 1:
copy_buffer = source.read(self.block_size)
if copy_buffer:
dest.write(copy_buffer)
current_block += 1
self.JobProgress(current_block)
self.PossibleStoppingPoint()
else:
break
source.close()
dest.close()
except InterruptedException:
# clean up if user stops the Job early
dest.close()
# unlink / delete the file that is partially copied
os.unlink(self.dest)
print "canceled, dest deleted!"
#
# always signal the end of the job
self.JobFinished()
#
#
def __str__(self):
""" The job progress dialog expects the job to describe its current state."""
response = []
if self.isPaused:
response.append("Paused Copy")
elif not self.isRunning:
response.append("Will Copy a file")
else:
response.append("Copying")
#
return " ".join(response)
#
#
class JobProgress(wx.Dialog):
""" This dialog shows the progress of any ThreadedJob.
It can be shown Modally if the main application needs to suspend
operation, or it can be shown Modelessly for background progress
reporting.
app = wx.PySimpleApp()
job = EggTimerJob(duration = 10)
dlg = JobProgress(None, job)
job.SetProgressMessageWindow(dlg)
job.Start()
dlg.ShowModal()
"""
def __init__(self, parent, job):
self.job = job
wx.Dialog.__init__(self, parent, -1, "Progress", size=(350,200))
# vertical box sizer
sizeAll = wx.BoxSizer(wx.VERTICAL)
# Job status text
self.JobStatusText = wx.StaticText(self, -1, "Starting...")
sizeAll.Add(self.JobStatusText, 0, wx.EXPAND|wx.ALL, 8)
# wxGague
self.ProgressBar = wx.Gauge(self, -1, 10, wx.DefaultPosition, (250, 15))
sizeAll.Add(self.ProgressBar, 0, wx.EXPAND|wx.ALL, 8)
# horiz box sizer, and spacer to right-justify
sizeRemaining = wx.BoxSizer(wx.HORIZONTAL)
sizeRemaining.Add((2,2), 1, wx.EXPAND)
# time remaining read-only edit
# putting wide default text gets a reasonable initial layout.
self.remainingText = wx.StaticText(self, -1, "???:??")
sizeRemaining.Add(self.remainingText, 0, wx.LEFT|wx.RIGHT|wx.ALIGN_CENTER_VERTICAL, 8)
# static text: remaining
self.remainingLabel = wx.StaticText(self, -1, "remaining")
sizeRemaining.Add(self.remainingLabel, 0, wx.ALL|wx.ALIGN_CENTER_VERTICAL, 8)
# add that row to the mix
sizeAll.Add(sizeRemaining, 1, wx.EXPAND)
# horiz box sizer & spacer
sizeButtons = wx.BoxSizer(wx.HORIZONTAL)
sizeButtons.Add((2,2), 1, wx.EXPAND|wx.ADJUST_MINSIZE)
# Pause Button
self.PauseButton = wx.Button(self, -1, "Pause")
sizeButtons.Add(self.PauseButton, 0, wx.ALL, 4)
self.Bind(wx.EVT_BUTTON, self.OnPauseButton, self.PauseButton)
# Cancel button
self.CancelButton = wx.Button(self, wx.ID_CANCEL, "Cancel")
sizeButtons.Add(self.CancelButton, 0, wx.ALL, 4)
self.Bind(wx.EVT_BUTTON, self.OnCancel, self.CancelButton)
# Add all the buttons on the bottom row to the dialog
sizeAll.Add(sizeButtons, 0, wx.EXPAND|wx.ALL, 4)
self.SetSizer(sizeAll)
#sizeAll.Fit(self)
sizeAll.SetSizeHints(self)
# jobs tell us how they are doing
self.Bind(EVT_PROGRESS_START, self.OnProgressStart)
self.Bind(EVT_PROGRESS, self.OnProgress)
self.Bind(EVT_DONE, self.OnDone)
self.Layout()
#
def OnPauseButton(self, event):
if self.job.isPaused:
self.job.Continue()
self.PauseButton.SetLabel("Pause")
self.Layout()
else:
self.job.Pause()
self.PauseButton.SetLabel("Resume")
self.Layout()
#
#
def OnCancel(self, event):
self.job.Stop()
#
def OnProgressStart(self, event):
self.ProgressBar.SetRange(event.total)
self.statusUpdateTime = time.clock()
#
def OnProgress(self, event):
# update the progress bar
self.ProgressBar.SetValue(event.count)
self.remainingText.SetLabel(self.job.TimeRemaining())
# update the text a max of 20 times a second
if time.clock() - self.statusUpdateTime > 0.05:
self.JobStatusText.SetLabel(str(self.job))
self.statusUpdateTime = time.clock()
self.Layout()
#
#
# when a job is done
def OnDone(self, event):
self.ProgressBar.SetValue(0)
self.JobStatusText.SetLabel("Finished")
self.Destroy()
#
#
if __name__ == "__main__":
app = wx.PySimpleApp()
#job = EggTimerJob(duration = 10)
job = FileCopyJob("VeryBigFile.mp4", "/tmp/test_junk.mp4", 1024*1024*10)
dlg = JobProgress(None, job)
job.SetProgressMessageWindow(dlg)
job.Start()
dlg.ShowModal()
#
```
|
How to copy a file in Python with a progress bar?
|
[
"",
"python",
"file-io",
""
] |
I've got a script that dynamically calls and displays images from a directory, what would be the best way to paginate this? I'd like to be able to control the number of images that are displayed per page through a variable within the script. I'm thinking of using URL varriables (ie - <http://domain.com/page.php?page=1>) but am unsure how to go about this.
Thanks for the help.
|
Pagination is the same concept with or without SQL. You just need your basic variables, then you can create the content you want. Here's some quasi-code:
```
$itemsPerPage = 5;
$currentPage = isset($_GET['page']) ? $_GET['page'] : 1;
$totalItems = getTotalItems();
$totalPages = ceil($totalItems / $itemsPerPage);
function getTotalItems() {
// since they're images, perhaps we'll scan a directory of images to determine
// how many images we have in total
}
function getItemsFromPage($page, $itemsPerPage) {
// function to grab $itemsPerPage based on which $page we're on
}
function getPager($totalPages, $currentPage) {
// build your pager
}
```
hope that helps you get started!
|
This is a function I often use to do pagination. Hope it helps.
```
function paginate($page, $total, $per_page) {
if(!is_numeric($page)) { $page = 1; }
if(!is_numeric($per_page)) { $per_page = 10; }
if($page > ceil($total / $per_page)) $page = 1;
if($page == "" || $page == 0) {
$page = 1;
$start = 0;
$end = $per_page;
} else {
$start = ($page * $per_page) - ($per_page);
$end = $per_page;
}
$prev_page = "";
$next_page = "";
$all_pages = array();
$selected = "";
$enabled = false;
if($total > $per_page) {
$enabled = true;
$prev = $page - 1;
$prev_page = ($prev == 0) ? 0 : $prev;
$next = $page + 1;
$total_pages = ceil($total/$per_page);
$next_page = ($next <= $total_pages) ? $next : 0;
for($x=1;$x<=$total_pages;$x++) {
$all_pages[] = $x;
$selected = ($x == $page) ? $x : $selected;
}
}
return array(
"per_page" => $per_page,
"page" => $page,
"prev_page" => $prev_page,
"all_pages" => $all_pages,
"next_page" => $next_page,
"selected" => $selected,
"start" => $start,
"end" => $end,
"enabled" => $enabled
);
}
// ex: we are in page 2, we have 50 items, and we're showing 10 per page
print_r(paginate(2, 50, 10));
```
This will return:
```
Array
(
[per_page] => 10
[page] => 2
[prev_page] => 1
[all_pages] => Array
(
[0] => 1
[1] => 2
[2] => 3
[3] => 4
[4] => 5
)
[next_page] => 3
[selected] => 2
[start] => 10
[end] => 10
[enabled] => 1
)
```
With all that data you are then pretty well armed to make the pagination links.
|
PHP Dynamic Pagination Without SQL
|
[
"",
"php",
"pagination",
""
] |
I'm *extremely new* to Java, and have mostly just been teaching myself as I go, so I've started building an applet. I'd like to make one that can select a file from the local disk and upload it as a multipart/form-data POST request but **with a progress bar**. Obviously the user has to grant permission to the Java applet to access the hard drive. Now I've already got the first part working: the user can select a file using a `JFileChooser` object, which conveniently returns a `File` object. But I'm wondering what comes next. I know that `File.length()` will give me the total size in bytes of the file, but how do I send the selected `File` to the web, and how do I monitor how many bytes have been sent? Thanks in advance.
|
I ended up stumbling across an open source Java uploader applet and found everything I needed to know within its code. Here are links to a blog post describing it as well as the source:
[Article](http://panyasan.wordpress.com/2008/02/29/open-source-drag-drop-upload-java-applet-for-websites/)
[Source Code](https://qooxdoo-contrib.svn.sourceforge.net/svnroot/qooxdoo-contrib/trunk/qooxdoo-contrib/UploadJavaApplet/trunk/uploadApplet/java/src/dndapplet/applet/DNDApplet.java)
|
To check progress using HttpClient, wrap the MultipartRequestEntity in one that counts the bytes being sent. The wrapper is below:
```
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import org.apache.commons.httpclient.methods.RequestEntity;
public class CountingMultipartRequestEntity implements RequestEntity {
private final RequestEntity delegate;
private final ProgressListener listener;
public CountingMultipartRequestEntity(final RequestEntity entity,
final ProgressListener listener) {
super();
this.delegate = entity;
this.listener = listener;
}
public long getContentLength() {
return this.delegate.getContentLength();
}
public String getContentType() {
return this.delegate.getContentType();
}
public boolean isRepeatable() {
return this.delegate.isRepeatable();
}
public void writeRequest(final OutputStream out) throws IOException {
this.delegate.writeRequest(new CountingOutputStream(out, this.listener));
}
public static interface ProgressListener {
void transferred(long num);
}
public static class CountingOutputStream extends FilterOutputStream {
private final ProgressListener listener;
private long transferred;
public CountingOutputStream(final OutputStream out,
final ProgressListener listener) {
super(out);
this.listener = listener;
this.transferred = 0;
}
public void write(byte[] b, int off, int len) throws IOException {
out.write(b, off, len);
this.transferred += len;
this.listener.transferred(this.transferred);
}
public void write(int b) throws IOException {
out.write(b);
this.transferred++;
this.listener.transferred(this.transferred);
}
}
}
```
Then implement a ProgressListener which updates a progress bar.
Remember that the upload itself must not run on the Event Dispatch Thread; marshal the progress updates back onto the EDT (e.g. via SwingUtilities.invokeLater) before touching the progress bar.
|
File Upload with Java (with progress bar)
|
[
"",
"java",
"file-io",
"upload",
"progress-bar",
""
] |
Is there an easy way to modify this code so that the target URL opens in the SAME window?
```
<a href="javascript:q=(document.location.href);void(open('http://example.com/submit.php?url='+escape(q),'','resizable,location,menubar,toolbar,scrollbars,status'));">click here</a>
```
|
The second parameter of *window.open()* is a string representing the name of the target window.
Set it to: "\_self".
```
<a href="javascript:q=(document.location.href);void(open('http://example.com/submit.php?url='+escape(q),'_self','resizable,location,menubar,toolbar,scrollbars,status'));">click here</a>
```
**Sidenote:**
The following question gives an overview of an arguably better way to bind event handlers to HTML links.
[**What's the best way to replace links with js functions?**](https://stackoverflow.com/questions/266327/whats-the-best-way-to-replace-links-with-js-functions#266443)
|
```
<script type="text/javascript">
window.open('YourNewPage.htm', '_self');
</script>
```
see reference:
<http://www.w3schools.com/jsref/met_win_open.asp>
|
Javascript: open new page in same window
|
[
"",
"javascript",
"window",
""
] |
One thing I've run into a few times is a service class (like a JBoss service) that has gotten overly large due to helper inner classes. I've yet to find a good way to break the class out. These helpers are usually threads. Here's an example:
```
/** Asset service keeps track of the metadata about assets that live on other
* systems. Complications include the fact the assets have a lifecycle and their
* physical representation lives on other systems that have to be polled to find
* out if the Asset is still there. */
public class AssetService
{
//...various private variables
//...various methods
public AssetService()
{
Job pollerJob = jobService.schedule( new AssetPoller() );
      Job lifeCycleJob = jobService.schedule( new AssetLifecycleMonitor() );
}
   class AssetPoller implements Runnable
{
public void run()
{
// contact remote systems and update this service's private variables that
// track the assets.
}
}
   class AssetLifecycleMonitor implements Runnable
{
public void run()
{
// look for assets that have meet criteria for a lifecycle shift
// and update this service's private variables as relevant.
}
}
}
```
So, what can happen if I have a couple of helpers and they're at all complex is that the overall class file can get really large. I like the inner classes in that they make clear the classes are wholly owned by the service and exist only to help that service. I've tried breaking the classes out and passing the parent service as a reference, which mostly works, but things I don't like are:

* I end up exposing package-level accessors so the broken-out classes can get to the variables, whereas before I didn't expose the setters at all since the inner classes had direct access.
* Things get a bit more wordy as I'm constantly calling accessors rather than the underlying variables. A minor nit, granted.
* Convenience methods (e.g. checkAssetIsValid() or some such) need package-level exposure now so the helper classes can call them, whereas before, as inner classes, they could be private.
* Even worse, I need to pass the service implementation class to the helper classes' constructors, since I don't want to expose these helper methods in the interface the service implements because that forces them to be public. This can create some unit test/mocking issues.
* Worse yet, any synchronization I wanted to do gets leaked out through some external convenience method (e.g. lockDownAssets() during a poller update). Before, the internal classes had access to private Locks.
So, in short, breaking the classes out loses some of the encapsulation I like. But leaving them in can lead to some large Java files. I've yet to find a good way to deal with this. C++ has the concept of "friends", which I've rarely missed, but would actually help in this case.
Thoughts?
|
On the bytecode level, inner classes are just plain Java classes. Since the Java bytecode verifier does not allow access to private members, the compiler generates synthetic accessor methods for each private field you use. Also, in order to link the inner class with its enclosing instance, the compiler adds a synthetic pointer to the outer 'this'.
Considering this, the inner classes are just a layer of syntax sugar. They are convenient and you have listed some good points, so I'd list some negative aspects which you might want to consider:
* Your inner class has a hidden dependency to the *whole* parent class, which obfuscates its inbound interface. If you extract it as package-private class you have a chance to improve your design and make it more maintainable. Initially it's more verbose, but often you'd find that:
+ Instead of exposing 10 accessors you actually want to share a single value object. Often you would find that you don't really need a reference to the whole outer class. This also works well with IoC.
+ Instead of providing methods for explicit locking, it's more maintainable to encapsulate the operation with its context in a separate class (or move it to one of the two classes - outer or formerly-inner).
+ Convenience methods belong in package private utility classes. You can use the Java5 static import to make them appear as local.
* Your outer class can bypass any protection levels and access private members of your inner class directly. This is not bad thing per se, but it takes away one of the language means of expressing your design.
* Since your inner class is embedded in exactly one outer class, the only way to reuse it is to subclass the outer class. An alternative would be to pass explicit reference to a package-private interface that the outer class implements. This would allow you to mock the outer and better test the inner class.
* Though recent debuggers are quite good, I have experienced problems with debugging inner classes before (conditional breakpoint scope confusion, not stopping at breakpoints, etc.)
* Private classes bloat your bytecode. See my first paragraph - often there is an API that you could use and reduce the number of synthetic cruft.
P.S. I'm talking about non-trivial inner classes (especially ones that do not implement any interfaces). Three line listener implementations are good.
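The package-private-interface idea from the list above might be sketched like this (all names — `AssetStore`, `recordAsset`, the hardcoded asset id — are invented for illustration, loosely mirroring the question's example):

```java
import java.util.HashSet;
import java.util.Set;

// Package-private interface: the helper sees only what it needs,
// not the whole service.
interface AssetStore {
    void recordAsset(String id);
}

// Former inner class, now top-level, depending on the narrow interface.
class AssetPoller implements Runnable {
    private final AssetStore store;

    AssetPoller(AssetStore store) {
        this.store = store;
    }

    public void run() {
        // Would poll remote systems here; then update via the interface.
        store.recordAsset("asset-42");
    }
}

// The service implements the interface and hands itself to its helpers.
// A test can instead pass a mock AssetStore to AssetPoller directly.
class AssetService implements AssetStore {
    private final Set<String> assets = new HashSet<>();

    public synchronized void recordAsset(String id) {
        assets.add(id);
    }

    public synchronized boolean hasAsset(String id) {
        return assets.contains(id);
    }

    AssetPoller newPoller() {
        return new AssetPoller(this);
    }
}
```

Because `AssetPoller` only depends on `AssetStore`, the service's other members stay private and the helper becomes testable in isolation.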
|
The line between encapsulation and separation can be difficult to walk. However, I think the main issue here is that you need some kind of solid interaction model to use as a basis of separating your classes.
I think it's reasonable to have external helper utility classes which are used in many places; as long as they don't have side effects, I don't see an issue. It's also reasonable to have static helper classes, as long as they are well organized, which contain often-used methods such as checkAssetIsValid(). This assumes that checkAssetIsValid does not need to access any external state other than the object you are passing it.
The most important thing for separation is not to have objects which share permanent references in many of these classes. I like to look to functional programming for guidance. Each class should not be reaching into the guts of other classes and changing state. Instead each class which does work should produce and consume container objects.
Visualization can be very helpful too. I noticed a thread on the topic of Java Visualization tools [here](http://www.velocityreviews.com/forums/t138598-uml-class-diagram-generator.html). Ideally, your class interaction diagram should look more like a tree than a graph.
Also, I just want to note that refactoring a large class into smaller classes can be extremely difficult. It's best to build a suite of unit tests for the public interface at the very least so that it will become immediately obvious if you break something. I know tests have saved me countless hours in the past.
Hopefully some of this will be helpful. I'm kind of just rambling on here.
|
Large Inner classes and private variables
|
[
"",
"java",
"jboss",
"class-design",
""
] |
OK, I'm working on a little project at the moment. The Report expects an int, but the ReportParameter class only lets me have a value that's a string or a string[].
How can I pass an int?
thanks
dan
|
Still no answer to this one, ended up casting in the underlying stored procs.
|
You can call the method `GetReportParameters()` which will return a `ReportParameter[]` array. If you iterate through each parameter and look at its Type property it will indicate if it is an `int`. The Type property is an `enum` of type `ParameterTypeEnum` and would be `ParameterTypeEnum.Integer` for an `int`.
|
c# Reporting Services -- ReportParameter value that isn't a string
|
[
"",
"c#",
"asp.net",
"reporting-services",
"reportparameter",
""
] |
I would like to access a class everywhere in my application. How can I do this?
To make it more clear: I have a class somewhere that uses some code, and another class that uses the same code. I do not want to duplicate it, so I would like to call the same code in both places. In PHP I would just include("abc.php") in both... I do not want to create the object every time I want to use the code.
|
The concept of global classes in C# is really just a simple matter of referencing the appropriate assembly containing the class. Once you have referenced the needed assembly, you can refer to the class of choice either by its fully qualified type name or by importing the namespace that contains the class (**concrete instance or static access** to that class).
Or
You can have a **Singleton class** to use it everywhere but some people won't recommend you this way to proceed.
|
Do you want to access the class or access an instance of the class from everywhere?
You can either make it a static class - `public static class MyClass { }` - or you can use the [Singleton Pattern](http://en.wikipedia.org/wiki/Singleton_pattern).
For the singleton pattern in its simplest form you can simply add a static property to the class (or some other class) that returns the same instance of the class like this:
```
public class MyClass
{
private static MyClass myClass;
    public static MyClass Instance
{
get { return myClass ?? (myClass = new MyClass()); }
}
private MyClass()
{
//private constructor makes it where this class can only be created by itself
}
}
```
|
How can I make a class global to the entire application?
|
[
"",
"c#",
".net",
""
] |
I have a number of generated .sql files that I want to run in succession. I'd like to run them from a SQL statement in a query (i.e. Query Analyzer/Server Management Studio).
Is it possible to do something like this and if so what is the syntax for doing this?
I'm hoping for something like:
```
exec 'c:\temp\file01.sql'
exec 'c:\temp\file02.sql'
```
I am using SQL Server 2005 and running queries in management studio.
|
use [xp\_cmdshell](http://msdn.microsoft.com/en-us/library/aa260689(SQL.80).aspx) and [sqlcmd](http://msdn.microsoft.com/en-us/library/ms166559.aspx)
```
DECLARE @Cmd varchar(1000);
SET @Cmd = 'sqlcmd -S ' + @DBServerName + ' -d ' + @DBName + ' -i ' + @FilePathName;
EXEC xp_cmdshell @Cmd;
```
|
Very helpful thanks, see also this link:
[Execute SQL Server scripts](https://stackoverflow.com/questions/3523365/execute-sql-server-scripts?rq=1)
for a similar example.
To turn `xp_cmdshell` on and off see below:
On
```
SET NOCOUNT ON
EXEC master.dbo.sp_configure 'show advanced options', 1
RECONFIGURE
EXEC master.dbo.sp_configure 'xp_cmdshell', 1
RECONFIGURE
```
Off
```
EXEC master.dbo.sp_configure 'xp_cmdshell', 0
RECONFIGURE
EXEC master.dbo.sp_configure 'show advanced options', 0
RECONFIGURE
SET NOCOUNT OFF
```
|
Is it possible to execute a text file from SQL query?
|
[
"",
"sql",
"sql-server",
"sql-server-2005",
""
] |
The deceptively simple foundation of dynamic code generation within a C/C++ framework has already been covered in [another question](https://stackoverflow.com/questions/45408/). Are there any gentle introductions to the topic, with code examples?
My eyes are starting to bleed staring at highly intricate open source JIT compilers when my needs are much more modest.
Are there good texts on the subject that don't assume a doctorate in computer science? I'm looking for well worn patterns, things to watch out for, performance considerations, etc. Electronic or tree-based resources can be equally valuable. You can assume a working knowledge of (not just x86) assembly language.
|
Well a pattern I've used in emulators goes something like this:
```
typedef void (*code_ptr)();
unsigned long instruction_pointer = entry_point;
std::map<unsigned long, code_ptr> code_map;
void execute_block() {
code_ptr f;
    std::map<unsigned long, code_ptr>::iterator it = code_map.find(instruction_pointer);
    if(it != code_map.end()) {
        f = it->second;
} else {
f = generate_code_block();
code_map[instruction_pointer] = f;
}
f();
instruction_pointer = update_instruction_pointer();
}
void execute() {
while(true) {
execute_block();
}
}
```
This is a simplification, but the idea is there. Basically, every time the engine is asked to execute a "basic block" (usually everything up to the next flow-control op, or a whole function where possible), it will look it up to see if it has already been created. If so, execute it; otherwise create it, add it, and then execute it.
rinse repeat :)
As for the code generation, that gets a little complicated, but the idea is to emit a proper "function" which does the work of your basic block in the context of your VM.
EDIT: note that I haven't demonstrated any optimizations either, but you asked for a "gentle introduction"
EDIT 2: I forgot to mention one of the most immediately productive speed-ups you can implement with this pattern. Basically, if you *never* remove a block from your tree (you can work around it if you do, but it is way simpler if you never do), then you can "chain" blocks together to avoid lookups. Here's the concept. Whenever you return from f() and are about to do the "update\_instruction\_pointer", if the block you just executed ended in either a call, an unconditional jump, or didn't end in flow control at all, then you can "fixup" its ret instruction with a direct jmp to the next block it'll execute (because it'll always be the same one) *if* you have already emitted it. This makes it so you are executing more and more in the generated code and less and less in the "execute\_block" function.
|
I'm not aware of any sources specifically related to JITs, but I imagine that it's pretty much like a normal compiler, only simpler if you aren't worried about performance.
The easiest way is to start with a VM interpreter. Then, for each VM instruction, generate the assembly code that the interpreter would have executed.
To go beyond that, I imagine that you would parse the VM byte codes and convert them into some sort of suitable intermediate form (three address code? SSA?) and then optimize and generate code as in any other compiler.
For a stack-based VM, it may help to keep track of the "current" stack depth as you translate the byte codes into intermediate form, and treat each stack location as a variable. For example, if you think that the current stack depth is 4, and you see a "push" instruction, you might generate an assignment to "stack\_variable\_5" and increment a compile-time stack counter, or something like that. An "add" when the stack depth is 5 might generate the code "stack\_variable\_4 = stack\_variable\_4+stack\_variable\_5" and decrement the compile-time stack counter.
It is also possible to translate stack based code into syntax trees. Maintain a compile-time stack. Every "push" instruction causes a representation of the thing being pushed to be stored on the stack. Operators create syntax tree nodes that include their operands. For example, "X Y +" might cause the stack to contain "var(X)", then "var(X) var(Y)" and then the plus pops both var references off and pushes "plus(var(X), var(Y))".
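The "X Y +" example above can be sketched in a few lines (Java used purely for illustration; syntax-tree nodes are represented as plain strings to keep the idea visible):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of translating postfix (stack-based) code into a syntax tree:
// operands push a representation, operators pop their operands and push
// a combined node.
public class PostfixToTree {
    public static String translate(String postfix) {
        Deque<String> stack = new ArrayDeque<>();
        for (String tok : postfix.trim().split("\\s+")) {
            if (tok.equals("+") || tok.equals("*")) {
                String rhs = stack.pop();   // top of stack is the right operand
                String lhs = stack.pop();
                String name = tok.equals("+") ? "plus" : "times";
                stack.push(name + "(" + lhs + ", " + rhs + ")");
            } else {
                stack.push("var(" + tok + ")");
            }
        }
        return stack.pop(); // the remaining entry is the whole tree
    }
}
```

For instance, `translate("X Y +")` yields `plus(var(X), var(Y))`, exactly the shape described in the paragraph above.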
|
Gentle introduction to JIT and dynamic compilation / code generation
|
[
"",
"c++",
"assembly",
"code-generation",
""
] |
In JSP I can reference a bean's property by using the tag
${object.property}
Is there some way to deal with properties that might not exist? I have a JSP page that needs to deal with different types. Example:
```
public class Person {
public String getName()
}
public class Employee extends Person {
public float getSalary()
}
```
In JSP I want to display a table of people with columns of name and salary. If the person is not an employee then salary should be blank. The row HTML might look like:
```
<tr>
  <td><c:out value="${person.name}"/></td>
  <td><c:out value="${person.salary}"/></td>
</tr>
```
Unfortunately if person is not an employee then it can't find salary and an error occurs. How would I solve this in JSP?
Edit: Is there an **instanceof** check in JSP tag language?
|
You could use the EL `empty` operator if it were a scoped attribute; unfortunately, here you'll have to surround your expression using `employee.salary` with `<c:catch>`:
```
<c:catch var="err">
<c:out value="${employee.salary}"/>
</c:catch>
```
If you really need *instanceof*, you might consider a custom tag.
|
If you want the class, just use `${person.class}`. You can also use `${person.class.name eq 'my.package.PersonClass'}`
You can also use the "default" on c:out.
```
<c:out value='${person.salary}' default="Null Value" />
```
|
JSP bean tag for property that might not exist
|
[
"",
"java",
"jsp",
"jsp-tags",
"javabeans",
""
] |
I write a singleton c++ in the follow way:
```
class A {
private:
static A* m_pA;
A();
virtual ~A();
public:
static A* GetInstance();
static void FreeInstance();
void WORK1();
void WORK2();
void WORK3();
};
A* A::GetInstance() {
if (m_pA == NULL)
m_pA = new A();
return m_pA;
}
A::~A() {
    FreeInstance(); // Can I write this? Are there any potential errors?
}
void A::FreeInstance() {
delete m_pA;
m_pA = NULL;
}
```
Thanks! Evan Teran's and sep61.myopenid.com's answers are right, and really good!
My way is wrong; I hope anyone writing such code can avoid my silly mistake.
My singleton A in my project has a vector of smart pointers, and another thread can also edit this vector, so when the application is closing it always becomes unstable, even though I added lots of CMutex locks. The multithreading error plus the singleton error wasted me a day.
//-----------------------------------------------------------
A new singleton, you are welcome to edit if you think there is any problem in the following sample:
```
class A {
private:
static A* m_pA;
explicit A();
    A(const A& a);                  // copy constructor, not implemented
const A& operator=(const A& a);
virtual ~A();
public:
static A* GetInstance();
static void FreeInstance();
void WORK1();
void WORK2();
void WORK3();
};
A* A::GetInstance() {
if (m_pA == NULL){
static A self;
m_pA = &self;
}
return m_pA;
}
A::~A() {
}
```
|
You can avoid needing to delete it by using a static object like this:
```
if(m_pA == 0) {
static A static_instance;
m_pA = &static_instance;
}
```
|
Why does everybody want to return a singleton as a pointer?
Returning it as a reference seems much more logical!
You should never be able to free a singleton manually. How do you know who is keeping a reference to the singleton? If you don't know (or can't guarantee) nobody has a reference (in your case via a pointer) then you have no business freeing the object.
Use the static in a function method.
This guarantees that it is created and destroyed only once. It also gives you lazy initialization for free.
```
class S
{
public:
static S& getInstance()
{
static S instance;
return instance;
}
private:
S() {}
S(S const&); // Don't Implement.
void operator=(S const&); // Don't implement
};
```
Note you also need to make the constructor private.
Also make sure that you override the default copy constructor and assignment operator so that you can not make a copy of the singleton (otherwise it would not be a singleton).
Also read:
* <https://stackoverflow.com/a/1008289/14065>
* [Singleton: How should it be used](https://stackoverflow.com/questions/86582/singleton-how-should-it-be-used)
* [C++ Singleton design pattern](https://stackoverflow.com/questions/1008019/c-singleton-design-pattern/1008289#1008289)
To make sure you are using a singleton for the correct reasons.
Though technically not thread safe in the general case see:
[What is the lifetime of a static variable in a C++ function?](https://stackoverflow.com/questions/246564/what-is-the-lifetime-of-a-static-variable-in-a-c-function)
GCC has an explicit patch to compensate for this:
<http://gcc.gnu.org/ml/gcc-patches/2004-09/msg00265.html>
|
Can any one provide me a sample of Singleton in c++?
|
[
"",
"c++",
"design-patterns",
"singleton",
""
] |
Are there any tricks for preventing SQL Server from entitizing chars like &, <, and >? I'm trying to output a URL in my XML file but SQL wants to replace any '&' with '`&amp;`'
Take the following query:
```
SELECT 'http://foosite.com/' + RTRIM(li.imageStore)
+ '/ImageStore.dll?id=' + RTRIM(li.imageID)
+ '&raw=1&rev=' + RTRIM(li.imageVersion) AS imageUrl
FROM ListingImages li
FOR XML PATH ('image'), ROOT ('images'), TYPE
```
The output I get is like this (&s are entitized):
```
<images>
<image>
    <imageUrl>http://foosite.com/pics4/ImageStore.dll?id=7E92BA08829F6847&amp;raw=1&amp;rev=0</imageUrl>
</image>
</images>
```
What I'd like is this (&s are not entitized):
```
<images>
<image>
<imageUrl>http://foosite.com/pics4/ImageStore.dll?id=7E92BA08829F6847&raw=1&rev=0</imageUrl>
</image>
</images>
```
How does one prevent SQL Server from entitizing the '&'s into '`&amp;`'?
|
What SQL Server generates is correct. What you expect to see is not well-formed XML. The reason is that the `&` character signifies the start of an entity reference, such as `&amp;`. See the [XML specification](http://www.w3.org/TR/REC-xml/#sec-references) for more information.
When your XML parser parses this string out of the XML, it will understand the `&amp;` entity references and return the text in the form you want. So the internal format in the XML file should not cause you a problem unless you're using a buggy XML parser, or trying to parse it manually (in which case your current parser code is effectively buggy with respect to the XML specification).
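A quick way to convince yourself: feed the escaped form to any conforming XML parser and look at the text it hands back (Java's built-in DOM parser is used here purely for illustration):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

// The &amp; in the serialized XML comes back as a literal '&' once parsed:
// the escaping exists only in the serialized form, not in the data.
public class EntityDemo {
    public static String textOf(String xml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(
                            xml.getBytes(StandardCharsets.UTF_8)));
            return doc.getDocumentElement().getTextContent();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Parsing `<u>a&amp;b</u>` this way returns the string `a&b`, which is exactly the URL form the question wants.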
|
There are situations where a person may not want well formed XML - the one I (and perhaps the original poster) encountered was using the For XML Path technique to return a single field list of 'child' items via a recursive query. More information on this technique is here (specifically in the 'The blackbox XML methods' section):
[Concatenating Row Values in Transact-SQL](https://www.simple-talk.com/sql/t-sql-programming/concatenating-row-values-in-transact-sql/)
For my situation, seeing 'H&E' (a pathology stain) transformed into 'well-formed XML' was a real disappointment. Fortunately, I found a solution: the following page helped me solve this issue relatively easily and without having to re-architect my recursive query or add additional parsing at the presentation level (for this as well as other/future situations where my child-row data fields contain reserved XML characters): [Handling Special Characters with FOR XML PATH](http://blogs.lobsterpot.com.au/2010/04/15/handling-special-characters-with-for-xml-path/)
---
EDIT: code below from the referenced blog post.
```
select
stuff(
(select ', <' + name + '>'
from sys.databases
where database_id > 4
order by name
for xml path(''), root('MyString'), type
).value('/MyString[1]','varchar(max)')
, 1, 2, '') as namelist;
```
|
How to preserve an ampersand (&) while using FOR XML PATH on SQL 2005
|
[
"",
"sql",
"xml",
"sql-server-2005",
""
] |
What is a doubly linked list's remove method?
|
The same algorithm that [Bill the Lizard](https://stackoverflow.com/questions/270950/linkedlist-remove-method#270962) said, but in a graphical way :-)
[Diagram of removing a node from a doubly linked list](https://i.stack.imgur.com/dbGK6.gif)
(source: [jaffasoft.co.uk](http://www.jaffasoft.co.uk/feature/ll/fig4.gif))
|
The general algorithm is as follows:
* Find the node to remove.
* node.previous.next = node.next
* node.next.previous = node.previous
* node.previous = null
* node.next = null
* Dispose of node if you're in a non-GC environment
You have to check the previous and next nodes for null to see if you're removing the head or the tail, but those are the easy cases.
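Those steps can be sketched compactly in Java (the `DoublyLinkedList` helper class here is invented for illustration; the null checks handle the head/tail cases mentioned above):

```java
// Minimal doubly linked list illustrating the remove algorithm above.
public class DoublyLinkedList<T> {
    static class Node<T> {
        T value;
        Node<T> prev, next;
        Node(T value) { this.value = value; }
    }

    private Node<T> head, tail;

    public void add(T value) {
        Node<T> node = new Node<>(value);
        if (head == null) {
            head = tail = node;
        } else {
            node.prev = tail;
            tail.next = node;
            tail = node;
        }
    }

    // Unlink the first node holding `value`, patching the neighbours around it.
    public boolean remove(T value) {
        for (Node<T> n = head; n != null; n = n.next) {
            if (n.value.equals(value)) {
                if (n.prev != null) n.prev.next = n.next; else head = n.next;
                if (n.next != null) n.next.prev = n.prev; else tail = n.prev;
                n.prev = n.next = null; // clear stale links (helps the GC)
                return true;
            }
        }
        return false;
    }

    public String toForwardString() {
        StringBuilder sb = new StringBuilder();
        for (Node<T> n = head; n != null; n = n.next) sb.append(n.value).append(' ');
        return sb.toString().trim();
    }
}
```

Removing the head or tail simply falls into the `else` branches, which re-point `head`/`tail` instead of a neighbour.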
|
LinkedList remove method
|
[
"",
"java",
"data-structures",
"linked-list",
""
] |
I have a lot of buttons and by clicking on different button, different image and text would appear. I can achieve what I want, but the code is just so long and it seems very repetitive.
For example:
```
var aaClick = false;
$("aa").observe('click', function() {
unclick();
$('characterPic').writeAttribute('src',"aa.jpg");
$('characterBio').update("aatext");
$('aa').setStyle({ color: '#FFFFFF' });
aaClick = true;
});
$("aa").observe('mouseover', function() {
if (!aaClick) $('aa').setStyle({ color: '#FFFFFF' });
});
$("aa").observe('mouseout', function() {
if (!aaClick) $('aa').setStyle({ color: '#666666' });
});
function unclick() {
aaClick = false;
$('aa').setStyle({ color: '#666666' });
}
```
same thing with bb, cc, etc. and every time I add a new button, I need to add it to unclick function as well. This is pretty annoying and I tried to google it, and I only found observe click on all listed items, so I still couldn't figure out since what I want involves button up when other buttons are clicked.
Is there any way to just have a generic function that takes different id but do the exact same thing? Because from what I can see, if I can just replace aa with other id, I can reduce a lot of code. Thanks!!!
|
Build it all into a function where you can simply pass it the names of the DIVs you want to register. As long as you are consistent with your .jpg names, it should work.
```
var clicks = {};
function regEvents(divName) {
$(divName).observe('click', function() {
unclick(divName);
$('characterPic').writeAttribute('src',divName+".jpg");
    $('characterBio').update(divName + "text");
    $(divName).setStyle({ color: '#FFFFFF' });
    clicks[divName] = true;
});
$(divName).observe('mouseover', function() {
if (!clicks[divName]) $(divName).setStyle({ color: '#FFFFFF' });
});
$(divName).observe('mouseout', function() {
if (!clicks[divName]) $(divName).setStyle({ color: '#666666' });
});
}
function unclick(divName) {
  clicks[divName] = false;
  $(divName).setStyle({ color: '#666666' });
}
```
|
Event delegation. Give all your buttons a class (eg `yourButtonClass`), then...
```
var clicks = {};
$$('.yourButtonClass').each(function(button) {
  clicks[button.id] = false;
});
$(document).observe('click', function(e) {
// If your click registered on an element contained by the button, this comes in handy...
var clicked = e.element().up('.yourButtonClass') || e.element();
  if(clicked.hasClassName('yourButtonClass')) {
unclick();
$('characterPic').writeAttribute('src', clicked.id + '.jpg');
$('characterBio').update(clicked.id + 'text');
$(clicked).setStyle({ color: '#FFFFFF' });
clicks[clicked.id] = true;
}
});
```
And so on...
|
Shorten the code for a lot of buttons in Prototype
|
[
"",
"javascript",
"button",
"prototypejs",
"mouseevent",
""
] |
I find myself writing code that looks like this a lot:
```
set<int> affected_items;
while (string code = GetKeyCodeFromSomewhere())
{
if (code == "some constant" || code == "some other constant") {
affected_items.insert(some_constant_id);
} else if (code == "yet another constant" || code == "the constant I didn't mention yet") {
affected_items.insert(some_other_constant_id);
} // else if etc...
}
for (set<int>::iterator it = affected_items.begin(); it != affected_items.end(); it++)
{
switch(*it)
{
case some_constant_id:
RunSomeFunction(with, these, params);
break;
case some_other_constant_id:
RunSomeOtherFunction(with, these, other, params);
break;
// etc...
}
}
```
The reason I end up writing this code is that I need to only run the functions in the second loop once even if I've received multiple key codes that might cause them to run.
This just doesn't seem like the best way to do it. Is there a neater way?
|
Since you don't seem to care about the actual values in the set you could replace it with setting bits in an int. You can also replace the linear time search logic with log time search logic. Here's the final code:
```
// Ahead of time you build a static map from your strings to bit values.
std::map< std::string, int > codesToValues;
codesToValues[ "some constant" ] = 1;
codesToValues[ "some other constant" ] = 1;
codesToValues[ "yet another constant" ] = 2;
codesToValues[ "the constant I didn't mention yet" ] = 2;
// When you want to do your work
int affected_items = 0;
while (string code = GetKeyCodeFromSomewhere())
affected_items |= codesToValues[ code ];
if( affected_items & 1 )
RunSomeFunction(with, these, params);
if( affected_items & 2 )
RunSomeOtherFunction(with, these, other, params);
// etc...
```
|
One approach is to maintain a map from strings to booleans. The main logic can start with something like:
```
if(done[code])
continue;
done[code] = true;
```
Then you can perform the appropriate action as soon as you identify the code.
Another approach is to store something executable (object, function pointer, whatever) into a sort of "to do list." For example:
```
while (string code = GetKeyCodeFromSomewhere())
{
todo[code] = codefor[code];
}
```
Initialize codefor to contain the appropriate function pointer, or object subclassed from a common base class, for each code value. If the same code shows up more than once, the appropriate entry in todo will just get overwritten with the same value that it already had. At the end, iterate over todo and run all of its members.
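The "to do list" idea above might look like this in Java (names such as `TodoDispatch` and `register` are invented for the sketch; a duplicate code simply overwrites the same entry, so each handler runs at most once):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the "to do list" pattern: codes map to actions, received codes
// are collected in `todo`, and duplicates overwrite harmlessly.
public class TodoDispatch {
    private final Map<String, Runnable> codefor = new LinkedHashMap<>();
    private final Map<String, Runnable> todo = new LinkedHashMap<>();

    public void register(String code, Runnable action) {
        codefor.put(code, action);
    }

    public void receive(String code) {
        Runnable action = codefor.get(code);
        if (action != null) {
            todo.put(code, action); // same key overwrites: runs once at most
        }
    }

    public void runAll() {
        todo.values().forEach(Runnable::run);
        todo.clear();
    }
}
```

This replaces both the `set`-and-`switch` pair in the question with a single lookup structure, and the insertion-ordered map keeps execution order deterministic.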
|
Checking lists and running handlers
|
[
"",
"c++",
"c",
"idioms",
""
] |
In C++, a function's signature depends partly on whether or not it's const. This means that a class can have two member functions with identical signatures except that one is const and the other is not. If you have a class like this, then the compiler will decide which function to call based on the object you call it on: if it's a const instance of the class, the const version of the function will be called; if the object isn't const, the other version will be called.
In what circumstances might you want to take advantage of this feature?
|
This really only makes sense when the member function returns a pointer or a reference to a data member of your class (or a member of a member, or a member of a member of a member, ... etc.). Generally, returning non-const pointers or references to data members is frowned upon, but sometimes it is reasonable, or simply very convenient (e.g. the [] operator). In such cases, you provide const and non-const versions of the getter. This way the decision on whether or not the object can be modified rests with the function using it, which has a choice of declaring it const or non-const.
|
It's there so you can make the compiler enforce whether you return a const object or a regular one, and still maintain the same method signature. There's an in-depth explanation at [Const Correctness](https://isocpp.org/wiki/faq/const-correctness#const-overloading).
|
What is the use of const overloading in C++?
|
[
"",
"c++",
"function",
"constants",
"overloading",
""
] |
I'm wondering what the best way is to have a "if all else fails catch it".
I mean, you're handling as much exceptions as possible in your application,
but still there are bound to be bugs, so I need to have something that
catches all unhandled exceptions so I can collect information and store
them in a database or submit them to a web service.
Does the AppDomain.CurrentDomain.UnhandledException event capture everything?
Even if the application is multithreaded?
Side note: Windows Vista exposes native API functions that allow any application
to recover itself after a crash... can't think of the name now... but I'd rather not
use it, as many of our users are still using Windows XP.
|
I have just played with AppDomain's UnhandledException behavior
(this is the last stage at which the unhandled exception is registered).
Yes, after the event handlers are processed, your application will be terminated and the nasty "... program stopped working" dialog shown.
:)
You *still* can avoid that.
Check out:
```
class Program
{
void Run()
{
AppDomain.CurrentDomain.UnhandledException += new UnhandledExceptionEventHandler(CurrentDomain_UnhandledException);
Console.WriteLine("Press enter to exit.");
do
{
(new Thread(delegate()
{
throw new ArgumentException("ha-ha");
})).Start();
} while (Console.ReadLine().Trim().ToLowerInvariant() == "x");
Console.WriteLine("last good-bye");
}
int r = 0;
void CurrentDomain_UnhandledException(object sender, UnhandledExceptionEventArgs e)
{
Interlocked.Increment(ref r);
Console.WriteLine("handled. {0}", r);
Console.WriteLine("Terminating " + e.IsTerminating.ToString());
Thread.CurrentThread.IsBackground = true;
Thread.CurrentThread.Name = "Dead thread";
while (true)
Thread.Sleep(TimeSpan.FromHours(1));
//Process.GetCurrentProcess().Kill();
}
static void Main(string[] args)
{
Console.WriteLine("...");
(new Program()).Run();
}
}
```
**P.S.** Do also handle Application.ThreadException (WinForms) or DispatcherUnhandledException (WPF) at the higher level.
|
In ASP.NET, you use the `Application_Error` function in the `Global.asax` file.
In WinForms, you use the `MyApplication_UnhandledException` in the `ApplicationEvents` file
Both of these functions are called if an unhandled exception occurs in your code. You can log the exception and present a nice message to the user from these functions.
|
.NET - What's the best way to implement a "catch all exceptions handler"
|
[
"",
"c#",
".net",
"exception",
""
] |
From the [Mozilla Developer Network](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/map):
```
[1,4,9].map(Math.sqrt)
```
will yield:
```
[1,2,3]
```
Why then does this:
```
['1','2','3'].map(parseInt)
```
yield this:
```
[1, NaN, NaN]
```
I have tested in Firefox 3.0.1 and Chrome 0.3 and just as a disclaimer, I know this is not cross-browser functionality (no IE).
I found out that the following will accomplish the desired effect. However, it still doesn’t explain the errant behavior of `parseInt`.
```
['1','2','3'].map(function(i){return +i;}) // returns [1,2,3]
```
|
The callback function in `Array.map` has *three* parameters:
From the same [Mozilla page](https://developer.mozilla.org/en/Core_JavaScript_1.5_Reference/Objects/Array/map) that you linked to:
> callback is invoked with three arguments: the value of the element, the index of the element, and the Array object being traversed."
So if you call a function `parseInt` which actually expects *two* arguments, the second argument will be the index of the element.
In this case, you ended up calling `parseInt` with radix 0, 1 and 2 in turn. The first is the same as not supplying the parameter, so it defaulted based on the input (base 10, in this case). Base 1 is an impossible number base, and 3 is not a valid number in base 2:
```
parseInt('1', 0); // OK - gives 1
parseInt('2', 1); // FAIL - 1 isn't a legal radix
parseInt('3', 2); // FAIL - 3 isn't legal in base 2
```
So in this case, you need the wrapper function:
```
['1','2','3'].map(function(num) { return parseInt(num, 10); });
```
or with ES2015+ syntax:
```
['1','2','3'].map(num => parseInt(num, 10));
```
(In both cases, it's best to *explicitly* supply a radix to `parseInt` as shown, because otherwise it guesses the radix based on the input. In some older browsers, a leading 0 caused it to guess octal, which tended to be problematic. It will still guess hex if the string starts with `0x`.)
|
`map` is passing along a 2nd argument, which is (in many of the cases) messing up `parseInt`'s radix parameter.
If you're using underscore you can do:
`['10','1','100'].map(_.partial(parseInt, _, 10))`
Or without underscore:
`['10','1','100'].map(function(x) { return parseInt(x, 10); });`
|
Why does parseInt yield NaN with Array#map?
|
[
"",
"javascript",
""
] |
What are the best algorithms for sorting data in C#?
Is there one sorting algorithm that can handle 80% of sorts well?
Please give code examples if applicable.
|
Check out this site:
[Sorting Comparisons with Animations](http://www.sorting-algorithms.com/)
Short answer: Quick Sort
Longer answer:
The above site will show you the strengths and weaknesses of each algorithm with some nifty animations.
The short answer is there is no best all around sort (but you knew that since you said 80% of the time :) ) but Quick Sort (or 3 Way Quick Sort) will probably be the best general algorithm you could use.
It is the algorithm used by default for Lists in .Net, so you can just call `.Sort` if what you have is already in a list.
There is pseudo-code on the website I pointed you to above if you want to see how to implement this.
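For illustration, the 3-way partition idea can be sketched in a few lines (Python here for brevity; the question's `List<T>.Sort` already implements a tuned version of this internally, so the sketch is only to show the shape of the algorithm):

```python
def quicksort3(items):
    """3-way quicksort: split into <, ==, > the pivot and recurse.

    The middle partition handles runs of duplicate keys without
    further recursion, which is the main advantage over plain
    2-way quicksort on data with many repeats.
    """
    if len(items) <= 1:
        return items
    pivot = items[len(items) // 2]
    less = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    greater = [x for x in items if x > pivot]
    return quicksort3(less) + equal + quicksort3(greater)
```

An in-place version avoids the list copies, but this form makes the three partitions explicit.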
|
What are you trying to sort? Is there any reason not to use:
```
List<T>.Sort() ?
```
I'm sure this uses QuickSort and you don't have to worry about making any coding errors. You can implement IComparable to change what you want to sort on.
If all your data doesn't fit in memory... well, then you're off to the races with Merge sort or something along those lines.
|
Best sorting algorithms for C# / .NET in different scenarios
|
[
"",
"c#",
".net",
"algorithm",
"sorting",
""
] |
I've been trying to find a free database creator for Mac OS, and I haven't been able to find any. Does anyone know of a free one I could download?
EDIT: I also need the application to generate the SQL (MySQL in this case) :)
ty
|
[SQL Designer](http://ondras.zarovi.cz/sql/demo/) is web-based.
There's also more options at [this question](https://stackoverflow.com/questions/30474/database-schema-diagram-design-tool).
|
I think the [Entity modeling stuff in Xcode](http://developer.apple.com/documentation/Cocoa/Conceptual/CoreData/Articles/cdBasics.html#//apple_ref/doc/uid/TP40001650) might be of use to you...
|
DB sketcher for MacOS?
|
[
"",
"sql",
"database",
"database-design",
"ddl",
""
] |
I've got a project which requires a fairly complicated process and I want to make sure I know the best way to do this. I'm using ASP.net C# with Adobe Flex 3. The app server is Mosso (cloud server) and the file storage server is Amazon S3. The existing site can be viewed at [NoiseTrade.com](https://www.noisetrade.com)
I need to do this:
* Allow users to upload MP3 files to
an album "widget"
* After the user has uploaded their
album/widget, I need to
automatically zip the mp3 (for other
users to download) and upload the
zip along with the mp3 tracks to
Amazon S3
I actually have this working already (using client side processing in Flex) but this no longer works because of Adobe's flash 10 "security" update. So now I need to implement this server-side.
The way I am thinking of doing this is:
* Store the mp3 in a temporary folder
on the app server
* When the artist "publishes" create a
zip of the files in that folder
using a c# library
* Start the amazon S3 upload process (zip and mp3s)
and email the user when it is
finished (as well as deleting the
temporary folder)
The major problem I see with this approach is that if a user deletes or adds a track later on I'll have to update the zip file, but the temporary files will no longer exist.
I'm at a loss at the best way to do this and would appreciate any advice you might have.
Thanks!
|
The bit about updating the zip but not having the temporary files if the user adds or removes a track leads me to suspect that you want to build zips containing multiple tracks, possibly complete albums. If this is incorrect and you're just putting a single mp3 into each zip, then StingyJack is right and you'll probably end up making the file (slightly) *larger* rather than smaller by zipping it.
If my interpretation is correct, then you're in luck. Command-line zip tools frequently have flags which can be used to add files to or delete files from an existing zip archive. You have not stated which library or other method you're using to do the zipping, but I expect that it probably has this capability as well.
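For example, in Python the add/remove operations look like this with the standard `zipfile` module (a sketch of the idea only; the poster would use a C# zip library, and the file names here are made up):

```python
import os
import zipfile

def add_to_zip(archive_path, file_path):
    """Append a file to an existing zip archive (creates it if absent)."""
    with zipfile.ZipFile(archive_path, "a") as zf:
        zf.write(file_path, arcname=os.path.basename(file_path))

def remove_from_zip(archive_path, member_name):
    """Zip has no in-place delete, so rewrite the archive without the member."""
    tmp_path = archive_path + ".tmp"
    with zipfile.ZipFile(archive_path) as src, \
         zipfile.ZipFile(tmp_path, "w") as dst:
        for item in src.infolist():
            if item.filename != member_name:
                # Copy entry and its metadata unchanged
                dst.writestr(item, src.read(item.filename))
    os.replace(tmp_path, archive_path)
```

Either way, the server needs the track files (or the old archive) available when the album changes, which is exactly the problem the poster raises about deleting the temporary folder.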
|
MP3's are compressed. Why bother zipping them?
|
Best process for auto-zipping of multiple MP3s
|
[
"",
"c#",
"asp.net",
"zip",
"amazon-s3",
""
] |
I'm developing a kind of an exchange format between instances of an application so that user could save information to file and restore it later. The whole 'exchange format' is a single class that gets serialized and written to disk.
How would you suggest to store graphical/sound information inside that class? I'd like to avoid just putting the files into a .jar.
|
You might keep your resources stored in the class as byte[] arrays. Using ByteArrayInputStream and ByteArrayOutputStream you are able to wrap the arrays as streams and use them to store and retrieve resources.
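The same pattern, sketched in Python for comparison (the class and field names are invented for the example): keep the raw bytes in the serialized object, and wrap them in a stream only when you need to read them.

```python
import io
import pickle

class Bundle:
    """Holds binary resources (image/sound) as raw bytes so the
    whole object can be serialized in one piece."""
    def __init__(self, image_bytes, sound_bytes):
        self.image = image_bytes
        self.sound = sound_bytes

    def image_stream(self):
        # Wrap the stored bytes as a readable stream on demand,
        # analogous to ByteArrayInputStream in Java.
        return io.BytesIO(self.image)

bundle = Bundle(b"\x89PNG...", b"RIFF...")
blob = pickle.dumps(bundle)    # disk-ready bytes for the exchange file
restored = pickle.loads(blob)  # restore later in another instance
```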
|
**me**
How about more details on your case? The "best" method usually depends on the particular application/use. Does the image/sound come from files? From a stream? Is each instance of the class expected to store separate images? Or can an image be shared between different instances?
**gsmd**
> images come from files, sounds come
> from a stream; there's actually a Set
> of items some of which may have an
> image attached; an image can't be
> shared
---
What methods have you tried?
I guess using the Serializable interface is the way to go. Here are [two](http://java.sun.com/developer/technicalArticles/ALT/serialization/) [articles](http://java.sun.com/developer/technicalArticles/Programming/serialization/) on the topic.
Basically,
1. implement the Serializable interface in your class
2. mark members which shouldn't be saved in the file as transient (members which contain contextual data like file handles, socket connections, etc.)
3. you have to customize how your image and sound classes write data by implementing:
> private void writeObject(ObjectOutputStream out) throws IOException;
>
> private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException;
|
The best way of storing an image/sound inside a class?
|
[
"",
"java",
"serialization",
"image",
"audio",
"persistence",
""
] |
I'm looking for (simple) examples of problems for which JMS is a good solution, and also reasons why JMS is a good solution in these cases. In the past I've simply used the database as a means of passing messages from A to B when the message cannot necessarily be processed by B immediately.
A hypothetical example of such a system is where all newly registered users should be sent a welcome e-mail within 24 hours of registration. For the sake of argument, assume the DB does not record the time when each user registered, but instead a reference (foreign key) to each new user is stored in the pending\_email table. The e-mail sender job runs once every 24 hours, sends an e-mail to all the users in this table, then deletes all the pending\_email records.
This seems like the kind of problem for which JMS should be used, but it's not clear to me what benefit JMS would have over the approach I've described. One advantage of the DB approach is that the messages are persistent. I understand that JMS message queues can also be persisted, but in that case there seems to be little difference between JMS and the "database as message queue" approach I've described?
What am I missing?
- Don
|
JMS and messaging are really about 2 totally different things:
* publish and subscribe (sending a message to as many consumers as are interested - a bit like sending an email to a mailing list; the sender does not need to know who is subscribed)
* high performance reliable load balancing (message queues)
See more info on [how a queue compares to a topic](http://activemq.apache.org/how-does-a-queue-compare-to-a-topic.html)
The case you are talking about is the second case, where yes you can use a database table to kinda simulate a message queue.
The main difference is that a JMS message queue is a high-performance, highly concurrent load balancer designed for huge throughput; you can usually send tens of thousands of messages per second to many concurrent consumers in many processes and threads. The reason for this is that a message queue is basically highly asynchronous - a [good JMS provider will stream messages ahead of time to each consumer](http://activemq.apache.org/what-is-the-prefetch-limit-for.html) so that there are thousands of messages available to be processed in RAM as soon as a consumer is available. This leads to massive throughput and very low latency.
e.g. imagine writing a web load balancer using a database table :)
When using a database table, typically one thread tends to lock the whole table so you tend to get very low throughput when trying to implement a high performance load balancer.
But like most middleware it all depends on what you need; if you've a low throughput system with only a few messages per second - feel free to use a database table as a queue. But if you need low latency and high throughput - then JMS queues are highly recommended.
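The asynchronous hand-off is easy to see even in a toy in-process sketch (Python's standard `queue` module here, standing in for a real JMS broker): consumers block on the queue and receive each message the instant it arrives, with no polling schedule and no table lock.

```python
import queue
import threading

messages = queue.Queue()
processed = []

def consumer():
    # get() blocks until a message is available -- no polling loop,
    # and many consumers can drain the same queue concurrently.
    while True:
        msg = messages.get()
        if msg is None:          # sentinel: shut this worker down
            break
        processed.append(msg)
        messages.task_done()

workers = [threading.Thread(target=consumer) for _ in range(4)]
for w in workers:
    w.start()
for i in range(100):
    messages.put(i)              # producer side: fire and forget
for _ in workers:
    messages.put(None)           # one sentinel per worker
for w in workers:
    w.join()
```

A real broker adds what this toy lacks: persistence, delivery guarantees, and distribution across processes and machines.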
|
In my opinion JMS and other message-based systems are intended to solve problems that need:
* **Asynchronous** communications : An application need to notify another that an event has occurred with no need to wait for a response.
* **Reliability**. Ensure once-and-only-once message delivery. With your DB approach you have to "reinvent the wheel", especially if you have several clients reading the messages.
* **Loose coupling**. Not all systems can communicate using a database. So JMS is pretty good to be used in heterogeneous environments with decoupled systems that can communicate over system boundaries.
|
what is JMS good for?
|
[
"",
"java",
"jms",
"messaging",
""
] |
I have essentially a survey that is shown, and people answer questions a lot like a test, and there are different paths. It is pretty easy so far, but I wanted to make it more dynamic, so that I can have a generic rule for the test with all the paths, to make the evaluator easier to work with. Currently I just allow ANDs, and each OR essentially becomes another rule in the set.
Given a QuestionID, I form a bunch of AND rules like so:
```
<rule id="1">
<true>
<question ID=123>
<question ID=124>
</true>
<false>
<question ID=127>
<question ID=128>
</false>
</rule>
<rule id="2"><true>
<question ID=123>
<question ID=125>
</true>
<false>
<question ID=127>
</false>
</rule>
```
Rule 1 says: if questions 123 and 124 are answered true, and 127 and 128 are false, they pass. OR (rule 2): if 123 and 125 are true and 127 is false, they pass as well.
This gets tedious if there are many combinations, so I want to implement OR in the logic; I am just not sure what the best approach to this problem is.
I think a rules engine is too complicated; there must be an easier way, perhaps constructing a graph like in LINQ and then evaluating it to see if they pass.
thanks!
--not a compsci major.
|
This doesn't have to be complicated: you're most of the way already, since your `<true>` and `<false>` elements effectively implement an AND-type rule. I would introduce an `<or>` element that can hold `<true>` and `<false>` elements.
In your code, you could have:
* A RuleBase class, with a "public abstract bool Evaluate()" method
* TrueRule, FalseRule and OrRule classes, which contain lists of RuleBase objects
* A QuestionRule class, which refers to a specific question
You would implement the Evaluate method on each of these as follows:
* **TrueRule:** returns true only if all the contained rules return true from Evaluate
* **FalseRule:** returns true only if all the contained rules return false from Evaluate
* **OrRule:** returns true if at least one of the contained rules returns true from Evaluate
* **QuestionRule:** returns the answer to the original question
This class hierarchy implements a simple abstract syntax tree (AST). LINQ, in the form of the System.Linq.Expressions.Expression class, does pretty much the same thing, but it's helpful to write your own if it's not obvious how everything fits together.
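A minimal sketch of that hierarchy, written in Python for brevity (the class names mirror the C# ones described above; the `answers` dict mapping question id to the user's boolean response is an assumption about how results are supplied):

```python
class RuleBase:
    def evaluate(self, answers):
        raise NotImplementedError

class QuestionRule(RuleBase):
    """Leaf node: the answer to one question."""
    def __init__(self, question_id):
        self.question_id = question_id
    def evaluate(self, answers):
        return answers[self.question_id]

class TrueRule(RuleBase):
    """True only if every contained rule evaluates to true."""
    def __init__(self, *rules):
        self.rules = rules
    def evaluate(self, answers):
        return all(r.evaluate(answers) for r in self.rules)

class FalseRule(RuleBase):
    """True only if every contained rule evaluates to false."""
    def __init__(self, *rules):
        self.rules = rules
    def evaluate(self, answers):
        return all(not r.evaluate(answers) for r in self.rules)

class OrRule(RuleBase):
    """True if at least one contained rule evaluates to true."""
    def __init__(self, *rules):
        self.rules = rules
    def evaluate(self, answers):
        return any(r.evaluate(answers) for r in self.rules)

# Rule 1 OR rule 2 from the question, as one tree:
rule = OrRule(
    TrueRule(QuestionRule(123), QuestionRule(124),
             FalseRule(QuestionRule(127), QuestionRule(128))),
    TrueRule(QuestionRule(123), QuestionRule(125),
             FalseRule(QuestionRule(127))),
)
```

Building this tree from the XML is then a straightforward recursive walk over the rule elements.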
|
If you use a proper Rule Engine that supports inferencing it would be more efficient and extensible.
Take a look at <http://www.flexrule.com> which is a flexible, extensible rule engine that supports three types of rule. Procedural, Inference and Rule-Flow rules can be externalized from your application and get executed using this framework.
|
Boolean logic rule evaluator
|
[
"",
"c#",
"xml",
"logic",
"boolean",
"rule-engine",
""
] |
I don't think this is possible, but if it is, then I need it :)
I have an auto-generated proxy file from the wsdl.exe command-line tool in Visual Studio 2008.
The proxy output is partial classes. I want to override the default constructor that is generated. I would rather not modify the code since it is auto-generated.
I tried making another partial class and redefining the default constructor, but that doesn't work. I then tried using the override and new keywords, but that doesn't work.
I know I could inherit from the partial class, but that would mean I'd have to change all of our source code to point to the new parent class. I would rather not have to do this.
Any ideas, work arounds, or hacks?
```
//Auto-generated class
namespace MyNamespace {
public partial class MyWebService : System.Web.Services.Protocols.SoapHttpClientProtocol {
public MyWebService() {
string myString = "auto-generated constructor";
//other code...
}
}
}
//Manually created class in order to override the default constructor
namespace MyNamespace {
public partial class MyWebService : System.Web.Services.Protocols.SoapHttpClientProtocol {
public override MyWebService() { //this doesn't work
string myString = "overridden constructor";
//other code...
}
}
}
```
|
This is not possible.
Partial classes are essentially parts of the same class; no method can be defined twice or overridden, and that includes the constructor.
You could call a method in the constructor, and only implement it in the other part file.
|
I had a similar problem, with my generated code being created by a DBML file (I'm using Linq-to-SQL classes).
In the generated class it calls a partial void called OnCreated() at the end of the constructor.
Long story short, if you want to keep the important constructor stuff the generated class does for you (which you probably should do), then in your partial class create the following:
```
partial void OnCreated()
{
// Do the extra stuff here;
}
```
|
Override Default Constructor of Partial Class with Another Partial Class
|
[
"",
"c#",
"web-services",
"wsdl",
"overriding",
"partial-classes",
""
] |
I have two objects, let's call them **`Input`** and **`Output`**
**`Input`** has properties *`Input_ID`*, *`Label`*, and *`Input_Amt`*
**`Output`** has properties *`Output_ID`* and *`Output_Amt`*
I want to perform the equivalent SQL statement in LINQ:
```
SELECT Label, Sum(Added_Amount) as Amount FROM
(SELECT I.Label, I.Input_Amt + ISNULL(O.Output_Amt, 0) as Added_Amount
 FROM Input I LEFT OUTER JOIN Output O ON I.Input_ID = O.Output_ID) AS T
GROUP BY Label
```
For the inner query, I'm writing something like:
```
var InnerQuery = from i in input
join o in output
on i.Input_ID equals o.Output_ID into joined
from leftjoin in joined.DefaultIfEmpty()
select new
{
Label = i.Label,
AddedAmount = (i.Input_Amt + leftjoin.Output_Amt)
};
```
In testing, however, the statement returns null. What gives?
Also, how can I continue the desired query and perform the group after I've added my amounts together, all within a single LINQ statement?
|
Okay, now that I understand what's going on a bit better, the main problem is that you haven't got the equivalent of the ISNULL bit. Try this instead:
```
var InnerQuery = from i in input
join o in output
on i.Input_ID equals o.Output_ID into joined
from leftjoin in joined.DefaultIfEmpty()
select new
{
Label = i.Label,
AddedAmount = (i.Input_Amt + (leftjoin == null ? 0 : leftjoin.Output_Amt))
};
```
|
Which LINQ provider is this actually using? Are you actually talking to a database, or just working in-process? If you're using LINQ to SQL, you can turn the log on to see what SQL is being generated.
I'm sure that InnerQuery itself won't be null - how are you examining the output?
|
How can I perform a nested Join, Add, and Group in LINQ?
|
[
"",
"c#",
"linq",
""
] |
How can I get a list of the IP addresses or host names from a local network easily in Python?
It would be best if it was multi-platform, but it needs to work on Mac OS X first, then others follow.
**Edit:** By local I mean all **active** addresses within a local network, such as `192.168.xxx.xxx`.
So, if the IP address of my computer (within the local network) is `192.168.1.1`, and I have three other connected computers, I would want it to return the IP addresses `192.168.1.2`, `192.168.1.3`, `192.168.1.4`, and possibly their hostnames.
|
If by "local" you mean on the same network segment, then you have to perform the following steps:
1. Determine your own IP address
2. Determine your own netmask
3. Determine the network range
4. Scan all the addresses (except the lowest, which is your network address and the highest, which is your broadcast address).
5. Use your DNS's reverse lookup to determine the hostname for IP addresses which respond to your scan.
Or you can just let Python execute nmap externally and pipe the results back into your program.
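Steps 1-4 can be sketched with the standard `ipaddress` module; the reachability check below uses a crude TCP connect probe as a placeholder (a real scanner would use ICMP ping or ARP, and the sample address is made up):

```python
import ipaddress
import socket

def hosts_in_network(ip, netmask):
    """Steps 2-3: derive the scannable host range from an IP + netmask.
    strict=False lets us pass a host address instead of the network
    address; .hosts() excludes the network and broadcast addresses."""
    net = ipaddress.ip_network(f"{ip}/{netmask}", strict=False)
    return [str(h) for h in net.hosts()]

def is_up(host, port=22, timeout=0.2):
    """Step 4, very crudely: a host counts as reachable if a TCP
    connect attempt on the given port gets any answer at all."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# A /29 keeps the example list short:
print(hosts_in_network("192.168.1.1", "255.255.255.248"))
# Step 5: socket.gethostbyaddr(addr) gives the hostname for responders.
```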
|
**Update**: The script is now located on [github](http://github.com/bwaldvogel/neighbourhood).
I wrote a [small python script](https://github.com/bwaldvogel/neighbourhood/blob/master/neighbourhood.py), that leverages [scapy](http://www.secdev.org/projects/scapy/)'s `arping()`.
|
List of IP addresses/hostnames from local network in Python
|
[
"",
"python",
"networking",
""
] |
What is the best way to allow a team of programmers to use Netbeans, Eclipse and IntelliJ on the same project, thus eliminating the "which IDE is better" question.
Which files should or should not be checked into source code control?
|
I think the best way is to make the build process independent of IDE. This means that your project should not rely on any IDE-specific files to build, but rather use an external build system, like [Apache Maven](http://maven.apache.org/), [Apache Ant](http://ant.apache.org), or even make or custom scripts. Maven is supported by most popular Java IDEs, either directly or via plug-ins.
If you don't want to use an external build system, you should at least make the project as easy to set up as possible (e.g. by having standard folders for shared libraries and other dependencies). When I have worked on teams with multiple IDEs in the past, I spent by far the most time on resolving dependencies as the prerequisites for building the project changed over time. In the worst case you may even end up with developers not bothering to get the latest version from the version control repository, since they think setting up the new project is such a hassle.
If your project has many library dependencies, I think it's a good idea to make these available in binary form in the version control repository. That way people don't have to resolve all the dependencies of the dependencies and so on just to build a single project. This does however require that you have someone responsible for keeping the "official" binaries up-to-date whenever they change. (This is pretty much the same philosophy used by the Maven repository, but the principles can be applied manually even when not using Maven.)
|
Well, that's a pretty self-answering question.
The files to not check into source control are files that have to do with the IDEs themselves.
Leave it to the developers to generate these files.
If you use Maven, it can generate the files such as Eclipse's `.project` and `.classpath` for you. Eclipse in general is very easy to use with a basic file structure (with the new `Java Project` option).
I think Maven has Netbeans support as well, not sure about IntelliJ though.
Maven's site is [maven.apache.org](http://maven.apache.org).
|
Multi-IDE Support in Java for one Team
|
[
"",
"java",
"eclipse",
"ide",
"netbeans",
"intellij-idea",
""
] |
I have a big lump of binary data in a char[] array which I need to interpret as an array of packed 6-bit values.
I *could* sit down and write some code to do this but I'm thinking there has to be a good extant class or function somebody has written already.
What I need is something like:
```
int get_bits(char* data, unsigned bitOffset, unsigned numBits);
```
so I could get the 7th 6-bit character in the data by calling:
```
const unsigned BITSIZE = 6;
char ch = static_cast<char>(get_bits(data, 7 * BITSIZE, BITSIZE));
```
|
This may not work for sizes greater than 8 bits, depending on the endianness of the system. It's basically what Marco posted, though I'm not entirely sure why he'd gather one bit at a time.
```
int get_bits(char* data, unsigned int bitOffset, unsigned int numBits) {
    unsigned int mask = (1u << numBits) - 1; //exact, no <math.h> needed; numBits must be < 32
    data += bitOffset/8;
    bitOffset %= 8;
    //note: this reads a whole int, so it may touch up to 3 bytes past the requested bits
    return (*((int*)data) >> bitOffset) & mask; //little endian
    //return (flip(*(int*)data) >> bitOffset) & mask; //big endian
}
//flips from big to little or vice versa
int flip(int x) {
char temp, *t = (char*)&x;
temp = t[0];
t[0] = t[3];
t[3] = temp;
temp = t[1];
t[1] = t[2];
t[2] = temp;
return x;
}
```
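As a cross-check, the same extraction can be written in a few lines of Python, taking bits LSB-first within each byte to match the little-endian branch above:

```python
def get_bits(data, bit_offset, num_bits):
    """Extract num_bits starting at bit_offset from a bytes object.
    Bit 0 is the least-significant bit of data[0], so the result
    matches the little-endian C version."""
    value = 0
    for i in range(num_bits):
        byte_index, bit_index = divmod(bit_offset + i, 8)
        bit = (data[byte_index] >> bit_index) & 1
        value |= bit << i
    return value
```

With this, the question's call for the 6-bit character at index 7 becomes `get_bits(data, 7 * 6, 6)`.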
|
[Boost.DynamicBitset](http://www.boost.org/doc/libs/release/libs/dynamic_bitset/dynamic_bitset.html) - try it.
|
C/C++ Code to treat a character array as a bitstream
|
[
"",
"c++",
"bit-manipulation",
"extract",
""
] |
Currently I have an application that receives an uploaded file from my web application. I now need to transfer that file to a file server which happens to be located on the same network (however this might not always be the case).
I was attempting to use the webclient class in C# .NET.
```
string filePath = "C:\\test\\564.flv";
try
{
WebClient client = new WebClient();
NetworkCredential nc = new NetworkCredential(uName, password);
Uri addy = new Uri("\\\\192.168.1.28\\Files\\test.flv");
client.Credentials = nc;
byte[] arrReturn = client.UploadFile(addy, filePath);
Console.WriteLine(arrReturn.ToString());
}
catch (Exception ex)
{
Console.WriteLine(ex.Message);
}
```
The machine located at 192.168.1.28 is a file server and has a share c:\Files.
As of right now I am receiving an error of Login failed bad user name or password, but I can open explorer and type in that path login successfully. I can also login using remote desktop, so I know the user account works.
Any ideas on this error?
Is it possible to transfer a file directly like that? With the webclient class or maybe some other class?
|
Just use
```
File.Copy(filePath, @"\\192.168.1.28\Files\test.flv");
```
A Windows file share exposed via a UNC path is treated as part of the file system, and has nothing to do with the web. Note that the destination passed to `File.Copy` must be a full file path, not just the share folder.
The credentials used will be those of the ASP.NET worker process, or any impersonation you've enabled. If you can tweak those to get it right, this can be done.
You may run into problems because you are using the IP address instead of the server name (windows trust settings prevent leaving the domain - by using IP you are hiding any domain details). **If at all possible, use the server name!**
If this is not on the same windows domain, and you are trying to use a different domain account, you will need to specify the username as "[domain\_or\_machine]\[username]"
If you need to specify explicit credentials, you'll need to look into [coding an impersonation solution](http://www.codeproject.com/KB/cs/cpimpersonation1.aspx).
|
```
namespace FileUpload
{
public partial class Form1 : Form
{
string fileName = "";
public Form1()
{
InitializeComponent();
}
private void button1_Click(object sender, EventArgs e)
{
string path = "";
OpenFileDialog fDialog = new OpenFileDialog();
fDialog.Title = "Attach customer proposal document";
fDialog.Filter = "Doc Files|*.doc|Docx File|*.docx|PDF doc|*.pdf";
fDialog.InitialDirectory = @"C:\";
if (fDialog.ShowDialog() == DialogResult.OK)
{
fileName = System.IO.Path.GetFileName(fDialog.FileName);
path = Path.GetDirectoryName(fDialog.FileName);
textBox1.Text = path + "\\" + fileName;
}
}
private void button2_Click(object sender, EventArgs e)
{
try
{
WebClient client = new WebClient();
NetworkCredential nc = new NetworkCredential("erandika1986", "123");
Uri addy = new Uri(@"\\192.168.2.4\UploadDocs\"+fileName);
client.Credentials = nc;
byte[] arrReturn = client.UploadFile(addy, textBox1.Text);
MessageBox.Show(arrReturn.ToString());
}
catch (Exception ex1)
{
MessageBox.Show(ex1.Message);
}
}
}
}
```
|
Uploading files to file server using webclient class
|
[
"",
"c#",
"file-upload",
"webclient",
"fileserver",
""
] |
My application uses loads of Java threads. I am looking for a reliable understanding how the JVM (version 5 and 6) maps the Java threads to underlying Windows threads. I know there is a document for mapping to Solaris threads, but not Windows.
Why doesn't Sun publish this information?
I want to know if there's a 1:1 mapping, or if it varies by JVM, by -server option, by workload, etc, etc.
I know I am not "supposed" to care; I should write properly synchronised code, but I am inheriting a large body of code...
Also, does anyone know how to give names to Windows threads?
|
I don't have a document for you, but from the Threads column in Task Manager you can pretty reliably guess that it maps 1:1 to native threads (you need to enable the Threads column in Task Manager first).
Oh, almost forgot, you can download the jdk src [here](https://jdk6.dev.java.net/) and look yourself.
|
The mapping is platform-dependent; however, I found an interesting [comparison](http://www2.sys-con.com/itsg/virtualcd/Java/archives/0306/gal/index.html) between platform threads for the VM (although it is probably a bit old). The bottom line is: you don't need to know. What you are probably more interested in is [green threads](http://en.wikipedia.org/wiki/Green_threads) (if you don't know about them already).
As for the naming question: Doesn't the [constructor](http://java.sun.com/j2se/1.4.2/docs/api/java/lang/Thread.html#Thread(java.lang.String)) allow you to name a thread? Or do you mean name them and view their name on some windows thread browser?
|
How does the Sun JVM map Java threads to Windows threads?
|
[
"",
"java",
"windows",
"multithreading",
"jvm",
""
] |
Lots of frameworks let me expose an ejb as a webservice.
But then 2 months after publishing the initial service I need to change the ejb or any part of its interface. I still have clients that need to access the old interface, so I obviously need to have 2 webservices with different signatures.
Anyone have any suggestions on how I can do this, preferably letting the framework do the grunt work of creating wrappers and copying logic (unless there's an even smarter way).
I can choose webservice framework on basis of this, so suggestions are welcome.
Edit: I know my change is going to break compatibility,and I am fully aware that I will need two services with different namespaces at the same time. But how can I do it in a simple manner ?
|
I don't think you need any additional frameworks to do this. Java EE lets you directly expose the EJB as a web service (since [EJB 2.1](http://www.jcp.org/en/jsr/detail?id=153); see [example for J2EE 1.4](http://java.sun.com/j2ee/1.4/docs/tutorial/doc/Session3.html#wp79822)), but with EE 5 it's even simpler:
```
@WebService
@SOAPBinding(style = Style.RPC)
public interface ILegacyService extends IOtherLegacyService {
// the interface methods
...
}
@Stateless
@Local(ILegacyService.class)
@WebService(endpointInterface = "...ILegacyService", ...)
public class LegacyServiceImpl implements ILegacyService {
// implementation of ILegacyService
}
```
Depending on your application server, you should be able to provide `ILegacyService` at any location that fits. As jezell said, you should try to put changes that do not change the contract directly into this interface. If you have additional changes, you may just provide another implementation with a different interface. Common logic can be pulled up into a superclass of `LegacyServiceImpl`.
|
OK, here goes:
It seems like dozer.sourceforge.net is an acceptable starting point for doing the grunt work of copying data between two parallel structures. I suppose a lot of web frameworks can generate client proxies that can be re-used in a server context to maintain compatibility.
|
How to expose an EJB as a webservice that will later let me keep client compatibility when ejb changes?
|
[
"",
"java",
"web-services",
""
] |
The WPF Popup control is nice, but somewhat limited in my opinion. Is there a way to "drag" a popup around when it is open (like with the DragMove() method of windows)?
Can this be done without big problems, or do I have to write a substitute for the Popup class myself?
thanks
|
Here's a simple solution using a Thumb.
* Subclass Popup in XAML and codebehind
* Add a Thumb with width/height set to 0 (this could also be done in XAML)
* Listen for MouseDown events on the Popup and raise the same event on the Thumb
* Move popup on DragDelta
XAML:
```
<Popup x:Class="PopupTest.DraggablePopup" ...>
<Canvas x:Name="ContentCanvas">
</Canvas>
</Popup>
```
C#:
```
public partial class DraggablePopup : Popup
{
public DraggablePopup()
{
var thumb = new Thumb
{
Width = 0,
Height = 0,
};
ContentCanvas.Children.Add(thumb);
MouseDown += (sender, e) =>
{
thumb.RaiseEvent(e);
};
thumb.DragDelta += (sender, e) =>
{
HorizontalOffset += e.HorizontalChange;
VerticalOffset += e.VerticalChange;
};
}
}
```
|
There is no DragMove() for Popup. Here is a small workaround; there are lots of improvements you can add to this.
```
<Popup x:Name="pop" IsOpen="True" Height="200" Placement="AbsolutePoint" Width="200">
<Rectangle Stretch="Fill" Fill="Red"/>
</Popup>
```
In the code-behind, add this MouseMove event handler:
```
pop.MouseMove += new MouseEventHandler(pop_MouseMove);
void pop_MouseMove(object sender, MouseEventArgs e)
{
if (e.LeftButton == MouseButtonState.Pressed)
{
pop.PlacementRectangle = new Rect(new Point(e.GetPosition(this).X,
e.GetPosition(this).Y),new Point(200,200));
}
}
```
|
Drag WPF Popup control
|
[
"c#",
".net",
"wpf",
"xaml",
"popup"
] |
How can I create 7-Zip archives from my C# console application? I need to be able to extract the archives using the regular, widely available [7-Zip](http://www.7-zip.org/) program.
---
## Here are my results with the examples provided as answers to this question
* "Shelling out" to 7z.exe - this is the simplest and most effective approach, and I can confirm that **it works nicely**. As [workmad3 mentions](https://stackoverflow.com/questions/222030/how-do-i-create-7-zip-archives-with-net#222047), I just need to guarantee that 7z.exe is installed on all target machines, which is something I can guarantee.
* [7Zip in memory compression](http://www.eggheadcafe.com/tutorials/aspnet/064b41e4-60bc-4d35-9136-368603bcc27a/7zip-lzma-inmemory-com.aspx) - this refers to compressing cookies "in-memory" before sending to the client; this method seems somewhat promising. The wrapper methods (wrapping the [LZMA SDK](http://www.7-zip.org/sdk.html)) return type `byte[]`. When I write the `byte[]` array to a file, I can't extract it using 7-Zip (`File.7z is not supported archive`).
* [7zSharp](http://www.codeplex.com/7zsharp) Wrapper (found on CodePlex) - this wraps the 7z exe/[LZMA SDK](http://www.7-zip.org/sdk.html). I referenced the project from my app, and it successfully created some archive files, but I was unable to extract the files using the regular 7-Zip program (`File.7z is not supported archive`).
* [7Zip SDK aka LZMA SDK](http://www.7-zip.org/sdk.html) - I guess I'm not smart enough to figure out how to use this (which is why I posted here)... Any working code examples that demonstrate creating a 7zip archive that is able to be extracted by the regular 7zip program?
* [CodeProject C# (.NET) Interface for 7-Zip Archive DLLs](http://www.codeproject.com/KB/DLL/cs_interface_7zip.aspx) - only supports extracting from 7zip archives... I need to create them!
* [SharpZipLib](http://sharpdevelop.net/OpenSource/SharpZipLib/Default.aspx) - According to their [FAQ](http://wiki.sharpdevelop.net/SharpZipLib_FAQ.ashx), SharpZipLib doesn't support 7zip.
|
If you can guarantee the 7-zip app will be installed (and in the path) on all target machines, you can offload by calling the command line app 7z. Not the most elegant solution but it is the least work.
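For reference, a minimal sketch of that shell-out approach. It assumes `7z.exe` is on the PATH, and `a` is 7-Zip's "add to archive" command; the file names are placeholders:

```
using System;
using System.Diagnostics;

class Demo
{
    static void Main()
    {
        var psi = new ProcessStartInfo
        {
            // Assumes 7z.exe is installed and on the PATH.
            FileName = "7z.exe",
            Arguments = "a \"out.7z\" \"somefile.txt\"",
            UseShellExecute = false,
            CreateNoWindow = true
        };
        using (Process p = Process.Start(psi))
        {
            p.WaitForExit();
            // 7-Zip returns 0 on success.
            if (p.ExitCode != 0)
                throw new InvalidOperationException(
                    "7z.exe exited with code " + p.ExitCode);
        }
    }
}
```

Since you can guarantee 7z.exe is installed, checking the exit code is about the only error handling needed.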
|
[EggCafe 7Zip cookie example](http://www.eggheadcafe.com/tutorials/aspnet/064b41e4-60bc-4d35-9136-368603bcc27a/7zip-lzma-inmemory-com.aspx) This is an example (zipping a cookie) with the DLL of 7Zip.
[CodePlex Wrapper](http://www.codeplex.com/7zsharp)
This is an open-source project that wraps the zipping functions of 7z.
[7Zip SDK](http://www.7-zip.org/sdk.html) The official SDK for 7zip (C, C++, C#, Java) <---My suggestion
.Net zip library by [SharpDevelop.net](http://sharpdevelop.net/OpenSource/SharpzipLib/)
[CodeProject](http://www.codeproject.com/KB/DLL/cs_interface_7zip.aspx) example with 7zip
[SharpZipLib](http://community.sharpdevelop.net/blogs/johnreilly/default.aspx) Supports many archive formats (though not 7z).
|
How do I create 7-Zip archives with .NET?
|
[
"c#",
".net",
"compression",
"7zip"
] |
There is this example code, but then it starts talking about millisecond / nanosecond problems.
The same question is on MSDN, *[Seconds since the Unix epoch in C#](https://learn.microsoft.com/archive/blogs/brada/seconds-since-the-unix-epoch-in-c)*.
This is what I've got so far:
```
public Double CreatedEpoch
{
get
{
DateTime epoch = new DateTime(1970, 1, 1, 0, 0, 0, 0).ToLocalTime();
TimeSpan span = (this.Created.ToLocalTime() - epoch);
return span.TotalSeconds;
}
set
{
DateTime epoch = new DateTime(1970, 1, 1, 0, 0, 0, 0).ToLocalTime();
this.Created = epoch.AddSeconds(value);
}
}
```
|
Here's what you need:
```
public static DateTime UnixTimeStampToDateTime( double unixTimeStamp )
{
// Unix timestamp is seconds past epoch
DateTime dateTime = new DateTime(1970, 1, 1, 0, 0, 0, 0, DateTimeKind.Utc);
dateTime = dateTime.AddSeconds( unixTimeStamp ).ToLocalTime();
return dateTime;
}
```
Or, for Java (which is different because the timestamp is in milliseconds, not seconds):
```
public static DateTime JavaTimeStampToDateTime( double javaTimeStamp )
{
// Java timestamp is milliseconds past epoch
DateTime dateTime = new DateTime(1970, 1, 1, 0, 0, 0, 0, DateTimeKind.Utc);
dateTime = dateTime.AddMilliseconds( javaTimeStamp ).ToLocalTime();
return dateTime;
}
```
|
The [latest version of .NET (v4.6)](http://www.visualstudio.com/en-us/news/vs2015-preview-vs#Net) has added built-in support for Unix time conversions. That includes both to and from Unix time represented by either seconds or milliseconds.
* Unix time in seconds to UTC `DateTimeOffset`:
```
DateTimeOffset dateTimeOffset = DateTimeOffset.FromUnixTimeSeconds(1000);
```
* `DateTimeOffset` to Unix time in seconds:
```
long unixTimeStampInSeconds = dateTimeOffset.ToUnixTimeSeconds();
```
* Unix time in milliseconds to UTC `DateTimeOffset`:
```
DateTimeOffset dateTimeOffset = DateTimeOffset.FromUnixTimeMilliseconds(1000000);
```
* `DateTimeOffset` to Unix time in milliseconds:
```
long unixTimeStampInMilliseconds = dateTimeOffset.ToUnixTimeMilliseconds();
```
---
Note: These methods convert to and from a UTC `DateTimeOffset`. To get a `DateTime` representation simply use the `DateTimeOffset.UtcDateTime` or `DateTimeOffset.LocalDateTime` properties:
```
DateTime dateTime = dateTimeOffset.UtcDateTime;
```
|
How can I convert a Unix timestamp to DateTime and vice versa?
|
[
"c#",
"datetime",
"unix",
"epoch",
"data-conversion"
] |
I have this doubt, I've searched the web and the answers seem to be diversified. Is it better to use mysql\_pconnect over mysql\_connect when connecting to a database via PHP? I read that pconnect scales much better, but on the other hand, being a persistent connection... having 10 000 connections at the same time, all persistent, doesn't seem scalable to me.
Thanks in advance.
|
Persistent connections should be unnecessary for MySQL. In other databases (such as Oracle), making a connection is expensive and time-consuming, so if you can re-use a connection it's a big win. But those brands of database offer connection pooling, which solves the problem in a better way.
Making a connection to a MySQL database is quick compared to those other brands, so using persistent connections gives proportionally less benefit for MySQL than it would for another brand of database.
Persistent connections have a downside too. The database server allocates resources to each connection, whether the connections are needed or not. So you see a lot of wasted resources for no purpose if the connections are idle. I don't know if you'll reach 10,000 idle connections, but even a couple of hundred is costly.
Connections have state, and it would be inappropriate for a PHP request to "inherit" information from a session previously used by another PHP request. For example, temporary tables and user variables are normally cleaned up as a connection closes, but not if you use persistent connections. Likewise session-based settings like character set and collation. Also, `LAST_INSERT_ID()` would report the id last generated during the session -- even if that was during a prior PHP request.
For MySQL at least, the downside of persistent connections probably outweighs their benefits. And there are other, better techniques to achieve high scalability.
---
Update March 2014:
MySQL connection overhead was always low compared to other brands of RDBMS, and it's getting even better.
See <http://mysqlserverteam.com/improving-connectdisconnect-performance/>
> In MySQL 5.6 we started working on optimizing the code handling connects and disconnects. And this work has accelerated in MySQL 5.7. In this blog post I will first show the results we have achieved and then describe what we have done to get them.
Read the blog for more details and speed comparisons.
|
Basically you have to balance the cost of creating connections versus keeping connections. Even though MySQL is very fast at setting up a new connection, it still costs -- in thread setup time, and in TCP/IP setup time from your web server. This is noticeable on a high-enough traffic site. Unfortunately, PHP does not have any controls on the persistence of connections. So the answer is to lower the idle timeout in MySQL a long way (like down to 20 seconds), and to up the thread cache size. Together, this generally works remarkably well.
On the flip side, your application needs to respect the state of the connection. It is best if it makes no assumptions about what state the session is in. If you use temporary tables, then using CREATE IF NOT EXISTS and TRUNCATE TABLE helps a lot, as does naming them uniquely (such as by including the userid). Transactions are a bit more problematic; but your code can always do ROLLBACK at the top, just in case.
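For reference, the two knobs mentioned above look like this; the values are only illustrative starting points, not recommendations:

```sql
-- Reap idle (persistent) connections quickly...
SET GLOBAL wait_timeout = 20;
-- ...and cache threads so fresh connections are cheap to set up.
SET GLOBAL thread_cache_size = 64;
```

Both can also be set in my.cnf so they survive a server restart.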
|
mysql_connect VS mysql_pconnect
|
[
"php",
"mysql",
"mysql-connect",
"mysql-pconnect"
] |
Here is my sample code. It is meant to be an iterative procedure for gauss seidel (matrix solver). Essentially when the error is small enough it breaks out of the while loop.
```
i=1
while (i>0):
x_past = x_present
j=0
while(j<3):
value=0
k=0
while(k<3):
if(k!=j):
if(i==1):
if(k>j):
value=value+0
else:
value=value+x_present[k]*eqn[j][k]
else:
value=value+x_present[k]*eqn[j][k]
else:
value=value+eqn[j][k]
k=k+1
x_present[j:j+1]=[value]
j=j+1
print "X_PAST"
print x_past
print "X_PRESENT"
print x_present
if(error(x_past, x_present)<10**-2):
break;
i=i+1
```
I've reduced the code so it's more manageable. If you don't understand what it's doing, that's not really important to solving this problem.
Here is the issue. Every time
```
x_present[j:j+1]=[value]
```
is run, x\_past is made equal to x\_present. I don't know why this is the case, as the only place I have set x\_past equal to x\_present is at the top of the loop. If I take away the
```
x_past=x_present
```
statement, x\_past is never made equal to x\_present. This leads me to think it is some combination of the two statements that is causing the issue.
This is a big problem because if x\_past = x\_present the error = 0 every time and the loop breaks after the first iteration. The code does work: for example, if I tell the code to run for 8 iterations and then break, it gives me the answer it's supposed to.
I've been trying to figure this out for the last 4 hours and am completely stumped. I haven't been working with python long, so my trouble shooting skills syntax wise aren't that great. Any help would be appreciated!!
|
Yes, I think the answers here show your problem.
Just to try and clarify a little bit.
You're referencing a list, so when the list changes any reference to that list will reflect that change. To demonstrate:
```
>>> x_present = [4,5,6]
>>>
>>> x_past = x_present
>>>
>>> x_past
[4, 5, 6]
>>>
>>> x_present.append(7)
>>>
>>> x_past
[4, 5, 6, 7]
>>>
```
If you want a copy of the list you have to do this: **listcopy = mylist[:]** (or `import copy; listcopy = copy.copy(mylist)`).
```
>>> x_past = x_present[:]
>>> x_past
[4, 5, 6, 7]
>>>
>>> x_present.append(8)
>>>
>>> x_past
[4, 5, 6, 7]
```
|
What are x\_past and x\_present? I don't know much Python, but from a .NET/Java perspective, if they're references to some data structure (a map or whatever) then making them references to the same object (as you do at the start) will mean that any changes made through one variable will be visible through the other. It sounds like you need to take a copy of the data structure instead of just doing a reference assignment. Does the data structure you're working with have any kind of "clone" functionality available?
As I say though, I don't know much Python so this could be totally wrong...
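To answer the question posed here: in Python the "clone" equivalents are the slice copy for flat lists and the `copy` module for nested structures. A quick sketch:

```python
import copy

matrix = [[1, 2], [3, 4]]

shallow = matrix[:]           # new outer list, but the inner lists are shared
deep = copy.deepcopy(matrix)  # fully independent copy

matrix[0][0] = 99
print(shallow[0][0])  # 99 -> the inner list is still shared
print(deep[0][0])     # 1  -> the deep copy is unaffected
```

Since `x_past` and `x_present` in the question hold plain numbers, `x_past = x_present[:]` is enough; `copy.deepcopy` matters once the elements are themselves mutable.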
|
Python - one variable equals another variable when it shouldn't
|
[
"python",
"variables"
] |
How do you set internationalization on a `DateTimePicker` or `Calendar` WinForms control in .NET when the desired culture is different from the one installed on the PC?
|
It doesn't seem to be possible to change the culture. See this [KB article](http://support.microsoft.com/Default.aspx?scid=kb%3ben-us%3b889834&x=18&y=19).
|
Based on the previous solution, I think this is better:
```
dateTimePicker.Format = DateTimePickerFormat.Custom;
dateTimePicker.CustomFormat = Application.CurrentCulture.DateTimeFormat.ShortDatePattern;
```
|
How to change culture to a DateTimepicker or calendar control in .Net
|
[
"c#",
".net",
"internationalization",
"datetimepicker",
"culture"
] |
I am very familiar with the Command pattern, but I don't yet understand the difference in theory between a Functor and a command. In particular, I am thinking of Java implementations. Both are basically programming "verbs" represented as objects. However, in the case of functors, as I have seen from some examples anonymous inner class implementations seem common. Can anyone out there clear this up for me nicely?
|
A functor is a 'syntax level' concept - it packages up code in an object that can be treated syntactically like a function pointer - i.e. it can be 'called' by putting parameter list in brackets after it. In C++ you could make a class a functor by overriding operator().
A Command in the command pattern is an object that packages up some runnable functionality, but there's no requirement for it to be a functor. For example, it could be a class that implements an interface ICommand, allowing its command to be run by calling Do().
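To make the distinction concrete in the C++ terms used above, here is a minimal sketch; the names are invented for illustration:

```cpp
#include <cassert>

// Functor: an object you can call with function syntax, via operator().
struct Adder {
    int amount;
    explicit Adder(int a) : amount(a) {}
    int operator()(int x) const { return x + amount; }
};

// Command: packages a request behind a uniform interface, so the caller
// only knows "this can be executed", not what it does or how.
class ICommand {
public:
    virtual ~ICommand() {}
    virtual void Do() = 0;
};

class IncrementCommand : public ICommand {
    int& target;
public:
    explicit IncrementCommand(int& t) : target(t) {}
    void Do() { target += 1; }
};
```

An `Adder(5)` is invoked with function syntax, `addFive(2)`, while an `IncrementCommand` is invoked through the interface with `Do()`: the functor is a calling convention, the command a design role, and one object can be both.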
|
A functor is an implementation, a way of making an object behave like a function.
The 'Command Pattern' is a design pattern.
The functor is one way to implement the 'Command Pattern'.
|
What is the difference between a Functor and the Command pattern?
|
[
"java",
"design-patterns",
"oop",
"theory",
"functor"
] |
Quick add-on requirement in our project. A field in our DB that holds a phone number is set to only allow 10 characters. So, if I get passed "(913)-444-5555" or anything else, is there a quick way to run a string through some kind of special replace function to which I can pass a set of characters to allow?
Regex?
|
Definitely regex:
```
string CleanPhone(string phone)
{
Regex digitsOnly = new Regex(@"[^\d]");
return digitsOnly.Replace(phone, "");
}
```
or within a class to avoid re-creating the regex all the time:
```
private static Regex digitsOnly = new Regex(@"[^\d]");
public static string CleanPhone(string phone)
{
return digitsOnly.Replace(phone, "");
}
```
Depending on your real-world inputs, you may want some additional logic there to do things like strip out leading 1's (for long distance) or anything trailing an x or X (for extensions).
|
You can do it easily with regex:
```
string subject = "(913)-444-5555";
string result = Regex.Replace(subject, "[^0-9]", ""); // result = "9134445555"
```
|
Replace non-numeric with empty string
|
[
"c#",
"regex",
"string",
"phone-number"
] |
Why is using '\*' to build a view bad?
Suppose that you have a complex join and all fields may be used somewhere.
Then you just have to choose the fields needed.
```
SELECT field1, field2 FROM aview WHERE ...
```
The view "aview" could be `SELECT table1.*, table2.* ... FROM table1 INNER JOIN table2 ...`
We have a problem if 2 fields have the same name in table1 and table2.
Is this only the reason why using '\*' in a view is bad?
With '\*', you may use the view in a different context because the information is there.
What am I missing ?
Regards
|
I don't think there's much in software that is "just bad", but there's plenty of stuff that is misused in bad ways :-)
The example you give is a reason why \* might not give you what you expect, and I think there are others. For example, if the underlying tables change (maybe columns are added or removed), a view that uses \* will continue to be valid, but might break any applications that use it. If your view had named the columns explicitly, there would be a better chance that someone would spot the problem when making the schema change.
On the other hand, you might actually *want* your view to blithely
accept all changes to the underlying tables, in which case a \* would
be just what you want.
*Update:* I don't know if the OP had a specific database vendor in mind, but it is now clear that my last remark does not hold true for all types. I am indebted to user12861 and Jonny Leeds for pointing this out, and sorry it's taken over 6 years for me to edit my answer.
|
Although many of the comments here are very good and reference one common problem of using wildcards in queries, such as causing errors or different results if the underlying tables change, another issue that hasn't been covered is optimization. A query that pulls every column of a table tends not to be as efficient as a query that pulls only those columns you actually need. Granted, there are times when you need every column and it's a major PIA having to reference them all, especially in a large table, but if you only need a subset, why bog down your query with more columns than you need?
|
Why is using '*' to build a view bad?
|
[
"sql",
"view"
] |
ShellExecute() allows me to perform simple shell tasks, allowing the system to take care of opening or printing files. I want to take a similar approach to sending an email attachment programmatically.
I don't want to manipulate Outlook directly, since I don't want to assume which email client the user uses by default. I don't want to send the email directly, as I want the user to have the opportunity to write the email body using their preferred client. Thus, I really want to accomplish exactly what Windows Explorer does when I right click a file and select Send To -> Mail Recipient.
I'm looking for a C++ solution.
|
This is my MAPI solution:
```
#include <tchar.h>
#include <windows.h>
#include <mapi.h>
#include <mapix.h>
int _tmain( int argc, wchar_t *argv[] )
{
HMODULE hMapiModule = LoadLibrary( _T( "mapi32.dll" ) );
if ( hMapiModule != NULL )
{
LPMAPIINITIALIZE lpfnMAPIInitialize = NULL;
LPMAPIUNINITIALIZE lpfnMAPIUninitialize = NULL;
LPMAPILOGONEX lpfnMAPILogonEx = NULL;
LPMAPISENDDOCUMENTS lpfnMAPISendDocuments = NULL;
LPMAPISESSION lplhSession = NULL;
lpfnMAPIInitialize = (LPMAPIINITIALIZE)GetProcAddress( hMapiModule, "MAPIInitialize" );
lpfnMAPIUninitialize = (LPMAPIUNINITIALIZE)GetProcAddress( hMapiModule, "MAPIUninitialize" );
lpfnMAPILogonEx = (LPMAPILOGONEX)GetProcAddress( hMapiModule, "MAPILogonEx" );
lpfnMAPISendDocuments = (LPMAPISENDDOCUMENTS)GetProcAddress( hMapiModule, "MAPISendDocuments" );
if ( lpfnMAPIInitialize && lpfnMAPIUninitialize && lpfnMAPILogonEx && lpfnMAPISendDocuments )
{
HRESULT hr = (*lpfnMAPIInitialize)( NULL );
if ( SUCCEEDED( hr ) )
{
hr = (*lpfnMAPILogonEx)( 0, NULL, NULL, MAPI_EXTENDED | MAPI_USE_DEFAULT, &lplhSession );
if ( SUCCEEDED( hr ) )
{
// this opens the email client with "C:\attachment.txt" as an attachment
hr = (*lpfnMAPISendDocuments)( 0, ";", "C:\\attachment.txt", NULL, NULL );
if ( SUCCEEDED( hr ) )
{
hr = lplhSession->Logoff( 0, 0, 0 );
hr = lplhSession->Release();
lplhSession = NULL;
}
}
}
(*lpfnMAPIUninitialize)();
}
FreeLibrary( hMapiModule );
}
return 0;
}
```
|
You can use a standard "mailto:" command via the Windows shell. It will run the default mail client.
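A hedged C++ sketch of that approach (Windows-only; note that attachment support via `mailto:` is not standardized, so in practice this mostly pre-fills the recipient and subject rather than attaching a file):

```cpp
#include <windows.h>
#include <shellapi.h>

int main()
{
    // Asks the shell to open the default mail client with a new message.
    ShellExecute(NULL, TEXT("open"),
                 TEXT("mailto:someone@example.com?subject=Log%20file"),
                 NULL, NULL, SW_SHOWNORMAL);
    return 0;
}
```

This is why the MAPI answer above is the closer match to Explorer's "Send To -> Mail Recipient" behavior when an actual attachment is required.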
|
How do I programmatically send an email in the same way that I can "Send To Mail Recipient" in Windows Explorer?
|
[
"c++",
"windows",
"email",
"shell"
] |
I know there is a `WeakHashMap` in `java.util`, but since it uses `WeakReference`s for everything, anything referenced only by this `Map` will get lost on the next GC cycle. So it's nearly useless if you want to cache random data which is very likely to be requested again without being strongly referenced the rest of the time. The best solution would be a map which uses `SoftReference`s instead, but I didn't find one in the Java RT package.
|
Edit (Aug. 2012):
It turns out that currently the best solution are probably Guava 13.0's `Cache` classes, explained on [Guava's Wiki](https://github.com/google/guava/wiki/CachesExplained) - that's what I'm going to use.
It even supports building a `SoftHashMap` (see `CacheBuilder.newBuilder().softKeys()`), but it is probably not what you want, as Java expert Jeremy Manson explains (below you'll find the link).
---
Not that [I know of](http://www.javalobby.org/java/forums/t16581.html) (Nov. 2008), but you kind find some implementation of `SoftHashMap` on the net.
Like this one: [`SoftHashMap`](http://www.koders.com/java/fidF6A9D06D716A6B562853A8DA95C43169F4044FC0.aspx?s=idef%3Aconfiguration) or [this one](http://www.krugle.org/examples/p-5RTxkghvRYJ5a4Wv/SoftHashMap.java).
---
Edit (Nov. 2009)
As [Matthias](https://stackoverflow.com/users/127013/matthias) mentions in the comments, the [Google Guava](https://github.com/google/guava) [**MapMaker**](https://github.com/google/guava/wiki/MapMakerMigration) does use SoftReferences:
> A `ConcurrentMap` builder, providing any combination of these features:
> * soft or weak keys,
* soft or weak values,
* timed expiration, and
* on-demand computation of values.
As mentioned in [this thread](http://www.mail-archive.com/general@lists.ops4j.org/msg08349.html), another JSR166y candidate:
[jsr166y.ConcurrentReferenceHashMap](http://anonsvn.jboss.org/repos/jbosscache/experimental/jsr166/src/jsr166y/ConcurrentReferenceHashMap.java)
> It provides an alternative concurrent reference map to the Google implementation (which relies on a background thread to evict entries)
---
Edit (August 2012)
The Google implementation uses a background thread only when timed expiration of entries is requested. In particular, it simply uses `java.util.Timer`, which is not so intrusive as having a separate background thread.
Jeremy Manson recommends, for any cache, using this feature to avoid the dangers of SoftReference:
<http://jeremymanson.blogspot.de/2009/07/how-hotspot-decides-to-clear_07.html>
There's another implementation from [Apache Commons](http://commons.apache.org/collections/), namely [org.apache.commons.collections.map.ReferenceMap](https://commons.apache.org/proper/commons-collections/javadocs/api-3.2.2/org/apache/commons/collections/map/ReferenceMap.html); it does not support timed removal, but it does support choosing whether keys should be compared by identity or by equality. Moreover, this implementation is not concurrent - it can be made synchronized, but that works less well under accesses from multiple threads.
|
I am familiar with two libraries that offer a SoftHashMap implementation:
1. [Apache Commons](http://commons.apache.org/collections/): org.apache.commons.collections.map.ReferenceMap
2. [Google Collections](http://google-collections.googlecode.com/): com.google.common.collect.ReferenceMap
|
Is there a SoftHashMap in Java?
|
[
"java",
"hashmap",
"weak-references",
"soft-references"
] |
I need to do a date comparison in MySQL without taking into account the time component, i.e. I need to convert '2008-11-05 14:30:00' to '2008-11-05'.
Currently I am doing this:
```
SELECT from_days(to_days(my_date))
```
Is there a proper way of doing this?
|
Yes, use the [date](http://dev.mysql.com/doc/refman/5.0/en/date-and-time-functions.html#function_date) function:
```
SELECT date(my_date)
```
|
`select date(somedate)` is the most common.
If you need to accommodate other formats, you can use:
```
SELECT DATE_FORMAT(your_date, '%Y-%m-%d');
```
|
Mysql strip time component from datetime
|
[
"sql",
"mysql",
"datetime"
] |
I'm writing code to do XML serialization, with the function below.
```
public static string SerializeToXml(object obj)
{
XmlSerializer serializer = new XmlSerializer(obj.GetType());
using (StringWriter writer = new StringWriter())
{
serializer.Serialize(writer, obj);
return writer.ToString();
}
}
```
If the argument is an instance of a class without a parameterless constructor, it will throw an exception.
> Unhandled Exception:
> System.InvalidOperationException:
> CSharpConsole.Foo cannot be serialized
> because it does not have a
> parameterless constructor. at
> System.Xml.Serialization.TypeDesc.CheckSupported()
> at
> System.Xml.Serialization.TypeScope.GetTypeDesc(Type
> type, MemberInfo sourc e, Boolean
> directReference, Boolean throwOnError)
> at
> System.Xml.Serialization.ModelScope.GetTypeModel(Type
> type, Boolean direct Reference) at
> System.Xml.Serialization.XmlReflectionImporter.ImportTypeMapping(Type
> type , XmlRootAttribute root, String
> defaultNamespace) at
> System.Xml.Serialization.XmlSerializer..ctor(Type
> type, String defaultName space) at
> System.Xml.Serialization.XmlSerializer..ctor(Type
> type)
Why must there be a parameterless constructor in order to allow xml serialization to succeed?
EDIT: thanks for cfeduke's answer. The parameterless constructor can be private or internal.
|
During an object's de-serialization, the class responsible for de-serializing an object creates an instance of the serialized class and then proceeds to populate the serialized fields and properties only after acquiring an instance to populate.
You can make your constructor `private` or `internal` if you want, just so long as it's parameterless.
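A minimal sketch of that claim; the type and members are invented for illustration:

```
using System;
using System.IO;
using System.Xml.Serialization;

public class Foo
{
    // Parameterless constructor for XmlSerializer only; ordinary callers
    // are still forced through the public constructor below.
    private Foo() { }

    public Foo(string name) { Name = name; }

    public string Name { get; set; }
}

class Demo
{
    static void Main()
    {
        var serializer = new XmlSerializer(typeof(Foo));
        string xml;
        using (var writer = new StringWriter())
        {
            serializer.Serialize(writer, new Foo("bar"));
            xml = writer.ToString();
        }
        using (var reader = new StringReader(xml))
        {
            var roundTripped = (Foo)serializer.Deserialize(reader);
            Console.WriteLine(roundTripped.Name);
        }
    }
}
```

The private constructor satisfies the serializer's requirement without weakening the type's public API.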
|
This is a limitation of `XmlSerializer`. Note that `BinaryFormatter` and `DataContractSerializer` *do not* require this - they can create an uninitialized object out of the ether and initialize it during deserialization.
Since you are using xml, you might consider using `DataContractSerializer` and marking your class with `[DataContract]`/`[DataMember`], but note that this changes the schema (for example, there is no equivalent of `[XmlAttribute]` - everything becomes elements).
Update: if you really want to know, `BinaryFormatter` et al use [`FormatterServices.GetUninitializedObject()`](http://msdn.microsoft.com/en-us/library/system.runtime.serialization.formatterservices.getuninitializedobject.aspx) to create the object without invoking the constructor. Probably dangerous; I don't recommend using it too often ;-p See also the remarks on MSDN:
> Because the new instance of the object
> is initialized to zero and no
> constructors are run, the object might
> not represent a state that is regarded
> as valid by that object. The current
> method should only be used for
> deserialization when the user intends
> to immediately populate all fields. It
> does not create an uninitialized
> string, since creating an empty
> instance of an immutable type serves
> no purpose.
I have my [own](http://code.google.com/p/protobuf-net/) serialization engine, but I don't intend making it use `FormatterServices`; I quite like knowing that a constructor (*any* constructor) has actually executed.
|
Why XML-Serializable class need a parameterless constructor
|
[
"c#",
".net",
"xml-serialization"
] |
We use GUIDs for primary keys, which as you know are clustered by default.
When inserting a new row into a table, it is inserted at a random page in the table (because GUIDs are random). This has a measurable performance impact because the DB will split data pages all the time (fragmentation). But the main reason I want a sequential GUID is that I want new rows to be inserted as the last row in the table... which will help when debugging.
I could make a clustered index on `CreateDate`, but our DB is auto-generated and in development, we need to do something extra to facilitate this. Also, `CreateDate` is not a good candidate for a clustered index.
Back in the day, I used [Jimmy Nilsson's COMBs](http://www.informit.com/articles/article.aspx?p=25862), but I was wondering if there is something in the .NET Framework for this. In SQL 2005 Microsoft introduced `newsequentialid()` as an alternative to `newid()`, so I was hoping that they made a .NET equivalent, because we generate the ID in the code.
PS: Please don't start discussing if this is right or wrong, because GUIDs should be unique etc.
|
It should be possible to create a sequential GUID in C# or VB.NET using an API call to UuidCreateSequential. The API declaration (C#) below has been taken from [Pinvoke.net](http://www.pinvoke.net/default.aspx/rpcrt4/UuidCreateSequential.html) where you can also find a full example of how to call the function.
```
[DllImport("rpcrt4.dll", SetLastError=true)]
static extern int UuidCreateSequential(out Guid guid);
```
The MSDN article related to the UuidCreateSequential function can be [found here](http://msdn.microsoft.com/en-us/library/aa379322.aspx) which includes the prerequisites for use.
|
**Update 2018:** Also [check my other answer](https://stackoverflow.com/a/49256827/66629)
This is how NHibernate generates sequential IDs:
[NHibernate.Id.GuidCombGenerator](https://github.com/nhibernate/nhibernate-core/blob/5e71e83ac45439239b9028e6e87d1a8466aba551/src/NHibernate/Id/GuidCombGenerator.cs)
```
/// <summary>
/// Generate a new <see cref="Guid"/> using the comb algorithm.
/// </summary>
private Guid GenerateComb()
{
byte[] guidArray = Guid.NewGuid().ToByteArray();
DateTime baseDate = new DateTime(1900, 1, 1);
DateTime now = DateTime.Now;
// Get the days and milliseconds which will be used to build the byte string
TimeSpan days = new TimeSpan(now.Ticks - baseDate.Ticks);
TimeSpan msecs = now.TimeOfDay;
// Convert to a byte array
// Note that SQL Server is accurate to 1/300th of a millisecond so we divide by 3.333333
byte[] daysArray = BitConverter.GetBytes(days.Days);
byte[] msecsArray = BitConverter.GetBytes((long) (msecs.TotalMilliseconds / 3.333333));
// Reverse the bytes to match SQL Servers ordering
Array.Reverse(daysArray);
Array.Reverse(msecsArray);
// Copy the bytes into the guid
Array.Copy(daysArray, daysArray.Length - 2, guidArray, guidArray.Length - 6, 2);
Array.Copy(msecsArray, msecsArray.Length - 4, guidArray, guidArray.Length - 4, 4);
return new Guid(guidArray);
}
```
|
Is there a .NET equivalent to SQL Server's newsequentialid()
|
[
"c#",
".net",
"sql-server"
] |
I'm trying to find a way to determine how many parameters a constructor has.
Now I've built one constructor with no parameters and 1 constructor with 4 parameters.
Is there, in C#, a way to find out how many parameters a used or given constructor has?
Thing is, I'm using a third constructor to read log files. These log files are read as string[] elements and there should be just as many as there are arguments. If not, I have a corrupt log file.
But I'm using a lot of subclasses and each constructor has more parameters for their specific log-type.
So I wanted to know: is there a method to check the number of parameters on a constructor?
And yes, this is a school assignment. I don't know what terms to look for really, so the VS2008 object browser is currently not of much use.
|
You should look at the System.Reflection namespace. More specifically, you can get a list of the constructors of a class with:
```
System.Type.GetType("MYClassName").GetConstructors()
```
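On one of those `ConstructorInfo` objects, `GetParameters().Length` then gives the count you're after. For a runnable illustration of the same reflective check, here it is in Python (the class and log line are invented):

```python
import inspect

class LogRecord:
    # Invented stand-in for one of the log-entry subclasses.
    def __init__(self, timestamp, level, source, message):
        self.fields = (timestamp, level, source, message)

# Reflect on the constructor and count its parameters (minus 'self').
expected = len(inspect.signature(LogRecord.__init__).parameters) - 1

fields = "2008-10-14;INFO;parser;started".split(";")
assert len(fields) == expected  # a mismatch would mean a corrupt log line
```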
|
It sounds as if you need to rethink your code a bit. From your description, having to dynamically determine the number of arguments in a constructor sounds a bit hairy. You might consider a factory design pattern, since the type of object created is determined at runtime. If I've misunderstood your problem, then using reflection as pointed out in the other answers will do the trick for you.
|
How to find amount of parameters in a constructor
|
[
"",
"c#",
"visual-studio-2008",
"constructor",
""
] |
I am trying to write a stored procedure which selects columns from a table and adds 2 extra columns to the ResultSet. These 2 extra columns are the result of conversions on a field in the table which is a Datetime field.
The datetime field has the following format: 'YYYY-MM-DD HH:MM:SS.S'
The 2 additional fields should be in the following formats:
1. DDMMM
2. HHMMT, where T is 'A' for a.m. and 'P' for p.m.
Example: If the data in the field was '2008-10-12 13:19:12.0' then the extracted fields should contain:
1. 12OCT
2. 0119P
I have tried using CONVERT string formats, but none of them match the output I want. I am thinking along the lines of extracting the field data via CONVERT and then using REPLACE, but I could surely use some help here, as I am not sure.
Could anyone well versed in stored procedures help me out here?
Thanks!
|
If dt is your datetime column, then
For 1:
```
SUBSTRING(CONVERT(varchar, dt, 13), 1, 2)
+ UPPER(SUBSTRING(CONVERT(varchar, dt, 13), 4, 3))
```
For 2:
```
SUBSTRING(CONVERT(varchar, dt, 100), 13, 2)
+ SUBSTRING(CONVERT(varchar, dt, 100), 16, 3)
```
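For checking the SQL output, the two target formats are easy to reproduce outside the database; a quick Python equivalent (`%b` depends on the locale's month abbreviations):

```python
from datetime import datetime

dt = datetime(2008, 10, 12, 13, 19, 12)

# 1. DDMMM, e.g. '12OCT'
ddmmm = dt.strftime("%d") + dt.strftime("%b").upper()

# 2. HHMMT with T = 'A' for a.m. and 'P' for p.m., e.g. '0119P'
hhmmt = dt.strftime("%I%M") + ("A" if dt.hour < 12 else "P")

print(ddmmm, hhmmt)
```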
|
Use DATENAME and wrap the logic in a Function, not a Stored Proc
```
declare @myTime as DateTime
set @myTime = GETDATE()
select @myTime
select DATENAME(day, @myTime) + SUBSTRING(UPPER(DATENAME(month, @myTime)), 0,4)
```
Returns "14OCT"
Try not to use any Character / String based operations if possible when working with dates. They are numerical (a float) and performance will suffer from those data type conversions.
Dig these handy conversions I have compiled over the years...
```
/* Common date functions */
--//This contains common date functions for MSSQL server
/*Getting Parts of a DateTime*/
--//gets the date only, 20x faster than using Convert/Cast to varchar
--//this has been especially useful for JOINS
SELECT (CAST(FLOOR(CAST(GETDATE() as FLOAT)) AS DateTime))
--//gets the time only (the date portion is '1900-01-01', which is considered the "0 time" of dates in MSSQL, even though the datatype's min value is 01/01/1753)
SELECT (GETDATE() - (CAST(FLOOR(CAST(GETDATE() as FLOAT)) AS DateTime)))
/*Relative Dates*/
--//These are all functions that will calculate a date relative to the current date and time
/*Current Day*/
--//now
SELECT (GETDATE())
--//midnight of today
SELECT (DATEADD(ms,-4,(DATEADD(dd,DATEDIFF(dd,0,GETDATE()) + 1,0))))
--//Current Hour
SELECT DATEADD(hh,DATEPART(hh,GETDATE()),CAST(FLOOR(CAST(GETDATE() AS FLOAT)) as DateTime))
--//Current Half-Hour - if its 9:36, this will show 9:30
SELECT DATEADD(mi,((DATEDIFF(mi,(CAST(FLOOR(CAST(GETDATE() as FLOAT)) as DateTime)), GETDATE())) / 30) * 30,(CAST(FLOOR(CAST(GETDATE() as FLOAT)) as DateTime)))
/*Yearly*/
--//first datetime of the current year
SELECT (DATEADD(yy,DATEDIFF(yy,0,GETDATE()),0))
--//last datetime of the current year
SELECT (DATEADD(ms,-4,(DATEADD(yy,DATEDIFF(yy,0,GETDATE()) + 1,0))))
/*Monthly*/
--//first datetime of current month
SELECT (DATEADD(mm,DATEDIFF(mm,0,GETDATE()),0))
--//last datetime of the current month
SELECT (DATEADD(ms,-4,DATEADD(mm,1,DATEADD(mm,DATEDIFF(mm,0,GETDATE()),0))))
--//first datetime of the previous month
SELECT (DATEADD(mm,DATEDIFF(mm,0,GETDATE()) -1,0))
--//last datetime of the previous month
SELECT (DATEADD(ms, -4,DATEADD(mm,DATEDIFF(mm,0,GETDATE()),0)))
/*Weekly*/
--//previous monday at 12AM
SELECT (DATEADD(wk,DATEDIFF(wk,0,GETDATE()) -1 ,0))
--//previous friday at 11:59:59 PM
SELECT (DATEADD(ms,-4,DATEADD(dd,5,DATEADD(wk,DATEDIFF(wk,0,GETDATE()) -1 ,0))))
/*Quarterly*/
--//first datetime of current quarter
SELECT (DATEADD(qq,DATEDIFF(qq,0,GETDATE()),0))
--//last datetime of current quarter
SELECT (DATEADD(ms,-4,DATEADD(qq,DATEDIFF(qq,0,GETDATE()) + 1,0)))
```
|
Custom Date/Time formatting in SQL Server
|
[
"",
"sql",
"sql-server",
"datetime",
"stored-procedures",
"sql-convert",
""
] |
I'm looking for a way to sequentially number rows in a *result set* (not a table). In essence, I'm starting with a query like the following:
```
SELECT id, name FROM people WHERE name = 'Spiewak'
```
The `id`s are obviously not a true sequence (e.g. `1, 2, 3, 4`). What I need is another column in the result set which contains these auto-numberings. I'm willing to use a SQL function if I have to, but I would rather do it without using extensions on the ANSI spec.
Platform is MySQL, but the technique should be cross-platform if at all possible (hence the desire to avoid non-standard extensions).
|
To have a meaningful row number you need to order your results. Then you can do something like this:
```
SELECT id, name
, (SELECT COUNT(*) FROM people p2 WHERE name='Spiewak' AND p2.id <= p1.id) AS RowNumber
FROM people p1
WHERE name = 'Spiewak'
ORDER BY id
```
Note that the WHERE clause of the sub query needs to match the WHERE clause or the primary key from the main query *and* the ORDER BY of the main query.
SQL Server has the ROW\_NUMBER() OVER construct to simplify this, but I don't know if MySQL has anything special to address it.
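The correlated subquery is portable enough to try against any engine; here it is exercised with SQLite through Python (the table contents are invented):

```python
import sqlite3

# Invented sample data to exercise the correlated-subquery numbering.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO people VALUES (?, ?)",
                 [(7, "Spiewak"), (3, "Spiewak"), (9, "Spiewak"), (5, "Other")])

rows = conn.execute("""
    SELECT id, name,
           (SELECT COUNT(*) FROM people p2
             WHERE p2.name = 'Spiewak' AND p2.id <= p1.id) AS RowNumber
      FROM people p1
     WHERE name = 'Spiewak'
     ORDER BY id
""").fetchall()
print(rows)  # [(3, 'Spiewak', 1), (7, 'Spiewak', 2), (9, 'Spiewak', 3)]
```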
---
Since my post here was accepted as the answer, I want to also call out Dan Goldstein's response, which is very similar in approach but uses a JOIN instead of a subquery and will often perform better.
|
AFAIK, there's no "standard" way.
MS SQL Server has row\_number(), which MySQL does not.
The simplest way to do this in MySQL is
`SELECT a.*, @num := @num + 1 b from test a, (SELECT @num := 0) d;`
Source: comments in <http://www.xaprb.com/blog/2006/12/02/how-to-number-rows-in-mysql/>
|
Pure-SQL Technique for Auto-Numbering Rows in Result Set
|
[
"",
"sql",
"mysql",
""
] |
this is my first question here so I hope I can articulate it well and hopefully it won't be too mind-numbingly easy.
I have the following class *SubSim* which extends *Sim*, which is extending *MainSim*. In a completely separate class (and library as well) I need to check if an object being passed through is a type of *MainSim*. So the following is done to check;
```
Type t = sim.GetType();
//in this case, sim = SubSim
if (t != null)
{
return t.BaseType == typeof(MainSim);
}
```
Obviously *t.BaseType* is going to return *Sim* since *Type.BaseType* gets the type from which the current Type directly inherits.
Short of having to do *t.BaseType.BaseType* to get *MainSim*, is there any other way to get the proper type using .NET libraries? Or are there overrides that can be redefined to return the main class?
Thank you in advance
|
There are 4 related standard ways:
```
sim is MainSim;
(sim as MainSim) != null;
sim.GetType().IsSubclassOf(typeof(MainSim));
typeof(MainSim).IsAssignableFrom(sim.GetType());
```
You can also create a recursive method:
```
bool IsMainSimType(Type t)
{ if (t == typeof(MainSim)) return true;
if (t == typeof(object) ) return false;
return IsMainSimType(t.BaseType);
}
```
|
```
if (sim is MainSim)
```
is all you need. "is" looks up the inheritance tree.
|
BaseType of a Basetype
|
[
"",
"c#",
""
] |
I have created a Python module that creates and populates several SQLite tables. Now, I want to use it in a program but I don't really know how to call it properly. All the tutorials I've found are essentially "inline", i.e. they walk through using SQLite in a linear fashion rather than how to actually use it in production.
What I'm trying to do is have a method check to see if the database is already created. If so, then I can use it. If not, an exception is raised and the program will create the database. (Or use if/else statements, whichever is better).
I created a test script to see if my logic is correct, but it's not working. When I run the try statement, it just creates a new database rather than checking whether one already exists. The next time I run the script, I get an error that the table already exists, even though I tried catching the exception. (I haven't used try/except before, but figured this is a good time to learn.)
Are there any good tutorials for using SQLite operationally or any suggestions on how to code this? I've looked through the pysqlite tutorial and others I found but they don't address this.
|
AFAIK an SQLite database is just a file.
To check if the database exists, check for file existence.
When you open an SQLite database it will automatically create one if the file that backs it is not in place.
If you try to open a file as a sqlite3 database that is NOT a database, you will get this:
"sqlite3.DatabaseError: file is encrypted or is not a database"
so check that the file exists, and also make sure to try and catch the exception in case the file is not a sqlite3 database.
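A minimal sketch of that check in Python (the schema is invented): test for the file before `sqlite3.connect()` gets a chance to create it:

```python
import os
import sqlite3

def open_db(path):
    # Check for the file before connecting: sqlite3.connect() silently
    # creates a missing file, so "does the DB exist yet?" is really
    # "does the file exist yet?".
    exists = os.path.exists(path)
    conn = sqlite3.connect(path)
    if not exists:
        conn.execute("CREATE TABLE log (id INTEGER PRIMARY KEY, msg TEXT)")
        conn.commit()
    return conn
```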
|
Don't make this more complex than it needs to be. The big, independent databases have complex setup and configuration requirements. SQLite is just a file you access with SQL, it's much simpler.
Do the following.
1. Add a table to your database for "Components" or "Versions" or "Configuration" or "Release" or something administrative like that.
```
CREATE TABLE REVISION (
    RELEASE_NUMBER CHAR(20)
);
```
2. In your application, connect to your database normally.
3. Execute a simple query against the revision table. Here's what can happen.
* The query fails to execute: your database doesn't exist, so execute a series of CREATE statements to build it.
* The query succeeds but returns no rows or the release number is lower than expected: your database exists, but is out of date. You need to migrate from that release to the current release. Hopefully, you have a sequence of DROP, CREATE and ALTER statements to do this.
* The query succeeds, and the release number is the expected value. Do nothing more, your database is configured correctly.
|
Using SQLite in a Python program
|
[
"",
"python",
"exception",
"sqlite",
""
] |
Simply moving the file to `~/.Trash/` will not work: if the file is on an external drive, that would move the file to the main system drive.
Also, there are other conditions, like files on external drives get moved to `/Volumes/.Trash/501/` (or whatever the current user's ID is)
Given a file or folder path, what is the correct way to determine the trash folder? I imagine the language is pretty irrelevant, but I intend to use Python
|
Alternatively, if you're on OS X 10.5, you could use Scripting Bridge to delete files via the Finder. I've done this in Ruby code [here](http://osx-trash.rubyforge.org/git?p=osx-trash.git;a=blob;f=bin/trash;h=26911131eacafd659b4d760bda1bd4c99dc2f918;hb=HEAD) via RubyCocoa. The gist of it is:
```
url = NSURL.fileURLWithPath(path)
finder = SBApplication.applicationWithBundleIdentifier("com.apple.Finder")
item = finder.items.objectAtLocation(url)
item.delete
```
You could easily do something similar with PyObjC.
|
Based upon code from <http://www.cocoadev.com/index.pl?MoveToTrash> I have came up with the following:
```
def get_trash_path(input_file):
path, file = os.path.split(input_file)
if path.startswith("/Volumes/"):
# /Volumes/driveName/.Trashes/<uid>
s = path.split(os.path.sep)
# s[2] is drive name ([0] is empty, [1] is Volumes)
trash_path = os.path.join("/Volumes", s[2], ".Trashes", str(os.getuid()))
if not os.path.isdir(trash_path):
raise IOError("Volume appears to be a network drive (%s could not be found)" % (trash_path))
else:
trash_path = os.path.join(os.getenv("HOME"), ".Trash")
return trash_path
```
Fairly basic, and there are a few things that have to be done separately, particularly checking whether the filename already exists in the trash (to avoid overwriting) and the actual move to the trash, but it seems to cover most things (internal, external and network drives).
**Update:** I wanted to trash a file in a Python script, so I re-implemented Dave Dribin's solution in Python:
```
from AppKit import NSURL
from ScriptingBridge import SBApplication
def trashPath(path):
"""Trashes a path using the Finder, via OS X's Scripting Bridge.
"""
targetfile = NSURL.fileURLWithPath_(path)
finder = SBApplication.applicationWithBundleIdentifier_("com.apple.Finder")
items = finder.items().objectAtLocation_(targetfile)
items.delete()
```
Usage is simple:
```
trashPath("/tmp/examplefile")
```
|
OS X: Determine Trash location for a given path
|
[
"",
"python",
"macos",
"filesystems",
""
] |
I have a DTS package that drops a table, then creates it and populates it, but sometimes something happens and the package fails after the drop table. If it's rerun, it fails because the table hasn't been created yet.
Is there something like "IF EXISTS" for SQL Server 2000, like in MySQL?
thanks.
|
Or quicker:
```
IF OBJECT_ID('temp_ARCHIVE_RECORD_COUNTS') IS NOT NULL
DROP TABLE temp_ARCHIVE_RECORD_COUNTS
```
* [OBJECT\_ID - MSDN Reference - SQL Server 2000](http://msdn.microsoft.com/en-us/library/aa276843%28v=sql.80%29.aspx)
* [OBJECT\_ID - MSDN Reference - SQL Server 2008](http://msdn.microsoft.com/en-us/library/ms190328(v=SQL.100).aspx "MSDN SQL Server 2000")
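Either guard makes the drop idempotent, which is what a rerunnable package needs. Later engines (SQL Server 2016+, MySQL, SQLite) support `DROP TABLE IF EXISTS` directly; sketched here against SQLite via Python, since SQL Server 2000 itself lacks the syntax:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE temp_counts (n INTEGER)")

# Rerunnable: the drop succeeds whether or not the table exists.
conn.execute("DROP TABLE IF EXISTS temp_counts")
conn.execute("DROP TABLE IF EXISTS temp_counts")  # no error on the second run
conn.execute("CREATE TABLE temp_counts (n INTEGER)")
```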
|
```
IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[TableName]') AND type in (N'U'))
DROP TABLE TableName;
GO
```
You can check a [list of type definitions in the sys.objects table here](http://msdn.microsoft.com/en-us/library/ms177596.aspx) if you want to check if other objects in your database exist.
|
How can I drop a table if it exists in SQL Server 2000?
|
[
"",
"sql",
"sql-server",
""
] |
I am developing a C# program, and I have one function that consumes too much CPU. I would like to know a way to control this in code (not with any external application) and restrict the percentage of CPU usage.
For example, if it uses 90% of the CPU, make my app consume only 20%, even if it becomes slower. It must be done automatically and from within the app.
If you provide a class, it would be fantastic.
|
I don't know if you can do that, but you can change the thread priority of the executing thread via the [Priority](http://msdn.microsoft.com/en-us/library/system.threading.thread.priority.aspx) property. You would set that by:
```
Thread.CurrentThread.Priority = ThreadPriority.Lowest;
```
Also, I don't think you really want to cap it. If the machine is otherwise idle, you'd like it to get busy on with the task, right? ThreadPriority helps communicate this to the scheduler.
|
I guess you need to query some kind of OS API to find out how much of the CPU you are consuming, and make throttling decisions (like Thread.Sleep) based on that.
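One common way to implement that decision is a duty cycle: do a small chunk of work per fixed time slice, then sleep for the rest of the slice. A language-neutral sketch in Python (the fraction and slice length are illustrative, and the real CPU share also depends on the OS scheduler):

```python
import time

def run_throttled(work_chunk, target_fraction=0.2, slice_s=0.05):
    # Per slice_s-second slice, work for at most target_fraction of it,
    # then sleep the remainder so other processes get the CPU.
    # work_chunk() returns True when the whole job is finished.
    while True:
        start = time.monotonic()
        deadline = start + slice_s * target_fraction
        while True:
            if work_chunk():
                return
            if time.monotonic() >= deadline:
                break
        # Sleep away the rest of the slice.
        time.sleep(max(0.0, start + slice_s - time.monotonic()))
```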
|
How to restrict the CPU usage a C# program takes?
|
[
"",
"c#",
"cpu",
""
] |
Let's have the following class hierarchy:
```
public class ParentClass implements SomeInterface {
}
public class ChildClass extends ParentClass {
}
```
Then let's have these two instances:
```
ParentClass parent;
ChildClass child;
```
Then we have the following TRUE statements
```
(parent instanceof SomeInterface) == true
(child instanceof SomeInterface) == true
```
Is it possible to unimplement the SomeInterface in the ChildClass, so when we check with the instanceof operator it returns false?
If not possible, is there a workaround?
|
No, it is not possible, and your intent to do so is a good hint that something is flawed in your class hierarchy.
Workaround: change the class hierarchy, eg. like this:
```
interface SomeInterface {}
abstract class AbstractParentClass {}
class ParentClass extends AbstractParentClass implements SomeInterface {}
class ChildClass extends AbstractParentClass {}
```
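The same restructuring expressed in Python, where `isinstance` plays the role of `instanceof`, shows the desired result:

```python
class SomeInterface: pass
class AbstractParentClass: pass
class ParentClass(AbstractParentClass, SomeInterface): pass
class ChildClass(AbstractParentClass): pass

# ParentClass still satisfies the interface; ChildClass no longer does.
assert isinstance(ParentClass(), SomeInterface)
assert not isinstance(ChildClass(), SomeInterface)
```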
|
Maybe composition instead of inheritance is what you want, i.e. have a "base class" object as a member, implement only the interfaces you need, and forward any required methods to the member.
|
Is it possible to unimplement an interface in derived class in Java?
|
[
"",
"java",
"oop",
""
] |
I have a table named Info of this schema:
```
int objectId;
int time;
int x, y;
```
There is a lot of redundant data in the system - that is, `objectId` is not UNIQUE. For each `objectId` there can be multiple entries of `time, x, y`.
I want to retrieve a list of the latest position of each object. I started out with this query:
```
SELECT * FROM Info GROUP BY objectId
```
That got me just the kind of list I was looking for. However I want also to get just the latest times for each Object, so I tried:
```
SELECT * FROM Info GROUP BY objectId ORDER BY time DESC
```
This gave me a list of Infos in descending `time` order. However, **I don't think it did what I want - that is return me the latest `time, x, y` for each object**.
Can anyone imagine a query to do what I want?
**Update** I have tried the top three solutions to see how they perform against each other on a dataset of about 50,000 Infos. Here are the results:
```
-- NO INDEX: forever
-- INDEX: 7.67 s
SELECT a.*
FROM Info AS a
LEFT OUTER JOIN Info AS b ON (a.objectId = b.objectId AND a.time < b.time)
WHERE b.objectId IS NULL;
-- NO INDEX: 8.05 s
-- INDEX: 0.17 s
select a.objectId, a.time, a.x, a.y
from Info a,
(select objectId, max(time) time from Info group by objectId) b
where a.objectId = b.objectId and a.time = b.time;
-- NO INDEX: 8.30 s
-- INDEX: 0.18 s
SELECT A.time, A.objectId, B.x, B.y
FROM
(
SELECT max(time) as time, objectId
FROM Info
GROUP by objectId
) as A
INNER JOIN Info B
ON A.objectId = b.objectId AND A.time = b.time;
```
By a narrow margin, it would seem the `where` version outperforms the `inner join`.
|
One way is using a subquery.
```
select distinct a.objectID, a.time, a.x, a.y
from Info a,
(select objectID, max(time) time from Info group by objectID) b
where a.objectID = b.objectID and a.time = b.time
```
EDIT: Added DISTINCT to prevent duplicate rows if one objectId has multiple records with the same time. Depends on your data if this is necessary, the question author mentioned there were many duplicate rows. (*added by [Tomalak](https://stackoverflow.com/users/18771/tomalak)*)
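The subquery join is easy to verify end to end; a runnable sketch with SQLite from Python (the sample rows are invented):

```python
import sqlite3

# Illustrative data: two objects, several positions each.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Info (objectId INT, time INT, x INT, y INT)")
conn.executemany("INSERT INTO Info VALUES (?, ?, ?, ?)", [
    (1, 10, 0, 0), (1, 20, 5, 5),
    (2, 10, 1, 1), (2, 30, 9, 9),
])

latest = conn.execute("""
    SELECT DISTINCT a.objectId, a.time, a.x, a.y
      FROM Info a,
           (SELECT objectId, MAX(time) AS time FROM Info GROUP BY objectId) b
     WHERE a.objectId = b.objectId AND a.time = b.time
     ORDER BY a.objectId
""").fetchall()
print(latest)  # [(1, 20, 5, 5), (2, 30, 9, 9)]
```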
|
```
SELECT A.time, A.objectID, B.X, B.Y
FROM
(
SELECT max(time) as time, objectID
FROM table
GROUP by objectID
) as A
INNER JOIN table B
ON A.objectID = b.objectID AND A.Time = b.Time
```
votenaders' solution won't work if x & y decrement at any point in the timeline.
|
Help write a query: Confusion over order of operations of GROUP BY and ORDER BY
|
[
"",
"mysql",
"sql",
"database",
""
] |
While I was investigating a problem I had with lexical closures in Javascript code, I came along this problem in Python:
```
flist = []
for i in xrange(3):
def func(x): return x * i
flist.append(func)
for f in flist:
print f(2)
```
Note that this example mindfully avoids `lambda`. It prints "4 4 4", which is surprising. I'd expect "0 2 4".
This equivalent Perl code does it right:
```
my @flist = ();
foreach my $i (0 .. 2)
{
push(@flist, sub {$i * $_[0]});
}
foreach my $f (@flist)
{
print $f->(2), "\n";
}
```
"0 2 4" is printed.
Can you please explain the difference ?
---
Update:
The problem **is not** with `i` being global. This displays the same behavior:
```
flist = []
def outer():
for i in xrange(3):
def inner(x): return x * i
flist.append(inner)
outer()
#~ print i # commented because it causes an error
for f in flist:
print f(2)
```
As the commented line shows, `i` is unknown at that point. Still, it prints "4 4 4".
|
Python is actually behaving as defined. **Three separate functions** are created, but they each have the **closure of the environment they're defined in** - in this case, the global environment (or the outer function's environment if the loop is placed inside another function). This is exactly the problem, though - in this environment, **i is modified**, and the closures all **refer to the same i**.
Here is the best solution I can come up with - create a function creator and invoke *that* instead. This will force **different environments** for each of the functions created, with a **different i** in each one.
```
flist = []
for i in xrange(3):
def funcC(j):
def func(x): return x * j
return func
flist.append(funcC(i))
for f in flist:
print f(2)
```
This is what happens when you mix side effects and functional programming.
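A related idiom binds the loop value explicitly with `functools.partial` instead of a nested factory; `scale` below is just an illustrative helper:

```python
from functools import partial

def scale(i, x):
    # i is bound per-iteration by partial(), so each function keeps its own value
    return x * i

flist = [partial(scale, i) for i in range(3)]
print([f(2) for f in flist])  # [0, 2, 4]
```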
|
The functions defined in the loop keep accessing the same variable `i` while its value changes. At the end of the loop, all the functions point to the same variable, which is holding the last value in the loop: the effect is what reported in the example.
In order to evaluate `i` and use its value, a common pattern is to set it as a parameter default: parameter defaults are evaluated when the `def` statement is executed, and thus the value of the loop variable is frozen.
The following works as expected:
```
flist = []
for i in xrange(3):
def func(x, i=i): # the *value* of i is copied in func() environment
return x * i
flist.append(func)
for f in flist:
print f(2)
```
|
How do lexical closures work?
|
[
"",
"python",
"closures",
"lazy-evaluation",
"late-binding",
"python-closures",
""
] |
Can anyone provide a clear explanation / example of what these functions do, and when it's appropriate to use them?
|
Straight from [the manual](http://www.postgresql.org/docs/8.3/static/ddl-constraints.html#DDL-CONSTRAINTS-FK)...
> We know that the foreign keys disallow creation of orders that do not relate to any products. But what if a product is removed after an order is created that references it? SQL allows you to handle that as well. Intuitively, we have a few options:
>
> Disallow deleting a referenced product
>
> Delete the orders as well
>
> Something else?
```
CREATE TABLE order_items (
product_no integer REFERENCES products ON DELETE RESTRICT,
order_id integer REFERENCES orders ON DELETE CASCADE,
quantity integer,
PRIMARY KEY (product_no, order_id)
);
```
> Restricting and cascading deletes are the two most common options. RESTRICT prevents deletion of a referenced row. NO ACTION means that if any referencing rows still exist when the constraint is checked, an error is raised; this is the default behavior if you do not specify anything. (The essential difference between these two choices is that NO ACTION allows the check to be deferred until later in the transaction, whereas RESTRICT does not.) CASCADE specifies that when a referenced row is deleted, row(s) referencing it should be automatically deleted as well. There are two other options: SET NULL and SET DEFAULT. These cause the referencing columns to be set to nulls or default values, respectively, when the referenced row is deleted. Note that these do not excuse you from observing any constraints. For example, if an action specifies SET DEFAULT but the default value would not satisfy the foreign key, the operation will fail.
>
> Analogous to ON DELETE there is also ON UPDATE which is invoked when a referenced column is changed (updated). The possible actions are the same.
**edit:** You might want to take a look at this related question: [When/Why to use Cascading in SQL Server?](https://stackoverflow.com/questions/59297/whenwhy-to-use-cascading-in-sql-server). The concepts behind the question/answers are the same.
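The CASCADE behaviour from the quoted example can be observed with any engine that enforces foreign keys; a sketch using SQLite through Python (note that SQLite enforces FKs only after `PRAGMA foreign_keys = ON`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked
conn.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY)")
conn.execute("""
    CREATE TABLE order_items (
        order_id INTEGER REFERENCES orders ON DELETE CASCADE,
        quantity INTEGER
    )
""")
conn.execute("INSERT INTO orders VALUES (1)")
conn.execute("INSERT INTO order_items VALUES (1, 3)")

# Deleting the referenced order automatically deletes its items.
conn.execute("DELETE FROM orders WHERE order_id = 1")
count = conn.execute("SELECT COUNT(*) FROM order_items").fetchone()[0]
```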
|
I have a PostgreSQL database, and I use ON DELETE when I delete a user from the database and need to delete that user's information from other tables. This way I only need to issue one DELETE, and the FKs with ON DELETE remove the related rows from the other tables.
You can do the same with ON UPDATE: if a referenced column has a FK with ON UPDATE and its value changes, the change is propagated to the referencing table.
|
How do the Postgres foreign key 'on update' and 'on delete' options work?
|
[
"",
"sql",
"database-design",
"postgresql",
"foreign-keys",
"cascade",
""
] |
With the following file reading code:
```
using (FileStream fileStream = new FileStream(fileName, FileMode.Open, FileAccess.Read, FileShare.None))
{
using (TextReader tr = new StreamReader(fileStream))
{
string fileContents = tr.ReadToEnd();
}
}
```
And the following file write code:
```
using (TextWriter tw = new StreamWriter(fileName))
{
tw.Write(fileContents);
tw.Close();
}
```
The following exception details are seen:
> The process cannot access the file
> 'c:\temp\myfile.txt' because it is
> being used by another process.
What is the best way of avoiding this? Does the reader need to retry upon receipt of the exception or is there some better way?
Note that the reader process is using a *FileSystemWatcher* to know when the file has changed.
Also note that, in this instance, I'm **not** looking for alternatives ways of sharing strings between the 2 processes.
|
You can open a file for writing and only lock write access, thereby allowing others to still read the file.
For example,
```
using (FileStream stream = new FileStream(@"C:\Myfile.txt", FileMode.Open, FileAccess.ReadWrite, FileShare.Read))
{
// Do your writing here.
}
```
Other file access just opens the file for reading and not writing, and allows readwrite sharing.
```
using (FileStream stream = new FileStream(@"C:\Myfile.txt", FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
{
// Does reading here.
}
```
If you want to ensure that readers will always read an up-to-date file, you will either need to use a locking file that indicates someone is writing to the file (though you may get a race condition if not carefully implemented) or make sure you block write-sharing when opening to read and handle the exception so you can try again until you get exclusive access.
|
If you create a named Mutex you can define the mutex in the writing application, and have the reading application wait until the mutex is released.
So in the notification process that is currently working with the FileSystemWatcher, simply check whether you need to wait for the mutex; if you do, wait, then process.
Here is a [VB example of a Mutex](http://www.developerfusion.com/article/5184/multithreading-in-vbnet/4/) like this that I found, it should be easy enough to convert to C#.
|
In C#, if 2 processes are reading and writing to the same file, what is the best way to avoid process locking exceptions?
|
[
"",
"c#",
"file-io",
"distributed",
""
] |
I take care of a critical app in my project. It does stuff related to parsing business msgs (a legacy standard), processing them and then storing some results in a DB (another app picks those up). After more than a year of my work (I have other apps to look after as well) the app is finally stable. I've introduced a strict TDD policy and I have 20% unit test coverage (thank you Michael Feathers for your book!), most of it in critical parts. I have some white-box FitNesse tests as well (whole business scenarios are covered there). I feel that I cannot refactor this app any further and that I'm safe to play hard with it. It's designed so badly that I want to rewrite it. The app itself is around 20k lines of challenging legacy C/C++ code. There were other dependencies, but I managed to decouple most of them.
---
All I have is Sun C++ compiler, cppunitlite, STLPort and Boost. Please do not suggest other technologies (no XML, Java etc) as this is not the option in my organization.
I'd like to do it with modern C++ (perhaps play with metaprogramming...), TDD from start to the end.
There are about 30 types of msgs I need to parse. Each of them is composed of 3-10 lines, most of them are pretty similar.
This is root of all evil -> lots of code duplication.
Each msgs has a class describing how it should be parsed. Take a look at the main inheritance tree:
```
MSG_A MSG_B
/ \ / \
MSG_A_NEW MSG_A_CNL MSG_B_NEW MSG_B_CNL
```
Both trees go much deeper. There are very small differences between MSG\_A\_NEW and MSG\_B\_NEW; they should be handled by a single class that can be injected with some small customization.
My initial plan is to have one generic msg class that will be customized. Some entity (a builder...?) will look at the msg and initialize a proper object that will be able to parse it. Another entity will be able to discover which line is which, and this info will be used by the builder. I'm planning to write several parsers, each responsible for parsing just one specific line. This will allow me to reuse them in parsing different msgs.
There are several challenges that I struggle to solve in an elegant and extensible way.
Each type of msg:
- has a min and max number of lines
- has some must-have lines
- has some optional lines
- certain lines must be at certain places (i.e. a date cannot come before the msg type); order matters

I need to be able to validate the format of the msgs.
---
I'm not sure if I explained the design challenge well enough here. My design experience is very limited. I've been bug-fixing for a while now, and finally I will have a chance to do some fun coding :)
What high-level advice do you have for that? Which design patterns can you identify in this description? Main design constraint is maintainability and extensibility with performance at the bottom (we have other bottlenecks anyway...).
|
I would advise you **not** to inherit your specific message handling classes from base classes that contain the common code like this:
```
CommonHandler
^ ^
| | = inheritance
MsgAHandler
^ ^
| |
ANewHandler ACnlHandler
```
This approach suffers from bad reusability: for example if you want to handle some kind of message that needs to do things from A\_NEW and A\_CNL, you would end up with multiple inheritance rather quickly.
Instead I would choose a class containing the common code, that makes calls to an interface to customize that common code. Something like this:
```
BasicHandler <>--- IMsgHandler ------------\
 1 1 ^ ^ ^ ^ * | ^
 | | | | | | = inheritance
 MsgAHandler | | ANewHandler 1 |
 ACnlHandler HandlerContainer <>-/ <>- = containment
```
The HandlerContainer class can be used to group the behaviour of other handlers together.
This pattern is called 'Composite', if I'm not mistaken. And to create the correct instances of the handlers, you will of course need some kind of factory.
Good luck!
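To make the composition concrete before committing to C++, here is a minimal Python sketch of the plan from the question: one reusable parser per line kind, with message types assembled from them by containment. All names are invented, and optional lines and min/max bounds are left out for brevity:

```python
class LineParser:
    """Parses one specific kind of line; reused across message types."""
    def __init__(self, name, prefix):
        self.name, self.prefix = name, prefix
    def matches(self, line):
        return line.startswith(self.prefix)
    def parse(self, line):
        return line[len(self.prefix):].strip()

class MessageSpec:
    """One generic message class customized by composition: an ordered
    list of line parsers instead of a subclass per message type."""
    def __init__(self, parsers):
        self.parsers = parsers
    def parse(self, lines):
        if len(lines) != len(self.parsers):
            raise ValueError("unexpected number of lines")
        out = {}
        for parser, line in zip(self.parsers, lines):
            if not parser.matches(line):
                raise ValueError("line out of order: %r" % line)
            out[parser.name] = parser.parse(line)
        return out

# The same line parsers are composed into different message types.
msg_type = LineParser("type", "TYPE:")
msg_date = LineParser("date", "DATE:")
msg_a_new = MessageSpec([msg_type, msg_date])
```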
|
I would suggest looking at the libraries provided by Boost, for example `Tuple` or `mpl::vector`. These libraries allow you to create a list of unrelated types and then operate over them. The very rough idea is that you have a sequence of types for each message kind:
```
Seq1 -> MSG_A_NEW, MSG_A_CNL
Seq2 -> MSG_B_NEW, MSG_B_CNL
```
Once you know your message kind, you use the appropriate tuple with a function template that applies the first tuple type to the data, then the next entry in the tuple, and so on.
This does assume that the layout of your data streams is known at compile time, but it has the advantage that you are not paying any runtime overhead for the data structures.
|
designing business msgs parser / rewriting from scratch
|
[
"",
"c++",
"design-patterns",
"oop",
"templates",
""
] |
## Note
The question below was asked in 2008 about some code from 2003. As the OP's **update** shows, this entire post has been obsoleted by vintage 2008 algorithms and persists here only as a historical curiosity.
---
I need to do a fast case-insensitive substring search in C/C++. My requirements are as follows:
* Should behave like strstr() (i.e. return a pointer to the match point).
* Must be case-insensitive (doh).
* Must support the current locale.
* Must be available on Windows (MSVC++ 8.0) or easily portable to Windows (i.e. from an open source library).
Here is the current implementation I am using (taken from the GNU C Library):
```
/* Return the offset of one string within another.
Copyright (C) 1994,1996,1997,1998,1999,2000 Free Software Foundation, Inc.
This file is part of the GNU C Library.
The GNU C Library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
The GNU C Library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with the GNU C Library; if not, write to the Free
Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA
02111-1307 USA. */
/*
* My personal strstr() implementation that beats most other algorithms.
* Until someone tells me otherwise, I assume that this is the
* fastest implementation of strstr() in C.
* I deliberately chose not to comment it. You should have at least
* as much fun trying to understand it, as I had to write it :-).
*
* Stephen R. van den Berg, berg@pool.informatik.rwth-aachen.de */
/*
* Modified to use table lookup instead of tolower(), since tolower() isn't
* worth s*** on Windows.
*
* -- Anders Sandvig (anders@wincue.org)
*/
#if HAVE_CONFIG_H
# include <config.h>
#endif
#include <ctype.h>
#include <string.h>
typedef unsigned chartype;
char char_table[256];
void init_stristr(void)
{
int i;
char string[2];
string[1] = '\0';
for (i = 0; i < 256; i++)
{
string[0] = i;
_strlwr(string);
char_table[i] = string[0];
}
}
#define my_tolower(a) ((chartype) char_table[a])
char *
my_stristr (phaystack, pneedle)
const char *phaystack;
const char *pneedle;
{
register const unsigned char *haystack, *needle;
register chartype b, c;
haystack = (const unsigned char *) phaystack;
needle = (const unsigned char *) pneedle;
b = my_tolower (*needle);
if (b != '\0')
{
haystack--; /* possible ANSI violation */
do
{
c = *++haystack;
if (c == '\0')
goto ret0;
}
while (my_tolower (c) != (int) b);
c = my_tolower (*++needle);
if (c == '\0')
goto foundneedle;
++needle;
goto jin;
for (;;)
{
register chartype a;
register const unsigned char *rhaystack, *rneedle;
do
{
a = *++haystack;
if (a == '\0')
goto ret0;
if (my_tolower (a) == (int) b)
break;
a = *++haystack;
if (a == '\0')
goto ret0;
shloop:
;
}
while (my_tolower (a) != (int) b);
jin:
a = *++haystack;
if (a == '\0')
goto ret0;
if (my_tolower (a) != (int) c)
goto shloop;
rhaystack = haystack-- + 1;
rneedle = needle;
a = my_tolower (*rneedle);
if (my_tolower (*rhaystack) == (int) a)
do
{
if (a == '\0')
goto foundneedle;
++rhaystack;
a = my_tolower (*++needle);
if (my_tolower (*rhaystack) != (int) a)
break;
if (a == '\0')
goto foundneedle;
++rhaystack;
a = my_tolower (*++needle);
}
while (my_tolower (*rhaystack) == (int) a);
needle = rneedle; /* took the register-poor approach */
if (a == '\0')
break;
}
}
foundneedle:
return (char*) haystack;
ret0:
return 0;
}
```
Can you make this code faster, or do you know of a better implementation?
**Note:** I noticed that the GNU C Library now has [a new implementation of `strstr()`](http://sources.redhat.com/cgi-bin/cvsweb.cgi/libc/string/strstr.c?rev=1.2&content-type=text/x-cvsweb-markup&cvsroot=glibc), but I am not sure how easily it can be modified to be case-insensitive, or if it is in fact faster than the old one (in my case). I also noticed that [the old implementation is still used for wide character strings](http://sources.redhat.com/cgi-bin/cvsweb.cgi/libc/wcsmbs/wcsstr.c?rev=1.4&content-type=text/x-cvsweb-markup&cvsroot=glibc), so if anyone knows why, please share.
**Update**
Just to make things clear—in case it wasn't already—I didn't write this function, it's a part of the GNU C Library. I only modified it to be case-insensitive.
Also, thanks for the tip about `strcasestr()` and checking out other implementations from other sources (like OpenBSD, FreeBSD, etc.). It seems to be the way to go. The code above is from 2003, which is why I posted it here in hope for a better version being available, which apparently it is. :)
|
The code you posted is about half as fast as `strcasestr`.
```
$ gcc -Wall -o my_stristr my_stristr.c
steve@solaris:~/code/tmp
$ gcc -Wall -o strcasestr strcasestr.c
steve@solaris:~/code/tmp
$ ./bench ./my_stristr > my_stristr.result ; ./bench ./strcasestr > strcasestr.result;
steve@solaris:~/code/tmp
$ cat my_stristr.result
run 1... time = 6.32
run 2... time = 6.31
run 3... time = 6.31
run 4... time = 6.31
run 5... time = 6.32
run 6... time = 6.31
run 7... time = 6.31
run 8... time = 6.31
run 9... time = 6.31
run 10... time = 6.31
average user time over 10 runs = 6.3120
steve@solaris:~/code/tmp
$ cat strcasestr.result
run 1... time = 3.82
run 2... time = 3.82
run 3... time = 3.82
run 4... time = 3.82
run 5... time = 3.82
run 6... time = 3.82
run 7... time = 3.82
run 8... time = 3.82
run 9... time = 3.82
run 10... time = 3.82
average user time over 10 runs = 3.8200
steve@solaris:~/code/tmp
```
The `main` function was:
```
int main(void)
{
char * needle="hello";
char haystack[1024];
int i;
for(i=0;i<sizeof(haystack)-strlen(needle)-1;++i)
{
haystack[i]='A'+i%57;
}
memcpy(haystack+i,needle, strlen(needle)+1);
/*printf("%s\n%d\n", haystack, haystack[strlen(haystack)]);*/
init_stristr();
for (i=0;i<1000000;++i)
{
/*my_stristr(haystack, needle);*/
strcasestr(haystack,needle);
}
return 0;
}
```
It was suitably modified to test both implementations. I notice as I am typing this up I left in the `init_stristr` call, but it shouldn't change things too much. `bench` is just a simple shell script:
```
#!/bin/bash
function bc_calc()
{
echo $(echo "scale=4;$1" | bc)
}
time="/usr/bin/time -p"
prog="$1"
accum=0
runs=10
for a in $(jot $runs 1 $runs)
do
echo -n "run $a... "
t=$($time $prog 2>&1| grep user | awk '{print $2}')
echo "time = $t"
accum=$(bc_calc "$accum+$t")
done
echo -n "average user time over $runs runs = "
echo $(bc_calc "$accum/$runs")
```
|
You can use the `StrStrI` function, which finds the first occurrence of a substring within a string. The comparison is not case-sensitive.
Don't forget to include its header, `Shlwapi.h`.
Check this out: <http://msdn.microsoft.com/en-us/library/windows/desktop/bb773439(v=vs.85).aspx>
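If you ever need a portable fallback off Windows, a naive version is easy to write; this sketch assumes single-byte characters and the current locale's `tolower()`, and is far slower than the glibc algorithm shown in the question:

```c
#include <ctype.h>
#include <stddef.h>

/* Naive portable case-insensitive substring search.
 * Returns a pointer to the match point, like strstr(), or NULL. */
char *simple_stristr(const char *haystack, const char *needle)
{
    if (*needle == '\0')
        return (char *) haystack;
    for (; *haystack != '\0'; ++haystack) {
        const char *h = haystack;
        const char *n = needle;
        /* Advance while both strings keep matching case-insensitively. */
        while (*h != '\0' && *n != '\0' &&
               tolower((unsigned char) *h) == tolower((unsigned char) *n)) {
            ++h;
            ++n;
        }
        if (*n == '\0')          /* whole needle matched */
            return (char *) haystack;
    }
    return NULL;
}
```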
|
Fastest way to do a case-insensitive substring search in C/C++?
|
[
"",
"c++",
"c",
"optimization",
"string",
"glibc",
""
] |
I have a problem applying the `DebuggerDisplay` attribute on a generic class:
```
[DebuggerDisplay("--foo--")]
class Foo
{
}
[DebuggerDisplay("Bar: {t}")]
class Bar<T>
{
public T t;
}
```
When inspecting an object of type `Bar<Foo>` I would expect it to show as `Bar: --foo--`, but I get `Bar: {Foo}`
What am I doing wrong?
|
The `DebuggerDisplay` attribute is not recursive. The `{}` inside the string essentially says: evaluate this expression and display the result inline. The string for the inner result is calculated as if there were no `DebuggerDisplay` attribute in play for the type or member. That is why you see `{Foo}` instead of `--foo--`.
The reason for this is reliability. It is far too easy to have mutually recursive DebuggerDisplay attribute tags. This would cause a stack overflow or infinite loop to occur when evaluating an inner expression. Not recursively evaluating the DebuggerDisplay attribute prevents this infinite recursion (although it's still quite possible for the user to create it themselves inside a particular expression).
One way you can control the way the inner expression is displayed is by overriding the .ToString() method. This will be evaluated when computing the display string for an inner expression.
|
You can use `[DebuggerDisplay("Bar<{typeof(T).Name,nq}>")]//nq - no quotes`.
You also can use these practices:
[DebuggerDisplay attribute best practices](http://blogs.msdn.com/b/jaredpar/archive/2011/03/18/debuggerdisplay-attribute-best-practices.aspx)
|
DebuggerDisplay on generic class
|
[
"",
"c#",
".net",
"visual-studio",
"debugging",
""
] |
I'm using Eclipse 3.4 and Tomcat 5.5 and I have a Dynamic Web Project set up. I can access it from <http://127.0.0.1:8080/project/> but by default it serves files from WebContent folder. The real files, that I want to serve, can be found under folder named "share". This folder comes from CVS so I'd like to use it with its given name instead of renaming it. How can this be done?
|
In the project folder, there should be a file under the `.settings` folder named `org.eclipse.wst.common.component` that contains an XML fragment like this:
```
<wb-module deploy-name="WebProjectName">
<wb-resource deploy-path="/" source-path="/WebContent"/>
<wb-resource deploy-path="/WEB-INF/classes" source-path="/src"/>
</wb-module>
```
You should be able to change the source-path under `wb-resource` to your share folder. I'd make these changes with the Eclipse project closed to be safe.
This is a setting that you pick when you first create the Dynamic Web Project in Eclipse - I can't find how to change the value in an existing project through Eclipse itself.
If you want the share folder to be mapped to a different path when deployed, such as `/share/` in your webapp, you can probably just add another wb-resource element.
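For example, adding a second mapping might look like this (the deploy path here is hypothetical, just mirroring the element shape above):

```xml
<wb-resource deploy-path="/share" source-path="/share"/>
```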
|
Can be done through Eclipse, no need to manually edit .settings files.
In Eclipse 3.6 (and possibly earlier releases),
1. right click on your project
2. click on properties
3. Click on 'Deployment Assembly'
4. Add... Folder -> Next
5. Navigate to source folder
6. Finish
|
Eclipse & Tomcat: How to specify which folder is served from the project?
|
[
"",
"java",
"eclipse",
"tomcat",
"eclipse-3.4",
""
] |
I have a C# app which uses a System.Diagnostics.Process to run another exe. I ran into some example code where the process is started in a try block and closed in a finally block. I also saw example code where the process is not closed.
What happens when the process is not closed?
Are the resources used by the process reclaimed when the console app that created the process is closed?
Is it bad to open lots of processes and not close any of them in a console app that's open for long periods of time?
Cheers!
|
When the other process *exits*, all of *its* resources are freed up, but you will still be holding onto a process handle (which is a pointer to a block of information about the process) unless you call `Close()` on your `Process` reference. I doubt there would be much of an issue, but **you may as well**. `Process` implements `IDisposable`, so you can use C#'s `using(...)` statement, which will automatically call `Dispose` (and therefore `Close()`) for you:
```
using (Process p = Process.Start(...))
{
...
}
```
As a rule of thumb: if something implements `IDisposable`, you really *should* call `Dispose`/`Close` or use `using(...)` on it.
|
They will continue to run as if you started them yourself.
|
What happens if I don't close a System.Diagnostics.Process in my C# console app?
|
[
"",
"c#",
"process",
""
] |
I knew Stack Overflow would help me with more than just finding out the "favorite programming cartoon" :P
This was the accepted answer by:
[Bill Karwin](https://stackoverflow.com/questions/185327/oracle-joins-left-outer-right-etc-s#185439)
Thanks to all for the help ( I would like to double vote you all )
My query ended up like this ( this is the real one )
```
SELECT
accepted.folio,
COALESCE( inprog.activityin, accepted.activityin ) as activityin,
inprog.participantin,
accepted.completiondate
FROM performance accepted
LEFT OUTER JOIN performance inprog
ON( accepted.folio = inprog.folio
AND inprog.ACTIVITYIN
IN ( 4, 435 ) -- both are ids for inprogress
AND inprog.PARTICIPANTIN != 1 ) -- Ignore the "bot" participant
LEFT OUTER JOIN performance closed
ON( accepted.folio = closed.folio
AND closed.ACTIVITYIN IN ( 10,436, 4, 430 ) ) -- all these are closed or cancelled
WHERE accepted.ACTIVITYIN IN ( 3, 429 ) --- both are id for new
AND accepted.folio IS NOT NULL
AND closed.folio IS NULL;
```
Now I just have to join with the other tables for a human readable report.
---
**ORIGINAL POST**
Hello.
I'm struggling for about 6 hrs. now with a DB query ( my long time nemesis )
I have a data table with some fields like:
```
table performance(
identifier varchar,
activity number,
participant number,
  closedate date
)
```
It is used to keep track of the history of ticket
**Identifier**: is a customer id like ( NAF0000001 )
**activity**: is a fk of where the ticket is ( new, in\_progress, rejected, closed, etc )
**participant**: is a fk of who is attending at that point the ticket
**closedate**: is the date when that activity finished.
**EDIT:** I should have said "completiondate" rather than closedate. This is the date when the activity was completed, not necessary when the ticket was closed.
For instance a typical history may be like this:
```
identifier|activity|participant|closedate
-------------------------------------------
NA00000001| 1| 1|2008/10/08 15:00|
-------------------------------------------
NA00000001| 2| 2|2008/10/08 15:20|
-------------------------------------------
NA00000001| 3| 2|2008/10/08 15:40|
-------------------------------------------
NA00000001| 4| 4|2008/10/08 17:05|
-------------------------------------------
```
And participant 1=jonh, 2=scott, 3=mike, 4=rob
and activties 1=new, 2=inprogress, 3=waitingforapproval, 4=closed
etc. And tens of other irrelevant info.
Well my problem is the following.
I have managed to create a query where I can know when a ticket was opened and closed
it is like this:
```
select
a.identifier,
a.participant,
a.closedate as start,
b.closedate as finish
from
performance a,
performance b
where
a.activity = 1 -- new
and b.activity = 4 -- closed
and a.identifier = b.identifier
```
But I can't know what tickets are **not** closed and who is attending them.
So far I have something like this:
```
select
a.identifier,
a.participant,
a.closedate as start
from
performance a
where
a.activity = 1 -- new
and a.identifier not in ( select identifier from performance where activity = 4 ) --closed
```
That is give me all the ones who have an start ( new = 1 ) but are not closed ( closed = 4 )
But the big problem here is that it prints the participant who opened the ticket, but I need the participant who is attending it. So I add the "inprogress" activity to the query.
```
select
a.identifier,
a.participant,
a.closedate as start
from
performance a,
performance b
where
a.activity = 1 -- new
and a.identifier not in ( select identifier from performance where activity = 4 ) --closed
and b.identifier = a.identifier
and b.activity = 2 -- inprogress..
```
But not all the rows that are in "new" are "inprogress" and with that query I drop all of them.
What I need is to show all the "inprogress" participant and if the ticket is not "inprogress", it will show as empty.
Something like
```
identifier|activity|participant|closedate
-------------------------------------------
NA00000002| 1| |2008/10/08 15:00|
-------------------------------------------
NA00000003| 1| |2008/10/08 15:20|
-------------------------------------------
NA00000004| 1| |2008/10/08 15:40|
-------------------------------------------
NA00000005| 2| 4|2008/10/08 15:40|
-------------------------------------------
NA00000006| 2| 4|2008/10/08 15:40|
```
In this case
NA002, NA003 and NA004 are in "new", so no participant is shown
While
NA005 and NA006 are "inprogress" ( act = 2 ) and are being attended by rob ( participant 4 )
So I remember there was this thing called left outer join or something like that but I never ever understand it. What I would like to know is how can I fetch the identifiers that are "inprogress" and "new" and that are not closed.
Probably taking a little rest would help me to clear my mind. If anyone knows how to do it I'll appreciate it.
By the way I've tried:
```
select
a.identifier,
a.participant,
a.closedate as start
from
performance a
left outer join
performance b
on
b.identifier = a.identifier
where
a.activity = 1 -- new
and a.identifier not in ( select identifier from performance where activity = 4 ) --closed
and b.activity = 2 -- inprogress..
```
But gives me the same result as the previous ( drop the only in "new" records )
|
Try something like this (I haven't tested it):
```
SELECT p_new.identifier, COALESCE(p_inprog.activity, p_new.activity) AS activity,
p_inprog.participant, COALESCE(p_inprog.closedate, p_new.closedate) AS closedate
FROM performance p_new
LEFT OUTER JOIN performance p_inprog
ON (p_new.identifier = p_inprog.identifier AND p_inprog.activity = 2)
LEFT OUTER JOIN performance p_closed
ON (p_new.identifier = p_closed.identifier AND p_closed.activity = 4)
WHERE p_new.activity = 1
AND p_closed.identifier IS NULL;
```
I think people believe outer joins are harder than they really are. For example:
```
A LEFT OUTER JOIN B ON (...condition...)
```
This returns all rows from A, whether or not there are any matching rows in B. If no rows in B match, treat all columns B.\* as NULL in the result set for that row of A. The join condition can be an expression that the row in B must satisfy, or else it isn't included in the join; the stricter that condition is, the more rows of A end up with no match (i.e., with NULLs for B's columns).
|
Typically the better way to write those is with EXISTS. The first one would be:
```
select * from performance p1
where not exists
( select * from performance p2
where p2.identifier = p1.identifier and p2.activity = 4 )
```
This way lets you do a keyed lookup on performance.identifier, rather than potentially having to build a massive list of identifiers in `(select identifier from performance where activity=4)`.
|
Oracle joins ( left outer, right, etc. :S )
|
[
"",
"sql",
"oracle",
"join",
"left-join",
""
] |
I want to fade out an element and all its child elements after a delay of a few seconds, but I haven't found a way to specify that an effect should start after a specified time delay.
|
```
setTimeout(function() { $('#foo').fadeOut(); }, 5000);
```
The 5000 is five seconds in milliseconds.
|
I use this pause plugin I just wrote
```
$.fn.pause = function(duration) {
$(this).animate({ dummy: 1 }, duration);
return this;
};
```
Call it like this :
```
$("#mainImage").pause(5000).fadeOut();
```
Note: you don't need a callback.
---
**Edit: You should now use the [jQuery 1.4. built in delay()](http://api.jquery.com/delay/) method. I haven't checked but I assume its more 'clever' than my plugin.**
|
delay JQuery effects
|
[
"",
"javascript",
"jquery",
"effects",
""
] |
I'm wondering if there is a quick and easy function to clean GET variables in my URL before I work with them ( or `$_POST`, come to think of it... )
I suppose I could use a regex to replace non-permitted characters, but I'm interested to hear what people use for this sort of thing?
|
I use the PHP [input filters](https://www.php.net/manual/en/function.filter-input.php) and the function [urlencode](http://php.net/urlencode).
|
The concept of cleaning input never made much sense to me. It's based on the assumption that some kinds of input are dangerous, but in reality there is no such thing as dangerous input; Just code that handles input wrongly.
The crux of it is that if you embed a variable inside some kind of string (code), which is then evaluated by any kind of interpreter, **you must ensure that the variable is properly escaped**. For example, if you embed a string in a SQL-statement, then you must quote and escape certain characters in this string. If you embed values in a URL, then you must escape it with [`urlencode`](http://www.php.net/urlencode). If you embed a string within a HTML document, then you must escape with [`htmlspecialchars`](http://www.php.net/htmlspecialchars). And so on and so forth.
Trying to "clean" data up front is a doomed strategy, because you can't know - at that point - which context the data is going to be used in. The infamous [magic\_quotes](http://www.php.net/magic_quotes) anti-feature of PHP, is a prime example of this misguided idea.
|
A tidy way to clean your URL variables?
|
[
"",
"php",
"security",
"sanitization",
""
] |
Multimap essentially has groups of data sorted by the key. I want a method by which I could access these individual groups and get their aggregate values.
For example, in a `std::multimap< string, int >` I store
```
{"Group1", 1},
{"Group1", 2},
{"Group1", 3},
{"Group2", 10},
{"Group2", 11},
{"Group2", 12}
```
Having stored these values, I should be able to iterate this multimap and get the aggregate values of each "group". The problem is that there aren't any functions defined in the STL to access multimaps in such a way. I could use `lower_bound` and `upper_bound` to manually iterate the multimap and total each group's contents, but I am hoping there is a better way already defined in the STL. Can anyone propose a solution for getting the aggregate values of a group in the above example?
|
```
pair<Iter, Iter> range = my_multimap.equal_range("Group1");
// accumulate needs a binary op that pulls the value out of each pair
int total = accumulate(range.first, range.second, 0,
    [](int sum, const pair<const string, int> &p) { return sum + p.second; });
```
Is one way.
**Edit:**
If you don't know the group you are looking for, and are just going through each group, getting the next group's range can be done like so:
```
template <typename Pair>
struct Less : public std::binary_function<Pair, Pair, bool>
{
bool operator()(const Pair &x, const Pair &y) const
{
return x.first < y.first;
}
};
Iter first = mmap.begin();
Iter last = adjacent_find(first, mmap.end(), Less<MultimapType::value_type>());
```
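Put together as a complete, compilable sketch (using the sample data from the question; note that `accumulate` needs its four-argument overload here, since dereferencing a map iterator yields a pair rather than an `int`):

```cpp
#include <map>
#include <numeric>
#include <string>
#include <utility>

// Sum the values stored under one key of a string->int multimap.
int group_total(const std::multimap<std::string, int> &mm,
                const std::string &key)
{
    auto range = mm.equal_range(key);
    // Dereferencing the iterator yields a pair, so accumulate needs a
    // binary op that extracts the mapped value.
    return std::accumulate(range.first, range.second, 0,
        [](int sum, const std::pair<const std::string, int> &p) {
            return sum + p.second;
        });
}
```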
|
```
// samekey.cpp -- Process groups with identical keys in a multimap
#include <iostream>
#include <string>
#include <map>
using namespace std;
typedef multimap<string, int> StringToIntMap;
typedef StringToIntMap::iterator mapIter;
int main ()
{
StringToIntMap mymap;
mymap.insert(make_pair("Group2", 11));
mymap.insert(make_pair("Group1", 3));
mymap.insert(make_pair("Group2", 10));
mymap.insert(make_pair("Group1", 1));
mymap.insert(make_pair("Group2", 12));
mymap.insert(make_pair("Group1", 2));
cout << "mymap contains:" << endl;
mapIter m_it, s_it;
for (m_it = mymap.begin(); m_it != mymap.end(); m_it = s_it)
{
string theKey = (*m_it).first;
cout << endl;
cout << " key = '" << theKey << "'" << endl;
pair<mapIter, mapIter> keyRange = mymap.equal_range(theKey);
// Iterate over all map elements with key == theKey
for (s_it = keyRange.first; s_it != keyRange.second; ++s_it)
{
cout << " value = " << (*s_it).second << endl;
}
}
return 0;
} // end main
// end samekey.cpp
```
|
stl::multimap - how do i get groups of data?
|
[
"",
"c++",
"stl",
"multimap",
""
] |
Are there any other ways to avoid LazyInitializationExceptions in a Hibernate web application besides using the OpenSessionInView pattern? Are there any downsides to using OpenSessionInView?
|
When working on our web applications, we usually decide beforehand which objects/fields will be needed in the view pages and make sure that all the objects are properly initialized from the model before dispatching to the view.
This can be accomplished in (at least) three ways:
1. [fetching](http://www.hibernate.org/hib_docs/reference/en/html/performance.html#performance-fetching) properties using eager strategy (i.e. with `FetchMode.JOIN`, if you're using the [Criteria API](http://www.hibernate.org/hib_docs/reference/en/html/querycriteria.html))
2. explicitly initializing properties (i.e. with `Hibernate.initialize(property)`)
3. implicitly initializing properties by calling the appropriate property accessor
About the downsides of OpenSessionInView, have you checked out [this](http://www.hibernate.org/43.html) page?
|
Typically the best way to handle the problem, without making a global decision to do eager fetching; is to use the "fetch" keyword in conjuction with the hql query.
From <http://www.hibernate.org/hib_docs/reference/en/html/queryhql-joins.html>
> In addition, a "fetch" join allows associations or collections of values to be initialized along with their parent objects, using a single select. This is particularly useful in the case of a collection. It effectively overrides the outer join and lazy declarations of the mapping file for associations and collections. See Section 19.1, "Fetching strategies" for more information.

```
from Cat as cat
    inner join fetch cat.mate
    left join fetch cat.kittens
```
|
Strategies to avoid Hibernate LazyInitializationExceptions
|
[
"",
"java",
"database",
"hibernate",
""
] |
In XAML I can declare a DataTemplate so that the template is used whenever a specific type is displayed. For example, this DataTemplate will use a TextBlock to display the name of a customer:
```
<DataTemplate DataType="{x:Type my:Customer}">
<TextBlock Text="{Binding Name}" />
</DataTemplate>
```
I'm wondering if it's possible to define a DataTemplate that will be used any time an `IList<Customer>` is displayed. So if a `ContentControl`'s `Content` is, say, an `ObservableCollection<Customer>`, it would use that template.
Is it possible to declare a generic type like IList in XAML using the {x:Type} Markup Extension?
|
Not out of the box, no; but there are enterprising developers out there who have done so.
Mike Hillberg at Microsoft played with it in [this post](https://learn.microsoft.com/en-us/archive/blogs/mikehillberg/limited-generics-support-in-xaml), for example. Google has others of course.
|
Not directly in XAML, however you could reference a `DataTemplateSelector` from XAML to choose the correct template.
```
public class CustomerTemplateSelector : DataTemplateSelector
{
public override DataTemplate SelectTemplate(object item,
DependencyObject container)
{
DataTemplate template = null;
if (item != null)
{
FrameworkElement element = container as FrameworkElement;
if (element != null)
{
string templateName = item is ObservableCollection<MyCustomer> ?
"MyCustomerTemplate" : "YourCustomerTemplate";
template = element.FindResource(templateName) as DataTemplate;
}
}
return template;
}
}
public class MyCustomer
{
public string CustomerName { get; set; }
}
public class YourCustomer
{
public string CustomerName { get; set; }
}
```
The resource dictionary:
```
<ResourceDictionary
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:local="clr-namespace:WpfApplication1"
>
<DataTemplate x:Key="MyCustomerTemplate">
<Grid>
<Grid.RowDefinitions>
<RowDefinition Height="Auto"/>
<RowDefinition Height="150"/>
</Grid.RowDefinitions>
<TextBlock Text="My Customer Template"/>
<ListBox ItemsSource="{Binding}"
DisplayMemberPath="CustomerName"
Grid.Row="1"/>
</Grid>
</DataTemplate>
<DataTemplate x:Key="YourCustomerTemplate">
<Grid>
<Grid.RowDefinitions>
<RowDefinition Height="Auto"/>
<RowDefinition Height="150"/>
</Grid.RowDefinitions>
<TextBlock Text="Your Customer Template"/>
<ListBox ItemsSource="{Binding}"
DisplayMemberPath="CustomerName"
Grid.Row="1"/>
</Grid>
</DataTemplate>
</ResourceDictionary>
```
The window XAML:
```
<Window
x:Class="WpfApplication1.Window1"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
Title="Window1"
Height="300"
Width="300"
xmlns:local="clr-namespace:WpfApplication1"
>
<Grid>
<Grid.Resources>
<local:CustomerTemplateSelector x:Key="templateSelector"/>
</Grid.Resources>
<ContentControl
Content="{Binding}"
ContentTemplateSelector="{StaticResource templateSelector}"
/>
</Grid>
</Window>
```
The window code behind:
```
public partial class Window1
{
public Window1()
{
InitializeComponent();
ObservableCollection<MyCustomer> myCustomers
= new ObservableCollection<MyCustomer>()
{
new MyCustomer(){CustomerName="Paul"},
new MyCustomer(){CustomerName="John"},
new MyCustomer(){CustomerName="Mary"}
};
ObservableCollection<YourCustomer> yourCustomers
= new ObservableCollection<YourCustomer>()
{
new YourCustomer(){CustomerName="Peter"},
new YourCustomer(){CustomerName="Chris"},
new YourCustomer(){CustomerName="Jan"}
};
//DataContext = myCustomers;
DataContext = yourCustomers;
}
}
```
|
Can I specify a generic type in XAML (pre .NET 4 Framework)?
|
[
"",
"c#",
"wpf",
"xaml",
"generics",
""
] |
On my busiest production installation, on occasion I get a single thread that seems to get stuck in an infinite loop. I've not managed to figure out who is the culprit, after much research and debugging, but it seems like it should be possible. Here are the gory details:
***Current debugging notes:***
1) **ps -eL | grep 18975** shows me the Linux LWP of the problem child thread, 19269
```
$ps -eL | grep 18975
...
PID LWP TTY TIME CMD
18975 18994 ? 00:00:05 java
18975 19268 ? 00:00:00 java
18975 19269 ? 05:16:49 java
18975 19271 ? 00:01:22 java
18975 19273 ? 00:00:00 java
...
```
2) **jstack -l 18975** says there are no deadlocks, **jstack -m 18975** does not work
3) **jstack -l 18975** does give me the stack trace for all my threads (~400). Example thread stack (*and not the problem*):
```
"http-342.877.573.944-8080-360" daemon prio=10 tid=0x0000002adaba9c00 nid=0x754c in Object.wait() [0x00000000595bc000..0x00000000595bccb0]
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on (a org.apache.tomcat.util.net.JIoEndpoint$Worker)
at java.lang.Object.wait(Object.java:485)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.await(JIoEndpoint.java:416)
- locked (a org.apache.tomcat.util.net.JIoEndpoint$Worker)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:442)
at java.lang.Thread.run(Thread.java:619)
```
4) The ps -eL output's thread ID does not match the output from jstack, or at least I cannot see it. (jstack documentation is a bit sparse.)
5) There are no heavy IO, memory usage or other corresponding activity clues to work with.
***Platform:***
* Java 6
* Tomcat 6
* RHEL 4 (64-bit)
Does anybody know how I can make that connection from the linux ps output to my problem child java thread? So close, yet so far...
|
It looks like the **nid** in the jstack output is the Linux LWP id.
```
"http-342.877.573.944-8080-360" daemon prio=10 tid=0x0000002adaba9c00 nid=0x754c in Object.wait() [0x00000000595bc000..0x00000000595bccb0]
```
Convert the nid to decimal and you have the LWP id. In your case 0x754c is 30028. That thread is not shown in your ps output, but it was probably one of the LWPs you omitted to save space.
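The conversion itself is a shell one-liner:

```shell
# jstack prints thread ids in hex (nid=0x754c); ps -eL shows them in decimal
printf '%d\n' 0x754c    # prints 30028
```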
Here's a little Perl snippet you can pipe the output of jstack through:
```
#!/usr/bin/perl -w
while (<>) {
if (/nid=(0x[[:xdigit:]]+)/) {
$lwp = hex($1);
s/nid=/lwp=$lwp nid=/;
}
print;
}
```
|
You can use [JConsole](http://java.sun.com/developer/technicalArticles/J2SE/jconsole.html) to view the thread's stack trace.
If you're using JDK 1.6.0\_07 or above, you can also use [visualvm](http://java.sun.com/javase/6/docs/technotes/guides/visualvm/index.html).
Both tools provide a nice view of all the running threads in an application. VisualVM is quite a bit nicer, but hopefully seeing all the threads can help you track down the run-away thread.
Check for threads that are *always* in a state of RUNNING. When we had a run-away thread, the stack trace would constantly change. So we were able to tell which methods the loop was calling, and track down the loop.
|
Getting the Java thread id and stack trace of run-away Java thread
|
[
"",
"java",
"debugging",
"jstack",
""
] |
I am building an application where most of the HTML is built using javascript. The DOM structure is built using some JSON data structures that are sent from the server, and then the client-side code builds a UI for that data.
My current approach is to walk the JSON data structures, and call script.aculo.us's Builder.node method to build the DOM structure, and then append it to some element that is actually in the HTML sent from the server. Along the way, I am registering event listeners to the various elements that need them. This allows for a good amount of flexibility, and allows for a very dynamic interface.
However, I feel that it is not very sustainable, since the view logic (i.e., the DOM structure) is so tightly coupled to the code that walks the data, to the event handlers, and to the data kept in memory to maintain state and communicate changes back to the server.
Are there any template-like solutions that will allow me to divorce the DOM structure from the code that drives the app? Currently, my only library dependencies are prototype.js and script.aculo.us, so I would like to avoid introducing any large libraries, but any suggestions are welcome.
Thanks!
EDIT: For some reason, [What good template language is supported in Javascript?](https://stackoverflow.com/questions/128949/what-good-template-language-is-supported-in-javascript) didn't show up in the little search results when I was typing this question. It does, however, show up in the "Related" sidebar here.
I will read through some of the suggestions there, and if I find a solution, I will close this question. Otherwise, I will clarify this question with reasons why those solutions won't work for me.
|
There are some template solutions out there, but they aren't doing much more than you're doing already. jQuery has been doing some [**work along these lines**](http://ejohn.org/blog/javascript-micro-templating/), and some jQuery [**plugins**](http://plugins.jquery.com/project/jTemplates) have emerged as solutions. Prototype.js and others have solutions as well.
**Some options include:**
* [**Prototype Templates**](http://www.prototypejs.org/api/template)
* [**Ajax Pages**](http://ajax-pages.sourceforge.net/)
In general, [**Ext js**](http://extjs.com/) has some pretty **wild and tricked out** stuff, including some [**templates**](http://extjs.com/deploy/dev/examples/core/templates.html), but you'd be adding **yet another library**. So many libraries are getting tossed around these days, and it's often so much simpler to implement a **light and simple** custom solution. Try creating some DOM objects on your own. If you've got JSON data, parse it into memory and run it through a function. It's actually a blast, and a lot of people are doing it.
**Sidenote:**
What you're doing may be quite **sustainable** and **maintainable**. Keep in mind that when you send a page of HTML, the browser is putting a **DOM structure into memory** in roughly the same way that your javascript does. I don't particularly recommend any of these solutions. It sounds like you've made a nice little system for your specific needs, and I'd generally say that **refining your design** will be at least as valuable as moving to somebody else's pattern, with the added benefit of being able to **create some of your own dependencies**.
**Sidenote:**
It's **generally** not advisable to **generate the entire DOM** on the client, at least not for many markets. Sometimes it's an A-OK solution, as it may be in your case, but it's worth a note of caution to the audience at large that this style of development is not always the best road to travel.
|
There are several 'template' plugins for jQuery:
* [jsRepeater](http://plugins.jquery.com/project/jsrepeater)
* [jTemplates](http://plugins.jquery.com/project/jTemplates)
* [noTemplate](http://plugins.jquery.com/project/noTemplate)
* [Template](http://plugins.jquery.com/project/Template)
There are some XSLT extras that do the transform entirely in the client, but I don't think XSL is 'designer friendly' enough.
|
How to build the DOM using javascript and templates?
|
[
"",
"javascript",
"templates",
""
] |
I'm trying to track down an issue in our system and the following code worries me. The following occurs in our doPost() method in the primary servlet (names have been changed to protect the guilty):
```
...
if(Single.getInstance().firstTime()){
doPreperations();
}
normalResponse();
...
```
The singleton 'Single' looks like this:
```
private static Single theInstance = new Single();
private Single() {
...load properties...
}
public static Single getInstance() {
return theInstance;
}
```
With the way this is set to use a static initializer instead of checking for a null theInstance in the getInstance() method, could this get rebuilt over and over again?
PS - We're running WebSphere 6 with the App on Java 1.4
|
I found this on Sun's site:
> ### Multiple Singletons Simultaneously Loaded by Different Class Loaders
>
> When two class loaders load a class,
> you actually have two copies of the
> class, and each one can have its own
> Singleton instance. That is
> particularly relevant in servlets
> running in certain servlet engines
> (iPlanet for example), where each
> servlet by default uses its own class
> loader. Two different servlets
> accessing a joint Singleton will, in
> fact, get two different objects.
>
> Multiple class loaders occur more
> commonly than you might think. When
> browsers load classes from the network
> for use by applets, they use a
> separate class loader for each server
> address. Similarly, Jini and RMI
> systems may use a separate class
> loader for the different code bases
> from which they download class files.
> If your own system uses custom class
> loaders, all the same issues may
> arise.
>
> If loaded by different class loaders,
> two classes with the same name, even
> the same package name, are treated as
> distinct -- even if, in fact, they are
> byte-for-byte the same class. The
> different class loaders represent
> different namespaces that distinguish
> classes (even though the classes'
> names are the same), so that the two
> `MySingleton` classes are in fact
> distinct. (See "Class Loaders as a
> Namespace Mechanism" in Resources.)
> Since two Singleton objects belong to
> two classes of the same name, it will
> appear at first glance that there are
> two Singleton objects of the same
> class.
[Citation](http://www.oracle.com/technetwork/articles/java/singleton-1577166.html).
In addition to the above issue, if `firstTime()` is not synchronized, you could have threading issues there as well.
|
No, it won't get built over and over again. It's static, so it'll only be constructed once, right when the class is touched for the first time by the classloader.
Only exception - if you happen to have multiple Classloaders.
(from [GeekAndPoke](http://geekandpoke.typepad.com/.a/6a00d8341d3df553ef0147e0761e2f970b-800wi)):

|
Can this Java singleton get rebuilt repeatedly in WebSphere 6?
|
[
"",
"java",
"singleton",
"instance",
""
] |
I am trying to convert all DateTime values in a DataTable to strings. Here is the method I use:
```
private static void ConvertDateTimesToStrings(DataTable dataTable)
{
if (dataTable == null)
{
return;
}
for (int rowIndex = 0; rowIndex < dataTable.Rows.Count; rowIndex++ )
{
for (int i = 0; i < dataTable.Columns.Count; i++)
{
DateTime dateTime;
try
{
dateTime = (DateTime)dataTable.Rows[rowIndex][i];
}
catch (InvalidCastException)
{
continue;
}
dataTable.Rows[rowIndex][i] = dateTime.ToString("dd.MM.yyyy hh:mm:ss");
}
}
}
```
After this line works:
```
dataTable.Rows[rowIndex][i] = dateTime.ToString("dd.MM.yyyy hh:mm:ss");
```
I check the value of dataTable.Rows[rowIndex][i] and see it is still a DateTime, not a string. Why does this happen and how can I solve this?
Edit: I am trying to do this because I am fighting an api and unfortunately I do not have the choice of which component to use.
|
It may have determined that the column's data type is DateTime and is converting the values back to DateTime when you save them into the column.
Try this...
Create a new column on the DataTable with type `typeof(string)`, and save the string values into that column. When all the values have been copied, drop the DateTime column.
|
This simply won't work, because you haven't changed the underlying data type.
You have a DataTable, with a column which has data type DateTime.
You can assign a String to it, but it will convert back to DateTime.
Why do you want to change it to a formatted string? Can't you format only when you need to display it, and handle as a DateTime until you have to display it?
**Update:** it is also better to check the column's type before you try to convert; it can be much faster:
```
if (dataTable.Columns[0].DataType == typeof(DateTime))
{
    // only columns of this type need converting
}
```
|
Setting value of an item in a DataRow does not work
|
[
"",
"c#",
"datarow",
""
] |
Is there any way to make a function (the ones I'm thinking of are in the style of the simple ones I've made, which generate the Fibonacci sequence from 0 to a point, and all the primes between two points) run indefinitely? E.g. until I press a certain key or until a time has passed, rather than until a number reaches a certain point?
Also, if it is based on time then is there any way I could just extend the time and start it going from that point again, rather than having to start again from 0? I am aware there is a time module, i just don't know much about it.
|
The simplest way is just to write a program with an infinite loop, and then hit control-C to stop it. Without more description it's hard to know if this works for you.
If you do it time-based, you don't need a generator. You can just have it pause for user input, something like a "Continue? [y/n]", read from stdin, and depending on what you get either exit the loop or not.
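To make the time-based idea concrete, here is a minimal sketch (using a plain counter for brevity; the same shape works for a Fibonacci or prime generator, and all names are illustrative):

```python
import time

def count_for(seconds, start=0):
    """Count upward for roughly `seconds` of wall-clock time, then return
    (results, next_value) so a later call can resume where this one stopped
    instead of restarting from zero."""
    deadline = time.time() + seconds
    results = []
    n = start
    while time.time() < deadline:
        results.append(n)
        n += 1
    return results, n

# Run for a short slice of time...
first, resume_at = count_for(0.05)
# ...then "extend the time" by simply resuming from the saved state:
more, _ = count_for(0.05, resume_at)
```

Because the state is returned explicitly, extending the run is just another call with the saved state rather than a restart from zero.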
|
You could use a generator for this:
```
def finished():
"Define your exit condition here"
return ...
def count(i=0):
while not finished():
yield i
i += 1
for i in count():
print i
```
If you want to change the exit condition you could pass a value back into the generator function and use that value to determine when to exit.
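To illustrate passing a value back in, here is a hedged sketch using `generator.send()` (names are made up): the consumer pushes a new limit into the running generator to extend it.

```python
def count(limit):
    """Count upward; the consumer may send() a new limit to extend the run."""
    i = 0
    while i < limit:
        new_limit = yield i
        if new_limit is not None:   # a value arrived via send()
            limit = new_limit
        i += 1

gen = count(3)
print(next(gen))       # 0
print(next(gen))       # 1
print(gen.send(10))    # raises the limit to 10 and resumes; prints 2
```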
|
making a programme run indefinitely in python
|
[
"",
"python",
"time",
"key",
"indefinite",
""
] |
I've created the following regex pattern in an attempt to match a string 6 characters in length ending in either "PRI" or "SEC", unless the string = "SIGSEC". For example, I want to match ABCPRI, XYZPRI, ABCSEC and XYZSEC, but not SIGSEC.
```
(\w{3}PRI$|[^SIG].*SEC$)
```
It is very close and sort of works (if I pass in "SINSEC", it returns a partial match on "NSEC"), but I don't have a good feeling about it in its current form. Also, I may have a need to add more exclusions besides "SIG" later and realize that this probably won't scale too well. Any ideas?
BTW, I'm using System.Text.RegularExpressions.Regex.Match() in C#
Thanks,
Rich
|
Assuming your regex engine supports negative lookaheads, try this:
```
((?!SIGSEC)\w{3}(?:SEC|PRI))
```
Edit: A commenter pointed out that .NET does support negative lookaheads, so this should work fine (thanks, Charlie).
|
To help break down Dan's (correct) answer, here's how it works:
```
( // outer capturing group to bind everything
(?!SIGSEC) // negative lookahead: a match only works if "SIGSEC" does not appear next
\w{3} // exactly three "word" characters
(?: // non-capturing group - we don't care which of the following things matched
SEC|PRI // either "SEC" or "PRI"
)
)
```
All together: `((?!SIGSEC)\w{3}(?:SEC|PRI))`
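The same pattern carries over to other regex engines; as a quick sanity check, here it is in Python's `re` (using `fullmatch` to enforce the whole six-character string, since the original complaint was about partial matches):

```python
import re

# Negative lookahead: the match is refused when the string starts with "SIGSEC".
pattern = re.compile(r"(?!SIGSEC)\w{3}(?:SEC|PRI)")

for s in ["ABCPRI", "XYZSEC", "SINSEC", "SIGSEC"]:
    print(s, bool(pattern.fullmatch(s)))
# ABCPRI True, XYZSEC True, SINSEC True, SIGSEC False
```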
|
How do I match part of a string only if it is not preceded by certain characters?
|
[
"",
"c#",
".net",
"regex",
"negative-lookbehind",
""
] |
Can someone please derive a concrete example from the following:
<http://www.urdalen.com/blog/?p=210>
..that shows how to deal with `one-to-many` and `many-to-many` relationships?
I've emailed the author some time ago but received no reply. I like his idea, but can't figure out how to implement it beyond simple single table relations.
Note: I don't want to use a full-blown ORM. I like doing my SQL by hand. I would like to improve the design of my app code though. Right now each domain object has its own class full of queries wrapped in static methods. They just return scalar, 1d-array (record) or 2d-array (recordset) depending on the query.
|
The problem with ORMs (the "impedance mismatch", as it's called) is precisely with relations. In an object graph (in-memory objects), relationships are pointers to other objects. In a relational database, relationships are reversed; this makes it impossible to do a simple mapping between the two models, and that is why ORMs are so complex.
If you want to stay close to the database (avoiding an ORM), then you shouldn't try to abstract relationships away. The way I write datamappers is something like the following:
```
$car42 = $car_gateway->fetch(42);
$wheels = $wheel_gateway->selectByCar($car42);
```
In contrast to the ORM way:
```
$car42 = $car_gateway->fetch(42);
$wheels = $car42->selectWheels();
```
This does mean that the gateways end up in your client-code, but it also keeps things very transparent and close to the database.
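The same gateway shape can be sketched in Python for illustration; everything here is hypothetical (in-memory lists stand in for database tables, and all class and method names are made up):

```python
class CarGateway:
    def __init__(self, rows):
        self.rows = rows                      # stand-in for a DB table

    def fetch(self, car_id):
        return next(r for r in self.rows if r["id"] == car_id)

class WheelGateway:
    def __init__(self, rows):
        self.rows = rows

    def select_by_car(self, car):
        # the relationship lives in the query, not inside the Car object
        return [r for r in self.rows if r["car_id"] == car["id"]]

cars = CarGateway([{"id": 42, "model": "X"}])
wheels = WheelGateway([{"car_id": 42, "pos": p} for p in range(4)])

car42 = cars.fetch(42)
print(len(wheels.select_by_car(car42)))  # 4
```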
|
If you're looking for a simple and portable DataMapper ORM, have a look at [phpDataMapper](http://phpdatamapper.com). Its only dependencies are PHP5 and PDO, and it's very small and lightweight. It supports table relations and some other very nice features as well.
|
How to extend this simple DataMapper?
|
[
"",
"php",
"design-patterns",
"orm",
"datamapper",
""
] |
How is it possible that .NET is finding the wrong `MyType` in this scenario?
I have a type `A.B.C.D.MyType` in a project that I'm working on, and I'm referencing a DLL that has a type `A.B.MyType`. I do not have any `using A.B;` statements anywhere in my code, and I do have `using A.B.C.D;`. When I compile, the compiler thinks any naked reference to `MyType` means `A.B.MyType`.
I know I could just rename the class or use an alias, but I'm wondering how this is even possible. Any ideas?
|
Are you working in a namespace that is under the A.B namespace (for example A.B.X)? If so, the C# namespace resolution rules ([ECMA-334 C# Language Specification: 10.8 Namespace and type names](http://www.ecma-international.org/publications/files/ECMA-ST/Ecma-334.pdf)) say:
> ... for each namespace N, starting
> with the namespace in which the
> namespace-or-typename occurs,
> continuing with each enclosing
> namespace (if any), and ending with
> the global namespace, the following
> steps are evaluated until an entity is
> located...
and then followed by:
> If K is zero and the namespace
> declaration contains an
> extern-alias-directive or
> using-aliasdirective that associates
> the name I with an imported namespace
> or type, then the
> namespace-or-type-name refers to that
> namespace or type
This means that name resolution starts at the current namespace and searches all enclosing namespaces up to the root; only after this hierarchical search ends are the namespaces imported with the `using` clause searched.
The following example prints "Ns1.Foo"
```
using Ns1.Foo.Foo2;
namespace Ns1.Foo
{
class Foo
{
public void Print()
{
System.Console.WriteLine("Ns1.Foo");
}
}
}
namespace Ns1.Foo.Foo2
{
class Foo
{
public void Print()
{
System.Console.WriteLine("Ns1.Foo.Foo2");
}
}
}
namespace Ns1.Foo.Bar
{
class Bar
{
public void Print()
{
new Foo().Print();
}
static void Main()
{
new Bar().Print();
}
}
}
```
Edit: Adding a using clause **inside** a namespace means the imported namespace is searched before the hierarchical search of the enclosing namespaces is done. Change the example to:
```
namespace Ns1.Foo.Bar
{
using Ns1.Foo.Foo2;
class Bar
{
public void Print()
{
new Foo().Print();
}
static void Main()
{
new Bar().Print();
}
}
}
```
and `Ns1.Foo.Foo2` will be printed.
Edit: changed example
|
Is your code in namespace `A.B` or `A.B.C`? If so, that's probably the issue. Use a using directive like this:
```
using TheTypeIWant = A.B.C.D.MyType;
```
then just refer to `TheTypeIWant` in your code.
EDIT: I've just tried the "`using MyType=A.B.C.D.MyType`" option, but that *doesn't* work. The above is fine though.
|
Namespace collisions
|
[
"",
"c#",
".net",
"namespaces",
""
] |
I have the following sql query for transforming data but is it possible to save the value of the int in some variable to avoid casting multiple times?
```
update prospekts set sni_kod = case
when
cast(sni_kod as int) >= 1000 and cast(sni_kod as int) <= 1499
or cast(sni_kod as int) >= 1600 and cast(sni_kod as int) <= 2439
then '1'
when
cast(sni_kod as int) >= 7000 and cast(sni_kod as int) <= 7499
then 'W'
else
sni_kod
end
```
There are a lot more when-cases in the script, just showing the first one. I cannot use anything other than a simple text-script.
**Update**
Using SQL Server 2000
Thanks
Anders
|
Ok... here's my rewrite of your code...
```
UPDATE prospekts SET sni_kod =
CASE
    WHEN ISNUMERIC(sni_kod) = 1 THEN
        CASE
            WHEN cast(sni_kod as int) BETWEEN 1000 AND 1499 OR cast(sni_kod as int) BETWEEN 1600 AND 2439 THEN '1'
            WHEN cast(sni_kod as int) BETWEEN 7000 AND 7499 THEN 'W'
            ELSE sni_kod
        END
    ELSE sni_kod
END
```
This way, it'll only attempt to do a CAST if it's a numeric value, so you won't get cast exceptions, like other people have mentioned in comments.
Since you said there are a lot more statements involved, I'm guessing you have a lot more number ranges that get different values. If that's the case, you might be able to use a second table (a temporary one will do if, as your question says, you're limited to plain SQL) to join against, with a min value, a max value, and the value to display for each range. It gets trickier when you need to evaluate non-numeric values, but it isn't impossible.
Without seeing the full statement, though, this is the best I can offer.
|
Is this just a one-off script, as it appears? If so, and you are just trying to save on typing then write as:
```
update prospekts set sni_kod = case
when
xxx >= 1000 and xxx <= 1499
or xxx >= 1600 and xxx <= 2439
then '1'
when
xxx >= 7000 and xxx <= 7499
then 'W'
else
sni_kod
end
```
... and then do a global search and replace with a text editor.
Or perhaps you are concerned about the performance of casting several times per row when once might do? But again, if this script is a one off, does it really matter?
|
How do I avoid multiple CASTs in sql query?
|
[
"",
"sql",
"t-sql",
""
] |