| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I need to flatten an array in PHP, but do it selectively. If a key matches a pattern then all sub-elements below that key should be included in the 'flat' output.
So if I had a catalogue of music:
=> array of albums => each of which is an array of song titles
Then I could search for a string, and would get a flat array in response. So if I searched for 'sun' then I would get the entire catalogue for any artist with 'sun' in their name, plus the albums for other artists where 'sun' was in the album name.
Hopefully that makes sense.
Anyone got any thoughts? | Ok, I'm going to assume your data looks like this:
```
$data = array(
    "Bill Withers" => array(
        "Lovely Day",
        "Use Me",
        "Ain't No Sunshine"
    ),
    "Fleet Foxes" => array(
        "Sun It Rises",
        "White Winter Hymnal"
    ),
    "Billy Joel" => array(
        "Piano Man"
    )
);
```
...and that given the input `"Bill"`, you want the output: `["Lovely Day", "Use Me", "Ain't No Sunshine", "Piano Man"]`. Here's one way you could do it.
```
function getSongs($data, $searchTerm) {
    $output = array();
    foreach ($data as $artist => $songs) {
        if (stripos($artist, $searchTerm) !== false) {
            $output = array_merge($output, $songs);
        }
    }
    return $output;
}
```
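The question actually asks for matching at any depth (artist or album), not just at the top level. A hedged sketch of that recursive selective flatten, in Python for brevity, using a hypothetical nested catalogue (dicts of dicts, with lists of song titles as leaves):

```python
def collect_all(node, out):
    # gather every leaf title below this node, whatever the depth
    if isinstance(node, dict):
        for value in node.values():
            collect_all(value, out)
    else:
        out.extend(node)

def selective_flatten(tree, term, out=None):
    # if a key matches the term, flatten everything below it;
    # otherwise keep descending to look for deeper matches
    if out is None:
        out = []
    for key, value in tree.items():
        if term.lower() in key.lower():
            collect_all(value, out)
        elif isinstance(value, dict):
            selective_flatten(value, term, out)
    return out

catalogue = {
    "Sunny Artist": {"Album A": ["Song 1"]},
    "Other Artist": {"Sunset Album": ["Song 2"], "Plain Album": ["Song 3"]},
}
```

Searching for "sun" here would pull in everything under "Sunny Artist" plus the songs of "Sunset Album", which matches the behaviour described in the question.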
...I'll also assume you've got a good reason to not use a database for this. | Is there a reason you're not using a database to store what sounds like a significant amount of info? It would be fairly simple to write a query in SQL to pull the data out that you want. | Selectively flattening a PHP array according to parent keys | [
"",
"php",
"arrays",
""
] |
I'm working on a legacy product and I have some SQL being executed through ADO against an Access database with linked tables to SQL Server. I'm getting the error 'Undefined function 'Round'' when I execute the SQL, but **if I take the query and run it directly in Access it works fine**. I know that EVERYTHING is correct and that this is a machine-specific issue, since this is production code: it works on other machines and has been deployed successfully for many clients.
I'm not even sure where to begin to be honest. I'm running the correct (latest) versions of Jet/ADO/MDAC.
ANY help would be appreciated.
Thanks in advance.
**EDIT: Obviously, the SQL includes the function 'Round'. I'm aware of the differences between the Jet and SQL Server implementations. This problem is due to some problem with a component on my machine and NOT with the code. The SQL executes properly when run through MS Access 2007 but NOT through ADO.** | EDIT2: The right solution, from the comments:
shahkalpesh: If it executes fine through Access, it could be that Access has the DLL available to it which has the Round function. What is the connection string you are using?
Stimul8d: I'm not sure how it can be anything to do with the connection string. This code works on EVERY other machine, with no changes; just not on mine.
Andomar: Well, that's your problem right there, your machine is farked up. You can still install [vb6 sp6](http://www.microsoft.com/Downloads/details.aspx?familyid=9EF9BF70-DFE1-42A1-A4C8-39718C7E381D&displaylang=en) maybe.
Stimul8d: Well, SP6 fixed it. Cheers Andomar; no idea why SP6 fixed it, but it did!
EDIT: Based on your comment [this newsgroup](http://www.tech-archive.net/Archive/Access/microsoft.public.access.developers.toolkitode/2004-05/0014.html) post might be the answer:
> Unfortunately, when you are running
> queries from outside of Access (as you
> are from VB), your only connection to
> the database is through the Jet
> engine, which doesn't know anything
> about most VBA functions. There's no
> way around this, other than to return
> the data to your VB application and
> use the functions on the data there.
And two posts later:
> I solved the problem. Updated my VB
> with the Service Pack 6... it took
> care of the problems.
Old answer here:
Try FLOOR() instead of ROUND().
To round something to the nearest integer, you could:
```
declare @floatmyboat float
set @floatmyboat = 1.51
select floor(@floatmyboat+0.5)
```
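For illustration only, the same `FLOOR(x + 0.5)` trick sketched in Python; this just demonstrates the arithmetic, not the SQL:

```python
import math

def round_half_up(x):
    # floor(x + 0.5) rounds to the nearest integer, with halves rounding up
    return math.floor(x + 0.5)
```

Feeding it the 1.51 from the T-SQL snippet above yields 2, just as `floor(@floatmyboat+0.5)` does.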
P.S. Maybe post the exact error you get. If it's "The round function requires 2 to 3 arguments.", that means SQL Server is borking on the ROUND(). | The round() function exists in SQL Server as well.
The only difference is: in Access the precision is an optional parameter, but in SQL Server you have to specify it.
So this will only work in Access, but not in SQL Server:
```
select round(Column) from Table
```
This will work in Access *and* SQL Server:
```
select round(Column,1) from Table
``` | SQL through classic ADO - Undefined Function 'Round'? | [
"",
"sql",
"ado",
"rounding",
"mdac",
""
] |
I've come across several instances of C# code like the following:
```
public static int Foo(this MyClass arg)
```
I haven't been able to find an explanation of what the `this` keyword means in this case. Any insights? | This is an **extension method**. See here for an [explanation](http://weblogs.asp.net/scottgu/archive/2007/03/13/new-orcas-language-feature-extension-methods.aspx).
> Extension methods allow developers to add new methods to the public
> contract of an existing CLR type, without having to sub-class it or
> recompile the original type. Extension Methods help blend the
> flexibility of "duck typing" support popular within dynamic languages
> today with the performance and compile-time validation of
> strongly-typed languages.
>
> Extension Methods enable a variety of useful scenarios, and help make
> possible the really powerful LINQ query framework... .
It means that you can call
```
MyClass myClass = new MyClass();
int i = myClass.Foo();
```
rather than
```
MyClass myClass = new MyClass();
int i = Foo(myClass);
```
This allows the construction of [fluent interfaces](http://en.wikipedia.org/wiki/Fluent_interface) as stated below. | [Scott Gu's quoted blog post](http://weblogs.asp.net/scottgu/archive/2007/03/13/new-orcas-language-feature-extension-methods.aspx) explains it nicely.
For me, the answer to the question is in the following statement in that post:
> Note how the static method above has a
> "this" keyword before the first
> parameter argument of type string.
> This tells the compiler that this
> particular Extension Method should be
> added to objects of type "string".
> Within the IsValidEmailAddress()
> method implementation I can then
> access all of the public
> properties/methods/events of the
> actual string instance that the method
> is being called on, and return
> true/false depending on whether it is
> a valid email or not. | Use of "this" keyword in formal parameters for static methods in C# | [
"",
"c#",
"parameters",
"this",
""
] |
I am looking for a way to get Hibernate to use Oracle's `SYS_GUID()` function when inserting new rows. Currently my DB tables have `SYS_GUID()` as the default, so if Hibernate simply generated SQL that omitted the value it should work.
I have everything working, but it is currently generating the UUID/GUID in code using the system-uuid generator:
```
@Id
@GeneratedValue(generator = "system-uuid")
@GenericGenerator(name = "system-uuid", strategy = "uuid")
@Column(name = "PRODUCT_ID", unique = true, nullable = false)
public String getId() {
return this.productId;
}
```
This is fine, but I would prefer that the GUIDs were generated by the database so they will be sequential and potentially perform better. Plus, I would just like to know how to configure this.
I am using annotations for configuration, but xml configuration examples are awesome as well.
*Here is a sample table definition (in case it matters):*
```
CREATE TABLE SCHEMA_NAME.PRODUCT
(
    PRODUCT_ID   RAW(16) DEFAULT SYS_GUID() NOT NULL,
    PRODUCT_CODE VARCHAR2(10 CHAR) NOT NULL,
    PRODUCT_NAME VARCHAR2(30 CHAR) NOT NULL,
    PRODUCT_DESC VARCHAR2(512 CHAR)
)
```
## UPDATE:
Matt's solution of using "guid" worked; here is the SQL generated:
```
Hibernate:
select rawtohex(sys_guid())
from dual
Hibernate:
insert into PRODUCT
(PRODUCT_CODE, PRODUCT_DESC, LOB_ID, PRODUCT_NAME, PROVIDER_ID, PRODUCT_ID)
values (?, ?, ?, ?, ?, ?)
```
---
**It seems that using the column's default value in an insert is not possible, so the choice is between an application-generated GUID and a database round trip.** | You might be able to use the "guid" generator. See [this post](https://forums.hibernate.org/viewtopic.php?p=2230350&sid=168cb72b4e0f67c1b29561d535aab74e) from the Hibernate forum. It looks like they added support for Oracle using `SYS_GUID()` a while back, but the [documentation](http://docs.jboss.org/hibernate/stable/core/reference/en/html/mapping.html#mapping-declaration-id-generator) still says they only support SQL Server and MySQL.
I haven't worked with JPA annotations yet, but here is an example using XML configuration:
```
<id name="PRODUCT_ID">
<generator class="guid" />
</id>
```
**EDIT:** In regards to your second question, I think you are asking why Hibernate can't do something like this:
```
INSERT INTO PRODUCT (PRODUCT_ID, /* etc */)
SELECT SYS_GUID(), /* etc */
```
The reason is that Hibernate must know what the object's ID is. For example, consider the following scenario:
1. You create a new Product object and save it. Oracle assigns the ID.
2. You detach the Product from the Hibernate session.
3. You later re-attach it and make some changes.
4. You now want to persist those changes.
Without knowing the ID, Hibernate can't do this. It needs the ID in order to issue the UPDATE statement. So the implementation of `org.hibernate.id.GUIDGenerator` has to generate the ID beforehand, and then later on re-use it in the INSERT statement.
This is the same reason why Hibernate cannot do *any batching* if you use a database-generated ID (including auto-increment on databases that support it). Using one of the hilo generators, or some other Hibernate-generated ID mechanism, is the only way to get good performance when inserting lots of objects at once. | I had the same task as the topic starter. Following **@Matt Solnit**'s suggestion, I use these annotations:
```
@Id
@NotNull
@Column(name = "UUID")
@GenericGenerator(name = "db-uuid", strategy = "guid")
@GeneratedValue(generator = "db-uuid")
private String uuid;
public String getUuid() { return uuid; }
public void setUuid(String uuid) { this.uuid = uuid; }
```
`strategy = "guid"` and `String` type are essential parts of solution.
Before persisting new entities Hibernate issue SQL query:
```
select rawtohex(sys_guid()) from dual
```
My setup: Oracle 11, Hibernate 4.3.4.Final, Spring 3.2.x. The field is `raw(16)` in the table, for efficient storage and a smaller index size than if you use `char(32)`.
When I try to use `java.util.UUID` as the ID field type, I get an error from Hibernate on persisting a new entity (it tries to set a `String` value into the `java.util.UUID` field).
I also use `javax.xml.bind.DatatypeConverter` for non-Hibernate queries (Spring JDBC helpers), converting to `byte[]` when passing parameters:
```
String query = "insert into TBL (UUID, COMPANY) values (:UUID, :COMPANY)";
MapSqlParameterSource parameters = new MapSqlParameterSource()
        .addValue("COMPANY", repo.getCompany())
        .addValue("UUID", DatatypeConverter.parseHexBinary(signal.getUuid()));
namedJdbcTemplate.update(query, parameters);
```
for extracting:
```
ResultSet rs;
sig.id = DatatypeConverter.printHexBinary(rs.getBytes("UUID"));
```
All web controllers get codes like:
```
025131763FB19522E050010A106D11E9
```
without the `{`, `-`, `}` chars (the usual representation of a UUID is `{a-b-c-d-x-y}`, if you remember). This representation is already URL-safe, so you don't need to implement a `PropertyEditor` or `Converter` for the `String` type:
```
@RequestMapping(value = {"/signal/edit/{id}.htm"}, method = RequestMethod.POST)
public String handleEditRequest(
@PathVariable("id") String id,
```
Compare with my failed attempt to use `java.util.UUID`, where I needed to write:
```
@Component
public static class UUIDPropertyEditor extends PropertyEditorSupport {
    @Override
    public void setAsText(final String str) {
        if (str == null || str.isEmpty()) {
            setValue(null);
            return;
        }
        setValue(UUID.fromString(str));
    }
}

private @Autowired UUIDPropertyEditor juuidPE;

@InitBinder
public void initBinder(WebDataBinder binder) {
    binder.registerCustomEditor(UUID.class, juuidPE);
}
```
in order to use:
```
@PathVariable("id") UUID id,
``` | Configure Hibernate to use Oracle's SYS_GUID() for Primary Key | [
"",
"java",
"oracle",
"hibernate",
"guid",
"uuid",
""
] |
I would just like to know: is there any difference between
> if (a==5) or if (5==a)
in C#? Which one is better? | There's no difference, assuming that "a" is an integer.
I know some people prefer `if (5==a)` because in C and C++, if you wrote `if (5=a)` by mistake you'd get a compiler error, while `if (a=5)` would compile and result in a bug.
C# raises a compiler error in the latter case, so it's not an issue. | I'd actually say there is a difference, but it's not a technical one (as everyone has well covered already) - **readability**. It matters and the first form is *much* more natural. | Is there any difference between if(a==5) or if(5==a) in C#? | [
"",
"c#",
""
] |
I am creating a signup form on my website. I want to implement some checks on the username:
* username can't have any space,
* username can't have any special characters other than dot (.), as Gmail does in their signup form.
I am using jQuery in my app. Can anyone tell me how to implement the two checks listed above using jQuery? These are the two main username checks I remember; please suggest others if you know of any.
Thanks | I tried out the following jQuery script, after studying the link ([this one](https://stackoverflow.com/questions/280759/jquery-validate-how-to-add-a-rule-for-regular-expression-validation)) given by @Dror, to check for invalid characters in the username:
```
if (!$('#uname').val().match(/^[a-z0-9.]{1,50}$/i)) {
    // not a valid username
}
```
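Whatever the client does, the same whitelist should be checked server-side as well; a minimal Python sketch of that check (the pattern and the 50-character cap are illustrative assumptions, mirroring the jQuery snippet above):

```python
import re

# whitelist: letters, digits, and dots only -- no spaces, no other specials
USERNAME_RE = re.compile(r"^[a-z0-9.]{1,50}$", re.IGNORECASE)

def is_valid_username(name):
    # fullmatch semantics via the ^...$ anchors
    return bool(USERNAME_RE.match(name))
```

A regex literal (rather than a pattern built from a plain string) also avoids the escaping pitfall where `"\s"` in a string loses its backslash.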
Thanks | Have a look at the [validation plug-in](http://bassistance.de/jquery-plugins/jquery-plugin-validation/).
Whatever you do, validate the username at the server side too. | How to remove space and restrict special chars using jquery? | [
"",
"javascript",
"jquery",
"regex",
"validation",
""
] |
How do I get form data from HTML page using c++, as far as the basics of post and get?
**EDIT:** For CGI I am using Apache 2 on Windows; I have C++ configured and tested with Apache already. | The easiest way to access form data from an HTTP request is via [CGI](http://en.wikipedia.org/wiki/Common_Gateway_Interface). This involves reading environment variables, which is done using the [getenv](http://www.cplusplus.com/reference/clibrary/cstdlib/getenv/) function. | First of all, take a look at [webtoolkit](http://www.webtoolkit.eu/wt#/).
You might want to use that to make your life easier.
Second, you can read about network protocols.
Third, take a look at your webserver docs; they might provide an interface to create a [daemon](http://en.wikipedia.org/wiki/Daemon_(computer_software)) that will allow you to read the HTTP socket and the data that is sent over it.
On another note, next time you write a question, try to elaborate as much as possible.
Explain the use case and provide a test case.
"",
"c++",
"html",
"cgi",
""
] |
What is the r-value reference that is about to come in the next C++ standard? | Here is a really long [article](http://blogs.msdn.com/vcblog/archive/2009/02/03/rvalue-references-c-0x-features-in-vc10-part-2.aspx) from Stephan T. Lavavej. | It allows you to distinguish between code that has called you passing a reference to an r-value or an l-value. For example:
```
void foo(int &x);
foo(1); // we are calling here with the r-value 1. This would be a compilation error
int x=1;
foo(x); // we are calling here with the l-value x. This is ok
```
By using an r-value reference, we can allow passing in references to temporaries, such as in the first example above:
```
void foo(int &&x); // x is an r-value reference
foo(1); // will call the r-value version
int x=1;
foo(x); // will call the l-value version
```
This is more interesting when we want to pass the return value of a function that creates an object to another function which uses that object.
```
std::vector<int> create_vector(); // creates and returns a new vector
void consume_vector(std::vector<int> &&vec); // consumes the vector

consume_vector(create_vector()); // only the "move constructor" needs to be invoked, if one is defined
```
The move constructor acts like the copy constructor, but it is defined to take an r-value reference rather than an l-value (const) reference. It is allowed to use r-value semantics to move the data out of the temporary created in `create_vector` and into the argument to `consume_vector`, without doing an expensive copy of all of the data in the vector.
"",
"c++",
"c++11",
"rvalue-reference",
""
] |
I want my Python script to be able to read Unicode command-line arguments on Windows. But it appears that the strings in sys.argv are encoded in some local encoding, rather than Unicode. How can I read the command line in full Unicode?
Example code: `argv.py`
```
import sys
first_arg = sys.argv[1]
print first_arg
print type(first_arg)
print first_arg.encode("hex")
print open(first_arg)
```
On my PC set up for Japanese code page, I get:
```
C:\temp>argv.py "PC・ソフト申請書08.09.24.doc"
PC・ソフト申請書08.09.24.doc
<type 'str'>
50438145835c83748367905c90bf8f9130382e30392e32342e646f63
<open file 'PC・ソフト申請書08.09.24.doc', mode 'r' at 0x00917D90>
```
That's Shift-JIS encoded I believe, and it "works" for that filename. But it breaks for filenames with characters that aren't in the Shift-JIS character set—the final "open" call fails:
```
C:\temp>argv.py Jörgen.txt
Jorgen.txt
<type 'str'>
4a6f7267656e2e747874
Traceback (most recent call last):
File "C:\temp\argv.py", line 7,
in <module>
print open(first_arg)
IOError: [Errno 2] No such file or directory: 'Jorgen.txt'
```
Note—I'm talking about Python 2.x, not Python 3.0. I've found that Python 3.0 gives `sys.argv` as proper Unicode. But it's a bit early yet to transition to Python 3.0 (due to lack of 3rd party library support).
**Update:**
A few answers have said I should decode according to whatever the `sys.argv` is encoded in. The problem with that is that it's not full Unicode, so some characters are not representable.
Here's the use case that gives me grief: I have [enabled drag-and-drop of files onto .py files in Windows Explorer](https://stackoverflow.com/q/142844/60075). I have file names with all sorts of characters, including some not in the system default code page. My Python script doesn't get the right Unicode filenames passed to it via sys.argv in all cases, when the characters aren't representable in the current code page encoding.
There is certainly some Windows API to read the command line with full Unicode (and Python 3.0 does it). I assume the Python 2.x interpreter is not using it. | Here is a solution that is just what I'm looking for, making a call to the Windows `GetCommandLineArgvW` function:
[Get sys.argv with Unicode characters under Windows](http://code.activestate.com/recipes/572200/) (from ActiveState)
But I've made several changes, to simplify its usage and better handle certain uses. Here is what I use:
**`win32_unicode_argv.py`**
```
"""
win32_unicode_argv.py
Importing this will replace sys.argv with a full Unicode form.
Windows only.
From this site, with adaptations:
http://code.activestate.com/recipes/572200/
Usage: simply import this module into a script. sys.argv is changed to
be a list of Unicode strings.
"""
import sys
def win32_unicode_argv():
"""Uses shell32.GetCommandLineArgvW to get sys.argv as a list of Unicode
strings.
Versions 2.x of Python don't support Unicode in sys.argv on
Windows, with the underlying Windows API instead replacing multi-byte
characters with '?'.
"""
from ctypes import POINTER, byref, cdll, c_int, windll
from ctypes.wintypes import LPCWSTR, LPWSTR
GetCommandLineW = cdll.kernel32.GetCommandLineW
GetCommandLineW.argtypes = []
GetCommandLineW.restype = LPCWSTR
CommandLineToArgvW = windll.shell32.CommandLineToArgvW
CommandLineToArgvW.argtypes = [LPCWSTR, POINTER(c_int)]
CommandLineToArgvW.restype = POINTER(LPWSTR)
cmd = GetCommandLineW()
argc = c_int(0)
argv = CommandLineToArgvW(cmd, byref(argc))
if argc.value > 0:
# Remove Python executable and commands if present
start = argc.value - len(sys.argv)
return [argv[i] for i in
xrange(start, argc.value)]
sys.argv = win32_unicode_argv()
```
Now, the way I use it is simply to do:
```
import sys
import win32_unicode_argv
```
and from then on, `sys.argv` is a list of Unicode strings. The Python `optparse` module seems happy to parse it, which is great. | Dealing with encodings is very confusing.
I *believe* that if you're inputting data via the command line it will be encoded as whatever your system encoding is, and is not Unicode. (Even copy/paste should do this.)
So it should be correct to decode into unicode using the system encoding:
```
import codecs
import sys
first_arg = sys.argv[1]
print first_arg
print type(first_arg)
first_arg_unicode = first_arg.decode(sys.getfilesystemencoding())
print first_arg_unicode
print type(first_arg_unicode)
f = codecs.open(first_arg_unicode, 'r', 'utf-8')
unicode_text = f.read()
print type(unicode_text)
print unicode_text.encode(sys.getfilesystemencoding())
```
Running the following will output:
Prompt> python myargv.py "PC・ソフト申請書08.09.24.txt"
```
PC・ソフト申請書08.09.24.txt
<type 'str'>
<type 'unicode'>
PC・ソフト申請書08.09.24.txt
<type 'unicode'>
?日本語
```
Where the "PC・ソフト申請書08.09.24.txt" contained the text, "日本語".
(I encoded the file as UTF-8 using Windows Notepad; I'm a little stumped as to why there's a '?' at the beginning when printing. Something to do with how Notepad saves UTF-8?)
A string's 'decode' method or the unicode() builtin can be used to convert an encoding into unicode.
```
unicode_str = utf8_str.decode('utf8')
unicode_str = unicode(utf8_str, 'utf8')
```
Also, if you're dealing with encoded files you may want to use the codecs.open() function in place of the built-in open(). It allows you to define the encoding of the file, and will then use the given encoding to transparently decode the content to unicode.
So when you call `content = codecs.open("myfile.txt", "r", "utf8").read()`, `content` will be in unicode.
codecs.open:
<http://docs.python.org/library/codecs.html?#codecs.open>
If I'm misunderstanding something, please let me know.
If you haven't already I recommend reading Joel's article on unicode and encoding:
<http://www.joelonsoftware.com/articles/Unicode.html> | Read Unicode characters from command-line arguments in Python 2.x on Windows | [
"",
"python",
"windows",
"command-line",
"unicode",
"python-2.x",
""
] |
I have the following problem:
We need to find the next August. In other words, if we are at 2009-09-01 we need 2010-08-31; if we are at 2009-06-21 we need 2009-08-31.
I know I can check whether today is earlier than August 31, but I was wondering if there is another possibility. | ```
public static class DateTimeExtensions
{
    public static DateTime GetNextAugust31(this DateTime date)
    {
        return new DateTime(date.Month <= 8 ? date.Year : date.Year + 1, 8, 31);
    }
}
``` | .Net 2.0
```
DateTime NextAugust(DateTime inputDate)
{
    if (inputDate.Month <= 8)
    {
        return new DateTime(inputDate.Year, 8, 31);
    }
    else
    {
        return new DateTime(inputDate.Year + 1, 8, 31);
    }
}
``` | How to find the next august? C# | [
"",
"c#",
"datetime",
""
] |
I'm creating a grid-based game in Java and I want to implement game recording and playback. I'm not sure how to do this, although I've considered two ideas:
1. Several times every second, I'd record the entire game state. To play it back, I write a renderer to read the states and try to create a visual representation. With this, however, I'd likely have a large save file, and any playback attempts would likely have noticeable lag.
2. I could also write every key press and mouse click into the save file. This would give me a smaller file, and it could play back with less lag. However, the slightest error at the start of the game (for example, shooting 1 millisecond later) would result in a vastly different game state several minutes into the game.
What, then, is the best way to implement game playback?
Edit: I'm not sure exactly how deterministic my game is, so I'm not sure the entire game can be pieced together exactly by recording only keystrokes and mouse clicks. | A good playback mechanism is not something that can simply be added to a game without major difficulties. The best approach is to design the game infrastructure with it in mind. The [command pattern](http://en.wikipedia.org/wiki/Command_pattern) can be used to achieve such a game infrastructure.
For example:
```
public interface Command {
    void execute();
}

public class MoveRightCommand implements Command {
    private Grid theGrid;
    private Player thePlayer;

    public MoveRightCommand(Player player, Grid grid) {
        this.theGrid = grid;
        this.thePlayer = player;
    }

    public void execute() {
        thePlayer.modifyPosition(0, 1, 0, 0);
    }
}
```
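For illustration, here is the same record-and-replay idea as a minimal, runnable Python sketch (class names, the timestamped log, and the dict-based game state are all hypothetical):

```python
class Command:
    """A replayable game action (mirrors the Command interface above)."""
    def execute(self, state):
        raise NotImplementedError

class MoveRight(Command):
    def execute(self, state):
        state["x"] += 1

class Shoot(Command):
    def execute(self, state):
        state["shots"] += 1

class Recorder:
    def __init__(self):
        self.log = []  # (timestamp, command) pairs: this IS the replay file

    def run(self, timestamp, command, state):
        # live execution and logging go through one place, so the
        # replay can never diverge from what actually happened
        self.log.append((timestamp, command))
        command.execute(state)

    def replay(self, initial_state):
        # re-run the logged commands against a fresh copy of the start state
        state = dict(initial_state)
        for timestamp, command in self.log:
            command.execute(state)
        return state

rec = Recorder()
live = {"x": 0, "shots": 0}
rec.run(0.10, MoveRight(), live)
rec.run(0.25, MoveRight(), live)
rec.run(0.40, Shoot(), live)
```

Because every state change flows through `Recorder.run`, replaying the log from the same initial state reproduces the live game exactly (assuming the commands themselves are deterministic).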
And then the command can be pushed in an execution queue **both** when the user presses a keyboard button, moves the mouse or without a trigger with the playback mechanism. The command object can have a time-stamp value (relative to the beginning of the playback) for precise playback... | Shawn Hargreaves had a recent post on his blog about how they implemented replay in MotoGP. Goes over several different approaches and their pros and cons.
<http://blogs.msdn.com/shawnhar/archive/2009/03/20/motogp-replays.aspx> | Best way to implement game playback? | [
"",
"java",
"playback",
"recording",
""
] |
I'm parsing a source file, and I want to "suppress" strings. What I mean by this is to transform every string like "bla bla bla +/\*" into something like "string" that is deterministic and does not contain any characters that may confuse my parser, because I don't care about the values of the strings. One of the issues here is string formatting using e.g. "%s"; please see my remark about this below.
Take for example the following pseudo code, that may be the contents of a file I'm parsing. Assume strings start with ", and escaping the " character is done by "":
```
print(i)
print("hello**")
print("hel"+"lo**")
print("h e l l o "+
"hello\n")
print("hell""o")
print(str(123)+"h e l l o")
print(uppercase("h e l l o")+"g o o d b y e")
```
Should be transformed to the following result:
```
print(i)
print("string")
print("string"+"string")
print("string"
"string")
print("string")
print(str(123)+"string")
print(uppercase("string")+"string")
```
Currently I treat it as a special case in the code (i.e. detect the beginning of a string, and "manually" run until its end, with several sub-special cases on the way). If there's a Python library function I can use or a nice regex that may make my code more efficient, that would be great.
A few remarks:
* I would like the "start-of-string" character to be a variable, e.g. ' vs ".
* I'm not parsing Python code at this stage, but I plan to, and there the problem obviously becomes more complex because strings can start in several ways and must end in a way corresponding to the start. I'm not attempting to deal with this right now, but if there's any well established best practice I would like to know about it.
* The thing bothering me the most about this "suppression" is the case of string formatting with the likes of '%s', which are meaningful tokens. I'm currently not dealing with this and haven't completely thought it through, but if any of you have suggestions about how to deal with it, that would be great. Please note I'm not interested in the specific type or formatting of the in-string tokens; it's enough for me to know that there are tokens inside the string (and how many). A remark that may be important here: my tokenizer is not nested, because my goal is quite simple (I'm not compiling anything...).
* I'm not quite sure about the escaping of the start-string character. What would you say are the common ways this is implemented in most programming languages? Is the assumption of double-occurrence (e.g. "") or any set of two characters (e.g. '\"') to escape enough? Do I need to treat other cases (think of languages like Java, C/C++, PHP, C#)? | Option 1: To sanitize Python source code, try the built-in [tokenize](http://docs.python.org/library/tokenize.html) module. It can correctly find strings and other tokens in any Python source file.
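For example, a minimal sketch of that tokenize-based approach (written for Python 3, where the module works much the same as the 2.x version the question targets; the sample source line is hypothetical):

```python
import io
import tokenize

def suppress_strings(source):
    # replace every string-literal token with the fixed text "string",
    # leaving all other tokens (names, numbers, comments) untouched
    tokens = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.STRING:
            tokens.append((tokenize.STRING, '"string"'))
        else:
            tokens.append((tok.type, tok.string))
    return tokenize.untokenize(tokens)

cleaned = suppress_strings("print('hel' + 'lo**')\n")
```

Because this works on real tokens rather than raw text, escapes, quote styles, and triple-quoted strings are all handled for free.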
Option 2: Use [pygments](http://pygments.org/languages/) with HTML output, and replace anything in blue (etc.) with `"string"`. pygments supports a few dozen languages.
Option 3: For most languages, you can build a custom regexp substitution. For example, the following sanitizes Python source code (but it doesn't work if the source file contains `"""` or `'''`):
```
import re
sanitized = re.sub(r'(#.*)|\'(?:[^\'\\]+|\\.)*\'|"(?:[^"\\]+|\\.)*"',
lambda match: match.group(1) or '"string"', source_code)
```
The regexp above works properly even if the strings contain backslashes (`\"`, `\\`, `\n`, `\\`, `\\"`, `\\\"` etc. all work fine).
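As a quick sanity check of that claim, the same substitution wrapped in a function, fed a hypothetical input containing both an escaped quote and a comment:

```python
import re

def sanitize(source_code):
    # the substitution from the regexp option above, wrapped for reuse:
    # comments (group 1) are kept verbatim; string literals become "string"
    return re.sub(r'(#.*)|\'(?:[^\'\\]+|\\.)*\'|"(?:[^"\\]+|\\.)*"',
                  lambda match: match.group(1) or '"string"', source_code)

demo = sanitize('print("hel\\"lo") # keep "this" comment')
```

The escaped quote inside the literal does not end the match, and the quotes inside the comment are left alone because the comment alternative wins first.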
When you are building your regexp, make sure to match comments (so your regexp substitution won't touch strings inside comments) and regular expression literals (e.g. in Perl, Ruby and JavaScript), and pay attention that you match backslashes and newlines properly (e.g. in Perl and Ruby a string can contain a newline). | Nowhere do you mention whether you take an approach using a [lexer](http://en.wikipedia.org/wiki/Lexical_analysis) and [parser](http://en.wikipedia.org/wiki/Parsing). If in fact you do not, have a look at e.g. the [tokenize](http://docs.python.org/library/tokenize.html) module (which is probably what you want), or the 3rd party module [PLY](http://www.dabeaz.com/ply/) (Python Lex-Yacc). Your problem needs a systematic approach, and these tools (and others) provide it.
(Note that once you have tokenized the code, you can apply *another* specialized tokenizer to the contents of the strings to detect special formatting directives such as `%s`. In this case a regular expression may do the job, though.) | Parsing in Python: what's the most efficient way to suppress/normalize strings? | [
"",
"python",
"string",
"parsing",
""
] |
I'm a little lost in how to cast templates. I have a function foo which takes a parameter of type `ParamVector<double>*`. I would like to pass in a `ParamVector<float>*`, and I can't figure out how to overload the casting operator for my `ParamVector` class, and Google isn't helping me that much. Does anyone have an example of how to do this? Thanks.
EDIT: Adding some code. Sorry, I'm an idiot and didn't phrase the original question well at all:
```
template<class T> class ParamVector
{
public:
    vector<T> gnome;
    vector<T> data_params;
};

template<class T> class ParamVectorConsumer
{
public:
    ParamVector<T> test;
};
ParamVector<float> tester;
ParamVectorConsumer<double> cons;
cons.test = tester;
```
will fail to compile. I would like to know how to write it so that I can cast the float version of tester to a ParamVector<double>. Thanks
EDIT2: Casting was the wrong word. I don't mind writing extra code, I just need to know how to get this to be accepted by the compiler so that I can write some sort of conversion code. | I'm not sure but maybe you need some like this:
```
template< typename TypeT >
struct ParamVector
{
    // the user-declared constructors below suppress the implicit
    // default constructor, so declare one explicitly
    ParamVector() {}

    template< typename NewTypeT >
    operator ParamVector< NewTypeT >()
    {
        ParamVector< NewTypeT > result;
        // do some conversion things
        return result;
    }

    template< typename NewTypeT >
    ParamVector( const ParamVector< NewTypeT > &rhs )
    {
        // convert
    }

    template< typename NewTypeT >
    ParamVector& operator=( const ParamVector< NewTypeT > &rhs )
    {
        // do some conversion things
        return *this;
    }
};

ParamVector< double > d1;
ParamVector< float > f1;
f1 = d1;
```
You can choose to use the conversion operator or operator= - I've provided both in my example. | Well, you can't. Each different actual template parameter makes an entirely new class, which has no inheritance relation with any other class made from that template with a different actual argument.
No relationship. Well, except that each provides the same interface, so that *inside a template* you can handle them the same.
But neither the static types nor the dynamic types have any relation.
Let me drop back here, and explain.
When I declare a pointer to classtype, like
```
Foo *fp;
```
fp has what we call a static type, of pointer-to Foo. If class Bar is a subclass of Foo, and I point fp at new Bar:
```
fp = new Bar1();
```
then we say that the object pointed to by fp has the dynamic type of Bar.
if Bar2 also publicly derives from Foo, I can do this:
```
fp = new Bar2();
```
and without ever even knowing what fp points to, I can call virtual methods declared in Foo, and have the compiler make sure that the method defined in the dynamic type pointed to is what's called.
For a `template< typename T > struct Baz { void doSomething(); };`
`Baz<int>` and `Baz<float>` are two *entirely different class types*, with no relationship.
The only "relationship" is that I can call doSomething() on both, but since the static types have no relationship, if I have a `Baz<int> *bi`, I can't point it at a `Baz<float>`. Not even with a cast. The compiler has no way to "translate" a call to the `Baz<int>::doSomething()` method into a call to a `Baz<float>::doSomething()` method. That's because *there is no "Baz method", there is no Baz*, there are only `Baz<int>s` and `Baz<float>s`, and `Baz<whatevers>`, but there's no common parent. Baz is not a class, Baz is a template, a set of instructions about how to make a class if and only if we have a T parameter that's bound to an actual type (or to a constant).
Now there is one way I can treat those `Baz`es alike: in a template, they present the same interface, and the compiler, if it knows what kind of Baz we're really dealing with, can make a *static* call to that method (or a static access of a member variable).
But a template is not code, a template is meta-code, the instructions of how to synthesize a class. A "call" in a template is not a call, it's an instruction of how to write the code to make a call.
So. That was long-winded and confusing. Outside of a template definition, there is no relationship between a `ParamVector<float>` and a `ParamVector<double>`. So your assignment can't work.
Well. Almost.
Actually, with partial application of templates, you can write a *template function* which gives a "recipe" for how to transform a `ParamVector<T>` into a `ParamVector<U>`. Notice the T and the U. If you can write code to turn any kind of ParamVector, regardless of actual template parameter, into any other kind of ParamVector, you can package that up as a template function, and the compiler will instantiate it for, for example, `ParamVector<float>` to `ParamVector<double>`.
That probably involves making a `ParamVector<U>`, and transforming each T in the `ParamVector<T>` into a U to put in the `ParamVector<U>`. Which still won't let you assign to a `ParamVectorConsumer<T>`.
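That "recipe" can be sketched as a free function template (the names here are illustrative, not from any library); each element is converted T to U by the ordinary implicit conversion, so float to double just works:

```cpp
// Element-wise conversion between two instantiations of ParamVector.
// The class layout mirrors the one in the question.
#include <cassert>
#include <vector>

template <typename T>
struct ParamVector {
    std::vector<T> gnome;
    std::vector<T> data_params;
};

template <typename U, typename T>
ParamVector<U> convert_param_vector(const ParamVector<T>& src) {
    ParamVector<U> out;
    // assign() copies the range, converting each element T -> U
    out.gnome.assign(src.gnome.begin(), src.gnome.end());
    out.data_params.assign(src.data_params.begin(), src.data_params.end());
    return out;
}
```

With something like this, the assignment in the question becomes `cons.test = convert_param_vector<double>(tester);` - an explicit, named conversion instead of a cast.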
So maybe you want to have both templates and inheritance. In that case, you can say that all ParamVectors, regardless of type, inherit from some non-template class. And then there would be a relationship between ParamVectors: they'd all be sibling subclasses of that base class. | c++ template casting | [
"",
"c++",
"templates",
""
] |
I am trying to build my treeview at runtime from a DataTable that is returned from a LINQ query. The fields returned are:
NAME = CaseNoteID | ContactDate | ParentNote
TYPE = Guid | DateTime | Guid
The ParentNote field matches an entry in the CaseNoteID column. The Select(filter) is giving me a runtime error of ***Cannot find column [ea8428e4].*** That alphanumeric is the first section of one of the Guids. When I step through my code, filter = `"ParentNote=ea8428e4-1274-42e8-a31c-f57dc2f189a4"`
What am I missing?
```
var tmpCNoteID = dr["CaseNoteID"].ToString();
var filter = "ParentNote="+tmpCNoteID;
DataRow[] childRows = cNoteDT.Select(filter);
``` | Try enclosing the GUID with single quotes:
```
var filter = "ParentNote='"+tmpCNoteID+"'";
``` | I know this is an old thread, but I wanted to add an addendum to it. When using the IN operator with a Guid (ex: ParentNote IN ( , , etc. ) ) then single quotes are no longer accepted. In that case, the CONVERT method (suggested by granadaCoder) is necessary. (Single quotes raise an exception about comparing a Guid to a string with the '=' operator... which we actually aren't using.)
**Details:** I inherited some legacy code that built a large filter string in the format: `MyColumn = '11111111-2222-3333-4444-555555555555' OR MyColumn = '11111111-2222-3333-4444-555555555555' ....`
When the number of guids (and, therefore the number of OR clauses) got to be too large, this caused a stack overflow exception. By replacing the numerous OR clauses with an IN clause, I was able to set the filter without an exception. But using an IN clause means having to use the CONVERT approach. | DataTable Select() with Guids | [
"",
"c#",
".net-3.5",
"datatable",
"guid",
""
] |
**I've just discovered that regtlib.exe appears to be missing from Windows 7** (and apparently from Vista as well).
I've just installed Windows 7 RC in a VM and I'm attempting to build our existing projects on the new OS. The projects are c/c++ based and I'm using visual studio 2008. In order to build these projects I need to register several tlb files that are referenced within the code base.
Has anyone also encountered this problem? And, has anyone managed to solve this?
Thanks. | Yeah regtlib was removed from vista and up. As far as I know, all it does is call LoadTypeLibEx with the `REGKIND_REGISTER` flag (<http://msdn.microsoft.com/en-us/library/ms221249.aspx>). Maybe you could write a simple replacement. | Just came across this issue (couldn't add any components to a VB6 project on Win7). This post ([Error accessing the system registry in VB 6 IDE](https://stackoverflow.com/questions/104383/error-accessing-the-system-registry-in-vb-6-ide)) pointed to regtlib (which is missing from Win7). I just
set the VB6 start menu icon to 'Run As Administrator' and it worked fine for adding components and should fix any problems relating to updating the registry as well.
Regards
Ian | Windows 7 and the case of the missing regtlib | [
"",
"c++",
"visual-studio",
"windows-7",
"regtlib",
""
] |
I have the name of a property and need to find its value within a class. What is the fastest way of getting to this value? | I am making the assumption that you have the name of the property at runtime, not while coding...
Let's assume your class is called TheClass and it has a property called TheProperty:
```
object GetThePropertyValue(object instance)
{
Type type = instance.GetType();
PropertyInfo propertyInfo = type.GetProperty("TheProperty");
return propertyInfo.GetValue(instance, null);
}
``` | I assume you mean you have the name of the property as a string. In this case, you need to use a bit of reflection to fetch the property value. In the example below the object containing the property is called `obj`.
```
var prop = obj.GetType().GetProperty("PropertyName");
var propValue = prop.GetValue(obj, null);
```
Hope that helps. | Property Name and need its value | [
"",
"c#",
"properties",
""
] |
I am trying to use a library (.dll) in my project. Everything seems to
be set up fine. It all works in release mode.
When I go debug on it, I get this darn error on startup:
ldr: ... application couldn't be initialized error (or similar, I translated it)
I learned that this has to do with manifest files.
I fumbled around a bit, in the project settings, but nothing really worked / I couldn't get my head around it.
The error persists.
Does anyone know a quick solution to this?
I don't care if it is dirty.
I think I liked dll-Hell better than manifest-Hell!!
The solution:
The wrong version of the .dlls got loaded. I didn't know that they were still
lying around on the system. Dependency Walker is a great tool and set me on the right track. So I will accept this answer.
Thanks a lot! | I always use [Dependency Walker](http://www.dependencywalker.com/) for debugging this sort of thing. It will tell you which dependencies your dll is missing. | Are you trying to run Debug-mode executables on a PC without Visual Studio installed? If so, you'll need to copy the Debug CRT DLLs onto it. See here: <http://msdn.microsoft.com/en-us/library/aa985618(VS.80).aspx> | application couldn't be initialized error (manifest file problem) | [
"",
"c++",
"dll",
"visual-studio-2005",
"manifest",
""
] |
In most languages, like C# for example, given a string you can test (boolean) whether that string contains another string - basically a substring of that string.
```
string x = "test2";
if(x.contains("test"))
// do something
```
How can I do this in a simple way with Javascript/Jquery? | This is done with [indexOf](https://developer.mozilla.org/en/Core_JavaScript_1.5_Reference/Objects/String/indexOf), however it returns -1 instead of False if not found.
*Syntax*
`string.indexOf(searchValue[, fromIndex])`
*Parameters*
`searchValue` -
A string representing the value to search for.
`fromIndex` -
The location within `string` to start the search from. It can be any integer between 0 and the length of `string`. The default value is 0.
*Return*
The first index in `string` at which the start of the substring can be found, or -1 if `string` does not contain any instances of the substring. | As Paolo and cletus said, you can do it using indexOf().
Valid to mention is that it is a javascript function, not a jQuery one.
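In plain JavaScript that boils down to something like this (`contains` is just an illustrative helper name):

```javascript
// Equivalent of C#'s x.Contains("test"), built on indexOf's -1 convention
function contains(haystack, needle) {
    return haystack.indexOf(needle) !== -1;
}

var x = "test2";
if (contains(x, "test")) {
    // do something
}
```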
If you want a jQuery function to do this you can use it:
```
jQuery.fn.contains = function(txt) { return jQuery(this).text().indexOf(txt) >= 0; };
``` | jQuery String Contains Manipulation? | [
"",
"javascript",
"jquery",
"string",
""
] |
I need to resize a picture to a fixed size. But it has to keep the factors between the width and height.
Say I want to resize a picture from `238 (w) X 182 (h)` to `210 / 150`
What I do now is:
```
Original width / target width = 1.133333
Original Height / target Height = 1.213333
```
Now I take the smallest factor.
Now I always have the right width since `238 / 1.133333 = 210`.
But the height is still `160`.
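The factor arithmetic can be sanity-checked with a short script (Python, purely for illustration). Dividing by the smallest factor gives 210x161 here (the 160 above comes from truncating instead of rounding), which then has to be cropped down to 150; dividing by the largest factor instead gives 196x150, which fits inside the box and only needs padding:

```python
# Two ways to map a 238x182 source onto a 210x150 box, rounded to pixels.
def fit_size(orig_w, orig_h, box_w, box_h):
    # largest factor == smallest scale: the image fits inside, pad the rest
    scale = min(box_w / orig_w, box_h / orig_h)
    return round(orig_w * scale), round(orig_h * scale)

def fill_size(orig_w, orig_h, box_w, box_h):
    # smallest factor == largest scale: the image covers the box, crop overshoot
    scale = max(box_w / orig_w, box_h / orig_h)
    return round(orig_w * scale), round(orig_h * scale)
```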
How do I get the height down to `150` without ruining the pic?
Do I need to crop? If so how? | This solution is basically the same as Can Berk Güder's, but after having spent some time writing and commenting, I felt like posting.
This function creates a thumbnail that is exactly as big as the size you give it.
The image is resized to best fit the size of the thumbnail. If it does not fit exactly in both directions, it's centered in the thumbnail. Extensive comments explain the goings-on.
```
function thumbnail_box($img, $box_w, $box_h) {
//create the image, of the required size
$new = imagecreatetruecolor($box_w, $box_h);
if($new === false) {
//creation failed -- probably not enough memory
return null;
}
//Fill the image with a light grey color
//(this will be visible in the padding around the image,
//if the aspect ratios of the image and the thumbnail do not match)
//Replace this with any color you want, or comment it out for black.
//I used grey for testing =)
$fill = imagecolorallocate($new, 200, 200, 205);
imagefill($new, 0, 0, $fill);
//compute resize ratio
$hratio = $box_h / imagesy($img);
$wratio = $box_w / imagesx($img);
$ratio = min($hratio, $wratio);
//if the source is smaller than the thumbnail size,
//don't resize -- add a margin instead
//(that is, dont magnify images)
if($ratio > 1.0)
$ratio = 1.0;
//compute sizes
$sy = floor(imagesy($img) * $ratio);
$sx = floor(imagesx($img) * $ratio);
//compute margins
//Using these margins centers the image in the thumbnail.
//If you always want the image to the top left,
//set both of these to 0
$m_y = floor(($box_h - $sy) / 2);
$m_x = floor(($box_w - $sx) / 2);
//Copy the image data, and resample
//
//If you want a fast and ugly thumbnail,
//replace imagecopyresampled with imagecopyresized
if(!imagecopyresampled($new, $img,
$m_x, $m_y, //dest x, y (margins)
0, 0, //src x, y (0,0 means top left)
$sx, $sy,//dest w, h (resample to this size (computed above)
imagesx($img), imagesy($img)) //src w, h (the full size of the original)
) {
//copy failed
imagedestroy($new);
return null;
}
//copy successful
return $new;
}
```
Example usage:
```
$i = imagecreatefromjpeg("img.jpg");
$thumb = thumbnail_box($i, 210, 150);
imagedestroy($i);
if(is_null($thumb)) {
/* image creation or copying failed */
header('HTTP/1.1 500 Internal Server Error');
exit();
}
header('Content-Type: image/jpeg');
imagejpeg($thumb);
``` | This doesn't crop the picture, but leaves space around the new image if necessary, which I think is a better approach (than cropping) when creating thumbnails.
```
$w = 210;
$h = 150;
$orig_w = imagesx($original);
$orig_h = imagesy($original);
$w_ratio = $orig_w / $w;
$h_ratio = $orig_h / $h;
$ratio = $w_ratio > $h_ratio ? $w_ratio : $h_ratio;
$dst_w = $orig_w / $ratio;
$dst_h = $orig_h / $ratio;
$dst_x = ($w - $dst_w) / 2;
$dst_y = ($h - $dst_h) / 2;
$thumbnail = imagecreatetruecolor($w, $h);
imagecopyresampled($thumbnail, $original, $dst_x, $dst_y,
0, 0, $dst_w, $dst_h, $orig_w, $orig_h);
``` | Resize/crop/pad a picture to a fixed size | [
"",
"php",
"resize",
"crop",
""
] |
I'm looking for something that works in PHP and is similar to crystal reports. I basically need to have a layout setup that means I can output invoices just by inserting the data, and then send it to a printer.
The closest I've found so far is PDFB, but It's a bit of a pain as it needs to have precise positioning.
I'd like to have something that could generate an invoice based on a template (preferably XML based) and then output it to a form easy for us to print (PostScript would be nice!)
It needs to have support for barcodes too (though these can be generated as a GD image)
Another requirement is that this must be FLOSS | Use XML + XSL:FO with [Apache FOP](http://xmlgraphics.apache.org/fop/) via [PHP-JavaBridge](http://php-java-bridge.sourceforge.net/pjb/).
Here is how: <http://wiki.apache.org/xmlgraphics-fop/HowTo/PHPJavaBridge>
> PostScript would be nice!
Many PostScript printers understand PDF too. | I've used `Spreadsheet_Excel_Writer` in PHP, and it's good enough. Not WYSIWYG, but it does generate XLS files, and I'm happy with it. Afterwards, you can [use OpenOffice macro](http://www.oooforum.org/forum/viewtopic.phtml?t=3772) to convert the document to PDF. It works from command line, ergo, it works from PHP scripts too.
Or here's an even better way.
Use OpenOffice to convert a Smarty template. Smarty is a cool templating engine for PHP, I recommend it for this purpose. Then generate pure HTML using PHP with Smarty. Finally, just convert the generated HTML into PDF using aforementioned method.
Reporting Revolutionized (tm).
**EDIT Jun6 2009**
Modded down? Ah, nevermind.
Anyways, this method works on a headless server without running X11. I've taken the script from the [mentioned link](http://www.oooforum.org/forum/viewtopic.phtml?t=3772) (except I put it in preexisting collection "Standard" instead of "DannysLibrary") and then I've ran this command from Windows machine using PuTTY, and X was shut down on remote machine, and DISPLAY variable was not set, and ... well, in any case, there's no way OOo could find X11 to connect to.
```
$ openoffice.org -invisible -headless "macro:///Standard.Conversion.ConvertWordToPDF(`pwd`/logaritamska.doc)"
```
This works and I'm sure this would work great for anyone who'd need conversion from another format into PDF, including production of printable reports from HTML. By editing the macro you could, perhaps, even get OOo to read directly from stdin or from your temporary service URL, and output into predefined file. The script on the [link](http://www.oooforum.org/forum/viewtopic.phtml?t=3772) is quite simple once you have the elementary code to expand.
Summary:
* generate reports as XLS or HTML
* convert them
* even though it's OOo, it works on headless machines
**EDIT Jun 9 2009**
I've tried to implement an online converter this way. You should make PHP run under the same user under which you created macros. This user apparently cannot be www-data. I've tried to use suphp, but for some reason it didn't change the user properly (`posix_getuid()` kept returning 33 which is www-data). I'll edit this once I fix this.
**EDIT Jun 26 2009**
Guess it took me a while to report back. Yes, this works with suphp. I'm however not in position to show it live, since the only server I have runs a relatively critical web app which didn't have professional security auditing. This means one of the things we depend on to protect the backend is that the user under which frontend runs is a very very unprivileged user (such as www-data). Don't ask :-)
Hope this helps someone: yes, converting into PDF with OO.o is quite realistic. There's even some remote calling support in OO.o but I didn't study that just for purposes of writing this. | Something like Crystal Reports for PHP? | [
"",
"php",
"report",
"invoice",
""
] |
Is it possible to send "100 Continue" HTTP status code, and then later some other status code after processing entire request using Java Servlet API (HttpServletResponse)?
I can't find any definitive "No" answer, although API doesn't seem to support it. | I assume you mean "100 Continue".
The answer is: no, you can't (at least not the way it's intended, as [provisional response](http://greenbytes.de/tech/webdav/rfc2616.html#rfc.section.10.1)). In general, the servlet engine will do it automatically, when the request requires it. Of course this makes it impossibe for the servlet to *prevent* sending the 100 status -- this issue is a known problem in the Servlet API, and has been known for what feels like eons now. | I know that [Jetty will wait until getReader() or getInputStream() is called](http://www.eclipse.org/jetty/documentation/current/jetty-1xx-responses.html) before it sends a 100. I think this is the behavior you are looking for. I don't know what Tomcat does. | Sending 100 Continue using Java Servlet API | [
"",
"java",
"http",
"servlets",
"http-status-code-100",
""
] |
In eclipse, the m2eclipse plugin will prompt me for my credentials when I'm building a maven2 project. And it compiles fine.
But if I now try to run "mvn install" from the command line, I get an artifact not found error. How do I add the username/password into my pom.xml in order to solve this problem? | Which username/password are you talking about? If it's the username defined in the Nexus repository, then you can define it in `settings.xml`, where you defined the Nexus repository.
```
<servers>
<server>
<id>releases</id>
<username>xxxxxxxxx</username>
<password>yyyyyyyy</password>
</server>
</servers>
``` | As far as I know there are no maven xml tags to configure that. Of course you could try prefixing the domain name with username and password like so:
```
http://username:password@yournexusserver/..
``` | How to pass in credentials when connecting to sonatype nexus (anonymous login disabled)? | [
"",
"java",
"maven-2",
"command-line",
"nexus",
"sonatype",
""
] |
Before starting to write this question, I was trying to solve the following:
```
// 1. navigate to page
// 2. wait until page is downloaded
// 3. read and write some data from/to iframe
// 4. submit (post) form
```
The problem was that if an iframe exists on a web page, the DocumentCompleted event would get fired more than once (after each document has been completed). It was highly likely that the program would have tried to read data from a DOM that was not completed and, naturally, fail.
But suddenly, while writing this question, the ***'What if' monster*** inspired me, and I fixed the problem that I was trying to solve. As I failed at Googling this, I thought it would be nice to post it here.
```
private int iframe_counter = 1; // needs to be 1, to pass DCF test
public bool isLazyMan = default(bool);
/// <summary>
/// LOCK to stop inspecting DOM before DCF
/// </summary>
public void waitPolice() {
while (isLazyMan) Application.DoEvents();
}
private void webBrowser1_Navigating(object sender, WebBrowserNavigatingEventArgs e) {
if(!e.TargetFrameName.Equals(""))
iframe_counter --;
isLazyMan = true;
}
private void webBrowser1_DocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e) {
if (!((WebBrowser)sender).Document.Url.Equals(e.Url))
iframe_counter++;
if (((WebBrowser)sender).Document.Window.Frames.Count <= iframe_counter) {//DCF test
DocumentCompletedFully((WebBrowser)sender,e);
isLazyMan = false;
}
}
private void DocumentCompletedFully(WebBrowser sender, WebBrowserDocumentCompletedEventArgs e){
//code here
}
```
For now at least, my 5m hack seems to be working fine.
Maybe I am really failing at querying Google or MSDN, but I cannot find:
"How to use webbrowser control DocumentCompleted event in C# ?"
**Remark:** After learning a lot about the webbrowser control, I found that it does FuNKY stuff.
Even if you detect that the document has completed, in most cases it won't stay like that forever. A page update can be done in several ways - frame refresh, AJAX-like request or server-side push (you need to have some control that supports asynchronous communication and has HTML or JavaScript interop). Also, some iframes will never load, so it's not the best idea to wait for them forever.
I ended up using:
```
if (e.Url != wb.Url)
``` | You might want to know the AJAX calls as well.
Consider using this:
```
private void webBrowser_DocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)
{
string url = e.Url.ToString();
if (!(url.StartsWith("http://") || url.StartsWith("https://")))
{
// in AJAX
}
if (e.Url.AbsolutePath != this.webBrowser.Url.AbsolutePath)
{
// IFRAME
}
else
{
// REAL DOCUMENT COMPLETE
}
}
``` | I have yet to find a working solution to this problem online. Hopefully this will make it to the top and save everyone the months of tweaking I spent trying to solve it, and the edge cases associated with it. I have fought over this issue over the years as Microsoft has changed the implementation/reliability of isBusy and document.readystate. With IE8, I had to resort to the following solution. It's similar to the question/answer from Margus with a few exceptions. My code will handle nested frames, javascript/ajax requests and meta-redirects. I have simplified the code for clarity sake, but I also use a timeout function (not included) to reset the webpage after if 5 minutes domAccess still equals false.
```
private void m_WebBrowser_BeforeNavigate(object pDisp, ref object URL, ref object Flags, ref object TargetFrameName, ref object PostData, ref object Headers, ref bool Cancel)
{
//Javascript Events Trigger a Before Navigate Twice, but the first event
//will contain javascript: in the URL so we can ignore it.
if (!URL.ToString().ToUpper().StartsWith("JAVASCRIPT:"))
{
//indicate the dom is not available
this.domAccess = false;
this.activeRequests.Add(URL);
}
}
private void m_WebBrowser_DocumentComplete(object pDisp, ref object URL)
{
this.activeRequests.RemoveAt(0);
//if pDisp Matches the main activex instance then we are done.
if (pDisp.Equals((SHDocVw.WebBrowser)m_WebBrowser.ActiveXInstance))
{
//Top Window has finished rendering
//Since it will always render last, clear the active requests.
//This solves Meta Redirects causing out of sync request counts
this.activeRequests.Clear();
}
else if (m_WebBrowser.Document != null)
{
//Some iframe completed dom render
}
//Record the final complete URL for reference
if (this.activeRequests.Count == 0)
{
//Finished downloading page - dom access ready
this.domAccess = true;
}
}
``` | How to use WebBrowser control DocumentCompleted event in C#? | [
"",
"c#",
"automation",
"c#-2.0",
"webbrowser-control",
""
] |
I am trying to implement NOT NULL constraint in the customer table which is created as:
```
CREATE TABLE CUSTOMER(
cust_id INT(5) NOT NULL AUTO_INCREMENT,
PRIMARY KEY(cust_id),
first_name VARCHAR(25) NOT NULL,
last_name VARCHAR(25) NOT NULL,
email VARCHAR(25) NOT NULL,
password VARCHAR(25) NOT NULL,
gender VARCHAR(1) NOT NULL,
city VARCHAR(25) NOT NULL,
dob DATE NOT NULL,
pin INT NOT NULL);
```
After this, from a form I am passing values and inserting them as:
```
$sql= "INSERT INTO customer(first_name,last_name,email,password,gender,city,dob,pin) VALUES('$first_name','$last_name','$email_add','$password','$gender','$city','$DOB','$pin')";
```
However, if I am passing blank values for the fields, MySQL seems to insert them. What can be the possible problem?
$sql= "INSERT INTO customer(first_name,last_name,email,password,gender,city,dob,pin) VALUES('$first_name',".
($last_name?"'".$last_name."'":"NULL").
", '$email_add','$password','$gender','$city','$DOB','$pin')";
```
This will be:
```
VALUES('ahm',NULL,'email@a.ddr' ...
```
And because of NOT NULL nothing will be inserted. | I suspect that's because, in a real DBMS, as opposed to Oracle :-), there is a distinction between a blank value and a NULL value.
Blank strings are perfectly valid for `NOT NULL` fields. The only value you shouldn't be able to put in a `NOT NULL` field is, hang on, trying to remember, ... NULL. Yes, that's it. NULL. :-)
> Aside: Oracle made a decision many moons ago to treat empty VARCHARs as NULLs and it still haunts them (or it would if I wasn't the only one that refused to use it because of that problem).
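This is easy to demonstrate; a quick sketch using SQLite purely for illustration (MySQL's `NOT NULL` behaves the same way for string columns - `''` is a value, `NULL` is the absence of one):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (first_name VARCHAR(25) NOT NULL)")

# a blank string satisfies the constraint...
conn.execute("INSERT INTO customer (first_name) VALUES ('')")

# ...but an actual NULL is rejected
try:
    conn.execute("INSERT INTO customer (first_name) VALUES (NULL)")
    null_rejected = False
except sqlite3.IntegrityError:
    null_rejected = True
```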
If you want your PHP code to work like Oracle (where blanks become NULLs), you're going to have to pre-process the string, something like (pseudo-code):
```
if $first_name == "":
$first_name = "NULL"
else:
$first_name = "'" + $first_name + "'"
: : :
$sql= "INSERT INTO customer(" +
"first_name," +
"last_name," +
"emailpassword," +
"gender," +
"city," +
"dob," +
"pin" +
") VALUES (" +
$first_name + "," +
"'$last_name'" + "," +
"'$email_add'" + "," +
"'$password'" + "," +
"'$gender'" +
",'$city'" + "," +
"'$DOB'" + "," +
"'$pin'" +
")";
```
and so on for the other fields which can be NULL as well. I've only shown how to do it for `$first_name`.
That will cause your first\_name column in the `INSERT` statement to be either `NULL` or the value of `$first_name` surrounded by single quotes.
Keep in mind all these fields should be checked and/or modified to prevent SQL injection attacks. | MySql NOT NULL Constraint doesnot work | [
"",
"php",
"mysql",
"null",
"constraints",
"mysqli",
""
] |
How can I convert a .NET exe to Win32 exe? (I don't have the code)
The purpose is to run the application in Linux using wine. I presume that .NET exe cannot be run in wine and I don't want to use mono. | depending on what version of .NET it is and what libraries it makes use of you could try running it under Mono without compiling the IL down to native code.
Most Linux distributions have it available under their package management systems.
see: <http://www.mono-project.com/Main_Page> for more details
the alternative is to use NGen to do the compiling (<http://blogs.msdn.com/clrcodegeneration/archive/2007/09/15/to-ngen-or-not-to-ngen.aspx>). but i'm not sure that would work under WINE. | Depending on your framework version it might work with Wine
[.Net Framework compability in Wine](http://appdb.winehq.org/objectManager.php?sClass=application&iId=2586) | How to convert a .NET exe to native Win32 exe? | [
"",
"c#",
"winapi",
""
] |
I have a Windows Forms app written in C#. The main\_form class instantiates an AccessProcess named AccessProcessWorker, which is a class that inherits from BackgroundWorker. main\_form then initializes the worker with the following code:
```
AccessProcessWorker.DoWork += new DoWorkEventHandler(processWorker.worker_DoWork);
AccessProcessWorker.RunWorkerCompleted += new RunWorkerCompletedEventHandler(processWorkerCompleted);
AccessProcessWorker.ProgressChanged += new ProgressChangedEventHandler(processProgressChanged);
```
This application has just gone from POC to "make it work fast".
I wrote this app to work against an Access database but now want to make it go against MS Sql, but leave the option to work against access also. So, I could do something ugly like instantiate and initialize a SqlProcessWorker or AccessProcessWorker based on some UI selection made by the user. But what I'd like to do is make it so main\_form always creates something like an IProcess so I didn't have to add logic to main\_form every time there is a new ProcessWorker. The problem in my design is that the initializing breaks when I do it the way I described.
If anyone has any ideas, or needs further clarification, please let me know. | What you look for is called "dependency injection". | At some point you will need to instantiate the correct type, but [The Factory Pattern](http://en.wikipedia.org/wiki/Factory_method_pattern) is usually the go-to here. Now, that may be a bit much if you will only ever have one of two types to 'new' in order to get your IProcess object. | C# OO Design Question | [
"",
"c#",
"oop",
""
] |
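The factory idea from the answer above, as a language-neutral sketch (Python here; the worker names mirror the question, everything else is made up):

```python
# main_form asks the factory for a worker and only ever sees the common
# interface; adding a new backend means adding one entry to the map.
class AccessProcessWorker:
    def backend(self):
        return "access"

class SqlProcessWorker:
    def backend(self):
        return "sql"

WORKERS = {"access": AccessProcessWorker, "sql": SqlProcessWorker}

def create_worker(selection):
    try:
        return WORKERS[selection]()
    except KeyError:
        raise ValueError("unknown backend: %r" % selection)
```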
I'm stuck here again. I have a database with over 120,000 coordinates that I need to be displayed on a Google Map integrated in my application. The thing is, and I've found out the hard way, that simply looping through all of the coordinates, creating an individual marker for each and adding it using the addOverlay function is killing the browser. So that definitely has to be the wrong way to do this. I've read a bit on clustering or zoom-level bunching - I do understand that there's no point in rendering all of the markers, especially since most of them won't be seen in non-rendered parts of the map, except I have no idea how to get this to work.
How do I fix this here. Please guys I need some help here :( | There is a good comparison of various techniques here <http://www.svennerberg.com/2009/01/handling-large-amounts-of-markers-in-google-maps/>
However, given your volume of markers, you definitely want a technique that only renders the markers that should be seen in the current view (assuming that number is modest - if not there are techniques in the link for doing sensible things) | If you really have more than 120,000 items, there is no way that any of the client-side clusterers or managers will work. You will need to handle the markers server-side.
There is a good discussion [here](http://groups.google.com/group/Google-Maps-API/browse_thread/thread/4edb7ccc87a00bb3/701a16af04ed7fb5) with some options that may help you.
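Most of those server-side options reduce to the same core idea: snap each point to a grid cell sized for the current zoom level and return one marker per occupied cell. A rough sketch (hypothetical helper, shown in Python):

```python
from collections import defaultdict

def cluster(points, cell_size):
    """Group (lat, lng) points into grid cells; return one centroid per cell."""
    cells = defaultdict(list)
    for lat, lng in points:
        key = (int(lat // cell_size), int(lng // cell_size))
        cells[key].append((lat, lng))
    return [
        (sum(p[0] for p in pts) / len(pts),   # centroid latitude
         sum(p[1] for p in pts) / len(pts),   # centroid longitude
         len(pts))                            # how many markers it stands for
        for pts in cells.values()
    ]
```

With 120,000 rows this would run in the database or in PHP on each view change, so the browser only ever receives a few dozen markers.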
**Update:** I've posted this on SO before, but [this tutorial](http://www.appelsiini.net/2008/11/introduction-to-marker-clustering-with-google-maps) describes a server-side clustering method in PHP. It's meant to be used with the Static Maps API, but I've built it so that it will return clustered markers whenever the view changes. It works pretty well, though there is a delay in transferring the markers whenever the view changes. Unfortunately I haven't tried it with more than 3,000 markers - I don't know how well it would handle 120,000. Good luck! | Adding many markers kills my google maps - how do I do this? | [
"",
"php",
"google-maps",
""
] |
I have a Windows Forms application which runs another console application.
Here is part of the code:
```
prog = new Process();
prog.StartInfo.FileName = exefile;
```
The console application should create a file, but when running that application from C# it doesn't create any file.
When I'm running the console application with a double click, it works fine.
Here is the part of the code from "exefile" (it's in C++):
```
freopen("file.in","r",stdin);
freopen("file.out","w",stdout);
printf("somedata\n");
```
"file.in" surely exists | The most likely thing is that you need to set the working path:
```
prog.StartInfo.WorkingDirectory = ...
```
i.e. I'm thinking it can't find file.in in the *current* app folder. | You need to add this line whenever you want to start the process:
```
prog.Start();
```
[Here is the link to the MSDN page for `Process.Start`.](http://msdn.microsoft.com/en-us/library/system.diagnostics.process.start.aspx) There are several overloads that you may want to consider. | Running console application from C# but application can't create file | [
"",
"c#",
"file",
"console",
""
] |
What is the best way to assemble a dynamic WHERE clause to a LINQ statement?
I have several dozen checkboxes on a form and am passing them back as: Dictionary<string, List<string>> (Dictionary<fieldName,List<values>>) to my LINQ query.
```
public IOrderedQueryable<ProductDetail> GetProductList(string productGroupName, string productTypeName, Dictionary<string,List<string>> filterDictionary)
{
var q = from c in db.ProductDetail
where c.ProductGroupName == productGroupName && c.ProductTypeName == productTypeName
// insert dynamic filter here
orderby c.ProductTypeName
select c;
return q;
}
``` | [](https://i.stack.imgur.com/sgS5X.png)
(source: [scottgu.com](http://www.scottgu.com/blogposts/dynquery/step2.png))
You need something like this? Use [the Linq Dynamic Query Library](http://msdn2.microsoft.com/en-us/vcsharp/bb894665.aspx) (download includes examples).
Check out [ScottGu's blog](http://weblogs.asp.net/scottgu/archive/2008/01/07/dynamic-linq-part-1-using-the-linq-dynamic-query-library.aspx) for more examples. | I have a similar scenario where I need to add filters based on user input, so I chain the `Where` clauses.
Here is the sample code.
```
var votes = db.Votes.Where(r => r.SurveyID == surveyId);
if (fromDate != null)
{
votes = votes.Where(r => r.VoteDate.Value >= fromDate);
}
if (toDate != null)
{
votes = votes.Where(r => r.VoteDate.Value <= toDate);
}
votes = votes.Take(LimitRows).OrderByDescending(r => r.VoteDate);
``` | Dynamic WHERE clause in LINQ | [
"",
"c#",
"linq",
"dynamic",
"where-clause",
""
] |
I need to write a simple parser for a sort of domain-specific language.
It needs to support basic arithmetic with proper operator precedence, and a syntax for calling functions of the underlying environment, which can be overloaded.
What is the simplest way to write such a parser? Is there something I can adapt or use out of the box? I'm writing in C++. | Take a look at [Boost Spirit](http://spirit.sourceforge.net/distrib/spirit_1_8_5/libs/spirit/doc/quick_start.html). | See MetaII parser technology. This tutorial walks you through everything:
<http://www.bayfronttechnologies.com/mc_tutorial.html> | Writing a simple parser | [
"",
"c++",
"parsing",
"dsl",
""
] |
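For a feel of what a precedence-correct expression parser looks like underneath either suggested library, here is a minimal recursive-descent sketch. It is written in Python for brevity and is illustrative only; a production C++ DSL would more likely use Boost Spirit as the accepted answer suggests, and the function-call syntax of the real DSL is not modelled here:

```python
import re

def tokenize(src):
    # findall skips anything that doesn't match, so whitespace is ignored
    return re.findall(r"\d+|[+*/()-]", src)

def parse_expr(tokens, pos=0):
    """expr := term (('+'|'-') term)*  -- lowest precedence level."""
    value, pos = parse_term(tokens, pos)
    while pos < len(tokens) and tokens[pos] in "+-":
        op = tokens[pos]
        rhs, pos = parse_term(tokens, pos + 1)
        value = value + rhs if op == "+" else value - rhs
    return value, pos

def parse_term(tokens, pos):
    """term := factor (('*'|'/') factor)*  -- binds tighter than +/-."""
    value, pos = parse_factor(tokens, pos)
    while pos < len(tokens) and tokens[pos] in "*/":
        op = tokens[pos]
        rhs, pos = parse_factor(tokens, pos + 1)
        value = value * rhs if op == "*" else value / rhs
    return value, pos

def parse_factor(tokens, pos):
    """factor := NUMBER | '(' expr ')'."""
    if tokens[pos] == "(":
        value, pos = parse_expr(tokens, pos + 1)
        return value, pos + 1  # skip the closing ')'
    return int(tokens[pos]), pos + 1

def evaluate(src):
    value, _ = parse_expr(tokenize(src))
    return value
```

Operator precedence falls out of the grammar layering: each level only calls the next tighter-binding one, and the left-to-right loops give left associativity.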
How can I do GroupBy multiple columns in LINQ
Something similar to this in SQL:
```
SELECT * FROM <TableName> GROUP BY <Column1>,<Column2>
```
How can I convert this to LINQ:
```
QuantityBreakdown
(
MaterialID int,
ProductID int,
Quantity float
)
INSERT INTO @QuantityBreakdown (MaterialID, ProductID, Quantity)
SELECT MaterialID, ProductID, SUM(Quantity)
FROM @Transactions
GROUP BY MaterialID, ProductID
``` | Use an anonymous type.
Eg
```
group x by new { x.Column1, x.Column2 }
``` | Procedural sample:
```
.GroupBy(x => new { x.Column1, x.Column2 })
``` | Group By Multiple Columns | [
"",
"c#",
".net",
"linq",
"group-by",
"aggregate",
""
] |
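The anonymous-type key in the answers above plays the same role as a composite dictionary key. As a language-neutral illustration of the two-column GROUP BY with SUM, sketched in Python with made-up transaction tuples:

```python
from collections import defaultdict

def group_quantities(transactions):
    """SELECT MaterialID, ProductID, SUM(Quantity) ... GROUP BY MaterialID, ProductID.

    transactions: iterable of (material_id, product_id, quantity) tuples.
    """
    totals = defaultdict(float)
    for material_id, product_id, quantity in transactions:
        # the composite key is exactly the pair of grouped columns
        totals[(material_id, product_id)] += quantity
    return [(m, p, q) for (m, p), q in totals.items()]

rows = group_quantities([(1, 10, 2.5), (1, 10, 1.5), (1, 11, 3.0)])
```

The `new { x.Column1, x.Column2 }` in the LINQ answers is doing the same thing: building one value that compares equal when both columns match.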
I am trying to throw a (custom) ImportException from the method importUsers in class UserHelper. I can see in the debugger that the throw clause is executed, but the method, from which the importUsers method is called, never catches the exception.
Here is the method, where the exception is thrown:
```
public static AccessorValidator importUsers(List<String> data, WebUser actor) throws ImportException {
//(irrelevant code removed)
try {
isSuccess = insertUserData(st, blocks, db, actor);
} catch (Exception e) {
throw new ImportException("Could not insert user on line " + rowCounter);
    }
}
```
Here I try unsuccessfully to catch the thrown exception from execute method in AccessorValidator class:
```
public ActionForward execute(ActionMapping mapping, ActionForm form, HttpServletRequest request,
HttpServletResponse response) throws Exception {
//(irrelevant code removed)
try{
av = UserHelper.importUsers(data, admin);
System.out.print("test2");
} catch (ImportException ie) {
System.out.print("testE");
returnMessageValue = ie.getMessage();
} catch (Exception e) {
System.out.print("testE2");
}
```
The output is "test2", and the code execution never reaches either of the catch blocks. What am I doing wrong? | Try changing your method to
```
try {
isSuccess = insertUserData(st, blocks, db, actor);
    System.out.print("after insertUserData");
} catch (Exception e) {
System.out.print("before throwing");
throw new ImportException("Could not insert user on line " + rowCounter);
}
```
so that you can make sure that what you see in debug is what is actually being executed (by checking your console), and whether insertUserData actually throws an exception or not. | If "test2" is printed, then there is no exception at all being thrown by importUsers().
The line information in the debugger can be inaccurate. Try putting a breakpoint in the constructor of the Exception to see if it's really being created. | Why does the code execution never enter the catch block (java)? | [
"",
"java",
"exception",
""
] |
This [blog post](http://illegalargumentexception.blogspot.com/2008/04/java-synchronizing-on-transient-id.html) demonstrates a way to implement a mutex per string id idiom. The String ids used are to represent HttpSession ids.
1. Why do we need to wrap a WeakReference around the Mutex instances ? Isn't it better to just create a Map from String -> Mutex ?
2. Why do we need to call put twice ?
```
public Mutex getMutex( String id )
{
Mutex key = new MutexImpl( id );
synchronized( mutexMap )
{
WeakReference<Mutex> ref = mutexMap.get( key );
if( ref == null )
{
mutexMap.put( key, new WeakReference<Mutex>( key ) );
return key;
}
Mutex mutex = ref.get();
if( mutex == null )
{
mutexMap.put( key, new WeakReference<Mutex>( key ) );
return key;
}
return mutex;
}
}
``` | [Loop](https://stackoverflow.com/questions/823902/what-is-the-reasoning-behind-the-double-call-to-weakhashmap-put/824002#824002) and [Bruno Conde](https://stackoverflow.com/questions/823902/what-is-the-reasoning-behind-the-double-call-to-weakhashmap-put/824015#824015) have pretty much covered it, but since I wrote that code...
The design goal was to avoid having the user call a release mechanism - the mutex is eligible for garbage collection when the user no longer references it.
> Why do we need to wrap a WeakReference
> around the Mutex instances ?
The map is a [WeakHashMap](http://java.sun.com/javase/6/docs/api/java/util/WeakHashMap.html):
```
private final Map<Mutex, WeakReference<Mutex>> mutexMap = new WeakHashMap<Mutex, WeakReference<Mutex>>();
```
This map is used to keep a reference to the mutex, but if you use the same object for the key and the value, the object is not eligible for garbage collection. Javadoc:
> Implementation note: The value objects
> in a WeakHashMap are held by ordinary
> strong references. Thus care should be
> taken to ensure that value objects do
> not strongly refer to their own keys,
> either directly or indirectly, since
> that will prevent the keys from being
> discarded. Note that a value object
> may refer indirectly to its key via
> the WeakHashMap itself; that is, a
> value object may strongly refer to
> some other key object whose associated
> value object, in turn, strongly refers
> to the key of the first value object.
> One way to deal with this is to wrap
> values themselves within
> WeakReferences before inserting, as
> in: m.put(key, new
> WeakReference(value)), and then
> unwrapping upon each get.
---
> Isn't it
> better to just create a Map from
> String -> Mutex ?
When will that String value be garbage collected? Is the same reference being passed every time? Has [intern()](http://java.sun.com/javase/6/docs/api/java/lang/String.html#intern()) been called on it? If I call intern, how long will the String live? If a String was the key, the mutex might not be eligible for garbage collection long after there is no need to keep a reference to it.
---
> Why do we need to call put twice ?
There are two cases to handle until the method can get a strong reference to the mutex in the map:
* the [WeakReference](http://java.sun.com/javase/6/docs/api/java/lang/ref/WeakReference.html) has been garbage collected (or was never there in the first place)
* the contents of the WeakReference are garbage collected after the reference to it has been acquired
*put* will only ever be invoked once; the method returns immediately after.
(The WeakReference could be reused in the second put, but I don't see that it would be a significant improvement.)
---
Of course, if anyone can find fault in the code, let me know and I'll happily correct it. Also, the unit tests try to check that the implementation doesn't leak, so feel free to modify the code and see what happens when you run the tests. | The value objects in a WeakHashMap are held by ordinary strong references. Thus care should be taken to ensure that value objects do not strongly refer to their own keys, either directly or indirectly, since that will prevent the keys from being discarded. Note that a value object may refer indirectly to its key via the WeakHashMap itself; that is, a value object may strongly refer to some other key object whose associated value object, in turn, strongly refers to the key of the first value object. One way to deal with this is to wrap values themselves within WeakReferences before inserting, as in: m.put(key, new WeakReference(value)), and then unwrapping upon each get. | What is the reasoning behind the double call to WeakHashMap.put( .. )? | [
"",
"java",
"multithreading",
"concurrency",
"mutex",
""
] |
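As a side-by-side illustration of the same idiom, Python's `weakref.WeakValueDictionary` achieves the "mutex per id, reclaimable by GC" behaviour with a similar wrapper trick. This is a comparison sketch, not a translation of the code above, and the names are invented:

```python
import threading
import weakref

_locks = weakref.WeakValueDictionary()
_guard = threading.Lock()

class _IdLock:
    """Plain wrapper; a bare threading.Lock cannot be weakly referenced."""
    def __init__(self):
        self.lock = threading.Lock()

def get_mutex(id_):
    """Return the one lock object for this id, creating it if needed.

    The WeakValueDictionary holds only weak references to its values, so
    once every caller drops its lock object the entry becomes
    garbage-collectable, mirroring the WeakHashMap scheme above.
    """
    with _guard:
        lock = _locks.get(id_)
        if lock is None:
            lock = _IdLock()
            _locks[id_] = lock
        return lock
```

The wrapper class here is the analogue of wrapping the map values in `WeakReference`: in both cases the map itself must not hold the only strong reference.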
If I wish to take an MS SQL Server 2008 database offline or bring it online, I need to use the GUI: Database -> Tasks -> Take Offline or Bring Online.
Can this be done with some sql script? | ```
ALTER DATABASE database-name SET OFFLINE
```
If you run the [ALTER DATABASE](http://msdn.microsoft.com/en-us/library/aa275464(SQL.80).aspx) command whilst users or processes are connected, but you do not wish the command to be blocked, you can execute the statement with the NO\_WAIT option. This causes the command to fail with an error.
```
ALTER DATABASE database-name SET OFFLINE WITH NO_WAIT
```
Corresponding online:
```
ALTER DATABASE database-name SET ONLINE
``` | ```
-- Take all user databases offline
CREATE PROCEDURE SP_TakeOfflineAllDatabase AS
BEGIN
DECLARE @db sysname, @q varchar(max);
DECLARE cur_db CURSOR FOR
SELECT name FROM sys.databases WHERE owner_sid<>0x01;
OPEN cur_db;
WHILE 1=1
BEGIN
FETCH NEXT FROM cur_db INTO @db;
IF @@FETCH_STATUS <> 0
BREAK;
SET @q = N'ALTER DATABASE [' + @db + N'] SET OFFLINE WITH NO_WAIT';
EXEC(@q);
END;
CLOSE cur_db;
DEALLOCATE cur_db;
END;
```
Restart the server before running the procedure. It will close the existing connections to the databases. | SQL Script to take a Microsoft Sql database online or offline? | [
"",
"sql",
"sql-server-2008",
"offline",
""
] |
I am designing an application where I want to reduce the burden on developers for future development. I have a list of classes, e.g. "User, Account, Permission, etc.", and each class is associated with the database table of the same name. I want my business layer to be robust, so that whenever I add another column to a data table in the future I don't have to rewrite the insert, update and delete commands; adding a property to the class should do the job.
Any idea how to do this? | What you are looking for is probably an ORM (Object-Relational Mapping) framework. The most popular ones for .NET include LINQ to SQL (or Entity Framework), NHibernate and ActiveRecord. Using an ORM is considered a best practice for achieving a maintainable and robust Data Access Layer.
See [a list of frameworks here](https://web.archive.org/web/20120625212018/http://geekswithblogs.net:80/iupdateable/articles/object-relational-mappers-orm-for-microsoft-.net.aspx). | What you're probably looking for is to implement an [Object Relational Mapping](http://en.wikipedia.org/wiki/Object-relational_mapping) (ORM) solution. Many ORM solutions help you maintain table and entity definitions and also create the appropriate queries for you.
Popular ORMs for the .Net Framework include NHibernate, the ADO Entity Framework and LINQ to SQL. Otherwise you could consider mapping your data model using templates (like using Codesmith, or the Net Tiers templates).
You could check out the answers to this question for more advice on [picking an ORM solution](https://stackoverflow.com/questions/132676/which-orm-for-net-would-you-recommend), or you can just browse some [ORM related questions tagged here](https://stackoverflow.com/questions/tagged/orm). | Data Access Layer in .net | [
"",
"c#",
".net",
"orm",
""
] |
For most of my development work with Visual C++, I am using partial builds, e.g. press F7 and only changed C++ files and their dependencies get rebuilt, followed by an incremental link. Before passing a version onto testing, I take the precaution of doing a full rebuild, which takes about 45 minutes on my current project. I have seen many posts and articles advocating this action, but wonder is this necessary, and if so, why? Does it affect the delivered EXE or the associated PDB (which we also use in testing)? Would the software function any different from a testing perspective?
For release builds, I'm using VS2005, incremental compilation and linking, precompiled headers. | Hasn't everyone come across this usage pattern? I get weird build errors, and before even investigating I do a full rebuild, and the problem goes away.
This by itself seems to me to be good enough reason to do a full rebuild before a release.
Whether you would be willing to turn an incremental build that completes without problems over to testing, is a matter of taste, I think. | The partial build system works by checking file dates of source files against the build results. So it can break if you e.g. restore an earlier file from source control. The earlier file would have a modified date earlier than the build product, so the product wouldn't be rebuilt. To protect against these errors, you should do a complete build if it is a final build. While you are developing though, incremental builds are of course much more efficient.
Edit: And of course, doing a full rebuild also shields you from possible bugs in the incremental build system. | Partial builds versus full builds in Visual C++ | [
"",
"c++",
"incremental-linking",
"release-builds",
""
] |
I am trying to implement an application for a friend who sells t-shirts and hats, so we are dealing with interconnected tables.
I was wondering what would be the perfect libraries to achieve this goal.
What kind of database should I use to make it portable and easy to deploy.
I would really like to insist on the database stuff, what should I use?
Thank you so much (I use NetBeans) | We used HSQLDB initially for a similar application. But after going to production with it, some of the users experienced random data corruption (look through their forum, it seems like a common issue), so we switched to Derby, which proved to be stable. So my vote is Derby.
Also, I would stay away from Hibernate for desktop applications due to its huge startup time (it has to prepare the metadata upfront), unless you only have a few tables/models. | It seems like it needs to go to production. Better go for [H2](http://www.h2database.com/html/main.html); it's a better, faster HSQLDB.
"",
"java",
"database",
""
] |
My PHP code below shows an image on browser as its original size 300x300.
```
<?php
$location = "http://localhost/folwer.bmp";
header("Location: $location");
?>
```
But I need to show it as 120x240 or any other size in the browser. How can I do it without resizing the original image using PHP? Please help. | Use the class on this [page](http://www.white-hat-web-design.co.uk/articles/php-image-resizing.php) (save it as SimpleImage.php on your server)
then use this:
```
<?php
header('Content-Type: image/jpeg');
include('SimpleImage.php');
$image = new SimpleImage();
$image->load($location);
$image->resize(120, 240);
$image->output();
?>
```
A few comments:
For this to work, the $location variable has to have the path of the image on your server, not the URL.
This will resize the image to 120 pixels in width and 240 in height. You probably want 240x120.
I'm not sure if this works with bmp files. What are you doing working with bmp files anyway? | Side note: You will want to change your image to a **PNG**, **JPG**, or **GIF** as **BMP** is ***not a fully supported web browser image format***. | Showing resized image on browser | [
"",
"php",
"html",
""
] |
How can I have a text file (or XML file) represented as a whole string, and search for (or match) a particular string in it?
I have created a BufferedReader object:
```
BufferedReader input = new BufferedReader(new FileReader(aFile));
```
and then I have tried to use the Scanner class with its option to specify different delimiters, like this:
```
//Scanner scantext = new Scanner(input);
//Scanner scantext = new Scanner(input).useDelimiter("");
Scanner scantext = new Scanner(input).useDelimiter("\n");
while (scantext.hasNext()) { ... }
```
Using the Scanner class like this I can either read the text line by line, or word by word, but it doesn't help me, because sometimes in the text, which I want to process, I have
```
</review><review>
```
and I would like to say: if you find "`<review>`" anywhere in the text, do something with the following next lines (or piece of text) until you find "`</review>`". The problem is that `<review>` and `</review>` are on different places in the text, and sometimes glued to other text (therefore the empty space as delimiter doesn't help me).
I have thought that I might use the regular expression API in Java (the Pattern and Matcher classes), but they seem to match a particular string or line, and I want to have the text as one continuous string (at least this was my impressions from what I have read about them). Could you tell me what structures/methods/classes I should use in this case? Thank you. | Don't try to parse XML with regular expressions; it leads only to pain. There are a lot of very nice [existing](http://java.sun.com/javase/6/docs/api/javax/xml/parsers/package-summary.html) [XML](http://java.sun.com/javase/6/docs/api/org/xml/sax/package-summary.html) [APIs](http://java.sun.com/javase/6/docs/api/org/w3c/dom/package-summary.html) in Java already; why try to reinvent them?
Anyway, to search for a string in a text file, you should:
1. Load the file as a string ([example](http://www.coderanch.com/t/403914/Java-General-beginner/java/do-read-entire-file-all))
2. Create a [`Pattern`](http://java.sun.com/javase/6/docs/api/java/util/regex/Pattern.html) to search for
3. Use a [`Matcher`](http://java.sun.com/javase/6/docs/api/java/util/regex/Matcher.html) to iterate through any matches | It looks to me as though you are trying to work with a structured xml file, and would suggest that you look into [javax.xml.parsers.DocumentBuilder](http://java.sun.com/j2se/1.4.2/docs/api/javax/xml/parsers/DocumentBuilder.html) or other built in [APIs](http://java.sun.com/j2se/1.4.2/docs/api/org/w3c/dom/package-summary.html) to parse the document. | Representing a text file as single unit in Java, and matching strings in the text | [
"",
"java",
"string-matching",
"textmatching",
""
] |
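For inputs that are too messy for an XML parser, the Pattern/Matcher idea from the question does work once the whole file is read into one string and the pattern is compiled in DOTALL mode, so that "." crosses line breaks. A hedged sketch in Python (Java's equivalent flag is `Pattern.DOTALL`, and the sample input below is invented):

```python
import re

def extract_reviews(text):
    """Return the text between each <review>...</review> pair.

    re.DOTALL makes '.' match newlines, so a review spanning several
    lines (or glued to other text) is still captured; the non-greedy
    .*? stops at the first closing tag rather than swallowing the rest
    of the file.
    """
    return re.findall(r"<review>(.*?)</review>", text, re.DOTALL)

sample = "junk<review>line one\nline two</review>more</x><review>second</review>"
reviews = extract_reviews(sample)
```

This breaks on nested or malformed tags, which is exactly why the accepted answer recommends a real XML parser when the input is well-formed.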
I have a 2d array with rows containing an id number, a name string, and the number of times the name occurs.
I would like to sort the array by count value in a descending order.
Sample array:
```
[
[1, 'Al', 3],
[2, 'Bea', 2],
[3, 'Chan', 1],
[4, 'Doug', 2],
[5, 'Ed', 3],
[6, 'Fey', 1],
]
```
Desired result:
```
[
[1, 'Al', 3],
[5, 'Ed', 3],
[2, 'Bea', 2],
[4, 'Doug', 2],
[3, 'Chan', 1],
[6, 'Fey', 1],
]
```
`sort()` and `ksort()` don't do what I require. | Use [uasort()](http://nl2.php.net/manual/en/function.uasort.php) to sort with a callback function. Example:
```
function sort_callback($a, $b) {
if ($a[2] == $b[2]) {
return 0;
}
    return ($a[2] > $b[2]) ? -1 : 1; // descending: higher counts first
}
uasort($array1, 'sort_callback');
``` | I'm a bit confused by the way you've presented the question, but are you trying to do something like this?
```
function order_by($data, $field_name) {
$code = "return strnatcmp(\$a['$field_name'], \$b['$field_name']);";
usort($data, create_function('$a,$b', $code));
return $data;
}
```
So, for example:
```
$array1 = array();
$array1[] = array('Id' => 1, 'Name' => 'YT', 'Count' => 30);
$array1[] = array('Id' => 2, 'Name' => 'Da5id', 'Count' => 56);
$array1[] = array('Id' => 3, 'Name' => 'Fido', 'Count' => 12);
$array1[] = array('Id' => 4, 'Name' => 'Hiro', 'Count' => 45);
echo "<pre>";
print_r($array1);
echo "</pre>";
$array2 = order_by($array1, 'Count');
echo "<pre>";
print_r($array2);
echo "</pre>";
``` | Sort rows of a 2d array by values in 3rd column in a descending direction | [
"",
"php",
"arrays",
"sorting",
"multidimensional-array",
"custom-sort",
""
] |
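For contrast with the callback-based PHP answers above, here is the same descending sort-by-count expressed with a key function, sketched in Python using the question's sample data. A stable sort keeps rows with equal counts in their original order, which gives exactly the tie order shown in the desired result:

```python
rows = [
    [1, 'Al', 3],
    [2, 'Bea', 2],
    [3, 'Chan', 1],
    [4, 'Doug', 2],
    [5, 'Ed', 3],
    [6, 'Fey', 1],
]

# Sort on the count column (index 2), largest first. Python's sort is
# stable even with reverse=True, so equal-count rows keep their
# original relative order.
rows_sorted = sorted(rows, key=lambda row: row[2], reverse=True)
```

The PHP equivalents of the key/comparator split are `usort()` with a comparator (reindexed result) versus `uasort()` (keys preserved).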
I have this code:
```
$rows = array();
$res = mysql_query($someQuery);
if(!mysql_errno())
while($row = mysql_fetch_assoc($res))
$rows[] = $row;
```
`$someQuery` is an arbitrary query that I write in to a form. The mysql\_errno catches the case when I write a mysql query with errors in it. But, I just discovered that when I do a "Delete from table\_name" query, it of course is not an error, but at the same time the mysql\_fetch\_assoc fails with a "Warning: mysql\_num\_rows(): supplied argument is not a valid MySQL result resource in /blah/blah/file.php on line x".
I've tried to look for it in the manual (maybe I'm just blind...) but is there a function I can use to check if $res is a valid MySQL result resource or not? | `if ($res)` should work fine to check if it's a resource. [is\_resource()](http://php.net/is_resource) will determine if its a valid resource at all.
You could also check [mysql\_affected\_rows](http://php.net/mysql_affected_rows) to try to determine if it's an INSERT/UPDATE/etc | Along with [`is_resource()`](http://php.net/is_resource) you can use [`get_resource_type()`](http://php.net/get_resource_type) to check whether it is a MySQL resource.
```
$res_type = is_resource($res) ? get_resource_type($res) : gettype($res);
if(strpos($res_type, 'mysql') === false) {
echo 'Invalid resource type: ' . $res_type;
}
```
`get_resource_type()` may return `"mysql link"` or `"mysql link persistent"` depending on your connection type. | Checking for valid MySQL result resource | [
"",
"php",
"mysql",
"resultset",
""
] |
Coming from Python I am looking for some "easy to try out" shell like Python has with `ipython` (preferably with Emacs support). In the Erlang shell you always have to first compile (`c(...)`) which slows down the experimental phase very much. | Yes it is :)
<http://erlang.org/doc/man/escript.html>
```
$ cat /tmp/test
#!/usr/bin/env escript
main(_) ->
io:format("Hello World!\n").
$ chmod +x /tmp/test
$ /tmp/test
Hello World!
``` | It should be fairly easy to build a module that starts a process to autoload code changes into your shell. Erlang actually has built-in support for this kind of thing, so any time you have recompiled the code it gets loaded into your shell automatically, eliminating the need to type c('module.erl') yourself. You could probably even have it monitor file changes and automatically compile them for you. See:
<http://github.com/toddlipcon/mochiweb/blob/29d6ffd58aa2cd175f84cadfc1ce1697f366ef62/src/reloader.erl>
for an example in action.
Currently that module just looks for recompiled modules but it should be trivial to have it look for changed source files and recompile them for you. | Is there an ipython equivalent for erlang? | [
"",
"python",
"emacs",
"erlang",
"ipython",
""
] |
This question is related to [this one](https://stackoverflow.com/questions/842849/mysql-myisam-table-performance-painfully-painfully-slow).
I have a **page** table with the following structure:
```
CREATE TABLE mydatabase.page (
pageid int(10) unsigned NOT NULL auto_increment,
sourceid int(10) unsigned default NULL,
number int(10) unsigned default NULL,
data mediumtext,
processed int(10) unsigned default NULL,
PRIMARY KEY (pageid),
KEY sourceid (sourceid)
) ENGINE=MyISAM AUTO_INCREMENT=9768 DEFAULT CHARSET=latin1;
```
The **data** column contains text whose size is around 80KB - 200KB per record. The total size of the data stored in the **data** column is around 1.5GB.
Executing this query takes *0.08* seconds:
```
select pageid from page
```
But executing this query takes around *130.0* seconds:
```
select sourceid from page
```
As you see, I've got a primary index on page.pageid and an index on page.sourceid. So should the second query be taking *THAT* long?
## Edit #1
**EXPLAIN** returned
```
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE page index sourceid 5 9767 Using index
```
I'm sorry but profiling didn't work... MySQL (it's 4.1.22) did not recognize the SHOW PROFILE query.
**SHOW INDEX** returned
```
Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment
page 0 PRIMARY 1 pageid A 9767 BTREE
page 1 sourceid 1 sourceid A 3255 YES BTREE
``` | Did you try to enforce the use of the index? Like:
```
SELECT sourceid FROM page USE INDEX (sourceid_index)
```
As sgehrig comments, check using EXPLAIN whether the index is used, and share the result:
```
EXPLAIN select sourceid from page
```
It could also help to share the definition of the indexes:
```
SHOW INDEX FROM page
``` | As MySQL 4.1.22 is fairly old (02 November 2006) I'd suspect that it doesn't support the notion of *covering indexes* for secondary keys. `EXPLAIN` shows that the query actually uses the index, so I'd assume that the additional time is needed to read all the result rows (instead of just returning the index content when using *covering indexes*) to extract the `sourceid` column.
Do you have the possibility to check the query on a more recent MySQL server version? | MySQL MyISAM table performance problem revisited | [
"",
"sql",
"mysql",
"performance",
"query-optimization",
"myisam",
""
] |
I am pretty new to Qt.
I want to respond to `linkClicked` in `QWebView`.
I tried connect like this:
```
QObject::connect(ui->webView, SIGNAL(linkClicked(QUrl)),
MainWindow,SLOT(linkClicked(QUrl)));
```
But I was getting error: `C:/Documents and Settings/irfan/My Documents/browser1/mainwindow.cpp:9: error: expected primary-expression before ',' token`
When I do this using UI Editing Signals Slots:
I have in header file declaration of slot:
```
void linkClicked(QUrl &url);
```
in source cpp file :
```
void MainWindow::linkClicked(QUrl &url)
{
QMessageBox b;
  b.setText(url.toString());
b.exec();
}
```
When I run this it compiles and runs but got a warning :
```
Object::connect: No such slot MainWindow::linkClicked(QUrl)
in ui_mainwindow.h:100
```
What is proper way of doing this event handling? | I changed `QObject::connect` to only `connect` and it works.
So this code works:
`connect(ui->webView,SIGNAL(linkClicked(const QUrl)),this,SLOT(linkClicked(const QUrl)),Qt::DirectConnection);`
But I don't know why. | Using QObject::connect() and connect() is the same in this context. I believe
```
QObject::connect(ui->webView,SIGNAL(linkClicked(QUrl)),
MainWindow,SLOT(linkClicked(QUrl)));
```
was called from a function inside MainWindow class. That is why when you tried
```
connect(ui->webView,SIGNAL(linkClicked(const QUrl)),
this,SLOT(linkClicked(const QUrl)),Qt::DirectConnection);
```
it works. Notice the difference that makes it work: the third parameter. You used *this* in the second snippet, whereas you used *MainWindow* in the first snippet.
Read [this](http://doc.trolltech.com/4.4/signalsandslots.html) to know how signals and slots mechanism works and how to properly implement it. | Qt: having problems responding on QWebView::linkClicked(QUrl) - slot signal issue | [
"",
"c++",
"qt",
"signals",
""
] |
This is what I want to do:
```
switch(myvar)
{
case: 2 or 5:
...
break;
case: 7 or 12:
...
break;
...
}
```
I tried with "case: 2 || 5", but it didn't work.
The purpose is not to write the same code for different values. | By stacking each switch case, you achieve the OR condition.
```
switch(myvar)
{
case 2:
case 5:
...
break;
case 7:
case 12:
...
break;
...
}
``` | You do it by [stacking case labels](https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/statements/selection-statements#the-switch-statement):
```
switch(myvar)
{
case 2:
case 5:
...
break;
case 7:
case 12:
...
break;
...
}
``` | How add "or" in switch statements? | [
"",
"c#",
"switch-statement",
""
] |
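The stacked-case idiom above relies on C#'s grouping of case labels. In languages without switch fallthrough, the same "one branch for several values" shape is a membership test; an illustrative Python sketch (the branch names are made up):

```python
def handle(myvar):
    """Equivalent of stacked case labels: one branch covering several values."""
    if myvar in (2, 5):
        return "first-branch"
    if myvar in (7, 12):
        return "second-branch"
    return "default"
```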
Windows provides a lockless singly linked list, as documented at this page:
[Win32 SList](http://msdn.microsoft.com/en-us/library/ms684121(VS.85).aspx)
I'm wondering if there's an existing good C++ wrapper around this functionality. When I say good, I mean it exports the usual STL interface as much as possible, supports iterators, etc. I'd much rather use someone else's implementation than sit down to write an STL-type container. | You could quickly get up and running with boost and ::boost::iterator\_facade.
No, it wouldn't be optimal or portable, and iterator semantics are something you should hear Alexandrescu suddenly come out against at a DevCon. You are not locking the container, you are locking (and potentially relocking and unlocking) the operations. And locking the operation means serial execution, very simple. There is plenty of iterator manipulation that will be an unnecessary penalty for the abstraction being created.
From Mars view, iterator is hiding the pointer, and hiding under a semi-OO concept that is as odds as OO-vs-Distributed development is.. I'd use a 'procedural' interface for sure and make the users/maintainers pay attention to why it is necessary. Lock-free ops are only as good as 'all the parallel code' surrounding it. And classic examples as people keep giving scoped\_lock wrapping reinvention since '96 credit, it produces pretty serial code.
Or use the atomic and Sutter's DDJ entries as reference for poor man way forward (and more than 10 years of unorderedness of Pentium Pro later).
(all that is really happening is that boost and DDJ is running after a .net and MS CCR train that is running after immutability, as well as intel train that is running after a good OO-similar abstraction for lockfree development. The problem is it cannot be done well and some people fight it time and time again; much like concurrent\_vector nonsense of TBB. The same reason exceptions never materialised as non-problematic, especially across environments, and the same reason why vector-processing in CPUs is underutilised by C++ compilers and so on and on..) | You won't be able to layer an STL style interface on top of SList ever. In order to avoid memory management problems the only node in the list that is accessible is the head of the list. And the only way to access that node is to pop it off the list. This prevents two threads from having the same node and then one thread deleting that node while the other thread is still using it. This is what I mean by "memory management issues" and is a common problem in lock free programming. You could always pop the first node and then follow the "Next" pointers in the SLIST\_ENTRY structs but this is an exceedingly bad idea unless you can guarantee that the list will not shrink, with nodes being deallocated, while you are reading it. Of course this still removes the head node from the list.
Basically you are trying to use SList wrong. For what it sounds like you want to do you just need to use an STL container and protect access to it using a lock. STL algorithms will not work with lock free data structures that are mutable like SList.
All that being said you could create a C++ wrapper around SList but it wouldn't be STL compatible. | Is there a decent C++ wrapper around Win32's lockless SList out there? | [
"",
"c++",
"windows",
"linked-list",
"lock-free",
""
] |
I need to make one function in a module platform-independent by offering several implementations, without changing any files that import it. The following works:
`do_it = getattr(__import__(__name__), "do_on_" + sys.platform)`
...but breaks if the module is put into a package.
An alternative would be an if/elif with hard-coded calls to the others in do\_it().
Anything better? | Use `globals()['do_on_' + platform]` instead of the `getattr` call and your original idea should work whether this is inside a package or not. | Put the code for platform support in different files in your package. Then add this to the file people are supposed to import from:
```
if sys.platform.startswith("win"):
from ._windows_support import *
elif sys.platform.startswith("linux"):
from ._unix_support import *
else:
raise ImportError("my module doesn't support this system")
``` | How to offer platform-specific implementations of a module? | [
"",
"python",
""
] |
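A runnable sketch of the `globals()` dispatch suggested in the accepted answer of the row above; the `do_on_*` implementations here are hypothetical stand-ins:

```python
import sys

# Hypothetical per-platform implementations, one per sys.platform value.
def do_on_linux():
    return "doing it the linux way"

def do_on_win32():
    return "doing it the win32 way"

def do_on_darwin():
    return "doing it the darwin way"

def resolve(platform=sys.platform):
    # globals() sees module-level names, so this keeps working even after
    # the module is moved into a package (unlike __import__(__name__),
    # which then returns the top-level package instead of this module).
    try:
        return globals()["do_on_" + platform]
    except KeyError:
        raise ImportError("no implementation for platform " + platform)

do_it = resolve()  # picks the right function for the current platform
```

Unknown platforms surface as an `ImportError` at import time, mirroring the explicit `raise ImportError` in the other answer.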
I am storing time in a MySQL database as a Unix timestamp and that gets sent to some JavaScript code. How would I get just the time out of it?
For example, in `HH/MM/SS` format. | ```
let unix_timestamp = 1549312452;
// Create a new JavaScript Date object based on the timestamp
// multiplied by 1000 so that the argument is in milliseconds, not seconds
var date = new Date(unix_timestamp * 1000);
// Hours part from the timestamp
var hours = date.getHours();
// Minutes part from the timestamp
var minutes = "0" + date.getMinutes();
// Seconds part from the timestamp
var seconds = "0" + date.getSeconds();
// Will display time in 10:30:23 format
var formattedTime = hours + ':' + minutes.substr(-2) + ':' + seconds.substr(-2);
console.log(formattedTime);
```
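As a quick cross-check of the arithmetic (the JavaScript `Date` constructor wants milliseconds, hence the `* 1000`), here is the same conversion done in Python, whose `datetime.fromtimestamp` takes seconds directly:

```python
from datetime import datetime, timezone

unix_timestamp = 1549312452
# Python takes seconds, so no * 1000 is needed here.
date = datetime.fromtimestamp(unix_timestamp, tz=timezone.utc)
formatted_time = date.strftime("%H:%M:%S")  # zero-padding comes for free
print(formatted_time)  # 20:34:12 in UTC (getHours() above uses local time)
```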
For more information regarding the Date object, please refer to [MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date) or the [ECMAScript 5 specification](http://ecma-international.org/ecma-262/5.1/#sec-15.9). | ```
function timeConverter(UNIX_timestamp){
var a = new Date(UNIX_timestamp * 1000);
var months = ['Jan','Feb','Mar','Apr','May','Jun','Jul','Aug','Sep','Oct','Nov','Dec'];
var year = a.getFullYear();
var month = months[a.getMonth()];
var date = a.getDate();
var hour = a.getHours();
var min = a.getMinutes();
var sec = a.getSeconds();
var time = date + ' ' + month + ' ' + year + ' ' + hour + ':' + min + ':' + sec ;
return time;
}
console.log(timeConverter(0));
``` | Convert a Unix timestamp to time in JavaScript | [
"",
"javascript",
"date",
"time",
"time-format",
""
] |
How do I let my users apply their own custom formula to a table of data to derive new fields?
I am working on a [Django](http://www.djangoproject.com) application which is going to store and process a lot of data for subscribed users on the open web. Think 100-10,000 sensor readings in one page request. I am going to be drawing graphs using this data and also showing tables containing it. I expect groups of sensors to be defined by my users, who will have registered themselves on my website (i.e they correspond with a django model).
I would like to allow the user to be able to create fields that are derived from their sensor data (as part of a setup process). For example, the user might know that their average house temperature is (temperature sensor1 + temperature sensor2) / 2 and want to show that on the graph. They might also want something more interesting like solar hot water heated is (temp out - temp in) \* flow \* conversion constant. I will then save these defined formulas for them and everyone else who views this page of sensor data.
The main question is how do I define the formula at the centre of the system. Do I just have a user-defined string to define the formula (say 100 chars long) and parse it myself - replace the user defined with an input sample and call it toast?
**Update**
In the end I got just the answer I asked for : A safe way to evaluate a stored user function on the server. Evaluating the same function also on the client when the function is being defined will be a great way to make the UI intuitive. | Depends on who your clients are.
If this is "open to the public" on the WWW, you have to parse expressions yourself. You can use the Python compiler to compile Python syntax. You can also invent your own compiler for a subset of Python syntax. There are lots of examples; start with the [ply](http://www.dabeaz.com/ply/) project.
If this is in-house ("behind the firewall") let them post a piece of Python code and exec that code.
Give them an environment with `from math import *` functionality available.
Fold the following around their supplied line of code:
```
def userFunc( col1, col2, col3, ... ):
result1= {{ their code goes here }}
return result1
```
Then you can exec the function definition and use the defined function without bad things happening.
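For instance, a runnable sketch of that fold-and-`exec` approach (the column names and the sample formula are invented for illustration):

```python
import math

def make_user_func(user_code, columns):
    # Fold a def/return around the user's single line of code, as above.
    source = "def userFunc({0}):\n    result1 = {1}\n    return result1\n".format(
        ", ".join(columns), user_code)
    env = {name: getattr(math, name) for name in dir(math)
           if not name.startswith("_")}  # a 'from math import *' environment
    exec(source, env)  # only acceptable for trusted, in-house users
    return env["userFunc"]

# The "average house temperature" example from the question:
average_temp = make_user_func("(sensor1 + sensor2) / 2", ["sensor1", "sensor2"])
print(average_temp(20.0, 23.0))  # 21.5
```

Because the `math` names are placed in the function's globals, formulas like `sqrt(flow) * 2` work without the user writing any imports.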
While some folks like to crow that `exec` is a "security problem", it's no more a security problem than users sharing passwords, or admins doing intentionally stupid things like deleting important files or turning the power off randomly while your program is running.
`exec` is only a security problem if you allow anyone access to it. For in-house applications, you know the users. Train them. | I would work out what operations you want to support [+,-,\*,/,(,),etc] and develop client side (javascript) to edit and apply those values to new fields of the data. I don't see the need to do any of this server-side and you will end up with a more responsive and enjoyable user experience as a result.
If you allow the user to save their formulas and re-load them when they revisit the site, you can get their browser to do all the calculations. Just provide some generic methods to add columns of data which are generated by applying one of their formulas to your data.
I imagine the next step would be to allow them to apply those operations to the newly generated columns.
Have you considered posting their data into a google spreadsheet? This would save a lot of the development work as they already allow you to define formulas etc. and apply it to the data. I'm not too sure of the data limit (how much data you can post and operate on) mind you. | User-defined derived data in Django | [
"",
"python",
"django",
"user-defined-functions",
""
] |
Does the method get called with a null value or does it give a null reference exception?
```
MyObject myObject = null;
myObject.MyExtensionMethod(); // <-- is this a null reference exception?
```
If this is the case I will never need to check my 'this' parameter for null? | That will work fine (no exception). Extension methods don't use virtual calls (i.e. it uses the "call" il instruction, not "callvirt") so there is no null check unless you write it yourself in the extension method. This is actually useful in a few cases:
```
public static bool IsNullOrEmpty(this string value)
{
return string.IsNullOrEmpty(value);
}
public static void ThrowIfNull<T>(this T obj, string parameterName)
where T : class
{
if(obj == null) throw new ArgumentNullException(parameterName);
}
```
etc
Fundamentally, calls to static methods are very literal - i.e.
```
string s = ...
if(s.IsNullOrEmpty()) {...}
```
becomes:
```
string s = ...
if(YourExtensionClass.IsNullOrEmpty(s)) {...}
```
where there is obviously no null check. | **Addition to the correct answer from Marc Gravell.**
You could get a warning from the compiler if it is obvious that the this argument is null:
```
default(string).MyExtension();
```
Works well at runtime, but produces the warning `"Expression will always cause a System.NullReferenceException, because the default value of string is null"`. | In C#, what happens when you call an extension method on a null object? | [
"",
"c#",
"parameters",
"null",
"extension-methods",
""
] |
Since 2005 we are using MyEclipse as our standard development tool.
We use it mainly for Java applications, but from time to time
we use it also for Groovy and maybe other stuff, like DB navigator, etc.
Our licences should be renewed in a couple of months and I am rethinking
the decision of using it. A nice feature of MyEclipse was the debugger which
allows us to debug client/server applications. Also the hot deployment was a nice
feature. All this stuff can be done without MyEclipse, and comparing the memory footprint
of MyEclipse 7.1 with Eclipse Ganymede, the latter wins by far.
The licence price doesn't matter.
So the question is: what do I lose by not using MyEclipse anymore?
Opinions are welcome.
Luis | We used to use MyEclipse but we just stopped doing so over time, and didn't really miss it. We're now on Ganymede EE and find it has everything we need, having now implemented some of the things bundled with MyEclipse. Syntax highlighting across various sources such as .css, .js and .sql is nice to have out-of-the-box. And we've always used the remote debugger built right in - it's pretty neat imo, but I didn't realise there was anything special with MyEclipse in this regard. And of course you can install Eclipse and MyEclipse side-by-side while you try things out. | I was using MyEclipse for about 3 years between 2002 and 2005. Currently, the functionality coming with Ganymede is IMHO good enough to live without it | MyEclipse: Is still worth to use it? | [
"",
"java",
"eclipse",
"myeclipse",
""
] |
Likewise are there design patterns that should be avoided? | I assume you are writing a server type application (let's leave Web apps aside for a while - there are some good off-the-shelf solutions that can help there - so let's look at the "I've got this great new type of server I have to write, but I want it to be HA" problem).
In a server implementation, the requests from clients are usually (in some form or another) converted to some event or command type pattern, and are then executed on one or more queues.
So, first problem - need to store events/commands in a manner that will survive in the cluster (ie. when a new node takes over as master, it looks at the next command that needs executing and begins).
Let's start with a single threaded server impl (the easiest - and the concepts still apply to multi-threaded, but it's got its own set of issues). When a command is being processed you need some sort of transaction processing.
Another concern is managing side effects and how you handle failure of the current command. Where possible, handle side effects in a transactional manner, so that they are all or nothing. ie. if the command changes state variables, but crashes half way through execution, being able to return to the "previous" state is great. This allows the new master node to resume the crashed command and just re-run the command. A good way again is breaking side effects into smaller tasks that can again be run on any node. ie. store the main request start and end tasks, with lots of little tasks that handle say only one side effect per task.
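The "resume the crashed command and just re-run it" idea depends on the command being retryable; a minimal sketch (the retry count and the fake command are illustrative only):

```python
def run_command(command, max_attempts=3):
    """Re-run a command until it completes; assumes the command is
    transactional, so a crashed attempt left no partial side effects."""
    last_error = None
    for _ in range(max_attempts):
        try:
            return command()
        except Exception as exc:  # in real life: node crash, timeout, ...
            last_error = exc
    raise last_error

attempts = {"n": 0}
def flaky_command():
    attempts["n"] += 1
    if attempts["n"] < 3:          # the first two "masters" crash
        raise RuntimeError("node crashed mid-command")
    return "committed"

print(run_command(flaky_command))  # committed
```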
This also introduces other issues which will affect your design. Those state variables are not necessarily database updates. They could be shared state (say a finite state machine for an internal component) that needs to also be distributed in the cluster. So you need a pattern for managing changes such that the master code sees a consistent version of the state it needs, and then commits that state across the cluster. Using some form of immutable (at least from the master thread doing the update) data storage is useful. ie. all updates are effectively done on new copies that must go through some sort of mediator or facade that only updates the local in-memory copies after updating across the cluster (or the minimum number of members across the cluster for data consistency).
Some of these issues are also present for master worker systems.
Also need good error management as the number of things that can go wrong on state update increases (as you have the network now involved).
I use the state pattern a lot. Instead of one line updates, for side effects you want to send requests/responses, and use conversation specific fsm's to track the progress.
Another issue is the representation of end points. ie. client connected to master node needs to be able to reconnect to the new master, and then listen for results ? Or do you simply cancel all pending results and let the clients resubmit ? If you allow pending requests to be processed, a nice way to identify endpoints (clients) is needed (ie. some sort of client id in a lookup).
Also need cleanup code etc (ie. don't want data waiting for a client to reconnect to wait forever).
Lots of queues are used. A lot of people will therefore use some message bus (JMS, say, for Java) to push events in a transactional manner.
Terracotta (again for Java) solves a lot of this for you - just update the memory - Terracotta is your facade/mediator here. They just inject the aspects for you.
Terracotta (I don't work for them) introduces the concept of "super static", so you get these cluster-wide singletons that are cool, but you just need to be aware how this will affect testing and development workflow - ie. use lots of composition, instead of inheritance of concrete implementations, for good reuse.
For web apps - a good app server can help with session variable replication, and a good load balancer works. In some ways, using this via REST (or your web service method of choice) is an easy way to write a multi-threaded service. But it will have performance implications. Again it depends on your problem domain.
Message servers (say JMS) are often used to introduce loose coupling between different services. With a decent message server, you can do a lot of msg routing (again Apache Camel or similar does a great job), ie. say a sticky consumer against a cluster of JMS producers etc. that can also allow for good failover. JMS queues etc. can provide a simple way to distribute commands in the cluster, independent of master/slave. (again it depends on if you are doing LOB or writing a server/product from scratch).
(if i get time later I will tidy up, maybe put some more detail in fix spelling grammar etc) | One approach to creating reliable software is [crash-only software](http://lwn.net/Articles/191059/):
> Crash-only software is software that crashes safely and recovers quickly. The only way to stop it is to crash it, and the only way to start it is to recover. A crash-only system is composed of crash-only components which communicate with retryable requests; faults are handled by crashing and restarting the faulty component and retrying any requests which have timed out. The resulting system is often more robust and reliable because crash recovery is a first-class citizen in the development process, rather than an afterthought, and you no longer need the extra code (and associated interfaces and bugs) for explicit shutdown. All software ought to be able to crash safely and recover quickly, but crash-only software must have these qualities, or their lack becomes quickly evident. | What design patterns are most leveraged in creating high availability applications? | [
"",
"java",
"design-patterns",
"database-design",
"high-availability",
""
] |
How can one programmatically fire the SelectedIndexChanged event of a ListView?
I've intended for the first item in my ListView to automatically become selected after the user completes a certain action. Code already exists within the SelectedIndexChanged event to highlight the selected item. Not only does the item fail to become highlighted, but a breakpoint set within SelectedIndexChanged is never hit. Moreover, a Debug.WriteLine fails to produce output, so I am rather certain that event has not fired.
The following code fails to fire the event:
```
listView.Items[0].Selected = false;
listView.Items[0].Selected = true;
listView.Select();
Application.DoEvents();
```
The extra .Select() method call was included for good measure. ;) The deselection (.Selected = false) was included to deselect the ListViewItem in the .Items collection just in case it may have been selected by default and therefore setting it to 'true' would have no effect. The 'Application.DoEvents()' call is yet another last ditch method.
Shouldn't the above code cause the SelectedIndexChanged event to fire?
I should mention that the SelectedIndexChanged event fires properly when an item is selected via keyboard or mouse input. | Deselecting it by setting it to false won't fire the event but setting it to true will.
```
public Form1 ()
{
InitializeComponent();
listView1.Items[0].Selected = false; // Doesn't fire
listView1.Items[0].Selected = true; // Does fire.
}
private void listView1_SelectedIndexChanged (object sender, EventArgs e)
{
// code to run
}
```
You might have something else going on. In what event are you running your selection code? | Why can't you move the code that is currently inside your event handler's method into a method that can be called from the original spot and also from your code?
Something like this:
```
class Foo
{
void bar(Object o, EventArgs e)
{
// imagine this is something important
int x = 2;
}
void baz()
{
// you want to call bar() here ideally
}
}
```
would be refactored to this:
```
class Foo
{
void bar(Object o, EventArgs e)
{
bop();
}
void baz()
{
bop();
}
void bop()
{
// imagine this is something important
int x = 2;
}
}
``` | Fire ListView.SelectedIndexChanged Event Programmatically? | [
"",
"c#",
"winforms",
"listview",
"selectedindexchanged",
""
] |
I'm trying to determine the best way to ping a database via JDBC. By 'best' I mean fast and low overhead. For example, I've considered executing this:
```
"SELECT 1 FROM DUAL"
```
but I believe the DUAL table is Oracle-specific, and I need something more generic.
Note that `Connection` has an `isClosed()` method, but the javadoc states that this cannot be used to test the validity of the connection. | Yes, that would be Oracle-only, but there is no generic way to do this in JDBC.
Most connection pool implementations have a configuration parameter where you can specify the SQL that will be used for ping, thus pushing the responsibility to figure out how to do it to the user.
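That configured ping SQL boils down to "execute a trivial statement and see if it throws"; a sketch of the idea using Python's `sqlite3` (standing in for a JDBC connection) purely for illustration:

```python
import sqlite3

def ping(conn, validation_sql="SELECT 1"):
    """Return True if the connection can still run the pool's ping SQL."""
    try:
        conn.execute(validation_sql).fetchone()
        return True
    except sqlite3.Error:
        return False

conn = sqlite3.connect(":memory:")
print(ping(conn))  # True
conn.close()
print(ping(conn))  # False - the closed connection raises on execute
```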
That seems like the best approach unless someone comes up with a little helper tool for this (of course, it precludes using potentially even faster non-SQL-based methods like [Oracle's internal ping function](http://www.orindasoft.com/public/Oracle_JDBC_JavaDoc/javadoc1110/oracle/jdbc/OracleConnection.html#pingDatabase())) | With JDBC 4 you can use `isValid(int)` ([JavaDoc](http://docs.oracle.com/javase/6/docs/api/java/sql/Connection.html#isValid%28int%29)) from the Connection Interface. This basically does the trial statement for you.
Some drivers implement this by sending the correct dummy SQL to the database and some directly use low-level operations, which reduces the parsing overhead.
However beware of the timeout: some drivers (DB/400 and Oracle Thin) do spawn a new timer thread for each invocation, which is not really acceptable for most pool validation scenarios. And Oracle also does not seem to use a prepared statement, so it’s kind of relying on the implicit cache. | What is the best way to 'ping' a database via JDBC? | [
"",
"java",
"jdbc",
""
] |
I have two instances running of the same Windows Service. They check the health of each other and report if any issue is found. I have a critical job that needs to be performed, so I am running it with a fail-over approach: it runs in the master, and if the master is not responding it runs in the slave. This job needs to communicate over a specific serial port, so I am trying to use a Mutex to check for a race condition. I don't have access to production, so before deploying I want to make sure my approach is fine. So please suggest if my use of Mutex is fine for the given case.
```
if (iAmRunningInSlave)
{
HealthClient hc = new HealthClient();
if (!hc.CheckHealthOfMaster())
return this.runJobWrapper(withMutex, iAmRunningInSlave);
else
return true; //master is ok, we dont need to run the job in slave
}
return this.runJobWrapper(withMutex, iAmRunningInSlave);
```
And then in runJobWrapper
```
bool runJobWrapper(bool withMutex, bool iAmRunningInSlave)
{
if (!withMutex)
return this.runJob(iAmRunningInSlave); //the job might be interested to know
Mutex mutex = null;
string mutexName = this.jobCategory + "-" + this.jobTitle; //this will be unique for given job
try
{
mutex = Mutex.OpenExisting(mutexName);
return false; //mutex is with peer, return false which will re-trigger slave
}
catch
{
try
{ //mean time mutex might have created, so wrapping in try/catch
mutex = new Mutex(true /*initiallyOwned*/, mutexName);
return this.runJob(iAmRunningInSlave); //the job might be interested to know where I am running
}
finally
{
if (null!=mutex) mutex.ReleaseMutex();
}
return false;
}
}
``` | I had a similar issue recently.
The design of the `Mutex` class is a bit weird/different from the normal classes in .NET.
Using `OpenMutex` to check for an existing `Mutex` is not really nice as you have to catch an exception.
A better approach is to use the
```
Mutex(bool initiallyOwned, string name, out bool createdNew)
```
constructor, and check the value returned by `createdNew`. | You don't look to check the return value from `runJobWrapper` anywhere - is this intentional? It is not obvious what the return value actually means anyway. Also you really shouldn't catch each and every exception that `OpenExisting` could possibly throw - Out of memory? Stack overflow? etc. etc. Just catch the one you mean to handle correctly.
Also your code looks to be somewhat fragile - I wouldn't be surprised if you have race conditions. | Using Named Mutex | [
"",
"c#",
"mutex",
""
] |
I know how to write the basic c# code to send emails, but I'm wondering about any larger issues involved. Are there things I should avoid? Best practices? etc
Thanks!! | * Make sure you use a hosting service which will not disable your account because of the amount of email you send.
* Add some kind of audit or logging so you know what mails you have sent
* Use BCC to send the same mail to multiple recipients
* Queue the mail to avoid bursts of emails
* Consider sending mail in batches. E.g. you notify users of a new forum post, then don't send out a mail to everyone each time someone posts, offer the possibility of daily digests.
* As others have pointed out, use threading to send a high amount of mail | You should probably throttle your emails to not send more than x per unit time. You do not want a client's email server to mistake your emails for a denial of service attack.
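A sketch of the queue/batch/throttle advice from both answers (the batch size, pacing, and the `deliver` callback are placeholders for your real SMTP sending):

```python
import time

def send_in_batches(addresses, deliver, batch_size=100, pause_seconds=0.0):
    """Deliver to addresses in fixed-size batches, pausing between bursts
    so the receiving servers don't mistake us for a DoS attack."""
    sent = 0
    for start in range(0, len(addresses), batch_size):
        batch = addresses[start:start + batch_size]
        deliver(batch)              # e.g. one message with the batch in BCC
        sent += len(batch)
        time.sleep(pause_seconds)   # crude throttle between bursts
    return sent

batches = []
total = send_in_batches(["user%d@example.com" % i for i in range(250)],
                        batches.append, batch_size=100)
print(total, len(batches))  # 250 3
```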
I assume you have dealt with the larger issues like how a client opts in or unsubscribes?
Also be prepared for a lot of messages coming back as undeliverable, and be prepared to process them. People change their email address all the time, leave a company etc. Do not make an enemy of a client (or large ISP) by continuing to send email to the same undeliverable addresses day after day.
In response to your comment, a powerful email server on a fast connection could easily send 100s if not 1,000s of emails per second.
Give some thought to what time of day you want the recipient to actually get the email. Morning, afternoon etc. And think in terms of their (not your) time zone. An email promoting an "ideal wine for dinner" would be better sent at 4 PM than 4 AM.
Be extremely careful of using a "shared" email server. Are you sharing it with a spammer who will get the ip address blacklisted? How can you be sure that will not change tomorrow?
Are you familiar with the [Trusted Email Open Standard](http://www.ftc.gov/bcp/workshops/spam/Supplements/eprivacygp.pdf)? The more you can do to separate yourself from the spammers, the better your email campaign will be received. | What sort of gotchas/tips/hints can you share for a .Net program which will send possibly 10k+ emails a day to subscribers? | [
"",
"c#",
".net",
"email",
""
] |
I'm developing a C api for some functionality written in C++ and I want to make sure that no exceptions are propagated out of any of the exported C functions.
The simple way to do it is making sure each exported function is contained in a:
```
try {
// Do the actual code
} catch (...) {
return ERROR_UNHANDLED_EXCEPTION;
}
```
Let's say I know one exception that is often missed inside the C++ code is std::bad\_alloc and I want to treat it specially I'd write something like this instead:
```
try {
// Run the actual code
} catch (std::bad_alloc& e) {
return ERROR_BAD_ALLOC;
} catch (...) {
return ERROR_UNHANDLED_EXCEPTION;
}
```
Is it possible to decompose this in some clever way so that I can globally treat some errors differently without adding a new catch statement for the exception handler around every exported function?
I'm aware that this is possible to solve using the preprocessor, but before going down that road, I'd like to make sure there is no other way to do it. | You can use only one handler function for all possible exceptions, and call it from each of your API implementation functions, as below:
```
int HandleException()
{
try
{
throw;
}
// TODO: add more types of exceptions
catch( std::bad_alloc & )
{
return ERROR_BAD_ALLOC;
}
catch( ... )
{
return ERROR_UNHANDLED_EXCEPTION;
}
}
```
And in each exported function:
```
try
{
...
}
catch( ... )
{
return HandleException();
}
``` | There already is a good answer. But just FYI, it's called the 'exception-dispatcher' idiom; see the [C++ FAQ](https://isocpp.org/wiki/faq/exceptions#throw-without-an-object). | Code reuse in exception handling | [
"",
"c++",
"c",
"exception",
""
] |
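As an aside on the row above: the exception-dispatcher idiom is not C++-specific. In Python a bare `raise` inside the dispatcher re-raises the exception currently being handled, giving the same single point of error mapping (the error codes here are made up):

```python
ERROR_BAD_ALLOC = 1
ERROR_UNHANDLED_EXCEPTION = 2

def handle_exception():
    try:
        raise  # re-raise whatever exception is currently being handled
    except MemoryError:
        return ERROR_BAD_ALLOC
    except Exception:
        return ERROR_UNHANDLED_EXCEPTION

def exported_function(fail_with):
    try:
        raise fail_with  # stand-in for "run the actual code"
    except Exception:
        return handle_exception()

print(exported_function(MemoryError()))  # 1
print(exported_function(ValueError()))   # 2
```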
I have a problem with how the List Sort method deals with sorting. Given the following element:
```
class Element : IComparable<Element>
{
public int Priority { get; set; }
public string Description { get; set; }
public int CompareTo(Element other)
{
return Priority.CompareTo(other.Priority);
}
}
```
If I try to sort it this way:
```
List<Element> elements = new List<Element>()
{
new Element()
{
Priority = 1,
Description = "First"
},
new Element()
{
Priority = 1,
Description = "Second"
},
new Element()
{
Priority = 2,
Description = "Third"
}
};
elements.Sort();
```
Then the first element is the previously second element "Second". Or, in other words, this assertion fails:
```
Assert.AreEqual("First", elements[0].Description);
```
Why is .NET reordering my list when the elements are essentially the same? I'd like for it to only reorder the list if the comparison returns a non-zero value. | From the documentation of the List.Sort() method from MSDN:
> This method uses Array.Sort, which uses the QuickSort algorithm. This implementation performs an unstable sort; that is, if two elements are equal, their order might not be preserved. In contrast, a stable sort preserves the order of elements that are equal.
Here's the link:
<http://msdn.microsoft.com/en-us/library/b0zbh7b6.aspx>
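For contrast, Python's built-in sort is documented as stable, so the equivalent of the question's program does keep "First" ahead of "Second"; a small sketch mirroring that data:

```python
elements = [
    {"priority": 1, "description": "First"},
    {"priority": 1, "description": "Second"},
    {"priority": 2, "description": "Third"},
]
# list.sort()/sorted() are guaranteed stable: equal keys keep their order.
elements.sort(key=lambda e: e["priority"])
print([e["description"] for e in elements])  # ['First', 'Second', 'Third']
```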
Essentially, the sort is performing as designed and documented. | Here is an extension method SortStable() for `List<T> where T : IComparable<T>`:
```
public static void SortStable<T>(this List<T> list) where T : IComparable<T>
{
var listStableOrdered = list.OrderBy(x => x, new ComparableComparer<T>()).ToList();
list.Clear();
list.AddRange(listStableOrdered);
}
private class ComparableComparer<T> : IComparer<T> where T : IComparable<T>
{
public int Compare(T x, T y)
{
return x.CompareTo(y);
}
}
```
Test:
```
[Test]
public void SortStable()
{
var list = new List<SortItem>
{
new SortItem{ SortOrder = 1, Name = "Name1"},
new SortItem{ SortOrder = 2, Name = "Name2"},
new SortItem{ SortOrder = 2, Name = "Name3"},
};
list.SortStable();
Assert.That(list.ElementAt(0).SortOrder, Is.EqualTo(1));
Assert.That(list.ElementAt(0).Name, Is.EqualTo("Name1"));
Assert.That(list.ElementAt(1).SortOrder, Is.EqualTo(2));
Assert.That(list.ElementAt(1).Name, Is.EqualTo("Name2"));
Assert.That(list.ElementAt(2).SortOrder, Is.EqualTo(2));
Assert.That(list.ElementAt(2).Name, Is.EqualTo("Name3"));
}
private class SortItem : IComparable<SortItem>
{
public int SortOrder { get; set; }
public string Name { get; set; }
public int CompareTo(SortItem other)
{
return SortOrder.CompareTo(other.SortOrder);
}
}
```
In the test method, if you call Sort() method instead of SortStable(), you can see that the test would fail. | Why does List<T>.Sort method reorder equal IComparable<T> elements? | [
"",
"c#",
".net",
"sorting",
"quicksort",
""
] |
I'm a newbie to Python and I'm looking at using it to write some hairy EDI stuff that our supplier requires.
Basically they need an 80-character fixed width text file, with certain "chunks" of the field with data and others left blank. I have the documentation so I know what the length of each "chunk" is. The response that I get back is easier to parse since it will already have data and I can use Python's "slices" to extract what I need, but I can't assign to a slice - I tried that already because it sounded like a good solution, and it didn't work since Python strings are immutable :)
Like I said I'm really a newbie to Python but I'm excited about learning it :) How would I go about doing this? Ideally I'd want to be able to say that range 10-20 is equal to "Foo" and have it be the string "Foo" with 7 additional whitespace characters (assuming said field has a length of 10) and have that be a part of the larger 80-character field, but I'm not sure how to do what I'm thinking. | You don't need to assign to slices, just build the string using [`% formatting`](http://docs.python.org/library/stdtypes.html#string-formatting-operations).
An example with a fixed format for 3 data items:
```
>>> fmt="%4s%10s%10s"
>>> fmt % (1,"ONE",2)
' 1 ONE 2'
>>>
```
Same thing, field width supplied with the data:
```
>>> fmt2 = "%*s%*s%*s"
>>> fmt2 % (4,1, 10,"ONE", 10,2)
' 1 ONE 2'
>>>
```
Separating data and field widths, and using `zip()` and `str.join()` tricks:
```
>>> widths=(4,10,10)
>>> items=(1,"ONE",2)
>>> "".join("%*s" % i for i in zip(widths, items))
' 1 ONE 2'
>>>
``` | Hopefully I understand what you're looking for: some way to conveniently identify each part of the line by a simple variable, but output it padded to the correct width?
The snippet below may give you what you want
```
class FixWidthFieldLine(object):
fields = (('foo', 10),
('bar', 30),
('ooga', 30),
('booga', 10))
def __init__(self):
self.foo = ''
self.bar = ''
self.ooga = ''
self.booga = ''
def __str__(self):
return ''.join([getattr(self, field_name).ljust(width)
for field_name, width in self.fields])
f = FixWidthFieldLine()
f.foo = 'hi'
f.bar = 'joe'
f.ooga = 'howya'
f.booga = 'doin?'
print f
```
This yields:
```
hi        joe                           howya                         doin?
```
It works by storing a class-level variable, `fields` which records the order in which each field should appear in the output, together with the number of columns that field should have. There are correspondingly-named instance variables in the `__init__` that are set to an empty string initially.
The `__str__` method outputs these values as a string. It uses a list comprehension over the class-level `fields` attribute, looking up the instance value for each field by name, and then left-justifying its output according to the columns. The resulting list of fields is then joined together by an empty string.
Note this doesn't parse input, though you could easily override the constructor to take a string and parse the columns according to the field and field widths in `fields`. It also doesn't check for instance values that are longer than their allotted width. | Writing/parsing text file with fixed width lines | [
"",
"python",
"parsing",
"edi",
""
] |
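On the parsing half of the row above (reading the fixed-width response back with slices), the same width table that builds a line can drive the slicing; a small sketch reusing the widths from the accepted answer:

```python
def split_fixed(line, widths):
    """Slice a fixed-width record into its chunks, stripping the padding."""
    fields, pos = [], 0
    for width in widths:
        fields.append(line[pos:pos + width].strip())
        pos += width
    return fields

widths = (4, 10, 10)
line = "%4s%10s%10s" % (1, "ONE", 2)
print(split_fixed(line, widths))  # ['1', 'ONE', '2']
```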
I have an application that is modifying 5 identical xml files, each located on a different network share. I am aware that this is needlessly redundant, but "it must be so."
Every time this application runs, exactly one element (no more, no less) will be added/removed/modified.
Initially, the application opens each xml file, adds/removes/modifies the element to the appropriate node and saves the file, or throws an error if it cannot (Unable to access the network share, timeout, etc...)
How do I make this atomic?
My initial assumption was to:
```
bool isAtomic = true;
foreach (var path in NetworkPaths)
    if (!File.Exists(path))
        isAtomic = false;
if (isAtomic)
{
//Do things
}
```
But I can see that only going so far. Is there another way to do this, or a direction I can be pointed to? | Unfortunately, making this truly "atomic" isn't really possible. My best advice would be to wrap up your own form of transaction for this, so you can at least undo the changes.
I'd do something like check for each file - if one doesn't exist, throw.
Backup each file, save the state needed to undo, or save a copy in memory if they're not huge. If you can't, throw.
Make your edits, then save the files. If you get a failure here, try to restore from each of the backups. You'll need to do some error handling here so you don't throw until all of the backups were restored. After restoring, throw your exception.
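Sketching those steps (language-agnostic, shown in Python for brevity; the helper name and the `.bak` suffix convention are my own):

```python
import os
import shutil
import tempfile

def update_all_or_rollback(paths, edit):
    """Apply edit(path) to every file, restoring backups if any edit fails."""
    missing = [p for p in paths if not os.path.exists(p)]
    if missing:  # step 1: if any file is absent, throw before touching anything
        raise IOError("missing: %s" % missing)
    backups = {p: p + ".bak" for p in paths}
    for path, bak in backups.items():  # step 2: back up every file first
        shutil.copy2(path, bak)
    try:
        for path in paths:  # step 3: make the edits
            edit(path)
    except Exception:
        for path, bak in backups.items():  # step 4: best-effort restore, then re-throw
            shutil.copy2(bak, path)
        raise
    finally:
        for bak in backups.values():
            os.remove(bak)

# Demo with three throwaway files standing in for the network shares
tmpdir = tempfile.mkdtemp()
paths = [os.path.join(tmpdir, "copy%d.xml" % i) for i in range(3)]
for p in paths:
    with open(p, "w") as f:
        f.write("<root/>")

def add_element(path):
    with open(path, "a") as f:
        f.write("<item/>")

update_all_or_rollback(paths, add_element)
```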
At least this way, you'll be more likely to not make a change to just a single file. Hopefully, if you can modify one file, you'll be able to restore it from your backup/undo your modification. | I suggest the following solution.
* Try opening all files with a write lock.
+ If one or more fail, abort.
+ Modify and flush all files.
- If one or more fail, roll the already modified ones back and flush them again.
* Close all files.
If the rollback fails ... well ... try again, and try again, and try again ... and give up in an inconsistent state.
If you have control over all processes writing these files, you could implement a simple locking mechanism using a lock file. You could even perform write-ahead logging and record the planned change in the lock file. If your process crashes, the next one attempting to modify the files would detect the incomplete operation and could continue it before doing its own modification. | Atomic modification of files across multiple networks | [
"",
"c#",
"file",
"atomic",
""
] |
C++11 adds the ability for telling the compiler to [create a default implementation](http://en.wikipedia.org/wiki/C%2B%2B0x#Defaulting.2Fdeleting_of_standard_functions_on_C.2B.2B_objects) of any of the [special member functions](http://en.wikipedia.org/wiki/Special_member_functions). While I can see the value of deleting a function, where's the value of explicitly defaulting a function? Just leave it blank and the compiler will do it anyway.
The only point I can see is that a default constructor is only created when no other constructor exists:
```
class eg {
public:
eg(int i);
eg() = default;
};
```
But is that really better than how you do it now?
```
class eg {
public:
eg(int i);
eg() {}
};
```
Or am I missing a use-case? | A defaulted constructor will have a declaration, and that declaration will be subject to the normal access rules. E.g. you can make the default copy constructor protected. Without these new declarations, the default generated members are public. | Those examples from [Stroustrup's website](http://www.stroustrup.com/C++11FAQ.html#default) might help you understand the point:
> **defaulted and deleted functions -- control of defaults**
>
> The common idiom of "prohibiting
> copying" can now be expressed
> directly:
>
> ```
> class X {
> // ...
>
> X& operator=(const X&) = delete; // Disallow copying
> X(const X&) = delete;
> };
> ```
>
> Conversely, we can also say explicitly
> that we want to default copy behavior:
>
> ```
> class Y {
> // ...
> Y& operator=(const Y&) = default; // default copy semantics
> Y(const Y&) = default;
>
> };
> ```
>
> Being explicit about the default is
> obviously redundant, but comments to
> that effect and (worse) a user
> explicitly defining copy operations
> meant to give the default behavior are
> not uncommon. Leaving it to the
> compiler to implement the default
> behavior is simpler, less error-prone,
> and often leads to better object code.
> The "default" mechanism can be used
> for any function that has a default.
> The "delete" mechanism can be used for
> any function. For example, we can
> eliminate an undesired conversion like
> this:
>
> ```
> struct Z {
> // ...
>
> Z(long long); // can initialize with an long long
> Z(long) = delete; // but not anything less
> };
> ``` | What's the point in defaulting functions in C++11? | [
"",
"c++",
"c++11",
"defaulted-functions",
""
] |
I want to make a simple, simple DLL which exports one or two functions, then try to call it from another program... Everywhere I've looked so far, is for complicated matters, different ways of linking things together, weird problems that I haven't even *begun* to realize exist yet... I just want to get started, by doing something like so:
Make a DLL which exports some functions, like,
```
int add2(int num){
return num + 2;
}
int mult(int num1, int num2){
int product;
product = num1 * num2;
return product;
}
```
I'm compiling with MinGW, I'd like to do this in C, but if there's any real differences doing it in C++, I'd like to know those also. I want to know how to load that DLL into another C (and C++) program, and then call those functions from it.
My goal here, after playing around with DLLs for a bit, is to make a VB front-end for C(++) code, by loading DLLs into visual basic (I have visual studio 6, I just want to make some forms and events for the objects on those forms, which call the DLL).
I need to know how to call gcc (/g++) to make it create a DLL, but also how to write (/generate) an exports file... and what I can/cannot do in a DLL (like, can I take arguments by pointer/reference from the VB front-end? Can the DLL call a theoretical function in the front-end? Or have a function take a "function pointer" (I don't even know if that's possible) from VB and call it?) I'm fairly certain I can't pass a variant to the DLL...but that's all I know really.
## update again
Okay, I figured out how to compile it with gcc, to make the dll I ran
```
gcc -c -DBUILD_DLL dll.c
gcc -shared -o mydll.dll dll.o -Wl,--out-implib,libmessage.a
```
and then I had another program load it and test the functions, and it worked great,
thanks so much for the advice,
but I tried loading it with VB6, like this
```
Public Declare Function add2 Lib "C:\c\dll\mydll.dll" (num As Integer) As Integer
```
then I just called add2(text1.text) from a form, but it gave me a runtime error:
"Can't find DLL entry point add2 in C:\c\dll\mydll.dll"
this is the code I compiled for the DLL:
```
#ifdef BUILD_DLL
#define EXPORT __declspec(dllexport)
#else
#define EXPORT __declspec(dllimport)
#endif
EXPORT int __stdcall add2(int num){
return num + 2;
}
EXPORT int __stdcall mul(int num1, int num2){
return num1 * num2;
}
```
calling it from the C program like this worked, though:
```
#include<stdio.h>
#include<windows.h>
int main(){
HANDLE ldll;
int (*add2)(int);
int (*mul)(int,int);
ldll = LoadLibrary("mydll.dll");
if(ldll>(void*)HINSTANCE_ERROR){
add2 = GetProcAddress(ldll, "add2");
mul = GetProcAddress(ldll, "mul");
printf("add2(3): %d\nmul(4,5): %d", add2(3), mul(4,5));
} else {
printf("ERROR.");
}
}
```
any ideas?
## solved it
To solve the previous problem, I just had to compile it like so:
```
gcc -c -DBUILD_DLL dll.c
gcc -shared -o mydll.dll dll.o -Wl,--add-stdcall-alias
```
and use this API call in VB6
```
Public Declare Function add2 Lib "C:\c\dll\mydll" _
(ByVal num As Integer) As Integer
```
I learned not to forget to specify ByVal or ByRef explicitly--I was just getting back the address of the argument I passed, it looked like, -3048. | Regarding building a DLL using MinGW, here are some very brief instructions.
First, you need to mark your functions for export, so they can be used by callers of the DLL. To do this, modify them so they look like (for example)
```
__declspec( dllexport ) int add2(int num){
return num + 2;
}
```
then, assuming your functions are in a file called funcs.c, you can compile them:
```
gcc -shared -o mylib.dll funcs.c
```
The -shared flag tells gcc to create a DLL.
To check if the DLL has actually exported the functions, get hold of the free [Dependency Walker](http://www.dependencywalker.com/) tool and use it to examine the DLL.
For a free IDE which will automate all the flags etc. needed to build DLLs, take a look at the excellent [Code::Blocks](http://www.codeblocks.org/), which works very well with MinGW.
**Edit:** For more details on this subject, see the article [Creating a MinGW DLL for Use with Visual Basic](http://www.mingw.org/wiki/Visual_Basic_DLL) on the MinGW Wiki. | Here is how you do it:
In .h
```
#ifdef BUILD_DLL
#define EXPORT __declspec(dllexport)
#else
#define EXPORT __declspec(dllimport)
#endif
extern "C" // Only if you are using C++ rather than C
{
EXPORT int __stdcall add2(int num);
EXPORT int __stdcall mult(int num1, int num2);
}
```
in .cpp
```
extern "C" // Only if you are using C++ rather than C
{
EXPORT int __stdcall add2(int num)
{
return num + 2;
}
EXPORT int __stdcall mult(int num1, int num2)
{
int product;
product = num1 * num2;
return product;
}
}
```
The macro tells your module (i.e. your .cpp files) that they are providing the DLL functions to the outside world. People who include your .h file want to import the same functions, so for them EXPORT tells the linker to import. You need to add BUILD_DLL to the project compile options, and you might want to rename it to something obviously specific to your project (in case a DLL uses your DLL).
You might also need to create a [.def file to rename the functions](http://msdn.microsoft.com/en-us/library/d91k01sh(VS.71).aspx) and de-obfuscate the names (C/C++ mangles those names). [This blog entry](http://blogs.msdn.com/oldnewthing/archive/2004/01/12/57833.aspx) might be an interesting launching off point about that.
Loading your own custom dlls is just like loading system dlls. Just ensure that the DLL is on your system path. C:\windows\ or the working dir of your application are an easy place to put your dll. | Compile a DLL in C/C++, then call it from another program | [
"",
"c++",
"c",
"gcc",
"vb6",
"mingw",
""
] |
What are Java's native ways of communicating with devices or ports such as LPT1, COM1, USB directly? | Native means unportable, so you have to mess with JNI or [JNA](http://jna.dev.java.net/) if and only if the following libraries don't work for you:
* [javacomm](http://java.sun.com/products/javacomm/) for serial ports
* [jUSB](http://jusb.sourceforge.net/) for USB | [RXTX](http://users.frii.com/jarvi/rxtx/intro.html) is good one for COM and LPT ports. USB is extremely difficult; probably the easiest way is to write your own C+JNI wrapper for native drivers of the device. | What are Java's native ways of communicating with devices directly? | [
"",
"java",
"device-driver",
"ports",
"device",
""
] |
I have been given the task of modernizing my company's I5 based point of sale system. The main push is to create a friendlier interface/better data views without losing business logic.
Is there a good Java way of interacting with an interactive (non-command line) I5 program? Something along the lines of what PHP provides with their 5250 Bridge? I'm considering using the 5250 Bridge, but I'd prefer a Java-based solution.
Thanks! | Assuming that the interactive part of the application is separable ... Why not use the Toolbox for Java to call the underlying programs directly and create a remote GUI? You can call APIs, PGMs, and CL commands remotely from Java. | The [IBM Developer Kit for Java](http://publib.boulder.ibm.com/infocenter/iseries/v5r3/index.jsp?topic=/rzaha/calrpgex.htm) allows you to run Java code on the iSeries.
You can call Java code directly from RPG/COBOL programs using the JNI interface. [Here's an article describing how to call Java from an RPG program.](http://www.itjungle.com/mpo/mpo013102-story02.html) | Modernize I5/As400 program with Java | [
"",
"java",
"ibm-midrange",
""
] |
I'm trying to build a simple website with login functionality very similar to the one here on SO.
The user should be able to browse the site as an anonymous user and there will be a login link on every page. When clicking on the login link the user will be taken to the login form. After a successful login the user should be taken back to the page from where he clicked the login link in the first place.
I'm guessing that I have to somehow pass the url of the current page to the view that handles the login form but I can't really get it to work.
EDIT:
I figured it out. I linked to the login form by passing the current page as a GET parameter and then used 'next' to redirect to that page. Thanks!
EDIT 2:
My explanation did not seem to be clear so as requested here is my code:
Lets say we are on a page foo.html and we are not logged in. Now we would like to have a link on foo.html that links to login.html. There we can login and are then redirected back to foo.html.
The link on foo.html looks like this:
```
<a href='/login/?next={{ request.path }}'>Login</a>
```
Now I wrote a custom login view that looks somewhat like this:
```
def login_view(request):
    redirect_to = request.REQUEST.get('next', '')
    if request.method=='POST':
        #create login form...
        if valid login credentials have been entered:
            return HttpResponseRedirect(redirect_to)
        #...
    return render_to_response('login.html', locals())
```
And the important line in login.html:
```
<form method="post" action="./?next={{ redirect_to }}">
```
So yeah, that's pretty much it; hope that makes it clear. | You do not need to make an extra view for this, the functionality is already built in.
First, each page with a login link needs to know the current path, and the easiest way is to add the request context preprocessor to settings.py (the first 4 are default); then the request object will be available in each request:
**settings.py:**
```
TEMPLATE_CONTEXT_PROCESSORS = (
    "django.core.context_processors.auth",
    "django.core.context_processors.debug",
    "django.core.context_processors.i18n",
    "django.core.context_processors.media",
    "django.core.context_processors.request",
)
```
Then add in the template you want the Login link:
**base.html:**
```
<a href="{% url django.contrib.auth.views.login %}?next={{request.path}}">Login</a>
```
This will add a GET argument to the login page that points back to the current page.
The login template can then be as simple as this:
**registration/login.html:**
```
{% block content %}
<form method="post" action="">
{{form.as_p}}
<input type="submit" value="Login">
</form>
{% endblock %}
``` | To support full urls with param/values you'd need:
```
?next={{ request.get_full_path|urlencode }}
```
instead of just:
```
?next={{ request.path }}
``` | Django: Redirect to previous page after login | [
"",
"python",
"django",
""
] |
What are your opinions and expectations on [Google's Unladen Swallow](http://code.google.com/p/unladen-swallow/wiki/ProjectPlan)? From their project plan:
> We want to make Python faster, but we
> also want to make it easy for large,
> well-established applications to
> switch to Unladen Swallow.
>
> 1. Produce a version of Python at least 5x faster than CPython.
> 2. Python application performance should be stable.
> 3. Maintain source-level compatibility with CPython
> applications.
> 4. Maintain source-level compatibility with CPython extension
> modules.
> 5. We do not want to maintain a Python implementation forever; we view
> our work as a branch, not a fork.
And even sweeter:
> In addition, we intend to remove the
> GIL and fix the state of
> multithreading in Python. We believe
> this is possible through the
> implementation of a more sophisticated
> GC
It almost looks too good to be true, like the best of PyPy and Stackless combined.
More info:
* Jesse Noller: ["Pycon: Unladen-Swallow"](http://jessenoller.com/2009/03/26/pycon-unladen-swallow/)
* ArsTechnica: ["Google searches for holy grail of Python performance"](http://arstechnica.com/open-source/news/2009/03/google-launches-project-to-boost-python-performance-by-5x.ars)
Update: as DNS pointed out, there was related question: [What is LLVM and How is replacing Python VM with LLVM increasing speeds 5x?](https://stackoverflow.com/questions/695370/what-is-llvm-and-how-is-replacing-python-vm-with-llvm-increasing-speeds-5x) | I have high hopes for it.
1. This is being worked on by several people from Google. Seeing as how the BDFL is also employed there, this is a positive.
2. Off the bat, they state that this is a branch, and not a fork. As such, it's within the realm of possibility that this will eventually get merged into trunk.
3. Most importantly, **they have a working version**. They're using a version of unladen swallow **right now** for Youtube stuff.
They seem to have their shit together. They have a relatively detailed plan for a project at this stage, and they have a list of tests they use to gauge performance improvements and regressions.
I'm not holding my breath on GIL removal, but even if they never get around to that, the speed increases alone make it awesome. | I'm sorry to disappoint you, but when you read [PEP 3146](http://www.python.org/dev/peps/pep-3146/) things look bad.
The improvement is by now minimal, and therefore the compiler code gets more complicated.
Also removing the GIL has many downsides.
Btw. PyPy seems to be faster than Unladen Swallow in [some tests](http://morepypy.blogspot.com/2010/03/hello.html). | Opinions on Unladen Swallow? | [
"",
"python",
"llvm",
"unladen-swallow",
""
] |
This is possibly related to a classpath problem, but I'm really not sure at this point, since I don't get this error on some machines.
The error at the top of the stack is `SAX2 driver class org.apache.crimson.parser.XMLReaderImpl not found`. Why would I get this error only in some environments, but not others? How can I further investigate and/or fix this?
**Environments:**
* Jetty on Mac or PC == OK
* Tomcat 5 or 6 on Mac == OK
* Tomcat 5 or 6 on Win XP == ERROR
* Tomcat 6 on CentOS == ERROR
**Versions in the POM:**
* batik:batik:jar:1.5:compile
* net.sf.saxon:saxon:jar:8.7:compile
* batik:batik-transcoder:jar:1.6-1:compile
+ batik:batik-bridge:jar:1.6-1:compile
+ batik:batik-gvt:jar:1.6-1:compile
+ batik:batik-awt-util:jar:1.6-1:compile
+ batik:batik-util:jar:1.6-1:compile
+ batik:batik-gui-util:jar:1.6-1:compile
+ batik:batik-ext:jar:1.6-1:compile
+ xml-apis:xmlParserAPIs:jar:2.0.2:compile
+ batik:batik-script:jar:1.6-1:compile
+ batik:batik-svg-dom:jar:1.6-1:compile
+ batik:batik-dom:jar:1.6-1:compile
+ batik:batik-css:jar:1.6-1:compile
+ batik:batik-xml:jar:1.6-1:compile
+ batik:batik-parser:jar:1.6-1:compile
+ fop:fop:jar:0.20.5:compile
+ batik:batik-1.5-fop:jar:0.20-5:compile
+ xml-apis:xml-apis:jar:1.0.b2:compile
+ xalan:xalan:jar:2.4.1:compile
+ xerces:xercesImpl:jar:2.2.1:compile
+ avalon-framework:avalon-framework:jar:4.0:compile | It turns out that Apache XML Graphics itself adds Crimson to the classpath, twice. Once in the Apache Batik transcoder, and once in Apache FOP.
Since the libs are loaded alphabetically in Tomcat, FOP included Crimson first, but then Batik also did the same.
I excluded FOP from the project POM, and have resolved the classpath issue. | Thanks, this was very useful.
On Win 7 / Tomcat 6 I had exactly the same "missing crimson" thing. Got it working by adding the crimson libraries, but the performance was poor, very slow. It took something like 10-15 seconds for a single image transcoding. Finally solved the problem by removing FOP as you described, and now it is really fast. This is how it is in my POM:
```
<dependency>
<groupId>batik</groupId>
<artifactId>batik-transcoder</artifactId>
<version>1.6-1</version>
<scope>compile</scope>
<exclusions>
<exclusion>
<artifactId>fop</artifactId>
<groupId>fop</groupId>
</exclusion>
</exclusions>
</dependency>
``` | "SAX2 driver class org.apache.crimson.parser.XMLReaderImpl not found" when using Batik in a webapp on Tomcat | [
"",
"java",
"tomcat",
"svg",
"classpath",
"batik",
""
] |
Why doesn't func3 get executed in the program below? After func1, func2 doesn't need to get evaluated but for func3, shouldn't it?
```
if (func1() || func2() && func3()) {
System.out.println("true");
} else {
System.out.println("false");
}
}
public static boolean func1() {
System.out.println("func1");
return true;
}
public static boolean func2() {
System.out.println("func2");
return false;
}
public static boolean func3() {
System.out.println("func3");
return false;
}
``` | You're using a short-circuited or. If the first argument is true, the entire expression is true.
It might help if I add the implicit parentheses that the compiler uses
**Edit**: As Chris Jester-Young noted, this is actually because logical operators have left-to-right associativity:
```
if (func1() || (func2() && func3()))
```
After func1 returns, it becomes this:
```
if (true || (func2() && func3()))
```
After evaluating the short-circuited or, it becomes:
```
if (true)
``` | Java functions are evaluated according to [precedence rules](http://mindprod.com/jgloss/precedence.html)
because "&&" is of higher precendence than "||", it is evaluated first because you did not have any brackets to set explicit precedence
so you expression of
```
(A || B && C)
```
which is
```
(T || F && F)
```
is bracketed as
```
(T || (F && F))
```
because of the precedence rules.
Since the compiler understands that if 'A == true' it doesn't need to bother evaluating the rest of the expression, it stops after evaluating A.
If you had bracketed `((A || B) && C)`, then it would evaluate to false.
**EDIT**
Another way, as mentioned by other posters, is to use "|" and "&" instead of "||" and "&&", because that stops the expression from shortcutting.
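That difference is easy to see in any language that has both short-circuit and eager operators; a quick cross-language illustration in Python (the helper `f` is mine, mirroring the question's print-and-return functions):

```python
calls = []

def f(name, value):
    calls.append(name)  # record which "functions" actually ran
    return value

f("func1", True) or (f("func2", False) and f("func3", False))
short_circuit = list(calls)  # only func1 ran: `or` stopped at the first True

calls[:] = []
f("func1", True) | (f("func2", False) & f("func3", False))
eager = list(calls)  # all three ran: the bitwise operators evaluate both sides

print(short_circuit, eager)
```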
However, because of the precedence rules, the end result will still be the same. | What is wrong with the short circuit logic in this Java code? | [
"",
"java",
"logical-operators",
""
] |
I have some code,
```
int count = 0;
list.ForEach(i => i.SomeFunction(count++));
```
This seems to not increment count. Is count passed by value here? Is there any difference if I use the {} in the lambda?
```
int count = 0;
list.ForEach(i =>
{
i.SomeFunction(count++);
});
```
**Update 1**
Sorry, my mistake, it does update the original count. | `count` is an int, and ints are value types, which means they are indeed passed by value. There is no semantic difference between your first and second example.
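C# closures capture the variable itself, not a snapshot of its value; Python closures behave the same way, which makes for a compact illustration of the capture semantics (names here are mine):

```python
def demo():
    count = 0

    def body(item):
        nonlocal count  # the closure captures the variable itself, not a copy of 0
        count += 1

    for item in ["a", "b", "c"]:
        body(item)
    return count

print(demo())  # 3: the outer variable really was incremented
```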
(That said, it looks to me like it should be incrementing `count`, since it should be capturing the original reference as far as the closure. To clarify -- although count will be passed by value down into SomeFunction, things don't get "passed" into your lambda expression when you use them inside the expression -- they are the same reference as the external variable.) | In both cases you are creating what's called a closure. Essentially, count is being wrapped in a class and that class is being used by the lambda expression.
Bill Wagner has a great book called [More Effective C#](https://rads.stackoverflow.com/amzn/click/com/0321485890) and he has a blog post that describes [closures](http://srtsolutions.com/blogs/billwagner/archive/2008/01/22/looking-inside-c-closures.aspx) in more detail. | Does C# Pass by Value to Lambdas? | [
"",
"c#",
".net",
"delegates",
"lambda",
""
] |
I want to know when a USB device is connected to the computer that my Qt application is running on (in Windows). In my main QWidget, I've reimplemented `winEventFilter` like this:
```
bool winEventFilter ( MSG * msg, long * result ) {
qDebug() << msg;
return false;
}
```
I'd expect qDebug to send at least something when I connect a USB device, but I don't get anything.
I'm guessing that I'm fundamentally misunderstanding the process here - this is my first Qt app! | I believe what you may be missing is the call to register for device notification. Here is code that I use to do the same thing, though I override the winEvent() method of the QWidget class and not the winEventFilter.
```
// Register for device connect notification
DEV_BROADCAST_DEVICEINTERFACE devInt;
ZeroMemory( &devInt, sizeof(devInt) );
devInt.dbcc_size = sizeof(DEV_BROADCAST_DEVICEINTERFACE);
devInt.dbcc_devicetype = DBT_DEVTYP_DEVICEINTERFACE;
devInt.dbcc_classguid = GUID_DEVINTERFACE_VOLUME;
m_hDeviceNotify =
RegisterDeviceNotification( winId(), &devInt, DEVICE_NOTIFY_WINDOW_HANDLE );
if(m_hDeviceNotify == NULL)
{
qDebug() << "Failed to register device notification";
} // end if
```
NOTE: You will most likely need to change the values of the `DEV_BROADCAST_DEVICEINTERFACE` to fit your needs.
EDIT: To use this code you will need to include the proper header files and perform the proper setup. `DEV_BROADCAST_DEVICEINTERFACE` requires the Dbt.h header to be included. Also, the focal point of this code is on the RegisterDeviceNotification function. Info is available on [MSDN](http://msdn.microsoft.com/en-us/library/aa363431.aspx) | I'm working along the same lines but in C#.
You need to register your application with the system (look at the RegisterHidNotification() function). Mine looks like this:
```
void RegisterHidNotification() //Register this application to receive all USB device notices
{
BroadcastHeader dbi = new BroadcastHeader();
int size = Marshal.SizeOf(dbi);
dbi.Size = size;
dbi.Type = DeviceType.DeviceInterface;
**dbi.Classguid = GUID_DEVINTERFACE_USB_DEVICE**;
dbi.Name = 0;
IntPtr buffer = Marshal.AllocHGlobal(size);
Marshal.StructureToPtr(dbi, buffer, true);
IntPtr r = RegisterDeviceNotification(this.Handle, buffer, (int)DeviceEvents.regWindowHandle);
if (r == IntPtr.Zero)
statusLabel.Text = GetLastError().ToString();
}
```
The most important part of the function is the bit I've highlighted in bold (or at least tried to). Defined as: `public Guid GUID_DEVINTERFACE_USB_DEVICE = new Guid("A5DCBF10-6530-11D2-901F-00C04FB951ED");`
Hope you can adapt it to your application. | How to know when a new USB storage device is connected in Qt? | [
"",
"c++",
"windows",
"qt",
"usb",
""
] |
I have a recursive method that creates an unordered list from an XML document. To check which node I am positioned on, I use the querystring to match the URL in the XML document.
I need to add the class 'current' on the parent node if I am positioned on its child node.
Like this:
```
MenuItem 1
MenuItem 2 [class current]
MenuItem 3 [class current] (selected node)
MenuItem 4
```
My XML document is like this:
```
<MenuItem Name="MenuItem 1" Url="MenuItem1.aspx"/>
<MenuItem Name="MenuItem 2" Url="MenuItem2.aspx">
<MenuItem Name="MenuItem 3" Url="MenuItem3.aspx" />
</MenuItem>
<MenuItem Name="MenuItem4" Url="MenuItem4.asp" />
```
And my current code looks like this:
```
foreach (XmlNode item in menuitems)
{
if (HttpContext.Current.Request.Url.AbsolutePath.ToLower() == item.Attributes["Url"].Value.ToLower())
{
writer.AddAttribute(HtmlTextWriterAttribute.Class, "current");
}
writer.RenderBeginTag(HtmlTextWriterTag.Li);
// And so on...
}
```
So I need some kind of function to find this node and place a class on that node aswell.
If you need more code, information or anything else - please say so! :-)
Thank you! | Thanks for the answers. But I solved my own problem. You guys get upvotes for your effort.
Here was my solution:
```
System.Collections.Hashtable ht = new System.Collections.Hashtable();
ht.Add("Url", HttpContext.Current.Request.Url.AbsolutePath.ToLower());
XmlNode currentpage = Core.FindChildNode(item, "MenuItem", ht);
if (HttpContext.Current.Request.RawUrl.ToLower() == item.Attributes["Url"].Value.ToLower() || currentpage != null)
{
writer.AddAttribute(HtmlTextWriterAttribute.Class, "current");
}
```
This is my FindChildNode method
```
public static XmlNode FindChildNode(XmlNode parent, string name, System.Collections.Hashtable keyvaluecollection)
{
if (parent.NodeType != XmlNodeType.Element) return null;
XmlElement el = (XmlElement)parent;
XmlNodeList nodes = el.GetElementsByTagName(name);
foreach (XmlNode node in nodes)
{
if (node.NodeType == XmlNodeType.Element)
{
bool found = false;
foreach (string key in keyvaluecollection.Keys)
{
if (node.Attributes[key] != null && node.Attributes[key].Value == (string)keyvaluecollection[key])
{
found = true;
}
else
{
found = false;
break;
}
}
if (found) return node;
}
}
return null;
}
```
Seems to work like a charm! :-) | This isn't an especially elegant solution, but if I understand the problem correctly, you could create a method to check if a child node is selected:
```
private static bool IsChildSelected(XmlNode item)
{
foreach(XmlNode child in item.ChildNodes)
{
if(HttpContext.Current.Request.Url.AbsolutePath.ToLower() == child.Attributes["Url"].Value.ToLower())
{
return true;
}
}
return false;
}
```
and check it as you iterate through the nodes:
```
foreach(XmlNode item in menuitems)
{
if(HttpContext.Current.Request.Url.AbsolutePath.ToLower() == item.Attributes["Url"].Value.ToLower()
|| IsChildSelected(item))
{
writer.AddAttribute(HtmlTextWriterAttribute.Class, "current");
}
writer.RenderBeginTag(HtmlTextWriterTag.Li);
}
``` | Add CSS class on parent node if child node is selected | [
"",
"c#",
"asp.net",
"xml",
""
] |
I've just started learning Qt, using their tutorial. I'm currently on tutorial 7, where we've made a new LCDRange class. The implementation of LCDRange (the .cpp file) uses the Qt QSlider class, so in the .cpp file is
```
#include <QSlider>
```
but in the header is a forward declaration:
```
class QSlider;
```
According to Qt,
> This is another classic trick, but one that's much less used often. Because we don't need QSlider in the interface of the class, only in the implementation, we use a forward declaration of the class in the header file and include the header file for QSlider in the .cpp file.
>
> This makes the compilation of big projects much faster, because the compiler usually spends most of its time parsing header files, not the actual source code. This trick alone can often speed up compilations by a factor of two or more.
Is this worth doing? It seems to make sense, but it's one more thing to keep track of - I feel it would be much simpler just to include everything in the header file. | Absolutely. The C/C++ build model is ...ahem... an anachronism (to say the least). For large projects it becomes a serious PITA.
As Neil notes correctly, this should **not** be the default approach for your class design; don't go out of your way unless you really need to.
**Breaking Circular include references** is the one reason where you have to use forward declarations.
```
// a.h
#include "b.h"
struct A { B * a; };
// b.h
#include "a.h" // circular include reference
struct B { A * a; };
// Solution: break circular reference by forward declaration of B or A
```
**Reducing rebuild time** - Imagine the following code
```
// foo.h
#include <qslider>
class Foo
{
QSlider * someSlider;
}
```
now every .cpp file that directly or indirectly pulls in Foo.h also pulls in QSlider.h and all of its dependencies. That may be hundreds of .cpp files! (Precompiled headers help a bit - and sometimes a lot - but they turn disk/CPU pressure into memory/disk pressure, and thus are soon hitting the "next" limit)
If the header requires only a reference declaration, this dependency can often be limited to a few files, e.g. foo.cpp.
**Reducing incremental build time** - The effect is even more pronounced, when dealing with your own (rather than stable library) headers. Imagine you have
```
// bar.h
#include "foo.h"
class Bar
{
Foo * kungFoo;
// ...
}
```
Now if most of your .cpp's need to pull in bar.h, they also indirectly pull in foo.h. Thus, every change of foo.h triggers build of all these .cpp files (which might not even need to know Foo!). If bar.h uses a forward declaration for Foo instead, the dependency on foo.h is limited to bar.cpp:
```
// bar.h
class Foo;
class Bar
{
Foo * kungFoo;
// ...
}
// bar.cpp
#include "bar.h"
#include "foo.h"
// ...
```
**It is so common that it is a pattern** - the [PIMPL pattern](http://www.google.de/search?q=PIMPL+pattern). Its use is two-fold: first, it provides true interface/implementation isolation; second, it reduces build dependencies. In practice, I'd weight their usefulness 50:50.
**You need a reference** in the header; you can't have a direct instantiation of the dependent type. This limits the cases where forward declarations can be applied. If you do it explicitly, it is common to use a utility class (such as [boost::scoped_ptr](http://www.boost.org/doc/libs/1_38_0/libs/smart_ptr/scoped_ptr.htm)) for that.
**Is Build Time worth it?** [Definitely](http://xkcd.com/303/), I'd say. In the worst case build time grows polynomially with the number of files in the project. Other techniques - like faster machines and parallel builds - can provide only percentage gains.
The faster the build, the more often developers test what they did, the more often unit tests run, the faster build breaks can be found and fixed, and the less often developers end up procrastinating.
In practice, managing your build time, while essential on a large project (say, hundreds of source files), still makes a "comfort difference" on small projects. Also, adding improvements after the fact is often an exercise in patience, as a single fix might shave off only seconds (or less) of a 40-minute build. | I use it all the time. My rule is: if it doesn't need the header, then I put a forward declaration (*"use headers if you must, use forward declarations if you can"*). The only thing that sucks is that I need to know how the class was declared (struct/class, maybe if it is a template I need its parameters, ...). But in the vast majority of cases, it just comes down to `"class Slider;"` or something along those lines. If something requires more hassle to be just declared, one can always declare a special forward-declaration header, like the Standard does with `iosfwd`.
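The `iosfwd`-style trick can be sketched like this (file and class names are made up for the example):

```
// shapes_fwd.h -- forward declarations only, in the spirit of <iosfwd>
class Circle;

// scene.h -- only needs shapes_fwd.h, since it stores just a pointer
class Scene {
public:
    void setFocus(Circle* c) { focus = c; }
    Circle* getFocus() const { return focus; }
private:
    Circle* focus = nullptr;
};

// circle.h -- the full definition, included only by code that needs Circle's members
class Circle {
public:
    double radius = 1.0;
};
```

Only translation units that actually touch `Circle`'s members include circle.h; everything else gets by on the one-line forward declaration.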
Not including the header file will not only reduce compile time but also will avoid polluting the namespace. Files including the header will thank you for including as little as possible so they can keep using a clean environment.
This is the rough plan:
```
/* --- --- --- Y.hpp */
class X;
class Y {
X *x;
};
/* --- --- --- Y.cpp */
#include <x.hpp>
#include <y.hpp>
...
```
There are smart pointers that are specifically designed to work with pointers to incomplete types. One very well known one is [`boost::shared_ptr`](http://www.boost.org/doc/libs/1_38_0/libs/smart_ptr/shared_ptr.htm). | Is it worth forward-declaring library classes? | [
"",
"c++",
"forward-declaration",
""
] |
I've defined the following view:
```
<CollectionViewSource x:Key="PatientsView" Source="{Binding Source={x:Static Application.Current}, Path=Patients}"/>
```
Where Patient is the following property:
```
public IEnumerable<Patient> Patients
{
get
{
return from patient in Database.Patients
orderby patient.Lastname
select patient;
}
}
```
Somewhere in my code, I change the Patients database, and I want to have the controls that display this data (using the "PatientsView") to be automatically notified. What's a proper way to do this?
Can the CollectionViewSource be invalidated or something? | I think this is a bit more complex than it seems. Notifying your client application about changes in the database is a non-trivial task. But your life is easier if the database is changed only from your application - this lets you put "refreshing logic" wherever you change the database.
Your "Patients" property seems to be present in one class (maybe a little more than one? :) ). And you probably bind some ListBox to the CollectionViewSource. So instead of calling Refresh on the CollectionViewSource you can make WPF re-call the getter. For this the class that has Patients property has to implement INotifyPropertyChanged interface.
The code would look like this:
```
public class TheClass : INotifyPropertyChanged
{
public IEnumerable<Patient> Patients
{
get
{
return from patient in Database.Patients
orderby patient.Lastname
select patient;
}
}
#region INotifyPropertyChanged members
    // Generated code here, including the event declaration:
    public event PropertyChangedEventHandler PropertyChanged;
#endregion
public void PatientsUpdated()
{
        if (PropertyChanged != null)
            PropertyChanged(this, new PropertyChangedEventArgs("Patients"));
}
}
```
Now, call PatientsUpdated() on an instance of TheClass to trigger an update of the binding.
P.S. Having said all that it just feels like a bad design somehow. | How to invalidate a CollectionViewSource in code behind:
```
CollectionViewSource patientsView = FindResource("PatientsView") as CollectionViewSource;
patientsView.View.Refresh();
``` | Invalidate CollectionViewSource | [
"",
"c#",
".net",
"wpf",
"xaml",
""
] |
It seems that Python 2.6.1 doesn't compile the bz2 module by default when built from source.
I don't have lib-dynload/bz2.so
What's the quickest way to add it (without installing Python from scratch)?
OS is Linux 2.4.32-grsec+f6b+gr217+nfs+a32+fuse23+tg+++opt+c8+gr2b-v6.194 #1 SMP Tue Jun 6 15:52:09 PDT 2006 i686 GNU/Linux
IIRC I used only --prefix flag. | You need libbz2.so (the general purpose libbz2 library) properly installed first, for Python to be able to build its own interface to it. That would typically be from a package in your Linux distro likely to have "libbz2" and "dev" in the package name. | Use your vendor's package management to add the package that contains the development files for bz2. It's usually a package called "libbz2-dev". E.g. on Ubuntu
`sudo apt-get install libbz2-dev` | Python's bz2 module not compiled by default | [
"",
"python",
"c",
"compiler-construction",
""
] |
I have a python program that reads floating point values using the following regular expression
```
(-?\d+\.\d+)
```
once I extract the value using float(match.group(1)), I get the actual floating point number. However, I am not able to distinguish if the number was 1.2345678 or 1.234 or 1.2340000.
The problem I am facing is to print out the floating point value again, with the exact same formatting. An easy solution is to "split and count" the floating point value when still a string, *eg* splitting at the decimal point, and counting the integer part length and the fractional part length, then create the formatter as
```
print "%"+str(total_len)+"."+str(fractional_len)+"f" % value
```
but maybe you know a standard way to achieve the same result? | Your method is basically correct.
String formatting has a less often used `*` operator you can use for the formatting sizes; here's some code:
```
import re
def parse_float(s):
    re_float = re.compile(r'(-?)(\d+)\.(\d+)')
    grps = re_float.search(s)
sign, decimal, fraction = grps.groups()
float_val = float('%s%s.%s' % (sign, decimal, fraction))
total_len = len(grps.group(0))
print '%*.*f' % (total_len, len(fraction), float_val)
parse_float('1.2345678')
parse_float('1.234')
parse_float('1.2340000')
```
and it outputs
```
1.2345678
1.234
1.2340000
``` | If you want to keep a fixed precision, avoid using `float`s and use `Decimal` instead:
```
>>> from decimal import Decimal
>>> d = Decimal('-1.2345')
>>> str(d)
'-1.2345'
>>> float(d)
-1.2344999999999999
``` | keeping same formatting for floating point values | [
"",
"python",
"formatting",
"floating-point",
""
] |
I hear a lot about subtyping tables when designing a database, and I'm fully aware of the theory behind them. However, I have never actually seen table subtyping in action. How can you create subtypes of tables? I am using MS Access, and I'm looking for a way of doing it in SQL as well as through the GUI (Access 2003).
Cheers! | An easy example would be to have a Person table with a primary key and some columns in that table. Now you can create another table called Student that has a foreign key to the person table (its supertype). Now the student table has some columns which the supertype doesn't have like GPA, Major, etc. But the name, last name and such would be in the parent table. You can always access the student name back in the Person table through the foreign key in the Student table.
Anyways, just remember the following:
* The hierarchy depicts the relationship between supertypes and subtypes
* Supertypes have common attributes
* Subtypes have unique attributes
For instance, if you have a supertype table with three subtype tables and you need to display all three in a single form at once (and you need to show not just the supertype date), you end up with a choice of using three outer joins and Nz(), or you need a UNION ALL of three mutually exclusive SELECT statements (one for each subtype). Neither of these will be editable.
I was going to paste some SQL from the first major app where I worked with super/subtype tables, but looking at it, the SQL is so complicated it would just confuse people. That's not so much because my app was complicated, but it's because the nature of the problem is complex -- presenting the full set of data to the user, both super- and subtypes, is by its very nature complex. My conclusion from working with it was that I'd have been better off with only one subtype table.
That's not to say it's not useful in some circumstances, just that Access's bound forms don't necessarily make it easy to present this data to the user. | Subtyping database tables | [
"",
"sql",
"ms-access",
"database-design",
"subtype",
""
] |
Why do Scalar-valued functions seem to cause queries to run cumulatively slower the more times in succession that they are used?
I have this table that was built with data purchased from a 3rd party.
I've trimmed out some stuff to make this post shorter... but just so you get the idea of how things are setup.
```
CREATE TABLE [dbo].[GIS_Location](
[ID] [int] IDENTITY(1,1) NOT NULL, --PK
[Lat] [int] NOT NULL,
[Lon] [int] NOT NULL,
[Postal_Code] [varchar](7) NOT NULL,
[State] [char](2) NOT NULL,
[City] [varchar](30) NOT NULL,
[Country] [char](3) NOT NULL,
CREATE TABLE [dbo].[Address_Location](
[ID] [int] IDENTITY(1,1) NOT NULL, --PK
[Address_Type_ID] [int] NULL,
[Location] [varchar](100) NOT NULL,
[State] [char](2) NOT NULL,
[City] [varchar](30) NOT NULL,
[Postal_Code] [varchar](10) NOT NULL,
[Postal_Extension] [varchar](10) NULL,
[Country_Code] [varchar](10) NULL,
```
Then I have two functions that look up LAT and LON.
```
CREATE FUNCTION [dbo].[usf_GIS_GET_LAT]
(
@City VARCHAR(30),
@State CHAR(2)
)
RETURNS INT
WITH EXECUTE AS CALLER
AS
BEGIN
DECLARE @LAT INT
SET @LAT = (SELECT TOP 1 LAT FROM GIS_Location WITH(NOLOCK) WHERE [State] = @State AND [City] = @City)
RETURN @LAT
END
CREATE FUNCTION [dbo].[usf_GIS_GET_LON]
(
@City VARCHAR(30),
@State CHAR(2)
)
RETURNS INT
WITH EXECUTE AS CALLER
AS
BEGIN
DECLARE @LON INT
SET @LON = (SELECT TOP 1 LON FROM GIS_Location WITH(NOLOCK) WHERE [State] = @State AND [City] = @City)
RETURN @LON
END
```
When I run the following...
```
SET STATISTICS TIME ON
SELECT
dbo.usf_GIS_GET_LAT(City,[State]) AS Lat,
dbo.usf_GIS_GET_LON(City,[State]) AS Lon
FROM
Address_Location WITH(NOLOCK)
WHERE
ID IN (SELECT TOP 100 ID FROM Address_Location WITH(NOLOCK) ORDER BY ID DESC)
SET STATISTICS TIME OFF
```
100 ~= 8 ms, 200 ~= 32 ms, 400 ~= 876 ms
--Edit
Sorry I should have been more clear. I'm not looking to tune the query listed above. This is just a sample to show the execution time getting slower the more records it crunches through. In the real world application the functions are used as part of a where clause to build a radius around a city and state to include all records with in that region. | In most cases, it's best to avoid scalar valued functions that reference tables because (as others said) they are basically black boxes that need to be ran once for every row, and cannot be optimized by the query plan engine. Therefore, they tend to scale linearly even if the associated tables have indexes.
You may want to consider using an inline-table-valued function, since they are evaluated inline with the query, and can be optimized. You get the encapsulation you want, but the performance of pasting the expressions right in the select statement.
As a side effect of being inlined, they can't contain any procedural code (no declare @variable; set @variable = ..; return). However, they can return several rows and columns.
You could re-write your functions something like this:
```
create function usf_GIS_GET_LAT(
@City varchar (30),
@State char (2)
)
returns table
as return (
select top 1 lat
from GIS_Location with (nolock)
where [State] = @State
and [City] = @City
);
GO
create function usf_GIS_GET_LON (
@City varchar (30),
@State char (2)
)
returns table
as return (
select top 1 LON
from GIS_Location with (nolock)
where [State] = @State
and [City] = @City
);
```
The syntax to use them is also a little different:
```
select
Lat.Lat,
Lon.Lon
from
Address_Location with (nolock)
cross apply dbo.usf_GIS_GET_LAT(City,[State]) AS Lat
cross apply dbo.usf_GIS_GET_LON(City,[State]) AS Lon
WHERE
ID IN (SELECT TOP 100 ID FROM Address_Location WITH(NOLOCK) ORDER BY ID DESC)
``` | **They do not.**
There is no bug in scalar functions that causes their performance to degrade exponentially depending on the number of rows the scalar function is executed against. Try your tests again and have a look at SQL Profiler, looking at the CPU, READS and DURATION columns. Increase your test size to include tests that take longer than a second, two seconds, five seconds.
```
CREATE FUNCTION dbo.slow
(
@ignore int
)
RETURNS INT
AS
BEGIN
DECLARE @slow INT
SET @slow = (select count(*) from sysobjects a
cross join sysobjects b
cross join sysobjects c
cross join sysobjects d
cross join sysobjects e
cross join sysobjects f
where a.id = @ignore)
RETURN @slow
END
go
SET STATISTICS TIME ON
select top 1 dbo.slow(id)
from sysobjects
go
select top 5 dbo.slow(id)
from sysobjects
go
select top 10 dbo.slow(id)
from sysobjects
go
select top 20 dbo.slow(id)
from sysobjects
go
select top 40 dbo.slow(id)
from sysobjects
SET STATISTICS TIME OFF
```
Output
```
SQL Server Execution Times:
CPU time = 203 ms, elapsed time = 202 ms.
SQL Server Execution Times:
CPU time = 889 ms, elapsed time = 939 ms.
SQL Server Execution Times:
CPU time = 1748 ms, elapsed time = 1855 ms.
SQL Server Execution Times:
CPU time = 3541 ms, elapsed time = 3696 ms.
SQL Server Execution Times:
CPU time = 7207 ms, elapsed time = 7392 ms.
```
Keep in mind that if you are running a scalar function against rows in the result set, the scalar function will be executed per-row with no global optimisation. | Why do SQL Server Scalar-valued functions get slower? | [
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2005",
"sql-function",
""
] |
We have an application that performs comparisons on data objects to determine if one version of the object is different than another. Our application also does some extensive caching of these objects, and we've run into a bit of a performance issue when it comes to doing these comparisons.
Here's the workflow:
1. Data item 1 is the current item in memory. This item was initially retrieved from cache and deep cloned (all sub objects such as Dictionaries etc). Data item 1 is then edited, and its properties are modified.
2. We are then comparing this object against the original version that was stored in cache. Since Data item 1 was cloned and its properties changed, these objects should be different.
There are a couple of issues here.
The main issue is our deep clone method is very expensive. We profiled it against a shallow clone and it was 10x slower. That's crap. Here's our method to deep clone:
```
public object Clone()
{
using (var memStream = new MemoryStream())
{
var binaryFormatter = new BinaryFormatter(null, new StreamingContext(StreamingContextStates.Clone));
binaryFormatter.Serialize(memStream, this);
memStream.Seek(0, SeekOrigin.Begin);
return binaryFormatter.Deserialize(memStream);
}
}
```
We were initially using the following to clone:
```
public object Clone()
{
return this.MemberwiseClone();
}
```
This was more performant, but because it does a shallow clone all the complex objects that were properties of this object, such as Dictionaries etc, were not cloned. The object would still contain the same reference as the object that was in the cache, therefore the properties would be the same upon comparison.
So, does anyone have an efficient way of doing a deep clone on C# objects that would cover cloning the entire object graph? | You're not going to be able to get much better than your generic binary serialization without explicitly implementing ICloneable on all your data objects that need to be cloned. Another possible route is reflection, but you won't be happy with it either if you are searching for performance.
I would consider taking the hit with ICloneable for deep copy and/or IComparable for comparing if the objects are different ... if the performance is that big of an issue for you. | Maybe you should not deep clone then?
Other options:
1) Make your "cached" object remember its original state and make *it* update its "changed" flag every time anything changes.
2) Do not remember original state and just flag object as dirty once anything has changed ever. Then reload object from the original source to compare. I bet your objects change less frequently than don't change, and even less frequently change back to the same value. | Efficient cloning of cached objects | [
"",
"c#",
"performance",
"clone",
"deep-copy",
""
] |
How do I check if an object given to me is an `int[]` in Java? | The way you'd expect:
```
if (theObject instanceof int[]) {
// use it!
}
```
Arrays are `Objects`, even if they're arrays of primitives. | ```
if (o instanceof int[])
{
...
}
```
Arrays are Objects in Java. | How do I check if an object given to me is an int [] in Java? | [
"",
"java",
""
] |
I'm using the newest version of Blowfish.NET, but there is one problem.
```
responce = new byte[6]
{
0x00, 0x80 ,0x01, 0x61, 0x00, 0x00
};
byte[] encrypted = new byte[responce.Length];
blowfish.Encrypt(responce, 2, encrypted, 2, input.Length - 2);
```
I called it the right way; I want it to start reading/writing from the third byte, and the length is 6 - 2 because I don't use the first two bytes.
The problem:
```
public int Encrypt(
byte[] dataIn,
int posIn,
byte[] dataOut,
int posOut,
int count)
{
uint[] sbox1 = this.sbox1;
uint[] sbox2 = this.sbox2;
uint[] sbox3 = this.sbox3;
uint[] sbox4 = this.sbox4;
uint[] pbox = this.pbox;
uint pbox00 = pbox[ 0];
uint pbox01 = pbox[ 1];
uint pbox02 = pbox[ 2];
uint pbox03 = pbox[ 3];
uint pbox04 = pbox[ 4];
uint pbox05 = pbox[ 5];
uint pbox06 = pbox[ 6];
uint pbox07 = pbox[ 7];
uint pbox08 = pbox[ 8];
uint pbox09 = pbox[ 9];
uint pbox10 = pbox[10];
uint pbox11 = pbox[11];
uint pbox12 = pbox[12];
uint pbox13 = pbox[13];
uint pbox14 = pbox[14];
uint pbox15 = pbox[15];
uint pbox16 = pbox[16];
uint pbox17 = pbox[17]; // till this line count is 4
count &= ~(BLOCK_SIZE - 1); //count becomes 0 after that calc :((
int end = posIn + count; // 2 + 0 = 2
while (posIn < end) //no loop :[
{
uint hi = (((uint)dataIn[posIn + 3]) << 24) |
(((uint)dataIn[posIn + 2]) << 16) |
(((uint)dataIn[posIn + 1]) << 8) |
dataIn[posIn ];
uint lo = (((uint)dataIn[posIn + 7]) << 24) |
(((uint)dataIn[posIn + 6]) << 16) |
(((uint)dataIn[posIn + 5]) << 8) |
dataIn[posIn + 4];
posIn += 8;
hi ^= pbox00;
lo ^= (((sbox1[(int)(hi >> 24)] + sbox2[(int)((hi >> 16) & 0x0ff)]) ^ sbox3[(int)((hi >> 8) & 0x0ff)]) + sbox4[(int)(hi & 0x0ff)]) ^ pbox01;
hi ^= (((sbox1[(int)(lo >> 24)] + sbox2[(int)((lo >> 16) & 0x0ff)]) ^ sbox3[(int)((lo >> 8) & 0x0ff)]) + sbox4[(int)(lo & 0x0ff)]) ^ pbox02;
lo ^= (((sbox1[(int)(hi >> 24)] + sbox2[(int)((hi >> 16) & 0x0ff)]) ^ sbox3[(int)((hi >> 8) & 0x0ff)]) + sbox4[(int)(hi & 0x0ff)]) ^ pbox03;
hi ^= (((sbox1[(int)(lo >> 24)] + sbox2[(int)((lo >> 16) & 0x0ff)]) ^ sbox3[(int)((lo >> 8) & 0x0ff)]) + sbox4[(int)(lo & 0x0ff)]) ^ pbox04;
lo ^= (((sbox1[(int)(hi >> 24)] + sbox2[(int)((hi >> 16) & 0x0ff)]) ^ sbox3[(int)((hi >> 8) & 0x0ff)]) + sbox4[(int)(hi & 0x0ff)]) ^ pbox05;
hi ^= (((sbox1[(int)(lo >> 24)] + sbox2[(int)((lo >> 16) & 0x0ff)]) ^ sbox3[(int)((lo >> 8) & 0x0ff)]) + sbox4[(int)(lo & 0x0ff)]) ^ pbox06;
lo ^= (((sbox1[(int)(hi >> 24)] + sbox2[(int)((hi >> 16) & 0x0ff)]) ^ sbox3[(int)((hi >> 8) & 0x0ff)]) + sbox4[(int)(hi & 0x0ff)]) ^ pbox07;
hi ^= (((sbox1[(int)(lo >> 24)] + sbox2[(int)((lo >> 16) & 0x0ff)]) ^ sbox3[(int)((lo >> 8) & 0x0ff)]) + sbox4[(int)(lo & 0x0ff)]) ^ pbox08;
lo ^= (((sbox1[(int)(hi >> 24)] + sbox2[(int)((hi >> 16) & 0x0ff)]) ^ sbox3[(int)((hi >> 8) & 0x0ff)]) + sbox4[(int)(hi & 0x0ff)]) ^ pbox09;
hi ^= (((sbox1[(int)(lo >> 24)] + sbox2[(int)((lo >> 16) & 0x0ff)]) ^ sbox3[(int)((lo >> 8) & 0x0ff)]) + sbox4[(int)(lo & 0x0ff)]) ^ pbox10;
lo ^= (((sbox1[(int)(hi >> 24)] + sbox2[(int)((hi >> 16) & 0x0ff)]) ^ sbox3[(int)((hi >> 8) & 0x0ff)]) + sbox4[(int)(hi & 0x0ff)]) ^ pbox11;
hi ^= (((sbox1[(int)(lo >> 24)] + sbox2[(int)((lo >> 16) & 0x0ff)]) ^ sbox3[(int)((lo >> 8) & 0x0ff)]) + sbox4[(int)(lo & 0x0ff)]) ^ pbox12;
lo ^= (((sbox1[(int)(hi >> 24)] + sbox2[(int)((hi >> 16) & 0x0ff)]) ^ sbox3[(int)((hi >> 8) & 0x0ff)]) + sbox4[(int)(hi & 0x0ff)]) ^ pbox13;
hi ^= (((sbox1[(int)(lo >> 24)] + sbox2[(int)((lo >> 16) & 0x0ff)]) ^ sbox3[(int)((lo >> 8) & 0x0ff)]) + sbox4[(int)(lo & 0x0ff)]) ^ pbox14;
lo ^= (((sbox1[(int)(hi >> 24)] + sbox2[(int)((hi >> 16) & 0x0ff)]) ^ sbox3[(int)((hi >> 8) & 0x0ff)]) + sbox4[(int)(hi & 0x0ff)]) ^ pbox15;
hi ^= (((sbox1[(int)(lo >> 24)] + sbox2[(int)((lo >> 16) & 0x0ff)]) ^ sbox3[(int)((lo >> 8) & 0x0ff)]) + sbox4[(int)(lo & 0x0ff)]) ^ pbox16;
lo ^= pbox17;
dataOut[posOut + 3] = (byte)(lo >> 24);
dataOut[posOut + 2] = (byte)(lo >> 16);
dataOut[posOut + 1] = (byte)(lo >> 8);
dataOut[posOut ] = (byte) lo;
dataOut[posOut + 7] = (byte)(hi >> 24);
dataOut[posOut + 6] = (byte)(hi >> 16);
dataOut[posOut + 5] = (byte)(hi >> 8);
dataOut[posOut + 4] = (byte) hi;
posOut += 8;
}
return count;
}
```
The Blowfish works fine if I pass longer data, but I need to encrypt this one. We've found the problem, but my question is: how do I fix it?! | I'm not super familiar with the Blowfish algorithm or, for that matter, the library you're using. But, in general, I'd say that if you're having issues with your data not being long enough to encrypt, you could pad it. In other words... If your data isn't long enough, add a bunch of zeros (or some other character if zeros hold special meaning to you) to the beginning or end of the data before you encrypt it. Then, when you decrypt it, trim the padding characters. | You're trying to use a block cipher on something which is not exactly one block in size. If your data is not going to be exactly 8 bytes long, you should be using an existing padding algorithm along with a well-defined mode. That said, why are you implementing your own Blowfish function instead of using a pre-made library? | C#: Blowfish doesn't work with fewer letters | [
"",
"c#",
"blowfish",
""
] |
I was trying to answer the question mentioned [here](https://stackoverflow.com/questions/736982/c-polymorphism-not-supported-for-pointer-to-pointer) by passing the reference to the pointer instead of pointer to pointer like this:
```
class Parent
{
};
class Child : public Parent
{
};
void RemoveObj(Parent*& pObj)
{
delete pObj;
pObj = NULL;
}
int main()
{
Parent* pPObj = new Parent;
Child* pCObj = new Child;
pPObj = new Parent();
pCObj = new Child();
RemoveObj(pPObj);
RemoveObj(pCObj); // This is line 32
return 1;
}
```
But this produces the following compiler error at line 32:
> error C2664: 'RemoveObj' : cannot
> convert parameter 1 from 'Child \*' to
> 'Parent \*&'
I agree that conversion from Child\*\* to Parent\*\* is not allowed. But why this conversion is also not allowed? | An object of type `Child*` cannot be bound to a `Parent*&` for exactly the same reason that a `Child**` cannot be converted to a `Parent**`. Allowing it would allow the programmer (intentionally or not) to break type safety without a cast.
```
class Animal {};
class DangerousShark : public Animal {};
class CuteKitten : public Animal {};
void f(Animal*& animalPtrRef, Animal* anotherAnimalPtr)
{
animalPtrRef = anotherAnimalPtr;
}
void g()
{
DangerousShark myPet;
CuteKitten* harmlessPetPtr;
f(harmlessPetPtr, &myPet); // Fortunately, an illegal function call.
}
```
**Edit**
I think that some of the confusion arises because of the loose use of the words 'convert' and 'conversion'.
References can't be rebound, unlike objects which can be reassigned, so in the context of references when we speak of conversion we can only be concerned about initializing a new reference.
References are always bound to an object, and from the OP's question it was clear that he is aiming to get a reference that is a direct bind to an existing object. This is only allowed if the object used to initialize the reference is *reference-compatible* with the type of the reference. Essentially, this is only if the types are the same, or the type of the object is derived from the type of the reference and the reference type is at least as cv-qualified as the initializing object. In particular, pointers to different types are not reference-compatible, regardless of the relationship of the pointed-to types.
In other cases, a reference can be initialized with something that can be converted to the reference type. In these cases, though, the reference must be const and not volatile and the conversion will create a temporary and the reference will be bound to this temporary and not the original object. As pointed out, this is not suitable for the requirements of OP's motivating example.
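Both halves of that can be checked directly; here is a sketch with invented type names (a `Child` binds straight to a `Parent&`, while a `Child*` initializing a `Parent* const&` binds the reference to a temporary `Parent*`, not to the original pointer variable):

```
struct Parent { virtual ~Parent() {} };
struct Child : Parent {};

// A Parent& bound to a Child refers to the very same object.
bool bindsDirectly(Child& c) {
    Parent& p = c;
    return &p == static_cast<Parent*>(&c);
}

// A Parent* const& initialized from a Child* binds to a temporary Parent*,
// which is a distinct object from the original pointer variable.
bool bindsToTemporary(Child* cp) {
    Parent* const& ref = cp;
    return static_cast<const void*>(&ref) != static_cast<const void*>(&cp);
}
```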
In summary, a `Child` can be bound directly to a `Parent&` but a `Child*` cannot be directly bound to a `Parent*&`. A `Parent* const&` can be initialized with a `Child*`, but the reference will actually bind to a temporary `Parent*` object copy-initialized from the `Child*` object. | * Your classes don't have a virtual function. See [FAQ 20.7](http://www.parashift.com/c++-faq-lite/virtual-functions.html#faq-20.7)
* Because `Parent *&` is a reference to a pointer to a `Parent` object. You are passing a pointer to a `Child` -- these are incompatible types. You can bind a temporary to a const reference, i.e. if you change your parameter to:
`void RemoveObj(Parent* const& foo);`
But then you won't be able to do much with this.
> It was just a test code so I didn't make any virtual destructors. If I understand correctly in the second call of RemoveObj(), I get a temporary Parent\* object which can be passed as a const reference to the function. Is this correct?
I strongly suggest you run the following program in standard C++98 mode, once as is and then again after you have commented out `foo(b)` and uncommented `delete b`. Next, try putting in a `virtual` before `~s()`. The differences should be self-explanatory!
```
#include <iostream>
using namespace std;
struct s {
s() {cout << __func__ << endl; }
~s() {cout << __func__ << endl; }
};
struct t : s {
t() {cout << __func__ << endl; }
~t() {cout << __func__ << endl; }
};
void foo(s* const& x) { delete x; }
int main() {
t* b = new t;
foo(b);
//delete b;
}
``` | Converting from Derived* to Base*& | [
"",
"c++",
"polymorphism",
"compilation",
""
] |
I can't believe it: this works in my other application, but in this one, which has a similar structure, it doesn't!
```
public string ListAdd
{
set
{
if (listView1.InvokeRequired)
{
this.Invoke((MethodInvoker)delegate
{
listView1.Items.Add(value);
});
}
else
{
listView1.Items.Add(value);
}
}
}
```
The way I call it:
```
var formz = Form.ActiveForm as Form1;
if (formz != null)
formz.ListAdd = "asdasdasdasd";
```
If I call it from the function where I receive a packet over TCP (I want to put it on the listView), nothing happens, but if I call it in that same class where I initialize WinSock, it works.
Where could the problem be?
**EDIT:**
I just debugged the application: formz is null at the place I call it (the receive function).
Why is it null there, but not at any other place? | I do not think it will solve your problem, but have you thought about using the following pattern to do the invoke? I consider it much more readable. Create this extension method.
```
public static class ControlExtension
{
public static void ThreadSafeInvoke(this Control control, MethodInvoker method)
{
if (control != null)
{
if (control.InvokeRequired)
{
control.Invoke(method);
}
else
{
method.Invoke();
}
}
}
}
```
And then you can perform thread safe method calls like this.
```
Form form = new Form();
form.ThreadSafeInvoke(() => form.Text = "ThreadSafeInvoke");
```
Or multiple calls at once.
```
form.ThreadSafeInvoke(() =>
{
form.Text = "ThreadSafeInvoke";
form.Visible = true;
form.WindowState = FormWindowState.Maximized;
});
```
**UPDATE**
So the problem is clearly `Form.ActiveForm` returning `null`.
1. There is no active form at the moment of the call.
2. The thread has no permission to get the active form - [MSDN](http://msdn.microsoft.com/en-us/library/system.windows.forms.form.activeform(loband).aspx) states that [UIPermission](http://msdn.microsoft.com/en-us/library/system.security.permissions.uipermission.aspx) is required. | If ActiveForm is returning null then you might not have an active form or it is not of type Form1. You are using "as Form1", so if you have a Form2 which is active then formz will be set to null.
Can you pass formz into the function instead of calling ActiveForm? | Invoke from different thread | [
"",
"c#",
"winforms",
"invoke",
""
] |
What would be the most elegant way to implement a Win32 equivalent of WaitForMultipleObjects in Java (v6). A thread is sleeping until one of several events occurs. When that happens, I want to process it and get back to sleep. No data is required, just an event. | It really depends what you want to do with it, but you could do something as simple as using the wait/notify methods or you could use the structures in the java.util.concurrent package. The latter would personally be my choice. You could easily set up a BlockingQueue that you could have producers drop event objects into and consumers blocking on removing the events.
```
// somewhere out there
public enum Events {
TERMINATE, DO_SOMETHING, BAKE_SOMETHING
}
// inside consumer (assuming queue is a BlockingQueue<Events>)
Events e;
while( (e = queue.take()) != Events.TERMINATE ) {
    switch(e) {
        case DO_SOMETHING:
            // blah blah
            break;
    }
}
// somewhere else in another thread
Events e = Events.BAKE_SOMETHING;
if( queue.offer(e) )
// the queue gladly accepted our BAKE_SOMETHING event!
else
// oops! we could block with put() if we want...
``` | You can use the **CountDownLatch** object provided by the **java.util.concurrent** package
<http://rajendersaini.wordpress.com/2012/05/10/waitformultipleobject-in-java/> | WaitForMultipleObjects in Java | [
"",
"java",
"concurrency",
""
] |
Can someone please provide an example of creating a Java `ArrayList` and `HashMap` on the fly? So instead of doing an `add()` or `put()`, actually supplying the seed data for the array/hash at the class instantiation?
To provide an example, something similar to PHP for instance:
```
$array = array (3, 1, 2);
$assoc_array = array( 'key' => 'value' );
``` | ```
List<String> list = new ArrayList<String>() {
{
add("value1");
add("value2");
}
};
Map<String,String> map = new HashMap<String,String>() {
{
put("key1", "value1");
put("key2", "value2");
}
};
``` | A nice way of doing this is using `List.of()` and `Map.of()` (both available since Java 9):
```
List<String> list = List.of("A", "B", "C");
Map<Integer, String> map = Map.of(1, "A",
2, "B",
3, "C");
```
Java 8 and earlier may use [Google Collections (now Guava)](http://code.google.com/p/google-collections/):
```
List<String> list = ImmutableList.of("A", "B", "C");
Map<Integer, String> map = ImmutableMap.of(
1, "A",
2, "B",
3, "C");
``` | Java ArrayList and HashMap on-the-fly | [
"",
"java",
"collections",
""
] |
So I've got a ServiceReference added to a C# Console Application which calls a Web Service that is exposed from Oracle.
I've got everything set up and it works like peaches when it's not using SSL (http). I'm trying to set it up using SSL now, and I'm running into issues with adding it to the Service References (or even Web References). For example, the URL (https) that the service is being exposed on isn't returning the appropriate web methods when I try to add it into Visual Studio.
> The underlying connection was closed: An unexpected error occurred on a send.
> Received an unexpected EOF or 0 bytes from the transport stream.
> Metadata contains a reference that cannot be resolved: '<https://srs204.mywebsite.ca:7776/SomeDirectory/MyWebService?WSDL>'
Another quandary I've got is in regards to certificate management and deployment. I've got about 1000 external client sites that will need to use this little utility and they'll need the certificate installed in the appropriate cert store in order to connect to the Web Service. Not sure on the best approach to handling this. Do they need to be in the root store?
I've spent quite a few hours on the web looking over various options but can't get a good clean answer anywhere.
To summarize, I've got a couple of questions here:
1) Anybody have some good links on setting up Web Services in Visual Studio that use SSL?
2) How should I register the certificate? Which store should it exist in? Can I just use something like CertMgr to register it?
There's gotta be a good book/tutorial/whatever that will show me common good practices on setting something like this up. I just can't seem to find it! | Well, I've figured this out. It took me far longer than I care to talk about, but I wanted to share my solution since it's a HUGE pet peeve of mine to see the standard. "Oh I fixed it! Thanks!" posts that leave everyone hanging on what actually happened.
So.
The root problem was that by default Visual Studio 2008 uses TLS for the SSL handshake and the Oracle/Java based Webservice that I was trying to connect to was using SSL3.
When you use the "Add Service Reference..." in Visual Studio 2008, you have [no way to specify that the security protocol](http://social.msdn.microsoft.com/Forums/en-US/asmxandxml/thread/37c376a3-f50c-4b57-a1df-83dc43fcddbf) for the service point manager should be SSL3.
Unless.
You take a static WSDL document and [use wsdl.exe to generate a proxy class](http://gsraj.tripod.com/dotnet/webservices/webservice_csharp_client.html).
```
wsdl /l:CS /protocol:SOAP /namespace:MyNamespace MyWebService.wsdl
```
Then you can use the [C Sharp Compiler](http://msdn.microsoft.com/en-us/library/1700bbwd(VS.80).aspx) to turn that proxy class into a library (.dll) and add it to your .Net projects "References".
```
csc /t:library /r:System.Web.Services.dll /r:System.Xml.dll MyWebService.cs
```
At this point you also need to make sure that you've included System.Web.Services in your "References" as well.
Now you should be able to call your web service without an issue in the code. To make it **work** you're going to need one magic line of code added before you instantiate the service.
```
// We're using SSL here and not TLS. Without this line, nothing workie.
ServicePointManager.SecurityProtocol = SecurityProtocolType.Ssl3;
```
Okay, so I was feeling pretty impressed with myself as testing was great on my dev box. Then I deployed to another client box and it wouldn't connect again due to a permissions/authority issue. This smelled like certificates to me (whatever they smell like). To resolve this, I [used certmgr.exe](http://msdn.microsoft.com/en-us/library/aa388161(VS.85).aspx) to register the certificate for the site to the Trusted Root on the Local Machine.
```
certmgr -add -c "c:\someDir\yourCert.cer" -s -r localMachine root
```
This allows me to distribute the certificate to our client sites and install it automatically for the users. I'm still not sure on how "security friendly" the different versions of windows will be in regards to automated certificate registrations like this one, but it's worked great so far.
Hope this answer helps some folks. Thanks to blowdart too for all of your help on this one and providing some insight. | It sounds like the web service is using a self-signed certificate. Frankly, this isn't the best approach.
Assuming you're a large organisation and it's internal you can setup your own trusted certificate authority, this is especially easy with [Active Directory](http://www.microsoft.com/windowsserver2003/technologies/pki/default.mspx). From that CA the server hosting the Oracle service could request a certificate and you can use AD policy to trust your internal CA's root certificate by placing it in the trusted root of the machine store. This would remove the need to manually trust or accept the certificate on the web service.
If the client machines are external then you're going to have to get the folks exposing the service to either purchase a "real" certificate from one of the well known CAs like Verisign, Thawte, GeoTrust etc. or as part of your install bundle the public certificate and install it into Trusted Root certificate authorities at the machine level on every machine. This has problems, for example no way to revoke the certificate, but will remove the prompt. | Implement a C# Client that uses WebServices over SSL? | [
"",
"c#",
"visual-studio",
"web-services",
"ssl",
"console-application",
""
] |
I have a class that contains a collection. I want to provided a method or property that returns the contents of the collection. It's ok if calling classes can modify the individual objects but I do not want them adding or removing object from the actual collection. I have been copying all the objects to a new list, but now I'm thinking that I could just return the list as IEnumerable<>.
In the simplified example below is GetListC the best way to return a read only version of a collection?
```
public class MyClass
{
private List<string> mylist;
public MyClass()
{
mylist = new List<string>();
}
public void Add(string toAdd)
{
mylist.Add(toAdd);
}
//Returns the list directly
public List<String> GetListA
{
get
{
return mylist;
}
}
//returns a copy of the list
public List<String> GetListB
{
get
{
List<string> returnList = new List<string>();
foreach (string st in this.mylist)
{
returnList.Add(st);
}
return returnList;
}
}
//Returns the list as IEnumerable
public IEnumerable<string> GetListC
{
get
{
return this.mylist.AsEnumerable<String>();
}
}
}
``` | You can use [`List(T).AsReadOnly()`](http://msdn.microsoft.com/en-us/library/e78dcd75.aspx):
```
return this.mylist.AsReadOnly()
```
which will return a [`ReadOnlyCollection`](http://msdn.microsoft.com/en-us/library/ms132474.aspx). | Just use ReadOnlyCollection class, it is supported since .NET 2.0 | How to return a readonly copy of a collection | [
"",
"c#",
"collections",
"properties",
"readonly",
""
] |
```
new Date("05-MAY-09 03.55.50")
```
Is anything wrong with this? I am getting an `IllegalArgumentException`. | Don't use this method, as it has been deprecated for years (and years). Use DateFormat.parse(), with which you can easily define the format you want to parse. | That [Date constructor is deprecated](http://java.sun.com/javase/6/docs/api/java/util/Date.html#Date(java.lang.String)) for a reason.
You should be using a [DateFormat](http://java.sun.com/javase/6/docs/api/java/text/DateFormat.html) / [SimpleDateFormat](http://java.sun.com/javase/6/docs/api/java/text/SimpleDateFormat.html) instead to create Date instances from a String representation.
```
DateFormat df = new SimpleDateFormat("dd-MMM-yy hh.mm.ss");
Date myDate = df.parse("05-MAY-09 03.55.50"); // parse() throws the checked ParseException
```
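A self-contained version of the same idea (note that `MMM` month names are locale-sensitive, so the locale is pinned; class name is mine):

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;

public class ParseDemo {
    public static Date parse(String s) throws ParseException {
        // pin the locale so "MAY" resolves regardless of the default locale
        SimpleDateFormat df = new SimpleDateFormat("dd-MMM-yy hh.mm.ss", Locale.ENGLISH);
        return df.parse(s);
    }

    public static void main(String[] args) throws ParseException {
        System.out.println(parse("05-MAY-09 03.55.50"));
    }
}
```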
This way, you can parse dates that are in just about any format you could conceivably want. | new Date("05-MAY-09 03.55.50"): is anything wrong with this? I am getting IllegalArgumentException | [
"",
"java",
""
] |
I have been trying to write a small app with its own option windows. When I try to launch the window I can never seem to set focus on the new form. This is not an MDI form, but merely a new form that I create when a user selects an option from the menu. It should be noted that the call to Focus() is returning false, which means that the new form never receives focus.
I have tried multiple methods for loading the form and all have failed:
**From Calling Form:**
```
ServerForm SF = new ServerForm(ref DataLoader, false);
SF.Show();
SF.Focus();
// Fails
```
**Inside the form itself:**
```
this.Show();
this.BringToFront();
this.Activate();
this.TopMost = true;
// Fails
```
**Setting Form to selectable:**
```
this.SetStyle(System.Windows.Forms.ControlStyles.Selectable, true);
...
ServerForm SF = new ServerForm(ref DataLoader, false);
SF.Show();
SF.Focus();
// Fails
```
**Using Old API:**
```
[System.Runtime.InteropServices.DllImport("user32.dll")]
private static extern int ShowWindow(IntPtr handle, int nCmdShow);
ServerForm SF = new ServerForm(ref DataLoader, false);
ShowWindow(SF.Handle, 3);
SF.Show();
SF.Focus();
// Fails
```
**Passing in Parent**
```
ServerForm SF = new ServerForm(ref DataLoader, false);
SF.Show(this);
SF.Focus();
// Fails
```
In all of these cases the form will show up, but the form that spawned still will have focus over the new form. This happens even when I disable the old form before I create the new form.
Any suggestions? | This seems to work. First I create the new form:
```
private void changeDefaultServerToolStripMenuItem_Click(object sender, EventArgs e)
{
this.Enabled = false;
ServerForm SF = new ServerForm(ref DataLoader, true);
}
```
Then in the constructor for the new form I do the following:
```
this.BringToFront();
this.CenterToParent();
this.TopMost = true;
this.ShowDialog();
```
Apparently there is some sort of behind-the-scenes difference between Form.Show and Form.ShowDialog. Not quite sure what it is; I can only think it has to do with setting the active parent somehow. Adding code after the call to construct the form does not seem to give focus back to the parent form. Which it shouldn't. | It's because `Form.CanFocus` is false when the form loads. Use `Form.Activate()` in the `Form.Shown` event. That's all.
```
private void ServerForm_Shown(object sender, EventArgs e)
{
this.Activate();
}
``` | C# New form never gains focus | [
"",
"c#",
"forms",
"focus",
""
] |
Currently I am creating an extension method that accepts parameters. Using the example below, how could one convert this to use lambda expressions?
```
public static decimal ChangePercentage(this IEnumerable<Trade> trades, DateTime startDate, DateTime endDate)
{
var query = from trade in trades
where trade.TradeTime >= startDate
where trade.TradeTime <= endDate
orderby trade.TradeTime descending
select trade;
return (query.First().Value - query.Last().Value) / query.First().Value * 100;
}
```
What are the pro/cons using lambda vs normal method parameters?
Thanks | One way you could change the sample to use lambda expressions is to use a filter.
```
public static decimal ChangePercentage(this IEnumerable<Trade> trades,
Func<Trade,bool> pred)
{
var query = from trade in trades
                where pred(trade)
orderby trade.TradeTime descending
select trade;
return (query.First().Value - query.Last().Value) / query.First().Value * 100;
}
someTrades.ChangePercentage(x => x.TradeTime >= startDate && x.TradeTime <= endDate);
```
The biggest pro this gives you is flexibility. Instead of having a method that only does date-based filtering for the calculation, you have a method with a flexible filter for calculating percentages. | Did you want to replace the `startDate` and `endDate` parameters with a single lambda expression?
```
public static decimal ChangePercentage(this IEnumerable<Trade> trades, DateTime startDate, DateTime endDate)
{
return trades.ChangePercentage(trade => trade.TradeTime >= startDate
&& trade.TradeTime <= endDate);
}
public static decimal ChangePercentage(this IEnumerable<Trade> trades, Func<Trade, bool> filter)
{
var query = from trade in trades
where filter(trade)
orderby trade.TradeTime descending
select trade;
return (query.First().Value - query.Last().Value) / query.First().Value * 100;
}
``` | How to create extension methods with lambda expressions | [
"",
"c#",
"lambda",
""
] |
Session.Abandon() doesn't seem to do anything. You would expect the Session\_end event to fire when Session.Abandon() is called. | This is most likely because your `SessionMode` is *not* `InProc` (the only one that can detect when a session ends).
Quoted from [MSDN](http://msdn.microsoft.com/en-us/library/ms178581.aspx):
> **Session Events**
>
> ASP.NET provides two events that help
> you manage user sessions. The
> Session\_OnStart event is raised when a
> new session starts, and the
> Session\_OnEnd event is raised when a
> session is abandoned or expires.
> Session events are specified in the
> Global.asax file for an ASP.NET
> application.
>
> The Session\_OnEnd event is not
> supported if the session Mode property
> is set to a value other than InProc,
> which is the default mode. | Session.Abandon() is the way to end a session. What is the problem you are encountering?
If it's Back button related, that is a completely different issue (the page doesn't post back on Back; instead it is served from the client-side cache, so no server-side methods will execute).
Additionally, Session\_End is problematic. It will only fire on Session.Abandon() when using InProc sessions, so if you are using a different Session mode, it cannot be relied on. Otherwise, Session\_End will fire when SessionTimeout is reached ( default is 20 minutes I believe, configured in Web.Config ). | How do you programmatically end a session in asp.net when Session.Abandon() doesn't work? | [
"",
"c#",
"asp.net",
"authentication",
"session",
"membership",
""
] |
I recently ran across the notion of Identity Based Encryption (IBE) which seems like a novel idea. However, I haven't noticed many in the cryptography community attempting to find ways to break it. Am I wrong?
Likewise, I am of the belief that unless you can actually distribute open source implementations where the blackhat crowd can attack it, that it may not have merit?
I guess I would like to understand the experiences of the community-at-large in using this approach and how easy it is to incorporate into your application and distribute?
(Edit: here's a [Wikipedia article](http://en.wikipedia.org/wiki/Identity_based_encryption) on ID based encryption.) | I'm not clear on what you're trying to ask, so I'm going to make up a couple of things and answer them. Let me know if I'm getting warm.
First, "identity based encryption" isn't really an encryption scheme so much as a key management scheme. In any public/private — or, technically, "asymmetric" — encryption, there are two keys. One of them is used to encrypt, one to decrypt, and they have the special property that if you know one of them, it's still exponentially hard (or thought to be exponentially hard) to construct the other one. So, I can for example encrypt a letter to you using my private key; I publish my public key. If you can decrypt the letter using the public key, you have assurance that I was the one who really sent it. This is the core idea of digital signing schemes.
The problem is that you have to have a way of generating and managing those keys, and that turns out to be hard, because the scheme is only as good as the protection you have on your private key. There are a number of methods for doing this.
ID based encryption attempts to simplify this [key management](http://en.wikipedia.org/wiki/Key_management) problem by proposing special algorithms that construct private keys from a known public piece of information: say, an email address. To do this in a way that still makes it hard to figure out the private side, you need to have a trusted entity who constructs the private key based on some other secret they know. So, to establish your communications with someone who knows your email address, you go to the trusted provider and ask for the private key to be generated. The person you want to communicate with knows what provider you use, and gets a *master public key* from that provider.
Now, the person you want to send to can then generate the public side from your ID without knowing anything except some master key information from the provider; the key is never transmitted directly.
In other words, it looks like this: Alice wants to send email to Bob that's encrypted. They both trust a provider, Tom.
* Alice sends a request to Tom with her email address "alice@example.com", and gets back a private key *P*. There is a corresponding public key *p*, but *Tom doesn't send that to anyone.*
* Bob sends a request to Tom and gets Tom's *master public key* *m*.
* Alice encrypts her message "x" with her private key, giving {"x"}*P*. (That notation is just "message "x" "wrapped" or encryption with key *P*.) Alice then sends that message to Bob.
* Bob uses his knowledge of Alice's email address and Tom's master key *m*, and computes *p = f*("alice@example.com", *m*). Now he applies the decryption decrypt({"x"}*P*, *p*) and poof, out comes Alice's message.
The thing about these schemes is that it simplifies some key management issues, but only somewhat. You still need to generate the keys, and what's worse, you have to **really** trust Tom, because he knows everything: he can generate your private key, and encrypt with it, making any message look like it came from you. What this means is that it creates an inherent key escrow scheme; your private key can be found out.
Some ways this is good; it handles the problem of lost keys. But for most reasons people want to use encryption, it's bad. Now someone can subpoena Tom, or just beat him up, and get at your private data.
The result is that ID based encryption alone is a nifty idea, but hasn't got much of a market. | Identity based encryption would be difficult to pull off in an open source project, especially the kind that's not just free as in freedom, but free as in beer. As has been mentioned, the whole system relies on a trusted third party to issue keys. This takes infrastructure, both hardware and software, that is expensive to purchase and maintain. Additionally, it puts a lot of responsibility on the party that's doing the key issuing. People who use the system will expect the issuer to take responsibility when things go wrong (and they will); this kind of responsibility is not cheap, and is often infeasible for an open source project to take on. | Identity-Based Encryption and Open Source | [
"",
"java",
"encryption",
"cryptography",
"aes",
"jce",
""
] |
I have a user control which takes a Func which it then gives to the Linq "Where" extension method of a IQueryable. The idea is that from the calling code, I can pass in the desired search function.
I'd like to build this search function dynamically as such:
```
Func<Order, bool> func = a => true;
if (txtName.Text.Length > 0) {
//add it to the function
func = a => func(a) && a.Name.StartsWith(txtName.Text);
}
if (txtType.Text.Length > 0) {
//add it to the function
func = a => func(a) && a.Type == txtType.Text;
}
..... etc .....
```
The problem with this approach is that since I'm reusing the name "func" it creates a recursive function.
Is there an easy way to build out the expression tree like this to make a dynamic where clause (in the absence of having the IQueryable up front and repeatedly calling "Where")? | Just save the current lambda in a temporary variable to prevent recursion.
```
var tempFunc = func;
func = a => tempFunc(a) && ...
``` | If you want to do an "and" combination, the preferred option is to use multiple "where" clauses:
```
IQueryable<Order> query = ...
if (!string.IsNullOrEmpty(txtName.Text)) {
//add it to the function
query = query.Where(a => a.Name.StartsWith(txtName.Text));
}
if (!string.IsNullOrEmpty(txtType.Text)) {
//add it to the function
query = query.Where(a => a.Type == txtType.Text);
}
```
You can do more complex things with expression building (AndAlso, Invoke, etc), but this is not necessary for an "and" combination.
If you really need to combine expressions, then the approach depends on the implementation. LINQ-to-SQL and LINQ-to-Objects support Expression.Invoke, allowing:
```
static Expression<Func<T, bool>> OrElse<T>(
this Expression<Func<T, bool>> lhs,
Expression<Func<T, bool>> rhs)
{
var row = Expression.Parameter(typeof(T), "row");
var body = Expression.OrElse(
Expression.Invoke(lhs, row),
Expression.Invoke(rhs, row));
return Expression.Lambda<Func<T, bool>>(body, row);
}
static Expression<Func<T, bool>> AndAlso<T>(
this Expression<Func<T, bool>> lhs,
Expression<Func<T, bool>> rhs)
{
var row = Expression.Parameter(typeof(T), "row");
var body = Expression.AndAlso(
Expression.Invoke(lhs, row),
Expression.Invoke(rhs, row));
return Expression.Lambda<Func<T, bool>>(body, row);
}
```
However, for Entity Framework you will usually need to rip the Expression apart and rebuild it, which is *not* easy. Hence why it is often preferable to use `Queryable.Where` (for "and") and `Queryable.Concat` (for "or"). | Is there an easy way to append lambdas and reuse the lambda name in order to create my Linq where condition? | [
"",
"c#",
".net",
"linq",
"linq-to-sql",
"lambda",
""
] |
Is there a general, cross RDMS, way I can have a key auto generated on a JDBC insert? For example if I have a table with a primary key, id, and an int value:
```
create table test (
id int not null,
myNum int null
)
```
and do an insert
```
PreparedStatement statement = connection.prepareStatement("insert into test(myNum) values(?)", Statement.RETURN_GENERATED_KEYS);
statement.setInt(1, 555);
statement.executeUpdate();
statement.close();
```
I get an java.sql.SQLException: Cannot insert the value NULL into column 'id'.
I have a feeling this is entirely RDMS dependent. We are using using SQL Server 2005 and I have set
```
CONSTRAINT [PK_test] PRIMARY KEY CLUSTERED
(
[id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 1) ON [PRIMARY]
```
in the table with no luck. | This is completely database dependent. There are two main options: 1 - DBMSs that allow an auto-increment keyword next to the primary key definition and 2 - DBMSs that provide sequence generators (that you then can use to generate the new values for the PK, for instance by writing a "before insert" trigger that automatically inserts the new value in the column before completing the insertion ).
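For completeness, once the column is auto-generated on the database side, the JDBC call in the question works unchanged; a sketch of reading the generated key back (helper names are mine, and it of course needs a live `Connection` to a suitably defined table):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class InsertWithKey {
    public static final String SQL = "insert into test(myNum) values(?)";

    // insert a row and return the key the database generated for it
    public static long insert(Connection conn, int myNum) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(SQL, Statement.RETURN_GENERATED_KEYS)) {
            ps.setInt(1, myNum);
            ps.executeUpdate();
            try (ResultSet keys = ps.getGeneratedKeys()) {
                keys.next();
                return keys.getLong(1);
            }
        }
    }
}
```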
As far as I know:
1. Firebird uses sequences
2. DB2 allows a column to be defined as "GENERATED BY DEFAULT AS IDENTITY";
3. Interbase uses sequences (called generators)
4. MySQL has the "AUTO\_INCREMENT" clause
5. Oracle uses sequences
6. PostgreSQL uses sequences
7. SQLServer has the "IDENTITY(1,1)" clause | You need to set the id column in the test table to autocreate an identity. In the case of SQL Server, you need to set the `IDENTITY()` property on the ID column. | Auto generate key on JDBC insert in SQL Server | [
"",
"java",
"sql-server",
"database",
"jdbc",
"rdbms",
""
] |
How is it possible to search for a byte[] array in the memory of another process and then get the address at the place where the byte[] array is located?
I want to write a byte array into the memory of another process (WriteProcessMemory()). One of the parameters of that call is a uint Address. I want to get that address by searching for a byte array in the process.
For example I search for {0xEB ,0x20,0x68,0x21,0x27,0x65, ??, 0x21,0x64,0xA1}
We assume that this array is placed only at one place in the memory of the process I'd like to write memory to.
To get that address,I have to search for that byte array.
Is it possible to be done in C#?
**EDIT:** **This is for native applications, NOT .NET. No need to downvote my question; there are components for C++ that do this, I just want to do it in C#.**
Thanks for understanding! | > Is it possible to be done in C#?
Yes. But very hard. It is hard even from a native application, where there is no impedance mismatch with the unmanaged view of processes and their memory maps that you will need to use.
Considerations:
* You will need permission to open the process to get a handle.
* While the virtual memory space of a 32bit process is from two to four GB in size (depending on host OS and /3GB switch), much of this address range will not be allocated, and reading it will cause a page fault. You really need to find out what pages are allocated and for what to avoid lots of invalid page accesses.
Suggestions:
* Do you really really need to do this? Seriously this will be hard.
* Consider doing a native application, this will avoid working across the native/managed fence (this could include a native library with a managed driver application).
* Do you really need to do this?
* Consider doing the work inside the target process. This will require some cleverness (documented) to inject a thread, but should then be much faster.
* Start by reading up on how Windows process memory works (start with Windows Internals and Jeffrey Richter's book on Win32 application development; I can't recall its name in the latest edition).
* Do you really need to do this? There must be something simpler... could you automated a debugger? | Is it possible to be done in C#?
Everithing is possible in c#(or any other languge), u just need to fiind how;
Hard coding here:
```
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Diagnostics;
using System.Runtime.InteropServices;
namespace ConsoleApplication1
{
class Program
{
[DllImport("kernel32.dll", SetLastError = true)]
static extern bool ReadProcessMemory(
IntPtr hProcess,
IntPtr lpBaseAddress,
[Out] byte[] lpBuffer,
int dwSize,
out int lpNumberOfBytesRead
);
[DllImport("kernel32.dll")]
public static extern IntPtr OpenProcess(int dwDesiredAccess, bool bInheritHandle, int dwProcessId);
[DllImport("kernel32.dll", SetLastError = true)]
[return: MarshalAs(UnmanagedType.Bool)]
static extern bool CloseHandle(IntPtr hObject);
static void Main(string[] args)
{
Process[] procs = Process.GetProcessesByName("explorer");
if (procs.Length <= 0) //proces not found
return; //can replace with exit nag(message)+exit;
IntPtr p = OpenProcess(0x10 | 0x20, true, procs[0].Id); //0x10-read 0x20-write
uint PTR = 0x0; //begin of memory
byte[] bit2search1 = {0xEB ,0x20,0x68,0x21,0x27,0x65}; //your bit array until ??
int k = 1; //numer of missing array (??)
byte[] bit2search2 = {0x21,0x64,0xA1};//your bit array after ??
byte[] buff = new byte[bit2search1.Length + k + bit2search2.Length]; //total pattern length (k = number of wildcard bytes)
int bytesReaded;
bool finded = false;
while (PTR != 0xFF000000) //end of memory; you can scan a smaller range if you know the target does not use it all
{
ReadProcessMemory(p, (IntPtr)PTR, buff, buff.Length, out bytesReaded);
if (SpecialByteCompare(buff, bit2search1,bit2search2,k))
{
//do your stuff
finded = true;
break;
}
PTR += 0x1;
}
if (!finded)
Console.WriteLine("sry no byte array found");
}
private static bool SpecialByteCompare(byte[] b1, byte[] b2, byte[] b3, int k) //readed memory, first byte array, second byte array, number of missing byte's
{
if (b1.Length != (b2.Length + k + b3.Length))
return false;
for (int i = 0; i < b2.Length; i++)
{
if (b1[i] != b2[i])
return false;
}
for (int i = 0; i < b3.Length; i++)
{
if (b1[b2.Length + k + i] != b3[i])
return false;
}
return true;
}
}
}
``` | C#: Search a byte[] array in another process's memory | [
"",
"c#",
"memory",
"process",
""
] |
...similar to those produced by email clients like Thunderbird or Outlook, sliding up or fading in from the tray. | The simple popup: look at <http://jtoaster.sourceforge.net/>
For the fancy stuff look at what Java 2D can do:
<http://java.sun.com/products/java-media/2D/samples/java2demo/Java2Demo.html>
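If you would rather roll your own, a rough Swing sketch of an undecorated fade-in window (assumes Java 7+'s `Window.setOpacity` and a translucency-capable desktop; class and method names are mine):

```java
import javax.swing.*;
import java.awt.*;

public class Toast {
    // build a small undecorated notification window
    public static JWindow build(String message) {
        JWindow w = new JWindow();
        JLabel label = new JLabel(message, SwingConstants.CENTER);
        label.setBorder(BorderFactory.createEmptyBorder(10, 20, 10, 20));
        w.add(label);
        w.pack();
        return w;
    }

    // fade the window in by stepping its opacity on a Swing timer
    public static void fadeIn(JWindow w) {
        w.setOpacity(0f);
        w.setVisible(true);
        new Timer(30, e -> {
            float o = Math.min(1f, w.getOpacity() + 0.05f);
            w.setOpacity(o);
            if (o >= 1f) ((Timer) e.getSource()).stop();
        }).start();
    }

    public static void main(String[] args) {
        if (GraphicsEnvironment.isHeadless()) return; // no display available
        JWindow w = build("Hello!");
        w.setLocationRelativeTo(null);
        fadeIn(w);
    }
}
```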
Check Composite -> Composite FadeAnim. | What are your notifications for? Is it for a program that already exists, or are you writing a new application?
I ask because Adobe AIR has some functionality for doing this sort of thing, either as a Flex based application, or an AIR application written in HTML/JS. But, you wouldn't use AIR unless this is something you were building from the ground up. | What is the best way to create a smooth notification window in Java? | [
"",
"java",
"user-interface",
"window",
"notifications",
""
] |
I'm using Python with pywin32's adodbapi to write a script to create a SQL Server database and all its associated tables, views, and procedures. The problem is that Python's DBAPI requires that cursor.execute() be wrapped in a transaction that is only committed by cursor.commit(), and you can't execute a drop or create database statement in a user transaction. Any ideas on how to get around that?
EDIT:
There does not seem to be anything analogous to an autocommit parameter to either the connect() method of adodbapi or its cursor() method. I'd be happy to use pymssql instead of adodbapi, except that it truncates char and varchar datatypes at 255 characters.
I did try this before posting; here's the traceback.
```
Traceback (most recent call last):
File "demo.py", line 39, in <module>
cur.execute("create database dummydatabase")
File "C:\Python26\lib\site-packages\adodbapi\adodbapi.py", line 713, in execute
self._executeHelper(operation,False,parameters)
File "C:\Python26\lib\site-packages\adodbapi\adodbapi.py", line 664, in _executeHelper
self._raiseCursorError(DatabaseError,tracebackhistory)
File "C:\Python26\lib\site-packages\adodbapi\adodbapi.py", line 474, in _raiseCursorError
eh(self.conn,self,errorclass,errorvalue)
File "C:\Python26\lib\site-packages\adodbapi\adodbapi.py", line 60, in standardErrorHandler
raise errorclass(errorvalue)
adodbapi.adodbapi.DatabaseError:
--ADODBAPI
Traceback (most recent call last):
File "C:\Python26\lib\site-packages\adodbapi\adodbapi.py", line 650, in _executeHelper
adoRetVal=self.cmd.Execute()
File "<COMObject ADODB.Command>", line 3, in Execute
File "C:\Python26\lib\site-packages\win32com\client\dynamic.py", line 258, in _ApplyTypes_
result = self._oleobj_.InvokeTypes(*(dispid, LCID, wFlags, retType, argTypes) + args)
com_error: (-2147352567, 'Exception occurred.', (0, u'Microsoft SQL Native Client', u'CREATE DATABASE statement not allowed within multi-statement transaction.', None, 0, -2147217900), None)
-- on command: "create database dummydatabase"
-- with parameters: None
``` | The adodbapi connection object `conn` does automatically start a new transaction after every commit if the database supports transactions. DB-API requires autocommit to be turned off by default and it allows an API method to turn it back on, but I don't see one in adodbapi.
You might be able to use the `conn.adoConn` property to hack around this, using the ADO api instead of DB-API to take you out of any transaction. Let me know if this works:
```
conn.adoConn.CommitTrans()
cursor.execute('CREATE DATABASE ...')
conn.adoConn.BeginTrans()
```
Here's the source for the [adodbapi commit() method](http://pywin32.cvs.sourceforge.net/viewvc/pywin32/pywin32/adodbapi/adodbapi.py?revision=1.9&view=markup#l_411). | "The problem is that Python's DBAPI requires that cursor.execute() be wrapped in a transaction that is only committed by cursor.commit()"
"and you can't execute a drop or create database statement in a user transaction."
I'm not sure all of this is actually true for all DBAPI interfaces.
Since you don't show the error messages, it may turn out that this is not true for ADODBAPI interface. Have you actually tried it? If so, what error message are you getting?
A connection may not *always* be creating a "user transaction". You can often open connections with `autocommit=True` to get DDL-style autocommit.
Also, you might want to consider using a different connection to run DDL.
<http://pymssql.sourceforge.net/> for example, shows DDL being executed like this.
```
import pymssql
conn = pymssql.connect(host='SQL01', user='user', password='password', database='mydatabase')
cur = conn.cursor()
cur.execute('CREATE TABLE persons(id INT, name VARCHAR(100))')
``` | Creating a SQL Server database from Python | [
"",
"python",
"sql-server",
"pywin32",
"adodbapi",
""
] |
Sorry if this question seems trivial to many here.
In a C++ code there is something as below:
```
class Foo
{
public:
static int bands;
...
...
private:
...
...
};//class definition ends
int Foo::bands; //Note: here it's not initialized to any value!
```
1. Why is the above statement needed again when 'bands' is once declared inside the class as static?
2. Also can a static variable be declared as a private member variable in any class? | C++ draws a distinction between *declaring* and *defining*. `bands` is declared within the class, but not defined.
A non-static data member would be defined when you define an object of that type, but since a static member is not a part of any one specific object, it needs its own definition. | a) It's needed because that's the way the language is designed.
b) Static variables are initialized by their default constructor, or to zero for built-in types.
c) Yes, they can be (and usually are) private. | Query on Static member variables of a class in C++ | [
"",
"c++",
"class",
"variables",
"static",
""
] |
I need to get a set of distinct records for a table along with the max date across all the duplicates.
ex:
```
Select distinct a,b,c, Max(OrderDate) as maxDate
From ABC
Group By a,b,c
```
The issue is I get a record back for each different date.
Ex:
```
aaa, bbb, ccc, Jan 1 2009
aaa, bbb, ccc, Jan 28 2009
```
How can I limit this so I end up with only:
```
aaa, bbb, ccc Jan 28 2009
```
I assume the issue is the group by and distinct not getting along well.
EDIT: Found the issue that was causing the problem, query results were as expected, not as above. | Something is wrong either with your query or with your example results, as what you describe shouldn't be possible. How about some actual SQL and actual results?
In any event, you don't need `distinct` there since all you're selecting are your three grouped columns and an aggregate, so you'll by definition end up with all distinct rows. I've never tried this, so perhaps there is some misbehavior when using both of those. Have you tried removing the `distinct`? What caused you to put it there? | ```
WITH q AS (
SELECT abc.*, ROW_NUMBER() OVER (PARTITION BY a, b, c ORDER BY orderDate DESC) AS rn
FROM abc
)
SELECT *
FROM q
WHERE rn = 1
```
Having an index on `(a, b, c, orderDate)` (in this order) will greatly improve this query. | Select distinct row with max date from SQL Server table? | [
"",
"sql",
"sql-server",
"t-sql",
"distinct",
""
] |
I copied and pasted this binary data out of SQL Server, which I am unable to query at this time.
```
0xBAC893CAB8B7FE03C927417A2A3F6A60BD30FF35E250011CB25507EBFCD5223B
```
How do I convert it back to a byte array in C#? | Something like this:
```
using System;
public static class Parser
{
static void Main()
{
string hex = "0xBAC893CAB8B7FE03C927417A2A3F6A6"
+ "0BD30FF35E250011CB25507EBFCD5223B";
byte[] parsed = ParseHex(hex);
// Just for confirmation...
Console.WriteLine(BitConverter.ToString(parsed));
}
public static byte[] ParseHex(string hex)
{
int offset = hex.StartsWith("0x") ? 2 : 0;
if ((hex.Length % 2) != 0)
{
throw new ArgumentException("Invalid length: " + hex.Length);
}
byte[] ret = new byte[(hex.Length-offset)/2];
for (int i=0; i < ret.Length; i++)
{
ret[i] = (byte) ((ParseNybble(hex[offset]) << 4)
| ParseNybble(hex[offset+1]));
offset += 2;
}
return ret;
}
static int ParseNybble(char c)
{
if (c >= '0' && c <= '9')
{
return c-'0';
}
if (c >= 'A' && c <= 'F')
{
return c-'A'+10;
}
if (c >= 'a' && c <= 'f')
{
return c-'a'+10;
}
throw new ArgumentException("Invalid hex digit: " + c);
}
}
```
(EDIT: Now slightly more efficient - no substrings required...)
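As an aside, on .NET 5 or later the framework can do this parsing for you, so the hand-rolled version above is mainly useful on older frameworks (this sketch assumes .NET 5+ is available):

```csharp
using System;

// Convert.FromHexString (added in .NET 5) parses a hex string straight
// into bytes; it does not accept a "0x" prefix, so strip that first.
string hex = "0xBAC893CAB8B7FE03C927417A2A3F6A60BD30FF35E250011CB25507EBFCD5223B";
string digits = hex.StartsWith("0x") ? hex.Substring(2) : hex;
byte[] parsed = Convert.FromHexString(digits);

// Just for confirmation, as in the Main method above.
Console.WriteLine(BitConverter.ToString(parsed));
```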
It's possible that `ParseNybble` could be more efficient. For example, a switch/case *may* be more efficient:
```
static int ParseNybble(char c)
{
switch (c)
{
case '0': case '1': case '2': case '3': case '4':
case '5': case '6': case '7': case '8': case '9':
return c-'0';
case 'A': case 'B': case 'C': case 'D': case 'E': case 'F':
return c-'A'+10;
case 'a': case 'b': case 'c': case 'd': case 'e': case 'f':
return c-'a'+10;
}
throw new ArgumentException("Invalid hex digit: " + c);
}
```
or possibly a lookup array:
```
// Omitted for brevity... I'm sure you get the gist
private static readonly int[] NybbleLookup = BuildLookup();
private static int ParseNybble(char c)
{
if (c > 'f')
{
throw new ArgumentException("Invalid hex digit: " + c);
}
int ret = NybbleLookup[c];
if (ret == -1)
{
throw new ArgumentException("Invalid hex digit: " + c);
}
return ret;
}
```
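The `BuildLookup` body is omitted above; one plausible implementation (an assumption on my part, since the author's version isn't shown) would be:

```csharp
using System;

// A 128-entry table mapping ASCII hex digits to their values, with -1
// marking every other character -- one plausible shape for the omitted
// BuildLookup. (This is an illustration, not the author's actual code.)
static int[] BuildLookup()
{
    int[] table = new int[128];
    for (int i = 0; i < table.Length; i++) table[i] = -1;
    for (char c = '0'; c <= '9'; c++) table[c] = c - '0';
    for (char c = 'A'; c <= 'F'; c++) table[c] = c - 'A' + 10;
    for (char c = 'a'; c <= 'f'; c++) table[c] = c - 'a' + 10;
    return table;
}

int[] lookup = BuildLookup();
Console.WriteLine(lookup['A']); // 10
Console.WriteLine(lookup['z']); // -1
```

With a table like this, the `ParseNybble` above costs only an array index and a comparison; the `c > 'f'` guard keeps the index within the 128-entry table.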
I haven't benchmarked any of these, and I've no idea which would be the fastest. The current solution is probably the simplest though. | Consider leveraging a Framework class that already exposes the ability to perform hex conversion, XmlReader for example:
```
public static byte[] HexToBytes(this string hexEncodedBytes, int start, int end)
{
int length = end - start;
const string tagName = "hex";
string fakeXmlDocument = String.Format("<{1}>{0}</{1}>",
hexEncodedBytes.Substring(start, length),
tagName);
var stream = new MemoryStream(Encoding.ASCII.GetBytes(fakeXmlDocument));
XmlReader reader = XmlReader.Create(stream, new XmlReaderSettings());
int hexLength = length / 2;
byte[] result = new byte[hexLength];
reader.ReadStartElement(tagName);
reader.ReadContentAsBinHex(result, 0, hexLength);
return result;
}
```
usage:
```
string input = "0xBAC893CAB8B7FE03C927417A2A3F6A60BD30FF35E250011CB255";
byte[] bytes = input.HexToBytes(2, input.Length);
``` | How to convert hex to a byte array? | [
"",
"c#",
"arrays",
""
] |
I've seen a lot of discussion about this subject on here.
If I have a static class w/ static methods that connects to a database or a server, is it a bad idea to use this in a multi-user environment (like a web page)? Would this make a new user's thread wait for previous users' threads to finish their calls before accepting a new one?
What would be the implications of this with multi-threading, also?
Thx! | If each static method is fully responsible for acquiring its resources and then disposing its resources within the scope of the method call (no shared state), then you shouldn't have any problem with threading that you wouldn't have using instance classes. I would suggest, however, that the bigger problem is that a reliance on public static methods (in static or non-static classes) creates many other design problems down the road.
* First of all, you're binding very tightly to an implementation, which is always bad.
* Second, testing all of the classes that depend on your static methods becomes very difficult to do, because you're locked to a single implementation.
* Third, it does become very easy to create non-thread safe methods, since the only state static methods can share is static state (which is shared across all method calls). | Static methods do not have any special behaviour with respect to multithreading. That is, you can expect several "copies" of the method running at the same time. The same goes for static variables - different threads can access them all at once, there is no waiting there. And unless you're careful, this can create chaos. | A Question About C# and Static Classes and Functions | [
"",
"c#",
"static",
"database-connection",
""
] |