| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
Here's some background on what I'm trying to do:
1. Open a serial port from a mobile device to a Bluetooth printer.
2. Send an EPL/2 form to the Bluetooth printer, so that it understands how to treat the data it is about to receive.
3. Once the form has been received, send some data to the printer which will be printed on label stock.
4. Repeat step 3 as many times as necessary for each label to be printed.
Step 2 only happens the first time, since the form does not need to precede each label. My issue is that when I send the form, if I send the label data too quickly it will not print. Sometimes I get "Bluetooth Failure: Radio Non-Operational" printed on the label instead of the data I sent.
I have found a way around the issue by doing the following:
```
for (int attempt = 0; attempt < 3; attempt++)
{
    try
    {
        serialPort.Write(labelData);
        break;
    }
    catch (TimeoutException ex)
    {
        // Log info or display info based on ex.Message
        Thread.Sleep(3000);
    }
}
```
So basically, I can catch a TimeoutException and retry the write after waiting a certain amount of time (three seconds seems to work every time, but any less and it seems to throw the exception on every attempt). After three attempts I just assume something is wrong with the serial port and let the user know.
This approach seems to work OK, but I'm sure there's a better way to handle this. There are a few properties in the SerialPort class that I think I need to use, but I can't really find any good documentation or examples of how to use them. I've tried playing around with some of the properties, but none of them seem to do what I'm trying to achieve.
Here's a list of the properties I have played with:
* CDHolding
* CtsHolding
* DsrHolding
* DtrEnable
* Handshake
* RtsEnable
I'm sure some combination of these will handle what I'm trying to do more gracefully.
I'm using C# (2.0 framework), a Zebra QL 220+ Bluetooth printer and a Windows Mobile 6 handheld device, if that makes any difference for solutions.
Any suggestions would be appreciated.
[UPDATE]
I should also note that the mobile device is using Bluetooth 2.0, whereas the printer is only at version 1.1. I'm assuming the speed difference is what's causing the printer to lag behind in receiving the data.
|
Flow control is the correct answer here, though it may not be present, implemented, or applicable on your Bluetooth connection.
Check out the Zebra specification and see whether the printer implements, or lets you turn on, software flow control (XON/XOFF), which will let you see when the various buffers are getting full.
Further, the Bluetooth radio is unlikely to be capable of transmitting faster than about 250 kbit/s. You might consider artificially limiting the serial rate to 9,600 bps; this gives the radio a lot of breathing room for retransmits, error detection and correction, and its own flow control.
If all else fails, the hack you're using right now isn't bad, but I'd call Zebra tech support and find out what they recommend before giving up.
|
Well I've found a way to do this based on the two suggestions already given. I need to set up my serial port object with the following:
```
serialPort.Handshake = Handshake.RequestToSendXOnXOff;
serialPort.WriteTimeout = 10000; // Could use a lower value here.
```
Then I just need to do the write call:
```
serialPort.Write(labelData);
```
Since the Zebra printer supports software flow control, it will send an XOff value to the mobile device when the buffer is nearly full. This causes the mobile device to wait for an XOn value to be sent from the printer, effectively notifying the mobile device that it can continue transmitting.
By setting the write timeout property, I'm giving a total time allowed for the transmission before a write timeout exception is thrown. You would still want to catch the timeout, as I did in the sample code in my question. However, it isn't necessary to loop three (or any arbitrary number of) times retrying the write, since the software flow control pauses and resumes the serial port transmission.
|
How do I force a serial port write method to wait for the line to clear before sending its data?
|
[
"c#",
"windows-mobile",
"bluetooth",
"serial-port",
"zebra-printers"
] |
I have a Visual Studio solution with four C# projects in it. I want to step into the code of a supporting project in the solution from my main project, but when I use the "Step into" key, it just skips over the call into that other project. I've set breakpoints in the supporting project, and they're ignored, and I can't for the life of me get it to step into any references to that project.
Everything is set to compile as "Debug", and Visual Studio has warned me in the past when my breakpoints wouldn't be hit - it doesn't do that in this case. It looks as though my code will debug, but then at run-time some setting somewhere tells Visual Studio not to step through the code in that project. All the other projects in my solution debug without problems.
What box have I checked to cause this behavior?
**UPDATE FOR CLARITY**: The "Just my code" option is currently disabled. Also, since the code belongs to a project in my same solution, I don't think the "Just my code" option applies here. I thought it only applied to pre-compiled code that I didn't have the source for, but since I have the source in my project, I don't think this option has any effect.
|
One thing to check is that your supporting project's assembly has not been installed in the GAC. Open a command prompt and run the following to make sure:
```
gacutil /l <assemblyName>
```
|
Not sure if this is it, but "Tools>Options>Debugging>General:Enable Just My Code" is a possibility. (I prefer to always leave this unchecked.)
|
Enable and disable "Step into" debugging on certain project in a Visual Studio solution
|
[
"c#",
"visual-studio",
"debugging",
"step-into"
] |
I'm using NuSOAP on PHP 5.2.6 and I'm seeing that the max message size is only 1000 bytes (which makes it tough to do anything meaningful). Is this set in the endpoint's WSDL or is this something I can configure in NuSOAP?
|
Regarding the FUD about a "1000 bytes limit": I looked up the `nusoap_client` source code and found that the limit only applies to **debug output**.
This means all data is processed and passed on to the webservice (regardless of its size), but only the first 1000 bytes (or more precisely: characters) are shown in the debug log.
Here's the code:
```
$this->debug('SOAP message length=' . strlen($soapmsg) . ' contents (max 1000 bytes)=' . substr($soapmsg, 0, 1000));
// send
$return = $this->send($this->getHTTPBody($soapmsg),$soapAction,$this->timeout,$this->response_timeout);
```
As you can clearly see, the `getHTTPBody()` call uses the whole `$soapmsg`, and only the debug output is limited to the first 1000 characters. If you'd like to change this, just change the `substr()` call to fit your needs, or simply replace it by `$soapmsg` (so everything is shown in the debug output, too).
This should have absolutely nothing to do with any real limit on the data actually sent. There could of course be other factors limiting the size of what you can send (e.g. the memory limit set for your PHP script, limitations of your HTTP implementation, or running out of available virtual memory), but rest assured there is no such thing as a "1000 bytes limit" for the data you can send with NuSOAP.
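To make the distinction concrete, here is a tiny Python sketch of the same pattern (my own illustration, not NuSOAP code): truncating the logged preview does not truncate the payload that gets sent.

```python
def send_with_debug(payload, debug_log):
    # Mirror of the NuSOAP pattern: the debug preview is capped at 1000
    # characters (PHP's substr), but the returned payload is untouched.
    debug_log.append(payload[:1000])
    return payload  # the full payload is what goes to the transport

log = []
sent = send_with_debug("x" * 5000, log)
assert len(sent) == 5000      # nothing was cut from the message
assert len(log[0]) == 1000    # only the log entry is truncated
```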
|
On a production box we use the PHP 5.2.5 built-in SOAP functions as the server and NuSOAP on PHP 4 as the client, and we have successfully transferred messages larger than 1 MB.
I don't think that there is a limitation in either product, but you should check your settings in php.ini for
```
max_input_time (defaults to 60)
```
This is the time each script is allowed to parse input. If the time is up before parsing is complete, the script will not even run.
A side note: if possible, I suggest migrating to the SoapClient/SoapServer PHP extension classes. NuSOAP has proved not very reliable in heavy-load situations, especially when it comes to the cache. Sometimes we saw NuSOAP simply "forget" WSDL definitions and fall back to non-WSDL mode. Weird...
|
How is the max size of a SOAP message determined?
|
[
"php",
"soap",
"wsdl",
"nusoap"
] |
I'm trying to use `System.DirectoryServices` in a web site project and I'm getting this error:
> The type or namespace name 'DirectoryServices' does not exist in the namespace 'System' (are you missing an assembly reference?)
My project has a reference to `System.DirectoryServices` in `web.config`:
```
<add assembly="System.DirectoryServices, Version=2.0.0.0, Culture=neutral, PublicKeyToken=B03F5F7F11D50A3A"/>
```
And I do have `using System.DirectoryServices` in the files where I want to use it.
Does anybody have a clue where to look for the problem?
|
Is the web-server (IIS or whatever) configured to run the folder as an application (i.e. shows as a cog), and is it using the correct version of ASP.NET? If it is running as 1.1, bits of it might work - but it would fail to find that 2.0 assembly in the 1.1 GAC.
|
1. Right-click on References under your solution.
2. Select Add Reference.
3. Find *System.DirectoryServices* under the Framework Assemblies list.
4. Select it and click Add.
|
System.DirectoryServices is not recognised in the namespace 'System'
|
[
"c#",
".net",
"directoryservices"
] |
I have to read CSV files line by line, and they can be 10 to 20 MB. `file()` is useless, and I have to find the quickest way.
I have tried `fgets()`, which runs fine, but I don't know whether it reads a small block each time I call it, or caches a bigger one and optimizes file I/O.
Do I have to try the `fread()` way, parsing EOL by myself?
|
You ought to be using [fgetcsv()](http://www.php.net/fgetcsv) if possible.
Otherwise, there is always fgets().
|
`stream_get_line()` is apparently more efficient than `fgets()` for large files. If you specify a sensible maximum length for the read, I don't see any reason why PHP would have to 'read ahead' to read a line in, as you seem to be worried about.
If you want to work with CSVs then `fgetcsv()` will return results in a slightly more sensible format.
|
The best way to read large files in PHP?
|
[
"php",
"large-files"
] |
I'm using ASP.NET 2.0. I have a scenario where I need to check a user's input against either of two ranges. For example, I need to check a TextBox value against the ranges 100-200 or 500-600. I know that I can hook up two ASP.NET RangeValidators to the TextBox, but that would validate the input against both ranges, an AND condition, if you will. A CustomValidator is an option, but how would I pass the two range values from the server side? Is it possible to extend the RangeValidator to solve this particular problem?
[Update]
Sorry I didn't mention this: the ranges can vary, and different controls on the page will have different ranges based on some condition. I know I can hold these values in a js variable or hidden input element, but that wouldn't look very elegant.
|
I extended the BaseValidator to achieve this. It's fairly simple once you understand how validators work. I've included a crude version of the code to demonstrate how it can be done. Mind you, it's tailored to my problem (e.g. ints should always be > 0), but you can easily extend it.
```
public class RangeValidatorEx : BaseValidator
{
    protected override void AddAttributesToRender(System.Web.UI.HtmlTextWriter writer)
    {
        base.AddAttributesToRender(writer);
        if (base.RenderUplevel)
        {
            string clientId = this.ClientID;
            // The evaluationfunction attribute holds the name of the client-side js function.
            Page.ClientScript.RegisterExpandoAttribute(clientId, "evaluationfunction", "RangeValidatorEx");
            Page.ClientScript.RegisterExpandoAttribute(clientId, "Range1High", this.Range1High.ToString());
            Page.ClientScript.RegisterExpandoAttribute(clientId, "Range2High", this.Range2High.ToString());
            Page.ClientScript.RegisterExpandoAttribute(clientId, "Range1Low", this.Range1Low.ToString());
            Page.ClientScript.RegisterExpandoAttribute(clientId, "Range2Low", this.Range2Low.ToString());
        }
    }

    // Will be invoked to validate the control's properties.
    protected override bool ControlPropertiesValid()
    {
        if ((this.Range1High <= 0) || (this.Range1Low <= 0) || (this.Range2High <= 0) || (this.Range2Low <= 0))
            throw new HttpException("The range values must be greater than zero");
        return base.ControlPropertiesValid();
    }

    // Performs the server-side validation.
    protected override bool EvaluateIsValid()
    {
        int code;
        if (!Int32.TryParse(base.GetControlValidationValue(ControlToValidate), out code))
            return false;
        return (code < this.Range1High && code > this.Range1Low) || (code < this.Range2High && code > this.Range2Low);
    }

    // Injects the client-side script into the page.
    protected override void OnPreRender(EventArgs e)
    {
        base.OnPreRender(e);
        if (base.RenderUplevel)
        {
            this.Page.ClientScript.RegisterClientScriptBlock(this.GetType(), "RangeValidatorEx", RangeValidatorExJs(), true);
        }
    }

    string RangeValidatorExJs()
    {
        // The validator is rendered as a SPAN tag on the client side and is
        // passed to the validation function; the expando attributes registered
        // above are available as properties of that element.
        string js = "function RangeValidatorEx(val){ "
            + " var code = parseInt(document.getElementById(val.controltovalidate).value, 10); "
            + " return ((code < val.Range1High && code > val.Range1Low) || (code < val.Range2High && code > val.Range2Low)); }";
        return js;
    }

    public int Range1Low
    {
        get
        {
            object obj2 = this.ViewState["Range1Low"];
            if (obj2 != null)
                return System.Convert.ToInt32(obj2);
            return 0;
        }
        set { this.ViewState["Range1Low"] = value; }
    }

    public int Range1High
    {
        get
        {
            object obj2 = this.ViewState["Range1High"];
            if (obj2 != null)
                return System.Convert.ToInt32(obj2);
            return 0;
        }
        set { this.ViewState["Range1High"] = value; }
    }

    public int Range2Low
    {
        get
        {
            object obj2 = this.ViewState["Range2Low"];
            if (obj2 != null)
                return System.Convert.ToInt32(obj2);
            return 0;
        }
        set { this.ViewState["Range2Low"] = value; }
    }

    public int Range2High
    {
        get
        {
            object obj2 = this.ViewState["Range2High"];
            if (obj2 != null)
                return System.Convert.ToInt32(obj2);
            return 0;
        }
        set { this.ViewState["Range2High"] = value; }
    }
}
```
|
A CustomValidator should work. I'm not sure what you mean by "pass the 2 ranges values from the server-side". You could validate it on the server-side using a validation method like this:
```
void ValidateRange(object sender, ServerValidateEventArgs e)
{
    int input;
    bool parseOk = int.TryParse(e.Value, out input);
    e.IsValid = parseOk &&
        ((input >= 100 && input <= 200) ||
         (input >= 500 && input <= 600));
}
```
You will then need to set the OnServerValidate property of your CustomValidator to "ValidateRange", or whatever you happen to call it.
Is this the sort of thing you're after?
|
Asp.Net : Extended range validation
|
[
"c#",
"asp.net",
"validation"
] |
I have a large C# solution file (~100 projects), and I am trying to improve build times. I think "Copy Local" is wasteful in many cases for us, but I am wondering about best practices.
In our .sln, we have application A depending on assembly B which depends on assembly C. In our case, there are dozens of "B" and a handful of "C". Since these are all included in the .sln, we're using project references. All assemblies currently build into $(SolutionDir)/Debug (or Release).
By default, Visual Studio marks these project references as "Copy Local", which results in every "C" being copied into $(SolutionDir)/Debug once for every "B" that builds. This seems wasteful. What can go wrong if I just turn "Copy Local" off? What do other people with large systems do?
FOLLOWUP:
Lots of responses suggest breaking up the build into smaller .sln files... In the example above, I would build the foundation classes "C" first, followed by the bulk of the modules "B", and then a few applications, "A". In this model, I need to have non-project references to C from B. The problem I run into there is that "Debug" or "Release" gets baked into the hint path and I wind up building my Release builds of "B" against debug builds of "C".
For those of you that split the build up into multiple .sln files, how do you manage this problem?
|
In a previous project I worked with one big solution with project references and ran into a performance problem as well. The solution was threefold:
1. Always set the Copy Local property to false and enforce this via a custom MSBuild step.
2. Set the output directory for each project to the same directory (preferably relative to `$(SolutionDir)`).
3. The default cs targets that ship with the framework calculate the set of references to be copied to the output directory of the project currently being built. Since this requires computing a transitive closure under the 'References' relation, it can become **VERY** costly. My workaround was to redefine the `GetCopyToOutputDirectoryItems` target in a common targets file (e.g. `Common.targets`) that is imported in every project after the import of `Microsoft.CSharp.targets`, resulting in every project file looking like the following:
```
<Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<PropertyGroup>
... snip ...
</PropertyGroup>
<Import Project="$(MSBuildBinPath)\Microsoft.CSharp.targets" />
<Import Project="[relative path to Common.targets]" />
<!-- To modify your build process, add your task inside one of the targets below and uncomment it.
Other similar extension points exist, see Microsoft.Common.targets.
<Target Name="BeforeBuild">
</Target>
<Target Name="AfterBuild">
</Target>
-->
</Project>
```
This reduced our build time at a given time from a couple of hours (mostly due to memory constraints), to a couple of minutes.
The redefined `GetCopyToOutputDirectoryItems` can be created by copying the lines 2,438–2,450 and 2,474–2,524 from `C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Microsoft.Common.targets` into `Common.targets`.
For completeness the resulting target definition then becomes:
```
<!-- This is a modified version of the Microsoft.Common.targets
version of this target it does not include transitively
referenced projects. Since this leads to enormous memory
consumption and is not needed since we use the single
output directory strategy.
============================================================
GetCopyToOutputDirectoryItems
Get all project items that may need to be transferred to the
output directory.
============================================================ -->
<Target
Name="GetCopyToOutputDirectoryItems"
Outputs="@(AllItemsFullPathWithTargetPath)"
DependsOnTargets="AssignTargetPaths;_SplitProjectReferencesByFileExistence">
<!-- Get items from this project last so that they will be copied last. -->
<CreateItem
Include="@(ContentWithTargetPath->'%(FullPath)')"
Condition="'%(ContentWithTargetPath.CopyToOutputDirectory)'=='Always' or '%(ContentWithTargetPath.CopyToOutputDirectory)'=='PreserveNewest'"
>
<Output TaskParameter="Include" ItemName="AllItemsFullPathWithTargetPath"/>
<Output TaskParameter="Include" ItemName="_SourceItemsToCopyToOutputDirectoryAlways"
Condition="'%(ContentWithTargetPath.CopyToOutputDirectory)'=='Always'"/>
<Output TaskParameter="Include" ItemName="_SourceItemsToCopyToOutputDirectory"
Condition="'%(ContentWithTargetPath.CopyToOutputDirectory)'=='PreserveNewest'"/>
</CreateItem>
<CreateItem
Include="@(_EmbeddedResourceWithTargetPath->'%(FullPath)')"
Condition="'%(_EmbeddedResourceWithTargetPath.CopyToOutputDirectory)'=='Always' or '%(_EmbeddedResourceWithTargetPath.CopyToOutputDirectory)'=='PreserveNewest'"
>
<Output TaskParameter="Include" ItemName="AllItemsFullPathWithTargetPath"/>
<Output TaskParameter="Include" ItemName="_SourceItemsToCopyToOutputDirectoryAlways"
Condition="'%(_EmbeddedResourceWithTargetPath.CopyToOutputDirectory)'=='Always'"/>
<Output TaskParameter="Include" ItemName="_SourceItemsToCopyToOutputDirectory"
Condition="'%(_EmbeddedResourceWithTargetPath.CopyToOutputDirectory)'=='PreserveNewest'"/>
</CreateItem>
<CreateItem
Include="@(Compile->'%(FullPath)')"
Condition="'%(Compile.CopyToOutputDirectory)'=='Always' or '%(Compile.CopyToOutputDirectory)'=='PreserveNewest'">
<Output TaskParameter="Include" ItemName="_CompileItemsToCopy"/>
</CreateItem>
<AssignTargetPath Files="@(_CompileItemsToCopy)" RootFolder="$(MSBuildProjectDirectory)">
<Output TaskParameter="AssignedFiles" ItemName="_CompileItemsToCopyWithTargetPath" />
</AssignTargetPath>
<CreateItem Include="@(_CompileItemsToCopyWithTargetPath)">
<Output TaskParameter="Include" ItemName="AllItemsFullPathWithTargetPath"/>
<Output TaskParameter="Include" ItemName="_SourceItemsToCopyToOutputDirectoryAlways"
Condition="'%(_CompileItemsToCopyWithTargetPath.CopyToOutputDirectory)'=='Always'"/>
<Output TaskParameter="Include" ItemName="_SourceItemsToCopyToOutputDirectory"
Condition="'%(_CompileItemsToCopyWithTargetPath.CopyToOutputDirectory)'=='PreserveNewest'"/>
</CreateItem>
<CreateItem
Include="@(_NoneWithTargetPath->'%(FullPath)')"
Condition="'%(_NoneWithTargetPath.CopyToOutputDirectory)'=='Always' or '%(_NoneWithTargetPath.CopyToOutputDirectory)'=='PreserveNewest'"
>
<Output TaskParameter="Include" ItemName="AllItemsFullPathWithTargetPath"/>
<Output TaskParameter="Include" ItemName="_SourceItemsToCopyToOutputDirectoryAlways"
Condition="'%(_NoneWithTargetPath.CopyToOutputDirectory)'=='Always'"/>
<Output TaskParameter="Include" ItemName="_SourceItemsToCopyToOutputDirectory"
Condition="'%(_NoneWithTargetPath.CopyToOutputDirectory)'=='PreserveNewest'"/>
</CreateItem>
</Target>
```
With this workaround in place I found it workable to have as many as 120+ projects in one solution. The main benefit is that the build order of the projects can still be determined by VS instead of doing it by hand by splitting up your solution.
|
I suggest you read Patrick Smacchia's articles on the subject:
* [Partitioning Your Code Base Through .NET Assemblies and Visual Studio Projects](http://www.simple-talk.com/dotnet/.net-framework/partitioning-your-code-base-through-.net-assemblies-and-visual-studio-projects/) --> *Should every Visual Studio project really be in its own assembly? And what does 'Copy Local=True' really mean?*
* [Lessons learned from the NUnit code base](http://codebetter.com/blogs/patricksmacchia/archive/2009/01/11/lessons-learned-from-the-nunit-code-base.aspx) --> *The VisualStudio Project Reference + Copy Local true option is evil!*
* [Analyzing the code base of CruiseControl.NET](http://codebetter.com/patricksmacchia/2009/03/15/analyzing-the-code-base-of-cruisecontrol-net/) --> *Bad usage of Copy Local Reference Assembly option set to True*
> CC.Net VS projects rely on the copy local reference assembly option set to true. [...]
> Not only this increase significantly the compilation time (x3 in the case of NUnit), but also it messes up your working environment. Last but not least, doing so introduces the risk for versioning potential problems. Btw, NDepend will emit a warning if it founds 2 assemblies in 2 different directories with the same name, but not the same content or version.
>
> The right thing to do is to define 2 directories $RootDir$\bin\Debug and $RootDir$\bin\Release, and configure your VisualStudio projects to emit assemblies in these directories. All project references should reference assemblies in the Debug directory.
You could also read [this article](http://www.theserverside.net/tt/articles/showarticle.tss?id=ControllingDependencies) to help you reduce your projects number and improve your compilation time.
|
What is the best practice for "Copy Local" and with project references?
|
[
"c#",
".net",
"visual-studio",
"msbuild"
] |
I have seen C# code that uses the `@` prefix to tell the compiler that the string contains newlines and should still be treated as a single literal.
Is there something like that for C/C++?
Like if I want to put something like:
73167176531330624919225119674426574742355349194934
96983520312774506326239578318016984801869478851843
85861560789112949495459501737958331952853208805511
12540698747158523863050715693290963295227443043557
66896648950445244523161731856403098711121722383113
62229893423380308135336276614282806444486645238749
30358907296290491560440772390713810515859307960866
70172427121883998797908792274921901699720888093776
65727333001053367881220235421809751254540594752243
52584907711670556013604839586446706324415722155397
53697817977846174064955149290862569321978468622482
83972241375657056057490261407972968652414535100474
82166370484403199890008895243450658541227588666881
16427171479924442928230863465674813919123162824586
17866458359124566529476545682848912883142607690042
24219022671055626321111109370544217506941658960408
07198403850962455444362981230987879927244284909188
84580156166097919133875499200524063689912560717606
05886116467109405077541002256983155200055935729725
71636269561882670428252483600823257530420752963450
in a string, I don't want to place it all on one line, but just write it like that and have the compiler know it is a single string.
|
C and C++ have automatic concatenation of adjacent quoted strings. This means that
```
const char *a = "a" "b";
```
and
```
const char *b = "ab";
```
will make `a` and `b` point at strings with identical contents. You can of course extend this, but it becomes troublesome when the strings contain quotes. Your example seems not to, so this should be practical here.
|
C and C++ didn't have anything like C# verbatim string literals at the time this answer was first written. The closest you could do is:
```
"73167176531330624919225119674426574742355349194934"
"96983520312774506326239578318016984801869478851843"
"85861560789112949495459501737958331952853208805511"
"12540698747158523863050715693290963295227443043557"
"66896648950445244523161731856403098711121722383113"
"62229893423380308135336276614282806444486645238749"
"30358907296290491560440772390713810515859307960866"
"70172427121883998797908792274921901699720888093776"
"65727333001053367881220235421809751254540594752243"
"52584907711670556013604839586446706324415722155397"
"53697817977846174064955149290862569321978468622482"
"83972241375657056057490261407972968652414535100474"
"82166370484403199890008895243450658541227588666881"
"16427171479924442928230863465674813919123162824586"
"17866458359124566529476545682848912883142607690042"
"24219022671055626321111109370544217506941658960408"
"07198403850962455444362981230987879927244284909188"
"84580156166097919133875499200524063689912560717606"
"05886116467109405077541002256983155200055935729725"
"71636269561882670428252483600823257530420752963450"
```
Note however that even in C# it doesn't do what it sounds like you want. If you have:
```
string foo = @"x
y";
```
in C# then the string will actually contain a linebreak.
In C++11, as per the comment, the `R` prefix denotes a *raw string literal*, e.g.
```
string x = R"(First line
second line)";
```
|
Long strings with newlines
|
[
"c++",
"string"
] |
I am trying to write a replacement regular expression to surround all words in quotes except the words AND, OR and NOT.
I have tried the following for the match part of the expression:
```
(?i)(?<word>[a-z0-9]+)(?<!and|not|or)
```
and
```
(?i)(?<word>[a-z0-9]+)(?!and|not|or)
```
but neither work. The replacement expression is simple and currently surrounds all words.
```
"${word}"
```
So
> This and This not That
becomes
> "This" and "This" not "That"
|
This is a little dirty, but it works:
```
(?<!\b(?:and| or|not))\b(?!(?:and|or|not)\b)
```
In plain English, this matches any word boundary that is neither preceded nor followed by "and", "or", or "not". It matches whole words only; e.g. the position after the word "sand" still matches, even though "sand" ends in "and".
The space in front of the "or" in the zero-width look-behind assertion is necessary to make it a fixed-length look-behind. Try whether that already solves your problem.
EDIT: Applied to the string "except the words AND, OR and NOT." as a global replace with single quotes, this returns:
```
'except' 'the' 'words' AND, OR and NOT.
```
|
The regex in your question is almost correct. The only problem is that you put the lookahead at the end of the regex instead of at the start. You also need to add word boundaries to force the regex to match whole words; otherwise it will match "nd" in "and", "r" in "or", etc., because "nd" and "r" are not in your negative lookahead.
> (?i)\b(?!(?:and|not|or)\b)(?<word>[a-z0-9]+)\b
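For what it's worth, this kind of pattern is easy to sanity-check in Python, which spells the named group `(?P<word>...)` instead of .NET's `(?<word>...)`. The `\b` inside the lookahead is my addition, so that words merely *starting* with "and"/"or"/"not" (like "android") are still quoted:

```python
import re

# .NET equivalent: (?i)\b(?!(?:and|not|or)\b)(?<word>[a-z0-9]+)\b
pattern = r"(?i)\b(?!(?:and|not|or)\b)(?P<word>[a-z0-9]+)\b"

result = re.sub(pattern, r'"\g<word>"', "This and This not That")
print(result)  # "This" and "This" not "That"
```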
|
Regex to match all words except a given list
|
[
"c#",
".net",
"regex"
] |
I've long had a desire for an STLish container that I could place into a shared memory segment or a memory mapped file.
I've considered the use of a custom allocator and placement new to place a regular STL container into a shared memory segment. (like this ddj [article](http://www.ddj.com/cpp/184401639;jsessionid=XH553SW4YIE42QSNDLPSKHSCJUNN2JVN?_requestid=537362)). The problem is that STL containers will internally have pointers to the memory they own. Therefore, if the shared memory segment or memory mapped file loads at a different base address (perhaps on a subsequent run, or in a second process), then the internal pointers are suddenly invalid. As far as I can figure out, the custom allocator approach only works if you can always map the memory segment into your process at the same address. At least with memory mapped files, I have lots of experience of that NOT being the case if you just let the system map it where ever it feels like.
I've had some thoughts on how to do this, but I'd like to avoid it if someone else has already done the work (that's me, being lazy).
I'm currently leaving locking out of the discussion, as the best locking strategy is highly application dependent.
|
The best starting point for this is probably the Boost.Interprocess library. It has a good example of a map in shared memory here:
[interprocess map](http://www.boost.org/doc/libs/1_37_0/doc/html/interprocess/quick_guide.html#interprocess.quick_guide.qg_interprocess_map)
You will probably also want to read the section on offset smart pointers, which solves the internal pointer problem you were referring to.
[Offset Pointer](http://www.boost.org/doc/libs/1_37_0/doc/html/interprocess/quick_guide.html#interprocess.quick_guide.qg_offset_ptr)
|
I only know of proprietary versions. [Bloomberg](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2005/n1850.pdf) and [EA](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2271.html) have both published papers about their STL versions, but haven't (to my knowledge) released the fruits of their labor.
|
Anyone have a good shared memory container for C++?
|
[
"c++",
"stl",
"shared-memory"
] |
Which SQL statement is faster?
```
SELECT TOP 2 c1.Price, c2.Price, ..... c49.Price, c50.Price
FROM Table1 AS c1, Table2 AS c2, ..... Table49 AS c49, Table50 AS c50
WHERE c1.Date = c2.Date AND c2.Date = c3.Date ..... c49.Date = c50.Date
ORDER BY c1.ID DESC
OR
SELECT TOP 2 c1.Price, c2.Price, ..... c49.Price, c50.Price
FROM (Table1 AS c1
INNER JOIN (Table2 AS c2
........
INNER JOIN (Table49 AS c49
INNER JOIN Table50 AS c50
ON c49.Date = c50.Date)
........
ON c2.FullDate__ = c3.FullDate__)
ON c1.FullDate__ = c2.FullDate__)
ORDER BY c1.ID DESC
```
Basically I need to extract 2 rows from each table to produce a summary periodically.
Which statement is faster?
|
What is faster is not having 50 tables to start with. Joining 50 tables might be OK, but it's a highly counter-intuitive design and probably not the most maintainable solution.
Can you not store your data in rows (or columns) of a single table (or a few tables) rather than 50?
|
WHERE would usually be better, but the best approach is **case by case**: throw the query into the profiler or, simpler yet, **display the execution plan**. People often have very strong opinions on which approach is fastest or best in theory, but there is no replacement for actually tuning against the data you really deal with, since which theory applies changes with your data load.
If you do not have **real data** in your app yet, try to create some realistic stress data. It will continue to be useful for testing. Then schedule time to tune once the application is live.
|
Which SQL statement is faster?
|
[
"",
"sql",
"database",
"performance",
"logging",
""
] |
I am currently building a small website where the content of the main div is filled through an Ajax call. I basically have a PHP script that returns the content like this:
(simplified php script...)
```
if(isset($_POST["id_tuto"])){
PrintHtml($_POST["id_tuto"]);
}
function PrintHtml($id)
{
switch($id)
{
case [...]:
echo "THIS IS MY HTML CONTENT";
break;
[...]
}
}
```
The web page then gets the text from that echo command and replaces the inner html of the content div.
My question is this: what is the best way to echo that HTML content? There is a lot of content each time, since it's a step-by-step tutorial; each string will be about 50-80 lines of HTML code. Is there a way to put that HTML content in a separate HTML file and echo that file, or...?
Thanks a lot!!
|
You could do it like so:
```
<?php
function PrintHtml($id) {
switch($id) {
case [...]:
?>
<h1>Tut page 1</h1>
<p>this is html content.</p>
<?php
break;
[...]
}
}
?>
```
Or perhaps:
```
<?php
function PrintHtml($id) {
switch($id) {
case [...]:
include 'section1.php';
break;
[...]
}
}
?>
```
|
You just do it the way you'd normally generate an HTML page, except it is not wrapped in HTML, HEAD, or BODY tags. It's just the HTML fragment that will be added to your page.
Everything you "echo" becomes part of the output. You can do it in pieces or all in one go; it doesn't matter, because it all gets sent to the browser as "the response" in one chunk anyway.
|
What's the best way to include html through an ajax echo command?
|
[
"",
"php",
"html",
"ajax",
""
] |
> **Possible Duplicate:**
> [What's the difference between NOT EXISTS vs. NOT IN vs. LEFT JOIN WHERE IS NULL?](https://stackoverflow.com/questions/2246772/whats-the-difference-between-not-exists-vs-not-in-vs-left-join-where-is-null)
I need to write a query that will retrieve the records from Table A, provided that the key in Table A does not exist in Table B.
Any help will be appreciated.
Thanks
|
```
SELECT *
FROM A
WHERE ID NOT IN
(SELECT ID FROM B)
```
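Both formulations can be compared quickly against an in-memory SQLite database; here is an illustrative Python sketch (the table and column names are made up for the demo):

```python
import sqlite3

# Build two toy tables: A has ids 1-4, B has ids 2 and 4.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE A (id INTEGER, name TEXT);
    CREATE TABLE B (id INTEGER);
    INSERT INTO A VALUES (1, 'one'), (2, 'two'), (3, 'three'), (4, 'four');
    INSERT INTO B VALUES (2), (4);
""")

# NOT IN formulation
not_in = conn.execute(
    "SELECT id FROM A WHERE id NOT IN (SELECT id FROM B)").fetchall()

# LEFT JOIN ... IS NULL formulation
left_join = conn.execute(
    "SELECT a.id FROM A a LEFT JOIN B b ON a.id = b.id "
    "WHERE b.id IS NULL").fetchall()

print(sorted(not_in))     # rows in A but not in B
print(sorted(left_join))  # same result via the outer join
```

One caveat worth knowing: if the subquery can return a NULL, `NOT IN` returns no rows at all, so the LEFT JOIN form is safer when B's key column is nullable.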
|
```
select a.*
from
tableA a
left join tableB b
ON a.id = b.id
where
b.id is null
```
|
Retrieve records from one table as long as they do not exist in another table (T-SQL)
|
[
"",
"sql",
"sql-server",
""
] |
```
<form id="frm_1" name="frm_1" target="_self" method="GET" action="local_page.php" >
</form>
<form id="tgt_1" name="tgt_1" target="_blank" method="POST" action="http://stackoverflow.com/" >
</form>
<a onclick="test(event, '1'); " href="#" >Click Here</a>
<script>
function test(event, id){
document.getElementById("frm_"+id).submit();
document.getElementById("tgt_"+id).submit();
}
</script>
```
Is it possible to open a new tab/window and change the current page?
|
```
<form id="frm_1" name="frm_1" target="_self" method="POST" action="local_page.php" >
<input type="hidden" name="vital_param" value="<?= $something ?>">
</form>
<form id="tgt_1" name="tgt_1" target="_blank" method="POST" action="http://stackoverflow.com/" >
</form>
<button type="submit" onclick="test(event, '1'); " >Click Here</button>
<script>
function test(event, id){
window.open( document.getElementById("tgt_"+id).action, "_blank");
setTimeout('document.getElementById("frm_'+id+'").submit();', 1000);
return true;
}
</script>
```
The tgt form is kept as the source of the target URL; it could be an array or an attribute instead.
Without setTimeout() the browser stays on the current page (FF/IE).
|
As far as I know, it's not possible to submit two forms at once. Since you're using PHP however, why not take a look at the [cURL](https://www.php.net/manual/en/book.curl.php) library? It lets you send POST and GET requests and parse the results in PHP.
To answer the question in the title, if you simply want to open two pages with one click, you could do it like this (excuse the inline javascript):
```
<a
href="http://www.google.com"
target="_blank"
onclick="document.location.href='http://www.yahoo.com'"
>Click here to open Google in a new window and yahoo in this window</a>
```
|
How to open a new tab and change the current page
|
[
"",
"javascript",
""
] |
I want to pass the result of boost::bind to a method expecting a plain function pointer (same signature).
```
typedef void TriggerProc_type(Variable*,void*);
void InitVariable(TriggerProc_type *proc);
boost::function<void (Variable*, void*)> triggerProc ...
InitVariable(triggerProc);
error C2664: 'InitVariable' : cannot convert parameter 1 from
'boost::function<Signature>' to 'void (__cdecl *)(type *,void *)'
```
I can avoid storing a boost::function and just pass the bound functor directly, but then I get similar error:
```
error C2664: 'blah(void (__cdecl *)(type *,void *))' : cannot convert parameter
1 from 'boost::_bi::bind_t<R,F,L>' to 'void (__cdecl *)(type *,void *)'
```
|
Has anyone noticed that the [accepted answer](https://stackoverflow.com/a/282433/20003) only works with trivial cases? The only way that function<>::target() will return an object that can be bound to a C callback, is if it was constructed with an object that can be bound to a C callback. If that's the case, then you could have bound it directly and skipped all of the function<> nonsense to begin with.
If you think about it, there isn't any magic solution to this. A C-style callback is stored as a single pointer which points to executable code. Any nontrivial boost::function<> is going to need at least two pointers: one to the executable code, the other to the data that's needed to set up the call (e.g. the 'this' pointer, in the case of a bound member function).
The right way to use boost::function and boost::bind with C callbacks is to create a shim function that satisfies the callback signature, figures out which function<> to call, and calls it. Usually C callbacks will have some kind of a void\* for 'user data'; that's where you stash your function pointer:
```
typedef void (*CallbackType)(int x, void* user_data);
void RegisterCallback(CallbackType cb, void* user_data);
void MyCallback(int x, void* userData) {
boost::function<void(int)>& pfn = *static_cast<boost::function<void(int)>*>(userData);
pfn(x);
}
boost::function<void(int)> fn = boost::bind(myFunction, 5);
RegisterCallback(MyCallback, &fn);
```
Of course, if your callback signature doesn't include some kind of user data pointer, you're out of luck. But any callback that doesn't include a user data pointer is already unusable in most real-world scenarios, and needs to be rewritten.
|
I think you want to use the target() member function of boost::function (isn't that a mouthful...)
```
#include <boost/function.hpp>
#include <iostream>
int f(int x)
{
return x + x;
}
typedef int (*pointer_to_func)(int);
int
main()
{
boost::function<int(int x)> g(f);
if(*g.target<pointer_to_func>() == f) {
std::cout << "g contains f" << std::endl;
} else {
std::cout << "g does not contain f" << std::endl;
}
return 0;
}
```
|
demote boost::function to a plain function pointer
|
[
"",
"c++",
"boost",
"functor",
""
] |
I'm refactoring some old JavaScript code and there's a lot of DOM manipulation going on.
```
var d = document;
var odv = d.createElement("div");
odv.style.display = "none";
this.OuterDiv = odv;
var t = d.createElement("table");
t.cellSpacing = 0;
t.className = "text";
odv.appendChild(t);
```
I would like to know if there is a better way to do this using jQuery. I've been experimenting with:
```
var odv = $.create("div");
$.append(odv);
// And many more
```
But I'm not sure if this is any better.
|
Here's your example in the "one" line.
```
this.$OuterDiv = $('<div></div>')
.hide()
.append($('<table></table>')
.attr({ cellSpacing : 0 })
.addClass("text")
)
;
```
---
*Update*: I thought I'd update this post since it still gets quite a bit of traffic. In the comments below there's some discussion about `$("<div>")` vs `$("<div></div>")` vs `$(document.createElement('div'))` as a way of creating new elements, and which is "best".
I put together [a small benchmark](http://jsbin.com/elula3), and here are roughly the results of repeating the above options 100,000 times:
**jQuery 1.4, 1.5, 1.6**
```
Chrome 11 Firefox 4 IE9
<div> 440ms 640ms 460ms
<div></div> 420ms 650ms 480ms
createElement 100ms 180ms 300ms
```
**jQuery 1.3**
```
Chrome 11
<div> 770ms
<div></div> 3800ms
createElement 100ms
```
**jQuery 1.2**
```
Chrome 11
<div> 3500ms
<div></div> 3500ms
createElement 100ms
```
I think it's no big surprise, but `document.createElement` is the fastest method. Of course, before you go off and start refactoring your entire codebase, remember that the differences we're talking about here (in all but the archaic versions of jQuery) equate to about an extra 3 milliseconds *per thousand elements*.
---
**Update 2**
Updated for **jQuery 1.7.2** and put the benchmark on `JSBen.ch` which is probably a bit more scientific than my primitive benchmarks, plus it can be crowdsourced now!
<http://jsben.ch/#/ARUtz>
|
Simply supplying the HTML of elements you want to add to a jQuery constructor `$()` will return a jQuery object from newly built HTML, suitable for being appended into the DOM using jQuery's `append()` method.
For example:
```
var t = $("<table cellspacing='0' class='text'></table>");
$('body').append(t);
```
You could then populate this table programmatically, if you wished.
This gives you the ability to specify any arbitrary HTML you like, including class names or other attributes, which you might find more concise than using `createElement` and then setting attributes like `cellSpacing` and `className` via JS.
|
jQuery document.createElement equivalent?
|
[
"",
"javascript",
"jquery",
"html",
"dom",
"dhtml",
""
] |
Is there an efficient way to take a subset of a C# array and pass it to another piece of code (without modifying the original array)? I use CUDA.net, which has a function that copies an array to the GPU. I would like to e.g. pass the function a 10th of the array and thus copy each 10th of the array to the GPU separately (for pipelining purposes).
Copying the array in this way should be as efficient as copying it in one go. It can be done with unsafe code and just referencing the proper memory location but other than that I'm not sure. The CopyTo function copies the entire array to another array so this does not appear useful.
|
Okay, I'd misunderstood the question before.
What you want is [System.Buffer.BlockCopy](http://msdn.microsoft.com/en-us/library/system.buffer.blockcopy.aspx) or [System.Array.Copy](http://msdn.microsoft.com/en-us/library/z50k9bft.aspx).
The LINQ ways will be hideously inefficient. If you're able to reuse the buffer you're copying into, that will also help the efficiency, avoiding creating a new array each time - just copy over the top. Unless you can divide your "big" array up equally though, you'll need a new one for the last case.
|
I'm not sure how efficient this is but...
```
int[] myInts = new int[100];
//Code to populate original arrray
for (int i = 0; i < myInts.Length; i += 10)
{
int[] newarray = myInts.Skip(i).Take(10).ToArray();
//Do stuff with new array
}
```
|
Getting array subsets efficiently
|
[
"",
"c#",
"arrays",
"cuda",
"cuda.net",
""
] |
Is there a fairly easy way to convert a datetime object into an RFC 1123 (HTTP/1.1) date/time string, i.e. a string with the format
```
Sun, 06 Nov 1994 08:49:37 GMT
```
Using `strftime` does not work, since the strings are locale-dependent. Do I have to build the string by hand?
|
You can use wsgiref.handlers.format\_date\_time from the stdlib which does not rely on locale settings
```
from wsgiref.handlers import format_date_time
from datetime import datetime
from time import mktime
now = datetime.now()
stamp = mktime(now.timetuple())
print format_date_time(stamp) #--> Wed, 22 Oct 2008 10:52:40 GMT
```
You can use email.utils.formatdate from the stdlib which does not rely on locale settings
```
from email.utils import formatdate
from datetime import datetime
from time import mktime
now = datetime.now()
stamp = mktime(now.timetuple())
print formatdate(
timeval = stamp,
localtime = False,
usegmt = True
) #--> Wed, 22 Oct 2008 10:55:46 GMT
```
If you can set the locale process wide then you can do:
```
import locale, datetime
locale.setlocale(locale.LC_TIME, 'en_US')
datetime.datetime.utcnow().strftime('%a, %d %b %Y %H:%M:%S GMT')
```
If you don't want to set the locale process wide you could use [Babel](http://babel.edgewall.org/) [date formating](http://babel.edgewall.org/wiki/Documentation/dates.html)
```
from datetime import datetime
from babel.dates import format_datetime
now = datetime.utcnow()
format = 'EEE, dd LLL yyyy HH:mm:ss'
print format_datetime(now, format, locale='en') + ' GMT'
```
A manual way to format it which is identical with wsgiref.handlers.format\_date\_time is:
```
def httpdate(dt):
"""Return a string representation of a date according to RFC 1123
(HTTP/1.1).
The supplied date must be in UTC.
"""
weekday = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"][dt.weekday()]
month = ["Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep",
"Oct", "Nov", "Dec"][dt.month - 1]
return "%s, %02d %s %04d %02d:%02d:%02d GMT" % (weekday, dt.day, month,
dt.year, dt.hour, dt.minute, dt.second)
```
|
You can use the formatdate() function from the Python standard email module:
```
from email.utils import formatdate
print formatdate(timeval=None, localtime=False, usegmt=True)
```
Gives the current time in the desired format:
```
Wed, 22 Oct 2008 10:32:33 GMT
```
In fact, this function does it "by hand" without using strftime()
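For what it's worth, the snippets above are Python 2; on Python 3 the standard library gained a direct equivalent, `email.utils.format_datetime` with `usegmt=True`, which formats an aware UTC datetime in the RFC 1123 shape:

```python
from datetime import datetime, timezone
from email.utils import format_datetime

# The example timestamp from the question, as an aware UTC datetime.
dt = datetime(1994, 11, 6, 8, 49, 37, tzinfo=timezone.utc)
stamp = format_datetime(dt, usegmt=True)
print(stamp)  # Sun, 06 Nov 1994 08:49:37 GMT
```

Note that `usegmt=True` requires the datetime to be timezone-aware and in UTC; otherwise it raises ValueError.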
|
RFC 1123 Date Representation in Python?
|
[
"",
"python",
"http",
"datetime",
""
] |
I am wondering what the best way is, using PHP, to obtain a list of all the rows in the database, and when clicking on a row, show the information in more detail, such as a related image etc.
Should I use frames to do this? Are there good examples of this somewhere?
Edit:
I need much simpler instructions, as I am not a programmer and am just starting out. Can any links or examples be recommended?
|
Contrary to other's recommendations, I would not recommend a framework or abstraction level. It will insulate you from understanding how php works and requires that you learn php and the framework structure/process at the same time. An abstraction layer is good practice in a commercial environment, but from the vibe of your question, you don't anticipate moving servers or migrating your db.
I recommend working procedurally (not object-oriented) with the php and mysql until you understand what is going on and how the language works.
To respond to your actual question:
You need to connect to the database: `mysql_connect()`
You need to select the database you want to work with: `mysql_select_db()`
You need to define the query: `mysql_query()`
You need to use a while loop to get the data:
```
$query=mysql_query("select * from table_name");
while($row=mysql_fetch_assoc($query)){
extract($row);
echo $name_of_field_1.": ".$name_of_field_2;
}
```
To make each row of output a link to more info rewrite the echo statement like this:
```
echo "<a href=\"http://addresstomoreinfo.php?image_id=".$image_id."\">"
    .$name_of_field_1.": ".$name_of_field_2."</a>";
```
The "name of field" variables represent the column names of your db table and I have made up the layout of the field name, colon, and second field name. How the info is displayed is up to you.
The question mark prepends the name of a variable that is defined in the addresstomoreinfo.php page that will be identified by `$var=$_GET['image_id'];`
Other PHP, HTML, and CSS elements are involved in the big picture of accomplishing this. A good source for beginning information is <http://www.w3schools.com/> **I also live and die by the php manual linked to above**
|
I use tables and JavaScript to do this.
Data in a SQL database is, by nature, tabular. So I just select the data and create a table. Then, to drill down (when I need to), I provide a JavaScript "more" functionality and use CSS to hide/display the additional data.
|
Use PHP to show MySQL data
|
[
"",
"php",
"mysql",
""
] |
I'm writing a MUD engine and I've just started on the game object model, which needs to be extensible.
I need help mainly because what I've done feels messy, but I can't think of a another solution that works better.
I have a class called `MudObject`, and another class called `Container`, A container can contain multiple `MudObjects`, but is a `MudObject` itself, however `MudObject`s need to know what they are contained in.
So they look something like this:
```
public abstract class MudObject
{
Container containedBy;
}
public abstract class Container : MudObject
{
List<MudObject> Contains;
}
```
*(please note these are just example and some qualifiers and access modifiers, properties and such are missed off)*
Now just this in itself seems messy, but lets add something else to the mix:
`Item` is a `MudObject` that all visual items (such as weapons) will inherit from; however, some of these need to be containers too (like chests). But there's no such thing as multiple inheritance in C#, so it comes down to interfaces. The best choice would be to make the container an interface (as far as I can see). However, there was a reason I didn't want it to be: adding a `MudObject` to a container causes the container to update the `MudObject`'s `.containedBy` value.
Any ideas that would make this work, or am I falling into the trap of making things too complicated?
If so what else could you suggest?
|
What you're asking for is reasonable, and is the [Composite Design Pattern](http://home.earthlink.net/~huston2/dp/composite.html)
|
I think you are overcomplicating. If MudObjects can contain other MudObjects, the single base class you need should be along these lines:
```
public abstract class MudObject
{
MudObject containedBy; //technically Parent
List<MudObject> Contains; //children
}
```
This is similar to the way WinForms and ASP.NET works. Many container controls are both controls, and can contain a collection of subcontrols.
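The parent/child bookkeeping the question worries about (keeping `containedBy` in sync) is easy to centralize in the container's add method. A minimal, language-agnostic sketch in Python (class names follow the question; the details are illustrative):

```python
class MudObject:
    def __init__(self):
        self.contained_by = None  # parent container, if any

class Container(MudObject):
    def __init__(self):
        super().__init__()
        self.contains = []  # child MudObjects

    def add(self, obj):
        # Centralized bookkeeping: the container updates the child's
        # back-reference, so callers can never forget to do it, and an
        # object moved between containers is removed from its old parent.
        if obj.contained_by is not None:
            obj.contained_by.contains.remove(obj)
        obj.contained_by = self
        self.contains.append(obj)

chest = Container()   # a Container is itself a MudObject
sword = MudObject()
chest.add(sword)
print(sword.contained_by is chest)  # True
```

Because `add` owns both sides of the relationship, the same discipline works whether `Container` is a base class or an interface with a shared helper.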
|
Advice with Class Hierarchy for Game Items
|
[
"",
"c#",
"oop",
"hierarchy",
"mud",
""
] |
I have this as Main
```
int[] M ={ 10, 2, 30, 4, 50, 6, 7, 80 };
MyMath.Reverse(M);
for (int i = 0; i < M.Length; i++)
Console.WriteLine(M[i].ToString() + ", ");
```
---
After I created the class MyMath I made the Reverse method
```
public int Reverse(Array M)
{
int len = M.Length;
for (int i = 0; i < len / 2; i++)
{
int temp = M[i]; M[i] = M[len - i - 1]; M[len - i - 1] = temp;
}
}
```
but I'm sure it's wrong because it's not working :-) So do you have different code to write in the reverse method?
Note: I don't want to use the built-in Reverse in the Array class.
---
Yes guys, when I used the built-in Reverse method I got this error:
Process is terminated due to StackOverflowException.
That's after I wrote the method as:
```
public static int Reverse(Array M)
{
return Reverse(M);
}
```
So then I tried to create my own reverse method, and there I got stuck.
|
Working from your
```
public static int Reverse(Array M)
{
return Reverse(M);
}
```
You have 2 problems.
1. Reverse(M) looks like the same function that you're in, so you're calling your new function, which calls itself, which calls itself, etc., resulting in the stack overflow. Change to `return Array.Reverse(M);`
2. `Array.Reverse` returns a void, so if you need to return an `int` (not sure what it's supposed to be here) you'll need to supply your own. Or change *your* `Reverse` function to be `void`.
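For reference, the manual swap-based loop the question is attempting looks like this in Python; the same index arithmetic carries over directly to a C# `int[]` version:

```python
def reverse_in_place(m):
    n = len(m)
    for i in range(n // 2):
        # Swap the i-th element with its mirror from the end.
        m[i], m[n - i - 1] = m[n - i - 1], m[i]

data = [10, 2, 30, 4, 50, 6, 7, 80]
reverse_in_place(data)
print(data)  # [80, 7, 6, 50, 4, 30, 2, 10]
```

Looping only to `n // 2` is the key detail: going further would swap every pair twice and leave the array unchanged.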
|
To fix your problem, change your method to:
```
// the built-in returns void, so that needed to be changed...
public static void Reverse(Array M)
{
Array.Reverse(M); // you forgot to reference the Array class in yours
}
```
There, no Stack Overflow problems.
|
Reverse Method for an Array of int
|
[
"",
"c#",
"arrays",
""
] |
I have some experience building applications with ASP.NET, but now MVC frameworks are becoming more popular. I would like to try building a new multilingual web application using ASP.NET MVC or Castle MonoRail, but I don't know which one is good for me. I don't like the Web Forms view engine, but I like the routing feature in ASP.NET MVC.
* Could anyone tell me the pros and cons of each?
* Which view engine is better for overriding the master template?
|
Speaking as an advocate of monorail, I've got to say you should probably go for ASP.NET MVC. To be honest, the simple fact that ASP.NET MVC is going to become the default architecture within three years should probably swing it. This equation was different a year ago, simply because the default architecture had serious productivity problems compared to MonoRail.
If you want to talk technical advantages and disadvantages:
* ASP.NET AJAX is a mess (avoid it), but they've now got jQuery. In fact, the jQuery support is better than any other environment. Of course, you only fully get that with IDE integration with the standard view engine.
* There are some aesthetic improvements (for instance, the way model information is passed around is much cleaner and more obvious than Monorail).
Also, don't dismiss the standard view engine out of hand. You don't have to throw controls at it like you did with ASP.NET, you can code it in a pretty similar manner to Brail, only using C# instead of Boo.
There are things that are just plain ugly:
* the number of methods that take `object` for a parameter. Good luck finding the documentation on what exactly they expect.
* Microsoft's fondness for abstract classes over interfaces. They have their reasons, but I still dislike it.
Also, in many ways, MonoRail remains the more complete platform. There's no abstraction for validation or paging in ASP.NET MVC, for instance. Also, there's not really any help for binding to a model. The helpers have very little functionality compared to their MonoRail equivalents.
Overall, though, I think ASP.NET MVC is a winner.
|
MonoRail and ASP.NET MVC are fundamentally very similar, you should be well off using either one of them. MonoRail has existed much longer and has therefore more higher level features.
The main strength of ASP.NET MVC is its routing engine; to be fair, MonoRail has pretty much an equivalent routing engine, and with some modification you can use the ASP.NET MVC routing engine with MonoRail, as the routing engine is not really in ASP.NET MVC but in System.Web.Routing (released in .NET 3.5 SP1). ASP.NET MVC's integration with Visual Studio is also a plus, and will probably get better as we approach the RTM of v1.
The MvcContrib project contains some great view engines, like Spark, NHaml and Brail. None could be considered "best"; a personal favourite is Spark. For more on Spark: <http://dev.dejardin.org/documentation/syntax>
The WebForms engine has IntelliSense, which is a great advantage that, to my knowledge, all alternative view engines lack.
|
Asp.Net MVC vs Castle MonoRail
|
[
"",
"c#",
"asp.net-mvc",
"castle-monorail",
""
] |
I wonder if there is something similar to SQL Profiler for SQL Server Compact Edition?
I use SqlCE as the backend for a desktop application, and it would be really great to have something like SQL Profiler for this embedded database.
Or at least something similar to the NHibernate show\_sql feature...
Any ideas?
Thanks,
j.
|
The only tested solution I know of that could solve this problem is [Altiris Profiler](https://stackoverflow.com/questions/206743/is-there-any-way-i-can-get-net-stack-traces-in-sql-profiler-or-a-similar-tool) which is a tool I designed at my previous job, but is closed source and not-for-sale.
The way you would hook it in is by creating a factory for your commands and proxying them for profiling purposes before using them (using RealProxy). It's really lightweight and about 10 lines of code to implement.
On [my question](https://stackoverflow.com/questions/206743/is-there-any-way-i-can-get-net-stack-traces-in-sql-profiler-or-a-similar-tool) Flory talks about a new tool called [dynaTrace](http://www.dynatrace.com/en/) that may also be able to solve this problem as well.
|
I don't think that would work - CE seems like a totally different beast.
You can enable some logging that might help you:
<http://msdn.microsoft.com/en-us/library/ms171949(SQL.90).aspx>
I tried to do this and managed to set the database up and connect from SSMS - you have to specify the alternate connection type of 'SQL Server Compact Edition'. Profiler has no such thing - and entering a path to the datafile for the 'database' field did nothing.
|
Profiler for Sql CE
|
[
"",
"sql",
"sql-server-ce",
"profiler",
""
] |
I'm trying to post to an ADO.NET Data Service but the parameters seem to get lost along the way.
I got something like:
```
[WebInvoke(Method="POST")]
public int MyMethod(int foo, string bar) {...}
```
and I make an ajax-call using prototype.js as:
```
var args = {foo: 4, bar: "'test'"};
new Ajax.Request(baseurl + 'MyMethod', {
  method: 'POST',
  parameters: args,
  onSuccess: jadda,
  onFailure: jidda
});
```
If I replace "method: 'POST'" with "method: 'GET'" and "WebInvoke(Method="POST")" with "WebGet" everything works but now (using post) all I get is:
> Bad Request - Error in query syntax.
from the service.
The only fix (that I don't want to use) is to send all parameters in the URL even when I perform a post. Any ideas are welcome.
|
WCF and ASMX web services tend to be a bit choosy about the request body. When you specify args, the request is usually encoded as a form post, i.e. foo=4&bar=test; instead you need to specify the JavaScript literal:-
```
new Ajax.Request(baseurl + 'MyMethod', {
method: 'POST',
postBody: '{"foo":4, "bar":"test"}',
encoding: "UTF-8",
contentType: "application/json;",
onSuccess: function(result) {
alert(result.responseJSON.d);
},
onFailure: function() {
alert("Error");
}
});
```
|
If you want to use POST, you need to specify in the WebInvoke attribute that the parameters be wrapped in the request, unless the parameters consist of one object (e.g. a message contract). This makes sense, since there is no way to serialize the parameters without wrapping them in either JSON or XML.
Unwrapped, which is not valid XML since it is missing a root element:
```
<foo>1</foo>
<bar>abc</bar>
```
Wrapped, valid XML
```
<Request>
<foo>1</foo>
<bar>abc</bar>
</Request>
```
This sample also applies to JSON
|
Receive parameter from request body in WCF/ADO.NET Data Service
|
[
"",
".net",
"javascript",
"ajax",
"wcf",
"ado.net",
""
] |
I am looking for a way to truncate a string in Python that will not cut off the string in the middle of a word.
For example:
```
Original: "This is really awesome."
"Dumb" truncate: "This is real..."
"Smart" truncate: "This is really..."
```
I'm looking for a way to accomplish the "smart" truncate from above.
|
I actually wrote a solution for this on a recent project of mine. I've compressed the majority of it down to be a little smaller.
```
def smart_truncate(content, length=100, suffix='...'):
if len(content) <= length:
return content
else:
return ' '.join(content[:length+1].split(' ')[0:-1]) + suffix
```
What happens is the if-statement checks if your content is already less than the cutoff point. If it's not, it truncates to the desired length, splits on the space, removes the last element (so that you don't cut off a word), and then joins it back together (while tacking on the '...').
|
Here's a slightly better version of the last line in Adam's solution:
```
return content[:length].rsplit(' ', 1)[0]+suffix
```
(This is slightly more efficient, and returns a more sensible result in the case there are no spaces in the front of the string.)
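Putting the two answers together, here is the complete function with the improved last line, keeping the same semantics as the version above:

```python
def smart_truncate(content, length=100, suffix='...'):
    if len(content) <= length:
        return content
    # rsplit from the right with maxsplit=1, then keep everything before
    # the last space so the cut never lands mid-word.
    return content[:length].rsplit(' ', 1)[0] + suffix

print(smart_truncate("This is really awesome.", length=15))  # This is really...
```

If the first `length` characters contain no space at all, `rsplit` returns the whole slice, so the result degrades gracefully to a plain hard cut plus the suffix.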
|
Truncate a string without ending in the middle of a word
|
[
"",
"python",
"truncate",
""
] |
I need to pass an array from JavaScript to a page method in C#. I have tried using a parameter of type IList and also a string[] array in the C# method. Both ways throw an exception "cannot convert an object of type system.string to xxx", where xxx is the parameter type in the C# method. I am passing the object from jQuery as a JSON object, and it looks like it is coming through properly. Here is what it looks like:
```
{"testNumbers":"5555555555,3333333333,4444444444"}
```
What am I doing wrong here?
EDIT: C# Code:
```
[WebMethod()]
public static void ProcessAction(string[] testNumbers)
{
var dataProvider = new DataProvider();
dataProvider.ProcessAction(testNumbers);
}
```
|
Hmm, two things:
1. JSON arrays have square brackets around them
2. JSON numbers aren't quoted.
Try this instead:
```
{"testNumbers": [5555555555,3333333333,4444444444]}
```
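This shape is easiest to get right by letting a JSON library do the serialization instead of concatenating strings (in the browser that would be `JSON.stringify`); for illustration, Python's `json` module produces exactly the two variants in question:

```python
import json

# A list of numbers serializes to a bracketed, unquoted JSON array.
payload = json.dumps({"testNumbers": [5555555555, 3333333333, 4444444444]})
print(payload)  # {"testNumbers": [5555555555, 3333333333, 4444444444]}

# Joining the numbers into one string reproduces what the question sent:
# a single quoted value, which the server cannot bind to string[].
wrong = json.dumps({"testNumbers": ",".join(["5555555555", "3333333333"])})
print(wrong)  # {"testNumbers": "5555555555,3333333333"}
```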
|
You need to use one of the .NET JSON Deserializers:
<http://msdn.microsoft.com/en-us/library/bb299886.aspx>
|
Best way to pass an array from JavaScript to C#?
|
[
"",
"c#",
"asp.net",
"jquery",
"web-services",
""
] |
**Java** is the key here. I need to be able to delete files but users expect to be able to "undelete" from the recycle bin. As far as I can tell this isn't possible. Anyone know otherwise?
|
For various reasons Windows [has no concept of a folder that simply corresponds to the Recycle Bin](http://blogs.msdn.com/oldnewthing/archive/2008/09/18/8956382.aspx).
The correct way is to use JNI to invoke the Windows `SHFileOperation` API, setting the `FO_DELETE` flag in the `SHFILEOPSTRUCT` structure.
* [SHFileOperation documention](http://msdn.microsoft.com/en-us/library/bb762164(VS.85).aspx)
* [Java example for copying a file using SHFileOperation](http://www.jroller.com/ethdsy/entry/using_windows_explorer_copy) (the Recycle Bin link in the same article doesn't work)
|
Ten years later, with Java 9, there is finally a built-in way to move files to the Trash Bin
[`java.awt.Desktop.moveToTrash(java.io.File)`](https://docs.oracle.com/javase/9/docs/api/java/awt/Desktop.html#moveToTrash-java.io.File-):
> ### `public boolean moveToTrash(File file)`
>
> Moves the specified file to the trash.
>
> ### Parameters:
>
> file - the file
>
> ### Returns:
>
> returns true if successfully moved the file to the trash.
The availability of this feature for the underlying platform can be tested with `Desktop.isSupported(Desktop.Action.MOVE_TO_TRASH)`.
|
Is it possible with Java to delete to the Recycle Bin?
|
[
"",
"java",
"recycle-bin",
""
] |
I have to copy quite a lot of files from one folder to another. Currently I am doing it in this way:
```
string[] files = Directory.GetFiles(rootFolder, "*.xml");
foreach (string file in files)
{
string otherFile = Path.Combine(otherFolder, Path.GetFileName(file));
File.Copy(file, otherFile);
}
```
Is that the most efficient way? Seems to take ages.
EDIT: I am really asking if there is a faster way to do a batch copy, instead of copying individual files, but I guess the answer is no.
|
I can't think of a more efficient way than File.Copy, it goes directly to the OS.
On the other hand if it takes that long, I would strongly suggest to show a progress dialog - like [SHFileOperation](http://msdn.microsoft.com/en-us/library/bb762164(VS.85).aspx) does it for you. At least your users will know what is happening.
|
You could use the operating system to move the files. This is what tools like WinMerge do. You click the "copy" button in your app and it pops up the Windows progress box as if you had used Explorer to arrange the copy. [This thread](http://www.microsoft.com/communities/newsgroups/en-us/default.aspx?dg=microsoft.public.dotnet.languages.csharp&tid=89d90929-c087-4b5a-a885-13d41ad37b82&cat=&lang=en&cr=US&sloc=&p=1) describes it.
|
Effective copying multiple files
|
[
"",
"c#",
".net",
"file",
"copy",
""
] |
I recently started using Linux as my primary OS. What are the tools that I will need to set up a complete programming environment in Linux for C and C++?
|
Standard stuff:
* The compiler tools, gcc, gdb, etc.
* Some sort of editor/IDE (emacs, vim, eclipse)
* Profiling tools
* Source Control (SubVersion, git, etc)
* Language specific tools, like easy\_install for python (you said C/C++, but the same goes for everything)
* A web server maybe? Apache, Lighttpd, nginx
* Any libraries you'll be using. Are you doing kernel hacking? Driver development? OpenGL?
|
Among others you should also have gprof and valgrind (or something in its class).
|
Setting up a Programming Environment in Linux
|
[
"",
"c++",
"c",
"linux",
"ide",
"development-environment",
""
] |
I have a C# WinForms program that opens a serial port. The problem happens when the end user unplugs the USB cable and the device disappears. After this the program crashes and wants to report the error to Microsoft.
Is there a way to capture this event and shut down gracefully?
|
You can use WMI (Windows Management Instrumentation) to receive notification on USB events.
I did exactly that two years ago, monitoring for plugging and unplugging of a specific usb device.
Unfortunately, the code stays with my former employer, but I found one example at [bytes.com](http://bytes.com/topic/net/answers/102489-how-detect-usb-device-being-plugged-unplugged):
```
using System;
using System.ComponentModel;
using System.Runtime.InteropServices;
using System.Management;
class UsbWatcher
{
public static void Main()
{
WMIEvent wEvent = new WMIEvent();
ManagementEventWatcher watcher = null;
WqlEventQuery query;
ManagementOperationObserver observer = new ManagementOperationObserver();
ManagementScope scope = new ManagementScope("root\\CIMV2");
scope.Options.EnablePrivileges = true;
try
{
query = new WqlEventQuery();
query.EventClassName = "__InstanceCreationEvent";
query.WithinInterval = new TimeSpan(0,0,10);
query.Condition = @"TargetInstance ISA 'Win32_USBControllerDevice' ";
watcher = new ManagementEventWatcher(scope, query);
watcher.EventArrived
+= new EventArrivedEventHandler(wEvent.UsbEventArrived);
watcher.Start();
}
catch (Exception e)
{
//handle exception
}
}
}
```
I don't remember if I modified the query to receive events only for a specific device, or if I filtered out events from other devices in my event handler. For further information you may want to have a look at the [MSDN WMI .NET Code Directory](http://msdn.microsoft.com/en-us/library/ms257338.aspx).
**EDIT**
I found some more info on the event handler, it looks roughly like this:
```
protected virtual void OnUsbConnected(object Sender, EventArrivedEventArgs Arguments)
{
PropertyData TargetInstanceData = Arguments.NewEvent.Properties["TargetInstance"];
if (TargetInstanceData != null)
{
ManagementBaseObject TargetInstanceObject = (ManagementBaseObject)TargetInstanceData.Value;
if (TargetInstanceObject != null)
{
string dependent = TargetInstanceObject.Properties["Dependent"].Value.ToString();
string deviceId = dependent.Substring(dependent.IndexOf("DeviceID=") + 10);
// device id string taken from windows device manager
if (deviceId == "USB\\\\VID_0403&PID_6001\\\\12345678\"")
{
// Device is connected
}
}
}
}
```
You may want to add some exception handling, though.
|
Yes, there is a way to capture the event. Unfortunately, there can be a long delay between the time the device is removed and the time the program receives any notification.
The approach is to trap COM port events such as ErrorReceived and to catch the WM\_DEVICECHANGE message.
Not sure why your program is crashing; you should take a look at the stack to see where this is happening.
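A sketch of the WM\_DEVICECHANGE route: the message constants below come from the Windows SDK headers, and the classification is factored into a plain method so it can be exercised without a message pump. The form wiring in the trailing comment is illustrative — hang your own SerialPort cleanup off the removal branch.

```
static class DeviceChange
{
    // Constants from the Windows SDK (winuser.h / dbt.h).
    public const int WM_DEVICECHANGE = 0x0219;
    public const int DBT_DEVICEARRIVAL = 0x8000;
    public const int DBT_DEVICEREMOVECOMPLETE = 0x8004;

    // Pure classification helper, testable without a window handle.
    public static string Classify(int msg, int wParam)
    {
        if (msg != WM_DEVICECHANGE) return "other";
        if (wParam == DBT_DEVICEARRIVAL) return "arrival";
        if (wParam == DBT_DEVICEREMOVECOMPLETE) return "removal";
        return "device-change";
    }
}

// In the form (sketch only):
// protected override void WndProc(ref Message m)
// {
//     if (DeviceChange.Classify(m.Msg, (int)m.WParam) == "removal")
//         CloseSerialPortGracefully(); // your cleanup, wrapped in try/catch
//     base.WndProc(ref m);
// }
```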
|
How to capture a serial port that disappears because the usb cable gets unplugged
|
[
"",
"c#",
"winforms",
"serial-port",
""
] |
I'm developing an object-oriented PHP website right now and am trying to determine the best way to abstract database functionality from the rest of the system. Right now, I've got a DB class that manages all the connections and queries that the system uses (it's pretty much an interface to MDB2). However, when using this system, I've realized that I've got a lot of SQL query strings showing up everywhere in my code. For instance, in my User class, I've got something like this:
```
function checkLogin($email,$password,$remember=false){
$password = $this->__encrypt($password);
$query = "SELECT uid FROM Users WHERE email=? AND pw=?";
$result = $this->db->q($query,array($email,$password));
if(sizeof($result) == 1){
$row = $result->fetchRow(MDB2_FETCHMODE_ASSOC);
$uid = $row['uid'];
}else{
return false;
}
/* Rest of the login script */
}
```
What I would like to do is find out the best technique for reducing the amount of inline SQL. I understand that one way to do this would be to write functions within User for each of the queries that User makes use of (something like the following), but that could lead to quite a few functions.
```
function checkLogin($email,$password,$remember=false){
$password = $this->__encrypt($password);
$uid = $this->do_verify_login_query($email,$password);
/* Rest of the login script */
}
function do_verify_login_query($email,$encpw){
$query = "SELECT uid FROM Users WHERE email=? AND pw=?";
$result = $this->db->q($query,array($email,$encpw));
if(sizeof($result) == 1){
$row = $result->fetchRow(MDB2_FETCHMODE_ASSOC);
return $row['uid'];
}else{
return false;
}
}
```
So...my question. What is the best technique for managing the large amount of queries that a typical database application would use? Would the way I described be the proper way of handling this situation? Or what about registering a list of queries within the DB class and associating with each a unique ID (such as USER\_CHECKLOGIN) that is passed into the DB's query function? This method could also help with security, as it would limit the queries that could be run to only those that are registered in this list, but it's one more thing to remember when writing all the class functions. Thoughts?
|
Having the SQL pulled out into separate functions is a decent start. Some other things you can do:
* Create separate classes for database access code. This will help make sure you don't have SQL functions scattered around in all of your PHP files.
* Load the SQL from external files. This completely separates your SQL code and your PHP code, making both more maintainable.
* Use stored procedures when you can. This removes the SQL from your PHP code altogether, and helps improve your database security by reducing the risk that external SQL will get executed.
|
You might want to look into implementing the [ActiveRecord Pattern](http://en.wikipedia.org/wiki/Active_record_pattern). Using a design pattern such as this provides some consistency in how you work with data from your tables. There can be some downsides to these sorts of approaches, mainly performance for certain types of queries but it can be worked around.
|
Separating code from DB functionality
|
[
"",
"php",
"database",
""
] |
The code looks like below:
```
namespace Test
{
public interface IMyClass
{
List<IMyClass> GetList();
}
public class MyClass : IMyClass
{
public List<IMyClass> GetList()
{
return new List<IMyClass>();
}
}
}
```
When I Run Code Analysis i get the following recommendation.
> Warning 3 CA1002 : Microsoft.Design : Change 'List' in 'IMyClass.GetList()' to use Collection, ReadOnlyCollection or KeyedCollection
How should I fix this and what is good practice here?
|
To answer the "why" part of the question as to why not [`List<T>`](https://learn.microsoft.com/en-us/dotnet/api/system.collections.generic.list-1): the reasons are future-proofing and API simplicity.
**Future-proofing**
[`List<T>`](https://learn.microsoft.com/en-us/dotnet/api/system.collections.generic.list-1) is not designed to be easily extensible by subclassing it; it is designed to be fast for internal implementations. You'll notice the methods on it are not virtual and so cannot be overridden, and there are no hooks into its [`Add`](https://learn.microsoft.com/en-us/dotnet/api/system.collections.generic.list-1.add)/[`Insert`](https://learn.microsoft.com/en-us/dotnet/api/system.collections.generic.list-1.insert)/[`Remove`](https://learn.microsoft.com/en-us/dotnet/api/system.collections.generic.list-1.remove) operations.
This means that if you need to alter the behavior of the collection in the future (e.g. to reject null objects that people try to add, or to perform additional work when this happens such as updating your class state) then you need to change the type of collection you return to one you can subclass, which will be a breaking interface change (of course changing the semantics of things like not allowing null may also be an interface change, but things like updating your internal class state would not be).
So by returning either a class that can be easily subclassed such as [`Collection<T>`](https://learn.microsoft.com/en-us/dotnet/api/system.collections.objectmodel.collection-1) or an interface such as [`IList<T>`](https://learn.microsoft.com/en-us/dotnet/api/system.collections.generic.ilist-1), [`ICollection<T>`](https://learn.microsoft.com/en-us/dotnet/api/system.collections.generic.icollection-1) or [`IEnumerable<T>`](https://learn.microsoft.com/en-us/dotnet/api/system.collections.generic.ienumerable-1) you can change your internal implementation to be a different collection type to meet your needs, without breaking the code of consumers because it can still be returned as the type they are expecting.
**API Simplicity**
[`List<T>`](https://learn.microsoft.com/en-us/dotnet/api/system.collections.generic.list-1) contains a lot of useful operations such as [`BinarySearch`](https://learn.microsoft.com/en-us/dotnet/api/system.collections.generic.list-1.binarysearch), [`Sort`](https://learn.microsoft.com/en-us/dotnet/api/system.collections.generic.list-1.sort) and so on. However if this is a collection you are exposing then it is likely that you control the semantics of the list, and not the consumers. So while your class internally may need these operations it is very unlikely that consumers of your class would want to (or even should) call them.
As such, by offering a simpler collection class or interface, you reduce the number of members that users of your API see, and make it easier for them to use.
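To make the future-proofing argument concrete, here is a sketch: declare the member as `Collection<T>` (or an interface such as `ICollection<T>`) and a policy can be added later through the protected virtual hooks without changing the public surface. The null-rejection rule here is just an illustrative policy:

```
using System;
using System.Collections.ObjectModel;

// Collection<T> exposes protected virtual hooks (unlike List<T>), so
// behavior can be added later without a breaking change for callers.
public class NonNullCollection<T> : Collection<T> where T : class
{
    protected override void InsertItem(int index, T item)
    {
        if (item == null) throw new ArgumentNullException("item");
        base.InsertItem(index, item);
    }

    protected override void SetItem(int index, T item)
    {
        if (item == null) throw new ArgumentNullException("item");
        base.SetItem(index, item);
    }
}
```

In the question's `IMyClass`, `GetList()` could declare `Collection<IMyClass>` (or `IList<IMyClass>`) and return an instance of a subclass like this one; callers compile against the declared type and never notice the swap.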
|
I would personally declare it to return an interface rather than a concrete collection. If you really want list access, use [`IList<T>`](http://msdn.microsoft.com/en-us/library/5y536ey6.aspx). Otherwise, consider [`ICollection<T>`](http://msdn.microsoft.com/en-us/library/92t2ye13.aspx) and [`IEnumerable<T>`](http://msdn.microsoft.com/en-us/library/9eekhta0.aspx).
|
Collection<T> versus List<T> what should you use on your interfaces?
|
[
"",
"c#",
".net",
"collections",
"code-analysis",
""
] |
Can source code examples be kept in a SQL database **while retaining all formatting** (tabs, newlines, etc.)? If so what data type would be used?
|
Yes, use a `TEXT` type (or `MEDIUMTEXT` or `LONGTEXT` - you get the idea)
|
A BLOB type (varbinary) would definitely work, although databases shouldn't mangle text that's stored as varchar either.
|
Can source code examples be kept in a SQL database while retaining all formatting? If so
|
[
"",
"sql",
"language-agnostic",
"database-design",
"code-snippets",
""
] |
I would like to use Python to script an application that advertises itself as providing an OLE component. How should I get started?
I don't yet know what methods I need to call on the COMponents I will be accessing. Should I use win32com to load those components, and then start pressing 'tab' in IPython?
|
"[Python and COM](http://www.boddie.org.uk/python/COM.html)" contains an example. OLE is related to COM and ActiveX so you should look for those terms.
"[Python Programming on Win32](http://oreilly.com/catalog/9781565926219/)" is a useful book. There is also a "[Python Win32](http://mail.python.org/pipermail/python-win32/)" mailing list.
|
You need the [win32com](http://python.net/crew/mhammond/win32/Downloads.html) package. Some examples:
```
from win32com.client.dynamic import Dispatch
# Excel
excel = Dispatch('Excel.Application')
# Vim
vim = Dispatch('Vim.Application')
```
And then call whatever you like on them.
|
How to script an OLE component using Python
|
[
"",
"python",
"windows",
"scripting",
"activex",
"ole",
""
] |
Is there a way to write log4j logging events to a log file that is also being written to by other applications. The other applications could be non-java applications. What are the drawbacks? Locking issues? Formatting?
|
Log4j has a SocketAppender, which will send events to a service, which you can implement yourself or use the simple implementation bundled with Log4j.
It also supports syslogd and the Windows event log, which may be useful in trying to unify your log output with events from non-Java applications.
If performance is an issue at all, you want a single service writing the log file, rather than trying to coordinate a consistent locking strategy among diverse logging applications.
|
Your best bet might be to let each application log separately, then put a scheduled job in place to 'zipper' the files together based on time. If you need really up-to-date access to the full log, you could have this run every hour.
|
Log4j Logging to a Shared Log File
|
[
"",
"java",
"log4j",
""
] |
OK, I have just been reading and trying for the last hour to import a CSV file from access into MySQL, but I can not get it to do it correctly, no matter what I try.
My table is like so:
```
+-----------------+-------------
| Field | Type
+-----------------+-------------
| ARTICLE_NO | varchar(20)
| ARTICLE_NAME | varchar(100)
| SUBTITLE | varchar(20)
| CURRENT_BID | varchar(20)
| START_PRICE | varchar(20)
| BID_COUNT | varchar(20)
| QUANT_TOTAL | varchar(20)
| QUANT_SOLD | varchar(20)
| STARTS | datetime
| ENDS | datetime
| ORIGIN_END | datetime
| SELLER_ID | varchar(20)
| BEST_BIDDER_ID | varchar(20)
| FINISHED | varchar(20)
| WATCH | varchar(20)
| BUYITNOW_PRICE | varchar(20)
| PIC_URL | varchar(20)
| PRIVATE_AUCTION | varchar(20)
| AUCTION_TYPE | varchar(20)
| INSERT_DATE | datetime
| UPDATE_DATE | datetime
| CAT_1_ID | varchar(20)
| CAT_2_ID | varchar(20)
| ARTICLE_DESC | varchar(20)
| DESC_TEXTONLY | varchar(20)
| COUNTRYCODE | varchar(20)
| LOCATION | varchar(20)
| CONDITIONS | varchar(20)
| REVISED | varchar(20)
| PAYPAL_ACCEPT | tinyint(4)
| PRE_TERMINATED | varchar(20)
| SHIPPING_TO | varchar(20)
| FEE_INSERTION | varchar(20)
| FEE_FINAL | varchar(20)
| FEE_LISTING | varchar(20)
| PIC_XXL | tinyint(4)
| PIC_DIASHOW | tinyint(4)
| PIC_COUNT | varchar(20)
| ITEM_SITE_ID | varchar(20)
```
Which should be fine, and my data is currently semicolon delimited, an example of a row from my csv file is thus:
```
"110268889894";"ORIGINAL 2008 ED HARDY GÜRTEL* MYSTERY LOVE * M *BLACK";"";0,00 €;0,00 €;0;1;0;8.7.2008 17:18:37;5.11.2008 16:23:37;6.10.2008 17:23:37;29;0;0;0;125,00 €;"";0;2;6.10.2008 16:21:51;6.10.2008 14:19:08;80578;0;;0;77;"";0;0;1;0;-1;0,00 €;0,00 €;0,00 €;0;0;0;77
"110293328957";"Orig. Ed Hardy Shirt - Tank Top - Gr. XS- OVP/NEU";"";25,05 €;0,00 €;7;1;0;27.9.2008 06:26:27;6.10.2008 18:26:21;6.10.2008 18:26:21;49;0;0;0;0,00 €;"";0;1;6.10.2008 16:21:56;6.10.2008 16:33:20;31058;0;;0;77;"";1;0;0;0;-1;0,00 €;0,00 €;0,00 €;0;0;0;77
```
I am using a simple PHP page to output the table data, and it is completely wrong. The problem is not with the PHP page as a simple table with just text displays perfectly. I am wondering if I have selected incorrect field types or the columns do not match up, but I see no reason this would be the case. Could something be happening because the data is trying to be displayed as HTML?
I have tried to keep this question as specific as possible, and am not asking for people to do it for me as others have suggested, but I honestly can not see where the problem is.
edit: When trying to display the data through PHP, the page is currently completely black, with some table cells having borders and others not; other times, when trying different delimiters, the fields do not match the columns. The STARTS field has never displayed correctly.
I am loading the file from a CSV file with this command:
```
LOAD DATA LOCAL INFILE 'C:/Dokumente und Einstellungen/hom/Desktop/tblAuction.txt' INTO TABLE Auctions FIELDS TERMINATED BY '\"' LINES TERMINATED BY '\n';
```
It looks fine in the DB as far as I have been able to tell, apart from the starts field, which is not the main problem at the moment.
edit2: I have serialized the result, and this is what was output to the screen:
```
i:0;
```
I have absolutely no idea what that means.
edit3: I tried updating the TERMINATED BY delimiter, and after changing \" to ; the problem remains: data is still not being put into the correct columns, etc. I don't understand what I have missed.
|
The error can be caused by corrupt data in your DB, by the query to retrieve it from the DB, or in the way you output it. You need to narrow it down to one of those causes.
1. Have a direct look at the table you are selecting from. I suggest [phpMyAdmin](http://www.phpmyadmin.net/) for this.
2. Directly print the result of your query to the screen. A very crude but workable way could be to [serialise](http://fr3.php.net/serialize) it, e.g. `echo serialize($result);`
3. If you can exclude 1 and 2 as causes of your error, the problem must lie in the processing of the query.
Once you have identified *where* the cause is, it gets much easier to nail down *what* the cause is ;-)
As Ted Elliot mentioned: *Your fields are terminated by ";" not "\"". Change `FIELDS TERMINATED BY '\"'`
to `FIELDS TERMINATED BY ';'`*
Also, the correct format for importing data into `datetime` fields in MySQL is the ISO format: YYYY-MM-DD HH:mm:ss (e.g. 2007-07-31 00:00:00).
|
Your fields are terminated by ";" not "\"". Change
```
FIELDS TERMINATED BY '\"'
```
to
```
FIELDS TERMINATED BY ';'
```
You could add this as well:
```
OPTIONALLY ENCLOSED BY '"'
```
which I think is what you were trying to do with the TERMINATED BY clause.
|
How to import CSV in mysql?
|
[
"",
"php",
"mysql",
"html",
"csv",
""
] |
What is the best way to implement connection pooling in hsqldb, without compromising on the speed?
|
Hibernate gets connections from a `DataSource`, uses them and closes them. You need a connection pool or it will be very inefficient, consuming a lot of resources both on your app and on the DBMS, regardless of the database server you use.
You should try out *commons-dbcp* from Apache-Jakarta, it's very efficient and really simple to set up. It depends on *commons-pool*.
You just define a `BasicDataSource` with DBCP and it will manage the connections from whatever JDBC driver you tell it to use. It has connection validation and lots of other stuff.
Or, if you're writing a web app, configure a connection pool on the container you will be using and use that, instead of defining your own pool.
|
You are comparing apples and oranges:
1. If you want orm compare the performance of different orm tools against the same db.
2. If you want connection pooling compare different connection pooling libraries against the same db.
Performing ORM incurs extra effort, so it will never be as fast as direct JDBC access. That said, Hibernate goes to great lengths (and very successfully) to minimise this additional overhead. With ORM you are trading off significantly increased development productivity against a relatively small drop in performance.
Connection pooling is an orthogonal problem to orm. Most obviously, hibernate allows you to select your own connection pooling infrastructure.
Also, be aware that in practise there is often a fairly tight coupling between connection pooling and transaction mgmt. For example, a typical J2EE application will leave connection pooling to the container (via the JDBC Datasource API) and rely on declarative transactions. In this case connections and transactions are managed (approximately) together.
If you aren't in a J2EE container and you don't need orm I would simply compare C3P0, commons-pool, etc.
|
Connection pooling in hsqldb
|
[
"",
"java",
"database",
"connection-pooling",
"hsqldb",
""
] |
My website has been giving me intermittent errors when trying to perform *any* Ajax activities. The message I get is
```
Sys.WebForms.PageRequestManagerParserErrorException: The message received from the server could not be parsed. Common causes for this error are when the response is modified by calls to Response.Write(), response filters, HttpModules, or server trace is enabled.
Details: Error parsing near '
<!DOCTYPE html P'.
```
So it's obviously some sort of server timeout, or the server is just returning mangled garbage. This generally, though unfortunately not always, happens.
|
There is an excellent blog entry by Eilon Lipton. It contains of lot of tips on how to avoid this error:
**[Sys.WebForms.PageRequestManagerParserErrorException - what it is and how to avoid it](http://weblogs.asp.net/leftslipper/archive/2007/02/26/sys-webforms-pagerequestmanagerparsererrorexception-what-it-is-and-how-to-avoid-it.aspx)**
Read the comments too. There is a comment of somebody with the same problem: "**I solved it changing server idle time of my app pool on IIS**. It was only 5, so I incremented it and now works."
*"The UpdatePanel control uses asynchronous postbacks to control which parts of the page get rendered. It does this using a whole bunch of JavaScript on the client and a whole bunch of C# on the server.*
*Asynchronous postbacks are exactly the same as regular postbacks except for one important thing: the rendering. Asynchronous postbacks go through the same life cycles events as regular pages (this is a question I get asked often).*
*Only at the render phase do things get different. We capture the rendering of only the UpdatePanels that we care about and send it down to the client using a special format. In addition, we send out some other pieces of information, such as the page title, hidden form values, the form action URL, and lists of scripts."*
**Most common reasons for that error:**
1. Calls to Response.Write():
2. Response filters
3. HttpModules
4. Server trace is enabled
5. Calls to Server.Transfer()
|
Probably there is an error occurring on postback. In this case, you can view the details of the error by adding a PostBackTrigger to your UpdatePanel and referencing the button which causes the problem:
```
<asp:updatepanel ID="updatepanel1" runat="server">
<Triggers>
<asp:PostBackTrigger ControlID="button1" />
</Triggers>
<ContentTemplate>
</ContentTemplate>
</asp:updatepanel>
```
|
ASP.NET Ajax Error: Sys.WebForms.PageRequestManagerParserErrorException
|
[
"",
"javascript",
"asp.net",
".net",
"exception",
"ajax.net",
""
] |
I need to highlight, case insensitively, given keywords in a JavaScript string.
For example:
* `highlight("foobar Foo bar FOO", "foo")` should return `"<b>foo</b>bar <b>Foo</b> bar <b>FOO</b>"`
I need the code to work for any keyword, and therefore using a hardcoded regular expression like `/foo/i` is not a sufficient solution.
What is the easiest way to do this?
(This is an instance of a more general problem detailed in the title, but I feel that it's best tackled with a concrete, useful example.)
|
You *can* use regular expressions if you prepare the search string. In PHP e.g. there is a function preg\_quote, which replaces all regex-chars in a string with their escaped versions.
Here is such a function for javascript ([source](https://locutus.io/php/pcre/preg_quote/)):
```
function preg_quote (str, delimiter) {
// discuss at: https://locutus.io/php/preg_quote/
// original by: booeyOH
// improved by: Ates Goral (https://magnetiq.com)
// improved by: Kevin van Zonneveld (https://kvz.io)
// improved by: Brett Zamir (https://brett-zamir.me)
// bugfixed by: Onno Marsman (https://twitter.com/onnomarsman)
// example 1: preg_quote("$40")
// returns 1: '\\$40'
// example 2: preg_quote("*RRRING* Hello?")
// returns 2: '\\*RRRING\\* Hello\\?'
// example 3: preg_quote("\\.+*?[^]$(){}=!<>|:")
// returns 3: '\\\\\\.\\+\\*\\?\\[\\^\\]\\$\\(\\)\\{\\}\\=\\!\\<\\>\\|\\:'
return (str + '')
.replace(new RegExp('[.\\\\+*?\\[\\^\\]$(){}=!<>|:\\' + (delimiter || '') + '-]', 'g'), '\\$&')
}
```
So you could do the following:
```
function highlight(str, search) {
return str.replace(new RegExp("(" + preg_quote(search) + ")", 'gi'), "<b>$1</b>");
}
```
|
```
function highlightWords( line, word )
{
var regex = new RegExp( '(' + word + ')', 'gi' );
return line.replace( regex, "<b>$1</b>" );
}
```
|
Case insensitive string replacement in JavaScript?
|
[
"",
"javascript",
"string",
"replace",
"case-insensitive",
""
] |
It's rare that I hear of someone using the [Inversion of Control (IoC)](http://martinfowler.com/articles/injection.html) principle with .NET. I have some friends who work with Java and use IoC much more, with Spring and PicoContainer.
I understand the principle of removing dependencies from your code... but I doubt that it's so much better.
**Why do .NET programmers not use (or use less of) these frameworks? If you do, have you really found a positive effect in the long term?**
|
Lots of people use IOC in .NET, and there are several frameworks available to assist with using IoC. You may see it less in the WinForms side of things, because it's harder to just let the container wire everything together when you are designing forms in Visual Studio, but I can say that for server-side .NET applications, where I work at least, IoC is used very successfully.
Why use it in .NET? For the same reason you use it everywhere else. The 2 biggest things I like are:
* Designing for IoC tends to enforce good coding practice - designing to interfaces, low coupling, high cohesion. This also leads to classes that are very easy to unit-test.
* System configuration can often be changed without recompiling.
Some other posts discussing the different IoC/DI frameworks available for .NET:
* [Which C#/.net Dependency Injection frameworks are worth looking into?](https://stackoverflow.com/questions/21288/which-cnet-dependency-injection-frameworks-are-worth-looking-into)
* [Which Dependency Injection Tool Should I Use?](https://stackoverflow.com/questions/148908/which-dependency-injection-tool-should-i-use)
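The unit-testing benefit mentioned above doesn't even require a container; constructor injection alone buys most of it. A minimal hand-rolled sketch (all the type names here are hypothetical):

```
using System;
using System.Collections.Generic;

public interface IMessageSender
{
    void Send(string to, string body);
}

// The service depends only on the interface: production code injects an
// SMTP-backed implementation, tests inject a fake.
public class OrderNotifier
{
    private readonly IMessageSender _sender;

    public OrderNotifier(IMessageSender sender)
    {
        if (sender == null) throw new ArgumentNullException("sender");
        _sender = sender;
    }

    public void NotifyShipped(string customerEmail, int orderId)
    {
        _sender.Send(customerEmail, "Order " + orderId + " has shipped.");
    }
}

// A fake sender that just records messages - this is the testability win.
public class RecordingSender : IMessageSender
{
    public readonly List<string> Sent = new List<string>();
    public void Send(string to, string body) { Sent.Add(to + ": " + body); }
}
```

A container like StructureMap or Spring.NET simply automates the wiring of these constructor arguments; the design discipline is the same either way.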
|
I use [StructureMap](http://www.google.com/search?q=structuremap&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a) for dependency injection and have only recently started using it with [iBATIS.NET](http://ibatis.apache.org/dotnetdownloads.cgi) to inject our domain object mappers at runtime (and **not** through an XML config file, no thanks!).
I've seen immediate benefits. Creating interfaces for all our mappers (such as `IPersonMapper`) and then adding [Moq](http://code.google.com/p/moq/) allows me to write some pretty great database-free unit tests quickly and easily.
Previously (.NET 1.0) I wrote my own plugin system, mainly to learn about reflection. Since that time I've implemented some sort of IoC in my projects. It's only recently that I started using IoC to make unit tests so much less painful to write. I couldn't imagine doing it any other way at this point.
|
Inversion of Control with .net
|
[
"",
"c#",
".net",
"dependency-injection",
"inversion-of-control",
""
] |
This is a C# console application. I have a function that does something like this:
```
static void foo()
{
Application powerpointApp;
Presentation presentation = null;
powerpointApp = new Microsoft.Office.Interop.PowerPoint.ApplicationClass();
}
```
That's all it does. When it is called there is a fifteen second delay before the function gets hit. I added something like this:
```
static void MyAssemblyLoadEventHandler(object sender, AssemblyLoadEventArgs args)
{
Console.WriteLine(DateTime.Now.ToString() + " ASSEMBLY LOADED: " + args.LoadedAssembly.FullName);
Console.WriteLine();
}
```
This gets fired telling me that my interop assemblies have been loaded about 10 milliseconds before my foo function gets hit. What can I do about this? The program needs to call this function (and eventually do something else) once and then exit so I need for these assemblies to be cached or something. Ideas?
|
It could be the certificate revocation list - the time-out on this is 15 seconds.
Is there anything in the event log? Can you check if any network connections are happening during the time-out?
[I blogged some details about certificate revocation delay](http://blogs.conchango.com/anthonysteele/archive/2007/02/07/Delay-when-starting-up-a-web-service.aspx) a while ago. Follow the link, I won't cut and paste it here.
|
15 seconds sounds like a timeout to me. Are you signing your assemblies? We had a problem where the framework wants to check the certificate revocation list when loading, but fails after 15 secs.
HTH
Tim
|
.Net - interop assemblies taking 15 seconds to load when being referenced in a function
|
[
"",
"c#",
".net",
"com-interop",
""
] |
Can anyone explain why the following code won't compile? At least on g++ 4.2.4.
And, more interestingly, why it *will* compile when MEMBER is cast to an int?
```
#include <vector>
class Foo {
public:
static const int MEMBER = 1;
};
int main(){
    std::vector<int> v;
v.push_back( Foo::MEMBER ); // undefined reference to `Foo::MEMBER'
v.push_back( (int) Foo::MEMBER ); // OK
return 0;
}
```
|
You need to actually define the static member somewhere (after the class definition). Try this:
```
class Foo { /* ... */ };
const int Foo::MEMBER;
int main() { /* ... */ }
```
That should get rid of the undefined reference.
|
The problem comes because of an interesting clash of new C++ features and what you're trying to do. First, let's take a look at the `push_back` signature:
```
void push_back(const T&)
```
It's expecting a reference to an object of type `T`. Under the old system of initialization, such a member exists. For example, the following code compiles just fine:
```
#include <vector>
class Foo {
public:
static const int MEMBER;
};
const int Foo::MEMBER = 1;
int main(){
std::vector<int> v;
    v.push_back( Foo::MEMBER );       // OK: Foo::MEMBER is a defined object here
    v.push_back( (int) Foo::MEMBER ); // OK
return 0;
}
```
This is because there is an actual object somewhere that has that value stored in it. If, however, you switch to the new method of specifying static const members, like you have above, `Foo::MEMBER` is no longer an object. It is a constant, somewhat akin to:
```
#define MEMBER 1
```
But without the headaches of a preprocessor macro (and with type safety). That means that the vector, which is expecting a reference, can't get one.
|
Undefined reference to static class member
|
[
"",
"c++",
"g++",
"linker-errors",
"static-members",
""
] |
I'm trying to read the contents of a text file, in this case a list of computer names (Computer1, computer2 etc,) and I thought that StreamReader would be what you would use but when I do the following:
```
StreamReader arrComputer = new StreamReader(FileDialog.filename);
```
I got this compiler error:
```
The type or namespace name 'StreamReader' could not be found (are you missing a using directive or an assembly reference?)
```
I'm very new to C# so I'm sure I'm making a newbie mistake.
|
You need to import the `System.IO` namespace. Put this at the top of your .cs file:
```
using System.IO;
```
Either that, or explicitly qualify the type name:
```
System.IO.StreamReader arrComputer = new System.IO.StreamReader(FileDialog.filename);
```
|
You'll need:
```
using System.IO;
```
At the top of the .cs file.
If you're reading text content I recommend you use a TextReader which is bizarrely a base class of StreamReader.
try:
```
using(TextReader reader = new StreamReader(/* your args */))
{
}
```
The using block just makes sure it's disposed of properly.
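Putting the pieces together for the original task (one computer name per line) — a sketch; the parsing takes any `TextReader`, so it works with a `StreamReader` over the dialog's file or a `StringReader` in tests:

```
using System.Collections.Generic;
using System.IO;

static class ComputerList
{
    // Reads one computer name per line, skipping blanks and trimming whitespace.
    public static List<string> Read(TextReader reader)
    {
        var names = new List<string>();
        string line;
        while ((line = reader.ReadLine()) != null)
        {
            line = line.Trim();
            if (line.Length > 0) names.Add(line);
        }
        return names;
    }
}

// Usage with a file (disposed by the using block); the path placeholder
// would come from your FileDialog's FileName property:
// using (TextReader reader = new StreamReader(pathFromFileDialog))
// {
//     List<string> computers = ComputerList.Read(reader);
// }
```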
|
How to use StreamReader in C# (newbie)
|
[
"",
"c#",
".net",
"stream",
""
] |
I have an SQL database with multiple tables, and I am working on creating a searching feature. Other than having multiple queries for the different tables, is there a different way to go about said searching function?
---
I should probably add that a lot of my content is database driven to make upkeep easier. Lucene will not work for this, correct?
|
Different approaches to consider:
1) Multiple queries pre-baked, like you described.
2) Dynamic SQL that you put together on the fly based on user-entered criteria.
3) If text is involved, based on SQL Server full text search or Lucene.
In my open source app BugTracker.NET, I do both 2 and 3 (using Lucene.NET).
I documented how I use Lucene.NET here:
<http://www.ifdefined.com/blog/post/2009/02/Full-Text-Search-in-ASPNET-using-LuceneNET.aspx>
|
Since you have tagged the question with ASP.NET, I suppose you want to search your web pages. In that case you can use Indexing Server to easily perform free-text searches that cover the generated HTML and any keywords you have set up.
As Corey Trager suggested, using Lucene.NET is also an option. It has a good reputation of being fast and quite easy to use.
|
Best way to create a search function ASP.NET and SQL server
|
[
"",
"asp.net",
"sql",
"search",
""
] |
I made a class that derives from Component:
```
public class MyComponent: System.ComponentModel.Component
{
}
```
I saw that Visual Studio put this code in for me:
```
protected override void Dispose(bool disposing)
{
    try
    {
        if (disposing && (components != null))
        {
            components.Dispose();
        }
    }
    catch
    {
        throw;
    }
    finally
    {
        base.Dispose(disposing);
    }
}
```
`MyComponent` has a member that is a `DataSet`, and maybe there are some other members that implement `IDisposable`. What, if anything, do I need to modify in the `Dispose()` method to make sure things are cleaned up properly? Thanks for helping.
|
Change:
```
if (disposing && (components != null))
{
    components.Dispose();
}
```
to be:
```
if (disposing && (components != null))
{
    _dataset.Dispose();
    components.Dispose();
}
```
|
Check any disposable member objects and dispose them if they are not null.
|
Implementing Dispose() with class derived from System.ComponentModel.Component
|
[
"",
"c#",
"dispose",
""
] |
I need a basic pdf generator that'll let me toss some images and text into a pdf file. The ability to have some basic drawing commands (lines and so forth) would also be a plus.
I did read through [this question](https://stackoverflow.com/questions/177799/what-is-a-good-pdf-report-generator-tool-for-python), but I really don't need a report generator and most of the responses there seemed like real overkill for what I'm trying to do. (I don't need templates or LaTeX-grade layout control.)
|
For one of my projects, I have tested and/or implemented probably six or seven different methods of going from an image to a PDF in the last six months. Ultimately I ended up coming back to [ReportLab](http://www.reportlab.org/downloads.html) (which I had initially avoided for reasons similar to those you described) because all of the others had glaring limitations or outright omissions (such as the inability to set document metadata).
ReportLab isn't as difficult to handle as it appears at first glance, and it may save you a lot of headache-laden refactoring later on. I would strongly suggest you go ahead and use it, so that if you ever want to do more you will have the ability to, rather than doing what I did and bouncing back and forth between a number of different utilities, libraries, and formats.
**EDIT:**
It is also worth mentioning that you can bypass the Platypus layout system that comes with ReportLab if all you want to do is put a bit of text and imagery on a page.
|
I think going through LaTeX is the easiest way, and not overkill at all. Generating a working PDF file directly is quite a difficult task, whereas generating a TeX source is much easier. Any other typesetting route would probably work as well, such as going through reStructuredText or troff.
|
Can anyone recommend a decent FOSS PDF generator for Python?
|
[
"",
"python",
"pdf-generation",
""
] |
I see many different Java terms floating around. I need to install the JDK 1.6. It was my understanding that Java 6 == Java 1.6. However, when I install Java SE 6, I get a JVM that reports as version 11.0! Who can solve the madness?
|
When you type "java -version", you see three version numbers - the java version (on mine, that's "`1.6.0_07`"), the Java SE Runtime Environment version ("build `1.6.0_07-b06`"), and the HotSpot version (on mine, that's "`build 10.0-b23, mixed mode`"). I suspect the "11.0" you are seeing is the HotSpot version.
Update: HotSpot is (or used to be, now they seem to use it to mean the whole VM) the just-in-time compiler that is built in to the Java Virtual Machine. God only knows why Sun gives it a separate version number.
|
* JDK - Java Development Kit
* JRE - Java Runtime Environment
* Java SE - Java Standard Edition
SE defines a set of capabilities and functionalities; there are more complex editions (Enterprise Edition – EE) and simpler ones (Micro Edition – ME – for mobile environments).
The JDK includes the compiler and other tools needed to develop Java applications; JRE does not. The JDK also includes a JRE. So, to run a Java application someone else provides, you need a JRE; to develop a Java application, you need a JDK.
*Edited*:
As Chris Marasti-Georg pointed out in a comment, you can find out lots of information on Sun's [Java](http://java.sun.com/) website, and in particular from the [Java SE](http://java.sun.com/javase/downloads/index.jsp) section (2nd option, Java SE Development Kit (JDK) 6 Update 10).
---
*Edited 2011-04-06:*
The world turns, and Java is now managed by Oracle, which bought Sun. Later this year, the `sun.com` domain is supposed to go dark. The new page (based on a redirect) is this [Java](http://www.oracle.com/technetwork/java/index.html) page at the Oracle Tech Network. (See also [java.com](http://java.com/).)
---
*Edited 2013-01-11:* And the world keeps on turning (2012-12-21 notwithstanding), and lo and behold, JRE 6 is about to reach its end of support. [Oracle](http://www.oracle.com/technetwork/java/eol-135779.html) says no more public updates to Java 6 after February 2013.
Within a given version of Java, this answer remains valid. JDK is the Java Development Kit, JRE is the Java Runtime Environment, Java SE is the standard edition, and so on. But the version 6 (1.6) is becoming antiquated.
*Edited 2015-04-29:* And with another couple of revolutions around the sun, the time has come for the end of support for Java SE 7, too. In April 2015, Oracle [affirmed](http://www.oracle.com/technetwork/java/eol-135779.html) that it was no longer providing public updates to Java SE 7. The tentative end of public updates for Java SE 8 is March 2017, but that end date is subject to change (later, not earlier).
|
Java SE 6 vs. JRE 1.6 vs. JDK 1.6 - What do these mean?
|
[
"",
"java",
""
] |
I have an archiving process that basically deletes archived records after a set number of days. Is it better to write a scheduled SQL job or a windows service to accomplish the deletion? The database is mssql2005.
## Update:
To speak to some of the answers below, this question is regarding an in house application and not a distributed product.
|
It depends on what you want to accomplish.
Do you want to store the deleted archives somewhere? Log the changes? An SQL job should perform better since it runs directly in the database, but it is easier to give a service access to resources outside the database. So it depends on what you want to do...
|
I would think a scheduled SQL job would be a safer solution since if the database is migrated to a new machine, someone doing the migration might forget that there is a windows service involved and forget to start/install it on the new server.
|
Windows Service or SQL Job?
|
[
"",
"sql",
"windows",
"sql-server-2005",
"windows-services",
""
] |
I am developing an application for PocketPC. When the application starts the custom function SetScreenOrientation(270) is called which rotates the screen. When the application closes the function SetScreenOrientation(0) is called which restores the screen orientation.
This way the screen orientation isn't restored if the user minimizes the application and this is not acceptable.
Does anyone know where (in which event handlers) should SetScreenOrientation(int angle) be called to set the screen orientation on application start, restore orientation on minimize, set the orientation on maximize and restore the orientation on close?
Actually I don't know which event handler handles the Minimize and Maximize event.
|
The correct message is [WM\_SIZE](http://msdn.microsoft.com/en-us/library/aa453907.aspx), but Daemin's answer points to the wrong WM\_SIZE help topic. Check the wParam. Be careful as your window may be maximized but hidden.
|
Going from my Windows CE experience you should handle either the [WM\_SIZE](http://msdn.microsoft.com/en-us/library/ms915712.aspx) or [WM\_WINDOWPOSCHANGED](http://msdn.microsoft.com/en-us/library/ms942858.aspx) messages. If you're working on PocketPC I would suggest you take a look at the WM\_WINDOWPOSCHANGED message first because I'm not sure the WM\_SIZE has the right parameters that you need.
From the WM\_WINDOWPOSCHANGED message's [WINDOWPOS](http://msdn.microsoft.com/en-us/library/ms915525.aspx) structure take a look at the flags member, specifically SWP\_SHOWWINDOW and SWP\_HIDEWINDOW.
---
The specific version of the messages that you need to look at vary with what operating system you're using. The Pocket PC OS is built on Windows CE 3.0 (and lower), while Windows Mobile is now built on Windows CE 5.0 (even Windows Mobile 6), but was also built on Windows CE 4. [(Source)](http://www.pocketpcfaq.com/wce/versions.htm)
So just look under the relevant section in MSDN for the OS that you're writing for.
|
Event Handler for Minimize and Maximize Window
|
[
"",
"c++",
"windows-mobile",
"pocketpc",
""
] |
When you use the POISSON function in Excel (or in OpenOffice Calc), it takes two arguments:
* an integer
* an 'average' number
and returns a float.
In Python (I tried RandomArray and NumPy) it returns an array of random poisson numbers.
What I really want is the percentage that this event will occur (it is a constant number, and the array contains different numbers every time - so is it an average?).
for example:
```
print poisson(2.6,6)
```
returns `[1 3 3 0 1 3]` (and every time I run it, it's different).
The number I get from Calc/Excel is 3.19 (`POISSON(6,2.6,0)*100`).
Am I using the python's poisson wrong (no pun!) or am I missing something?
|
It is easy to do by hand, but you can overflow doing it that way. You can do the exponent and factorial in a loop to avoid the overflow:
```
import math

def poisson_probability(actual, mean):
    # naive: math.exp(-mean) * mean**actual / factorial(actual)
    # iterative, to keep the components from getting too large or small:
    p = math.exp(-mean)
    for i in range(actual):  # xrange in Python 2
        p *= mean
        p /= i + 1
    return p
```
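For a quick sanity check against the spreadsheet value from the question (`POISSON(6, 2.6, 0)` ≈ 0.0319), here is a self-contained Python 3 version of the function above:

```python
import math

def poisson_probability(actual, mean):
    # P(X = actual) = exp(-mean) * mean**actual / actual!
    # computed iteratively to keep intermediate values small
    p = math.exp(-mean)
    for i in range(actual):  # xrange in Python 2
        p *= mean
        p /= i + 1
    return p

# matches Excel's POISSON(6, 2.6, 0), about 3.19%
print(round(poisson_probability(6, 2.6), 4))  # 0.0319
```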
|
`scipy` has what you want
```
>>> scipy.stats.distributions
<module 'scipy.stats.distributions' from '/home/coventry/lib/python2.5/site-packages/scipy/stats/distributions.pyc'>
>>> scipy.stats.distributions.poisson.pmf(6, 2.6)
array(0.031867055625524499)
```
It's worth noting that it's pretty easy to calculate by hand, [too](http://www.scipy.org/doc/api_docs/SciPy.stats.distributions.html#poisson).
|
Calculate poisson probability percentage
|
[
"",
"python",
"statistics",
"poisson",
""
] |
Is there really that much of a difference between the performance of `Vector` and `ArrayList`? Is it good practice to use ArrayLists at all times when thread safety isn't an issue?
|
Vector dates back to the pre-Collections API days and has since been retrofitted to be a part of it. From what I've read, the reason it is not deprecated is that the core API depends on it.
ArrayList was written from scratch as a part of the Collections API and as such should be used unless you need to support Java versions down to 1.2.
If you need a thread-safe ArrayList, you can use the static factory method `Collections.synchronizedList(new ArrayList<type>);` to generate your list.
|
If thread safety is not an issue, `ArrayList` will be faster as it does not have to synchronize. That said, you should always declare your variable as a `List` so that the implementation can be changed later as needed.
I prefer to handle my synchronization explicitly because a lot of operations require multiple calls. For example:
```
if (!myList.isEmpty()) {
    myList.get(0);
}
```
should be:
```
synchronized (myList) {
    if (!myList.isEmpty()) {
        myList.get(0);
    }
}
```
|
ArrayList vs. Vectors in Java if thread safety isn't a concern
|
[
"",
"java",
"performance",
"collections",
""
] |
I'm setting up a new server and want to support UTF-8 fully in my web application. I have tried this in the past on existing servers and always seem to end up having to fall back to ISO-8859-1.
Where exactly do I need to set the encoding/charsets? I'm aware that I need to configure Apache, MySQL, and PHP to do this — is there some standard checklist I can follow, or perhaps troubleshoot where the mismatches occur?
This is for a new Linux server, running MySQL 5, PHP 5, and Apache 2.
|
**Data Storage**:
* Specify the `utf8mb4` character set on all tables and text columns in your database. This makes MySQL physically store and retrieve values encoded natively in UTF-8. Note that MySQL will implicitly use `utf8mb4` encoding if a `utf8mb4_*` collation is specified (without any explicit character set).
* In older versions of MySQL (< 5.5.3), you'll unfortunately be forced to use simply `utf8`, which only supports a subset of Unicode characters. I wish I were kidding.
**Data Access**:
* In your application code (e.g. PHP), in whatever DB access method you use, you'll need to set the connection charset to `utf8mb4`. This way, MySQL does no conversion from its native UTF-8 when it hands data off to your application and vice versa.
* Some drivers provide their own mechanism for configuring the connection character set, which both updates its own internal state and informs MySQL of the encoding to be used on the connection—this is usually the preferred approach. In PHP:
+ If you're using the [PDO](http://www.php.net/manual/en/book.pdo.php) abstraction layer with PHP ≥ 5.3.6, you can specify `charset` in the [DSN](http://php.net/manual/en/ref.pdo-mysql.connection.php):
```
$dbh = new PDO('mysql:charset=utf8mb4');
```
+ If you're using [mysqli](http://www.php.net/manual/en/book.mysqli.php), you can call [`set_charset()`](http://php.net/manual/en/mysqli.set-charset.php):
```
$mysqli->set_charset('utf8mb4'); // object oriented style
mysqli_set_charset($link, 'utf8mb4'); // procedural style
```
+ If you're stuck with plain [mysql](http://php.net/manual/en/book.mysql.php) but happen to be running PHP ≥ 5.2.3, you can call [`mysql_set_charset`](http://php.net/manual/en/function.mysql-set-charset.php).
* If the driver does not provide its own mechanism for setting the connection character set, you may have to issue a query to tell MySQL how your application expects data on the connection to be encoded: [`SET NAMES 'utf8mb4'`](http://dev.mysql.com/doc/en/charset-connection.html).
* The same consideration regarding `utf8mb4`/`utf8` applies as above.
**Output**:
* UTF-8 should be set in the HTTP header, such as `Content-Type: text/html; charset=utf-8`. You can achieve that either by setting [`default_charset`](http://www.php.net/manual/en/ini.core.php#ini.default-charset) in php.ini (preferred), or manually using `header()` function.
* If your application transmits text to other systems, they will also need to be informed of the character encoding. With web applications, the browser must be informed of the encoding in which data is sent (through HTTP response headers or [HTML metadata](https://stackoverflow.com/q/4696499)).
* When encoding the output using `json_encode()`, add `JSON_UNESCAPED_UNICODE` as a second parameter.
**Input**:
* Browsers will submit data in the character set specified for the document, hence nothing particular has to be done on the input.
* In case you have doubts about request encoding (in case it could be tampered with), you may verify every received string as being valid UTF-8 before you try to store it or use it anywhere. PHP's [`mb_check_encoding()`](http://php.net/manual/en/function.mb-check-encoding.php) does the trick, but you have to use it religiously. There's really no way around this, as malicious clients can submit data in whatever encoding they want, and I haven't found a trick to get PHP to do this for you reliably.
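For illustration (sketched in Python rather than PHP), the check that `mb_check_encoding()` performs — rejecting malformed byte sequences — amounts to attempting a strict UTF-8 decode:

```python
def is_valid_utf8(data: bytes) -> bool:
    # a strict decode raises UnicodeDecodeError on malformed sequences
    try:
        data.decode('utf-8')
        return True
    except UnicodeDecodeError:
        return False

print(is_valid_utf8('héllo'.encode('utf-8')))  # True
print(is_valid_utf8(b'\xff\xfe'))              # False: 0xff never occurs in UTF-8
```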
**Other Code Considerations**:
* Obviously enough, all files you'll be serving (PHP, HTML, JavaScript, etc.) should be encoded in valid UTF-8.
* You need to make sure that every time you process a UTF-8 string, you do so safely. This is, unfortunately, the hard part. You'll probably want to make extensive use of PHP's [`mbstring`](http://www.php.net/manual/en/book.mbstring.php) extension.
* **PHP's built-in string operations are *not* by default UTF-8 safe.** There are some things you can safely do with normal PHP string operations (like concatenation), but for most things you should use the equivalent `mbstring` function.
* To know what you're doing (read: not mess it up), you really need to know UTF-8 and how it works on the lowest possible level. Check out any of the links from [utf8.com](http://www.utf8.com/) for some good resources to learn everything you need to know.
|
I'd like to add one thing to [chazomaticus' excellent answer](https://stackoverflow.com/questions/279170/utf-8-all-the-way-through#279279):
Don't forget the META tag either (like this, or [the HTML4 or XHTML version of it](http://www.w3.org/International/questions/qa-html-encoding-declarations#quicklookup)):
```
<meta charset="utf-8">
```
That seems trivial, but IE7 has given me problems with that before.
I was doing everything right; the database, database connection and Content-Type HTTP header were all set to UTF-8, and it worked fine in all other browsers, but Internet Explorer still insisted on using the "Western European" encoding.
It turned out the page was missing the META tag. Adding that solved the problem.
**Edit:**
The W3C actually has a rather large [section dedicated to I18N](http://www.w3.org/International/). They have a number of articles related to this issue – describing the HTTP, (X)HTML and CSS side of things:
* [FAQ: Changing (X)HTML page encoding to UTF-8](http://www.w3.org/International/questions/qa-changing-encoding)
* [Declaring character encodings in HTML](http://www.w3.org/International/questions/qa-html-encoding-declarations)
* [Tutorial: Character sets & encodings in XHTML, HTML and CSS](http://www.w3.org/International/tutorials/tutorial-char-enc/)
* [Setting the HTTP charset parameter](http://www.w3.org/International/O-HTTP-charset)
They recommend using both the HTTP header and HTML meta tag (or XML declaration in case of XHTML served as XML).
|
UTF-8 all the way through
|
[
"",
"php",
"mysql",
"apache",
"utf-8",
""
] |
Is there a standard Java library that handles common file operations such as moving/copying files/folders?
|
Here's how to do this with `java.nio` operations:
```
public static void copyFile(File sourceFile, File destFile) throws IOException {
    if (!destFile.exists()) {
        destFile.createNewFile();
    }

    FileChannel source = null;
    FileChannel destination = null;
    try {
        source = new FileInputStream(sourceFile).getChannel();
        destination = new FileOutputStream(destFile).getChannel();

        // previous code: destination.transferFrom(source, 0, source.size());
        // to avoid infinite loops, should be:
        long count = 0;
        long size = source.size();
        while ((count += destination.transferFrom(source, count, size - count)) < size);
    }
    finally {
        if (source != null) {
            source.close();
        }
        if (destination != null) {
            destination.close();
        }
    }
}
```
|
Not yet, but the [New NIO (JSR 203)](http://jcp.org/en/jsr/detail?id=203) will have support for these common operations.
In the meantime, there are a few things to keep in mind.
[File.renameTo](http://java.sun.com/j2se/1.5.0/docs/api/java/io/File.html#renameTo(java.io.File)) generally works only on the same file system volume. I think of this as the equivalent to a "mv" command. Use it if you can, but for general copy and move support, you'll need to have a fallback.
When a rename doesn't work you will need to actually copy the file (deleting the original with [File.delete](http://java.sun.com/j2se/1.5.0/docs/api/java/io/File.html#delete()) if it's a "move" operation). To do this with the greatest efficiency, use the [FileChannel.transferTo](http://java.sun.com/j2se/1.5.0/docs/api/java/nio/channels/FileChannel.html#transferTo(long,%20long,%20java.nio.channels.WritableByteChannel)) or [FileChannel.transferFrom](http://java.sun.com/j2se/1.5.0/docs/api/java/nio/channels/FileChannel.html#transferFrom(java.nio.channels.ReadableByteChannel,%20long,%20long)) methods. The implementation is platform specific, but in general, when copying from one file to another, implementations avoid transporting data back and forth between kernel and user space, yielding a big boost in efficiency.
|
Move / Copy File Operations in Java
|
[
"",
"java",
"file",
"copy",
"move",
""
] |
Does anyone know of a Java library that provides a useful abstraction for analyzing and manipulating arbitrary relational database schemata? I'm thinking of something that could do things like
```
LibraryClass dbLib = ...;
DbSchema schema = dbLib.getSchema("my_schema");
List<DbTable> tables = schema.getTables();
```
and
```
DbTable myTable = ...
for (DbColumn col : myTable.getColumns()) {
    ... = col.getType();
}
```
or even manipulate tables like
```
myTable.addColumn(
    new DbColumn("my_new_column", Type.UNSIGNED_INTEGER)
);

DbColumn myColumn = ...
myTable.removeColumn(myColumn);
```
Most Database modeling tools will have such an abstraction internally, but is there one in Java that I can use, or will I have to roll my own?
|
[DdlUtils](http://db.apache.org/ddlutils/) has what you're looking for. You can read/write schemas to/from XML (in Torque format) or a live database, or even define the database schema in pure Java. Better yet, read the on-line doco, it's quite good.
|
JDBC itself has such an abstraction. Look at `java.sql.DatabaseMetaData`. However, this is an optional part of the standard, and whether it is implemented depends on the JDBC driver you are using.
|
Is there a database modelling library for Java?
|
[
"",
"java",
"database",
"rdbms",
"modeling",
""
] |
Which are the most advanced frameworks and tools there are available for python for practicing Behavior Driven Development? Especially finding similar tools as rspec and mocha for ruby would be great.
|
[Ian Bicking](http://blog.ianbicking.org/behavior-driven-programming.html) recommends using [doctest](http://docs.python.org/library/doctest.html?highlight=doctest#module-doctest) for behavior driven design:
I personally tend to use [nose](https://web.archive.org/web/20110610084952/http://somethingaboutorange.com/mrl/projects/nose/1.0.0) and [voidspace mock](http://www.voidspace.org.uk/python/mock.html) in a behavior driven design style. Specifically, the spec [plugin](http://darcs.idyll.org/~t/projects/pinocchio/doc/#spec-generate-test-description-from-test-class-method-names) for nose is excellent for BDD.
|
Lettuce aims to be a Cucumber-like tool for Python: <http://lettuce.it/>
You can grab the source at github.com/gabrielfalcao/lettuce
|
Practicing BDD with python
|
[
"",
"python",
"testing",
"bdd",
""
] |
I would like to use JavaScript to manipulate hidden input fields in a JSF/Facelets page. When the page loads, I need to set a hidden field to the color depth of the client.
From my Facelet:
```
<body onload="setColorDepth(document.getElementById(?????);">
<h:form>
<h:inputHidden value="#{login.colorDepth}" id="colorDepth" />
</h:form>
```
When JSF processes the page, it is of course changing the IDs of the elements. What's the best way to reference these elements from my JavaScript code?
|
You'll want to set the ID of the form so you'll know what it is. Then you'll be able to construct the actual element ID.
```
<body onload="setColorDepth(document.getElementById('myForm:colorDepth'));">
<h:form id="myForm">
<h:inputHidden value="#{login.colorDepth}" id="colorDepth" />
</h:form>
```
If you don't want to set the form's ID field, you could find it at runtime, like so:
```
<body onload="setColorDepth(document.getElementById(document.forms[0].id + ':colorDepth'));">
```
|
You can use the control's *clientId* as returned by [UIComponent.getClientId(FacesContext)](http://java.sun.com/javaee/javaserverfaces/1.1_01/docs/api/javax/faces/component/UIComponent.html#getClientId(javax.faces.context.FacesContext)). See [here](https://stackoverflow.com/questions/265175/how-does-jsf-generate-the-name-of-the-form-input-field#265561) for sample code.
|
Using JavaScript with JSF and Facelets
|
[
"",
"javascript",
"jsf",
"facelets",
""
] |
I have a string that is HTML encoded:
```
'''<img class="size-medium wp-image-113"\
style="margin-left: 15px;" title="su1"\
src="http://blah.org/wp-content/uploads/2008/10/su1-300x194.jpg"\
alt="" width="300" height="194" />'''
```
I want to change that to:
```
<img class="size-medium wp-image-113" style="margin-left: 15px;"
title="su1" src="http://blah.org/wp-content/uploads/2008/10/su1-300x194.jpg"
alt="" width="300" height="194" />
```
I want this to register as HTML so that it is rendered as an image by the browser instead of being displayed as text.
The string is stored like that because I am using a web-scraping tool called [`BeautifulSoup`](https://www.crummy.com/software/BeautifulSoup/bs4/doc/), which "scans" a web page, gets certain content from it, and then returns the string in that format.
I've found how to do this in **C#** but not in **Python**. Can someone help me out?
### Related
* [Convert XML/HTML Entities into Unicode String in Python](https://stackoverflow.com/questions/57708/convert-xml-html-entities-into-unicode-string-in-python)
|
Given the Django use case, there are two answers to this. Here is its `django.utils.html.escape` function, for reference:
```
def escape(html):
    """Returns the given HTML with ampersands, quotes and carets encoded."""
    return mark_safe(force_unicode(html).replace('&', '&amp;').replace('<', '&lt;')
        .replace('>', '&gt;').replace('"', '&quot;').replace("'", '&#39;'))
```
To reverse this, the Cheetah function described in Jake's answer should work, but is missing the single-quote. This version includes an updated tuple, with the order of replacement reversed to avoid symmetric problems:
```
def html_decode(s):
    """
    Returns the ASCII decoded version of the given HTML string. This does
    NOT remove normal HTML tags like <p>.
    """
    htmlCodes = (
        ("'", '&#39;'),
        ('"', '&quot;'),
        ('>', '&gt;'),
        ('<', '&lt;'),
        ('&', '&amp;')
    )
    for code in htmlCodes:
        s = s.replace(code[1], code[0])
    return s

unescaped = html_decode(my_string)
```
This, however, is not a general solution; it is only appropriate for strings encoded with `django.utils.html.escape`. More generally, it is a good idea to stick with the standard library:
```
# Python 2.x:
import HTMLParser
html_parser = HTMLParser.HTMLParser()
unescaped = html_parser.unescape(my_string)
# Python 3.x:
import html.parser
html_parser = html.parser.HTMLParser()
unescaped = html_parser.unescape(my_string)
# >= Python 3.5:
from html import unescape
unescaped = unescape(my_string)
```
As a suggestion: it may make more sense to store the HTML unescaped in your database. It'd be worth looking into getting unescaped results back from BeautifulSoup if possible, and avoiding this process altogether.
With Django, escaping only occurs during template rendering; so to prevent escaping you just tell the templating engine not to escape your string. To do that, use one of these options in your template:
```
{{ context_var|safe }}
{% autoescape off %}
{{ context_var }}
{% endautoescape %}
```
|
With the standard library:
* HTML Escape
```
try:
    from html import escape  # python 3.x
except ImportError:
    from cgi import escape  # python 2.x

print(escape("<"))
```
* HTML Unescape
```
try:
    from html import unescape  # python 3.4+
except ImportError:
    try:
        from html.parser import HTMLParser  # python 3.x (<3.4)
    except ImportError:
        from HTMLParser import HTMLParser  # python 2.x
    unescape = HTMLParser().unescape

print(unescape("&gt;"))
```
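For illustration, a round trip with the modern (Python ≥ 3.4) names — `html.escape` is the current replacement for the deprecated `cgi.escape`:

```python
from html import escape, unescape

original = '<img src="su1.jpg" alt="fire & smoke">'
encoded = escape(original)
# '&lt;img src=&quot;su1.jpg&quot; alt=&quot;fire &amp; smoke&quot;&gt;'
decoded = unescape(encoded)

print(encoded)
assert decoded == original  # escape/unescape are inverses for this input
```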
|
How do I perform HTML decoding/encoding using Python/Django?
|
[
"",
"python",
"django",
"html-encode",
""
] |
I am developing a map control in WPF with C#. I am using a canvas control e.g. 400 x 200 which is assigned a map area of e.g. 2,000m x 1,000m.
The scale of the map would be: **canvas\_size\_in\_meters / real\_size\_in\_meters**.
I want to find the canvas\_size\_in\_meters.
The canvas.ActualWidth gives the width in DIUs (Device Independent Units). So, 400 DIUs is 400/96 = 4.17 inches, **PROVIDED** that the **physical** resolution of my monitor is 96 dpi.
However, using a ruler, I found that the physical resolution of my monitor is 87 dpi. (Only a few monitors ACTUALLY have 96 physical dpi.)
That DPI difference (10%) translates to a +10% difference in the actual map control width on screen.
How do I measure the size of a WPF control in inches EXACTLY and regardless of screen resolution and DPI setting ?
|
> How do I measure the size of a WPF control in inches EXACTLY and regardless of screen resolution and DPI setting ?
This isn't actually possible, because for it to work, WPF would have to know the resolution (in terms of DPI) of your monitor. Sounds nice in theory, but in practice windows doesn't know this information. This is why windows itself always assumes 96dpi blindly instead of being smarter about it.
Even if there were some way to manually tell it, or if your particular monitor has a custom driver that does pass the correct information to windows, this isn't going to work on anyone else's computer, so windows doesn't pass this information on to any applications.
The best you can do is draw a scale like google maps does. You know that 1 *pixel* == 1 mile, so you can draw a 50 pixel line on your map, with a label saying "this line equals 50 miles"
|
There is a way to compute the current pixel size in mm or inches. As mentioned in the earlier posts, it is not a fixed value; it varies depending on the current resolution and monitor size.
First get the current resolution. Assume it is 1280x1024
Now get the monitor width in mm using the `GetDeviceCaps` function. It's a standard Windows library function.
int widthmm = GetDeviceCaps(deviceContext, HORZSIZE);
My monitor width is 362mm
So pixel size = 362/1280 = 0.282 mm
The accuracy of this method depends on the assumption that the display area covers the width of the monitor exactly.
So to answer the original question, the canvas size of 400 x 200 pixels would be
(400 \* 0.282/1000) x (200 \* 0.282/1000) in meters when shown on my monitor.
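The arithmetic above can be checked with a short script (Python is used here purely for the calculation; the monitor width and resolution are the example values from this answer):

```python
# example values from this answer: 1280x1024 resolution, 362 mm wide monitor
monitor_width_mm = 362.0
horizontal_pixels = 1280

pixel_size_mm = monitor_width_mm / horizontal_pixels
print(round(pixel_size_mm, 2))  # 0.28, i.e. the ~0.282 mm quoted above

# a 400 x 200 pixel canvas, converted to meters
canvas_m = (400 * pixel_size_mm / 1000, 200 * pixel_size_mm / 1000)
print(round(canvas_m[0], 4), round(canvas_m[1], 4))  # 0.1131 0.0566
```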
|
C# WPF resolution independancy?
|
[
"",
"c#",
"wpf",
"gis",
"resolution",
"dpi",
""
] |
A site I am working on that is built using PHP is sometimes showing a completely blank page.
There are no error messages on the client or on the server.
The same page may display sometimes but not others.
All pages are working fine in IE7, Firefox 3, Safari and Opera.
All pages are XHTML with this meta element:
```
<meta http-equiv="Content-Type" content="application/xhtml+xml; charset=utf-8" />
```
It *appears* that I have fixed the problem by adding this PHP code:
```
header('Content-type: text/html; charset=utf-8');
```
I have read that this problem may be caused by XHTML, encoding, gzip compression, or caching, but nobody has been able to backup these guesses.
As the problem was intermittent I am not confident that my solution has actually solved the problem.
My question is, are there *reproducible* ways of having IE6 show a blank page when other browsers display content?
If so, what causes it and what solves it?
|
This is a content-type problem in IE: it does not know how to handle `application/xhtml+xml`.
Although you declare xhtml+xml, IE only understands text/html, and it will be some time before all user agents support xhtml+xml.
Change the content type in your meta tag to `content="text/html; charset=utf-8"`.
|
Sounds like [bug #153 "Self Closing Script Tag"](http://webbugtrack.blogspot.com/2007/08/bug-153-self-closing-script-tag-issues.html) bug in IE, **which is well known to cause blank pages**.
Due to IE's bug, you can **NEVER** code the following and expect it to work in IE.
```
<script src="...." />
```
(if the tag is self closing, you are in for a world of pain)
Instead, always code as;
```
<script src="...."></script>
```
|
Blank page in IE6
|
[
"",
"php",
"internet-explorer",
"http",
"internet-explorer-6",
"http-headers",
""
] |
When I serialize;
```
public class SpeedDial
{
public string Value { get; set; }
public string TextTR { get; set; }
public string TextEN { get; set; }
public string IconId { get; set; }
}
```
It results:
```
<SpeedDial>
<Value>110</Value>
<TextTR>Yangın</TextTR>
<TextEN>Fire</TextEN>
<IconId>39</IconId>
</SpeedDial>
```
But what I want is this:
```
<speedDial>
<value>110</value>
<text>
<TR>Yangın</TR>
<EN>Fire</EN>
</text>
<iconId>39</iconId>
</speedDial>
```
I want to learn the canonical way...
|
Three approaches leap to mind:
1: create a property to use for the serialization, and hide the others with `[XmlIgnore]`
2: implement `IXmlSerializable` and do it yourself
3: create a separate DTO just for the serialization
Here's an example that refactors the "text" portion into objects that `XmlSerializer` will like, while retaining the original public API:
```
using System;
using System.ComponentModel;
using System.Xml.Serialization;

[Serializable]
public class SpeedDial
{
static void Main()
{
XmlSerializer ser = new XmlSerializer(typeof(SpeedDial));
SpeedDial foo = new SpeedDial { Value = "110", TextTR = "Yangin",
TextEN = "Fire", IconId = "39" };
ser.Serialize(Console.Out, foo);
}
public SpeedDial()
{
Text = new SpeedDialText();
}
[XmlElement("text"), EditorBrowsable(EditorBrowsableState.Never)]
public SpeedDialText Text { get; set; }
public string Value { get; set; }
[XmlIgnore]
public string TextTR
{
get { return Text.Tr; }
set { Text.Tr = value; }
}
[XmlIgnore]
public string TextEN
{
get { return Text.En; }
set { Text.En = value; }
}
public string IconId { get; set; }
}
[Serializable]
public class SpeedDialText
{
[XmlElement("EN")]
public string En { get; set; }
[XmlElement("TR")]
public string Tr { get; set; }
}
```
|
I wouldn't do it if I were you, because it makes your serializer dependent on your business objects. For lowercase element names you could use the XML custom attributes.
|
How to combine multiple properties into one tag via overriding object serialization
|
[
"",
"c#",
"xml-serialization",
""
] |
Is it possible to retrieve a variable set in an `onreadystatechange` function from outside the function?
--edit--
Regarding execution of functions:
If it's possible I would like to execute ajaxFunction() with one click
and then popup() with the next click, or
somehow wait for the ajax function to end and then call the alert box
In pseudocode:
```
function ajaxFunction(){
//creating AJAX
...
// Create a function that will receive data sent from the server
ajaxRequest.onreadystatechange = function (){
if(ajaxRequest.readyState == 4){
//success code
======>Here i want to set variable <=====
var MyVariable='MyContent';
}
}
//Retrieving page
....
}
function popup(){
ajaxFunction();
alert(MyVariable);
}
```
|
The following code assumes that the ajax-request is synchronous:
```
function popup(){
ajaxFunction();
alert(MyVariable);
}
```
But since synchronous requests block the browser, you should in almost all cases use asynchronous calls. (If I remember correctly, onreadystatechange should not be called on a synchronous request, but different browsers behave differently.)
What you could do is:
```
function ajaxFunction(callback){
//creating AJAX
...
// Create a function that will receive data sent from the server
ajaxRequest.onreadystatechange = function (){
if(ajaxRequest.readyState == 4){
//success code
callback('MyContent')
}
}
//Retrieving page
....
}
function popup() {
    ajaxFunction(function(MyVariable){alert(MyVariable);});
}
```
|
some's comments are correct... this has nothing to do with variable scoping, and everything to do with the fact that the inner function (the 'onreadystatechange' function) setting the value of MyVariable will not have been executed by the time that the alert() happens... so the alert will *always* be empty.
The inner function doesn't get executed synchronously (ie, right away), but is deferred and executed later, when the request returns, which is long after the alert() has been executed. The *only* solution to this is to defer the alert until after the request finishes.
But regardless of all this, the inner function *can* set variables outside its scope, as the other posts mention. But your problem is more about execution order than it is about variable scope.
|
In AJAX how to retrive variable from inside of onreadystatechange = function ()
|
[
"",
"javascript",
"ajax",
""
] |
I've been using the [Java Pet Store](http://java.sun.com/developer/releases/petstore/) and [.Net Pet Store](http://blogs.vertigosoftware.com/petshop/default.aspx) examples [for years](http://www.onjava.com/pub/a/onjava/2001/11/28/catfight.html) when comparing Java EE and .Net performance in an eBusiness type setting.
Is there a better way to compare the performance of Java EE and .Net for this type of application?
**Update:**
As some have pointed out below, the implementation, framework versions, architecture, and hardware can have much more influence on performance than using .Net vs. Java. I wholeheartedly agree with this. However, keeping this in mind, I would still like to know what kind of general comparison would be best. Since the pet shop is a fairly mature and general eCommerce test case, I tend to think it's pretty useful for a baseline.
|
There is never really a definitive measure for comparing the performance of platforms. For example, a comparison of the *same* J2EE platform could be impacted by minor configuration changes. It would seem that the platform is less of a factor in performance today than it once was, whilst design and architecture will have a bigger impact.
|
The pet store project in your link is .Net 2.0 which was released three years ago, in November 2005. Since then, there have been two additional releases (3.0 and 3.5), with many enhancements which could improve performance.
I guess it depends on whether you want to compare "mature" web app's, or those written with the latest version of the technology.
|
Is there a better way to compare general performance of .Net and Java in an eCommerce setting than the "pet store"?
|
[
"",
"java",
".net",
"performance",
"jakarta-ee",
""
] |
I am looking at embedding Lua in a C++ application I am developing. My intention is to use Lua to script what ordered operation(s) to perform for some given input, i.e.
receive a new work item in the C++ program, pass details to the Lua backend, Lua calls back into C++ to carry out the necessary work, and returns the finished results.
The primary data structure involved is a large (approx 80+ members) `struct`. I am (very) new to Lua and am unsure of how I can pass it to and from my host application and the embedded Lua state.
Thus far I see my options as:
a) Pushing/Popping all the individual data members onto the stack when crossing to/from C++ and Lua (sounds messy).
b) Constructing a table and populating with the values, then putting that on/off the stack (a little cleaner).
c) Pass it as `userdata` (light/heavy?) (I'm sketchy on this, and not sure if/how I can then access it from the Lua side to query what operations are necessary).
Any guidance would be greatly appreciated.
|
If I recall correctly, light userdata is actually just a pointer. They all share the same metatable. They are mostly used to pass around addresses of C data.
Full userdata is probably closer of what you need if you must access it from the Lua side. Their metatable would allow you to access it like it was a regular Lua table, for example.
You might be also interested by Roberto's [library for converting data to and from C structs for Lua 5.1](http://www.inf.puc-rio.br/~roberto/struct/ "Library for Converting Data to and from C Structs for Lua 5.1"). Or not. Depends on your needs... :-)
|
I have not done this myself (it was years since I used Lua, and I've never used in an embedded fashion), but I think you should look into [metatables](http://www.lua.org/manual/5.1/manual.html#2.8) and the userdata type. The manual says this about userdata values:
> This type corresponds to a block of raw memory and has no pre-defined operations in Lua, except assignment and identity test. However, by using metatables, the programmer can define operations for userdata values (see §2.8). Userdata values cannot be created or modified in Lua, only through the C API
Sounds about right.
|
How to pass large struct back and forth between between C++ and Lua
|
[
"",
"c++",
"lua",
""
] |
I have seen that many apps which instrument classes and take `-javaagent` as a param when loading also put `-noverify` on the command line.
The Java doc says that `-noverify` turns off class verification.
However why would anyone want to turn off verification even if they are instrumenting classes?
|
Start-up time, I'd say. Verification that classes are correct takes some time when the class is loaded. Since classes might be loaded in a lazy fashion (not on app start, but when being used for the first time), this might cause unexpected and undesired runtime delays.
Actually the class does not need to be checked in general. The compiler will not emit any invalid bytecode or class construct. The reason for verification is that the class may be built on one system, get hosted online, and be transmitted to you over the unprotected internet. On this path, a malicious attacker might modify the bytecode and create something the compiler would never create; something that can crash the JVM or possibly circumvent security restrictions. Thus the class is verified before it is used. If this is a local application, there is usually no need to check the bytecode again.
|
When it is used in conjunction with `-javaagent`, it is most likely **not** for performance reasons, but because the agent intentionally creates "invalid" bytecode.
It should be noted that invalid bytecode might still execute fine, because some of the verification rules are quite strict. For instance, `this` must not be accessed in a constructor before the super-constructor was called, because the variables are not initialized at this point. But there still might be other things you want to do (see the JRebel example). Then, you use `-noverify` to circumvent that rule.
|
Use of -noverify when launching java apps
|
[
"",
"java",
"command-line-arguments",
""
] |
I have a query that works on Postgresql 7.4 but not on Postgresql 8.3 with same database.
Query:
```
SELECT * FROM login_session WHERE (now()-modified) > timeout;
```
Gets the following error:
```
ERROR: operator does not exist: interval > integer
LINE 1: ...ELECT * FROM login_session WHERE (now()-modified) > timeout ...
^
HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts.
```
Column `modified` is a `timestamp` and `timeout` is an `integer`.
Are there some settings I need to change on the server?
I'm installing an application for a client on a new server (Ubuntu) so I can't change queries in the application.
|
```
create or replace function int2interval (x integer) returns interval as $$ select $1*'1 sec'::interval $$ language sql;
create cast (integer as interval) with function int2interval (integer) as implicit;
```
ought to do it.
|
There are a lot of changes between 7.4 and 8.3. Some of the most drastic were the removal of some automatic casts.
I suppose the "timeout" is in seconds? If so you could change the query to:
```
SELECT
*
FROM
login_session
WHERE
(CURRENT_TIMESTAMP - modified) > (timeout * '1 sec'::interval);
```
|
Operator does not exist: interval > integer
|
[
"",
"sql",
"postgresql",
""
] |
Given the following simple example:
```
List<string> list = new List<string>() { "One", "Two", "Three", "three", "Four", "Five" };
CaseInsensitiveComparer ignoreCaseComparer = new CaseInsensitiveComparer();
var distinctList = list.Distinct(ignoreCaseComparer as IEqualityComparer<string>).ToList();
```
It appears the CaseInsensitiveComparer is not actually being used to do a case-insensitive comparison.
In other words **distinctList** contains the same number of items as **list**. Instead I would expect, for example, "Three" and "three" be considered equal.
Am I missing something or is this an issue with the Distinct operator?
|
`StringComparer` does what you need:
```
List<string> list = new List<string>() {
"One", "Two", "Three", "three", "Four", "Five" };
var distinctList = list.Distinct(
StringComparer.CurrentCultureIgnoreCase).ToList();
```
(or invariant / ordinal / etc depending on the data you are comparing)
|
[See Marc Gravell's answer if you want the most concise approach]
After some investigation and good feedback from Bradley Grainger, I've implemented the following IEqualityComparer. It supports a case-insensitive Distinct() statement (just pass an instance of this to the Distinct operator):
> ```
> class IgnoreCaseComparer : IEqualityComparer<string>
> {
> public CaseInsensitiveComparer myComparer;
>
> public IgnoreCaseComparer()
> {
> myComparer = CaseInsensitiveComparer.DefaultInvariant;
> }
>
> public IgnoreCaseComparer(CultureInfo myCulture)
> {
> myComparer = new CaseInsensitiveComparer(myCulture);
> }
>
> #region IEqualityComparer<string> Members
>
> public bool Equals(string x, string y)
> {
> if (myComparer.Compare(x, y) == 0)
> {
> return true;
> }
> else
> {
> return false;
> }
> }
>
> public int GetHashCode(string obj)
> {
> return obj.ToLower().GetHashCode();
> }
>
> #endregion
> }
> ```
|
LINQ Distinct operator, ignore case?
|
[
"",
"c#",
"linq",
"string",
"comparison",
"distinct",
""
] |
I'm betting that someone has already solved this and maybe I'm using the wrong search terms for google to tell me the answer, but here is my situation.
I have a script that I want to run, but I want it to run only when scheduled and only one at a time. (can't run the script simultaneously)
Now the sticky part is that say I have a table called "myhappyschedule" which has the data I need and the scheduled time. This table can have multiple scheduled times even at the same time, each one would run this script. So essentially I need a queue of each time the script fires and they all need to wait for each one before it to finish. (sometimes this can take just a minute for the script to execute sometimes its many many minutes)
What I'm thinking about doing is making a script that checks myhappyschedule every 5 min and gathers up those that are scheduled, putting them into a queue where another script can execute each 'job' or occurrence in the queue in order. All of this sounds messy.
To make this longer - I should say that I'm allowing users to schedule things in myhappyschedule and not edit crontab.
What can be done about this? File locks and scripts calling scripts?
|
add a column `exec_status` to `myhappytable` (maybe also `time_started` and `time_finished`, see pseudocode)
run the following cron script every x minutes
pseudocode of cron script:
```
[create/check pid lock (optional, but see "A potential pitfall" below)]
get number of rows from myhappytable where (exec_status == executing_now)
if it is > 0, exit
begin loop
get one row from myhappytable
where (exec_status == not_yet_run) and (scheduled_time <= now)
order by scheduled_time asc
if no such row, exit
set row exec_status to executing_now (maybe set time_started to now)
execute whatever command the row contains
set row exec_status to completed
(maybe also store the command output/return as well, set time_finished to now)
end loop
[delete pid lock file (complementary to the starting pid lock check)]
```
This way, the script first checks if none of the commands is running, then runs first not-yet run command, until there are no more commands to be run at the given moment. Also, you can see what command is executing by querying the database.
**A potential pitfall:** if the cron script is killed, a scheduled task will remain in "executing\_now" state. That's what the pid lock at beginning and end is for: to see if the cron script terminated properly. pseudocode of create/check pidlock:
```
if exists pidlockfile then
check if process id given in file exists
if not exists then
update myhappytable set exec_status = error_cronscript_died_while_executing_this
where exec_status == executing_now
delete pidlockfile
else (previous instance still running)
exit
endif
endif
create pidlockfile containing cron script process id
```
|
You can use the at(1) command inside your script to schedule its next run. Before it exits, it can check myhappyschedule for the next run time. You don't need cron at all, really.
|
cron script to act as a queue OR a queue for cron?
|
[
"",
"sql",
"queue",
"cron",
""
] |
I am looking to parse INSERT and UPDATE MySQL SQL queries in PHP to determine what changes were made from what original data. Now this would be pretty easy to create, but I want to see if there are any existing libraries in PHP to do this.
Basically what I have is a table with all of the above queries that have been run on a database. I have already separated out the table name and type of query. I am looking to create a full change log for user viewing based on this data, so I need to get the values of the original INSERT and then changes that are made in each UPDATE. In the end I need field name and new value and with the record id(s). I'll do the rest of the checking/beautifying, including the column name to human readable and if a field value hasn't actually changed.
At the moment, I probably don't need to do multiple table UPDATE's, but it would be useful.
What libraries are there to do this?
|
<http://pear.php.net/package/SQL_Parser>
<http://sourceforge.net/projects/txtsql>
<http://code.google.com/p/php-sql-parser>
For Perl there's more variety
|
A little bit off the question, but maybe a suggestion worth thinking about:
Since MySQL 5.0, the support for triggers is quite good. If you want to keep a record of what changes have been made to a database, instead of storing the sql statements, you could also define insert/update triggers and define another table in which these values can be stored. You could, for example create a simple table having the fields
```
timestamp, user, field, old_value, new_value
```
and insert the respective values whenever a DML statement on one of your watched tables occurred.
To simplify this even more, you could add the field
```
table
```
to the "tracking table" to store all changes to all watched tables in one place.
See the [MySQL manual](http://dev.mysql.com/doc/refman/5.0/en/create-trigger.html "MySQL manual for triggers") for more infos about this topic.
|
PHP MySQL SQL parser (INSERT and UPDATE)
|
[
"",
"php",
"mysql",
"changelog",
""
] |
I'm using Eclipse for Java development. All my sources compile fine and the resulting application runs fine. However, I keep getting a "red-x" error notification in the Package Explorer.
All my sources in this source directory (too long for the snapshot) compile fine; none of them show the "red-x" error icon.
Any suggestions on how to fix the problem would be highly appreciated.
|
Yeah, this happens sometimes for no apparent reason. You can go to the "Problems" tab (right next to console output) and see the error message, so maybe you can narrow it down that way.
|
This happens often when I use Maven, and I had always ignored it until I found this question. You need to update the project in this case (figured this out by looking in the Problems pane)
From the project context menu: Maven -> Update Project and select the available maven codebases
Alternatively you can use (Alt + F5) to bring up the same window
|
In Eclipse, what can cause Package Explorer "red-x" error-icon when all Java sources compile without errors?
|
[
"",
"java",
"eclipse",
""
] |
I know I can loop over the string or build a regex or invert the set (ASCII isn't that big after all) and search for the first instance of that, but Yuck.
What I'm looking for is a nice one liner.
fewer features is better, LINQ is out (for me, don't ask, it's a *long* story)
---
The solution I'm going with (unless I see something better)
```
static int FirstNotMeta(int i, string str)
{
for(; i < str.Length; i++)
switch(str[i])
{
case '\\':
case '/':
case '.':
continue;
default:
return i;
}
return -1;
}
```
OK, I cheated, I know in advance what char's I care about.
|
This works:
```
public static char FindFirstNotAny(this string value, params char[] charset)
{
return value.TrimStart(charset)[0];
}
```
|
If you don't have access to LINQ, I think you may just have to write a static method with a loop (which is probably more efficient than LINQ anyway; remember the compiler will inline small methods when possible).
The simplest non-LINQ I can come up with is below. I recommend adding braces so scope and the blocks are clear:
```
public static char? GetFirstChar(string str, char[] list)
{
foreach (char c in str) if (!list.Contains(c)) return c;
return null;
}
```
With C# 3.0 and LINQ:
```
char[] list = { 'A', 'B' };
string str = "AABAGAF";
char first = str.ToArray().Where(c => !list.Contains(c)).FirstOrDefault();
```
In that case, if there is no non-list character, first will equal 0x0000 (or the *character* null). You could do this:
```
char? first = str.ToArray().Cast<char?>().Where(
c => !list.Contains(c.Value)).FirstOrDefault();
```
Then first will be null if there are no matches. This can also be written as:
```
var query = from char c in str
where !list.Contains(c)
select (char?)c;
char? first = query.FirstOrDefault();
```
|
How to find the index of the first char in a string that is not in a list
|
[
"",
"c#",
"string",
"search",
""
] |
I want to insert something into a STL list in C++, but I only have a reverse iterator. What is the usual way to accomplish this?
This works: (of course it does)
```
std::list<int> l;
std::list<int>::iterator forward = l.begin();
l.insert(forward, 5);
```
This doesn't work: (what should I do instead?)
```
std::list<int> l;
std::list<int>::reverse_iterator reverse = l.rbegin();
l.insert(reverse, 10);
```
|
`l.insert(reverse.base(), 10);` will insert '10' at the end, given your definition of the 'reverse' iterator. Actually, `l.rbegin().base() == l.end()`.
|
Essentially, you don't. See 19.2.5 in TCPPPL.
The `reverse_iterator` has a member called `base()` which will return a "regular" iterator. So the following code would work in your example:
```
l.insert(reverse.base(), 10);
```
Be careful though because the `base()` method returns the element one after the original `reverse_iterator` had pointed to. (This is so that reverse\_iterators pointing at `rbegin()` and `rend()` work correctly.)
|
How do you insert with a reverse_iterator
|
[
"",
"c++",
"stl",
""
] |
I'm just getting back into C++ after a couple of years of doing a lot of C#, and recently Objective C.
One thing I've done before is to roll my own iterator adapter for std::map that will deref to just the value part, rather than the key-value pair. This is quite a common and natural thing to do. C# provides this facility with its Keys and Values properties of its Dictionary class. Objective-C's NSDictionary, similarly, has allKeys and allValues.
Since I've been "away", Boost has acquired the Range and ForEach libraries, which I am now using extensively. I wondered if between the two there was some facility to do the same, but I haven't been able to find anything.
I'm thinking of knocking something up using Boost's iterator adapters, but before I go down that route I thought I'd ask here if anyone knows of such a facility in Boost, or somewhere else ready made?
|
I don't think there's anything out of the box. You can use boost::make\_transform\_iterator.
```
template<typename T1, typename T2> const T2& take_second(const std::pair<T1, T2> &a_pair)
{
return a_pair.second;
}
void run_map_value()
{
map<int,string> a_map;
a_map[0] = "zero";
a_map[1] = "one";
a_map[2] = "two";
copy( boost::make_transform_iterator(a_map.begin(), take_second<int, string>),
boost::make_transform_iterator(a_map.end(), take_second<int, string>),
ostream_iterator<string>(cout, "\n")
);
}
```
|
Replacing the previous answer, in case anybody else finds this like I did. As of boost 1.43, there are some commonly used range adaptors provided. In this case, you want boost::adaptors::map\_values. The relevant example:
<http://www.boost.org/doc/libs/1_46_0/libs/range/doc/html/range/reference/adaptors/reference/map_values.html#range.reference.adaptors.reference.map_values.map_values_example>
|
iterator adapter to iterate just the values in a map?
|
[
"",
"c++",
"maps",
"iterator",
"adapter",
""
] |
How do I launch an app and capture the output via stdout and maybe stderr?
I am writing an automated build system and I need to capture the output to analyze. I'd like to update the svn repo and grab the revision number so I can move the files in autobuild/revNumber/ if successful. I also would like to build using make and upload the compile text to my server for everyone to see the warnings and errors on a failed build.
I can't find the `system()` function, but I found the `CreateProcess()` function on MSDN. I am able to launch what I need but I have no idea how to capture the stderr and stdout. I notice the process launches separately unless I set a breakpoint and keep my app exiting which it then will keep all the text in my app console window. I would also want to wait until all processes are finished and then scan the data it produced to do any additional operations I need. How do I do any of this?
|
In real shells (meaning, not sea shells - I mean, not in C Shell or its derivatives), then:
```
program arg1 arg2 >/tmp/log.file 2>&1
```
This runs program with the given arguments, and redirects the stdout to /tmp/log.file; the notation (~~hieroglyph~~) '`2>&1`' at the end sends stderr (file descriptor 2) to the same place that stdout (file descriptor 1) is going. Note that the sequence of operations is important; if you reverse them, then standard error will go to where standard output was going, and then standard output (but not standard error) will be redirected to the file.
The choice of file name shown is abysmal for numerous reasons - you should allow the user to choose the directory, and probably should include the process ID or time stamp in the file name.
```
LOG=${TMPDIR:-/tmp}/log.$$.$(date +%Y%m%d-%H%M%S)
program arg1 arg2 >$LOG 2>&1
```
In C++, you can use the `system()` function (inherited from C) to run processes. If you need to know the file name in the C++ program (plausible), then generate the name in the program (`strftime()` is your friend) and create the command string with that file name.
(Strictly, you also need `getenv()` to get $TMPDIR, and the POSIX function `getpid()` to get the process ID; then you can simulate the two-line shell script, though the PID used would be that of the C++ program, not a launched shell.)
You could instead use the POSIX `popen()` function; you'd have to include the '`2>&1`' notation in the command string that you create to send the standard error of the command to the same place as standard output goes, but you would not need a temporary file:
```
FILE *pp = popen("program arg1 arg2 2>&1", "r");
```
You can then read off the file stream. I'm not sure whether there's a clean way to map a C file stream into a C++ istream; there probably is.
|
You need to fill in the STARTUPINFO structure, which has hStdInput, hStdOutput and hStdError. Remember to let the handles be inherited when you call CreateProcess.
```
/* Assume you open a file handle or pipe called myoutput */
STARTUPINFO si_startinfo;
ZeroMemory(&si_startinfo, sizeof(STARTUPINFO));
si_startinfo.cb = sizeof(STARTUPINFO);
si_startinfo.hStdInput = GetStdHandle(STD_INPUT_HANDLE);
si_startinfo.hStdOutput = myoutput;
si_startinfo.hStdError = myoutput;
si_startinfo.dwFlags |= STARTF_USESTDHANDLES;
PROCESS_INFORMATION pi_procinfo;
ZeroMemory(&pi_procinfo, sizeof(PROCESS_INFORMATION));
CreateProcess(NULL, cmdline, NULL, NULL, TRUE, 0, NULL, pathname, &si_startinfo, &pi_procinfo);
```
I have not shown the error handling aspects, which you will need to do. The 5th argument is set to true to inherit the handles. Others have explained how to create pipes so I won't repeat it here.
|
launch app, capture stdout and stderr in c++
|
[
"",
"c++",
"process",
"stdout",
"capture",
"launch",
""
] |
I've just come across a property setter that catches exceptions (all Exceptions; I know that's bad, but it's not relevant here), and *only* logs them. First of all, I think it should throw them again as well; why wait for a crash and a log study when you can know something is wrong right away?
However, my main question is, do I validate against invalid date values, add a RuleViolation object to a ValidationRules object on my document, or throw an InvalidDate exception, or just let the CLR throw the exception for me (invalid dates are nothing but invalid dates, not checked for range etc.)
|
Exceptions should be thrown whenever the method or class member is unable to complete whatever task it is designed to accomplish.
So for a property setter, if the setter is unable to set the property, then it should throw an exception.
As to whether you should catch it and rethrow it, the answer is yes, but only if you need to process the exception immediately in the setter, before passing it up the stack... but logging it is not a reason to do that. In general, you should implement cross-cutting logging of exceptions at a higher level, where the exception is NOT going to be re-thrown... if you are taking care of those cross-cutting concerns higher up the stack somewhere, then no, definitely do not catch and re-throw the same exception.
However, if you are writing a tool, or a framework library, where you want your component's clients to have a clearly defined set of expected exceptions, and you have defined your own custom exceptions that your component will throw to client code, and which client components will expect to see, then you may want to catch CLR-generated exceptions and rethrow your own custom exceptions instead. Always include the actual underlying exception in your custom exception's "InnerException" property before passing it up the stack, so that the data in it is available to whatever system ends up consuming it.
|
It depends on the specific task at hand. If you are writing a library class that will be used as a component in other programs and the contract of the method of the class says that it should only accept valid dates then throwing the Exception is fine.
If you are accepting user input, then waiting for an exception is a bad practice. In this case you should validate the date yourself.
Exceptions are for exceptional cases, and should not be part of your logic. It usually means that a contract was broken by the programmer.
|
Exception vs Validation
|
[
"",
"c#",
".net",
""
] |
Is this seen as an inefficient prime number generator? It seems to me that this is pretty efficient. Is it the use of the stream that makes the program run slower?
I am trying to submit this to [SPOJ](http://www.spoj.pl/) and it tells me that my time limit exceeded...
```
#include <iostream>
#include <sstream>
using namespace std;
int main() {
    int testCases, first, second, counter = 0;
    bool isPrime = true;
    stringstream out;
    cin >> testCases;
    for (int i = 0; i < testCases; i++) {
        // get the next two numbers
        cin >> first >> second;
        if (first % 2 == 0)
            first++;
        // find the prime numbers between the two given numbers
        for (int j = first; j <= second; j += 2) {
            // go through and check if j is prime
            for (int k = 2; k < j; k++) {
                if (j % k == 0) {
                    isPrime = false;
                    break;
                }
            }
            if (isPrime) {
                out << j << "\n";
            }
            isPrime = true;
        }
        out << "\n";
    }
    cout << out.str();
    return 0;
}
```
EDIT: The program is supposed to generate prime numbers between the numbers specified in the input. (See here for more details: [Prime Generator Problem](http://www.spoj.pl/problems/PRIME1/) )
-Tomek
|
This is one step (skipping even numbers) above the naive algorithm. I would suggest the [Sieve Of Eratosthenes](http://en.wikipedia.org/wiki/Sieve_of_Eratosthenes) as a more efficient algorithm. From the above link:
> The complexity of the algorithm is
> O(n (log n)(log log n)) with a memory
> requirement of O(n). The segmented
> version of the sieve of Eratosthenes,
> with basic optimizations such as wheel
> factorization, uses O(n) operations
> and O(n^(1/2) log log n / log n) bits of
> memory.
The algorithm you give is somewhere near O(n^2). The speedup you get by skipping evens isn't that great because you would find an even number not to be prime on the first test. The sieve has a much greater memory requirement, but the runtime complexity is far superior for large *N*.
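As a concrete illustration of the idea (a sketch in Python, my choice of language for brevity; the SPOJ problem would additionally need the segmented variant, since its upper bound is large):

```python
def sieve(limit):
    """Return all primes <= limit using the Sieve of Eratosthenes."""
    if limit < 2:
        return []
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # Mark every multiple of p, starting at p*p, as composite.
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return [n for n, prime in enumerate(is_prime) if prime]
```

Each composite is crossed off by its prime factors rather than rediscovered by trial division, which is where the complexity win comes from.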
|
You're checking a *lot* more numbers than you have to - you only need to test divisors up to `sqrt(num)`.
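The reason: if n = a*b with a <= b, then a <= sqrt(n), so every composite has a divisor at or below its square root. A quick sketch of that bound (Python for brevity, my choice of language):

```python
import math

def is_prime(n):
    """Trial division, testing divisors only up to sqrt(n)."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    # Only odd candidates from 3 up to floor(sqrt(n)) need checking.
    for d in range(3, math.isqrt(n) + 1, 2):
        if n % d == 0:
            return False
    return True
```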
|
Is this prime generator inefficient C++?
|
[
"",
"c++",
"performance",
"primes",
""
] |
I'm writing a wrapper class for a command line executable. This exe accepts input from `stdin` until I hit `Ctrl+C` in the command prompt shell, in which case it prints output to `stdout` based on the input. I want to simulate that `Ctrl+C` press in C# code, sending the kill command to a .NET `Process` object. I've tried calling `Process.Kill()`, but that doesn't seem to give me anything in the process's `StandardOutput` `StreamReader`. Is there anything I'm not doing right? Here's the code I'm trying to use:
```
ProcessStartInfo info = new ProcessStartInfo(exe, args);
info.RedirectStandardError = true;
info.RedirectStandardInput = true;
info.RedirectStandardOutput = true;
info.UseShellExecute = false;
Process p = Process.Start(info);
p.StandardInput.AutoFlush = true;
p.StandardInput.WriteLine(scriptcode);
p.Kill();
string error = p.StandardError.ReadToEnd();
if (!String.IsNullOrEmpty(error))
{
throw new Exception(error);
}
string output = p.StandardOutput.ReadToEnd();
```
The output is always empty, even though I get data back from `stdout` when I run the exe manually.
**Edit**: This is C# 2.0 by the way.
|
I've actually just figured out the answer. Thank you both for your answers, but it turns out that all I had to do was this:
```
p.StandardInput.Close()
```
which causes the program I've spawned to finish reading from stdin and output what I need.
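The general pattern - a child that reads stdin to EOF only proceeds once the parent closes the pipe - can be sketched in Python (my illustration, not the original C#):

```python
import subprocess
import sys

# Hypothetical child process: read everything from stdin, then report how
# many characters were received.
child_code = "import sys; data = sys.stdin.read(); print(len(data))"

p = subprocess.Popen(
    [sys.executable, "-c", child_code],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)
p.stdin.write("hello")
p.stdin.close()  # EOF: the child's read() returns and it prints its output
output = p.stdout.read().strip()
p.wait()
```

Without the `close()`, the child blocks forever waiting for more input, which mirrors the empty-output symptom in the question.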
|
Despite the fact that using `GenerateConsoleCtrlEvent()` for sending `Ctrl`+`C` signal is the right answer, it needs significant clarification to get it to work in different .NET application types.
If your .NET application doesn't use its own console (Windows Forms/WPF/Windows Service/ASP.NET), the basic flow is:
1. Attach the main .NET process to the console of the process that you want to signal with `Ctrl`+`C`.
2. Prevent the main .NET process from stopping because of `Ctrl`+`C` event by disabling handling of the signal with `SetConsoleCtrlHandler()`.
3. Generate the console event for the *current* console with `GenerateConsoleCtrlEvent()` (`processGroupId` should be zero! The answer with code that sends `p.SessionId` will not work and is incorrect).
4. Wait for the signaled process to respond (e.g. by waiting for it to exit).
5. Restore `Ctrl`+`C` handling by main process and disconnect from console.
The following code snippet illustrates how to do that:
```
Process p;
if (AttachConsole((uint)p.Id)) {
SetConsoleCtrlHandler(null, true);
try {
if (!GenerateConsoleCtrlEvent(CTRL_C_EVENT, 0))
return false;
p.WaitForExit();
} finally {
SetConsoleCtrlHandler(null, false);
FreeConsole();
}
return true;
}
```
where `SetConsoleCtrlHandler()`, `FreeConsole()`, `AttachConsole()` and `GenerateConsoleCtrlEvent()` are native WinAPI methods:
```
internal const int CTRL_C_EVENT = 0;
[DllImport("kernel32.dll")]
internal static extern bool GenerateConsoleCtrlEvent(uint dwCtrlEvent, uint dwProcessGroupId);
[DllImport("kernel32.dll", SetLastError = true)]
internal static extern bool AttachConsole(uint dwProcessId);
[DllImport("kernel32.dll", SetLastError = true, ExactSpelling = true)]
internal static extern bool FreeConsole();
[DllImport("kernel32.dll")]
static extern bool SetConsoleCtrlHandler(ConsoleCtrlDelegate HandlerRoutine, bool Add);
// Delegate type to be used as the Handler Routine for SCCH
delegate Boolean ConsoleCtrlDelegate(uint CtrlType);
```
Note that waiting for the targeted process to respond, typically by waiting for the process to exit, is critical. Otherwise, the `Ctrl`+`C` signal will remain in the current process's input queue and when handling is restored by the second call to `SetConsoleCtrlHandler()`, that signal will terminate the *current* process, rather than the targeted one.
Things become more complex if you need to send `Ctrl`+`C` from .NET console application. The above approach will not work because `AttachConsole()` returns `false` in this case (the main console app already has a console). It is possible to call `FreeConsole()` before `AttachConsole()` call, but doing so will result in the original .NET app console being lost, which is not acceptable in most cases.
Here is my solution for this case; it works and has no side effects for the .NET main process console:
1. Create small supporting .NET console program that accepts process ID from command line arguments, loses its own console with `FreeConsole()` before the `AttachConsole()` call and sends `Ctrl`+`C` to the target process with code mentioned above.
2. The main .NET console process just invokes this utility in a new process when it needs to send `Ctrl`+`C` to another console process.
3. In the main .NET console process, call `SetConsoleCtrlHandler(null, true)` before spawning the "killer" process (from step 1) and `SetConsoleCtrlHandler(null, false)` after. Otherwise your main process will also receive `Ctrl`+`C` and die.
|
How do I send ctrl+c to a process in c#?
|
[
"",
"c#",
"command-line",
".net-2.0",
"process",
""
] |
Let's say I have a web app which has a page that may contain 4 script blocks - the script I write may be found in one of those blocks, but I do not know which one; that is handled by the controller.
I bind some `onclick` events to a button, but I find that they sometimes execute in an order I did not expect.
Is there a way to ensure order, or how have you handled this problem in the past?
|
I had been trying for ages to generalize this kind of process, but in my case I was only concerned with the order of the first event listener in the chain.
If it's of any use, here is my jQuery plugin that binds an event listener that is always triggered before any others:
\*\* *UPDATED inline with jQuery changes (thanks Toskan)* \*\*
```
(function($) {
$.fn.bindFirst = function(/*String*/ eventType, /*[Object]*/ eventData, /*Function*/ handler) {
var indexOfDot = eventType.indexOf(".");
var eventNameSpace = indexOfDot > 0 ? eventType.substring(indexOfDot) : "";
eventType = indexOfDot > 0 ? eventType.substring(0, indexOfDot) : eventType;
handler = handler == undefined ? eventData : handler;
eventData = typeof eventData == "function" ? {} : eventData;
return this.each(function() {
var $this = $(this);
var currentAttrListener = this["on" + eventType];
if (currentAttrListener) {
$this.bind(eventType, function(e) {
return currentAttrListener(e.originalEvent);
});
this["on" + eventType] = null;
}
$this.bind(eventType + eventNameSpace, eventData, handler);
var allEvents = $this.data("events") || $._data($this[0], "events");
var typeEvents = allEvents[eventType];
var newEvent = typeEvents.pop();
typeEvents.unshift(newEvent);
});
};
})(jQuery);
```
**Things to note:**
* This hasn't been fully tested.
* It relies on the internals of the jQuery framework not changing (only tested with 1.5.2).
* It will not necessarily get triggered before event listeners that are bound in any way other than as an attribute of the source element or using jQuery bind() and other associated functions.
|
If order is important you can create your own events and bind callbacks to fire when those events are triggered by other callbacks.
```
$('#mydiv').click(function(e) {
// manipulate #mydiv ...
$('#mydiv').trigger('mydiv-manipulated');
});
$('#mydiv').bind('mydiv-manipulated', function(e) {
// do more stuff now that #mydiv has been manipulated
return;
});
```
Something like that at least.
|
How to order events bound with jQuery
|
[
"",
"javascript",
"jquery",
"events",
""
] |
How do I do the above? There is the `mktime()` function, but that treats the input as expressed in local time. How do I perform the conversion if my input `tm` variable happens to be in UTC?
|
Use `timegm()` instead of `mktime()`
Worth noting, as pointed out by @chux - Reinstate Monica below, is that `time_t timegm(struct tm *timeptr)` is being considered for addition to the C23 standard (and thus, by inclusion, into the C++ standard).
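As a side illustration (not from the original answer), Python exposes the same pair of functions, which makes the difference easy to demonstrate: `calendar.timegm()` interprets the broken-down time as UTC, while `time.mktime()` applies the local time zone offset.

```python
import calendar
import time

# A struct tm-like value for 1970-01-01 00:00:00, interpreted as UTC.
# Fields: (year, mon, mday, hour, min, sec, wday, yday, isdst)
epoch_utc = time.struct_time((1970, 1, 1, 0, 0, 0, 3, 1, 0))

# timegm treats the fields as UTC, so the epoch maps to exactly 0,
# regardless of the machine's local time zone.
seconds = calendar.timegm(epoch_utc)
```

The round trip `calendar.timegm(time.gmtime(t)) == t` holds for any `t`, which is exactly the property one wants from C's `timegm()`.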
|
For those on Windows, the below function is available:
```
_mkgmtime
```
link for more info: <https://learn.microsoft.com/en-us/cpp/c-runtime-library/reference/mkgmtime-mkgmtime32-mkgmtime64>
|
Easy way to convert a struct tm (expressed in UTC) to time_t type
|
[
"",
"c++",
"c",
""
] |
I run a website where users can post items (e.g. pictures). The items are stored in a MySQL database.
I want to query for the last ten posted items, but with the constraint that a maximum of 3 items can come from any single user.
What is the best way of doing it? My preferred solution is a constraint that is put on the SQL query requesting the last ten items. But ideas on how to set up the database design is very welcome.
Thanks in advance!
BR
|
It's pretty easy with a correlated sub-query:
```
SELECT `img`.`id` , `img`.`userid`
FROM `img`
WHERE 3 > (
SELECT count( * )
FROM `img` AS `img1`
WHERE `img`.`userid` = `img1`.`userid`
AND `img`.`id` > `img1`.`id` )
ORDER BY `img`.`id` DESC
LIMIT 10
```
The query assumes that a larger `id` means added later.
Correlated sub-queries are a powerful tool! :-)
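To see what the correlated sub-query computes: for each row it counts how many newer rows the same user has, and keeps the row only if fewer than 3 exist, then the limit is applied. The same filter expressed in Python (a hypothetical `(id, userid)` dataset, purely for illustration):

```python
rows = [(1, "a"), (2, "a"), (3, "b"), (4, "a"), (5, "a"),
        (6, "b"), (7, "c"), (8, "a"), (9, "b"), (10, "b")]

def last_items(rows, limit=10, per_user=3):
    """Newest rows first, keeping at most per_user rows from any single user."""
    kept, counts = [], {}
    for item_id, user in sorted(rows, reverse=True):  # newest id first
        if counts.get(user, 0) < per_user:
            counts[user] = counts.get(user, 0) + 1
            kept.append((item_id, user))
        if len(kept) == limit:
            break
    return kept
```

Walking newest-to-oldest and skipping a user's fourth item reproduces the `WHERE 3 > (SELECT count(*) ...)` condition row by row.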
|
This is difficult because MySQL does not support the LIMIT clause on sub-queries. If it did, this would be rather trivial... But alas, here is my naïve approach:
```
SELECT
i.UserId,
i.ImageId
FROM
UserSuppliedImages i
WHERE
/* first (latest) ImageId */
ImageId = (
SELECT MAX(ImageId)
FROM UserSuppliedImages
WHERE UserId = i.UserId
)
OR
/* second valid ImageId */
ImageId = (
SELECT MAX(ImageId)
FROM UserSuppliedImages
WHERE UserId = i.UserId
AND ImageId < (
SELECT MAX(ImageId)
FROM UserSuppliedImages
WHERE UserId = i.UserId
)
)
/* you get the picture...
the more "per user" images you want, the more complex this will get */
LIMIT 10;
```
You did not comment on having a preferred result order, so this selects the latest images (assuming `ImageId` is an ascending auto-incrementing value).
For comparison, on SQL Server the same would look like this:
```
SELECT TOP 10
img.ImageId,
img.ImagePath,
img.UserId
FROM
UserSuppliedImages img
WHERE
ImageId IN (
SELECT TOP 3 ImageId
FROM UserSuppliedImages
WHERE UserId = img.UserId
)
```
|
How to select maximum 3 items per users in MySQL?
|
[
"",
"mysql",
"sql",
"database",
""
] |
I've got a lot of pages in my site, I'm trying to think of a nice way to separate these into areas that are a little more isolated than just simple directories under my base web project. Is there a way to put my web forms into a separate class library? If so, how is it done?
Thanks in advance.
|
At first thought, I don't think this is possible, due to the way ASPX is not precompiled.
However, you can create classes that inherit from `Page` and place them into a DLL to re-use code-behind functionality. This can of course include control instantiation logic if required, but there is no designer to work with (if you need it).
|
My question is, why do you want to do this?
If it's purely organisational then your "simple folders" should really be enough, so maybe you need to re-think your project structure.
If it is for compilation purposes, like it takes ages to recompile the site every time you change something, maybe you could split the site into multiple site project that each run as a subdomain of your main site.
These will then be recompiled separately.
If it's an organisational thing but related to management, in that you have loads of code and it's difficult to get around, then maybe you should assess the way you are building the site. Would an N-Tier approach or ASP.NET MVC provide better separation of your code?
What do you think?...
|
Is there a way to put aspx files into a class library in Visual Studio 2008 .NET 3.5?
|
[
"",
"c#",
".net",
"asp.net",
".net-3.5",
""
] |
I have a program in which I've lost the C++ source code. Are there any good C++ decompilers out there?
I've already run across [Boomerang](http://boomerang.sourceforge.net/).
|
You can use [IDA Pro](http://www.hex-rays.com/idapro/) by [Hex-Rays](http://www.hex-rays.com/). You will usually not get good C++ out of a binary unless you compiled in debugging information. Prepare to spend a **lot** of manual labor reversing the code.
If you didn't strip the binaries there is some hope as IDA Pro can produce C-alike code for you to work with. Usually it is very rough though, at least when I used it a couple of years ago.
|
A lot of information is discarded in the compiling process. Even if a decompiler could produce the logically equivalent code with classes and everything (it probably can't), the self-documenting part is gone in optimized release code. No variable names, no routine names, no class names - just addresses.
|
Is there a C++ decompiler?
|
[
"",
"c++",
"reverse-engineering",
"decompiling",
""
] |
I wonder if it is possible to create an executable module from a Python script. I need the performance of native code and the flexibility of a Python script, without needing to run in the Python environment. I would use this code to load user modules on demand to customize my application.
|
* There's [pyrex](http://www.cosc.canterbury.ac.nz/greg.ewing/python/Pyrex/) that compiles python like source to python extension modules
* [rpython](https://rpython.readthedocs.io/en/latest/) which allows you to compile python with some restrictions to various backends like C, LLVM, .Net etc.
* There's also [shed-skin](https://shedskin.readthedocs.io) which translates python to C++, but I can't say if it's any good.
* [PyPy](http://codespeak.net/pypy/dist/pypy/doc/home.html) implements a JIT compiler which attempts to optimize runtime by translating pieces of what's running at runtime to machine code, if you write for the PyPy interpreter that might be a feasible path.
* The same author that is working on JIT in PyPy wrote [psyco](http://psyco.sourceforge.net/) previously which optimizes python in the CPython interpreter.
|
You can use something like py2exe to compile your python script into an exe, or Freeze for a linux binary.
see: [How can I create a directly-executable cross-platform GUI app using Python?](https://stackoverflow.com/questions/2933/an-executable-python-app#2937)
|
Is it possible to compile Python natively (beyond pyc byte code)?
|
[
"",
"python",
"module",
"compilation",
""
] |
This is a problem that only occurs on application update (only tested through the Admin Console, not the CLI). Also, this is only happening on our development environment, which is identical to our prod env. On uninstall/install, everything is compiled properly. However, this is a large application and it takes long enough to do an update--we do not want to uninstall/install every time (esp. during dev builds).
JSP .java and .smap files are being generated, but not .class. On prod, there is no .smap--only .java and .class. If the JSPs would compile, we believe the tag libs would be compiled also.
Has anyone faced this problem, or know what we are possibly overlooking?
WAS Version: 6.1.0.17
**EDIT:** This is only happening for one JSP and the tag library it's using. We're trying to troubleshoot the issue. We'll let you know if we figure anything out. We think it may be an issue with the .SMAP file per [this fix](http://www-01.ibm.com/support/docview.wss?rs=180&uid=swg1PK68395)
[## SOLUTION](https://stackoverflow.com/questions/205069/websphere-app-server-not-compiling-jsptag-libs#205906)
|
# Solution:
**Problem was**: WAS had debugging enabled. This generated .smap files (source mappings) instead of .class files. Once we discovered this, we disabled the debugging service under the Admin Console (Application Servers > server1 > Debugging Processes > uncheck 'Start service'). I don't know why this created a bug upon an update only, but, whatever the case, the problem is solved.
|
One suggestion:
Have you checked that your environment has a JAVA\_HOME pointing to a JDK and not just a JRE? The JRE doesn't have javac, so there won't be any way to compile the .java files created at runtime by the JSP compiler into .class files.
EDIT: Also, perhaps this [link](http://publib.boulder.ibm.com/infocenter/wasinfo/v6r0/index.jsp?topic=/com.ibm.websphere.zseries.doc/info/zseries/ae/cweb_jspengine.html) may help?
|
WebSphere App Server Not Compiling JSP/Tag Libs
|
[
"",
"java",
"jsp",
"websphere",
"jsp-tags",
""
] |
I'm relatively new to threading in Java and I've noticed that every time I use Thread.sleep() I have to catch InterruptedException.
What kind of behaviour causes this, and in simple applications where I have a monitor thread, can I just ignore the exception?
|
Well, if some other thread calls Thread.interrupt() while the thread is sleeping, you'll get the exception. And yes, you can probably just put try..catch around the sleep() and ignore it ;)
|
It happens when something calls [interrupt()](http://java.sun.com/javase/6/docs/api/java/lang/Thread.html#interrupt()) on the thread. [This article by Brian Goetz](http://www.ibm.com/developerworks/java/library/j-jtp05236.html) explains the interruption mechanism and how you should handle InterruptedExceptions:
> "The most common response to InterruptedException is to swallow it -- catch it and do nothing (or perhaps log it, which isn't any better) -- as we'll see later in Listing 4. Unfortunately, this approach throws away important information about the fact that an interrupt occurred, which could compromise the application's ability to cancel activities or shut down in a timely manner."
>
> "If you catch InterruptedException but cannot rethrow it, you should preserve evidence that the interruption occurred [...]. This task is accomplished by calling interrupt() to "reinterrupt" the current thread."
|
What kind of behaviour causes an interrupted exception?
|
[
"",
"java",
"multithreading",
"exception",
""
] |
Is there any IDE (like VS) with drag and drop support for building python GUI, connecting to dbs etc?
Even though I am an emacs guy, I find it much easier to create GUIs with VS.
|
For GUI only, I find VisualWx (<http://visualwx.altervista.org/>) to be very good for designing wxPython apps under Windows.
For GUI + database, dabo (<http://dabodev.com/>) is probably a good answer.
|
The short answer is "no". There is not a swiss-army-knife like IDE that is both a full-featured Python code-editor and a full-featured WYSIWYG GUI editor. However, there are several stand-alone tools that make creating a GUI easier and there are a myriad of code editors, so if you can handle having two windows open, then you can accomplish what you are trying to.
As for stand-alone GUI editors, which you choose is going to depend on what library you choose to develop your GUI with. I would recommend using [GTK+](http://www.gtk.org/), which binds to Python via [PyGtk](http://www.pygtk.org/) and has the [Glade](http://glade.gnome.org/) GUI designer. I believe that there are other GUI libraries for Python that have WYSIWYG designers (Qt, Tkinter, wxWindows, etc.), but GTK+ is the one I have the most experience with so I will leave the others for other commentators.
Note, however, that the designer in this case is not at all language dependent. It just spits out a .glade file that could be loaded into any language that has GTK+ bindings. If you are looking for a designer that produces raw Python code (like the Code-Behind model that VS.Net uses), then I am not aware of any.
As for general code-editing IDE's (that do not include a GUI designer), there are [many](http://wiki.python.org/moin/IntegratedDevelopmentEnvironments), of which [PyDev](http://pydev.sourceforge.net/)/[Eclipse](http://www.eclipse.org/) is probably the most Visual Studio-like.
(Revised for clarity.)
|
With what kind of IDE (if any) you build python GUI projects?
|
[
"",
"python",
"user-interface",
"ide",
""
] |
I'm trying to duplicate the effect used in the Firefox search box where, if the search field does not have focus ( the user has not clicked inside of it ), it just says *Google* in gray text. Then, when the user clicks in the box, the text is removed and they can fill in their search term.
I want to use this to provide example field data for a web form.
JQuery syntax would be preferable to plain javascript, but plain JS would be fine too.
Thanks SO Hive Mind!
|
```
<style type='text/css'>
input.ghost { color: #CCC; }
input.normal { color: #000; }
</style>
<script type='text/javascript'>
function addTextHint(elem, hintText)
{
if (elem.value == '')
{
elem.value = hintText;
elem.className = 'ghost';
}
elem.onfocus = function ()
{
if (elem.value == hintText)
{
elem.value = '';
elem.className = 'normal';
}
}
elem.onblur = function ()
{
if (elem.value == '')
{
elem.value = hintText;
elem.className = 'ghost';
}
}
}
addTextHint(document.getElementById('foobar'),'Google');
</script>
```
Just whipped this up for you. jQuery would make it smaller I'm sure, but I don't use it.
|
This technique is so commonly used that it's now explicitly supported in the HTML spec through the placeholder attribute of the input tag:
[HTML placeholder Attribute](http://www.w3schools.com/tags/att_input_placeholder.asp)
[It's already supported in most browsers](http://caniuse.com/input-placeholder) and for older ones there is a jquery plugin providing shim implementation that can be found [here](https://github.com/mathiasbynens/jquery-placeholder).
For styling the placeholder text, refer to this (live examples included):
[HTML5 Placeholder Styling with CSS](http://davidwalsh.name/html5-placeholder-css)
|
Show 'ghosted' example text in a field, and then clear it onblur
|
[
"",
"javascript",
"jquery",
"html",
"forms",
"field",
""
] |
I'm mostly a .Net person at the moment, but I've been playing with Java some lately-- exploring what's out there.
Now I'm looking for the Java equivalent of WPF. I know I could find an OpenGL library or two out there, but that's not really as rich or simple as the WPF system.
|
I think a combination of JavaFX, Swing, Java2D, and Java's browser-based JRE comprise the solutions that WPF provides:
* JavaFX applications (actually, any Java app) can run in the browser or on a desktop
* JavaFX provides high-end video support
* JavaFX provides for scripted animations and visual special effects
* Swing provides UI capabilities, and can be used in both Java and JavaFX
* Java2D, which provides the underpinnings for all drawing tasks (including Swing), takes advantage of hardware acceleration and DirectX support
* The JRE on the desktop or the browser enable Java applications to be deployed to multiple environments (including other screens, like set-top boxes or phones)
|
I've programmed Aqua, Macintosh QuickDraw, Windows GDI and GDI+, Qt, and .NET WinForms, and WPF is by far the most sophisticated API I've used. Although Java has a pretty capable feature set that's better than preceding technologies such as Swing, it's no match for WPF. It solves some major problems that have plagued graphics programming. If you're coming from the HTML/JS world, it is easy to learn, but if you're coming from the traditional graphics programming world, it's a major paradigm shift. Regardless, it's much easier to learn than CSS/HTML/JS. It's a clean break from the legacy concepts which plague other graphics programming environments.
The biggest strength of WPF is that it's resolution independent. It can scale across devices with little to no modifications. It requires little work to take a screen version of a drawing and output it to a high resolution printer without resolution loss.
It also supports event triggering. UI elements can respond to events of other UI elements or to your applications code, making dynamic interfaces possible. It makes it easy to separate your code from the UI in a way that even HTML/JS can't achieve. Elements can broadcast and listen to events and respond accordingly.
Another strength is its highly object-oriented and declarative capable API. Using XAML, you can easily construct a working interface quickly and efficiently with a few lines. Unlike HTML/JS, it is easier to learn and its output is far more predictable and efficient. You can even program WPF completely in code, but it's generally not worth the minor performance gain. A better method is to compile your Xaml into .NET code.
In addition, the tooling available for WPF is very extensive compared to JavaFX. There are tons of tools including Expression Blend available. There are also numerous tools for taking vector graphics formats such as SVG and Adobe Illustrator and converting them into XAML. Now, designers and programmers can collaborate on desktop publishing in a way that was very difficult to do before.
In summary, WPF is so comprehensive that the Mono team opted not to port it to the Mono code base. They claimed it would take many man years to fully implement a reasonable feature set. If a Mono compatible version of WPF existed, it would make .NET the de facto cross-platform application framework. In fact, it may even supersede HTML/CSS, since it is far more powerful and easier to get your head around. Unfortunately, Microsoft didn't see a business case for a cross-platform enabled WPF. It's why SilverLight/WPF lost to HTML5/JS. Thanks to this mistake, they are now forced to hand over not only the web but part of their desktop to HTML 5/JS.
|
What is Java's answer to WPF?
|
[
"",
"java",
".net",
"wpf",
""
] |
How can I access options that I set in a jQuery Datepicker?
```
$("#testDatePicker").datepicker({
minDate: new Date(2005, 0, 26),
showOn: 'button',
buttonImage: 'js/themes/default/images/calendar.gif',
buttonImageOnly: true
});
var minDate = $("#testDatePicker").?????;
```
|
What about storing the options in a named var instead of an anonymous object?
```
var dpOptions = {minDate: new Date(2005, 0, 26), ...};
$('#testDatePicker').datepicker(dpOptions);
.
.
.
var minDate = dpOptions.minDate;
```
|
If @mausch's solution isn't enough for you, the settings are stored here:
```
$.data($('#testDatePicker')[0], 'datepicker').settings
```
|
Accessing options in jQuery Datepicker
|
[
"",
"javascript",
"jquery",
""
] |
What's the best way to take some plain text (not PHP code) which contains PHP-style variables, and then substitute in the value of the variable. This is kinda hard to describe, so here's an example.
```
// -- myFile.txt --
Mary had a little $pet.
// -- parser.php --
$pet = "lamb";
// open myFile.txt and transform it such that...
$newContents = "Mary had a little lamb.";
```
I've been considering using a regex or perhaps `eval()`, though I'm not sure which would be easiest. This script is only going to be running locally, so any worries regarding security issues and `eval()` do not apply *(I think?)*.
I'll also just add that I can get all the necessary variables into an array by using `get_defined_vars()`:
```
$allVars = get_defined_vars();
echo $pet; // "lamb"
echo $allVars['pet']; // "lamb"
```
|
If it's from a trusted source you can use (dramatic pause) [eval()](http://www.php.net/eval) (gasps of horror from the audience).
```
$text = 'this is a $test'; // single quotes to simulate getting it from a file
$test = 'banana';
$text = eval('return "' . addslashes($text) . '";');
echo $text; // this is a banana
```
|
Regex would be easy enough. And it would not care about things that `eval()` would consider a syntax error.
Here's the pattern to find PHP style variable names.
```
\$\w+
```
I would probably take this general pattern and use a PHP array to look up each match I've found (using `preg_replace_callback()`). That way the regex needs to be applied only once, which is faster in the long run.
```
$allVars = get_defined_vars();
$file = file_get_contents('myFile.txt');
// in a single-quoted string, '\$' reaches the regex engine as an escaped dollar sign;
// note the return value must be captured - preg_replace_callback() does not modify in place
$file = preg_replace_callback('/\$(\w+)/', "find_replacements", $file);
// replace callback function
function find_replacements($match)
{
global $allVars;
if (array_key_exists($match[1], $allVars))
return $allVars[$match[1]];
else
return $match[0];
}
```
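For comparison, the same lookup-table-callback approach in Python (my sketch; `substitute` and its arguments are illustrative names, not from the question):

```python
import re

def substitute(text, variables):
    """Replace $name tokens with values from a dict, leaving unknowns alone."""
    def replace(match):
        name = match.group(1)
        # Fall back to the original token when no variable is defined,
        # mirroring the PHP callback above.
        return str(variables[name]) if name in variables else match.group(0)
    return re.sub(r"\$(\w+)", replace, text)
```

As in the PHP version, the pattern is scanned once and each match is resolved through the dictionary, so unknown tokens survive untouched.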
|
Best way to substitute variables in plain text using PHP
|
[
"",
"php",
"variables",
"substitution",
""
] |
Is it possible using StAX (specifically woodstox) to format the output xml with newlines and tabs, i.e. in the form:
```
<element1>
<element2>
someData
</element2>
</element1>
```
instead of:
```
<element1><element2>someData</element2></element1>
```
If this is not possible in woodstox, is there any other lightweight libs that can do this?
|
Via the JDK: `transformer.setOutputProperty(OutputKeys.INDENT, "yes");`.
|
There is com.sun.xml.txw2.output.IndentingXMLStreamWriter
```
XMLOutputFactory xmlof = XMLOutputFactory.newInstance();
XMLStreamWriter writer = new IndentingXMLStreamWriter(xmlof.createXMLStreamWriter(out));
```
|
StAX XML formatting in Java
|
[
"",
"java",
"xml",
"formatting",
"stax",
"woodstox",
""
] |
When does java let go of a connections to a URL? I don't see a close() method on either URL or URLConnection so does it free up the connection as soon as the request finishes? I'm mainly asking to see if I need to do any clean up in an exception handler.
```
try {
URL url = new URL("http://foo.bar");
URLConnection conn = url.openConnection();
// use the connection
}
catch (Exception e) {
// any clean up here?
}
```
|
It depends on the specific protocol used by the URL. Some maintain persistent connections; others close their connections when you call close() on the InputStream or OutputStream given by the connection. But other than remembering to close the streams you opened from the URLConnection, there is nothing else you can do.
From the javadoc for java.net.URLConnection:
> Invoking the close() methods on the
> InputStream or OutputStream of an
> URLConnection after a request may free
> network resources associated with this
> instance, unless particular protocol
> specifications specify different
> behaviours for it.
|
If you cast to an HttpURLConnection, there is a [disconnect()](http://docs.oracle.com/javase/7/docs/api/java/net/HttpURLConnection.html#disconnect()) method. If the connection is idle, it will probably disconnect immediately. No guarantees.
|
In Java when does a URL connection close?
|
[
"",
"java",
"exception",
"url",
"connection",
""
] |
How can I go through all external links in a div with javascript, adding (or appending) a class and alt-text?
I guess I need to fetch all objects inside the div element, then check if each object is an `<a>` element, and check if the href attribute starts with http(s):// (it should then be an external link), then add content to the alt and class attributes (if they don't exist, create them; if they do exist, append the wanted values).
But, how do I do this in code?
|
This one is tested:
```
<style type="text/css">
.AddedClass
{
background-color: #88FF99;
}
</style>
<script type="text/javascript">
window.onload = function ()
{
var re = /^(https?:\/\/[^\/]+).*$/;
var currentHref = window.location.href.replace(re, '$1');
var reLocal = new RegExp('^' + currentHref.replace(/\./g, '\\.'));
var linksDiv = document.getElementById("Links");
if (linksDiv == null) return;
var links = linksDiv.getElementsByTagName("a");
for (var i = 0; i < links.length; i++)
{
var href = links[i].href;
if (href == '' || reLocal.test(href) || !/^http/.test(href))
continue;
if (links[i].className != undefined)
{
links[i].className += ' AddedClass';
}
else
{
links[i].className = 'AddedClass';
}
if (links[i].title != undefined && links[i].title != '')
{
links[i].title += ' (outside link)';
}
else
{
links[i].title = 'Outside link';
}
}
}
</script>
<div id="Links">
<a name="_Links"></a>
<a href="/foo.asp">FOO</a>
<a href="ftp://FTP.org/FILE.zip">FILE</a>
<a href="http://example.com/somewhere.html">SomeWhere</a>
<a href="http://example.com/somewhere2.html" class="Gah">SomeWhere 2</a>
<a href="http://example.com/somewhere3.html" title="It goes somewhere">SomeWhere 3</a>
<a href="https://another-example.com/elsewhere.php?foo=bar">ElseWhere 1</a>
<a href="https://another-example.com/elsewhere.php?foo=boz" class="Doh">ElseWhere 2</a>
<a href="https://another-example.com/elsewhere.php?foo=rad" title="It goes elsewhere">ElseWhere 3</a>
<a href="deep/below/bar.asp">BAR</a>
<a href="javascript:ShowHideElement('me');">Show/Hide</a>
</div>
```
If you are on an account on a shared server, like <http://big-server.com/~UserName/>, you might want to hard-code the URL to go beyond the top level. On the other hand, you might want to alter the RE if you want <http://foo.my-server.com> and <http://bar.my-server.com> marked as local.
[UPDATE] Improved robustness after good remarks...
I don't highlight FTP or other protocols, they probably deserve a distinct routine.
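The external/local check itself is pure string logic, so it can be pulled out into a small helper and exercised on its own. A sketch (the function name and sample origin are made up; a plain prefix test like this has the same shared-server caveat noted above):

```javascript
// Returns true when href is an http(s) link that points outside
// the given origin (e.g. "http://my-site.com").
function isExternalLink(href, origin) {
    if (!/^https?:\/\//.test(href)) {
        return false; // relative links, ftp:, javascript: etc. are not "external"
    }
    return href.indexOf(origin) !== 0;
}

// Example checks against a made-up origin:
console.log(isExternalLink('http://example.com/page.html', 'http://my-site.com')); // true
console.log(isExternalLink('http://my-site.com/foo.asp', 'http://my-site.com'));   // false
console.log(isExternalLink('ftp://FTP.org/FILE.zip', 'http://my-site.com'));       // false
console.log(isExternalLink('javascript:void(0)', 'http://my-site.com'));           // false
```

A real page would compute the origin from window.location.href, as the snippet above does.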
|
I think something like this could be a starting point:
```
var links = document.getElementsByTagName("a"); //use div object here instead of document
for (var i=0; i<links.length; i++)
{
if (links[i].href.substring(0, 4) == 'http') // matches both http and https
{
links[i].setAttribute('title', 'abc');
links[i].setAttribute('class', 'abc');
links[i].setAttribute('className', 'abc');
}
}
```
you could also loop through all the A elements in the document, and check the parent to see if the div is the one you are looking for
|
Editing all external links with javascript
|
[
"",
"javascript",
"html",
"href",
""
] |
I need to read the account number from a Maestro/MasterCard with a smart card reader. I am using Java 1.6 and its javax.smartcardio package. I need to send an APDU command which will ask the EMV application stored on the card's chip for the PAN. The problem is, I cannot find anywhere the byte array needed to construct an APDU command that returns this data...
|
You shouldn't need to wrap the APDU further. The API layer should take care of that.
It looks like the 0x6D00 response just means that the application did not support the INS.
Just troubleshooting now, but you did start out by selecting the MasterCard application, right?
I.e. something like this:
```
byte[] selectApplication(CardChannel channel) throws CardException {
byte[] masterCardRid = new byte[]{(byte) 0xA0, 0x00, 0x00, 0x00, 0x04};
CommandAPDU command = new CommandAPDU(0x00, 0xA4, 0x04, 0x00, masterCardRid);
ResponseAPDU response = channel.transmit(command);
return response.getData();
}
```
|
here is some working example:
```
CardChannel channel = card.getBasicChannel();
byte[] selectMaestro={(byte)0x00, (byte)0xA4,(byte)0x04,(byte)0x00 ,(byte)0x07 ,(byte)0xA0 ,(byte)0x00 ,(byte)0x00 ,(byte)0x00 ,(byte)0x04 ,(byte)0x30 ,(byte)0x60 ,(byte)0x00};
byte[] getProcessingOptions={(byte)0x80,(byte)0xA8,(byte)0x00,(byte)0x00,(byte)0x02,(byte)0x83,(byte)0x00,(byte)0x00};
byte[] readRecord={(byte)0x00,(byte)0xB2,(byte)0x02,(byte)0x0C,(byte)0x00};
ResponseAPDU r=null;
try {
ATR atr = card.getATR(); // reset the card
CommandAPDU capdu=new CommandAPDU( selectMaestro );
r=card.getBasicChannel().transmit( capdu );
capdu=new CommandAPDU(getProcessingOptions);
r=card.getBasicChannel().transmit( capdu );
capdu=new CommandAPDU(readRecord);
r=card.getBasicChannel().transmit( capdu );
```
This works with a Maestro card; I can read the PAN. But now I need to read a MasterCard's PAN, and I do not know whether I should change the read-record APDU or the select-application APDU. Is anyone familiar with APDUs?
|
How do I read the PAN from an EMV SmartCard from Java
|
[
"",
"java",
"smartcard",
"apdu",
"emv",
""
] |
Is it possible to call a constructor from another (within the same class, not from a subclass)? If yes how? And what could be the best way to call another constructor (if there are several ways to do it)?
|
Yes, it is possible:
```
public class Foo {
private int x;
public Foo() {
this(1);
}
public Foo(int x) {
this.x = x;
}
}
```
To chain to a particular superclass constructor instead of one in the same class, use `super` instead of `this`. Note that **you can only chain to one constructor**, and **it has to be the first statement in your constructor body**.
See also [this related question](https://stackoverflow.com/questions/284896), which is about C# but where the same principles apply.
|
Using `this(args)`. The preferred pattern is to work from the smallest constructor to the largest.
```
public class Cons {
public Cons() {
// A no arguments constructor that sends default values to the largest
this(madeUpArg1Value,madeUpArg2Value,madeUpArg3Value);
}
public Cons(int arg1, int arg2) {
// An example of a partial constructor that uses the passed in arguments
// and sends a hidden default value to the largest
this(arg1,arg2, madeUpArg3Value);
}
// Largest constructor that does the work
public Cons(int arg1, int arg2, int arg3) {
this.arg1 = arg1;
this.arg2 = arg2;
this.arg3 = arg3;
}
}
```
You can also use a more recently advocated approach of valueOf or just "of":
```
public class Cons {
public static Cons newCons(int arg1,...) {
// This function is commonly called valueOf, like Integer.valueOf(..)
// More recently called "of", like EnumSet.of(..)
Cons c = new Cons(...);
c.setArg1(....);
return c;
}
}
```
To call a super class, use `super(someValue)`. The call to super must be the first call in the constructor or you will get a compiler error.
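A minimal sketch of chaining to a superclass constructor (class names made up):

```java
class Base {
    final int value;

    Base(int value) {
        this.value = value;
    }
}

class Derived extends Base {
    Derived() {
        // Must be the first statement in the constructor.
        super(42);
    }
}

public class SuperDemo {
    public static void main(String[] args) {
        System.out.println(new Derived().value); // prints 42
    }
}
```

If you write neither this(...) nor super(...), the compiler inserts an implicit call to the superclass's no-argument constructor.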
|
How do I call one constructor from another in Java?
|
[
"",
"java",
"constructor",
""
] |
I'm working with Forms Authentication for the first time. I'm using an example from the web to learn, and I included this in my web.config:
```
<authentication mode="Forms">
<forms name="MYWEBAPP.ASPXAUTH" loginUrl="Login.aspx" protection="All" path="/"/>
</authentication>
<authorization>
<deny users="?"/>
</authorization>
```
Then I created a page for logging in, "login.aspx", and coded this on a button, just to start:
```
private void btnLogin_Click(Object sender, EventArgs e)
{
// Initialize FormsAuthentication
FormsAuthentication.Initialize();
// Create a new ticket used for authentication
FormsAuthenticationTicket ticket = new FormsAuthenticationTicket(
1, // Ticket version
Username.Value, // Username associated with ticket
DateTime.Now, // Date/time issued
DateTime.Now.AddMinutes(30), // Date/time to expire
true, // "true" for a persistent user cookie
"accountants, seekers, copiers, typers", // User-data, in this case the roles
FormsAuthentication.FormsCookiePath);// Path cookie valid for
// Encrypt the cookie using the machine key for secure transport
string hash = FormsAuthentication.Encrypt(ticket);
HttpCookie cookie = new HttpCookie(
FormsAuthentication.FormsCookieName, // Name of auth cookie
hash); // Hashed ticket
// Set the cookie's expiration time to the tickets expiration time
if (ticket.IsPersistent) cookie.Expires = ticket.Expiration;
// Add the cookie to the list for outgoing response
Response.Cookies.Add(cookie);
}
```
Also I coded in Global.asax;
```
protected void Application_AuthenticateRequest(object sender, EventArgs e)
{
if(HttpContext.Current.User != null)
{
if(HttpContext.Current.User.Identity.IsAuthenticated)
{
if (HttpContext.Current.User.Identity is FormsIdentity)
{
FormsIdentity id = (FormsIdentity)HttpContext.Current.User.Identity;
FormsAuthenticationTicket ticket = id.Ticket;
// Get the stored user-data, in this case, our roles
string userData = ticket.UserData;
string[] roles = userData.Split(',');
HttpContext.Current.User = new GenericPrincipal(id, roles);
}
}
}
}
```
And finally, on another page, I tried to confirm the roles acquired:
```
protected void Page_Load(object sender, EventArgs e)
{
string str = null;
if (User.IsInRole("seekers"))
{
str += " seekers ";
}
if (User.IsInRole("accountants"))
{
str += " accountants ";
}
if (User.IsInRole("copiers"))
{
str += "copiers";
}
Response.Write(str);
}
```
But something strange happens: it only writes "accountants" (note that "accountants" is the first element in the comma-delimited string) and not the other roles, which were supposed to be shown. I changed the order of the role list in the btnLogin click event, writing "copiers" as the first element, and then only "copiers" was written to the page.
I've tried different combinations, and the first element of the comma-delimited string is always the only one printed.
Sorry for my ignorance, but what is happening here? Are all the roles there or not? Is this normal? Or is there something I'm forgetting?
Thanks in advance.
|
Drop the spaces in
```
"accountants, seekers, copiers, typers"
```
|
Try it without the spaces after the commas:
"accountants,seekers,copiers,typers"
The Split will be creating strings like "accountants", " seekers", " copiers" and " typers" (note the leading spaces), so IsInRole never matches the later roles.
|
IsInRole problem
|
[
"",
"c#",
"asp.net",
""
] |
I have used an update command to update the whole table in an Sql Server 2000 database by mistake. I was actually meaning to update just one row. Now all the 2000 rows contain the update. Is there a way to roll back this change?
|
Unless you started your update with an explicit transaction, no.
However, you might be able to use [Log Rescue](http://www.red-gate.com/products/SQL_Log_Rescue/index.htm). I'm not sure, though, whether it can help in this case.
A better option might be to restore a backup to a new database and merge the old and new tables.
|
No, not unless you have a backup to revert to.
I've done that mistake once. Now I start every manual operation with BEGIN TRAN.
The worst that can happen then is that you forget to COMMIT TRAN and keep a lock on the table.
|
Rolling back an update command in Sql Server 2000
|
[
"",
"sql",
"sql-server",
"sql-server-2000",
"rollback",
""
] |
I found a lot of info about how to debug simple Python programs with Emacs. But what if I want to debug a Django application?
I run the development server and I would like to somehow attach to the process from Emacs and then set breakpoints, etc. Similar to Visual Studio's "attach to process". How to do that?
|
Start pdb like this:
`M-x` `pdb`
Then, start the Django development server:
```
python manage.py runserver --noreload
```
Once you have the (Pdb) prompt, you need to do this:
```
import sys
sys.path.append('/path/to/directory/containing/views.py')
```
Once you've done this, you should be able to set breakpoints normally. Just navigate to the line number you want, and
`C-x` `SPC`
|
This isn't emacs specific, but you can use the Python debugger by adding the following to a Django view function:
`import pdb; pdb.set_trace()`
Now when you run the development server and view the page, your browser will appear to hang or load very slowly - switch over to your console, and you have access to the full debugger. You can inspect AND modify state of your application via an interactive shell - check out the Python documentation for the debugger, or this link for some [Python debugging examples](http://www.onlamp.com/pub/a/python/2005/09/01/debugger.html)
---
If all you need is logging, add the following to your `settings.py`:
```
logging.basicConfig(
level = logging.DEBUG,
format = '%(asctime)s %(levelname)s %(message)s',
filename = '/tmp/mylog.log',
filemode = 'w'
)
```
Now you can log messages to `/tmp/mylog.log` by adding the following to any view function:
```
import logging
logging.debug("Something happened")
```
|
Django debugging with Emacs
|
[
"",
"python",
"django",
"debugging",
"emacs",
""
] |
I'm trying to parse a html page and extract 2 values from a table row.
The html for the table row is as follows: -
```
<tr>
<td title="Associated temperature in (ºC)" class="TABLEDATACELL" nowrap="nowrap" align="Left" colspan="1" rowspan="1">Max Temperature (ºC)</td>
<td class="TABLEDATACELLNOTT" nowrap="nowrap" align="Center" colspan="1" rowspan="1">6</td>
<td class="TABLEDATACELLNOTT" nowrap="nowrap" align="Center" colspan="1" rowspan="1"> 13:41:30</td>
</tr>
```
and the expression I have at the moment is:
```
<tr>[\s]<td[^<]+?>Max Temperature[\w\s]*</td>[\s]
<td[^<]+?>(?<value>([\d]+))</td>[\s]
<td[^<]+?>(?<time>([\d\:]+))</td>[\s]</tr>
```
However I don't seem to be able to extract any matches.
Could anyone point me in the right direction, thanks.
|
Try
```
<tr>\s*
<td[^>]*>.*?</td>\s*
<td[^>]*>\s*(?<value>\d+)\s*</td>\s*
<td[^>]*>\s*(?<time>\d{2}:\d{2}:\d{2})\s*</td>\s*
</tr>\s*
```
|
Parsing HTML reliably using regexp is known to be notoriously difficult.
I think I would be looking for a HTML parsing library, or a "screen scraping" library ;)
If the HTML comes from an unreliable source, you have to be extra careful to handle malicious HTML syntax well. Bad HTML handling is a major source of security attacks.
|
regex for html parsing (in c#)
|
[
"",
"c#",
"html",
"regex",
"parsing",
""
] |
According to MSDN
> The return value specifies the result
> of the message processing; it depends
> on the message sent.
I know it is defined as
```
typedef LONG_PTR LRESULT;
```
Meaning it should be 8 bytes on a 64-bit machine, but it doesn't seem to use all of them!
Does anyone know if it is safe to assume that only the lower 4 bytes are used, and to store it as an `INT` (for example)?
|
No, it's not safe, because the return value is defined by the message being sent and its handler.
If you control both the handler and the message it would be possible, but it's not safe in the general case.
James
|
No it is not safe in general. Do not assume the downcast. Also, useful is to compile your code with /RTCc which ensures inadvertent casts are asserted at runtime.
|
SendMessage API in 64 bit
|
[
"",
"c++",
"c",
"winapi",
""
] |
I have a DLL that must be usable from C etc., so I can't use string objects and the like as I normally would, but I'm not sure how to do this safely...
```
const char *GetString()
{
std::stringstream ss;
ss << "The random number is: " << rand();
return ss.str().c_str();
}
```
could the c string be destroyed when ss falls off the stack? I'm assuming so...
Another option may be to create a new string on the heap, but what is going to deallocate that?
```
const char *GetString()
{
std::stringstream ss;
ss << "The random number is: " << rand();
char *out = new char[ss.str().size() + 1];
strcpy(out, ss.str().c_str());
return out;//is out ever deleted?
}
```
The same goes for pointers to other things as well as strings.
|
The first variant doesn't work because you're returning a pointer into a temporary object, which will get destroyed. (More precisely, you return a pointer into the buffer of the temporary string returned by ss.str(), which will have been freed.) Worse still, it may even appear to work for some time, if nobody overwrites the memory, making it very hard to debug.
Next, you cannot return a const char\* unless you return a pointer to a static string like this:
```
const char *GetString()
{
return "a static string in DATA segment - no need to delete";
}
```
Your second variant has the problem of returning memory allocated with `new` to a C program that will call `free()`. Those may not be compatible.
If you return a string to C, there are 2 ways to do that:
```
char *GetString()
{
std::stringstream ss;
ss << "The random number is: " << rand();
return strdup( ss.str().c_str() ); // allocated in C style with malloc()
}
void foo()
{
char *p = GetString();
printf("string: %s", p));
free( p ); // must not forget to free(), must not use delete()
}
```
or:
```
char *GetString(char *buffer, size_t len)
{
std::stringstream ss;
ss << "The random number is: " << rand();
return strncpy(buffer, ss.str().c_str(), len); // caller allocates memory
}
void foo()
{
char buffer[ 100 ];
printf("string: %s", GetString(buffer, sizeof( buffer ))); // no memory leaks
}
```
depending on your memory-handling policy.
As a rule, you can NOT ever return a pointer or a reference to an automatic object in C++. This is one of the common mistakes analyzed in many C++ books.
|
Over the years C boiled this down to 2 standard methods:
* Caller passes in a buffer. There are three versions of this:
  + Version 1: Pass a buffer and a length.
  + Version 2: Documentation specifies an expected minimum buffer size.
  + Version 3: Pre-flight. The function returns the minimum buffer size required; the caller calls twice, the first time with a NULL buffer.
  + Example: read()
* Use a static buffer that is valid until the next call.
  + Example: tmpname()
A few non-standard ones returned memory that you had to explicitly free:
* strdup() pops to mind. A common extension, but not actually in the standard.
|
Return dynamically allocated memory from C++ to C
|
[
"",
"c++",
"c",
"memory-management",
""
] |