I am introducing automated integration testing to a mature application that until now has only been manually tested.
The app is Windows based and talks to a MySQL database.
What is the best way (including details of any tools recommended) to keep tests independent of each other in terms of the database transactions that will occur?
(Modifications to the app source for this particular purpose are not an option.)
|
How are you verifying the results?
If you need to query the DB (and it sounds like you probably do) for results then I agree with Kris K, except I would endeavor to rebuild the DB after every test case, not just every suite.
This helps avoid dangerous [interacting tests](http://xunitpatterns.com/Erratic%20Test.html).
As for tools, I would recommend [CppUnit](http://cppunit.sourceforge.net/doc/lastest/index.html). You aren't really doing unit tests, but it shouldn't matter, as the xUnit framework gives you the setup and teardown hooks you'll need to automatically build your [test fixture](http://xunitpatterns.com/Fixture%20Setup%20Patterns.html).
Obviously this can result in slow-running tests, depending on your database size, population etc. You may be able to attach/detach databases rather than dropping/rebuilding.
If you're interested in further research, check out [XUnit Test Patterns](http://www.xunitpatterns.com). It's a fine book and a good website for this kind of thing.
And thanks for automating :)
Nick
|
You can dump/restore the database for each test suite, etc. Since you are automating this, it may be something in the setup/teardown functionality.
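The dump/restore-per-test idea can be sketched with an xUnit-style fixture. The sketch below uses Python's `unittest` with an in-memory SQLite database purely as a stand-in for the real MySQL dump/restore step (an assumption for illustration; the pattern, not the tooling, is the point):

```python
import sqlite3
import unittest

# Stand-in for a mysqldump baseline; in the real setup this would be a
# dump file reloaded via the mysql client in setUp().
BASELINE = """
CREATE TABLE orders (id INTEGER, total REAL);
INSERT INTO orders VALUES (1, 9.99);
"""

class OrderTests(unittest.TestCase):
    def setUp(self):
        # Rebuild the database from the baseline before EVERY test case,
        # so no test can observe another test's writes.
        self.db = sqlite3.connect(":memory:")
        self.db.executescript(BASELINE)

    def tearDown(self):
        self.db.close()

    def test_insert(self):
        self.db.execute("INSERT INTO orders VALUES (2, 5.00)")
        count = self.db.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
        self.assertEqual(count, 2)

    def test_starts_from_baseline(self):
        # Sees a fresh copy regardless of what test_insert did.
        count = self.db.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
        self.assertEqual(count, 1)
```

With a real MySQL instance, `setUp` would instead shell out to the `mysql` client with a baseline dump, which is slower but follows the same shape.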
|
Automated integration testing a C++ app with a database
|
[
"c++",
"database",
"automated-tests",
"integration-testing"
] |
I have a function where I need to do something to a string. I need the function to return a boolean indicating whether or not the operation succeeded, and I also need to return the modified string.
In C#, I would use an out parameter for the string, but there is no equivalent in Python. I'm still very new to Python and the only thing I can think of is to return a tuple with the boolean and modified string.
Related question: [Is it pythonic for a function to return multiple values?](https://stackoverflow.com/questions/61605/is-it-pythonic-for-a-function-to-return-multiple-values)
|
```
def f(in_str):
    out_str = in_str.upper()
    return True, out_str  # Creates tuple automatically

succeeded, b = f("a")  # Automatic tuple unpacking
```
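A small extension of the idiom above (my addition, not part of the original answer): when the two values deserve names, `collections.namedtuple` keeps call sites readable while still unpacking like a plain tuple:

```python
from collections import namedtuple

# Named fields instead of an anonymous (bool, str) pair
Result = namedtuple("Result", ["succeeded", "value"])

def f(in_str):
    return Result(True, in_str.upper())

r = f("a")
print(r.succeeded, r.value)   # -> True A
succeeded, value = f("a")     # still unpacks like a tuple
```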
|
Why not throw an exception if the operation wasn't successful? Personally, I tend to be of the opinion that if you need to return more than one value from a function, you should reconsider if you're doing things the right way or use an object.
But more directly to the point: if you throw an exception, you're forcing the caller to deal with the problem. If you instead return a value that indicates failure, it's entirely possible somebody won't check the value and will end up with some potentially hard-to-debug errors.
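A minimal sketch of the exception-based alternative described above (the function name and error message are illustrative):

```python
def process(s):
    # Raise instead of returning a success flag: callers cannot
    # silently ignore the failure.
    if not s:
        raise ValueError("cannot process an empty string")
    return s.upper()

print(process("abc"))  # -> ABC

try:
    process("")
except ValueError as exc:
    print("failed:", exc)
```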
|
What's the best way to return multiple values from a function?
|
[
"python",
"variables",
"return"
] |
While answering [Static class variables in Python](https://stackoverflow.com/questions/68645/python-static-variable#81002)
I noticed that the PythonWin PyWin32 build 209.2 interpreter seems to evaluate the constructor twice:
```
PythonWin 2.5 (r25:51908, Mar 9 2007, 17:40:28) [MSC v.1310 32 bit (Intel)] on win32.
Portions Copyright 1994-2006 Mark Hammond - see 'Help/About PythonWin' for further copyright information.
>>> class X:
...     l = []
...     def __init__(self):
...         self.__class__.l.append(1)
...
>>> X().l
[1, 1]
>>>
```
while the python interpreter does the right thing
```
C:\>python
ActivePython 2.5.0.0 (ActiveState Software Inc.) based on
Python 2.5 (r25:51908, Mar 9 2007, 17:40:28) [MSC v.1310 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> class X:
...     l = []
...     def __init__(self):
...         self.__class__.l.append(1)
...
>>> X().l
[1]
>>>
```
|
My guess is as follows. The PythonWin editor offers autocomplete for an object, i.e. when you type `myobject.` it offers a little popup of all the available method names. So I think when you type `X().` it's creating an instance of `X` in the background and doing a `dir` or similar to find out the attributes of the object.
So the constructor is only being run *once for each object* but to give you the interactivity it's creating objects silently in the background without telling you about it.
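The effect is easy to reproduce outside PythonWin: because `l` is shared by the class, every construction, visible or hidden, appends to the same list (Python 3 syntax below):

```python
class X:
    l = []  # shared by all instances of the class
    def __init__(self):
        type(self).l.append(1)

X()
print(X.l)  # -> [1]
X()         # a second construction - even a silent one - appends again
print(X.l)  # -> [1, 1]
```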
|
Dave Webb is correct, and you can see this by adding a print statement:
```
>>> class X:
...     l = []
...     def __init__(self):
...         print 'inited'
...         self.__class__.l.append(1)
...
```
Then as soon as you type the period in `X().` it prints `inited` prior to offering you the completion popup.
|
PythonWin's python interactive shell calling constructors twice?
|
[
"python",
"python-2.x",
"activestate"
] |
How do I load a Python module given its full path?
Note that the file can be anywhere in the filesystem where the user has access rights.
---
**See also:** [How to import a module given its name as string?](https://stackoverflow.com/questions/301134)
|
Suppose `MyClass` is in the module `module.name`, defined in the file `/path/to/file.py`. Below is how to import `MyClass` from this module.
For Python 3.5+ use ([docs](https://docs.python.org/3/library/importlib.html#importing-a-source-file-directly)):
```
import importlib.util
import sys
spec = importlib.util.spec_from_file_location("module.name", "/path/to/file.py")
foo = importlib.util.module_from_spec(spec)
sys.modules["module.name"] = foo
spec.loader.exec_module(foo)
foo.MyClass()
```
For Python 3.3 and 3.4 use:
```
from importlib.machinery import SourceFileLoader
foo = SourceFileLoader("module.name", "/path/to/file.py").load_module()
foo.MyClass()
```
(Note that `load_module()` has been deprecated since Python 3.4.)
For Python 2 use:
```
import imp
foo = imp.load_source('module.name', '/path/to/file.py')
foo.MyClass()
```
There are equivalent convenience functions for compiled Python files and DLLs.
See also <http://bugs.python.org/issue21436>.
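As a self-contained check of the Python 3.5+ recipe, one can write a throwaway module to a temporary path and load it (the file name `dynmod.py` and the `greet` function are made up for this demo):

```python
import importlib.util
import os
import sys
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "dynmod.py")
    with open(path, "w") as fh:
        fh.write("def greet():\n    return 'hello'\n")

    # Same steps as the accepted recipe, applied to the temp file
    spec = importlib.util.spec_from_file_location("dynmod", path)
    mod = importlib.util.module_from_spec(spec)
    sys.modules["dynmod"] = mod
    spec.loader.exec_module(mod)

    print(mod.greet())  # -> hello
```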
|
The advantage of adding a path to sys.path (over using imp) is that it simplifies things when importing more than one module from a single package. For example:
```
import sys
# the mock-0.3.1 dir contains testcase.py, testutils.py & mock.py
sys.path.append('/foo/bar/mock-0.3.1')
from testcase import TestCase
from testutils import RunTests
from mock import Mock, sentinel, patch
```
|
How can I import a module dynamically given the full path?
|
[
"python",
"python-import",
"python-module"
] |
Given:
```
e = 'a' + 'b' + 'c' + 'd'
```
How do I write the above in two lines?
```
e = 'a' + 'b' +
'c' + 'd'
```
|
What is the line? You can just have arguments on the next line without any problems:
```
a = dostuff(blahblah1, blahblah2, blahblah3, blahblah4, blahblah5,
            blahblah6, blahblah7)
```
Otherwise you can do something like this:
```
if (a == True and
    b == False):
```
or with explicit line break:
```
if a == True and \
   b == False:
```
Check the [style guide](http://www.python.org/dev/peps/pep-0008/) for more information.
Using parentheses, your example can be written over multiple lines:
```
a = ('1' + '2' + '3' +
     '4' + '5')
```
The same effect can be obtained using explicit line break:
```
a = '1' + '2' + '3' + \
    '4' + '5'
```
Note that the style guide says that using the implicit continuation with parentheses is preferred, but in this particular case just adding parentheses around your expression is probably the wrong way to go.
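Applied to the question's exact snippet, the parenthesized form looks like this; note also that adjacent string *literals* are concatenated automatically, so the `+` between them is optional:

```python
e = ('a' + 'b' +
     'c' + 'd')
assert e == 'abcd'

# Adjacent string literals concatenate implicitly - no '+' needed:
e2 = ('a' 'b'
      'c' 'd')
assert e2 == 'abcd'
```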
|
From *[PEP 8 -- Style Guide for Python Code](http://www.python.org/dev/peps/pep-0008/)*:
> **The preferred way of wrapping long lines is by using Python's implied line continuation inside parentheses, brackets and braces.** Long lines can be broken over multiple lines by wrapping expressions in parentheses. These should be used in preference to using a backslash for line continuation.
> Backslashes may still be appropriate at times. For example, long, multiple with-statements cannot use implicit continuation, so backslashes are acceptable:
> ```
> with open('/path/to/some/file/you/want/to/read') as file_1, \
> open('/path/to/some/file/being/written', 'w') as file_2:
> file_2.write(file_1.read())
> ```
>
> Another such case is with assert statements.
> Make sure to indent the continued line appropriately. The preferred place to break around a binary operator is **after** the operator, not before it. Some examples:
> ```
> class Rectangle(Blob):
>
> def __init__(self, width, height,
> color='black', emphasis=None, highlight=0):
> if (width == 0 and height == 0 and
> color == 'red' and emphasis == 'strong' or
> highlight > 100):
> raise ValueError("sorry, you lose")
> if width == 0 and height == 0 and (color == 'red' or
> emphasis is None):
> raise ValueError("I don't think so -- values are %s, %s" %
> (width, height))
> Blob.__init__(self, width, height,
>                   color, emphasis, highlight)
> ```
PEP8 now recommends the *opposite convention* (for breaking at binary operations) used by mathematicians and their publishers to improve readability.
Donald Knuth's style of breaking **before** a binary operator aligns operators vertically, thus reducing the eye's workload when determining which items are added and subtracted.
From [PEP8: *Should a line break before or after a binary operator?*](http://legacy.python.org/dev/peps/pep-0008/#should-a-line-break-before-or-after-a-binary-operator):
> Donald Knuth explains the traditional rule in his Computers and Typesetting series: "Although formulas within a paragraph always break after binary operations and relations, displayed formulas always break before binary operations"[3].
> Following the tradition from mathematics usually results in more readable code:
> ```
> # Yes: easy to match operators with operands
> income = (gross_wages
>           + taxable_interest
>           + (dividends - qualified_dividends)
>           - ira_deduction
>           - student_loan_interest)
> ```
> In Python code, it is permissible to break before or after a binary operator, as long as the convention is consistent locally. For new code Knuth's style is suggested.
[3]: Donald Knuth's The TeXBook, pages 195 and 196
|
How can I do a line break (line continuation) in Python (split up a long line of source code)?
|
[
"python",
"syntax",
"line-breaks",
"long-lines"
] |
What database should a beginner use in the C language?
Can I use MySQL?
Thanks
|
You can use SQLite, MySQL, PostgreSQL, or any other DBMS with a C language binding.
* [SQLite In 5 Minutes Or Less](http://www.sqlite.org/quickstart.html)
* [MySQL C API](http://dev.mysql.com/doc/refman/5.1/en/c.html)
* [PostgreSQL C API](http://www.postgresql.org/docs/8.1/static/libpq.html)
|
If your question is "What RDBMS do you recommend for a beginner?", I agree that MySQL is not a bad choice. It's easy to install and configure. I think that most RDBMSs can be accessed from C code nowadays.
|
RDBMS for C language newbie?
|
[
"mysql",
"sql",
"c",
"database"
] |
I have an XML that I want to load to objects, manipulate those objects (set values, read values) and then save those XMLs back.
It is important for me to have the XML in the structure (xsd) that I created.
One way to do that is to write my own serializer, but is there a built in support for it or open source in C# that I can use?
|
You can generate serializable C# classes from a schema (xsd) using xsd.exe:
```
xsd.exe dependency1.xsd dependency2.xsd schema.xsd /out:outputDir
```
If the schema has dependencies (included/imported schemas), they must all be included on the same command line.
|
This code (C#, .NET 1.0 onwards) works quite well for serializing most objects to XML (and back).
It does not work for objects containing `ArrayList`s; if possible, stick to plain arrays.
```
using System;
using System.IO;
using System.Text;
using System.Xml.Serialization;

public static string Serialize(object objectToSerialize)
{
    MemoryStream mem = new MemoryStream();
    XmlSerializer ser = new XmlSerializer(objectToSerialize.GetType());
    ser.Serialize(mem, objectToSerialize);
    // UTF-8 here matches the encoding used in Deserialize below
    return Encoding.UTF8.GetString(mem.ToArray());
}

public static object Deserialize(Type typeToDeserialize, string xmlString)
{
    byte[] bytes = Encoding.UTF8.GetBytes(xmlString);
    MemoryStream mem = new MemoryStream(bytes);
    XmlSerializer ser = new XmlSerializer(typeToDeserialize);
    return ser.Deserialize(mem);
}
```
|
How do I map XML to C# objects
|
[
"c#",
"xml",
"serialization",
"xml-serialization"
] |
Also, how do `LEFT OUTER JOIN`, `RIGHT OUTER JOIN`, and `FULL OUTER JOIN` fit in?
|
Assuming you're joining on columns with no duplicates, which is a very common case:
* An inner join of A and B gives the result of A intersect B, i.e. the inner part of a [Venn diagram](http://en.wikipedia.org/wiki/Venn_diagram) intersection.
* An outer join of A and B gives the results of A union B, i.e. the outer parts of a [Venn diagram](http://en.wikipedia.org/wiki/Venn_diagram) union.
**Examples**
Suppose you have two tables, with a single column each, and data as follows:
```
A    B
-    -
1    3
2    4
3    5
4    6
```
Note that (1,2) are unique to A, (3,4) are common, and (5,6) are unique to B.
**Inner join**
An inner join using either of the equivalent queries gives the intersection of the two tables, i.e. the two rows they have in common.
```
select * from a INNER JOIN b on a.a = b.b;
select a.*, b.* from a,b where a.a = b.b;
a | b
--+--
3 | 3
4 | 4
```
**Left outer join**
A left outer join will give all rows in A, plus any common rows in B.
```
select * from a LEFT OUTER JOIN b on a.a = b.b;
select a.*, b.* from a,b where a.a = b.b(+);
a | b
--+-----
1 | null
2 | null
3 | 3
4 | 4
```
**Right outer join**
A right outer join will give all rows in B, plus any common rows in A.
```
select * from a RIGHT OUTER JOIN b on a.a = b.b;
select a.*, b.* from a,b where a.a(+) = b.b;
a | b
-----+----
3 | 3
4 | 4
null | 5
null | 6
```
**Full outer join**
A full outer join will give you the union of A and B, i.e. all the rows in A and all the rows in B. If something in A doesn't have a corresponding datum in B, then the B portion is null, and vice versa.
```
select * from a FULL OUTER JOIN b on a.a = b.b;
a | b
-----+-----
1 | null
2 | null
3 | 3
4 | 4
null | 6
null | 5
```
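The inner and left joins above can be reproduced with Python's built-in `sqlite3` module (a sketch; RIGHT and FULL OUTER JOIN are omitted because SQLite only added them in version 3.39):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (a INT);
    CREATE TABLE b (b INT);
    INSERT INTO a VALUES (1), (2), (3), (4);
    INSERT INTO b VALUES (3), (4), (5), (6);
""")

inner = conn.execute(
    "SELECT * FROM a INNER JOIN b ON a.a = b.b ORDER BY a.a").fetchall()
print(inner)  # -> [(3, 3), (4, 4)]

left = conn.execute(
    "SELECT * FROM a LEFT OUTER JOIN b ON a.a = b.b ORDER BY a.a").fetchall()
print(left)   # -> [(1, None), (2, None), (3, 3), (4, 4)]
```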
|
The Venn diagrams don't really do it for me.
They don't show any distinction between a cross join and an inner join, for example, or more generally show any distinction between different types of join predicate or provide a framework for reasoning about how they will operate.
There is no substitute for understanding the logical processing and it is relatively straightforward to grasp anyway.
1. Imagine a cross join.
2. Evaluate the `on` clause against all rows from step 1 keeping those where the predicate evaluates to `true`
3. (For outer joins only) add back in any outer rows that were lost in step 2.
(NB: In practice the query optimiser may find more efficient ways of executing the query than the purely logical description above but the final result must be the same)
I'll start off with an animated version of a **full outer join**. Further explanation follows.
[](https://i.stack.imgur.com/VUkfU.gif)
---
# Explanation
**Source Tables**

First start with a `CROSS JOIN` (AKA Cartesian Product). This does not have an `ON` clause and simply returns every combination of rows from the two tables.
**SELECT A.Colour, B.Colour FROM A CROSS JOIN B**

Inner and Outer joins have an "ON" clause predicate.
* **Inner Join.** Evaluate the condition in the "ON" clause for all rows in the cross join result. If true return the joined row. Otherwise discard it.
* **Left Outer Join.** Same as inner join, then for any rows in the left table that did not match anything, output these with NULL values for the right table columns.
* **Right Outer Join.** Same as inner join, then for any rows in the right table that did not match anything, output these with NULL values for the left table columns.
* **Full Outer Join.** Same as inner join, then preserve left non-matched rows as in a left outer join and right non-matched rows as in a right outer join.
# Some examples
**SELECT A.Colour, B.Colour FROM A INNER JOIN B ON A.Colour = B.Colour**
The above is the classic equi join.

## Animated Version
[](https://i.stack.imgur.com/kZcvR.gif)
### SELECT A.Colour, B.Colour FROM A INNER JOIN B ON A.Colour NOT IN ('Green','Blue')
The inner join condition need not necessarily be an equality condition, and it need not reference columns from both (or even either) of the tables. Evaluating `A.Colour NOT IN ('Green','Blue')` on each row of the cross join returns:

**SELECT A.Colour, B.Colour FROM A INNER JOIN B ON 1 = 1**
The join condition evaluates to true for all rows in the cross join result so this is just the same as a cross join. I won't repeat the picture of the 16 rows again.
### SELECT A.Colour, B.Colour FROM A LEFT OUTER JOIN B ON A.Colour = B.Colour
Outer Joins are logically evaluated in the same way as inner joins except that if a row from the left table (for a left join) does not join with any rows from the right hand table at all it is preserved in the result with `NULL` values for the right hand columns.

### SELECT A.Colour, B.Colour FROM A LEFT OUTER JOIN B ON A.Colour = B.Colour WHERE B.Colour IS NULL
This simply restricts the previous result to only return the rows where `B.Colour IS NULL`. In this particular case these will be the rows that were preserved as they had no match in the right hand table and the query returns the single red row not matched in table `B`. This is known as an anti semi join.
For this pattern to work correctly, it is important to select a column for the `IS NULL` test that is either not nullable or for which the join condition ensures that any `NULL` values will be excluded; otherwise the query may also bring back rows that simply happen to have a `NULL` value in that column, in addition to the unmatched rows.

### SELECT A.Colour, B.Colour FROM A RIGHT OUTER JOIN B ON A.Colour = B.Colour
Right outer joins act similarly to left outer joins except they preserve non matching rows from the right table and null extend the left hand columns.

### SELECT A.Colour, B.Colour FROM A FULL OUTER JOIN B ON A.Colour = B.Colour
Full outer joins combine the behaviour of left and right joins and preserve the non matching rows from both the left and the right tables.

### SELECT A.Colour, B.Colour FROM A FULL OUTER JOIN B ON 1 = 0
No rows in the cross join match the `1=0` predicate. All rows from both sides are preserved using normal outer join rules with NULL in the columns from the table on the other side.

### SELECT COALESCE(A.Colour, B.Colour) AS Colour FROM A FULL OUTER JOIN B ON 1 = 0
With a minor amendment to the preceding query, one could simulate a `UNION ALL` of the two tables.

### SELECT A.Colour, B.Colour FROM A LEFT OUTER JOIN B ON A.Colour = B.Colour WHERE B.Colour = 'Green'
Note that the `WHERE` clause (if present) logically runs after the join. One common error is to perform a left outer join and then include a `WHERE` clause with a condition on the right table that ends up excluding the non-matching rows. The above ends up performing the outer join...

... And then the `WHERE` clause runs. `NULL = 'Green'` does not evaluate to true, so the row preserved by the outer join ends up discarded (along with the blue one), effectively converting the join back to an inner one.

If the intention was to include only rows from B where Colour is Green and all rows from A regardless the correct syntax would be
### SELECT A.Colour, B.Colour FROM A LEFT OUTER JOIN B ON A.Colour = B.Colour AND B.Colour = 'Green'

## SQL Fiddle
See these examples [run live at SQLFiddle.com](http://sqlfiddle.com/#!17/10d3d/29).
|
What is the difference between "INNER JOIN" and "OUTER JOIN"?
|
[
"sql",
"join",
"inner-join",
"outer-join"
] |
For my C# app, I don't want to always prompt for elevation on application start, but if they choose an output path that is UAC protected then I need to request elevation.
So, how do I check if a path is UAC protected and then how do I request elevation mid-execution?
|
The best way to detect if they are unable to perform an action is to attempt it and catch the `UnauthorizedAccessException`.
However as @[DannySmurf](https://stackoverflow.com/users/941/dannysmurf) [correctly points out](https://stackoverflow.com/questions/17533/request-vista-uac-elevation-if-path-is-protected#17544) you can only elevate a COM object or separate process.
There is a demonstration application within the Windows SDK Cross Technology Samples called [UAC Demo](http://msdn.microsoft.com/en-us/library/aa970890.aspx "MSDN - UAC Sample"). This demonstration application shows a method of executing actions with an elevated process. It also demonstrates how to find out if a user is currently an administrator.
|
Requesting elevation mid-execution requires that you either:
1. Use a COM control that's elevated, which will put up a prompt
2. Start a second process that is elevated from the start.
In .NET, there is currently no way to elevate a running process; you have to do one of the hackery things above, but all that does is give the user the appearance that the current process is being elevated.
The only way I can think of to check if a path is UAC elevated is to try to do some trivial write to it while you're in an un-elevated state, catch the exception, elevate and try again.
|
Request Windows Vista UAC elevation if path is protected?
|
[
"c#",
".net",
"windows-vista",
"uac",
"elevated-privileges"
] |
I'm trying to modify my GreaseMonkey script from firing on window.onload to window.DOMContentLoaded, but this event never fires.
I'm using Firefox 2.0.0.16 / GreaseMonkey 0.8.20080609.
[This](https://stackoverflow.com/questions/59205/enhancing-stackoverflow-user-experience) is the full script that I'm trying to modify, changing:
```
window.addEventListener ("load", doStuff, false);
```
to
```
window.addEventListener ("DOMContentLoaded", doStuff, false);
```
|
So I googled [greasemonkey dom ready](http://www.google.com/search?q=greasemonkey%20dom%20ready) and the [first result](http://www.sitepoint.com/article/beat-website-greasemonkey/) seemed to say that the greasemonkey script is actually running at "DOM ready" so you just need to remove the onload call and run the script straight away.
I removed the *`window.addEventListener ("load", function() {`* and *`}, false);`* wrapping and it worked perfectly. It's **much** more responsive this way, the page appears straight away with your script applied to it and all the unseen questions highlighted, no flicker at all. And there was much rejoicing.... yea.
|
GreaseMonkey scripts are themselves executed on DOMContentLoaded, so it's unnecessary to add a load event handler - just have your script do whatever it needs to immediately.
<http://wiki.greasespot.net/DOMContentLoaded>
|
How to implement "DOM Ready" event in a GreaseMonkey script?
|
[
"javascript",
"firefox",
"greasemonkey"
] |
I have been mulling over writing a peak-fitting library for a while. I know Python fairly well and plan on implementing everything in Python to begin with but envisage that I may have to re-implement some core routines in a compiled language eventually.
IIRC, one of Python's original remits was as a prototyping language; however, Python is pretty liberal in allowing functions, functors, and objects to be passed to functions and methods, whereas I suspect the same is not true of, say, C or Fortran.
What should I know about designing functions/classes which I envisage will have to interface into the compiled language? And how much of these potential problems are dealt with by libraries such as cTypes, bgen, [SWIG](http://www.swig.org/), [Boost.Python](http://www.boost.org/doc/libs/1_35_0/libs/python/doc/index.html), [Cython](http://cython.org/) or [Python SIP](http://www.riverbankcomputing.co.uk/software/sip/intro)?
For this particular use case (a fitting library), I imagine allowing users to define mathematical functions (Gaussian, Lorentzian, etc.) as Python functions which can then be passed to and interpreted by the compiled fitting library. Passing and returning arrays is also essential.
|
Finally a question that I can really put a value answer to :).
I have investigated f2py, boost.python, swig, cython and pyrex for my work (PhD in optical measurement techniques). I used swig extensively, boost.python some and pyrex and cython a lot. I also used ctypes. This is my breakdown:
**Disclaimer**: This is my personal experience. I am not involved with any of these projects.
**swig:**
does not play well with C++. It should, but name-mangling problems in the linking step were a major headache for me on Linux & Mac OS X. If you have C code and want it interfaced to Python, it is a good solution. I wrapped the GTS library for my needs and needed to write basically a C shared library which I could connect to. I would not recommend it.
**Ctypes:**
I wrote a libdc1394 (IEEE camera library) wrapper using ctypes and it was a very straightforward experience. You can find the code at <https://launchpad.net/pydc1394>. It is a lot of work to convert headers to Python code, but then everything works reliably. This is a good way if you want to interface an external library. ctypes is also in the stdlib of Python, so everyone can use your code right away. This is also a good way to play around with a new lib in Python quickly. I can recommend it for interfacing external libs.
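Since the question also asks about passing Python functions into compiled code, here is a minimal ctypes sketch of exactly that: handing a Python comparator to libc's `qsort` (this assumes a standard C library can be located, as on Linux or macOS):

```python
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Signature of the qsort comparator: int (*)(const void *, const void *)
CMPFUNC = ctypes.CFUNCTYPE(ctypes.c_int,
                           ctypes.POINTER(ctypes.c_int),
                           ctypes.POINTER(ctypes.c_int))

def py_cmp(a, b):
    # Plain Python logic, called back from compiled C code
    return a[0] - b[0]

arr = (ctypes.c_int * 5)(5, 1, 4, 2, 3)
libc.qsort(arr, len(arr), ctypes.sizeof(ctypes.c_int), CMPFUNC(py_cmp))
print(list(arr))  # -> [1, 2, 3, 4, 5]
```

The same callback mechanism is what a fitting library would use to evaluate user-supplied peak functions from compiled code.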
**Boost.Python**: Very enjoyable. If you already have C++ code of your own that you want to use in python, go for this. It is very easy to translate c++ class structures into python class structures this way. I recommend it if you have c++ code that you need in python.
**Pyrex/Cython:** Use Cython, not Pyrex. Period. Cython is more advanced and more enjoyable to use. Nowadays, I do everything with Cython that I used to do with SWIG or ctypes. It is also the best way if you have Python code that runs too slow. The process is absolutely fantastic: you convert your Python modules into Cython modules, build them, and keep profiling and optimizing as if it were still Python (no change of tools needed). You can then mix in as much (or as little) C code with your Python code. This is by far faster than having to rewrite whole parts of your application in C; you only rewrite the inner loop.
**Timings**: ctypes has the highest call overhead (~700 ns), followed by boost.python (322 ns), then swig (290 ns). Cython has the lowest call overhead (124 ns) and the best feedback on where it spends its time (cProfile support!). The numbers are from my box calling a trivial function that returns an integer from an interactive shell; module import overhead is therefore not timed, only function call overhead is. It is therefore easiest and most productive to get Python code fast by profiling and using Cython.
**Summary**: For your problem, use Cython ;). I hope this rundown will be useful for some people. I'll gladly answer any remaining question.
---
**Edit**: I forgot to mention: for numerical purposes (that is, connecting to NumPy), use Cython; they have support for it (because they basically develop Cython for this purpose). So this should be another +1 for your decision.
|
I haven't used SWIG or SIP, but I find writing Python wrappers with [boost.python](http://www.boost.org/doc/libs/1_35_0/libs/python/doc/index.html) to be very powerful and relatively easy to use.
I'm not clear on what your requirements are for passing types between C/C++ and python, but you can do that easily by either exposing a C++ type to python, or by using a generic [boost::python::object](http://www.boost.org/doc/libs/1_35_0/libs/python/doc/v2/object.html) argument to your C++ API. You can also register converters to automatically convert python types to C++ types and vice versa.
If you plan use boost.python, the [tutorial](http://www.boost.org/doc/libs/1_35_0/libs/python/doc/tutorial/doc/html/index.html) is a good place to start.
I have implemented something somewhat similar to what you need. I have a C++ function that
accepts a python function and an image as arguments, and applies the python function to each pixel in the image.
```
Image* unary(boost::python::object op, Image& im)
{
    Image* out = new Image(im.width(), im.height(), im.channels());
    for (unsigned int i = 0; i < im.size(); i++)
    {
        // note: assignment, not comparison
        (*out)[i] = extract<float>(op(im[i]));
    }
    return out;
}
```
In this case, `Image` is a C++ object exposed to Python (an image with float pixels), and `op` is a Python-defined function (or really any Python object with a `__call__` attribute). You can then use this function as follows (assuming `unary` is located in a module called `image` that also contains `Image` and a `load` function):
```
import image
im = image.load('somefile.tiff')
double_im = image.unary(lambda x: 2.0*x, im)
```
As for using arrays with boost, I personally haven't done this, but I know the functionality to expose arrays to python using boost is available - [this](http://www.boost.org/doc/libs/1_35_0/libs/python/doc/v2/faq.html#question2) might be helpful.
|
Prototyping with Python code before compiling
|
[
"python",
"swig",
"ctypes",
"prototyping",
"python-sip"
] |
I'm using Eclipse 3.4 (Ganymede) with CDT 5 on Windows.
When the integrated spell checker doesn't know some word, it proposes (among others) the option to add the word to a user dictionary.
If the user dictionary doesn't exist yet, the spell checker offers then to help configuring it and shows the "General/Editors/Text Editors/Spelling" preference pane. This preference pane however states that **"The selected spelling engine does not exist"**, but has no control to add or install an engine.
How can I put a spelling engine in existence?
Update: What solved my problem was to install also the JDT. This solution was brought up on 2008-09-07 and was accepted, but is now missing.
|
Are you using the C/C++ Development Tools exclusively?
The Spellcheck functionality is dependent upon the Java Development Tools being installed also.
The spelling engine is scheduled to be pushed down from JDT to the Platform,
so you can get rid of the Java related bloat soon enough. :)
|
The CDT version of Ganymede apparently shipped improperly configured. After playing around for a while, I have come up with the following steps that fix the problem.
1. Export your Eclipse preferences (File > Export > General > Preferences).
2. Open the exported file in a text editor.
3. Find the line that says
```
/instance/org.eclipse.ui.editors/spellingEngine=org.eclipse.jdt.internal.ui.text.spelling.DefaultSpellingEngine
```
4. Change it to
```
/instance/org.eclipse.ui.editors/spellingEngine=org.eclipse.cdt.internal.ui.text.spelling.CSpellingEngine
```
5. Save the preferences file.
6. Import the preferences back into Eclipse (File > Import > General > Preferences).
You should now be able to access the Spelling configuration page as seen above.
Note: if you want to add a custom dictionary, Eclipse must be able to access and open the file (i.e. it must exist - an empty file will work)
|
Eclipse spelling engine does not exist
|
[
"c++",
"eclipse",
"spell-checking",
"eclipse-3.4",
"eclipse-cdt"
] |
I'm looking for a light version of REST for a Java web application I'm developing.
I've looked at RESTlet (www.restlet.org) and the REST plugin for Struts 2, but I haven't made up my mind. I'm leaning towards RESTlet, as it seems to be lighter.
Has anyone implemented a RESTful layer without any of the frameworks, or with the frameworks?
Any performance issues that you've seen because of the new web layer?
Did the introduction of REST add unmanageable or unreasonable complexity to your project? (Some complexity is understandable, but what I mean is just plain overkilling your design just to add REST)
|
Well, I've used Enunciate quite a bit. It uses simple annotations to provide either REST and/or SOAP endpoints.
<http://enunciate.codehaus.org>
Plus, Ryan Heaton has always provided top-notch support for things, too.
|
I'm a huge fan of JAX-RS - I think they've done a great job with that specification. I use it on a number of projects and its been a joy to work with.
JAX-RS lets you create REST resources using POJOs with simple annotations dealing with the URI mappings, HTTP methods and content negotiation all integrated nicely with dependency injection. There's no complex APIs to learn; just the core REST concepts (URIs, headers/response codes and content negotiation) are required. FWIW JAX-RS is quite Rails-ish from the controller point of view
There are a number of JAX-RS implementations out there - see [this thread for a discussion](https://stackoverflow.com/questions/80799/jax-rs-frameworks).
My personal recommendation is to use [Jersey](http://jersey.java.net/) as its got the biggest, most active community behind it, has the best features at the time of writing (WADL support, implicit views, spring integration, nice REST client API); though if you are using JBoss/SEAM you might find RESTeasy integrates a little better.
|
Lightweight REST library for Java
|
[
"",
"java",
"rest",
""
] |
I like Doxygen to create documentation of C or PHP code. I have an upcoming Python project and I think I remember that Python doesn't have `/* .. */` comments, and also has its own self-documentation facility which seems to be the pythonic way to document.
Since I'm familiar with Doxygen, how can I use it to produce my Python documentation? Is there anything in particular that I need to be aware of?
|
This is [documented on the doxygen website](http://www.doxygen.nl/manual/docblocks.html#pythonblocks), but to summarize here:
You can use doxygen to document your Python code. You can either use the Python documentation string syntax:
```
"""@package docstring
Documentation for this module.
More details.
"""
def func():
"""Documentation for a function.
More details.
"""
pass
```
In which case the comments will be extracted by doxygen, but you won't be able to use any of the [special doxygen commands](http://www.doxygen.nl/manual/commands.html#cmd_intro).
**Or** you can (similar to C-style languages under doxygen) double up the comment marker (`#`) on the first line before the member:
```
## @package pyexample
# Documentation for this module.
#
# More details.
## Documentation for a function.
#
# More details.
def func():
pass
```
In that case, you can use the special doxygen commands. There's no particular Python output mode, but you can apparently improve the results by setting `OPTIMIZE_OUTPUT_JAVA` to `YES`.
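For instance (a tiny sketch of my own, not taken from the doxygen docs), the doubled-`#` style lets you use commands such as `@param` and `@return`:

```
## Adds two numbers.
#  @param a the first addend
#  @param b the second addend
#  @return the sum of a and b
def add(a, b):
    return a + b
```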
Honestly, I'm a little surprised at the difference - it seems like once doxygen can detect the comments in ## blocks or """ blocks, most of the work would be done and you'd be able to use the special commands in either case. Maybe they expect people using """ to adhere to more Pythonic documentation practices and that would interfere with the special doxygen commands?
|
The [doxypy](https://pypi.python.org/pypi/doxypy/) input filter allows you to use pretty much all of Doxygen's formatting tags in a standard Python docstring format. I use it to document a large mixed C++ and Python game application framework, and it's working well.
|
How to document Python code using Doxygen
|
[
"",
"python",
"documentation",
"python-sphinx",
"doxygen",
"docstring",
""
] |
Can someone describe what a symbol table is within the context of C and C++?
|
There are two common and related meaning of symbol tables here.
First, there's the symbol table in your object files. Usually, a C or C++ compiler compiles a single source file into an object file with a .obj or .o extension. This contains a collection of executable code and data that the linker can process into a working application or shared library. The object file has a data structure called a symbol table in it that maps the different items in the object file to names that the linker can understand. If you call a function from your code, the compiler doesn't put the final address of the routine in the object file. Instead, it puts a placeholder value into the code and adds a note that tells the linker to look up the reference in the various symbol tables from all the object files it's processing and stick the final location there.
Second, there's also the symbol table in a shared library or DLL. This is produced by the linker and serves to name all the functions and data items that are visible to users of the library. This allows the system to do run-time linking, resolving open references to those names to the location where the library is loaded in memory.
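As a small illustration of the first kind, consider this C file (my own example); compiling it with `cc -c` and running `nm` on the resulting object file shows which names made it into the symbol table (exact symbol spelling varies by platform):

```
/* symtab_demo.c -- compile with `cc -c symtab_demo.c`, inspect with `nm symtab_demo.o` */

int global_counter = 0;   /* external data symbol: visible to the linker */

static int hidden(void)   /* internal linkage: kept out of the external symbol table */
{
    return 41;
}

int visible(void)         /* external symbol: calls from other object files are
                             resolved to this name by the linker */
{
    return hidden() + 1;
}
```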
If you want to learn more, I suggest John Levine's excellent book [*Linkers and Loaders*](http://books.google.com/books?id=h34d_jr2iikC&dq=levine+linkers+and+loaders&pg=PP1&ots=IxGcKQO03W&sig=dduAcRRhK6E2nMlkTxbNEDpWBTg&hl=en&sa=X&oi=book_result&resnum=1&ct=result).
|
Briefly, it is the mapping of the name you assign a variable to its address in memory, including metadata like type, scope, and size. It is used by the compiler.
That's in general, not just C[++]. Technically, it doesn't always include a direct memory address. It depends on what language, platform, etc. the compiler is targeting.
|
What is a symbol table?
|
[
"",
"c++",
"c",
"symbol-table",
""
] |
Today I had a coworker suggest I refactor my code to use a label statement to control flow through 2 nested for loops I had created. I've never used them before because personally I think they decrease the readability of a program. I am willing to change my mind about using them if the argument is solid enough however. What are people's opinions on label statements?
|
Many algorithms are expressed more easily if you can jump across two loops (or a loop containing a switch statement). Don't feel bad about it. On the other hand, it may indicate an overly complex solution. So stand back and look at the problem.
Some people prefer a "single entry, single exit" approach to all loops. That is to say avoiding break (and continue) and early return for loops altogether. This may result in some duplicate code.
What I would strongly avoid doing is introducing auxiliary variables. Hiding control-flow within state adds to confusion.
Splitting labeled loops into two methods may well be difficult. Exceptions are probably too heavyweight. Try a single entry, single exit approach.
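To make the trade-off concrete, here is a small sketch (names and data are mine) showing a labeled break across two loops next to the method-extraction alternative:

```
public class GridSearch {
    // Labeled break: one jump exits both loops.
    static int[] findLabeled(int[][] grid, int target) {
        int row = -1, col = -1;
        search:
        for (int r = 0; r < grid.length; r++) {
            for (int c = 0; c < grid[r].length; c++) {
                if (grid[r][c] == target) {
                    row = r;
                    col = c;
                    break search;   // leaves the loop labeled "search"
                }
            }
        }
        return row < 0 ? null : new int[] { row, col };
    }

    // Alternative: extract the loops into their own method and return early.
    static int[] findExtracted(int[][] grid, int target) {
        for (int r = 0; r < grid.length; r++) {
            for (int c = 0; c < grid[r].length; c++) {
                if (grid[r][c] == target) {
                    return new int[] { r, c };
                }
            }
        }
        return null;
    }
}
```

Both methods behave identically; the second avoids the label at the cost of one small extra method.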
|
Labels are like goto's: Use them sparingly, and only when they make your code faster **and** more importantly, more understandable,
> e.g., If you are in big loops six levels deep and you encounter a condition that makes the rest of the loop pointless to complete, there's no sense in having 6 extra trap doors in your condition statements to exit out the loop early.
Labels (and goto's) aren't evil, it's just that sometimes people use them in bad ways. Most of the time we are actually trying to write our code so it is understandable for you and the next programmer who comes along. Making it uber-fast is a secondary concern (be wary of premature optimization).
When Labels (and goto's) are misused they make the code less readable, which causes grief for you and the next developer. The compiler doesn't care.
|
Should I avoid using Java Label Statements?
|
[
"",
"java",
"loops",
""
] |
Let me use the following example to explain my question:
```
public string ExampleFunction(string Variable) {
return something;
}
string WhatIsMyName = "Hello World";
string Hello = ExampleFunction(WhatIsMyName);
```
When I pass the variable `WhatIsMyName` to the `ExampleFunction`, I want to be able to get a string of the *original* variable's name. Perhaps something like:
```
Variable.OriginalName.ToString() // == "WhatIsMyName"
```
Is there any way to do this?
|
**No.** I don't think so.
The variable name that you use is for your convenience and readability. The compiler doesn't need it & just chucks it out if I'm not mistaken.
If it helps, you could define a new class called `NamedParameter` with attributes `Name` and `Param`. You then pass this object around as parameters.
|
What you want isn't possible directly but you can use Expressions in C# 3.0:
```
public void ExampleFunction(Expression<Func<string, string>> f) {
Console.WriteLine((f.Body as MemberExpression).Member.Name);
}
ExampleFunction(x => WhatIsMyName);
```
Note that this relies on unspecified behaviour and while it does work in Microsoft’s current C# and VB compilers, **and** in Mono’s C# compiler, there’s no guarantee that this won’t stop working in future versions.
|
How can I get the name of a variable passed into a function?
|
[
"",
"c#",
"asp.net",
".net",
""
] |
Let's say that you have websites www.xyz.com and www.abc.com.
Let's say that a user goes to www.abc.com and they get authenticated through the normal ASP .NET membership provider.
Then, from that site, they get sent (redirected, linked, whatever works) to site www.xyz.com, and the intent of site www.abc.com is to pass that user along with their isAuthenticated status, so that site www.xyz.com does not ask for the credentials of said user again.
What would be needed for this to work? I have some constraints on this though, the user databases are completely separate, it is not internal to an organization, in all regards, it is like passing from stackoverflow.com to google as authenticated, it is that separate in nature. A link to a relevant article will suffice.
|
Try using FormAuthentication by setting the web.config authentication section like so:
```
<authentication mode="Forms">
<forms name=".ASPXAUTH" requireSSL="true"
protection="All"
enableCrossAppRedirects="true" />
</authentication>
```
Generate a machine key. Example: [Easiest way to generate MachineKey – Tips and tricks: ASP.NET, IIS ...](https://blogs.msdn.microsoft.com/amb/2012/07/31/easiest-way-to-generate-machinekey/)
When posting to the other application the authentication ticket is passed as a hidden field. While reading the post from the first app, the second app will read the encrypted ticket and authenticate the user. Here's an example of the page that posts the field:
.aspx:
```
<form id="form1" runat="server">
<div>
<p><asp:Button ID="btnTransfer" runat="server" Text="Go" PostBackUrl="http://otherapp/" /></p>
<input id="hdnStreetCred" runat="server" type="hidden" />
</div>
</form>
```
code-behind:
```
protected void Page_Load(object sender, EventArgs e)
{
FormsIdentity cIdentity = Page.User.Identity as FormsIdentity;
if (cIdentity != null)
{
this.hdnStreetCred.ID = FormsAuthentication.FormsCookieName;
this.hdnStreetCred.Value = FormsAuthentication.Encrypt(((FormsIdentity)User.Identity).Ticket);
}
}
```
Also see the cross app form authentication section in Chapter 5 of this [book](http://www.wrox.com/WileyCDA/WroxTitle/Professional-ASP-NET-2-0-Security-Membership-and-Role-Management.productCd-0764596985.html) from Wrox. It recommends answers like the ones above in addition to providing a homebrew SSO solution.
|
If you are using the built in membership system you can do cross sub-domain authentication with forms auth by using some like this in each web.config.
```
<authentication mode="Forms">
<forms name=".ASPXAUTH" loginUrl="~/Login.aspx" path="/"
protection="All"
domain="datasharp.co.uk"
enableCrossAppRedirects="true" />
</authentication>
```
Make sure that name, path, protection and domain are the same in all web.configs. If the sites are on different machines you will also need to ensure that the machineKey and validation and encryption keys are the same.
|
How do you pass an authenticated session between app domains
|
[
"",
"c#",
"asp.net",
"asp.net-membership",
"membership",
""
] |
Using C++ (and Qt), I need to process a large number of 3D coordinates.
Specifically, when I receive a 3D coordinate (made of 3 doubles), I need to check in a list if this coordinate has already been processed.
If not, then I process it and add it to the list (or container).
The amount of coordinates can become very big, so I need to store the processed coordinates in a container which will ensure that checking if a 3D coordinate is already contained in the container is fast.
I was thinking of using a map of a map of a map, storing the x coordinate, then the y coordinate then the z coordinate, but this makes it quite tedious to use, so I'm actually hoping there is a much better way to do it that I cannot think of.
|
Probably the simplest way to speed up such processing is to store the already-processed points in [Octree](http://en.wikipedia.org/wiki/Octree). Checking for duplication will become close to logarithmic.
Also, make sure you tolerate round-off errors by checking the distance between the points, not the equality of the coordinates.
|
Divide your space into discrete bins. Could be infinitely deep squares, or could be cubes. Store your processed coordinates in a simple linked list, sorted if you like in each bin. When you get a new coordinate, jump to the enclosing bin, and walk the list looking for the new point.
Be wary of floating point comparisons. You need to either turn values into integers (say multiply by 1000 and truncate), or decide how close 2 values are to be considered equal.
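Combining both suggestions -- discrete bins plus an explicit tolerance -- a minimal sketch could look like this (the `kCell` size of 1e-3 is an assumed tolerance, not something from the question):

```
#include <array>
#include <cmath>
#include <cstdint>
#include <set>

struct PointSet {
    static constexpr double kCell = 1e-3;        // assumed tolerance: points closer than this share a cell
    std::set<std::array<std::int64_t, 3>> seen;  // O(log n) insert and lookup

    // Returns true if the point is new (and records it), false if already processed.
    bool insert(double x, double y, double z) {
        std::array<std::int64_t, 3> cell = {
            static_cast<std::int64_t>(std::llround(x / kCell)),
            static_cast<std::int64_t>(std::llround(y / kCell)),
            static_cast<std::int64_t>(std::llround(z / kCell))
        };
        return seen.insert(cell).second;
    }
};
```

An `std::unordered_set` with a custom hash (or an octree, as suggested above) can replace the ordered set if lookups become the bottleneck.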
|
Fastest way to find if a 3D coordinate is already used
|
[
"",
"c++",
"performance",
""
] |
Every time I need to work with dates and/or timestamps in Java I always feel like I'm doing something wrong and spend endless hours trying to find a better way of working with the APIs without having to code my own Date and Time utility classes. Here's a couple of annoying things I just ran into:
* 0-based months. I realize that best practice is to use Calendar.SEPTEMBER instead of 8, but it's annoying that 8 represents September and not August.
* Getting a date without a timestamp. I always need the utility that zeros out the timestamp portion of the date.
* I know there's other issues I've had in the past, but can't recall. Feel free to add more in your responses.
So, my question is ... What third party APIs do you use to simplify Java's usage of Date and Time manipulation, if any? Any thoughts on using [Joda](http://www.joda.org/joda-time/)? Anyone looked closer at JSR-310 Date and Time API?
|
[This post](http://mike-java.blogspot.com/2008/02/java-date-time-api-vs-joda.html) has a good discussion on comparing the Java Date/Time API vs JODA.
I personally just use [Gregorian Calendar](http://docs.oracle.com/javase/8/docs/api/java/util/GregorianCalendar.html) and [SimpleDateFormat](http://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html) any time I need to manipulate dates/times in Java. I've never really had any problems in using the Java API and find it quite easy to use, so have not really looked into any alternatives.
|
# java.time
Java 8 and later now includes the [java.time](http://docs.oracle.com/javase/8/docs/api/java/time/package-summary.html) framework. Inspired by [Joda-Time](http://www.joda.org/joda-time/), defined by [JSR 310](http://jcp.org/en/jsr/detail?id=310), extended by the [ThreeTen-Extra](http://www.threeten.org/threeten-extra/) project. See [the Tutorial](http://docs.oracle.com/javase/tutorial/java/TOC.html).
This framework supplants the old java.util.Date/.Calendar classes. Conversion methods let you convert back and forth to work with old code not yet updated for the java.time types.
[](https://i.stack.imgur.com/lLpLX.png)
The core classes are:
* [`Instant`](http://docs.oracle.com/javase/8/docs/api/java/time/Instant.html)
A moment on the timeline, always in [UTC](https://en.wikipedia.org/wiki/Coordinated_Universal_Time).
* [`ZoneId`](http://docs.oracle.com/javase/8/docs/api/java/time/ZoneId.html)
A time zone. The subclass [`ZoneOffset`](http://docs.oracle.com/javase/8/docs/api/java/time/ZoneOffset.html) includes [a constant for UTC](http://docs.oracle.com/javase/8/docs/api/java/time/ZoneOffset.html#UTC).
* [`ZonedDateTime`](http://docs.oracle.com/javase/8/docs/api/java/time/ZonedDateTime.html) = `Instant` + `ZoneId`
Represents a moment on the timeline adjusted into a specific time zone.
This framework solves the couple of problems you listed.
## 0-based months
Month numbers are 1-12 in java.time.
Even better, an [`Enum`](http://docs.oracle.com/javase/8/docs/api/java/lang/Enum.html) ([`Month`](http://docs.oracle.com/javase/8/docs/api/java/time/Month.html)) provides an object instance for each month of the year. So you need not depend on "magic" numbers in your code like `9` or `10`.
```
if ( theMonth.equals ( Month.OCTOBER ) ) { …
```
Furthermore, that enum includes some handy utility methods such as getting a month’s localized name.
If not yet familiar with Java enums, read the [Tutorial](http://docs.oracle.com/javase/tutorial/java/javaOO/enum.html) and study up. They are surprisingly handy and powerful.
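For example (the outputs noted in comments are what these standard calls return for English):

```
import java.time.Month;
import java.time.format.TextStyle;
import java.util.Locale;

public class MonthName {
    public static void main(String[] args) {
        Month m = Month.of(10);  // months are numbered 1-12, so 10 is October
        System.out.println(m);   // OCTOBER
        System.out.println(m.getDisplayName(TextStyle.FULL, Locale.ENGLISH)); // October
    }
}
```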
## A date without a time
The [`LocalDate`](http://docs.oracle.com/javase/8/docs/api/java/time/LocalDate.html) class represents a date-only value, without time-of-day, without time zone.
```
LocalDate localDate = LocalDate.parse( "2015-01-02" );
```
Note that determining a date requires a time zone. A new day dawns earlier in Paris than in Montréal where it is still ‘yesterday’. The [`ZoneId`](http://docs.oracle.com/javase/8/docs/api/java/time/ZoneId.html) class represents a time zone.
```
LocalDate today = LocalDate.now( ZoneId.of( "America/Montreal" ) );
```
Similarly, there is a [`LocalTime`](http://docs.oracle.com/javase/8/docs/api/java/time/LocalTime.html) class for a time-of-day not yet tied to a date or time zone.
---
# About java.time
The [java.time](http://docs.oracle.com/javase/8/docs/api/java/time/package-summary.html) framework is built into Java 8 and later. These classes supplant the troublesome old [legacy](https://en.wikipedia.org/wiki/Legacy_system) date-time classes such as [`java.util.Date`](https://docs.oracle.com/javase/8/docs/api/java/util/Date.html), [`Calendar`](https://docs.oracle.com/javase/8/docs/api/java/util/Calendar.html), & [`SimpleDateFormat`](http://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html).
The [Joda-Time](http://www.joda.org/joda-time/) project, now in [maintenance mode](https://en.wikipedia.org/wiki/Maintenance_mode), advises migration to the [java.time](http://docs.oracle.com/javase/8/docs/api/java/time/package-summary.html) classes.
To learn more, see the [Oracle Tutorial](http://docs.oracle.com/javase/tutorial/datetime/TOC.html). And search Stack Overflow for many examples and explanations. Specification is [JSR 310](https://jcp.org/en/jsr/detail?id=310).
Where to obtain the java.time classes?
* [**Java SE 8**](https://en.wikipedia.org/wiki/Java_version_history#Java_SE_8), [**Java SE 9**](https://en.wikipedia.org/wiki/Java_version_history#Java_SE_9), and later
+ Built-in.
+ Part of the standard Java API with a bundled implementation.
+ Java 9 adds some minor features and fixes.
* [**Java SE 6**](https://en.wikipedia.org/wiki/Java_version_history#Java_SE_6) and [**Java SE 7**](https://en.wikipedia.org/wiki/Java_version_history#Java_SE_7)
+ Much of the java.time functionality is back-ported to Java 6 & 7 in [***ThreeTen-Backport***](http://www.threeten.org/threetenbp/).
* [**Android**](https://en.wikipedia.org/wiki/Android_(operating_system))
+ The [***ThreeTenABP***](https://github.com/JakeWharton/ThreeTenABP) project adapts *ThreeTen-Backport* (mentioned above) for Android specifically.
+ See [*How to use ThreeTenABP…*](http://stackoverflow.com/q/38922754/642706).
The [**ThreeTen-Extra**](http://www.threeten.org/threeten-extra/) project extends java.time with additional classes. This project is a proving ground for possible future additions to java.time. You may find some useful classes here such as [`Interval`](http://www.threeten.org/threeten-extra/apidocs/org/threeten/extra/Interval.html), [`YearWeek`](http://www.threeten.org/threeten-extra/apidocs/org/threeten/extra/YearWeek.html), [`YearQuarter`](http://www.threeten.org/threeten-extra/apidocs/org/threeten/extra/YearQuarter.html), and [more](http://www.threeten.org/threeten-extra/apidocs/index.html).
|
What's the best way to manipulate Dates and Timestamps in Java?
|
[
"",
"java",
"datetime",
""
] |
So I have a "large" number of "very large" ASCII files of numerical data (gigabytes altogether), and my program will need to process the entirety of it sequentially at least once.
Any advice on storing/loading the data? I've thought of converting the files to binary to make them smaller and for faster loading.
Should I load everything into memory all at once?
If not, what's a good way of loading the data partially?
What are some Java-relevant efficiency tips?
|
> So then what if the processing requires jumping around in the data for multiple files and multiple buffers? Is constant opening and closing of binary files going to become expensive?
I'm a big fan of *'memory mapped i/o'*, aka *'direct byte buffers'*. In Java they are called **[Mapped Byte Buffers](http://docs.oracle.com/javase/8/docs/api/java/nio/MappedByteBuffer.html)** and are part of java.nio. (Basically, this mechanism uses the OS's virtual memory paging system to 'map' your files and present them programmatically as byte buffers. The OS will manage moving the bytes to/from disk and memory auto-magically and very quickly.)
I suggest this approach because a) it works for me, and b) it will let you focus on your algorithm and let the JVM, OS and hardware deal with the performance optimization. All too frequently, they know what is best more so than us lowly programmers. ;)
How would you use MBBs in your context? Just create an MBB for each of your files and read them as you see fit. You will only need to store your results.
BTW: How much data are you dealing with, in GB? If it is more than 3-4GB, then this won't work for you on a 32-bit machine as the MBB implementation is dependent on the addressable memory space of the platform architecture. A 64-bit machine & OS will take you to 1TB or 128TB of mappable data.
If you are thinking about performance, then know Kirk Pepperdine (a somewhat famous Java performance guru.) He is involved with a website, www.JavaPerformanceTuning.com, that has some more MBB details: **[NIO Performance Tips](http://www.javaperformancetuning.com/tips/nio.shtml#REF1)** and other Java performance related things.
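A minimal sketch of the MBB approach (the byte-summing "processing" here is just a stand-in for real work on the data):

```
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MappedScan {
    // Maps the whole file read-only and walks it; the OS pages bytes in on
    // demand, so the file is never copied wholesale onto the Java heap.
    static long byteSum(Path file) throws IOException {
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
            long sum = 0;
            while (buf.hasRemaining()) {
                sum += buf.get() & 0xFF;   // process one byte at a time
            }
            return sum;
        }
    }
}
```

Note that a single `map()` call is limited to `Integer.MAX_VALUE` bytes, so for files beyond ~2GB you would map the file in chunks.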
|
You might want to have a look at the entries in the [Wide Finder Project](http://www.tbray.org/ongoing/When/200x/2007/09/20/Wide-Finder) (do a google search for ["wide finder" java](http://www.google.com.au/search?q="wide+finder"+java)).
The Wide finder involves reading over lots of lines in log files, so look at the Java implementations and see what worked and didn't work there.
|
Advice on handling large data volumes
|
[
"",
"java",
"loading",
"large-files",
"large-data-volumes",
""
] |
I have several applications that are part of a suite of tools that various developers at our studio use. These applications are mainly command line apps that open a DOS cmd shell. These apps in turn start up a GUI application that tracks output and status (via sockets) of these command line apps.
The command line apps can be started when the user is logged in, when their workstation is locked (they fire off a batch file and then immediately lock their workstation), and when they are logged out (via a scheduled task). The problems that I have are with the last two cases.
If any of these apps fire off when the user is locked or logged out, these commands will spawn the GUI window which tracks the output/status. That's fine, but say the user has their workstation locked -- when they unlock their workstation, the GUI isn't visible. It's running in the task list, but it's not visible. The next time these users run some of our command line apps, the GUI doesn't get launched (because it's already running), but because it's not visible on the desktop, users don't see any output.
What I'm looking for is a way to tell from my command line apps if they are running behind a locked workstation or when a user is logged out (via scheduled task) -- basically are they running without a user's desktop visible. If I can tell that, then I can simply not start up our GUI and can prevent a lot of problems.
These apps that I need to test are C/C++ Windows applications.
I hope that this make sense.
|
I found the programmatic answer that I was looking for. It has to do with stations. Apparently anything running on the desktop will run on a station with a particular name. Anything that isn't on the desktop (i.e. a process started by the task manager when logged off or on a locked workstation) will get started with a different station name. Example code:
```
char nameBuffer[256];
DWORD lenNeeded = 0;
HWINSTA dHandle = GetProcessWindowStation();
if ( GetUserObjectInformation(dHandle, UOI_NAME, nameBuffer, sizeof(nameBuffer), &lenNeeded) ) {
    if ( stricmp(nameBuffer, "winsta0") != 0 ) {
        // when we get here, we are not running on the real desktop
        return false;
    }
}
```
If you get inside the 'if' statement, then your process is not on the desktop, but running "somewhere else". I looked at the namebuffer value when not running from the desktop and the names don't mean much, but they are not WinSta0.
Link to the docs [here](http://msdn.microsoft.com/en-us/library/ms681928.aspx).
|
You might be able to use SENS (System Event Notification Services). I've never used it myself, but I'm almost positive it will do what you want: give you notification for events like logon, logoff, screen saver, etc.
I know that's pretty vague, but hopefully it will get you started. A quick google search turned up this, among others: <http://discoveringdotnet.alexeyev.org/2008/02/sens-events.html>
|
Testing running condition of a Windows app
|
[
"",
"c++",
"windows",
"command-line",
""
] |
How can I, as the wiki admin, enter scripting (Javascript) into a Sharepoint wiki page?
I would like to enter a title and, when clicking on it, have a small explanation displayed under it. I have usually done that with JavaScript; any other idea?
|
Assuming you're the administrator of the wiki and are willing to display this on mouseover instead of on click, you don't need javascript at all -- you can use straight CSS. Here's an example of the styles and markup:
```
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
"http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
<title>Test</title>
<style type="text/css">
h1 { padding-bottom: .5em; position: relative; }
h1 span { font-weight: normal; font-size: small; position: absolute; bottom: 0; display: none; }
h1:hover span { display: block; }
</style>
</head>
<body>
<h1>Here is the title!
<span>Here is a little explanation</span>
</h1>
<p>Here is some page content</p>
</body>
</html>
```
With some more involved styles, your tooltip box can look as nice as you'd like.
|
If the wiki authors are wise, there's probably no way to do this.
The problem with user-contributed JavaScript is that it opens the door for all forms of evil-doers to grab data from the unsuspecting.
Let's suppose evil-me posts a script on a public web site:
```
i = new Image();
i.src = 'http://evilme.com/store_cookie_data?c=' + document.cookie;
```
Now I will receive the cookie information of each visitor to the page, posted to a log on my server. And that's just the tip of the iceberg.
|
How to enter Javascript into a wiki page?
|
[
"",
"javascript",
"html",
"sharepoint",
"wiki",
""
] |
Is there a tool or script which easily merges a bunch of [JAR](http://en.wikipedia.org/wiki/JAR_%28file_format%29) files into one JAR file? A bonus would be to easily set the main-file manifest and make it executable.
The concrete case is a [Java restructured text tool](http://jrst.labs.libre-entreprise.org/en/user/functionality.html). I would like to run it with something like:
> java -jar rst.jar
As far as I can tell, it has no dependencies, which indicates that it should be an easy single-file tool, but the downloaded ZIP file contains a lot of libraries.
```
0 11-30-07 10:01 jrst-0.8.1/
922 11-30-07 09:53 jrst-0.8.1/jrst.bat
898 11-30-07 09:53 jrst-0.8.1/jrst.sh
2675 11-30-07 09:42 jrst-0.8.1/readmeEN.txt
108821 11-30-07 09:59 jrst-0.8.1/jrst-0.8.1.jar
2675 11-30-07 09:42 jrst-0.8.1/readme.txt
0 11-30-07 10:01 jrst-0.8.1/lib/
81508 11-30-07 09:49 jrst-0.8.1/lib/batik-util-1.6-1.jar
2450757 11-30-07 09:49 jrst-0.8.1/lib/icu4j-2.6.1.jar
559366 11-30-07 09:49 jrst-0.8.1/lib/commons-collections-3.1.jar
83613 11-30-07 09:49 jrst-0.8.1/lib/commons-io-1.3.1.jar
207723 11-30-07 09:49 jrst-0.8.1/lib/commons-lang-2.1.jar
52915 11-30-07 09:49 jrst-0.8.1/lib/commons-logging-1.1.jar
260172 11-30-07 09:49 jrst-0.8.1/lib/commons-primitives-1.0.jar
313898 11-30-07 09:49 jrst-0.8.1/lib/dom4j-1.6.1.jar
1994150 11-30-07 09:49 jrst-0.8.1/lib/fop-0.93-jdk15.jar
55147 11-30-07 09:49 jrst-0.8.1/lib/activation-1.0.2.jar
355030 11-30-07 09:49 jrst-0.8.1/lib/mail-1.3.3.jar
77977 11-30-07 09:49 jrst-0.8.1/lib/servlet-api-2.3.jar
226915 11-30-07 09:49 jrst-0.8.1/lib/jaxen-1.1.1.jar
153253 11-30-07 09:49 jrst-0.8.1/lib/jdom-1.0.jar
50789 11-30-07 09:49 jrst-0.8.1/lib/jewelcli-0.41.jar
324952 11-30-07 09:49 jrst-0.8.1/lib/looks-1.2.2.jar
121070 11-30-07 09:49 jrst-0.8.1/lib/junit-3.8.1.jar
358085 11-30-07 09:49 jrst-0.8.1/lib/log4j-1.2.12.jar
72150 11-30-07 09:49 jrst-0.8.1/lib/logkit-1.0.1.jar
342897 11-30-07 09:49 jrst-0.8.1/lib/lutinwidget-0.9.jar
2160934 11-30-07 09:49 jrst-0.8.1/lib/docbook-xsl-nwalsh-1.71.1.jar
301249 11-30-07 09:49 jrst-0.8.1/lib/xmlgraphics-commons-1.1.jar
68610 11-30-07 09:49 jrst-0.8.1/lib/sdoc-0.5.0-beta.jar
3149655 11-30-07 09:49 jrst-0.8.1/lib/xalan-2.6.0.jar
1010675 11-30-07 09:49 jrst-0.8.1/lib/xercesImpl-2.6.2.jar
194205 11-30-07 09:49 jrst-0.8.1/lib/xml-apis-1.3.02.jar
78440 11-30-07 09:49 jrst-0.8.1/lib/xmlParserAPIs-2.0.2.jar
86249 11-30-07 09:49 jrst-0.8.1/lib/xmlunit-1.1.jar
108874 11-30-07 09:49 jrst-0.8.1/lib/xom-1.0.jar
63966 11-30-07 09:49 jrst-0.8.1/lib/avalon-framework-4.1.3.jar
138228 11-30-07 09:49 jrst-0.8.1/lib/batik-gui-util-1.6-1.jar
216394 11-30-07 09:49 jrst-0.8.1/lib/l2fprod-common-0.1.jar
121689 11-30-07 09:49 jrst-0.8.1/lib/lutinutil-0.26.jar
76687 11-30-07 09:49 jrst-0.8.1/lib/batik-ext-1.6-1.jar
124724 11-30-07 09:49 jrst-0.8.1/lib/xmlParserAPIs-2.6.2.jar
```
As you can see, it is somewhat desirable to not need to do this manually.
So far I've only tried AutoJar and ProGuard, both of which were fairly easy to get running. It appears that there's some issue with the constant pool in the JAR files.
Apparently jrst is slightly broken, so I'll make a go of fixing it. The [Maven](http://en.wikipedia.org/wiki/Apache_Maven) `pom.xml` file was apparently broken too, so I'll have to fix that before fixing jrst ... I feel like a bug-magnet :-)
---
Update: I never got around to fixing this application, but I checked out [Eclipse](http://en.wikipedia.org/wiki/Eclipse_%28software%29)'s "Runnable JAR export wizard" which is based on a fat JAR. I found this very easy to use for deploying my own code.
Some of the other excellent suggestions might be better for builds in a non-Eclipse environment; one probably should make a nice build using [Ant](http://en.wikipedia.org/wiki/Apache_Ant). (Maven, so far, has just given me pain, but others love it.)
|
Eclipse 3.4 JDT's Runnable JAR export wizard.
In Eclipse 3.5, this has been extended. Now you can chose how you want to treat your referenced JAR files.
|
Ant's `zipfileset` does the job
```
<jar id="files" jarfile="all.jar">
<zipfileset src="first.jar" includes="**/*.java **/*.class"/>
<zipfileset src="second.jar" includes="**/*.java **/*.class"/>
</jar>
```
|
Easiest way to merge a release into one JAR file
|
[
"",
"java",
"jar",
"merge",
"restructuredtext",
"uberjar",
""
] |
The log4j network adapter sends events as a serialised java object. I would like to be able to capture this object and deserialise it in a different language (python). Is this possible?
*NOTE* The network capturing is easy; its just a TCP socket and reading in a stream. The difficulty is the deserialising part
|
Generally, no.
The stream format for Java serialization is defined [in this document](http://java.sun.com/j2se/1.4/pdf/serial-spec.pdf), but you need access to the original class definitions (and a Java runtime to load them into) to turn the stream data back into something approaching the original objects. For example, classes may define writeObject() and readObject() methods to customise their own serialized form.
(**edit:** lubos hasko suggests having a little java program to deserialize the objects in front of Python, but the problem is that for this to work, your "little java program" needs to load the same versions of all the same classes that it might deserialize. Which is tricky if you're receiving log messages from one app, and really tricky if you're multiplexing more than one log stream. Either way, it's not going to be a little program any more. **edit2:** I could be wrong here, I don't know what gets serialized. If it's just log4j classes you should be fine. On the other hand, it's possible to log arbitrary exceptions, and if they get put in the stream as well my point stands.)
It would be much easier to customise the log4j network adapter and replace the raw serialization with some more easily-deserialized form (for example you could use XStream to turn the object into an XML representation)
|
*Theoretically*, it's possible. The Java Serialization, like pretty much everything in Javaland, is standardized. So, you *could* implement a deserializer according to that standard in Python. However, the Java Serialization format is not designed for cross-language use, the serialization format is closely tied to the way objects are represented inside the JVM. While implementing a JVM in Python is surely a fun exercise, it's probably not what you're looking for (-:
There are other (data) serialization formats that are specifically designed to be language agnostic. They usually work by stripping the data formats down to the bare minimum (number, string, sequence, dictionary and that's it) and thus requiring a bit of work on both ends to represent a rich object as a graph of dumb data structures (and vice versa).
Two examples are [JSON (JavaScript Object Notation)](http://JSON.Org/ "JSON (JavaScript Object Notation)") and [YAML (YAML Ain't Markup Language)](http://YAML.Org/ "YAML (YAML Ain't Markup Language)").
[ASN.1 (Abstract Syntax Notation One)](http://ASN1.Elibel.Tm.Fr/en/ "ASN.1 (Abstract Syntax Notation One)") is another data serialization format. Instead of dumbing the format down to a point where it can be easily understood, ASN.1 is self-describing, meaning all the information needed to decode a stream is encoded within the stream itself.
And, of course, [XML (eXtensible Markup Language)](http://W3.Org/XML/ "XML (eXtensible Markup Language)"), will work too, provided that it is not just used to provide textual representation of a "memory dump" of a Java object, but an actual abstract, language-agnostic encoding.
So, to make a long story short: your best bet is to either try to coerce log4j into logging in one of the above-mentioned formats, replace log4j with something that does that or try to somehow intercept the objects before they are sent over the wire and convert them before leaving Javaland.
Libraries that implement JSON, YAML, ASN.1 and XML are available for both Java and Python (and pretty much every programming language known to man).
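For example, if you do coerce log4j into emitting one JSON object per line, the Python side becomes very small. This is a sketch only: it assumes you have reconfigured log4j (via a custom layout or appender) to send newline-delimited JSON, and the field names (`level`, `logger`, `message`) are illustrative choices, not anything log4j defines for you.

```python
# Sketch: consume newline-delimited JSON log records in Python.
# Assumes the sender was changed to emit JSON, one object per line.
import json

def parse_event(line):
    """Turn one JSON log record into a plain dict with normalized fields."""
    event = json.loads(line)
    return {
        "level": event.get("level", "INFO"),
        "logger": event.get("logger", ""),
        "message": event.get("message", ""),
    }

def read_events(lines):
    """Yield parsed events from any iterable of lines
    (a socket's makefile(), an open file, or a plain list)."""
    for line in lines:
        line = line.strip()
        if line:
            yield parse_event(line)
```

On the receiving side you would wrap the TCP socket with `sock.makefile()` and pass it to `read_events`.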
|
Deserialize in a different language
|
[
"",
"java",
"serialization",
"log4j",
""
] |
For debugging and testing I'm searching for a JavaScript shell with auto completion and if possible object introspection (like ipython). The online [JavaScript Shell](http://www.squarefree.com/shell/) is really nice, but I'm looking for something local, without the need for a browser.
So far I have tested the standalone JavaScript interpreters Rhino, SpiderMonkey and Google V8, but none of them has completion. At least Rhino with JLine and SpiderMonkey have some kind of command history via the up/down keys, but nothing more.
Any suggestions?
This question was asked again [here](https://stackoverflow.com/questions/260787/javascript-shell). It might contain an answer that you are looking for.
|
Rhino Shell since 1.7R2 has support for completion as well. You can find more information [here](http://blog.norrisboyd.com/2009/03/rhino-17-r2-released.html).
|
In Windows, you can run this file from the command prompt in cscript.exe, and it provides a simple interactive shell. No completion.
```
// shell.js
// ------------------------------------------------------------------
//
// implements an interactive javascript shell.
//
// from
// http://kobyk.wordpress.com/2007/09/14/a-jscript-interactive-interpreter-shell-for-the-windows-script-host/
//
// Sat Nov 28 00:09:55 2009
//
var GSHELL = (function () {
var numberToHexString = function (n) {
if (n >= 0) {
return n.toString(16);
} else {
n += 0x100000000;
return n.toString(16);
}
};
var line, scriptText, previousLine, result;
return function() {
while(true) {
WScript.StdOut.Write("js> ");
if (WScript.StdIn.AtEndOfStream) {
WScript.Echo("Bye.");
break;
}
line = WScript.StdIn.ReadLine();
scriptText = line + "\n";
if (line === "") {
WScript.Echo(
"Enter two consecutive blank lines to terminate multi-line input.");
do {
if (WScript.StdIn.AtEndOfStream) {
break;
}
previousLine = line;
line = WScript.StdIn.ReadLine();
line += "\n";
scriptText += line;
} while(previousLine != "\n" || line != "\n");
}
try {
result = eval(scriptText);
} catch (error) {
WScript.Echo("0x" + numberToHexString(error.number) + " " + error.name + ": " +
error.message);
}
if (result) {
try {
WScript.Echo(result);
} catch (error) {
WScript.Echo("<<>>");
}
}
result = null;
}
};
})();
GSHELL();
```
If you want, you can augment that with other utility libraries, with a .wsf file. Save the above to "shell.js", and save the following to "shell.wsf":
```
<job>
<reference object="Scripting.FileSystemObject" />
<script language="JavaScript" src="util.js" />
<script language="JavaScript" src="shell.js" />
</job>
```
...where util.js is:
```
var quit = function(x) { WScript.Quit(x);}
var say = function(s) { WScript.Echo(s); };
var echo = say;
var exit = quit;
var sleep = function(n) { WScript.Sleep(n*1000); };
```
...and then run shell.wsf from the command line.
|
JavaScript interactive shell with completion
|
[
"",
"javascript",
""
] |
I.e., a web browser client would be written in C++ !!!
|
There are two choices: Managed C++ (`/clr:oldSyntax`, no longer maintained) or C++/CLI (definitely maintained). You'll want to use `/clr:safe` for in-browser software, because you want the browser to be able to verify it.
|
This was originally known as [Managed C++](http://en.wikipedia.org/wiki/Managed_Extensions_for_C%2B%2B), but as Josh commented, it has been superseded by [C++/CLI](http://en.wikipedia.org/wiki/C%2B%2B/CLI).
|
Is there a way to compile C++ code to Microsoft .Net CIL (bytecode)?
|
[
"",
".net",
"c++",
"browser",
"cil",
""
] |
If you have a statically allocated array, the Visual Studio debugger can easily display all of the array elements. However, if you have an array allocated dynamically and pointed to by a pointer, it will only display the first element of the array when you click the + to expand it. Is there an easy way to tell the debugger, show me this data as an array of type Foo and size X?
|
Yes, simple.
say you have
```
char *a = new char[10];
```
writing in the debugger:
```
a,10
```
would show you the content as if it were an array.
|
There are two methods to view data in an array m4x4:
```
float m4x4[16]={
1.f,0.f,0.f,0.f,
0.f,2.f,0.f,0.f,
0.f,0.f,3.f,0.f,
0.f,0.f,0.f,4.f
};
```
One way is with a Watch window (Debug/Windows/Watch). Add watch =
```
m4x4,16
```
This displays data in a list:
*(screenshot of the Watch window omitted)*
Another way is with a Memory window (Debug/Windows/Memory). Specify a memory start address =
```
m4x4
```
This displays data in a table, which is better for two and three dimensional matrices:
*(screenshot of the Memory window omitted)*
Right-click on the Memory window to determine how the binary data is visualized. Choices are limited to integers, floats and some text encodings.
|
How to display a dynamically allocated array in the Visual Studio debugger?
|
[
"",
"c++",
"c",
"visual-studio",
"debugging",
""
] |
Here's a very simple Prototype example.
All it does is, on window load, an ajax call which sticks some html into a div.
```
<html>
<head>
<script type="text/javascript" src="scriptaculous/lib/prototype.js"></script>
<script type="text/javascript">
Event.observe(window, 'load', function(){
new Ajax.Request('get-table.php', {
method: 'get',
onSuccess: function(response){
$('content').innerHTML = response.responseText;
//At this call, the div has HTML in it
click1();
},
onFailure: function(){
alert('Fail!');
}
});
//At this call, the div is empty
click1();
});
function click1(){if($('content').innerHTML){alert('Found content');}else{alert('Empty div');}}
</script>
</head>
<body><div id="content"></div></body>
</html>
```
The thing that's confusing is the context in which Prototype understands that the div actually has stuff in it.
If you look at the onSuccess part of the ajax call, you'll see that at that point $('content').innerHTML has stuff in it.
However when I check $('content').innerHTML right after the ajax call, it appears to be empty.
This has to be some fundamental misunderstanding on my part. Anyone care to explain it to me?
---
**Edit**
I just want to clarify something. I realize that the Ajax call is asynchronous.
Here's the actual order that things are being executed and why it's confusing to me:
1. The page loads.
2. The Ajax request to get-table.php is made.
3. The call to click1() INSIDE onSuccess happens. I see an alert that the div has content.
4. The call to click1() AFTER the Ajax call happens. I see an alert that the div is empty.
So it's like the code is executing in the order it's written but the DOM is not updating in the same order.
---
**Edit 2**
So the short answer is that putting the code in onSuccess is the correct place.
Another case to consider is the one where you do an Ajax call and then do another Ajax call from the onSuccess of the first call like this:
```
new Ajax.Request('foo.php',{
method: 'get',
onSuccess: function(response){
doAnotherAjaxCall();
}
});
function doAnotherAjaxCall(){
new Ajax.Request('foo.php',{
method: 'get',
onSuccess: function(response){
//Anything that needs to happen AFTER the call to doAnotherAjaxCall() above
//needs to happen here!
}
});
}
```
|
The first letter of AJAX stands for "asynchronous". This means that the AJAX call is performed in the background, i.e. the AJAX request call *immediately returns*. This means that the code immediately after it is normally actually executed *before* the onSuccess handler gets called (and before the AJAX request has even finished).
Taking into account your edited question: in some browsers (e.g. Firefox), alert boxes are not as modal as you might think. Asynchronous code may pop up an alert box even if another one is already open. In that case, the newer alert box (the one from the asynchronous code) gets displayed on top of the older one. This creates the illusion that the asynchronous code got executed first.
|
Without having tried your code: The AJAX call is executed asynchronously. Meaning that your Ajax.Request fires, then goes on to the click1() call that tells you the div is empty. At some point after that the Ajax request is finished and content is actually put into the div. At this point the onSuccess function is executed and you get the content you expected.
|
Problem accessing the content of a div when that content came from an Ajax call
|
[
"",
"javascript",
"dom",
"prototypejs",
""
] |
What is the best way to keep a PHP script running as a daemon, and what's the best way to check if needs restarting.
I have some scripts that need to run 24/7 and for the most part I can run them using [nohup](http://en.wikipedia.org/wiki/Nohup). But if they go down, what's the best way to monitor it so it can be automatically restarted?
|
If you can't use the (proper) init structure to do this (you're on shared hosting, etc.), use cron to run a script (it can be written in whatever language you like) every few minutes that checks to see if they're running, and restarts them if necessary.
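For illustration, a minimal cron-driven watchdog along those lines might look like this (sketched in Python; `PIDFILE` and `DAEMON_CMD` are assumptions you must adapt to how your own script records its PID and how it should be launched):

```python
# Hypothetical watchdog, run from cron every few minutes.
# PIDFILE and DAEMON_CMD below are illustrative assumptions.
import os
import subprocess

PIDFILE = "/var/run/my-worker.pid"                    # assumed pidfile location
DAEMON_CMD = ["nohup", "php", "/path/to/worker.php"]  # assumed launch command

def pid_is_alive(pid):
    """True if a process with this PID exists (signal 0 probes, never kills)."""
    try:
        os.kill(pid, 0)
    except ProcessLookupError:
        return False
    except PermissionError:
        return True   # exists, but owned by another user
    return True

def needs_restart(pidfile=PIDFILE):
    """Restart when the pidfile is missing, unreadable, or names a dead process."""
    try:
        with open(pidfile) as f:
            return not pid_is_alive(int(f.read().strip()))
    except (OSError, ValueError):
        return True

def watchdog():
    if needs_restart():
        subprocess.Popen(DAEMON_CMD)  # relaunch; the daemon rewrites its pidfile
```

This assumes your daemon writes its own pidfile on startup; if it doesn't, you could match on the process name instead (e.g. by scanning `ps` output).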
|
The most elegant solution is [reactPHP](http://reactphp.org).
|
What's the best way to keep a PHP script running as a daemon?
|
[
"",
"php",
"daemon",
""
] |
How can I use the nifty JavaScript date and time widgets that the default admin uses with my custom view?
I have looked through [the Django forms documentation](https://docs.djangoproject.com/en/dev/topics/forms/), and it briefly mentions django.contrib.admin.widgets, but I don't know how to use it?
Here is my template that I want it applied on.
```
<form action="." method="POST">
<table>
{% for f in form %}
<tr> <td> {{ f.name }}</td> <td>{{ f }}</td> </tr>
{% endfor %}
</table>
<input type="submit" name="submit" value="Add Product">
</form>
```
Also, I think it should be noted that I haven't really written a view up myself for this form, I am using a generic view. Here is the entry from the url.py:
```
(r'^admin/products/add/$', create_object, {'model': Product, 'post_save_redirect': ''}),
```
And I am relatively new to the whole Django/MVC/MTV thing, so please go easy...
|
The growing complexity of this answer over time, and the many hacks required, probably ought to caution you against doing this at all. It's relying on undocumented internal implementation details of the admin, is likely to break again in future versions of Django, and is no easier to implement than just finding another JS calendar widget and using that.
That said, here's what you have to do if you're determined to make this work:
1. Define your own `ModelForm` subclass for your model (best to put it in forms.py in your app), and tell it to use the `AdminDateWidget` / `AdminTimeWidget` / `AdminSplitDateTime` (replace 'mydate' etc with the proper field names from your model):
```
from django import forms
from my_app.models import Product
from django.contrib.admin import widgets
class ProductForm(forms.ModelForm):
class Meta:
model = Product
def __init__(self, *args, **kwargs):
super(ProductForm, self).__init__(*args, **kwargs)
self.fields['mydate'].widget = widgets.AdminDateWidget()
self.fields['mytime'].widget = widgets.AdminTimeWidget()
self.fields['mydatetime'].widget = widgets.AdminSplitDateTime()
```
2. Change your URLconf to pass `'form_class': ProductForm` instead of `'model': Product` to the generic `create_object` view (that'll mean `from my_app.forms import ProductForm` instead of `from my_app.models import Product`, of course).
3. In the head of your template, include `{{ form.media }}` to output the links to the Javascript files.
4. And the hacky part: the admin date/time widgets presume that the i18n JS stuff has been loaded, and also require core.js, but don't provide either one automatically. So in your template above `{{ form.media }}` you'll need:
```
<script type="text/javascript" src="/my_admin/jsi18n/"></script>
<script type="text/javascript" src="/media/admin/js/core.js"></script>
```
You may also wish to use the following admin CSS (thanks [Alex](https://stackoverflow.com/questions/38601/using-django-time-date-widgets-in-custom-form/719583#719583) for mentioning this):
```
<link rel="stylesheet" type="text/css" href="/media/admin/css/forms.css"/>
<link rel="stylesheet" type="text/css" href="/media/admin/css/base.css"/>
<link rel="stylesheet" type="text/css" href="/media/admin/css/global.css"/>
<link rel="stylesheet" type="text/css" href="/media/admin/css/widgets.css"/>
```
This implies that Django's admin media (`ADMIN_MEDIA_PREFIX`) is at /media/admin/ - you can change that for your setup. Ideally you'd use a context processor to pass these values to your template instead of hardcoding them, but that's beyond the scope of this question.
This also requires that the URL /my\_admin/jsi18n/ be manually wired up to the django.views.i18n.javascript\_catalog view (or null\_javascript\_catalog if you aren't using I18N). You have to do this yourself instead of going through the admin application so it's accessible regardless of whether you're logged into the admin (thanks [Jeremy](https://stackoverflow.com/questions/38601/using-django-time-date-widgets-in-custom-form/408230#408230) for pointing this out). Sample code for your URLconf:
```
(r'^my_admin/jsi18n', 'django.views.i18n.javascript_catalog'),
```
Lastly, if you are using Django 1.2 or later, you need some additional code in your template to help the widgets find their media:
```
{% load adminmedia %} /* At the top of the template. */
/* In the head section of the template. */
<script type="text/javascript">
window.__admin_media_prefix__ = "{% filter escapejs %}{% admin_media_prefix %}{% endfilter %}";
</script>
```
Thanks [lupefiasco](https://stackoverflow.com/questions/38601/using-django-time-date-widgets-in-custom-form/2818128#2818128) for this addition.
|
As the solution is hackish, I think using your own date/time widget with some JavaScript is more feasible.
|
Using Django time/date widgets in custom form
|
[
"",
"python",
"django",
""
] |
I have a big string (let's call it a CSV file, though it isn't actually one, it'll just be easier for now) that I have to parse in C# code.
The first step of the parsing process splits the file into individual lines by just using a `StreamReader` object and calling `ReadLine` until it's through the file. However, any given line might contain a literal quoted in single quotes with embedded newlines. I need to find those newlines and convert them temporarily into some other kind of token or escape sequence until I've split the file into an array of lines; then I can change them back.
Example input data:
```
1,2,10,99,'Some text without a newline', true, false, 90
2,1,11,98,'This text has an embedded newline
and continues here', true, true, 90
```
I could write all of the C# code needed to do this by using `string.IndexOf` to find the quoted sections and look within them for newlines, but I'm thinking a Regex might be a better choice (i.e. [now I have two problems](http://regex.info/blog/2006-09-15/247))
|
Since this isn't a true CSV file, does it have any sort of schema?
From your example, it looks like you have:
int, int, int, int, string , bool, bool, int
With that making up your record / object.
Assuming that your data is well formed (I don't know enough about your source to know how valid this assumption is), you could:
1. Read your line.
2. Use a state machine to parse your data.
3. If your line ends while you're parsing a string, read the next line and keep parsing.
I'd avoid using a regex if possible.
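A sketch of that state machine, in Python for brevity (the question is C#, so treat this as pseudocode to port): a comma or newline only terminates a field or record when we are outside a single-quoted literal.

```python
def parse_records(text):
    """Split text into records/fields; newlines inside single-quoted
    literals stay part of the field instead of ending the record."""
    records, fields, field = [], [], []
    in_quotes = False
    for ch in text:
        if ch == "'":
            in_quotes = not in_quotes     # enter/leave a quoted literal
        elif ch == "," and not in_quotes:
            fields.append("".join(field).strip())
            field = []
            continue
        elif ch == "\n" and not in_quotes:
            fields.append("".join(field).strip())
            records.append(fields)
            fields, field = [], []
            continue
        field.append(ch)
    if field or fields:                   # input without a trailing newline
        fields.append("".join(field).strip())
        records.append(fields)
    return records
```

This keeps the quote characters in the output so a later pass can strip them; it deliberately ignores escaped quotes, which the question's format doesn't show.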
|
State-machines for doing such a job are made easy using C# 2.0 iterators. Here's hopefully the last CSV parser I'll ever write. The whole file is treated as an enumerable bunch of enumerable strings, i.e. rows/columns. IEnumerable is great because it can then be processed by LINQ operators.
```
public class CsvParser
{
public char FieldDelimiter { get; set; }
public CsvParser()
: this(',')
{
}
public CsvParser(char fieldDelimiter)
{
FieldDelimiter = fieldDelimiter;
}
public IEnumerable<IEnumerable<string>> Parse(string text)
{
return Parse(new StringReader(text));
}
public IEnumerable<IEnumerable<string>> Parse(TextReader reader)
{
while (reader.Peek() != -1)
yield return parseLine(reader);
}
IEnumerable<string> parseLine(TextReader reader)
{
bool insideQuotes = false;
StringBuilder item = new StringBuilder();
while (reader.Peek() != -1)
{
char ch = (char)reader.Read();
char? nextCh = reader.Peek() > -1 ? (char)reader.Peek() : (char?)null;
if (!insideQuotes && ch == FieldDelimiter)
{
yield return item.ToString();
item.Length = 0;
}
else if (!insideQuotes && ch == '\r' && nextCh == '\n') //CRLF
{
reader.Read(); // skip LF
break;
}
else if (!insideQuotes && ch == '\n') //LF for *nix-style line endings
break;
else if (ch == '"' && nextCh == '"') // escaped quotes ""
{
item.Append('"');
reader.Read(); // skip next "
}
else if (ch == '"')
insideQuotes = !insideQuotes;
else
item.Append(ch);
}
// last one
yield return item.ToString();
}
}
```
Note that the file is read character by character with the code deciding when newlines are to be treated as row delimiters or part of a quoted string.
|
Looking for Regex to find quoted newlines in a big string (for C#)
|
[
"",
"c#",
"regex",
""
] |
Using Oracle 10g with our testing server, what is the most efficient/easiest way to back up and restore a database to a static point, assuming that you always want to go back to the given point once a backup has been created?
A sample use case would be the following
1. install and configure all software
2. Modify data to the base testing point
3. take a backup somehow (this is part of the question, how to do this)
4. do testing
5. return to step 3 state (restore back to backup point, this is the other half of the question)
**Optimally this would be completed through sqlplus or rman or some other scriptable method.**
|
You do not need to take a backup at your base time. Just enable flashback database, create a guaranteed restore point, run your tests and flashback to the previously created restore point.
The steps for this would be:
1. Start the instance in mount mode: `startup force mount;`
2. Create the restore point: `create restore point before_test guarantee flashback database;`
3. Open the database: `alter database open;`
4. Run your tests.
5. Shut down and mount the instance: `shutdown immediate;` followed by `startup mount;`
6. Flash back to the restore point: `flashback database to restore point before_test;`
7. Open the database: `alter database open;`
|
You could use a feature in Oracle called [Flashback](http://www.oracle.com/technology/deploy/availability/htdocs/Flashback_Overview.htm) which allows you to create a restore point, which you can easily jump back to after you've done testing.
Quoted from the site,
> Flashback Database is like a 'rewind
> button' for your database. It provides
> database point in time recovery
> without requiring a backup of the
> database to first be restored. When
> you eliminate the time it takes to
> restore a database backup from tape,
> database point in time recovery is
> fast.
|
Backup/Restore database for oracle 10g testing using sqlplus or rman
|
[
"",
"sql",
"oracle",
"backup",
"oracle10g",
"rman",
""
] |
I'm using OLEDB provider for ADO.Net connecting to an Oracle database. In my loop, I am doing an insert:
```
insert into ps_tl_compleave_tbl values('2626899', 0, TO_DATE('01/01/2002', 'MM/DD/YYYY'), 'LTKN', 'LTKN', '52', TO_DATE('01/01/2002', 'MM/DD/YYYY'), 16.000000, 24.000)insert into ps_tl_compleave_tbl values('4327142', 0, TO_DATE('03/23/2002', 'MM/DD/YYYY'), 'LTKN', 'LTKN', '51', TO_DATE('03/23/2002', 'MM/DD/YYYY'), 0.000000, 0.000)
```
The first insert succeeds but the second one gives an error:
```
ORA-00933: SQL command not properly ended
```
What am I doing wrong?
|
To me it seems you're missing a `;` between the two statements:
`insert into ps_tl_compleave_tbl values('2626899', 0, TO_DATE('01/01/2002', 'MM/DD/YYYY'), 'LTKN', 'LTKN', '52', TO_DATE('01/01/2002', 'MM/DD/YYYY'), 16.000000, 24.000)`
**`;`**
`insert into ps_tl_compleave_tbl values('4327142', 0, TO_DATE('03/23/2002', 'MM/DD/YYYY'), 'LTKN', 'LTKN', '51', TO_DATE('03/23/2002', 'MM/DD/YYYY'), 0.000000, 0.000)`
**`;`**
Try adding the `;` and let us know.
|
In .NET, when we try to execute a single Oracle SQL statement with a semicolon at the end, the result is an Oracle error: ora-00911: invalid character. OK, you figure that one SQL statement doesn't need the semicolon, but what about executing two SQL statements in one string? For example:
```
Dim db As Database = DatabaseFactory.CreateDatabase("db")
Dim cmd As System.Data.Common.DbCommand
Dim sql As String = ""
sql = "DELETE FROM iphone_applications WHERE appid = 1; DELETE FROM iphone_applications WHERE appid = 2; "
cmd = db.GetSqlStringCommand(sql)
db.ExecuteNonQuery(cmd)
```
The code above will give you the same Oracle error: ora-00911: invalid character.
The solution to this problem is to wrap your 2 Oracle SQL statements with a `BEGIN` and `END;` syntax, for example:
```
sql = "BEGIN DELETE FROM iphone_applications WHERE appid = 1; DELETE FROM iphone_applications WHERE appid = 2; END;"
```
Courtesy: <http://www.lazyasscoder.com/Article.aspx?id=89&title=ora-00911%3A+invalid+character+when+executing+multiple+Oracle+SQL+statements>
|
ORA-00933: SQL command not properly ended
|
[
"",
"sql",
"oracle",
"ora-00933",
""
] |
I am writing a simple checkers game in Java. When I mouse over the board my processor ramps up to 50% (100% on a core).
I would like to find out what part of my code (assuming it's my fault) is executing during this.
I have tried debugging, but step-through debugging doesn't work very well in this case.
Is there any tool that can tell me where my problem lies? I am currently using Eclipse.
|
This is called "profiling". Your IDE probably comes with one: see [Open Source Profilers in Java](http://java-source.net/open-source/profilers).
|
Use a profiler (e.g [yourkit](http://www.yourkit.com) )
|
Java: How can I see what parts of my code are running the most? (profiling)
|
[
"",
"java",
"debugging",
"user-interface",
"profiling",
""
] |
How can I use .NET DataSet.Select method to search records that match a DateTime?
What format should I use to enter my dates in?
|
The best format is dd MMM yyyy (i.e. 15 Sep 2008). This means there is no possibility of getting it wrong for different locales.
```
ds.Select("DBDate = '15 Sep 2008'")
```
You can use the DateFormat function to convert to long date format as well and this will work fine too.
|
I use the following for the SQL Select:
```
public string BuildSQL()
{
// Format: CAST('2000-05-08 12:35:29' AS datetime)
StringBuilder sb = new StringBuilder("CAST('");
sb.Append(_dateTime.ToString("yyyy-MM-dd HH:mm:ss"));
sb.Append("' AS datetime)");
return sb.ToString();
}
```
|
DataSet.Select and DateTime
|
[
"",
"c#",
".net",
"dataset",
""
] |
In a web application, I have a page that contains a DIV that has an auto-width depending on the width of the browser window.
I need an auto-height for the object. The DIV starts about 300px from the top of the screen, and its height should make it stretch to the bottom of the browser screen. I have a max height for the container DIV, so there would have to be a minimum height for the div. I believe I can just restrict that in CSS and use JavaScript to handle the resizing of the DIV.
My javascript isn't nearly as good as it should be. Is there an easy script I could write that would do this for me?
Edit:
The DIV houses a control that does its own overflow handling (implements its own scroll bar).
|
Try this simple, specific function:
```
function resizeElementHeight(element) {
var height = 0;
var body = window.document.body;
if (window.innerHeight) {
height = window.innerHeight;
} else if (body.parentElement.clientHeight) {
height = body.parentElement.clientHeight;
} else if (body && body.clientHeight) {
height = body.clientHeight;
}
element.style.height = ((height - element.offsetTop) + "px");
}
```
It does not depend on the current distance from the top of the body being specified (in case your 300px changes).
---
EDIT: By the way, you would want to call this on that div every time the user changed the browser's size, so you would need to wire up the event handler for that, of course.
|
What should happen in the case of overflow? If you want it to just get to the bottom of the window, use absolute positioning:
```
div {
position: absolute;
top: 300px;
bottom: 0px;
left: 30px;
right: 30px;
}
```
This will put the DIV 30px in from each side, 300px from the top of the screen, and flush with the bottom. Add an `overflow:auto;` to handle cases where the content is larger than the div.
---
Edit: @Whoever marked this down, an explanation would be nice... Is something wrong with the answer?
|
Setting the height of a DIV dynamically
|
[
"",
"javascript",
"html",
"css",
""
] |
We have hundreds of websites which were developed in ASP, .NET and Java, and we are paying a lot of money for an external agency to do penetration testing on our sites to check for security loopholes.
Is there any (good) software (paid or free) to do this?
Or are there any technical articles which can help me develop such a tool?
|
There are a couple different directions you can go with automated testing tools for web applications.
First, there are the **commercial web scanners**, of which HP WebInspect and Rational AppScan are the two most popular. These are "all-in-one", "fire-and-forget" tools that you download and install on an internal Windows desktop and then give a URL to spider your site, scan for well-known vulnerabilities (ie, the things that have hit Bugtraq), and probe for cross-site scripting and SQL injection vulnerabilities.
Second, there are the **source-code scanning tools**, of which Coverity and Fortify are probably the two best known. These are tools you install on a developer's desktop to process your Java or C# source code and look for well-known patterns of insecure code, like poor input validation.
Finally, there are the **penetration test tools**. By far the most popular web app penetration testing tool among security professionals is Burp Suite, which you can find at <http://www.portswigger.net/proxy>. Others include Spike Proxy and OWASP WebScarab. Again, you'll install this on an internal Windows desktop. It will run as an HTTP proxy, and you'll point your browser at it. You'll use your applications as a normal user would, while it records your actions. You can then go back to each individual page or HTTP action and probe it for security problems.
In a complex environment, and especially if you're considering anything DIY, **I strongly recommend the penetration testing tools**. Here's why:
Commercial web scanners provide a lot of "breadth", along with excellent reporting. However:
* They tend to miss things, because every application is different.
* They're expensive (WebInspect starts in the 10's of thousands).
* You're paying for stuff you don't need (like databases of known bad CGIs from the '90s).
* They're hard to customize.
* They can produce noisy results.
Source code scanners are more thorough than web scanners. However:
* They're even more expensive than the web scanners.
* They require source code to operate.
* To be effective, they often require you to annotate your source code (for instance, to pick out input pathways).
* They have a tendency to produce false positives.
Both commercial scanners and source code scanners have a bad habit of becoming shelfware. Worse, even if they work, their cost is comparable to getting 1 or 2 entire applications audited by a consultancy; if you trust your consultants, you're guaranteed to get better results from them than from the tools.
Penetration testing tools have downsides too:
* They're much harder to use than fire-and-forget commercial scanners.
* They assume some expertise in web application vulnerabilities --- you have to know what you're looking for.
* They produce little or no formal reporting.
On the other hand:
* They're much, much cheaper --- the best of the lot, Burp Suite, costs only 99EU, and has a free version.
* They're easy to customize and add to a testing workflow.
* They're much better at helping you "get to know" your applications from the inside.
Here's something you'd do with a pen-test tool for a basic web application:
1. Log into the application through the proxy
2. Create a "hit list" of the major functional areas of the application, and exercise each once.
3. Use the "spider" tool in your pen-test application to find all the pages and actions and handlers in the application.
4. For each dynamic page and each HTML form the spider uncovers, use the "fuzzer" tool (Burp calls it an "intruder") to exercise every parameter with invalid inputs. Most fuzzers come with basic test strings that include:
* SQL metacharacters
* HTML/Javascript escapes and metacharacters
* Internationalized variants of these to evade input filters
* Well-known default form field names and values
* Well-known directory names, file names, and handler verbs
5. Spend several hours filtering the resulting errors (a typical fuzz run for one form might generate 1000 of them) looking for suspicious responses.
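The payload-generation half of step 4 is easy to script yourself. Here is a minimal Python sketch of the idea — the payload strings and the `fuzz_cases` helper are illustrative, not taken from any particular tool:

```python
import itertools

# A tiny sample of the test-string categories a fuzzer cycles through.
SQL_META = ["'", '"', "' OR '1'='1", "); --"]
HTML_META = ["<script>alert(1)</script>", '"><img src=x onerror=alert(1)>']

def fuzz_cases(fields, payloads):
    """Yield one (field, payload) mutation at a time, keeping the
    other form fields at benign defaults."""
    for field, payload in itertools.product(fields, payloads):
        case = {f: "test" for f in fields}
        case[field] = payload
        yield case

forms = list(fuzz_cases(["username", "comment"], SQL_META + HTML_META))
print(len(forms))  # 2 fields x 6 payloads = 12 cases
```

Each generated case would then be submitted through the proxy and the responses filtered for suspicious errors, exactly as in step 5.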
This is a labor-intensive, "bare-metal" approach. But when your company owns the actual applications, the bare-metal approach pays off, because you can use it to build regression test suites that will run like clockwork at each dev cycle for each app. This is a win for a bunch of reasons:
* Your security testing will take a predictable amount of time and resources per application, which allows you to budget and triage.
* Your team will get maximally accurate and thorough results, since your testing is going to be tuned to your applications.
* It's going to cost less than commercial scanners and less than consultants.
Of course, if you go this route, you're basically turning yourself into a security consultant for your company. I don't think that's a bad thing; if you don't want that expertise, WebInspect or Fortify isn't going to help you much anyways.
|
I know you asked specifically about pentesting tools, but since those have been amply answered (I usually go with a mix of AppScan and trained pentester), I think it's important to point out that *pentesting is not the only way to "check for security loopholes", and is often **not the most effective**.*
Source code review tools can provide you with much better visibility into your codebase, and find many flaws that pentesting won't.
These include Fortify and OunceLabs (expensive and for many languages), VisualStudio.NET CodeAnalysis (for .NET and C++, free with VSTS, decent but not great), OWASP's LAPSE for Java (free, decent not great), CheckMarx (not cheap, fanTASTic tool for .NET and Java, but high overhead), and many more.
An important point you must note - (most of) the automated tools do not find all the vulnerabilities, not even close. You can expect the automated tools to find approximately 35-40% of the secbugs that would be found by a professional pentester; the same goes for automated vs. manual source code review.
And of course a proper SDLC (Security Development Lifecycle), including Threat Modeling, Design Review, etc, will help even more...
|
Penetration testing tools
|
[
"",
"sql",
"security",
"sql-injection",
""
] |
I'm attracted to the neatness that a single file database provides. What driver/connector library is out there to connect and use SQLite with Java.
I've discovered a wrapper library, [http://www.ch-werner.de/javasqlite](http://www.ch-werner.de/javasqlite/), but are there other more prominent projects available?
|
The [wiki](http://www.sqlite.org/cvstrac/wiki?p=SqliteWrappers) lists some more wrappers:
* Java wrapper (around a SWIG interface): <http://tk-software.home.comcast.net/>
* A good tutorial to use JDBC driver for SQLite. (it works at least !) <http://www.ci.uchicago.edu/wiki/bin/view/VDS/VDSDevelopment/UsingSQLite>
* Cross-platform JDBC driver which uses embedded native SQLite libraries on Windows, Linux, OS X, and falls back to pure Java implementation on other OSes: <https://github.com/xerial/sqlite-jdbc> (formerly [zentus](http://www.zentus.com/sqlitejdbc))
* Another Java - SWIG wrapper. It only works on Win32. <http://rodolfo_3.tripod.com/index.html>
* sqlite-java-shell: 100% pure Java port of the sqlite3 commandline shell built with NestedVM. (This is not a JDBC driver).
* SQLite JDBC Driver for Mysaifu JVM: SQLite JDBC Driver for Mysaifu JVM and SQLite JNI Library for Windows (x86) and Linux (i386/PowerPC).
|
I found your question while searching for information with [SQLite](http://www.sqlite.org/) and Java. Just thought I'd add my answer which I also posted on my [blog](http://blog.rungeek.com/post/81611917/how-to-use-sqlite-with-java).
I have been coding in Java for a while now. I have also known about SQLite but never used it… Well I have used it through other [applications](http://www.sqlite.org/famous.html) but never in an app that I coded. So I needed it for a project this week and it's so simple use!
I found a Java JDBC driver for SQLite. Just add the [JAR file](https://github.com/xerial/sqlite-jdbc) to your classpath and import java.sql.\*
His test app will create a database file, send some SQL commands to create a table, store some data in the table, and read it back and display on console. It will create the **test.db** file in the root directory of the project. You can run this example with `java -cp .:sqlitejdbc-v056.jar Test`.
```
package com.rungeek.sqlite;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;
public class Test {
public static void main(String[] args) throws Exception {
Class.forName("org.sqlite.JDBC");
Connection conn = DriverManager.getConnection("jdbc:sqlite:test.db");
Statement stat = conn.createStatement();
stat.executeUpdate("drop table if exists people;");
stat.executeUpdate("create table people (name, occupation);");
PreparedStatement prep = conn.prepareStatement(
"insert into people values (?, ?);");
prep.setString(1, "Gandhi");
prep.setString(2, "politics");
prep.addBatch();
prep.setString(1, "Turing");
prep.setString(2, "computers");
prep.addBatch();
prep.setString(1, "Wittgenstein");
prep.setString(2, "smartypants");
prep.addBatch();
conn.setAutoCommit(false);
prep.executeBatch();
conn.setAutoCommit(true);
ResultSet rs = stat.executeQuery("select * from people;");
while (rs.next()) {
System.out.println("name = " + rs.getString("name"));
System.out.println("job = " + rs.getString("occupation"));
}
rs.close();
conn.close();
}
}
```
|
Java and SQLite
|
[
"",
"java",
"sqlite",
""
] |
I have a simple SQL 'Select' query, and I'd like to dump the results into an Excel file. I'm only able to save as .csv and converting to .xls creates some super ugly output. In any case, as far as I can tell (using Google) this doesn't seem to be so straight forward. Any help would be greatly appreciated.
|
Use "External data" from Excel. It can use ODBC connection to fetch data from external source: Data/Get External Data/New Database Query
That way, even if the data in the database changes, you can easily refresh.
|
SSIS is a no-brainer for doing stuff like this and is very straightforward (and this is just the kind of thing it is for).
1. Right-click the database in SQL Management Studio
2. Go to Tasks and then Export data, you'll then see an easy to use wizard.
3. Your database will be the source, you can enter your SQL query
4. Choose Excel as the target
5. Run it at end of wizard
If you wanted, you could save the SSIS package as well (there's an option at the end of the wizard) so that you can do it on a schedule or something (and even open and modify to add more functionality if needed).
|
How do you transfer or export SQL Server 2005 data to Excel
|
[
"",
"sql",
"sql-server",
""
] |
Can anyone tell me what is the *memory usage overhead associated with PHP opcode cache*?
I've seen a lot of reviews of `opcode cache` but all of them only concentrate on the performance increase. I have a small entry level *VPS and memory limits* are a concern for me.
|
Most of the memory overhead will come from the opcode cache size. Each opcode cacher has their own default(e.g. 30MB for APC) which you can change through the config file.
Other than the cache size, the actual memory overhead of the cacher itself is negligible.
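For example, with APC the cache size is a one-line change in `php.ini` (the 64M figure is purely illustrative — size it to your VPS; on older APC versions the value may need to be given without the `M` suffix):

```ini
; php.ini — APC opcode cache settings (values illustrative)
apc.enabled = 1
apc.shm_size = 64M   ; shared-memory segment that holds the cached opcodes
```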
|
In today's world it's negligible. I think memory consumption was about 50 MB bigger with eAccelerator than it was without, when I did my [benchmarks](http://blogs.interdose.com/dominik/2008/04/11/benchmarking-php-eaccelerator-und-andere-opcode-caches/).
If you really need the speed but do have headaches that your RAM might be not enough: grab $40 and buy another GIG of RAM for your server ;)
|
Opcode cache impact on memory usage
|
[
"",
"php",
"opcode-cache",
""
] |
This question is about removing **sequences** from an array, not duplicates in the strict sense.
Consider this sequence of elements in an array;
```
[0] a
[1] a
[2] b
[3] c
[4] c
[5] a
[6] c
[7] d
[8] c
[9] d
```
In this example I want to obtain the following...
```
[0] a
[1] b
[2] c
[3] a
[4] c
[5] d
```
Notice that duplicate elements are retained but that sequences of the same element have been reduced to a single instance of that element.
Further, notice that when two lines repeat they should be reduced to one set (of two lines).
```
[0] c
[1] d
[2] c
[3] d
```
...reduces to...
```
[0] c
[1] d
```
I'm coding in C# but algorithms in any language appreciated.
|
Here's a C# app I wrote that solves this problem.
**takes**
aabccacdcd
**outputs**
abcacd
Probably looks pretty messy, took me a bit to get my head around the dynamic pattern length bit.
```
class Program
{
private static List<string> values;
private const int MAX_PATTERN_LENGTH = 4;
static void Main(string[] args)
{
values = new List<string>();
values.AddRange(new string[] { "a", "a", "b", "c", "c", "a", "c", "d", "c", "d" });
for (int i = MAX_PATTERN_LENGTH; i > 0; i--)
{
RemoveDuplicatesOfLength(i);
}
foreach (string s in values)
{
Console.WriteLine(s);
}
}
private static void RemoveDuplicatesOfLength(int dupeLength)
{
for (int i = 0; i < values.Count; i++)
{
if (i + dupeLength > values.Count)
break;
if (i + dupeLength + dupeLength > values.Count)
break;
var patternA = values.GetRange(i, dupeLength);
var patternB = values.GetRange(i + dupeLength, dupeLength);
bool isPattern = ComparePatterns(patternA, patternB);
if (isPattern)
{
values.RemoveRange(i, dupeLength);
}
}
}
private static bool ComparePatterns(List<string> pattern, List<string> candidate)
{
for (int i = 0; i < pattern.Count; i++)
{
if (pattern[i] != candidate[i])
return false;
}
return true;
}
}
```
*fixed the initial values to match the question's values*
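The same reduction reads a little more compactly in Python, if only as executable pseudocode for other languages (max pattern length and the sample input are from the question):

```python
def reduce_sequences(items, max_len=4):
    """Collapse immediately repeated runs, longest patterns first."""
    items = list(items)
    for length in range(max_len, 0, -1):
        i = 0
        while i + 2 * length <= len(items):
            if items[i:i + length] == items[i + length:i + 2 * length]:
                del items[i:i + length]   # drop one copy, re-check same spot
            else:
                i += 1
    return items

print("".join(reduce_sequences("aabccacdcd")))  # abcacd
```

Re-checking the same position after a deletion is what collapses longer repetitions like `cdcdcd` all the way down to `cd`.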
|
**EDIT: made some changes and new suggestions**
What about a sliding window...
```
REMOVE LENGTH 2: (no other length has other matches)
//the lower case letters are the matches
ABCBAbabaBBCbcbcbVbvBCbcbcAB
__ABCBABABABBCBCBCBVBVBCBCBCAB
REMOVE LENGTH 1 (duplicate characters):
//* denote that a string was removed to prevent continual contraction
//of the string, unless this is what you want.
ABCBA*BbC*V*BC*AB
_ABCBA*BBC*V*BC*AB
RESULT:
ABCBA*B*C*V*BC*AB == ABCBABCVBCAB
```
This is of course starting with length=2, increase it to L/2 and iterate down.
I'm also thinking of two other approaches:
1. **digraph** - Set a stateful digraph with the data and iterate over it with the string; if a cycle is found you'll have a duplication. I'm not sure how easy it is to check for these cycles... possibly some dynamic programming, so it could be equivalent to method 2 below. I'm going to have to think about this one longer as well.
2. **distance matrix** - using a Levenshtein distance matrix you might be able to detect duplication from diagonal movement (off the diagonal) with cost 0. This could indicate duplication of data. I will have to think about this more.
|
Best way to reduce sequences in an array of strings
|
[
"",
"c#",
".net",
"algorithm",
""
] |
In C/C++, what is an `unsigned char` used for? How is it different from a regular `char`?
|
In C++, there are three *distinct* character types:
1. `char`
2. `signed char`
3. `unsigned char`
### 1. `char`
If you are using character types for **text**, use the unqualified `char`:
* it is the type of character literals like `'a'` or `'0'` (in C++ only, in C their type is `int`)
* it is the type that makes up C strings like `"abcde"`
It also works out as a number value, but it is unspecified whether that value is treated as signed or unsigned. Beware character comparisons through inequalities - although if you limit yourself to ASCII (0-127) you're just about safe.
### 2. `signed char`/ 3. `unsigned char`
If you are using character types as **numbers**, use:
* `signed char`, which gives you *at least* the -127 to 127 range. (-128 to 127 is common)
* `unsigned char`, which gives you *at least* the 0 to 255 range. This might be useful for displaying an octet e.g. as hex value.
"At least", because the C++ standard only gives the minimum range of values that each numeric type is required to cover. `sizeof (char)` is required to be 1 (i.e. one byte), but a byte could in theory be for example 32 bits. **`sizeof` would still be report its size as `1`** - meaning that you *could* have `sizeof (char) == sizeof (long) == 1`.
|
This is implementation dependent, as the C standard does NOT define the signed-ness of `char`. Depending on the platform, char may be `signed` or `unsigned`, so you need to explicitly ask for `signed char` or `unsigned char` if your implementation depends on it. Just use `char` if you intend to represent characters from strings, as this will match what your platform puts in the string.
The difference between `signed char` and `unsigned char` is as you'd expect. On most platforms, `signed char` will be an 8-bit two's complement number ranging from `-128` to `127`, and `unsigned char` will be an 8-bit unsigned integer (`0` to `255`). Note the standard does NOT require that `char` types have 8 bits, only that `sizeof(char)` return `1`. You can get at the number of bits in a char with `CHAR_BIT` in `limits.h`. There are few if any platforms today where this will be something other than `8`, though.
There is a nice summary of this issue [here](http://www.arm.linux.org.uk/docs/faqs/signedchar.php).
As others have mentioned since I posted this, you're better off using `int8_t` and `uint8_t` if you really want to represent small integers.
|
What is an unsigned char?
|
[
"",
"c++",
"c",
"char",
""
] |
As an example in pseudocode:
```
if ((a mod 2) == 0)
{
isEven = true;
}
else
{
isEven = false;
}
```
|
Instead of the modulo operator, which has slightly different semantics, you can use the *remainder* operator `%` (for non-negative integers they agree). For your exact example:
```
if ((a % 2) == 0)
{
isEven = true;
}
else
{
isEven = false;
}
```
This can be simplified to a one-liner:
```
isEven = (a % 2) == 0;
```
|
Here is the representation of your pseudo-code in minimal Java code:
```
boolean isEven = a % 2 == 0;
```
I'll now break it down into its components. The modulus operator in Java is the percent character (%). Therefore taking an int % int returns another int. The double equals (==) operator is used to compare values, such as a pair of ints and returns a boolean. This is then assigned to the boolean variable 'isEven'. Based on operator precedence the modulus will be evaluated before the comparison.
|
What's the syntax for mod in java
|
[
"",
"java",
"modulo",
""
] |
I found an example in the [VS2008 Examples](http://msdn2.microsoft.com/en-us/bb330936.aspx) for Dynamic LINQ that allows you to use a SQL-like string (e.g. `OrderBy("Name, Age DESC"))` for ordering. Unfortunately, the method included only works on `IQueryable<T>`. Is there any way to get this functionality on `IEnumerable<T>`?
|
Just stumbled into this oldie...
To do this without the dynamic LINQ library, you just need the code as below. This covers most common scenarios including nested properties.
To get it working with `IEnumerable<T>` you could add some wrapper methods that go via `AsQueryable` - but the code below is the core `Expression` logic needed.
```
public static IOrderedQueryable<T> OrderBy<T>(
this IQueryable<T> source,
string property)
{
return ApplyOrder<T>(source, property, "OrderBy");
}
public static IOrderedQueryable<T> OrderByDescending<T>(
this IQueryable<T> source,
string property)
{
return ApplyOrder<T>(source, property, "OrderByDescending");
}
public static IOrderedQueryable<T> ThenBy<T>(
this IOrderedQueryable<T> source,
string property)
{
return ApplyOrder<T>(source, property, "ThenBy");
}
public static IOrderedQueryable<T> ThenByDescending<T>(
this IOrderedQueryable<T> source,
string property)
{
return ApplyOrder<T>(source, property, "ThenByDescending");
}
static IOrderedQueryable<T> ApplyOrder<T>(
IQueryable<T> source,
string property,
string methodName)
{
string[] props = property.Split('.');
Type type = typeof(T);
ParameterExpression arg = Expression.Parameter(type, "x");
Expression expr = arg;
foreach(string prop in props) {
// use reflection (not ComponentModel) to mirror LINQ
PropertyInfo pi = type.GetProperty(prop);
expr = Expression.Property(expr, pi);
type = pi.PropertyType;
}
Type delegateType = typeof(Func<,>).MakeGenericType(typeof(T), type);
LambdaExpression lambda = Expression.Lambda(delegateType, expr, arg);
object result = typeof(Queryable).GetMethods().Single(
method => method.Name == methodName
&& method.IsGenericMethodDefinition
&& method.GetGenericArguments().Length == 2
&& method.GetParameters().Length == 2)
.MakeGenericMethod(typeof(T), type)
.Invoke(null, new object[] {source, lambda});
return (IOrderedQueryable<T>)result;
}
```
---
Edit: it gets more fun if you want to mix that with `dynamic` - although note that `dynamic` only applies to LINQ-to-Objects (expression-trees for ORMs etc can't really represent `dynamic` queries - `MemberExpression` doesn't support it). But here's a way to do it with LINQ-to-Objects. Note that the choice of `Hashtable` is due to favorable locking semantics:
```
using Microsoft.CSharp.RuntimeBinder;
using System;
using System.Collections;
using System.Collections.Generic;
using System.Dynamic;
using System.Linq;
using System.Runtime.CompilerServices;
static class Program
{
private static class AccessorCache
{
private static readonly Hashtable accessors = new Hashtable();
private static readonly Hashtable callSites = new Hashtable();
private static CallSite<Func<CallSite, object, object>> GetCallSiteLocked(
string name)
{
var callSite = (CallSite<Func<CallSite, object, object>>)callSites[name];
if(callSite == null)
{
callSites[name] = callSite = CallSite<Func<CallSite, object, object>>
.Create(Binder.GetMember(
CSharpBinderFlags.None,
name,
typeof(AccessorCache),
new CSharpArgumentInfo[] {
CSharpArgumentInfo.Create(
CSharpArgumentInfoFlags.None,
null)
}));
}
return callSite;
}
internal static Func<dynamic,object> GetAccessor(string name)
{
Func<dynamic, object> accessor = (Func<dynamic, object>)accessors[name];
if (accessor == null)
{
lock (accessors )
{
accessor = (Func<dynamic, object>)accessors[name];
if (accessor == null)
{
if(name.IndexOf('.') >= 0) {
string[] props = name.Split('.');
CallSite<Func<CallSite, object, object>>[] arr
= Array.ConvertAll(props, GetCallSiteLocked);
accessor = target =>
{
object val = (object)target;
for (int i = 0; i < arr.Length; i++)
{
var cs = arr[i];
val = cs.Target(cs, val);
}
return val;
};
} else {
var callSite = GetCallSiteLocked(name);
accessor = target =>
{
return callSite.Target(callSite, (object)target);
};
}
accessors[name] = accessor;
}
}
}
return accessor;
}
}
public static IOrderedEnumerable<dynamic> OrderBy(
this IEnumerable<dynamic> source,
string property)
{
return Enumerable.OrderBy<dynamic, object>(
source,
AccessorCache.GetAccessor(property),
Comparer<object>.Default);
}
public static IOrderedEnumerable<dynamic> OrderByDescending(
this IEnumerable<dynamic> source,
string property)
{
return Enumerable.OrderByDescending<dynamic, object>(
source,
AccessorCache.GetAccessor(property),
Comparer<object>.Default);
}
public static IOrderedEnumerable<dynamic> ThenBy(
this IOrderedEnumerable<dynamic> source,
string property)
{
return Enumerable.ThenBy<dynamic, object>(
source,
AccessorCache.GetAccessor(property),
Comparer<object>.Default);
}
public static IOrderedEnumerable<dynamic> ThenByDescending(
this IOrderedEnumerable<dynamic> source,
string property)
{
return Enumerable.ThenByDescending<dynamic, object>(
source,
AccessorCache.GetAccessor(property),
Comparer<object>.Default);
}
static void Main()
{
dynamic a = new ExpandoObject(),
b = new ExpandoObject(),
c = new ExpandoObject();
a.X = "abc";
b.X = "ghi";
c.X = "def";
dynamic[] data = new[] {
new { Y = a },
new { Y = b },
new { Y = c }
};
var ordered = data.OrderByDescending("Y.X").ToArray();
foreach (var obj in ordered)
{
Console.WriteLine(obj.Y.X);
}
}
}
```
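(As an aside, the nested-property walk that the `Expression` loop above builds by hand is available out of the box in Python's `operator.attrgetter`, which accepts dotted paths — a sketch with hypothetical objects mirroring the `"Y.X"` example:)

```python
from operator import attrgetter
from types import SimpleNamespace as NS

# Hypothetical objects with a nested property, like the Y.X example above.
data = [NS(y=NS(x="abc")), NS(y=NS(x="ghi")), NS(y=NS(x="def"))]

# attrgetter("y.x") walks the dotted path, so descending order is one call.
ordered = sorted(data, key=attrgetter("y.x"), reverse=True)
print([item.y.x for item in ordered])  # ['ghi', 'def', 'abc']
```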
|
Too easy without any complication:
1. Add `using System.Linq.Dynamic;` at the top.
2. Use `vehicles = vehicles.AsQueryable().OrderBy("Make ASC, Year DESC").ToList();`
**Edit**: to save some time, the **System.Linq.Dynamic.Core** (System.Linq.Dynamic is deprecated) assembly is not part of the framework, but can be installed from nuget: [System.Linq.Dynamic.Core](https://www.nuget.org/packages/System.Linq.Dynamic.Core/)
|
Dynamic LINQ OrderBy on IEnumerable<T> / IQueryable<T>
|
[
"",
"c#",
"linq",
"ienumerable",
"iqueryable",
"linq-to-objects",
""
] |
I want to scrape some information off a football (soccer) web page using simple python regexp's. The problem is that players such as the first chap, ÄÄRITALO, come out as &#196;&#196;RITALO!
That is, html uses escaped markup for the special characters, such as &#196;
Is there a simple way of reading the html into the correct python string? If it was XML/XHTML it would be easy, the parser would do it.
|
I would recommend [BeautifulSoup](http://www.crummy.com/software/BeautifulSoup/) for HTML scraping. You also need to tell it to convert HTML entities to the corresponding Unicode characters, like so:
```
>>> from BeautifulSoup import BeautifulSoup
>>> html = "<html>&#196;&#196;RITALO!</html>"
>>> soup = BeautifulSoup(html, convertEntities=BeautifulSoup.HTML_ENTITIES)
>>> print soup.contents[0].string
ÄÄRITALO!
```
(It would be nice if the standard [codecs](http://docs.python.org/lib/module-codecs.html) module included a codec for this, such that you could do `"some_string".decode('html_entities')` but unfortunately it doesn't!)
**EDIT:**
Another solution:
Python developer Fredrik Lundh (author of elementtree, among other things) has [a function to unescape HTML entities](http://docs.python.org/lib/module-codecs.html) on his website, which works with decimal, hex and named entities (BeautifulSoup will not work with the hex ones).
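(Note: on Python 3, the standard library finally covers this directly — `html.unescape` converts named, decimal and hex entities in one call:)

```python
import html

# Named, decimal and hex entities are all handled.
print(html.unescape("&#196;&#196;RITALO!"))  # ÄÄRITALO!
```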
|
Try using [BeautifulSoup](http://www.crummy.com/software/BeautifulSoup/). It should do the trick and give you a nicely formatted DOM to work with as well.
[This blog](http://channel3b.wordpress.com/2007/07/04/how-to-convert-html-entities-to-real-unicode-in-python/) entry seems to have had some success with it.
|
Getting international characters from a web page?
|
[
"",
"python",
"html",
"parsing",
"unicode",
""
] |
Given a reference to a method, is there a way to check whether the method is bound to an object or not? Can you also access the instance that it's bound to?
|
```
def isbound(method):
return method.im_self is not None
def instance(bounded_method):
return bounded_method.im_self
```
[User-defined methods:](https://docs.python.org/2.7/reference/datamodel.html#index-40)
> When a user-defined method object is
> created by retrieving a user-defined
> function object from a class, its
> `im_self` attribute is `None` and the
> method object is said to be unbound.
> When one is created by retrieving a
> user-defined function object from a
> class via one of its instances, its
> `im_self` attribute is the instance, and
> the method object is said to be bound.
> In either case, the new method's
> `im_class` attribute is the class from
> which the retrieval takes place, and
> its `im_func` attribute is the original
> function object.
In Python [2.6 and 3.0](https://docs.python.org/2.7/whatsnew/2.6.html):
> Instance method objects have new
> attributes for the object and function
> comprising the method; the new synonym
> for `im_self` is `__self__`, and `im_func`
> is also available as `__func__`. The old
> names are still supported in Python
> 2.6, but are gone in 3.0.
|
In python 3 the `__self__` attribute is *only* set on bound methods. It's not set to `None` on plain functions (or unbound methods, which are just plain functions in python 3).
Use something like this:
```
def is_bound(m):
return hasattr(m, '__self__')
```
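A quick demonstration with a throwaway class (restating `is_bound` so the snippet runs standalone, Python 3):

```python
def is_bound(m):
    # Bound methods carry their instance in __self__; plain functions don't.
    return hasattr(m, '__self__')

class Greeter:
    def hello(self):
        return "hi"

g = Greeter()
print(is_bound(g.hello))        # True  -- bound to g
print(g.hello.__self__ is g)    # True  -- and the instance is recoverable
print(is_bound(Greeter.hello))  # False -- in Python 3 this is a plain function
```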
|
How do you check whether a python method is bound or not?
|
[
"",
"python",
"python-datamodel",
""
] |
While there are 100 ways to solve the conversion problem, I am focusing on performance.
Given that the string only contains binary data, what is the fastest method, in terms of performance, of converting that data to a byte[] (not char[]) in C#?
Clarification: This is not ASCII data, rather binary data that happens to be in a string.
|
I'm not sure ASCIIEncoding.GetBytes is going to do it, because it only supports the [range 0x0000 to 0x007F](https://learn.microsoft.com/en-us/dotnet/api/system.text.asciiencoding).
You say the string contains only bytes. But a .NET string is an array of chars, and 1 char is 2 bytes (because .NET stores strings as UTF-16). So you can have one of two situations for storing the bytes 0x42 and 0x98:
1. The string was an ANSI string and contained bytes and was converted to a Unicode string, thus the bytes will be 0x00 0x42 0x00 0x98. (The string is stored as 0x0042 and 0x0098)
2. The string was just a byte array which you typecast or just received as a string and thus became the following bytes 0x42 0x98. (The string is stored as 0x9842)
In the first situation the result would be 0x42 and 0x3F (ASCII for "B?"). The second situation would result in 0x3F (ASCII for "?"). This is logical, because the chars are outside of the valid ASCII range and the encoder does not know what to do with those values.
So I'm wondering why it's a string with bytes?
* Maybe it contains a byte encoded as a string (for instance [Base64](https://en.wikipedia.org/wiki/Base64))?
* Maybe you should start with an char array or a byte array?
If you really do have situation 2 and you want to get the bytes out of it you should use the [UnicodeEncoding.GetBytes](https://learn.microsoft.com/en-us/dotnet/api/system.text.unicodeencoding.getbytes) call, because that will return 0x42 and 0x98.
If you'd like to go from a char array to a byte array, the fastest way would be Marshaling. But that's not really nice, and it uses double the memory.
```
public Byte[] ConvertToBytes(Char[] source)
{
Byte[] result = new Byte[source.Length * sizeof(Char)];
IntPtr tempBuffer = Marshal.AllocHGlobal(result.Length);
try
{
Marshal.Copy(source, 0, tempBuffer, source.Length);
Marshal.Copy(tempBuffer, result, 0, result.Length);
}
finally
{
Marshal.FreeHGlobal(tempBuffer);
}
return result;
}
```
|
[UTF8Encoding.GetBytes](http://msdn.microsoft.com/en-us/library/system.text.utf8encoding.getbytes.aspx)
|
Using C#, what is the most efficient method of converting a string containing binary data to an array of bytes
|
[
"",
"c#",
"arrays",
""
] |
Below is my current char\* to hex string function. I wrote it as an exercise in bit manipulation. It takes ~7ms on an AMD Athlon MP 2800+ to hexify a 10 million byte array. Is there any trick or other way that I am missing?
How can I make this faster?
Compiled with -O3 in g++
```
static const char _hex2asciiU_value[256][2] =
{ {'0','0'}, {'0','1'}, /* snip..., */ {'F','E'},{'F','F'} };
std::string char_to_hex( const unsigned char* _pArray, unsigned int _len )
{
std::string str;
str.resize(_len*2);
char* pszHex = &str[0];
const unsigned char* pEnd = _pArray + _len;
clock_t stick, etick;
stick = clock();
for( const unsigned char* pChar = _pArray; pChar != pEnd; pChar++, pszHex += 2 ) {
pszHex[0] = _hex2asciiU_value[*pChar][0];
pszHex[1] = _hex2asciiU_value[*pChar][1];
}
etick = clock();
std::cout << "ticks to hexify " << etick - stick << std::endl;
return str;
}
```
**Updates**
Added timing code
[Brian R. Bondy](https://stackoverflow.com/questions/69115/char-to-hex-string-exercise#69126): replace the std::string with a heap alloc'd buffer and change ofs\*16 to ofs << 4 - however the heap allocated buffer seems to slow it down? - result ~11ms
[Antti Sykäri](https://stackoverflow.com/questions/69115/#69305): replace inner loop with
```
int upper = *pChar >> 4;
int lower = *pChar & 0x0f;
pszHex[0] = pHex[upper];
pszHex[1] = pHex[lower];
```
result ~8ms
[Robert](https://stackoverflow.com/questions/69115?sort=votes#69218): replace `_hex2asciiU_value` with a full 256-entry table, sacrificing memory space but result ~7ms!
[HoyHoy](https://stackoverflow.com/questions/69115/char-to-hex-string-exercise#70254): Noted it was producing incorrect results
|
At the cost of more memory you can create a full 256-entry table of the hex codes:
```
static const char _hex2asciiU_value[256][2] =
{ {'0','0'}, {'0','1'}, /* ..., */ {'F','E'},{'F','F'} };
```
Then direct index into the table, no bit fiddling required.
```
const char *pHexVal = pHex[*pChar];
pszHex[0] = pHexVal[0];
pszHex[1] = pHexVal[1];
```
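The table-lookup idea is easy to cross-check in a higher-level language. A Python sketch (illustrative only) that builds the same 256-entry table and verifies it against the standard library:

```python
# Precomputed 256-entry table: byte value -> two uppercase hex chars.
HEX_TABLE = ["%02X" % b for b in range(256)]

def char_to_hex(data: bytes) -> str:
    # Direct index into the table, no bit fiddling required.
    return "".join(HEX_TABLE[b] for b in data)

sample = bytes([0x00, 0x01, 0xFE, 0xFF])
print(char_to_hex(sample))                          # 0001FEFF
print(char_to_hex(sample) == sample.hex().upper())  # True
```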
|
This assembly function (based off my previous post here, but I had to modify the concept a bit to get it to actually work) processes 3.3 billion input characters per second (6.6 billion output characters) on one core of a Core 2 Conroe 3Ghz. Penryn is probably faster.
```
%include "x86inc.asm"
SECTION_RODATA
pb_f0: times 16 db 0xf0
pb_0f: times 16 db 0x0f
pb_hex: db 48,49,50,51,52,53,54,55,56,57,65,66,67,68,69,70
SECTION .text
; int convert_string_to_hex( char *input, char *output, int len )
cglobal _convert_string_to_hex,3,3
movdqa xmm6, [pb_f0 GLOBAL]
movdqa xmm7, [pb_0f GLOBAL]
.loop:
movdqa xmm5, [pb_hex GLOBAL]
movdqa xmm4, [pb_hex GLOBAL]
movq xmm0, [r0+r2-8]
movq xmm2, [r0+r2-16]
movq xmm1, xmm0
movq xmm3, xmm2
pand xmm0, xmm6 ;high bits
pand xmm2, xmm6
psrlq xmm0, 4
psrlq xmm2, 4
pand xmm1, xmm7 ;low bits
pand xmm3, xmm7
punpcklbw xmm0, xmm1
punpcklbw xmm2, xmm3
pshufb xmm4, xmm0
pshufb xmm5, xmm2
movdqa [r1+r2*2-16], xmm4
movdqa [r1+r2*2-32], xmm5
sub r2, 16
jg .loop
REP_RET
```
Note it uses x264 assembly syntax, which makes it more portable (to 32-bit vs 64-bit, etc). To convert this into the syntax of your choice is trivial: r0, r1, r2 are the three arguments to the function in registers. It's a bit like pseudocode. Or you can just get common/x86/x86inc.asm from the x264 tree and include that to run it natively.
P.S. Stack Overflow, am I wrong for wasting time on such a trivial thing? Or is this awesome?
|
char[] to hex string exercise
|
[
"",
"c++",
"optimization",
"hex",
""
] |
Here's my problem - I'd like to communicate between two websites and I'm looking for a clean solution. The current solution uses Javascript but there are nasty workarounds because of (understandable) cross-site scripting restrictions.
At the moment, **website A** opens a modal window containing **website B** using a [jQuery](http://jquery.com/) plug-in called [jqModal](http://dev.iceburg.net/jquery/jqModal/). **Website B** does some work and returns some results to **website A**. To return that information we have to work around cross-site scripting restrictions - **website B** creates an iframe that refers to a page on **website A** and includes "fragment identifiers" containing the information to be returned. The iframe is polled by **website A** to detect the returned information. It's a [common technique](http://tagneto.blogspot.com/2006/06/cross-domain-frame-communication-with.html) but it's hacky.
There are variations such as [CrossSite](http://www.julienlecomte.net/blog/2007/11/31/) and I could perhaps use an HTTP POST from **website B** to **website A** but I'm trying to avoid page refreshes.
Does anyone have any alternatives?
EDIT: I'd like to avoid having to save state on **website B**.
|
My best suggestion would be to create a webservice on each site that the other could call with the information that needs to get passed. If security is necessary, it's easy to add an SSL-like authentication scheme (or actual SSL even, if you like) to this system to ensure that only the two servers are able to talk to their respective web services.
This would let you avoid the hackiness that's inherent in any scheme that involves one site opening windows on the other.
|
With jQuery newer than 1.2 you can use [JSONP](http://remysharp.com/2007/10/08/what-is-jsonp/)
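A minimal sketch of what that looks like with jQuery (the endpoint URL and response shape here are hypothetical - **website B** would have to expose a JSONP-enabled endpoint):

```javascript
// jQuery replaces the "?" with a uniquely named global callback, loads the
// URL via a <script> tag (which is exempt from the same-origin policy), and
// website B wraps its JSON response in a call to that callback.
$.getJSON('http://website-b.example/results?callback=?', function (data) {
    // data is whatever website B returned
    alert(data.status);
});
```

Note that JSONP only works for GET requests and requires website B's cooperation in wrapping its response.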
|
Communicating between websites (using Javascript or ?)
|
[
"",
"javascript",
"jquery",
"web",
"xss",
""
] |
In C++, what is the purpose of the scope resolution operator when used without a scope? For instance:
```
::foo();
```
|
It means global scope. You might need to use this operator when you have conflicting functions or variables in the same scope and you need to use a global one. You might have something like:
```
void bar(); // this is a global function
class foo {
void some_func() { ::bar(); } // this function is calling the global bar() and not the class version
void bar(); // this is a class member
};
```
If you need to call the global bar() function from within a class member function, you should use ::bar() to get to the global version of the function.
|
A name that begins with the [scope resolution operator](http://en.wikipedia.org/wiki/Scope_resolution_operator) (`::`) is looked up in the global namespace. We can see this by looking at the [draft C++ standard](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3485.pdf) section `3.4.3` *Qualified name lookup* paragraph *4* which says (*emphasis mine*):
> A name prefixed by the unary scope operator :: (5.1) **is looked up in global scope**, in the translation unit where it is used. The name shall be declared in global namespace scope or shall be a name whose declaration is visible in global scope because of a using-directive (3.4.3.2). The use of :: **allows a global name to be referred to even if its identifier has been hidden** (3.3.10).
As the standard states this allows us to use names from the global namespace [that would otherwise be hidden](http://publib.boulder.ibm.com/infocenter/lnxpcomp/v8v101/index.jsp?topic=%2Fcom.ibm.xlcpp8l.doc%2Flanguage%2Fref%2Fcplr175.htm), the example from the linked document is as follows:
```
int count = 0;
int main(void) {
int count = 0;
::count = 1; // set global count to 1
count = 2; // set local count to 2
return 0;
}
```
The wording is very similar going back to [N1804](http://www.open-std.org/Jtc1/sc22/wg21/docs/papers/2005/n1804.pdf) which is the earliest draft standard available.
|
scope resolution operator without a scope
|
[
"",
"c++",
""
] |
How can I detect when an Exception has been thrown anywhere in my application?
I'm trying to auto-magically send myself an email whenever an exception is thrown anywhere in my Java Desktop Application. I figure this way I can be more proactive.
I know I could just explicitly log and notify myself whenever an exception occurs, but I'd have to do it everywhere and I might (more likely will) miss a couple.
Any suggestions?
|
You probably don't want to mail on every exception. There is a lot of code in the JDK that actually depends on exceptions to work normally. What I presume you are more interested in are uncaught exceptions. If you are catching the exceptions you should handle notifications there.
In a desktop app there are two places to worry about this, in the [event-dispatch-thread](/questions/tagged/event-dispatch-thread "show questions tagged 'event-dispatch-thread'") (EDT) and outside of the EDT. Globally you can register a class implementing `java.lang.Thread.UncaughtExceptionHandler` and register it via `java.lang.Thread.setDefaultUncaughtExceptionHandler`. This will get called if an exception winds down to the bottom of the stack and no handler has been set on the current thread instance or its ThreadGroup.
The EDT has a different hook for handling exceptions. A system property `'sun.awt.exception.handler'` needs to be registered with the fully qualified class name of a class with a zero-argument constructor. This class needs an instance method handle(`Throwable`) that does your work. The return type doesn't matter, and since a new instance is created every time, don't count on keeping state.
So if you don't care what thread the exception occurred in a sample may look like this:
```
class ExceptionHandler implements Thread.UncaughtExceptionHandler {
public void uncaughtException(Thread t, Throwable e) {
handle(e);
}
public void handle(Throwable throwable) {
try {
// insert your e-mail code here
} catch (Throwable t) {
// don't let the exception get thrown out, will cause infinite looping!
}
}
public static void registerExceptionHandler() {
Thread.setDefaultUncaughtExceptionHandler(new ExceptionHandler());
System.setProperty("sun.awt.exception.handler", ExceptionHandler.class.getName());
}
}
```
Add this class into some random package, and then call the `registerExceptionHandler` method and you should be ready to go.
|
The new debugging hooks in Java 1.5 let you do this. It enables e.g. "break on any exception" in debuggers.
[Here's the specific Javadoc](http://java.sun.com/j2se/1.5.0/docs/guide/jpda/jdi/com/sun/jdi/event/ExceptionEvent.html) you need.
|
How can I detect when an Exception's been thrown globally in Java?
|
[
"",
"java",
"exception",
""
] |
What tools are there available for static analysis against C# code? I know about FxCop and StyleCop. Are there others? I've run across NStatic before but it's been in development for what seems like forever - it's looking pretty slick from what little I've seen of it, so it would be nice if it would ever see the light of day.
Along these same lines (this is primarily my interest for static analysis), tools for testing code for multithreading issues (deadlocks, race conditions, etc.) also seem a bit scarce. Typemock Racer just popped up so I'll be looking at that. Anything beyond this?
Real-life opinions about tools you've used are appreciated.
|
**Code violation detection Tools:**
* [FxCop](https://msdn.microsoft.com/en-us/library/bb429476(v=vs.80).aspx), excellent tool by Microsoft. Check compliance with .NET framework guidelines.
**Edit October 2010:** No longer available as a standalone download. It is now included in the [Windows SDK](http://www.microsoft.com/downloads/en/details.aspx?FamilyID=6b6c21d2-2006-4afa-9702-529fa782d63b&displaylang=en) and after installation can be found in `Program Files\Microsoft SDKs\Windows\ [v7.1] \Bin\FXCop\FxCopSetup.exe`
**Edit February 2018**: This functionality has now been integrated into Visual Studio 2012 and later as [Code Analysis](https://learn.microsoft.com/en-gb/visualstudio/code-quality/code-analysis-for-managed-code-overview)
* [Clocksharp](http://sharptoolbox.com/tools/clocksharp), based on code source analysis (to C# 2.0)
* [Mono.Gendarme](http://www.mono-project.com/Gendarme), similar to FxCop but with an open source licence (based on [Mono.Cecil](http://mono-project.com/Cecil))
* [Smokey](http://code.google.com/p/smokey/), similar to FxCop and Gendarme, based on [Mono.Cecil](http://mono-project.com/Cecil). No longer in development; the main developer works with the Gendarme team now.
* [Coverity Prevent™ for C#](http://www.coverity.com/html/coverity-prevent-for-c%23.html), commercial product
* [PRQA QA·C#](http://www.programmingresearch.com/products/qacsharp/), commercial product
* [PVS-Studio](http://www.viva64.com/en/pvs-studio/), commercial product
* [CAT.NET](http://www.microsoft.com/downloads/details.aspx?FamilyId=0178e2ef-9da8-445e-9348-c93f24cc9f9d&displaylang=en), visual studio addin that helps identification of security flaws **Edit November 2019:** Link is dead.
* [CodeIt.Right](http://submain.com/codeit.right)
* [Spec#](http://research.microsoft.com/SpecSharp/)
* [Pex](http://research.microsoft.com/Pex/)
* [SonarQube](https://www.sonarqube.org/), FOSS & Commercial options to support writing cleaner and safer code.
**Quality Metric Tools:**
* [NDepend](http://www.ndepend.com/), great visual tool. Useful for code metrics, rules, diff, coupling and dependency studies.
* [Nitriq](http://www.nitriq.com), free, can easily write your own metrics/constraints, nice visualizations. **Edit February 2018:** download links now dead. **Edit June 17, 2019: Links not dead.**
* [RSM Squared](http://msquaredtechnologies.com/), based on code source analysis
* [C# Metrics](http://www.semanticdesigns.com/Products/Metrics/CSharpMetrics.html), using a full parse of C#
* [SourceMonitor](http://www.campwoodsw.com/sourcemonitor.html), an old tool that occasionally gets updates
* [Code Metrics](https://archive.codeplex.com/?p=codemetrics), a [*Reflector*](http://www.red-gate.com/products/reflector/) add-in
* [Vil](https://web.archive.org/web/20140310013820/http://www.1bot.com/index.html), old tool that doesn't support .NET 2.0. **Edit January 2018:** Link now dead
**Checking Style Tools:**
* [StyleCop](https://github.com/StyleCop/StyleCop), Microsoft tool ( run from inside of Visual Studio or integrated into an MSBuild project). Also available [as an extension](https://visualstudiogallery.msdn.microsoft.com/cac2a05b-6eb6-4fa2-95b9-1f8d011e6cae) for Visual Studio 2015 and C#6.0
* [Agent Smith](https://code.google.com/p/agentsmithplugin/), code style validation plugin for *ReSharper*
**Duplication Detection:**
* [Simian](http://www.harukizaemon.com/simian/index.html), based on source code. Works with plenty of languages.
* [CloneDR](http://www.semanticdesigns.com/Products/CloneDR), detects parameterized clones only on language boundaries (also handles many languages other than C#)
* [Clone Detective](https://github.com/terrajobst/clonedetective-vs) a Visual Studio plugin (which uses [ConQAT](https://www.cqse.eu/en/products/conqat/overview/) internally)
* [Atomiq](http://www.getatomiq.com), based on source code, plenty of languages, cool "wheel" visualization
**General Refactoring tools**
* [ReSharper](http://www.jetbrains.com/resharper/) - Majorly cool C# code analysis and refactoring features
|
The tool [NDepend](http://www.ndepend.com/) is quoted as **Quality Metric Tools** but it is pretty much also a **Code violation detection** tool. *Disclaimer: I am one of the developers of the tool*
With NDepend, one can write [Code Rule over LINQ Queries (what we call CQLinq)](http://www.ndepend.com/Features.aspx#CQL). More than [200 CQLinq code rules](http://www.ndepend.com/DefaultRules/webframe.html) are proposed by default. The strength of CQLinq is that *it is straightforward to write a code rule*, and to get results *immediately*. Facilities are proposed to browse matched code elements. For example:
[](https://i.stack.imgur.com/4U5IV.png)
Besides that, NDepend comes with many other *static analysis* features. These include:
* [Reporting from your CI/CD](https://www.ndepend.com/sample-reports/)
* [Azure DevOps Hub](https://www.ndepend.com/docs/azure-devops-tfs-vsts-integration-ndepend)
* [GitHub Action](https://www.ndepend.com/docs/ndepend-github-action)
* [Smart Technical Debt Estimation](http://www.ndepend.com/features/technical-debt)
* [Dependency Matrix](http://www.ndepend.com/Doc_Matrix.aspx)
* [Code Diff capabilities](http://www.ndepend.com/Doc_VS_Diff.aspx)
* [NDepend.API](http://www.ndepend.com/API/webframe.html) that lets you write your own static analysis tool. With NDepend.API we even developed a tool to detect code duplicates (details in this blog post: [An Original Algorithm to Find .NET Code Duplicate](http://codebetter.com/patricksmacchia/2012/06/12/an-original-algorithm-to-find-net-code-duplicate/)).
* [Powerful Dependency Graph](http://www.ndepend.com/Doc_VS_Arch.aspx)
[](https://i.stack.imgur.com/HXo0L.png)
|
What static analysis tools are available for C#?
|
[
"",
"c#",
"code-analysis",
"static-analysis",
""
] |
I have been making a little toy web application in C# along the lines of Rob Connery's Asp.net MVC storefront.
I find that I have a repository interface, call it IFooRepository, with methods, say
```
IQueryable<Foo> GetFoo();
void PersistFoo(Foo foo);
```
And I have three implementations of this: ISqlFooRepository, IFileFooRepository, and IMockFooRepository.
I also have some test cases. What I would like to do, and haven't worked out how to do yet, is to run the same test cases against each of these three implementations, and have a green tick for each test pass on each interface type.
e.g.
```
[TestMethod]
public void GetFoo_NotNull_Test()
{
IFooRepository repository = GetRepository();
var results = repository.GetFoo();
Assert.IsNotNull(results);
}
```
I want this test method to be run three times, with some variation in the environment that allows it to get three different kinds of repository. At present I have three cut-and-pasted test classes that differ only in the implementation of the private helper method IFooRepository GetRepository(); Obviously, this is smelly.
However, I cannot just remove duplication by consolidating the cut and pasted methods, since they need to be present, public and marked as test for the test to run.
I am using the Microsoft testing framework, and would prefer to stay with it if I can. But a suggestion of how to do this in, say, MBUnit would also be of some interest.
|
Create an abstract class that contains concrete versions of the tests and an abstract GetRepository method which returns IFooRepository.
Create three classes that derive from the abstract class, each of which implements GetRepository in a way that returns the appropriate IFooRepository implementation.
Add all three classes to your test suite, and you're ready to go.
To be able to selectively run the tests for some providers and not others, consider using the MbUnit '[FixtureCategory]' attribute to categorise your tests - suggested categories are 'quick', 'slow', 'db', 'important' and 'unimportant' (The last two are jokes - honest!)
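A minimal sketch of that layout (type names are illustrative; the same pattern works in MSTest, where `[TestMethod]`s inherited from a base class run once per derived `[TestClass]`):

```csharp
[TestClass]
public abstract class FooRepositoryTests
{
    // Each concrete fixture supplies its own repository implementation.
    protected abstract IFooRepository GetRepository();

    [TestMethod]
    public void GetFoo_NotNull_Test()
    {
        IFooRepository repository = GetRepository();
        Assert.IsNotNull(repository.GetFoo());
    }
}

[TestClass]
public class SqlFooRepositoryTests : FooRepositoryTests
{
    protected override IFooRepository GetRepository() { return new SqlFooRepository(); }
}

[TestClass]
public class FileFooRepositoryTests : FooRepositoryTests
{
    protected override IFooRepository GetRepository() { return new FileFooRepository(); }
}
```

Each test written once in the base class then shows up once per concrete fixture in the test runner - three green ticks per test.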
|
In MbUnit, you might be able to use the RowTest attribute to specify parameters on your test.
```
[RowTest]
[Row(new ThisRepository())]
[Row(new ThatRepository())]
public void GetFoo_NotNull_Test(IFooRepository repository)
{
var results = repository.GetFoo();
Assert.IsNotNull(results);
}
```
|
Using the same test suite on various implementations of a repository interface
|
[
"",
"c#",
"unit-testing",
"interface",
"duplication",
""
] |
Is there a way to change the encoding used by the String(byte[]) constructor?
In my own code I use String(byte[],String) to specify the encoding but I am using an external library that I cannot change.
```
String src = "with accents: é à";
byte[] bytes = src.getBytes("UTF-8");
System.out.println("UTF-8 decoded: "+new String(bytes,"UTF-8"));
System.out.println("Default decoded: "+new String(bytes));
```
The output for this is :
```
UTF-8 decoded: with accents: é à
Default decoded: with accents: é Ã
```
I have tried changing the system property `file.encoding` but it does not work.
|
You need to change the locale before launching the JVM; see:
[Java, bug ID 4163515](https://bugs.java.com/bugdatabase/view_bug;jsessionid=6f932817e77b249681417c21bfaf?bug_id=4163515)
Some places seem to imply you can do this by setting the file.encoding variable when launching the JVM, such as
```
java -Dfile.encoding=UTF-8 ...
```
...but I haven't tried this myself. The safest way is to set an environment variable in the operating system.
|
Quoted from [defaultCharset()](http://java.sun.com/j2se/1.5.0/docs/api/java/nio/charset/Charset.html)
> The default charset is determined
> during virtual-machine startup and
> typically depends upon the locale and
> charset of the underlying operating
> system.
In most OSes you can set the charset using a environment variable.
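For example, this small sketch (using the `Charset` overloads available since Java 6) shows that `new String(byte[])` decodes with exactly whatever `Charset.defaultCharset()` reports - the printed charset name depends on your OS/locale:

```java
import java.nio.charset.Charset;

public class DefaultCharsetDemo {
    public static void main(String[] args) {
        byte[] bytes = "with accents: \u00e9 \u00e0".getBytes(Charset.forName("UTF-8"));

        // The no-charset constructor always uses the platform default charset...
        String viaDefault = new String(bytes);
        // ...so it behaves exactly like passing Charset.defaultCharset() explicitly.
        String viaExplicit = new String(bytes, Charset.defaultCharset());

        System.out.println("default charset: " + Charset.defaultCharset().name());
        System.out.println(viaDefault.equals(viaExplicit)); // true
    }
}
```

Unless the default happens to be UTF-8, both strings will show the same mojibake as in the question.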
|
Changing the default encoding for String(byte[])
|
[
"",
"java",
"encoding",
""
] |
This seemed like an easy thing to do. I just wanted to pop up a text window and display two columns of data -- a description on the left side and a corresponding value displayed on the right side. I haven't worked with Forms much so I just grabbed the first control that seemed appropriate, a TextBox. I thought using tabs would be an easy way to create the second column, but I discovered things just don't work that well.
There seem to be two problems with the way I tried to do this (see below). First, I read on numerous websites that the MeasureString function isn't very precise due to how complex fonts are, with kerning issues and all. The second is that I have no idea what the TextBox control is using as its StringFormat underneath.
Anyway, the result is that I invariably end up with items in the right column that are off by a tab. I suppose I could roll my own text window and do everything myself, but gee, isn't there a simple way to do this?
```
TextBox textBox = new TextBox();
textBox.Font = new Font("Calibri", 11);
textBox.Dock = DockStyle.Fill;
textBox.Multiline = true;
textBox.WordWrap = false;
textBox.ScrollBars = ScrollBars.Vertical;
Form form = new Form();
form.Text = "Recipe";
form.Size = new Size(400, 600);
form.FormBorderStyle = FormBorderStyle.Sizable;
form.StartPosition = FormStartPosition.CenterScreen;
form.Controls.Add(textBox);
Graphics g = form.CreateGraphics();
float targetWidth = 230;
foreach (PropertyInfo property in properties)
{
string text = String.Format("{0}:\t", Description);
while (g.MeasureString(text,textBox.Font).Width < targetWidth)
text += "\t";
textBox.AppendText(text + value.ToString() + "\n");
}
g.Dispose();
form.ShowDialog();
```
|
If you want, you can translate this VB.Net code to C#. The theory here is that you change the size of a tab in the control.
```
Private Declare Function SendMessage _
Lib "user32" Alias "SendMessageA" _
(ByVal handle As IntPtr, ByVal wMsg As Integer, _
ByVal wParam As Integer, ByRef lParam As Integer) As Integer
Private Sub SetTabStops(ByVal ctlTextBox As TextBox)
Const EM_SETTABSTOPS As Integer = &HCBS
Dim tabs() As Integer = {20, 40, 80}
SendMessage(ctlTextBox.Handle, EM_SETTABSTOPS, _
tabs.Length, tabs(0))
End Sub
```
I converted a version to C# for you, too. Tested and working in VS2005.
Add this using statement to your form:
```
using System.Runtime.InteropServices;
```
Put this right after the class declaration:
```
private const int EM_SETTABSTOPS = 0x00CB;
[DllImport("User32.dll", CharSet = CharSet.Auto)]
public static extern IntPtr SendMessage(IntPtr h, int msg, int wParam, int[] lParam);
```
Call this method when you want to set the tabstops:
```
private void SetTabStops(TextBox ctlTextBox)
{
int[] tabs = { 100, 40, 80 };
SendMessage(ctlTextBox.Handle, EM_SETTABSTOPS, tabs.Length, tabs);
}
```
To use it, here is all I did:
```
private void Form1_Load(object sender, EventArgs e)
{
SetTabStops(textBox1);
textBox1.Text = "Hi\tWorld";
}
```
|
Thanks Matt, your solution worked great for me. Here's my version of your code...
```
// This is a better way to pass in what tab stops I want...
SetTabStops(textBox, new int[] { 12,120 });
// And the code for the SetTabsStops method itself...
private const uint EM_SETTABSTOPS = 0x00CB;
[DllImport("User32.dll")]
private static extern uint SendMessage(IntPtr hWnd, uint wMsg, int wParam, int[] lParam);
public static void SetTabStops(TextBox textBox, int[] tabs)
{
SendMessage(textBox.Handle, EM_SETTABSTOPS, tabs.Length, tabs);
}
```
|
Is there an easy way to create two columns in a popup text window?
|
[
"",
"c#",
"winforms",
"controls",
"formatting",
""
] |
In order for my application (.Net 1.1) to use the system configured proxy server (through a proxy.pac script) I was using interop calls to the WinHTTP function WinHttpGetProxyForUrl, passing the proxy.pac url I got from the registry.
Unfortunately, I hit a deployment scenario, where this does not work, as the proxy.pac file is deployed locally on the user's hard drive, and the url is "file://C://xxxx"
As clearly stated in the WinHttpGetProxyForUrl docs, it works only with http and https schemes, so it fails with file://
I'm considering 2 "ugly" solutions to the problem (the pac file is javascript):
1. Creating a separate JScript.NET project, with a single class with a single static method Eval(string), and use it to eval at runtime the function read from the pac file
2. Building at runtime a JScript.NET assembly and load it.
As these solutions are really ugly :), does anybody know a better approach? Is there a Windows function which I can use through interop?
If not, what are you guys thinking about the above 2 solutions - which one would you prefer?
|
Just a thought: Why not create a micro web server that can serve the local PAC file over a localhost socket. You should use a random URI for the content so that it is difficult to browse this in unexpected ways.
You could then pass a URL like <http://localhost:1234/gfdjklskjgfsdjgklsdfklgfsjkl> to the WinHttpGetProxyForUrl function and allow it to pull the PAC file from your micro server.
(hack... hack... hack...)
|
FWIW: <https://web.archive.org/web/20150405115150/http://msdn.microsoft.com/en-us/magazine/cc300743.aspx> describes how to use the JScript.NET engine to do this securely.
<https://web.archive.org/web/20090220132508/http://msdn.microsoft.com/en-us/library/aa383910(VS.85).aspx> explains how to use WinINET's implementation.
|
Autoproxy configuration script parsing in .Net/C#
|
[
"",
"c#",
"jscript.net",
"autoproxy",
""
] |
What is the best way to use ResolveUrl() in a Shared/static function in Asp.Net? My current solution for VB.Net is:
```
Dim x As New System.Web.UI.Control
x.ResolveUrl("~/someUrl")
```
Or C#:
```
System.Web.UI.Control x = new System.Web.UI.Control();
x.ResolveUrl("~/someUrl");
```
But I realize that isn't the best way of calling it.
|
I use [System.Web.VirtualPathUtility.ToAbsolute](http://msdn.microsoft.com/en-us/library/system.web.virtualpathutility.aspx).
|
It's worth noting that although System.Web.VirtualPathUtility.ToAbsolute is very useful here, it is **not** a perfect replacement for Control.ResolveUrl.
There is at least one significant difference: Control.ResolveUrl handles Query Strings very nicely, but they cause VirtualPathUtility to throw an HttpException. This can be absolutely mystifying the first time it happens, especially if you're used to the way that Control.ResolveUrl works.
If you know the exact structure of the Query String you want to use, this is easy enough to work around, viz:
```
public static string GetUrl(int id)
{
string path = VirtualPathUtility.ToAbsolute("~/SomePage.aspx");
return string.Format("{0}?id={1}", path, id);
}
```
...but if the Query String is getting passed in from an unknown source then you're going to need to parse it out somehow. (Before you get too deep into that, note that System.Uri might be able to do it for you).
|
ASP.Net: Using System.Web.UI.Control.ResolveUrl() in a shared/static function
|
[
"",
"c#",
"asp.net",
"vb.net",
"static",
"resolveurl",
""
] |
I need to find out the **external** IP of the computer a C# application is running on.
In the application I have a connection (via .NET remoting) to a server. Is there a good way to get the address of the client on the server side?
*(I have edited the question, to be a little more clear. I'm apologize to all kind people who did their best to respond to the question, when I perhaps was a little too vague)*
**Solution:**
I found a way that worked great for me. By implementing a custom IServerChannelSinkProvider and IServerChannelSink where I have access to CommonTransportKeys.IPAddress, it's easy to add the client IP to the CallContext.
```
public ServerProcessing ProcessMessage(IServerChannelSinkStack sinkStack,
IMessage requestmessage, ITransportHeaders requestHeaders,
System.IO.Stream requestStream, out IMessage responseMessage,
out ITransportHeaders responseHeaders, out System.IO.Stream responseStream)
{
try
{
// Get the IP address and add it to the call context.
IPAddress ipAddr = (IPAddress)requestHeaders[CommonTransportKeys.IPAddress];
CallContext.SetData("ClientIP", ipAddr);
}
catch (Exception)
{
}
sinkStack.Push(this, null);
ServerProcessing srvProc = _NextSink.ProcessMessage(sinkStack, requestmessage, requestHeaders,
requestStream, out responseMessage, out responseHeaders, out responseStream);
return srvProc;
}
```
And then later (when I get a request from a client) just get the IP from the CallContext like this.
```
public string GetClientIP()
{
// Get the client IP from the call context.
object data = CallContext.GetData("ClientIP");
// If the data is null or not an IPAddress, then return an empty string.
if (data == null || !(data is IPAddress))
return string.Empty;
// Return the data as a string.
return ((IPAddress)data).ToString();
}
```
I can now send the IP back to the client.
|
I found a way that worked great for me. By implementing a custom IServerChannelSinkProvider and IServerChannelSink where I have access to CommonTransportKeys.IPAddress, it's easy to add the client IP to the CallContext.
```
public ServerProcessing ProcessMessage(IServerChannelSinkStack sinkStack,
IMessage requestmessage, ITransportHeaders requestHeaders,
System.IO.Stream requestStream, out IMessage responseMessage,
out ITransportHeaders responseHeaders, out System.IO.Stream responseStream)
{
try
{
// Get the IP address and add it to the call context.
IPAddress ipAddr = (IPAddress)requestHeaders[CommonTransportKeys.IPAddress];
CallContext.SetData("ClientIP", ipAddr);
}
catch (Exception)
{
}
sinkStack.Push(this, null);
ServerProcessing srvProc = _NextSink.ProcessMessage(sinkStack, requestmessage, requestHeaders,
requestStream, out responseMessage, out responseHeaders, out responseStream);
return srvProc;
}
```
And then later (when I get a request from a client) just get the IP from the CallContext like this.
```
public string GetClientIP()
{
// Get the client IP from the call context.
object data = CallContext.GetData("ClientIP");
// If the data is null or not an IPAddress, then return an empty string.
if (data == null || !(data is IPAddress))
return string.Empty;
// Return the data as a string.
return ((IPAddress)data).ToString();
}
```
I can now send the IP back to the client.
|
This is one of those questions where you have to look deeper and maybe rethink the original problem; in this case, "Why do you need an external IP address?"
The issue is that the computer may not have an external IP address. For example, my laptop has an internal IP address (192.168.x.y) assigned by the router. The router itself has an internal IP address, but its "external" IP address is also internal. It's only used to communicate with the DSL modem, which actually has the external, internet-facing IP address.
So the real question becomes, "How do I get the Internet-facing IP address of a device 2 hops away?" And the answer is generally, you don't; at least not without using a service such as whatismyip.com that you have already dismissed, or doing a really massive hack involving hardcoding the DSL modem password into your application and querying the DSL modem and screen-scraping the admin page (and God help you if the modem is ever replaced).
EDIT: Now to apply this towards the refactored question, "How do I get the IP address of my client from a server .NET component?" Like whatismyip.com, the best the server will be able to do is give you the IP address of your internet-facing device, which is unlikely to be the actual IP address of the computer running the application. Going back to my laptop, if my Internet-facing IP was 75.75.75.75 and the LAN IP was 192.168.0.112, the server would only be able to see the 75.75.75.75 IP address. That will get it as far as my DSL modem. If your server wanted to make a separate connection back to my laptop, I would first need to configure the DSL modem and any routers in between it and my laptop to recognize incoming connections from your server and route them appropriately. There are a few ways to do this, but it's outside the scope of this topic.
If you are in fact trying to make a connection out from the server back to the client, rethink your design because you are delving into WTF territory (or at least, making your application that much harder to deploy).
|
Get external IP address over remoting in C#
|
[
"",
"c#",
".net",
"remoting",
""
] |
In Google Reader, you can use a bookmarklet to "note" a page you're visiting. When you press the bookmarklet, a little Google form is displayed on top of the current page. In the form you can enter a description, etc. When you press Submit, the form submits itself without leaving the page, and then the form disappears. All in all, a very smooth experience.
I obviously tried to take a look at how it's done, but the most interesting parts are minified and unreadable. So...
Any ideas on how to implement something like this (on the browser side)? What issues are there? Existing blog posts describing this?
|
Aupajo has it right. I will, however, point you towards a bookmarklet framework I worked up for our site ([www.iminta.com](http://www.iminta.com)).
The bookmarklet itself reads as follows:
```
javascript:void((function(){
var e=document.createElement('script');
e.setAttribute('type','text/javascript');
e.setAttribute('src','http://www.iminta.com/javascripts/new_bookmarklet.js?noCache='+new%20Date().getTime());
document.body.appendChild(e)
})())
```
This just injects a new script into the document that includes this file:
<http://www.iminta.com/javascripts/new_bookmarklet.js>
It's important to note that the bookmarklet creates an iframe, positions it, and adds events to the document to allow the user to do things like hit escape (to close the window) or to scroll (so it stays visible). It also hides elements that don't play well with z-positioning (flash, for example). Finally, it facilitates communicating across to the javascript that is running within the iframe. In this way, you can have a close button in the iframe that tells the parent document to remove the iframe. This kind of cross-domain stuff is a bit hacky, but it's the only way (I've seen) to do it.
Not for the faint of heart; if you're not good at JavaScript, prepare to struggle.
|
At its very basic level it will be using `createElement` to create the elements to insert into the page and `appendChild` or `insertBefore` to insert them into the page.
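A minimal browser-side sketch of that idea (element choices and styling are just illustrative):

```javascript
// Build a small overlay form and inject it into the current page.
var panel = document.createElement('div');
panel.style.cssText =
    'position:fixed;top:20px;right:20px;z-index:99999;' +
    'background:#fff;border:1px solid #999;padding:8px';

var note = document.createElement('textarea');
var save = document.createElement('button');
save.appendChild(document.createTextNode('Save'));

save.onclick = function () {
    // Submit note.value back to your server here (e.g. via a hidden form
    // or an injected <script>/<img> request), then remove the overlay.
    document.body.removeChild(panel);
};

panel.appendChild(note);
panel.appendChild(save);
document.body.appendChild(panel);
```

Wrapped in a `javascript:` URL (or loaded by a tiny bootstrap script, as in the other answer), this is essentially all a bookmarklet like Google Reader's "Note" does.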
|
How to display a form in any site's pages using a bookmarklet (like Note in Google Reader)?
|
[
"",
"javascript",
"client-side",
"bookmarklet",
"browser",
""
] |
Being relatively new to the .net game, I was wondering, has anyone had any experience of the pros / cons between the use of LINQ and what could be considered more traditional methods working with lists / collections?
For a specific example of a project I'm working on: a list of unique id / name pairs is being retrieved from a remote web-service.
* this list will change infrequently (once per day),
* will be read-only from the point of view of the application where it is being used
* will be stored at the application level for all requests to access
Given those points, I plan to store the returned values at the application level in a singleton class.
My initial approach was to iterate through the list returned from the remote service and store it in a NameValueCollection in a singleton class, with methods to retrieve from the collection based on an id:
```
sugarsoap soapService = new sugarsoap();
branch_summary[] branchList = soapService.getBranches();
foreach (branch_summary aBranch in branchList)
{
branchNameList.Add(aBranch.id, aBranch.name);
}
```
The alternative using LINQ is to simply add a method that works on the list directly once it has been retrieved:
```
public string branchName (string branchId)
{
//branchList populated in the constructor
branch_summary bs = from b in branchList where b.id == branchId select b;
return branch_summary.name;
}
```
Is either better than the other - is there a third way? I'm open to all answers, for both approaches and both in terms of solutions that offer elegance, and those which benefit performance.
|
I don't think the LINQ you wrote would compile; it'd have to be
```
public string branchName (string branchId)
{
//branchList populated in the constructor
branch_summary bs = (from b in branchList where b.id == branchId select b).FirstOrDefault();
return bs == null ? null : bs.name;
}
```
note the .FirstOrDefault()
I'd rather use LINQ for the reason that it can be used in other places, for writing more complex filters on your data. I also think it's easier to read than the NameValueCollection alternative.
that's my $0.02
|
In general, your simple one-line for/foreach loop will be faster than using Linq. Also, Linq doesn't [always] offer significant readability improvements in this case. Here is the general rule I code by:
If the algorithm is simple enough to write and maintain without Linq, and you don't need delayed evaluation, *and* Linq doesn't offer sufficient maintainability improvements, then don't use it. However, there are times where Linq *immensely* improves the readability and correctness of your code, as shown in two examples I posted [here](http://blog.280z28.org/archives/2008/10/20/) and [here](http://blog.280z28.org/archives/2008/10/24/).
|
Pros & cons between LINQ and traditional collection based approaches
|
[
"",
"c#",
".net",
"asp.net",
"linq",
""
] |
In the ContainsIngredients method in the following code, is it possible to cache the *p.Ingredients* value instead of explicitly referencing it several times? This is a fairly trivial example that I just cooked up for illustrative purposes, but the code I'm working on references values deep inside *p* eg. *p.InnerObject.ExpensiveMethod().Value*
edit:
I'm using the PredicateBuilder from <http://www.albahari.com/nutshell/predicatebuilder.html>
```
public class IngredientBag
{
private readonly Dictionary<string, string> _ingredients = new Dictionary<string, string>();
public void Add(string type, string name)
{
_ingredients.Add(type, name);
}
public string Get(string type)
{
return _ingredients[type];
}
public bool Contains(string type)
{
return _ingredients.ContainsKey(type);
}
}
public class Potion
{
public IngredientBag Ingredients { get; private set;}
public string Name {get; private set;}
public Potion(string name) : this(name, null)
{
}
public Potion(string name, IngredientBag ingredients)
{
Name = name;
Ingredients = ingredients;
}
public static Expression<Func<Potion, bool>>
ContainsIngredients(string ingredientType, params string[] ingredients)
{
var predicate = PredicateBuilder.False<Potion>();
// Here, I'm accessing p.Ingredients several times in one
// expression. Is there any way to cache this value and
// reference the cached value in the expression?
foreach (var ingredient in ingredients)
{
var temp = ingredient;
predicate = predicate.Or (
p => p.Ingredients != null &&
p.Ingredients.Contains(ingredientType) &&
p.Ingredients.Get(ingredientType).Contains(temp));
}
return predicate;
}
}
[STAThread]
static void Main()
{
var potions = new List<Potion>
{
new Potion("Invisibility", new IngredientBag()),
new Potion("Bonus"),
new Potion("Speed", new IngredientBag()),
new Potion("Strength", new IngredientBag()),
new Potion("Dummy Potion")
};
potions[0].Ingredients.Add("solid", "Eye of Newt");
potions[0].Ingredients.Add("liquid", "Gall of Peacock");
potions[0].Ingredients.Add("gas", "Breath of Spider");
potions[2].Ingredients.Add("solid", "Hair of Toad");
potions[2].Ingredients.Add("gas", "Peacock's anguish");
potions[3].Ingredients.Add("liquid", "Peacock Sweat");
potions[3].Ingredients.Add("gas", "Newt's aura");
var predicate = Potion.ContainsIngredients("solid", "Newt", "Toad")
.Or(Potion.ContainsIngredients("gas", "Spider", "Scorpion"));
foreach (var result in
from p in potions
where(predicate).Compile()(p)
select p)
{
Console.WriteLine(result.Name);
}
}
```
|
Can't you simply write your boolean expression in a separate static function which you call from your lambda - passing p.Ingredients as a parameter...
```
private static bool IsIngredientPresent(IngredientBag i, string ingredientType, string ingredient)
{
return i != null && i.Contains(ingredientType) && i.Get(ingredientType).Contains(ingredient);
}
public static Expression<Func<Potion, bool>>
ContainsIngredients(string ingredientType, params string[] ingredients)
{
var predicate = PredicateBuilder.False<Potion>();
// Here, I'm accessing p.Ingredients several times in one
// expression. Is there any way to cache this value and
// reference the cached value in the expression?
foreach (var ingredient in ingredients)
{
var temp = ingredient;
predicate = predicate.Or(
p => IsIngredientPresent(p.Ingredients, ingredientType, temp));
}
return predicate;
}
```
|
Have you considered [Memoization](http://en.wikipedia.org/wiki/Memoization)?
The basic idea is this; if you have an expensive function call, there is a function which will calculate the expensive value on first call, but return a cached version thereafter. The function looks like this;
```
static Func<T> Remember<T>(Func<T> GetExpensiveValue)
{
bool isCached = false;
T cachedResult = default(T);
return () =>
{
if (!isCached)
{
cachedResult = GetExpensiveValue();
isCached = true;
}
return cachedResult;
};
}
```
This means you can write this;
```
// here's something that takes ages to calculate
Func<string> MyExpensiveMethod = () =>
{
System.Threading.Thread.Sleep(5000);
return "that took ages!";
};
// and here's a function call that only calculates it once.
Func<string> CachedMethod = Remember(() => MyExpensiveMethod());
// only the first line takes five seconds;
// the second and third calls are instant.
Console.WriteLine(CachedMethod());
Console.WriteLine(CachedMethod());
Console.WriteLine(CachedMethod());
```
As a general strategy, it might help.
|
Is it possible to cache a value evaluated in a lambda expression?
|
[
"",
"c#",
"linq",
"lambda",
"predicate",
""
] |
I have a usercontrol that has several public properties. These properties automatically show up in the properties window of the VS2005 designer under the "Misc" category. Except two of the properties which are enumerations don't show up correctly.
The first one uses the following enum:
```
public enum VerticalControlAlign
{
Center,
Top,
Bottom
}
```
This does not show up in the designer *at all.*
The second uses this enum:
```
public enum AutoSizeMode
{
None,
KeepInControl
}
```
This one shows up, but the designer seems to think it's a bool and only shows True and False. And when you build a project using the controls it will say that it can't convert type bool to AutoSizeMode.
Also, these enums are declared globally in the namespace, so they are accessible everywhere.
Any ideas?
|
For starters, the second enum, AutoSizeMode, is declared in System.Windows.Forms. So that might cause the designer some issues.
Secondly, you might find the following page on MSDN useful:
<http://msdn.microsoft.com/en-us/library/tk67c2t8.aspx>
|
I made a little test with your problem (I'm not sure if I understood it correctly), and these properties show up in the designer correctly, and all enums are shown appropriately. If this isn't what you're looking for, then please explain yourself further.
Don't get hung up on the \_Ugly part thrown in there. I just used it for a quick test.
```
using System.ComponentModel;
using System.Windows.Forms;
namespace SampleApplication
{
public partial class CustomUserControl : UserControl
{
public CustomUserControl()
{
InitializeComponent();
}
/// <summary>
/// We're hiding AutoSizeMode in UserControl here.
/// </summary>
public new enum AutoSizeMode { None, KeepInControl }
public enum VerticalControlAlign { Center, Top, Bottom }
/// <summary>
/// Note that you cannot have a property
/// called VerticalControlAlign if it is
/// already defined in the scope.
/// </summary>
[DisplayName("VerticalControlAlign")]
[Category("stackoverflow.com")]
[Description("Sets the vertical control align")]
public VerticalControlAlign VerticalControlAlign_Ugly
{
get { return m_align; }
set { m_align = value; }
}
private VerticalControlAlign m_align;
/// <summary>
/// Note that you cannot have a property
/// called AutoSizeMode if it is
/// already defined in the scope.
/// </summary>
[DisplayName("AutoSizeMode")]
[Category("stackoverflow.com")]
[Description("Sets the auto size mode")]
public AutoSizeMode AutoSizeMode_Ugly
{
get { return m_autoSize; }
set { m_autoSize = value; }
}
private AutoSizeMode m_autoSize;
}
}
```
|
UserControl Property of type Enum displays in designer as bool or not at all
|
[
"",
"c#",
"visual-studio",
"enums",
"user-controls",
""
] |
In Python, is there any way to make a class, then make a second version of that class with identical data, but which can be changed, then reverted to be the same as the data in the original class?
So I would make a class with the numbers 1 to 5 as the data in it, then make a second class with the same names for sections (or very similar). Mess around with the numbers in the second class then with one function then reset them to be the same as in the first class.
The only alternative I've found is to make one aggravatingly long class with too many separate pieces of data in it to be readily usable.
|
A class is a template or blueprint; you can then have multiple instances of the class, each with different numbers, like so.
```
class dog(object):
    def __init__(self, height, width, length):
self.height = height
self.width = width
self.length = length
def revert(self):
self.height = 1
self.width = 2
self.length = 3
dog1 = dog(5, 6, 7)
dog2 = dog(2, 3, 4)
dog1.revert()
```
|
Here's another answer kind of like pobk's; it uses the instance's dict to do the work of saving/resetting variables, but doesn't require you to specify the names of them in your code. You can call save() at any time to save the state of the instance and reset() to reset to that state.
```
class MyReset:
def __init__(self, x, y):
self.x = x
self.y = y
self.save()
def save(self):
self.saved = self.__dict__.copy()
def reset(self):
self.__dict__ = self.saved.copy()
a = MyReset(20, 30)
a.x = 50
print a.x
a.reset()
print a.x
```
Why do you want to do this? It might not be the best/only way.
|
Classes in Python
|
[
"",
"python",
"class",
""
] |
I need to calculate `Math.exp()` from Java very frequently. Is it possible to get a native version to run faster than **Java**'s `Math.exp()`?
I tried just JNI + C, but it's slower than just plain **Java**.
|
+1 to writing your own exp() implementation. That is, if this is *really* a bottle-neck in your application. If you can deal with a little inaccuracy, there are a number of extremely efficient exponent estimation algorithms out there, some of them dating back centuries. As I understand it, Java's exp() implementation is fairly slow, even for algorithms which must return "exact" results.
Oh, and don't be afraid to write that exp() implementation in pure-Java. JNI has a lot of overhead, and the JVM is able to optimize bytecode at runtime sometimes even beyond what C/C++ is able to achieve.
|
This has already been requested several times (see e.g. [here](https://stackoverflow.com/questions/412019/code-optimizations#412988)). Here is an approximation to Math.exp(), copied from [this blog posting](http://martin.ankerl.com/2007/02/11/optimized-exponential-functions-for-java/):
```
public static double exp(double val) {
final long tmp = (long) (1512775 * val + (1072693248 - 60801));
return Double.longBitsToDouble(tmp << 32);
}
```
It is basically the same as a lookup table with 2048 entries and linear interpolation between the entries, but all this with IEEE floating point tricks. It's 5 times faster than Math.exp() on my machine, but this can vary drastically if you compile with -server.
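If you want to sanity-check the accuracy before relying on it, a small throwaway harness like the following (the `fastExp` method is just the snippet above wrapped in a class; everything else is for illustration) shows the relative error staying within a few percent over a moderate range:

```java
public class FastExp {
    // Same bit-twiddling approximation as the snippet above.
    static double fastExp(double val) {
        final long tmp = (long) (1512775 * val + (1072693248 - 60801));
        return Double.longBitsToDouble(tmp << 32);
    }

    public static void main(String[] args) {
        for (double x = -5.0; x <= 5.0; x += 1.0) {
            double exact = Math.exp(x);
            double approx = fastExp(x);
            // print the relative error at each sample point
            System.out.printf("x=%5.1f exact=%12.6f approx=%12.6f relerr=%.4f%n",
                    x, exact, approx, Math.abs(approx - exact) / exact);
        }
    }
}
```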
|
faster Math.exp() via JNI?
|
[
"",
"java",
"c",
"optimization",
"java-native-interface",
""
] |
I hope this question is not considered too basic for this forum, but we'll see. I'm wondering how to refactor some code for better performance that is getting run a bunch of times.
Say I'm creating a word frequency list, using a Map (probably a HashMap), where each key is a String with the word that's being counted and the value is an Integer that's incremented each time a token of the word is found.
In Perl, incrementing such a value would be trivially easy:
```
$map{$word}++;
```
But in Java, it's much more complicated. Here the way I'm currently doing it:
```
int count = map.containsKey(word) ? map.get(word) : 0;
map.put(word, count + 1);
```
Which of course relies on the autoboxing feature in the newer Java versions. I wonder if you can suggest a more efficient way of incrementing such a value. Are there even good performance reasons for eschewing the Collections framework and using a something else instead?
Update: I've done a test of several of the answers. See below.
|
## Some test results
I've gotten a lot of good answers to this question--thanks folks--so I decided to run some tests and figure out which method is actually fastest. The five methods I tested are these:
* the "ContainsKey" method that I presented in [the question](https://stackoverflow.com/questions/81346/most-efficient-way-to-increment-a-map-value-in-java)
* the "TestForNull" method suggested by Aleksandar Dimitrov
* the "AtomicLong" method suggested by Hank Gay
* the "Trove" method suggested by jrudolph
* the "MutableInt" method suggested by phax.myopenid.com
## Method
Here's what I did...
1. created five classes that were identical except for the differences shown below. Each class had to perform an operation typical of the scenario I presented: opening a 10MB file and reading it in, then performing a frequency count of all the word tokens in the file. Since this took an average of only 3 seconds, I had it perform the frequency count (not the I/O) 10 times.
2. timed the loop of 10 iterations but *not the I/O operation* and recorded the total time taken (in clock seconds) essentially using [Ian Darwin's method in the Java Cookbook](http://books.google.com/books?id=t85jM-ZwTX0C&printsec=frontcover&dq=java+cookbook&sig=ACfU3U1lAe1vnbVUwdIcWeTpaxZi1xVUXQ#PPA734,M1).
3. performed all five tests in series, and then did this another three times.
4. averaged the four results for each method.
## Results
I'll present the results first and the code below for those who are interested.
The **ContainsKey** method was, as expected, the slowest, so I'll give the speed of each method in comparison to the speed of that method.
* **ContainsKey:** 30.654 seconds (baseline)
* **AtomicLong:** 29.780 seconds (1.03 times as fast)
* **TestForNull:** 28.804 seconds (1.06 times as fast)
* **Trove:** 26.313 seconds (1.16 times as fast)
* **MutableInt:** 25.747 seconds (1.19 times as fast)
## Conclusions
It would appear that only the MutableInt method and the Trove method are significantly faster, in that only they give a performance boost of more than 10%. However, if threading is an issue, AtomicLong might be more attractive than the others (I'm not really sure). I also ran TestForNull with `final` variables, but the difference was negligible.
Note that I haven't profiled memory usage in the different scenarios. I'd be happy to hear from anybody who has good insights into how the MutableInt and Trove methods would be likely to affect memory usage.
Personally, I find the MutableInt method the most attractive, since it doesn't require loading any third-party classes. So unless I discover problems with it, that's the way I'm most likely to go.
## The code
Here is the crucial code from each method.
### ContainsKey
```
import java.util.HashMap;
import java.util.Map;
...
Map<String, Integer> freq = new HashMap<String, Integer>();
...
int count = freq.containsKey(word) ? freq.get(word) : 0;
freq.put(word, count + 1);
```
### TestForNull
```
import java.util.HashMap;
import java.util.Map;
...
Map<String, Integer> freq = new HashMap<String, Integer>();
...
Integer count = freq.get(word);
if (count == null) {
freq.put(word, 1);
}
else {
freq.put(word, count + 1);
}
```
### AtomicLong
```
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicLong;
...
final ConcurrentMap<String, AtomicLong> map =
new ConcurrentHashMap<String, AtomicLong>();
...
map.putIfAbsent(word, new AtomicLong(0));
map.get(word).incrementAndGet();
```
### Trove
```
import gnu.trove.TObjectIntHashMap;
...
TObjectIntHashMap<String> freq = new TObjectIntHashMap<String>();
...
freq.adjustOrPutValue(word, 1, 1);
```
### MutableInt
```
import java.util.HashMap;
import java.util.Map;
...
class MutableInt {
int value = 1; // note that we start at 1 since we're counting
public void increment () { ++value; }
public int get () { return value; }
}
...
Map<String, MutableInt> freq = new HashMap<String, MutableInt>();
...
MutableInt count = freq.get(word);
if (count == null) {
freq.put(word, new MutableInt());
}
else {
count.increment();
}
```
|
Now there is a shorter way with Java 8 using [`Map::merge`](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/Map.html#merge(K,V,java.util.function.BiFunction)).
```
myMap.merge(key, 1, Integer::sum)
```
or
```
myMap.merge(key, 1L, Long::sum)
```
for longs respectively.
What it does:
* if *key* does not exist, put *1* as the value
* otherwise add *1* to the value linked to *key*
More information [here](https://docs.oracle.com/javase/8/docs/api/java/util/Map.html#merge-K-V-java.util.function.BiFunction-).
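For instance, a tiny word-frequency count built on `merge` (the class name and input string here are invented purely for illustration):

```java
import java.util.HashMap;
import java.util.Map;

public class WordCount {
    static Map<String, Integer> count(String text) {
        Map<String, Integer> freq = new HashMap<>();
        for (String word : text.split("\\s+")) {
            // insert 1 for a new word, or add 1 to the existing count
            freq.merge(word, 1, Integer::sum);
        }
        return freq;
    }

    public static void main(String[] args) {
        System.out.println(count("a b a c a b"));
    }
}
```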
|
Most efficient way to increment a Map value in Java
|
[
"",
"java",
"optimization",
"collections",
""
] |
I'm looking for a general solution for upgrading database schema with ORM tools, like JPOX or Hibernate. How do you do it in your projects?
The first solution that comes to my mind is to create my own mechanism for upgrading databases, with SQL scripts doing all the work. But in this case I'll have to remember about creating new scripts every time the object mappings are updated. And I'll still have to deal with low-level SQL queries, instead of just defining mappings and allowing the ORM tools to do all the job...
So the question is how to do it properly. Maybe some tools allow for simplifying this task (for example, I heard that Rails have such mechanism built-in), if so please help me decide which ORM tool to choose for my next Java project.
|
[LiquiBase](http://www.liquibase.org) is an interesting open source library for handling database refactorings (upgrades). I have not used it, but will definitely give it a try on my next project where I need to upgrade a db schema.
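For a flavour of what that looks like, a LiquiBase changelog is an XML file of ordered change sets; a minimal sketch (the table and column names here are invented) that adds a column as part of an upgrade might be:

```xml
<databaseChangeLog
    xmlns="http://www.liquibase.org/xml/ns/dbchangelog">

  <changeSet id="2" author="alice">
    <!-- hypothetical refactoring: the Person entity gained an email field -->
    <addColumn tableName="person">
      <column name="email" type="varchar(255)"/>
    </addColumn>
  </changeSet>

</databaseChangeLog>
```

Each change set is applied once and recorded in a tracking table, which is what lets the tool upgrade any copy of the database to the current schema.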
|
I don't see why ORM-generated schemas are any different from other DB schemas - the problem is the same. Assuming your ORM will spit out a generation script, you can use an external tool to do the diff.
I've not tried it but google came back with [SQLCompare](http://www.red-gate.com/products/SQL_Compare/index.htm) as one option - I'm sure there are others.
|
How to upgrade database schema built with an ORM tool?
|
[
"",
"java",
"database",
"orm",
"migration",
""
] |
Anyone have a good rule of thumb for choosing between different implementations of Java Collection interfaces like List, Map, or Set?
For example, generally why or in what cases would I prefer to use a Vector or an ArrayList, a Hashtable or a HashMap?
|
I've always made those decisions on a case by case basis, depending on the use case, such as:
* Do I need the ordering to remain?
* Will I have null key/values? Dups?
* Will it be accessed by multiple threads
* Do I need a key/value pair
* Will I need random access?
And then I break out my handy 5th edition **Java in a Nutshell** and compare the ~20 or so options. It has nice little tables in Chapter five to help one figure out what is appropriate.
Ok, maybe if I know off the cuff that a simple ArrayList or HashSet will do the trick I won't look it all up. ;) but if there is anything remotely complex about my intended use, you bet I'm in the book. BTW, I thought Vector was supposed to be 'old hat'--I've not used one in years.
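To make those questions concrete, here is a rough sketch (not from the book, and certainly not exhaustive) of how typical answers map to implementations; the variable names are purely illustrative:

```java
import java.util.*;
import java.util.concurrent.ConcurrentHashMap;

public class CollectionChoices {
    public static void main(String[] args) {
        List<String> indexed = new ArrayList<String>();        // random access by index
        Deque<String> ends = new ArrayDeque<String>();         // fast add/remove at both ends
        Set<String> unique = new HashSet<String>();            // membership tests, no duplicates
        Map<String, Integer> lookup = new HashMap<String, Integer>();          // key/value, no ordering
        Map<String, Integer> sorted = new TreeMap<String, Integer>();          // keys kept sorted
        Map<String, Integer> keepOrder = new LinkedHashMap<String, Integer>(); // iteration = insertion order
        Map<String, Integer> shared = new ConcurrentHashMap<String, Integer>(); // multiple threads

        sorted.put("banana", 2);
        sorted.put("apple", 1);
        System.out.println(sorted.keySet()); // [apple, banana] -- always in key order
    }
}
```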
|
I really like this cheat sheet from Sergiy Kovalchuk's blog entry, but unfortunately it is offline. However, the Wayback Machine has a [historical copy](https://web.archive.org/web/20200802192003/http://www.sergiy.ca/guide-to-selecting-appropriate-map-collection-in-java/):

More detailed was Alexander Zagniotov's flowchart, also offline, therefore also a historical [copy of the blog](https://web.archive.org/web/20130424223516/http://initbinder.com/articles/cheat-sheet-for-selecting-maplistset-in-java.html):
[](https://i.stack.imgur.com/GfpyN.png)
Excerpt from the blog on concerns raised in comments:
"This cheat sheet doesn't include rarely used classes like WeakHashMap, LinkedList, etc. because they are designed for very specific or exotic tasks and shouldn't be chosen in 99% cases."
|
Rule of thumb for choosing an implementation of a Java Collection?
|
[
"",
"java",
"collections",
"heuristics",
""
] |
What's the best way to convert a string to an enumeration value in C#?
I have an HTML select tag containing the values of an enumeration. When the page is posted, I want to pick up the value (which will be in the form of a string) and convert it to the corresponding enumeration value.
In an ideal world, I could do something like this:
```
StatusEnum MyStatus = StatusEnum.Parse("Active");
```
but that isn't valid code.
|
In .NET Core and .NET Framework ≥4.0 [there is a generic parse method](https://msdn.microsoft.com/en-us/library/dd783499%28v=vs.110%29.aspx):
```
Enum.TryParse("Active", out StatusEnum myStatus);
```
This also includes C#7's new inline `out` variables, so this does the try-parse, conversion to the explicit enum type and initialises+populates the `myStatus` variable.
If you have access to C#7 and the latest .NET this is the best way.
## Original Answer
In .NET it's rather ugly (prior to 4.0):
```
StatusEnum MyStatus = (StatusEnum) Enum.Parse(typeof(StatusEnum), "Active", true);
```
I tend to simplify this with:
```
public static T ParseEnum<T>(string value)
{
return (T) Enum.Parse(typeof(T), value, true);
}
```
Then I can do:
```
StatusEnum MyStatus = EnumUtil.ParseEnum<StatusEnum>("Active");
```
One option suggested in the comments is to add an extension, which is simple enough:
```
public static T ToEnum<T>(this string value)
{
return (T) Enum.Parse(typeof(T), value, true);
}
StatusEnum MyStatus = "Active".ToEnum<StatusEnum>();
```
Finally, you may want to have a default enum to use if the string cannot be parsed:
```
public static T ToEnum<T>(this string value, T defaultValue)
{
if (string.IsNullOrEmpty(value))
{
return defaultValue;
}
T result;
return Enum.TryParse<T>(value, true, out result) ? result : defaultValue;
}
```
Which makes this the call:
```
StatusEnum MyStatus = "Active".ToEnum(StatusEnum.None);
```
However, I would be careful adding an extension method like this to `string` as (without namespace control) it will appear on all instances of `string` whether they hold an enum or not (so `1234.ToString().ToEnum(StatusEnum.None)` would be valid but nonsensical). It's often best to avoid cluttering Microsoft's core classes with extra methods that only apply in very specific contexts unless your entire development team has a very good understanding of what those extensions do.
|
Use [`Enum.TryParse<T>(String, T)`](http://msdn.microsoft.com/query/dev12.query?appId=Dev12IDEF1&l=EN-US&k=k%28System.Enum.TryParse%60%601%29;k%28SolutionItemsProject%29;k%28TargetFrameworkMoniker-.NETFramework,Version=v4.5%29;k%28DevLang-csharp%29&rd=true) (≥ .NET 4.0):
```
StatusEnum myStatus;
Enum.TryParse("Active", out myStatus);
```
It can be simplified even further with C# 7.0's [parameter type inlining](https://blogs.msdn.microsoft.com/dotnet/2016/08/24/whats-new-in-csharp-7-0/):
```
Enum.TryParse("Active", out StatusEnum myStatus);
```
|
Convert a string to an enum in C#
|
[
"",
"c#",
"string",
"enums",
""
] |
I am currently loading a lightbox-style popup that loads its HTML from an XHR call. This content is then displayed in a 'modal' popup using `element.innerHTML = content`. This works like a charm.
In another section of this website I use a Flickr 'badge' (<http://www.elliotswan.com/2006/08/06/custom-flickr-badge-api-documentation/>) to load flickr images dynamically. This is done by including a script tag that loads a flickr javascript, which in turn does some `document.write` statements.
Both of them work perfectly when included in the HTML. Only when loading the flickr badge code *inside* the lightbox, no content is rendered at all. It seems that using `innerHTML` to write `document.write` statements is taking it a step too far, but I cannot find any clue in the javascript implementations (FF2&3, IE6&7) of this behavior.
Can anyone clarify if this should or shouldn't work? Thanks.
|
In general, script tags aren't executed when using innerHTML. In your case, this is good, because the `document.write` call would wipe out everything that's already in the page. However, that leaves you without whatever HTML document.write was supposed to add.
jQuery's HTML manipulation methods will execute scripts in HTML for you, the trick is then capturing the calls to `document.write` and getting the HTML in the proper place. If it's simple enough, then something like this will do:
```
var content = '';
document.write = function(s) {
content += s;
};
// execute the script
$('#foo').html(markupWithScriptInIt);
$('#foo .whereverTheDocumentWriteContentGoes').html(content);
```
It gets complicated though. If the script is on another domain, it will be loaded asynchronously, so you'll have to wait until it's done to get the content. Also, what if it just writes the HTML into the middle of the fragment without a wrapper element that you can easily select? [writeCapture.js](http://github.com/iamnoah/writeCapture) (full disclosure: I wrote it) handles all of these problems. I'd recommend just using it, but at the very least you can look at the code to see how it handles everything.
EDIT: Here is a [page](http://iamnoah.github.com/writeCapture/lbFlickrDemo.html) demonstrating what sounds like the effect you want.
|
I created a simple test page that illustrates the problem:
```
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html>
<head>
<meta http-equiv="Content-type" content="text/html; charset=utf-8" />
<title>Document Write Testcase</title>
</head>
<body>
<div id="container">
</div>
<div id="container2">
</div>
<script>
// This doesn't work!
var container = document.getElementById('container');
container.innerHTML = "<script type='text/javascript'>alert('foo');document.write('bar');<\/script>";
// This does!
var container2 = document.getElementById('container2');
var script = document.createElement("script");
script.type = 'text/javascript';
script.innerHTML = "alert('bar');document.write('foo');";
container2.appendChild(script);
</script>
</body>
</html>
```
This page alerts 'bar' and prints 'foo', while I expected it to also alert 'foo' and print 'bar'. But, unfortunately, since the `script` tag is part of a larger HTML page, I cannot single out that tag and append it like the example above. Well, I can, but that would require scanning `innerHTML` content for `script` tags, replacing them in the string with placeholders, and then inserting them using the DOM. Sounds not *that* trivial.
|
Dynamically inserting javascript into HTML that uses document.write
|
[
"",
"javascript",
"html",
"ajax",
""
] |
When using Google Chrome, I want to debug some JavaScript code. How can I do that?
|
Windows: `CTRL`-`SHIFT`-`J` OR F12
Mac: `⌥`-`⌘`-`J`
Also available through the wrench menu (Tools > JavaScript Console):

|
Try adding this to your source:
```
debugger;
```
It works in most, if not all browsers. Just place it somewhere in your code, and it will act like a breakpoint.
|
How do you launch the JavaScript debugger in Google Chrome?
|
[
"",
"javascript",
"google-chrome",
"debugging",
"google-chrome-devtools",
""
] |
I have a servlet that is used for many different actions, used in the [Front Controller pattern](http://java.sun.com/blueprints/patterns/FrontController.html). Does anyone know if it is possible to tell if the data posted back to it is enctype="multipart/form-data"? I can't read the request parameters until I decide this, so I can't dispatch the request to the proper controller.
Any ideas?
|
Yes, the `Content-type` header in the user agent's request should include `multipart/form-data` as described in (at least) the HTML4 spec:
<http://www.w3.org/TR/html401/interact/forms.html#h-17.13.4.2>
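If you want to encapsulate the check in your front controller, a small helper along these lines can work (the class and method names are just a sketch; note the header may be null and usually carries a boundary parameter, so test the prefix only):

```java
public class MultipartCheck {
    // The Content-Type of a multipart POST looks like:
    //   multipart/form-data; boundary=----WebKitFormBoundary...
    // so test the prefix, and guard against a missing header.
    static boolean isMultipart(String contentType) {
        return contentType != null
                && contentType.toLowerCase().startsWith("multipart/form-data");
    }

    public static void main(String[] args) {
        System.out.println(isMultipart("multipart/form-data; boundary=----abc"));
        System.out.println(isMultipart("application/x-www-form-urlencoded"));
        System.out.println(isMultipart(null));
    }
}
```

In a servlet you would pass in `request.getContentType()` before deciding which controller gets the request.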
|
If you are going to try using the request.getContentType() method presented above, be aware that:
1. request.getContentType() may return null.
2. request.getContentType() may not be *equal* to "multipart/form-data", but may just start with it.
With this in mind, the check you should run is :
```
if (request.getContentType() != null && request.getContentType().toLowerCase().indexOf("multipart/form-data") > -1 ) {
// Multipart logic here
}
```
|
Can a servlet determine if the posted data is multipart/form-data?
|
[
"",
"java",
"servlets",
"multipartform-data",
"front-controller",
""
] |
I frequently have problems dealing with `DataRows` returned from `SqlDataAdapters`. When I try to fill in an object using code like this:
```
DataRow row = ds.Tables[0].Rows[0];
string value = (string)row["fooColumn"];
```
What is the best way to deal with `DBNull's` in this type of situation.
|
Nullable types are good, but only for types that are not nullable to begin with.
To make a type "nullable" append a question mark to the type, for example:
```
int? value = 5;
```
I would also recommend using the "`as`" keyword instead of casting. You can only use the "as" keyword on nullable types, so make sure you're casting things that are already nullable (like strings) or you use nullable types as mentioned above. The reasoning for this is
1. If a type is nullable, the "`as`" keyword returns `null` if a value is `DBNull`.
2. It's [ever-so-slightly faster than casting](http://www.codeproject.com/Articles/8052/Type-casting-impact-over-execution-performance-in) though [only in certain cases](https://stackoverflow.com/a/496167/392). This on its own is never a good enough reason to use `as`, but coupled with the reason above it's useful.
I'd recommend doing something like this
```
DataRow row = ds.Tables[0].Rows[0];
string value = row["fooColumn"] as string;
```
In the case above, if the column value comes back as `DBNull`, then `value` will become `null` instead of throwing an exception. Be aware that if your DB query changes the columns/types being returned, **using `as` will cause your code to silently fail** and make values simply `null` instead of throwing the appropriate exception when incorrect data is returned, so it is recommended that you have tests in place to validate your queries in other ways to ensure data integrity as your codebase evolves.
|
If you aren't using nullable types, the best thing to do is check to see if the column's value is DBNull. If it is DBNull, then set your reference to what you use for null/empty for the corresponding datatype.
```
DataRow row = ds.Tables[0].Rows[0];
string value;
if (row["fooColumn"] == DBNull.Value)
{
value = string.Empty;
}
else
{
value = Convert.ToString(row["fooColumn"]);
}
```
As Manu said, you can create a convert class with an overloaded convert method per type so you don't have to pepper your code with if/else blocks.
I will however stress that nullable types is the better route to go if you can use them. The reasoning is that with non-nullable types, you are going to have to resort to "magic numbers" to represent null. For example, if you are mapping a column to an int variable, how are you going to represent DBNull? Often you can't use 0 because 0 has a valid meaning in most programs. Often I see people map DBNull to int.MinValue, but that could potentially be problematic too. My best advice is this:
* For columns that can be null in the database, use nullable types.
* For columns that cannot be null in the database, use regular types.
Nullable types were made to solve this problem. That being said, if you are on an older version of the framework or work for someone who doesn't grok nullable types, the code example will do the trick.
|
What is the best way to deal with DBNull's
|
[
"",
"c#",
".net",
"sql-server",
""
] |
is there a way to abort threads created with QueueUserWorkItem?
Or maybe I don't need to? What happens if the main application exits? Are all thread created from it aborted automatically?
|
You don't need to abort them. When your application exits, .NET will kill any threads with IsBackground = true. The .NET threadpool has all its threads set to IsBackground = true, so you don't have to worry about it.
Now if you're creating threads by newing up the Thread class, then you'll either need to abort them or set their IsBackground property to true.
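As an aside, the `IsBackground` idea has direct analogues in other runtimes; for instance, Python's daemon threads behave the same way. A quick illustrative sketch (Python, not C#):

```python
import threading
import time

def worker():
    # Simulated long-running background work
    while True:
        time.sleep(0.1)

# daemon=True is the analogue of IsBackground = true:
# the process exits without waiting for this thread to finish.
t = threading.Thread(target=worker, daemon=True)
t.start()
print(t.daemon)  # True
```

Here the interpreter exits cleanly even though `worker` never returns, exactly as .NET kills background threadpool threads on application exit.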
|
> However, if you are using unmanaged
> resources in those threads, you may
> end up in a lot of trouble.
That would rather depend on how you were using them - if these unmanaged resources were properly wrapped then they'd be dealt with by their wrapper finalization regardless of the mechanism used to kill threads which had referenced them. *And unmanaged resources are freed up by the OS when an app exits anyway.*
There is a general feeling that (Windows) applications spend much too much time trying to clean up on app shutdown - often involving paging-in huge amounts of memory just so that it can be discarded again (or paging-in code which runs around freeing unmanaged objects which the OS would deal with anyway).
|
How to abort threads created with ThreadPool.QueueUserWorkItem
|
[
"",
"c#",
"multithreading",
""
] |
I have 3 PDF documents that are generated on the fly by a legacy library that we use, and written to disk. What's the easiest way for my Java server code to grab these 3 documents and turn them into one long PDF document that is just all the pages from document #1, followed by all the pages from document #2, etc.?
Ideally I would like this to happen in memory so I can return it as a stream to the client, but writing it to disk is also an option.
|
@J D OConal, thanks for the tip, the article you sent me was very outdated, but it did point me towards iText. I found this page that explains how to do exactly what I need:
<http://java-x.blogspot.com/2006/11/merge-pdf-files-with-itext.html>
Thanks for the other answers, but I don't really want to have to spawn other processes if I can avoid it, and our project already has itext.jar, so I'm not adding any external dependencies
Here's the code I ended up writing:
```
public class PdfMergeHelper {
/**
* Merges the passed in PDFs, in the order that they are listed in the java.util.List.
* Writes the resulting PDF out to the OutputStream provided.
*
* Sample Usage:
* List<InputStream> pdfs = new ArrayList<InputStream>();
* pdfs.add(new FileInputStream("/location/of/pdf/OQS_FRSv1.5.pdf"));
* pdfs.add(new FileInputStream("/location/of/pdf/PPFP-Contract_Genericv0.5.pdf"));
* pdfs.add(new FileInputStream("/location/of/pdf/PPFP-Quotev0.6.pdf"));
* FileOutputStream output = new FileOutputStream("/location/to/write/to/merge.pdf");
* PdfMergeHelper.concatPDFs(pdfs, output, true);
*
* @param streamOfPDFFiles the list of files to merge, in the order that they should be merged
* @param outputStream the output stream to write the merged PDF to
* @param paginate true if you want page numbers to appear at the bottom of each page, false otherwise
*/
public static void concatPDFs(List<InputStream> streamOfPDFFiles, OutputStream outputStream, boolean paginate) {
Document document = new Document();
try {
List<InputStream> pdfs = streamOfPDFFiles;
List<PdfReader> readers = new ArrayList<PdfReader>();
int totalPages = 0;
Iterator<InputStream> iteratorPDFs = pdfs.iterator();
// Create Readers for the pdfs.
while (iteratorPDFs.hasNext()) {
InputStream pdf = iteratorPDFs.next();
PdfReader pdfReader = new PdfReader(pdf);
readers.add(pdfReader);
totalPages += pdfReader.getNumberOfPages();
}
// Create a writer for the outputstream
PdfWriter writer = PdfWriter.getInstance(document, outputStream);
document.open();
BaseFont bf = BaseFont.createFont(BaseFont.HELVETICA, BaseFont.CP1252, BaseFont.NOT_EMBEDDED);
PdfContentByte cb = writer.getDirectContent(); // Holds the PDF
// data
PdfImportedPage page;
int currentPageNumber = 0;
int pageOfCurrentReaderPDF = 0;
Iterator<PdfReader> iteratorPDFReader = readers.iterator();
// Loop through the PDF files and add to the output.
while (iteratorPDFReader.hasNext()) {
PdfReader pdfReader = iteratorPDFReader.next();
// Create a new page in the target for each source page.
while (pageOfCurrentReaderPDF < pdfReader.getNumberOfPages()) {
document.newPage();
pageOfCurrentReaderPDF++;
currentPageNumber++;
page = writer.getImportedPage(pdfReader, pageOfCurrentReaderPDF);
cb.addTemplate(page, 0, 0);
// Code for pagination.
if (paginate) {
cb.beginText();
cb.setFontAndSize(bf, 9);
cb.showTextAligned(PdfContentByte.ALIGN_CENTER, "" + currentPageNumber + " of " + totalPages,
520, 5, 0);
cb.endText();
}
}
pageOfCurrentReaderPDF = 0;
}
outputStream.flush();
document.close();
outputStream.close();
} catch (Exception e) {
e.printStackTrace();
} finally {
if (document.isOpen()) {
document.close();
}
try {
if (outputStream != null) {
outputStream.close();
}
} catch (IOException ioe) {
ioe.printStackTrace();
}
}
}
}
```
|
I've used [pdftk](http://www.accesspdf.com/pdftk/) to great effect. It's an external application that you'll have to run from your java app.
|
What's the easiest way to merge (server-side) a collection of PDF documents into one big PDF document in JAVA
|
[
"",
"java",
"pdf",
"pdf-generation",
""
] |
I'm trying to read a .doc file into a database so that I can index its contents. Is there an easy way for PHP on Linux to read .doc files? Failing that, is it possible to convert .doc files to RTF, PDF or some other 'open' format that is easy to read?
Note, I am not interested in .docx files.
|
There seems to be a [library for accessing Word documents](http://packages.ubuntu.com/hardy/libwv-1.2-3) but not sure how to access it from PHP. I think the best solution would be to call their [wv command](http://packages.ubuntu.com/hardy/wv) from PHP.
|
Conor, I'd suggest looking at the OpenOffice command-line interface / calling macros. It can convert many file formats to many others. Then you can pick something much more parseable than MS doc.
For instance, to convert to PDF, a command line is:
```
/usr/lib/ooo-2.0/program/soffice.bin -norestore -nofirststart -nologo -headless -invisible "macro:///Standard.Module1.SaveAsPDF(demo.doc)"
```
|
Getting PHP to read .doc files on Linux
|
[
"",
"php",
"database",
""
] |
Conventional wisdom states that stored procedures are always faster. So, since they're always faster, use them **ALL THE TIME**.
I am pretty sure this is grounded in some historical context where this was once the case. Now, I'm not advocating that Stored Procs are not needed, but I want to know in what cases stored procedures are necessary in modern databases such as MySQL, SQL Server, Oracle, or <*Insert\_your\_DB\_here*>. Is it overkill to have ALL access through stored procedures?
|
> **NOTE** that this is a general look at stored procedures not regulated to a specific
> DBMS. Some DBMS (and even, different
> versions of the same DBMS!) may operate
> contrary to this, so you'll want to
> double-check with your target DBMS
> before assuming all of this still holds.
>
> I've been a Sybase ASE, MySQL, and SQL Server DBA on and off for almost a decade (along with application development in C, PHP, PL/SQL, C#.NET, and Ruby). So, I have no particular axe to grind in this (sometimes) holy war.
The historical performance benefit of stored procs have generally been from the following (in no particular order):
* Pre-parsed SQL
* Pre-generated query execution plan
* Reduced network latency
* Potential cache benefits
**Pre-parsed SQL** -- similar benefits to compiled vs. interpreted code, except on a very micro level.
*Still an advantage?*
Not very noticeable at all on the modern CPU, but if you are sending a single SQL statement that is VERY large eleventy-billion times a second, the parsing overhead can add up.
**Pre-generated query execution plan**.
If you have many JOINs the permutations can grow quite unmanageable (modern optimizers have limits and cut-offs for performance reasons). It is not unknown for very complicated SQL to have distinct, measurable (I've seen a complicated query take 10+ seconds just to generate a plan, before we tweaked the DBMS) latencies due to the optimizer trying to figure out the "near best" execution plan. Stored procedures will, generally, store this in memory so you can avoid this overhead.
*Still an advantage?*
Most DBMS' (the latest editions) will cache the query plans for INDIVIDUAL SQL statements, greatly reducing the performance differential between stored procs and ad hoc SQL. There are some caveats and cases in which this isn't the case, so you'll need to test on your target DBMS.
Also, more and more DBMS allow you to provide optimizer path plans (abstract query plans) to significantly reduce optimization time (for both ad hoc and stored procedure SQL!!).
> **WARNING** Cached query plans are not a performance panacea. Occasionally the query plan that is generated is sub-optimal.
> For example, if you send `SELECT *
> FROM table WHERE id BETWEEN 1 AND
> 99999999`, the DBMS may select a
> full-table scan instead of an index
> scan because you're grabbing every row
> in the table (so sayeth the
> statistics). If this is the cached
> version, then you can get poor
> performance when you later send
> `SELECT * FROM table WHERE id BETWEEN
> 1 AND 2`. The reasoning behind this is
> outside the scope of this posting, but
> for further reading see:
> <http://www.microsoft.com/technet/prodtechnol/sql/2005/frcqupln.mspx>
> and
> <http://msdn.microsoft.com/en-us/library/ms181055.aspx>
> and <http://www.simple-talk.com/sql/performance/execution-plan-basics/>
>
> "In summary, they determined that
> supplying anything other than the
> common values when a compile or
> recompile was performed resulted in
> the optimizer compiling and caching
> the query plan for that particular
> value. Yet, when that query plan was
> reused for subsequent executions of
> the same query for the common values
> (‘M’, ‘R’, or ‘T’), it resulted in
> sub-optimal performance. This
> sub-optimal performance problem
> existed until the query was
> recompiled. At that point, based on
> the @P1 parameter value supplied, the
> query might or might not have a
> performance problem."
**Reduced network latency**
A) If you are running the same SQL over and over -- and the SQL adds up to many KB of code -- replacing that with a simple "exec foobar" can really add up.
B) Stored procs can be used to move procedural code into the DBMS. This saves shuffling large amounts of data off to the client only to have it send a trickle of info back (or none at all!). Analogous to doing a JOIN in the DBMS vs. in your code (everyone's favorite WTF!)
*Still an advantage?*
A) Modern 1Gb (and 10Gb and up!) Ethernet really make this negligible.
B) Depends on how saturated your network is -- why shove several megabytes of data back and forth for no good reason?
**Potential cache benefits**
Performing server-side transforms of data can potentially be faster if you have sufficient memory on the DBMS and the data you need is in memory of the server.
*Still an advantage?*
Unless your app has shared memory access to DBMS data, the edge will always be to stored procs.
Of course, no discussion of Stored Procedure optimization would be complete without a discussion of parameterized and ad hoc SQL.
**Parameterized / Prepared SQL**
Kind of a cross between stored procedures and ad hoc SQL, they are embedded SQL statements in a host language that uses "parameters" for query values, e.g.:
```
SELECT .. FROM yourtable WHERE foo = ? AND bar = ?
```
These provide a more generalized version of a query that modern-day optimizers can use to cache (and re-use) the query execution plan, resulting in much of the performance benefit of stored procedures.
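The parameterized form can be illustrated with any database API; here is a minimal sketch using Python's built-in sqlite3 module (the table name and values are made up for the example):

```python
import sqlite3

# Parameterized query: the '?' placeholders generalize the statement,
# so the engine can parse/plan it once and reuse the plan for any values.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE yourtable (foo TEXT, bar INTEGER)")
conn.execute("INSERT INTO yourtable VALUES (?, ?)", ("hello", 1))

row = conn.execute(
    "SELECT foo FROM yourtable WHERE foo = ? AND bar = ?",
    ("hello", 1),
).fetchone()
print(row[0])  # hello
```

Binding values through placeholders (rather than concatenating literals into the SQL string) also protects against SQL injection, which is a second reason this form is preferred regardless of plan caching.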
**Ad Hoc SQL**
Just open a console window to your DBMS and type in a SQL statement. In the past, these were the "worst" performers (on average) since the DBMS had no way of pre-optimizing the queries as in the parameterized/stored proc method.
*Still a disadvantage?*
Not necessarily. Most DBMS have the ability to "abstract" ad hoc SQL into parameterized versions -- thus more or less negating the difference between the two. Some do this implicitly or must be enabled with a command setting (SQL server: <http://msdn.microsoft.com/en-us/library/ms175037.aspx> , Oracle: <http://www.praetoriate.com/oracle_tips_cursor_sharing.htm>).
**Lessons learned?**
Moore's law continues to march on and DBMS optimizers, with every release, get more sophisticated. Sure, you can place every single silly teeny SQL statement inside a stored proc, but just know that the programmers working on optimizers are very smart and are continually looking for ways to improve performance. Eventually (if it's not here already) ad hoc SQL performance will become indistinguishable (on average!) from stored procedure performance, so any sort of *massive* stored procedure use **solely for "performance reasons"** sure sounds like premature optimization to me.
Anyway, I think if you avoid the edge cases and have fairly vanilla SQL, you won't notice a difference between ad hoc and stored procedures.
|
Reasons for using stored procedures:
* **Reduce network traffic** -- you have to send the SQL statement across the network. With sprocs, you can execute SQL in batches, which is also more efficient.
* **Caching query plan** -- the first time the sproc is executed, SQL Server creates an execution plan, which is cached for reuse. This is particularly performant for small queries run frequently.
* **Ability to use output parameters** -- if you send inline SQL that returns one row, you can only get back a recordset. With sprocs you can get them back as output parameters, which is considerably faster.
* **Permissions** -- when you send inline SQL, you have to grant permissions on the table(s) to the user, which is granting much more access than merely granting permission to execute a sproc
* **Separation of logic** -- remove the SQL-generating code and segregate it in the database.
* **Ability to edit without recompiling** -- this can be controversial. You can edit the SQL in a sproc without having to recompile the application.
* **Find where a table is used** -- with sprocs, if you want to find all SQL statements referencing a particular table, you can export the sproc code and search it. This is much easier than trying to find it in code.
* **Optimization** -- It's easier for a DBA to optimize the SQL and tune the database when sprocs are used. It's easier to find missing indexes and such.
* **SQL injection attacks** -- properly written inline SQL can defend against attacks, but sprocs are better for this protection.
|
Are Stored Procedures more efficient, in general, than inline statements on modern RDBMS's?
|
[
"",
"sql",
"database",
"stored-procedures",
""
] |
I'm working with a large (270+ project) VS.Net solution. Yes, I know this is pushing the friendship with VS but it's inherited and blah blah. Anyway, to speed up the solution load and compile time I've removed all projects that I'm not currently working on... which in turn has removed those project references from the projects I want to retain. So now I'm going through a mind-numbing process of adding binary references to the retained projects so that the referenced Types can be found.
Here's how I'm working at present:
* Attempt to compile; get thousands of 'type or namespace missing' errors
* Copy the first line of the error list to the clipboard
* Using a Perl script hooked up to a hotkey (AHK), I extract the type name from the error message and store it in the Windows clipboard
* I paste the type name into the Source Insight symbol browser and note the assembly containing the Type
* I go back to VS and add that assembly as a binary reference to the relevant project
So now, after about 30 mins I'm thinking there's just got to be a quicker way...
|
No, there currently isn't a built-in quicker way.
I would suggest not modifying the existing solution and create a new solution with new projects that duplicate (e.g. rename and edit) the projects you want to work on. If you find that the solution with the hundreds of projects is an issue for you then you'll likely just need to work on a subset. Start with a couple of new projects, add the binary (not project) reference and go from there.
|
These solutions come to my mind:
* You can try to use [Dependency Walker](http://www.dependencywalker.com) or a similar program to analyze dependencies.
* Parse MSBuild files (\*.csproj) to get the list of dependencies
**EDIT:**
Just found 2 cool tools [Dependency Visualizer](http://www.codeplex.com/dependencyvisualizer) & [Dependency Finder](http://www.codeplex.com/DependencyFinder) on codeplex I think they can help you greatly.
**EDIT:**
@edg, I totally misread your question. Since you lose references from the csproj files, you have to use a static analysis tool like NDepend or try to analyze dependencies at run time.
|
Locate and add project reference for .Net assembly containing a given type?
|
[
"",
"c#",
".net",
"visual-studio",
""
] |
Does anyone know of a way to force a nolock hint on all transactions issued by a certain user? I'd like to provide a login for a support team to query the production system, but I want to protect it by forcing a nolock on everything they do. I'm using SQL Server 2005.
|
This is a painful and hacky way to do it, but it's what we're doing where I work. We're also using classic ASP, so we're using inline SQL calls. We actually wrap the SQL call in a function (here you can check for a specific user) and add "SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED" to the beginning of the call.
I believe this is functionally the same as the NOLOCK hint. Sorry I don't have a pure SQL answer; I'd be interested to hear if you find a good way to do this.
|
You could configure your support staff's SQL Management Studio to set the default transaction isolation level to READ UNCOMMITTED (Tools->Options->Query Execution->SQL Server->Advanced). This is functionally the same as having NOLOCK hints on everything.
The downsides are that you'd have to do this for each member of your support team, and they'd have the ability to change the configuration on their SQL Management Studio.
|
How to force nolock hint for sql server logins
|
[
"",
"sql",
"sql-server",
"security",
"sql-server-2005",
"query-optimization",
""
] |
I'm reading lines of text that can come in any order. The problem is that the output can actually be identical to the previous output. How can I detect this, without sorting the output first?
Is there some kind of hash function that can take identical input, but in any order, and still produce the same result?
|
The easiest way would seem to be to hash each line on the way in, storing the hash and the original data, and then compare each new hash with your collection of existing hashes. If you get a positive, you can compare the actual data to make sure it's not a false positive. Since collisions are extremely rare, you could go with a quicker hash algorithm like MD5 or CRC (instead of something like SHA, which is slower but less likely to collide) just so it's quick, and then compare the actual data when you get a hit.
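On the question's specific ask — a hash that ignores input order — one common trick (not part of the answer above) is to combine per-line digests with a commutative operation such as addition or XOR. A rough Python sketch:

```python
import hashlib

def order_independent_hash(lines):
    # Sum of per-line digests: addition is commutative, so any
    # permutation of the same lines produces the same value.
    total = 0
    for line in lines:
        digest = hashlib.sha256(line.encode("utf-8")).digest()
        total = (total + int.from_bytes(digest[:8], "big")) % (1 << 64)
    return total

print(order_independent_hash(["A", "B", "C"]) ==
      order_independent_hash(["C", "B", "A"]))  # True
```

Note that combining hashes this way is weaker against deliberate collisions than hashing a sorted copy of the input, so treat it as a change detector rather than a security mechanism.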
|
Well the problem specification is a bit limited.
As I understand it you wish to see if several strings contain the same elements regardless of order.
For example:
```
A B C
C B A
```
are the same.
The way to do this is to create a set of the values then compare the sets. To create a set do:
```
Set<String> set = new HashSet<String>();
for (String item : items) {
    set.add(item);
}
```
Then just compare the sets for equality (e.g. `set1.equals(set2)`). The execution time will be `O(N)` instead of the `O(N log N)` of the sorting approach.
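One caveat on the approach above: a plain set drops duplicate lines, so if duplicates matter, a multiset comparison is safer. A quick illustrative sketch in Python:

```python
from collections import Counter

def same_contents(a, b):
    # Multiset equality: order-insensitive and duplicate-aware,
    # still O(N) like the set-based version.
    return Counter(a) == Counter(b)

print(same_contents(["A", "B", "C"], ["C", "B", "A"]))  # True
print(same_contents(["A", "A"], ["A"]))                 # False
```

In Java the equivalent would be building a `Map<String, Integer>` of line counts and comparing the two maps.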
|
Detect changes in random ordered input (hash function?)
|
[
"",
"java",
"multithreading",
"hash",
""
] |
Are there any open-source libraries that all programmers should know about? I'm thinking
something general, a sort of extension to the standard `java.util` that contains basic functions that are useful for all kinds of application.
|
* Apache Commons
* Log4j
* Google collections
|
The [Spring framework](http://www.springframework.org/) is surprisingly general purpose. I started by just using it as a configuration management tool, but then realized how helpful dependency injection is when doing test-driven development. Then I slowly discovered many useful modules hidden in the corners of Spring.
|
What is the most useful multi-purpose open-source library for java?
|
[
"",
"java",
"open-source",
""
] |
In PHP, I often need to map a variable using an array ... but I cannot seem to do this in a one-liner, cf. this example:
```
// the following results in an error:
echo array('a','b','c')[$key];
// this works, using an unnecessary variable:
$variable = array('a','b','c');
echo $variable[$key];
```
This is a minor problem, but it keeps bugging me every once in a while ... I don't like the fact that I use a variable for nothing ;)
|
I wouldn't bother about that extra variable, really. If you want, though, you could also remove it from memory after you've used it:
```
$variable = array('a','b','c');
echo $variable[$key];
unset($variable);
```
Or, you could write a small function:
```
function indexonce(&$ar, $index) {
return $ar[$index];
}
```
and call this with:
```
$something = indexonce(array('a', 'b', 'c'), 2);
```
The array should be destroyed automatically now.
|
The technical answer is that the ***Grammar*** of the PHP language only allows subscript notation on the end of **variable expressions** and not **expressions** in general, which is how it works in most other languages. I've always viewed it as a deficiency in the language, because it is possible to have a grammar that resolves subscripts against any expression unambiguously. It could be the case, however, that they're using an inflexible parser generator or they simply don't want to break some sort of backwards compatibility.
Here are a couple more examples of invalid subscripts on valid expressions:
```
$x = array(1,2,3);
print ($x)[1]; //illegal, on a parenthetical expression, not a variable exp.
function ret($foo) { return $foo; }
echo ret($x)[1]; // illegal, on a call expression, not a variable exp.
```
|
PHP: Access Array Value on the Fly
|
[
"",
"php",
"arrays",
"coding-style",
""
] |
What is the proper way to send 1000 or more emails in PHP? Is there any reliable email queuing technique capable of handling that?
|
You could just insert your emails into a Mail Queue database table, and have a separate process check the queue and batch send a certain number at once.
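The "queue table plus batch worker" idea can be sketched in a few lines; here is an illustrative version in Python with SQLite (table and column names are made up for the example — in PHP you would do the same with your MySQL layer):

```python
import sqlite3

# Queue table: one row per pending message.
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE mail_queue ("
    "  id INTEGER PRIMARY KEY,"
    "  recipient TEXT,"
    "  sent INTEGER DEFAULT 0)"
)
db.executemany(
    "INSERT INTO mail_queue (recipient) VALUES (?)",
    [("a@example.com",), ("b@example.com",), ("c@example.com",)],
)

# A separate worker process runs this on a schedule:
batch = db.execute(
    "SELECT id, recipient FROM mail_queue WHERE sent = 0 LIMIT 2"
).fetchall()
for mail_id, recipient in batch:
    # ...actually send the message here...
    db.execute("UPDATE mail_queue SET sent = 1 WHERE id = ?", (mail_id,))
print(len(batch))  # 2
```

Capping the batch size (`LIMIT 2` here, something like 50–100 in practice) keeps each worker run short and avoids tripping mail-server rate limits.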
|
There's a tested solution for that: PEAR [Mail\_Queue](http://pear.php.net/package/Mail_Queue)
Works fine for me.
|
Email queueing in php
|
[
"",
"php",
"email",
"queue",
""
] |
Is there available any tool for PHP which can be used to generate code for consuming a [web service](http://en.wikipedia.org/wiki/Web_service) based on its [WSDL](http://en.wikipedia.org/wiki/Web_Services_Description_Language)? Something comparable to clicking "Add Web Reference" in Visual Studio or the Eclipse plugin which does the same thing for Java.
|
I've had great success with [wsdl2php](https://github.com/jbarciauskas/wsdl2php). It will automatically create wrapper classes for all objects and methods used in your web service.
|
In PHP 5 you can use [SoapClient](http://www.php.net/manual/en/ref.soap.php) on the WSDL to call the web service functions. [For example](http://www.php.net/manual/en/soapclient.construct.php):
```
$client = new SoapClient("some.wsdl");
```
and $client is now an object which has class methods as defined in some.wsdl. So if there was a method called getTime in the WSDL then you would just call:
```
$result = $client->getTime();
```
And the result of that would (obviously) be in the $result variable. You can use the \_\_getFunctions method to return a list of all the available methods.
|
How to easily consume a web service from PHP
|
[
"",
"php",
"web-services",
"visual-studio",
"wsdl",
""
] |
Typical way of creating a [CSV](http://en.wikipedia.org/wiki/Comma-separated_values) string (pseudocode):
1. Create a CSV container object (like a StringBuilder in C#).
2. Loop through the strings you want to add appending a comma after each one.
3. After the loop, remove that last superfluous comma.
Code sample:
```
public string ReturnAsCSV(ContactList contactList)
{
StringBuilder sb = new StringBuilder();
foreach (Contact c in contactList)
{
sb.Append(c.Name + ",");
}
sb.Remove(sb.Length - 1, 1);
//sb.Replace(",", "", sb.Length - 1, 1)
return sb.ToString();
}
```
I like the idea of adding the comma by checking if the container is empty, but doesn't that mean more processing as it needs to check the length of the string on each occurrence?
I feel that there should be an easier/cleaner/more efficient way of removing that last comma. Any ideas?
|
You could use [LINQ to Objects](http://en.wikipedia.org/wiki/Language_Integrated_Query#LINQ_to_Objects):
```
string [] strings = contactList.Select(c => c.Name).ToArray();
string csv = string.Join(",", strings);
```
Obviously that could all be done in one line, but it's a bit clearer on two.
|
Your code is not really compliant with the [full CSV format](http://www.creativyst.com/Doc/Articles/CSV/CSV01.htm). If you are just generating CSV from data that has no commas, leading/trailing spaces, tabs, newlines or quotes, it should be fine. However, in most real-world data-exchange scenarios, you do need the full implementation.
To generate proper CSV, you can use this:
```
public static String EncodeCsvLine(params String[] fields)
{
StringBuilder line = new StringBuilder();
for (int i = 0; i < fields.Length; i++)
{
if (i > 0)
{
line.Append(DelimiterChar);
}
String csvField = EncodeCsvField(fields[i]);
line.Append(csvField);
}
return line.ToString();
}
static String EncodeCsvField(String field)
{
StringBuilder sb = new StringBuilder();
sb.Append(field);
// Some fields with special characters must be embedded in double quotes
bool embedInQuotes = false;
    // Embed in quotes to preserve leading/trailing whitespace
if (sb.Length > 0 &&
(sb[0] == ' ' ||
sb[0] == '\t' ||
sb[sb.Length-1] == ' ' ||
sb[sb.Length-1] == '\t' ))
{
embedInQuotes = true;
}
for (int i = 0; i < sb.Length; i++)
{
// Embed in quotes to preserve: commas, line-breaks etc.
if (sb[i] == DelimiterChar ||
sb[i]=='\r' ||
sb[i]=='\n' ||
sb[i] == '"')
{
embedInQuotes = true;
break;
}
}
// If the field itself has quotes, they must each be represented
// by a pair of consecutive quotes.
sb.Replace("\"", "\"\"");
String rv = sb.ToString();
if (embedInQuotes)
{
rv = "\"" + rv + "\"";
}
return rv;
}
```
Might not be world's most efficient code, but it has been tested. Real world sucks compared to quick sample code :)
|
CSV string handling
|
[
"",
"c#",
"csv",
""
] |
I have a table with a 'filename' column.
I recently performed an insert into this column but in my haste forgot to append the file extension to all the filenames entered. Fortunately they are all '.jpg' images.
How can I easily update the 'filename' column of these inserted fields (assuming I can select the recent rows based on known id values) to include the '.jpg' extension?
|
The solution is:
```
UPDATE tablename SET [filename] = RTRIM([filename]) + '.jpg' WHERE id > 50
```
RTRIM is required because otherwise the [filename] column in its entirety will be selected for the string concatenation i.e. if it is a varchar(20) column and filename is only 10 letters long then it will still select those 10 letters and then 10 spaces. This will in turn result in an error as you try to fit 20 + 3 characters into a 20 character long field.
|
MattMitchell's answer is correct if the column is a CHAR(20), but is not true if it was a VARCHAR(20) and the spaces hadn't been explicitly entered.
If you do try it on a CHAR field without the RTRIM function you will get a *"String or binary data would be truncated"* error.
|
What is the easiest way using T-SQL / MS-SQL to append a string to existing table cells?
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I started an application in Google App Engine right when it came out, to play with the technology and work on a pet project that I had been thinking about for a long time but never gotten around to starting. The result is [BowlSK](http://www.bowlsk.com). However, as it has grown, and features have been added, it has gotten really difficult to keep things organized - mainly due to the fact that this is my first python project, and I didn't know anything about it until I started working.
What I have:
* Main Level contains:
+ all .py files (didn't know how to make packages work)
+ all .html templates for main level pages
* Subdirectories:
+ separate folders for css, images, js, etc.
+ folders that hold .html templates for subdirectory-type URLs
Example:
<http://www.bowlsk.com/> maps to HomePage (default package), template at "index.html"
<http://www.bowlsk.com/games/view-series.html?series=7130> maps to ViewSeriesPage (again, default package), template at "games/view-series.html"
It's nasty. How do I restructure? I had 2 ideas:
* Main Folder containing: appdef, indexes, main.py?
+ Subfolder for code. Does this have to be my first package?
+ Subfolder for templates. Folder hierarchy would match package hierarchy
+ Individual subfolders for css, images, js, etc.
* Main Folder containing appdef, indexes, main.py?
+ Subfolder for code + templates. This way I have the handler class right next to the template, because in this stage, I'm adding lots of features, so modifications to one mean modifications to the other. Again, do I have to have this folder name be the first package name for my classes? I'd like the folder to be "src", but I don't want my classes to be "src.WhateverPage"
Is there a best practice? With Django 1.0 on the horizon, is there something I can do now to improve my ability to integrate with it when it becomes the official GAE templating engine? I would simply start trying these things, and seeing which seems better, but PyDev's refactoring support doesn't seem to handle package moves very well, so it will likely be a non-trivial task to get all of this working again.
|
First, I would suggest you have a look at "[Rapid Development with Python, Django, and Google App Engine](http://sites.google.com/site/io/rapid-development-with-python-django-and-google-app-engine)"
GvR describes a general/standard project layout on page 10 of his [slide presentation](http://sites.google.com/site/io/rapid-development-with-python-django-and-google-app-engine/rapid_development_with_django_gae.pdf?attredirects=0).
Here I'll post a slightly modified version of the layout/structure from that page. I pretty much follow this pattern myself. You also mentioned you had trouble with packages. Just make sure each of your subfolders has an \_\_init\_\_.py file. It's OK if it's empty.
## Boilerplate files
* These hardly vary between projects
* app.yaml: direct all non-static requests to main.py
* main.py: initialize app and send it all requests
## Project lay-out
* static/\*: static files; served directly by App Engine
* myapp/\*.py: app-specific python code
+ views.py, models.py, tests.py, \_\_init\_\_.py, and more
* templates/\*.html: templates (or myapp/templates/\*.html)
Here are some code examples that may help as well:
## main.py
```
import wsgiref.handlers
from google.appengine.ext import webapp
from myapp.views import *
application = webapp.WSGIApplication([
('/', IndexHandler),
('/foo', FooHandler)
], debug=True)
def main():
    wsgiref.handlers.CGIHandler().run(application)

if __name__ == '__main__':
    main()
```
## myapp/views.py
```
import os
import datetime
import logging
import time

from google.appengine.api import urlfetch
from google.appengine.ext.webapp import template
from google.appengine.api import users
from google.appengine.ext import webapp

from models import *

class IndexHandler(webapp.RequestHandler):
    def get(self):
        data = "foo"  # was "date" in the original, which left "data" below undefined
        # Do some processing
        template_values = {'data': data}
        path = os.path.join(os.path.dirname(__file__) + '/../templates/', 'main.html')
        self.response.out.write(template.render(path, template_values))

class FooHandler(webapp.RequestHandler):
    def get(self):
        #logging.debug("start of handler")
        pass
```
## myapp/models.py
```
from google.appengine.ext import db

class SampleModel(db.Model):
    # illustrative field; the original snippet left the model body empty
    name = db.StringProperty()
```
I think this layout works great for new and relatively small to medium projects. For larger projects I would suggest breaking up the views and models to have their own sub-folders with something like:
## Project lay-out
* static/: static files; served directly by App Engine
+ js/\*.js
+ images/\*.gif|png|jpg
+ css/\*.css
* myapp/: app structure
+ models/\*.py
+ views/\*.py
+ tests/\*.py
+ templates/\*.html: templates
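Since every code sub-folder must be a Python package, a small stdlib sketch can scaffold this layout and drop the empty \_\_init\_\_.py marker files in automatically. The folder lists and the `scaffold` helper are illustrative names mirroring the listing above, not part of any framework.

```python
import os

# Packages need an __init__.py so imports like "from myapp.models import x"
# resolve; plain folders (static assets, templates) do not.
PACKAGES = ["myapp", "myapp/models", "myapp/views", "myapp/tests"]
PLAIN_DIRS = ["static/js", "static/images", "static/css", "myapp/templates"]

def scaffold(root="."):
    for pkg in PACKAGES:
        path = os.path.join(root, *pkg.split("/"))
        os.makedirs(path, exist_ok=True)
        marker = os.path.join(path, "__init__.py")
        if not os.path.exists(marker):
            open(marker, "w").close()  # an empty __init__.py is enough
    for d in PLAIN_DIRS:
        os.makedirs(os.path.join(root, *d.split("/")), exist_ok=True)
```

Running `scaffold()` in an empty project directory creates the whole tree in one go.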
|
My usual layout looks something like this:
* app.yaml
* index.yaml
* request.py - contains the basic WSGI app
* lib
+ `__init__.py` - common functionality, including a request handler base class
* controllers - contains all the handlers. request.py imports these.
* templates
+ all the django templates, used by the controllers
* model
+ all the datastore model classes
* static
+ static files (css, images, etc). Mapped to /static by app.yaml
I can provide examples of what my app.yaml, request.py, `lib/__init__.py`, and sample controllers look like, if this isn't clear.
|
Project structure for Google App Engine
|
[
"",
"python",
"google-app-engine",
""
] |
I am trying to achieve better performance for my Java SWT application, and I just found out it is possible to use OpenGL in SWT. It seems there are more than one Java binding for OpenGL. Which one do you prefer?
Note that I have never used OpenGL before, and that the application needs to work on Windows, Linux and Mac OS X.
|
[JOGL](http://www.opengl.org/resources/bindings/ "opengl.org")
My reasons can be quoted off the previously linked site:
> JOGL provides full access to the APIs in the OpenGL 2.0 specification as well as nearly all vendor extensions, and integrates with the AWT and Swing widget sets.
Also if you want to have some fun learning and poking around, [Processing](http://processing.org/ "Processing") is an excellent way to start (Processing also uses JOGL btw...)
|
I'd suggest checking out [LWJGL](http://lwjgl.org/), the LightWeight Java Game Library. It's got OpenGL bindings, but it also has OpenAL bindings and some great tutorials to get you started.
Just keep in mind that Swing/SWT and OpenGL are generally used for entirely different things. You may end up wanting to use a combination of both. Just try LWJGL out and see how well it fits with what you're doing.
|
What is the best OpenGL java binding?
|
[
"",
"java",
"user-interface",
"opengl",
"performance",
"swt",
""
] |
I would like to write a small program in C# which goes through my jpeg photos and, for example, sorts them into dated folders (using MY dating conventions, dammit...).
Does anyone know a relatively easy way to get at the EXIF data such as Date And Time or Exposure programatically?
Thanks!
|
Check out this [metadata extractor](https://www.drewnoakes.com/code/exif/). It is written in Java but has also been ported to C#. I have used the Java version to write a small utility to rename my jpeg files based on the date and model tags. Very easy to use.
---
**EDIT** *metadata-extractor* supports .NET too. It's a very fast and simple library for accessing metadata from images and videos.
It fully supports Exif, as well as IPTC, XMP and many other types of metadata, from file types including JPEG, PNG, GIF, ICO, WebP, PSD, ...
```
var directories = ImageMetadataReader.ReadMetadata(imagePath);

// print out all metadata
foreach (var directory in directories)
    foreach (var tag in directory.Tags)
        Console.WriteLine($"{directory.Name} - {tag.Name} = {tag.Description}");

// access the date time
var subIfdDirectory = directories.OfType<ExifSubIfdDirectory>().FirstOrDefault();
var dateTime = subIfdDirectory?.GetDateTime(ExifDirectoryBase.TagDateTime);
```
It's available via [NuGet](https://www.nuget.org/packages/MetadataExtractor/) and the [code's on GitHub](https://github.com/drewnoakes/metadata-extractor-dotnet).
|
As suggested, you can use some 3rd-party library, or do it manually (which is not that much work), but the simplest and most flexible option is perhaps to use the built-in functionality in .NET. For more, see:
* [System.Drawing.Image.PropertyItems](http://msdn.microsoft.com/en-us/library/system.drawing.image.propertyitems.aspx) Property
* [System.Drawing.Imaging.PropertyItem](http://msdn.microsoft.com/en-us/library/system.drawing.imaging.propertyitem.aspx) Class
* [How to: Read Image Metadata](http://msdn.microsoft.com/en-us/library/xddt0dz7.aspx)
I say "it’s the most flexible" because .NET does not try to interpret or coalesce the data in any way. For each EXIF you basically get an array of bytes. This may be good or bad depending on how much control you actually want.
Also, I should point out that the property list does not in fact directly correspond to the EXIF values. EXIF itself is stored in multiple tables with overlapping IDs, but .NET puts everything in one list and redefines the IDs of some items. But as long as you don't care about the precise EXIF IDs, you should be fine with the .NET mapping.
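To make the "multiple tables with overlapping IDs" point concrete: EXIF reuses the TIFF image file directory (IFD) layout, in which each 12-byte entry holds a tag ID, a data type, a count, and an inline value or offset. The sketch below (in Python rather than C#, purely for illustration; `read_datetime` is a made-up helper) builds a minimal little-endian blob holding only the DateTime tag (0x0132) and parses it back.

```python
import struct

# Build a minimal little-endian TIFF block: header, one IFD with a single
# entry for the DateTime tag (0x0132, ASCII, 20 bytes), then the string data.
DATE = b"2008:09:17 12:00:00\x00"                 # 19 chars + NUL = 20 bytes
header = struct.pack("<2sHI", b"II", 42, 8)       # "II" = little-endian, magic 42, IFD at 8
entry = struct.pack("<HHII", 0x0132, 2, len(DATE), 26)  # tag, type=ASCII, count, data offset
ifd = struct.pack("<H", 1) + entry + struct.pack("<I", 0)  # 1 entry, no next IFD
blob = header + ifd + DATE                        # string lands at 8 + 2 + 12 + 4 = 26

def read_datetime(data):
    """Walk the first IFD and return the DateTime tag's string, if present."""
    ifd_offset = struct.unpack_from("<I", data, 4)[0]
    (count,) = struct.unpack_from("<H", data, ifd_offset)
    for i in range(count):
        tag, typ, n, value = struct.unpack_from("<HHII", data, ifd_offset + 2 + 12 * i)
        if tag == 0x0132 and typ == 2:  # ASCII values longer than 4 bytes live at an offset
            return data[value:value + n].rstrip(b"\x00").decode("ascii")
    return None

print(read_datetime(blob))  # prints: 2008:09:17 12:00:00
```

Real files chain several such IFDs (IFD0, Exif sub-IFD, GPS IFD), which is exactly why the same numeric tag ID can mean different things in different tables.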
---
**Edit:** It's possible to do it without loading the full image following this answer: <https://stackoverflow.com/a/552642/2097240>
|
How to get the EXIF data from a file using C#
|
[
"",
"c#",
"exif",
"photography",
""
] |
I want to create a custom control in C#. But every time I have to fully redraw my control, it flickers, even if I use double buffering (drawing to an Image first, and blitting that).
How do I eliminate flicker when I have to fully redraw?
|
You could try putting the following in your constructor after the InitiliseComponent call.
```
SetStyle(ControlStyles.OptimizedDoubleBuffer |
         ControlStyles.UserPaint |
         ControlStyles.AllPaintingInWmPaint, true);
```
EDIT:
If you're giving this a go, remove your own double-buffering code if you can, and just have the control draw itself in response to the appropriate virtual methods being called.
|
I pulled this from a working C# program. Other posters have syntax errors and clearly copied from C++ instead of C#
```
SetStyle(ControlStyles.OptimizedDoubleBuffer |
         ControlStyles.UserPaint |
         ControlStyles.AllPaintingInWmPaint, true);
```
|
How to eliminate flicker in Windows.Forms custom control when scrolling?
|
[
"",
"c#",
".net",
"winforms",
"gdi+",
""
] |
I am working on a Custom Server Control that extends another control. There is no problem with attaching to other controls on the form.
in vb.net: `Parent.FindControl(TargetControlName)`
I would like to pass a method to the control in the ASPX markup.
for example: `<c:MyCustomerControl runat=server InitializeStuffCallback="InitializeStuff">`
So, I tried using reflection to access the given method name from the Parent.
Something like (in VB)
```
Dim pageType As Type = Page.GetType
Dim CallbackMethodInfo As MethodInfo = pageType.GetMethod( "MethodName" )
'Also tried
sender.Parent.GetType.GetMethod("MethodName")
sender.Parent.Parent.GetType.GetMethod("MethodName")
```
The method isn't found because it just isn't a part of the Page. Where should I be looking? I'm fairly sure this is possible, because I've seen other controls do similar things.
---
I forgot to mention, my work-around is to give the control events and attaching to them in the Code-behind.
|
If you want to be able to pass a method in the ASPX markup, you need to use the `Browsable` attribute in your code on the event.
VB.NET
```
<Browsable(True)> Public Event InitializeStuffCallback
```
C#
```
[Browsable(true)]
public event EventHandler InitializeStuffCallback;
```
Reference:
[Design-Time Attributes for Components](http://msdn.microsoft.com/en-us/library/tk67c2t8.aspx) and [BrowsableAttribute Class](http://msdn.microsoft.com/en-us/library/system.componentmodel.browsableattribute.aspx)
All the events, properties, or whatever need to be in the code-behind of the control with the browsable attribute to make it so you can change it in the tag code.
|
Normally you wouldn't need to get the method via reflection. Inside your user control, define a public event (sorry I do not know the vb syntax so this will be in c#)
```
public event EventHandler EventName;
```
Now, inside your aspx page, or whatever container of the user control, define a protected method that matches the EventHandler:
```
protected void MyCustomerControl_MethodName(object sender, EventArgs e) { }
```
Now, inside your markup, you can use
```
<c:MyCustomerControl id="MyCustomerControl" runat=server OnEventName="MyCustomerControl_MethodName">
```
|
How to pass method name to custom server control in asp.net?
|
[
"",
"c#",
"asp.net",
"vb.net",
"custom-server-controls",
"web-controls",
""
] |
Currently I am using NetTiers to generate my data access layer and service layer. I have been using NetTiers for over 2 years and have found it to be very useful. At some point I need to look at LINQ so my questions are...
1. Has anyone else gone from NetTiers to LINQ To SQL?
2. Was this switch over a good or bad thing?
3. Is there anything that I should be aware of?
4. Would you recommend this switch?
Basically I would welcome any thoughts
.
|
1. No
2. See #1
3. You should beware of standard abstraction overhead. Also, it's very SQL Server-based in its current state.
4. Are you using SQL Server? Then maybe. If you are using LINQ for other things right now, like over XML data (great), object data, or DataSets, then yes, you should switch to have a uniform data syntax for all of them. As [lagerdalek](https://stackoverflow.com/questions/81376/should-i-start-using-linq-to-sql#81422) mentioned, if it ain't broke, don't fix it.

From a quick look at the .netTiers Application Framework, I'd say if you already have an investment in that solution, it seems to give you much more than a simple Data Access Layer, and you should stick with it.
From my experience LINQ to SQL is a good solution for small-medium sized projects. It is an ORM which is a great way to enhance productivity. It also ***should*** give you another layer of abstraction that will allow you to change out the layer underneath for something else. The designer in Visual Studio (and I belive VS Express also) is very easy and simple to use. It gives you the common drag-drop and property-based editing of the object mappings.
@[Jason Jackson](https://stackoverflow.com/questions/81376/should-i-start-using-linq-to-sql#154525) - The designer does let you add properties by hand; however, you need to specify the attributes for that property. You do this once, so it might take 3 minutes longer than the initial dragging of the table into the designer, but it is only necessary once per change in the database itself. This is not too different from other ORMs, though you are correct that they could make this much easier, for example by finding only those properties that have changed, or even by providing some kind of refactoring tool for such needs.
Resources:
* [Why use LINQ to SQL?](http://dotnet.org.za/hiltong/archive/2008/02/01/why-use-linq-to-sql-part-1-performance-considerations.aspx)
* [Scott Guthrie on LINQ to SQL](http://weblogs.asp.net/scottgu/archive/2007/09/07/linq-to-sql-part-9-using-a-custom-linq-expression-with-the-lt-asp-linqdatasource-gt-control.aspx)
* [10 Tips to Improve your LINQ to SQL Application Performance](http://www.sidarok.com/web/blog/content/2008/05/02/10-tips-to-improve-your-linq-to-sql-application-performance.html)
* [LINQ To SQL and Visual Studio 2008 Performance Update](http://www.davidhayden.com/blog/dave/archive/2007/09/28/LINQToSQLVisualStudio2008PerformanceUpdate.aspx)
* [Performance Comparisons LINQ to SQL / ADO / C#](http://www.codeproject.com/KB/linq/performance_comparisons.aspx)
* [LINQ to SQL 5 Minute Overview](http://www.hookedonlinq.com/LINQtoSQL5MinuteOverview.ashx)
Note that [Parallel LINQ](http://msdn.microsoft.com/en-us/magazine/cc163329.aspx) is being developed to allow for much greater performance on multi-core machines.
|
I tried to use Linq to SQL on a small project, thinking that I wanted something I could generate quickly. I ran into a lot of problems in the designer. For example, anytime you need to add a column to a table you basically have to remove and re-add the table definition in the designer. If you have set any properties on the table then you have to re-set those properties. For me this really slowed down the development process.
LINQ to SQL itself is nice. I really like the extensibility. If they can improve the designer I might try it again. I think that the framework would benefit from a little more functionality aimed at a disconnected model like web development.
Check out [Scott Guthrie's LINQ to SQL series](http://weblogs.asp.net/scottgu/archive/2007/05/19/using-linq-to-sql-part-1.aspx) of blog posts for some great examples of how to use it.
|
Should I start using LINQ To SQL?
|
[
"",
"c#",
"linq",
"linq-to-sql",
".nettiers",
""
] |
For most GUI's I've used, when a control that contains text gets the focus, the entire contents of the control are selected. This means if you just start typing, you completely replace the former contents.
Example: You have a spin control that is initialized with the value zero. You tab to it and type "1". The value in the control is now 1.

With Swing, this doesn't happen. The text in the control is not selected, and the caret appears at one end or the other of the existing text. Continuing the above example:

With a Swing JSpinner, when you tab to the spin control, the caret is at the left. You type "1" and the value in the control is now 10.

This drives me (and my users) up a wall, and I'd like to change it. Even more important, I'd like to change it globally so the new behavior applies to JTextField, JPasswordField, JFormattedTextField, JTextArea, JComboBox, JSpinner, and so on. The only way I have found to do this is to add a FocusAdapter to each control and override the focusGained() method to Do The Right Thing[tm].
There's gotta be an easier, and less fragile way. Please?
EDIT: One additional piece of information for this particular case. The form I am working with was generated using IntelliJ IDEA's form designer. That means I normally don't actually write the code to create the components. It is possible to tell IDEA that you want to create them yourself, but that's a hassle I'd like to avoid.
Motto: All good programmers are basically lazy.
|
After reading the replies so far (Thanks!) I passed the outermost JPanel to the following method:
```
void addTextFocusSelect(JComponent component) {
    if (component instanceof JTextComponent) {
        component.addFocusListener(new FocusAdapter() {
            @Override
            public void focusGained(FocusEvent event) {
                super.focusGained(event);
                JTextComponent component = (JTextComponent) event.getComponent();
                // a trick I found on JavaRanch.com
                // Without this, some components don't honor selectAll
                component.setText(component.getText());
                component.selectAll();
            }
        });
    }
    else {
        for (Component child : component.getComponents()) {
            if (child instanceof JComponent) {
                addTextFocusSelect((JComponent) child);
            }
        }
    }
}
```
It works!
|
I haven't tried this myself (only dabbled in it a while ago), but you can probably get the currently focused component by using KeyboardFocusManager (there is a static method getCurrentKeyboardFocusManager()) and adding a PropertyChangeListener to it.
From there, you can find out if the component is a JTextComponent and select all text.
|
Is there an easy way to change the behavior of a Java/Swing control when it gets focus?
|
[
"",
"java",
"user-interface",
"swing",
""
] |
I want to encrypt a few files using Python. What is the best way?

Can I use GPG/PGP via any standard/well-known Python libraries?
|
[PyCrypto](http://www.pycrypto.org) seems to be the best one around.
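Since the question specifically mentions GPG/PGP: if GnuPG is installed, one library-free alternative is driving the `gpg` binary from the standard library. This is only a sketch; `gpg_encrypt_cmd` and `encrypt_file` are illustrative names, and it assumes the recipient's public key is already imported into your keyring.

```python
import subprocess

def gpg_encrypt_cmd(path, recipient):
    """Build the argv for encrypting one file to a recipient's public key."""
    # --batch/--yes keep gpg from prompting; ciphertext goes to <path>.gpg
    return ["gpg", "--batch", "--yes", "--recipient", recipient,
            "--output", path + ".gpg", "--encrypt", path]

def encrypt_file(path, recipient):
    # Requires the gpg binary on PATH and the recipient's key already imported.
    subprocess.run(gpg_encrypt_cmd(path, recipient), check=True)
```

For example, `encrypt_file("notes.txt", "alice@example.com")` would leave `notes.txt.gpg` next to the original.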
|
Try [KeyCzar](http://code.google.com/p/keyczar/)
Very easy to implement.
|
what is the best/easiest to use encryption library in python
|
[
"",
"python",
"encryption",
"gnupg",
"pgp",
""
] |
I'm looking for a technique or tool which we can use to obfuscate or somehow secure our compiled c# code. The goal is not for user/data security but to hinder reverse engineering of some of the technology in our software.
This is not for use on the web, but for a desktop application.
So, do you know of any tools available to do this type of thing? (They need not be free)
What kind of performance implications do they have if any?
Does this have any negative side effects when using a debugger during development?
We log stack traces of problems in the field. How would obfuscation affect this?
|
This is a pretty good list of obfuscators from [Visual Studio Marketplace](https://marketplace.visualstudio.com/search?term=.net%20obfuscators&target=VS&category=All%20categories&vsVersion=&sortBy=Relevance)
Obfuscators
* [ArmDot](https://www.armdot.com/)
* [Crypto Obfuscator](http://www.ssware.com/cryptoobfuscator/obfuscator-net.htm)
* [Demeanor](http://www.wiseowl.com/products/products.aspx) for .NET
* [DeployLX CodeVeil](http://xheo.com/products/code-protection)
* [Dotfuscator](http://www.preemptive.com/) .NET Obfuscator
* [Semantic Designs](http://www.semdesigns.com/products/obfuscators/csharpobfuscator.html): C# Source Code Obfuscator
* [Smartassembly](http://www.smartassembly.com)
* [Spices.Net](http://www.9rays.net/Products/Spices.Net/)
* [Xenocode](http://www.xenocode.com/Products/Postbuild/) Postbuild 2006
* [.NET Reactor](http://www.eziriz.com/)
I have not observed any performance issues when obfuscating my code. If you're just sending text-based stack traces, you might have a problem translating the method names.
|
There are tools that also 'deobfuscate' obfuscated DLLs - I'd suggest turning the piece that needs to be protected into an unmanaged component.
|
Best method to obfuscate or secure .Net assemblies
|
[
"",
"c#",
".net",
".net-2.0",
""
] |
I have a page that uses
```
$(id).show("highlight", {}, 2000);
```
to highlight an element when I start a ajax request, that might fail so that I want to use something like
```
$(id).show("highlight", {color: "#FF0000"}, 2000);
```
in the error handler. The problem is that if the first highlight haven't finished, the second is placed in a queue and wont run until the first is ready. Hence the question: Can I somehow stop the first effect?
|
From the jQuery docs:
<http://docs.jquery.com/Effects/stop>
> *Stop the currently-running animation on the matched elements.*...
>
> When `.stop()` is called on an element, the currently-running animation (if any) is immediately stopped. If, for instance, an element is being hidden with `.slideUp()` when `.stop()` is called, the element will now still be displayed, but will be a fraction of its previous height. Callback functions are not called.
>
> If more than one animation method is called on the same element, the later animations are placed in the effects queue for the element. These animations will not begin until the first one completes. When `.stop()` is called, the next animation in the queue begins immediately. If the `clearQueue` parameter is provided with a value of `true`, then the rest of the animations in the queue are removed and never run.
>
> If the `jumpToEnd` argument is provided with a value of true, the current animation stops, but the element is immediately given its target values for each CSS property. In our above `.slideUp()` example, the element would be immediately hidden. The callback function is then immediately called, if provided...
|
**I listed this as a comment for the accepted answer, but I thought it would be a good idea to post it as a standalone answer as it seems to be helping some people having problems with `.stop()`**
---
FYI - I was looking for this answer as well (trying to stop a Pulsate Effect), but I did have a `.stop()` in my code.
After reviewing the docs, I needed `.stop(true, true)`
|
How do I stop an effect in jQuery
|
[
"",
"javascript",
"jquery",
""
] |
I'm not sure I'm using all the correct terminology here so be forgiving.
I just put up a site with a contact form that sends an email using the PHP mail() function. Simple enough. However the live site doesn't actually send the email, the test site does. So it's not my code.
It's a shared host and we have another site that has the same function that works perfectly, so it's not the server.
The only difference between the two is that the site that doesn't work just has the name server pointing to us and so the MX record never touches our server.
So my question is, could someone please confirm that the mail() function won't work if we don't have the MX record pointing to our server. Thanks
|
Hey guys thanks for the answers, it is really appreciated.
After ignoring the issue for a few months it has come up again, I did however find the answer to my problems.
Firstly, as you answers suggested, PHP and the mail() function were working as expected. The mail was getting sent.
The problem lies in what happens when the email is sent: the server simply presumes that because the mail is being sent from mydomain.com to an \*@mydomain.com address, the mailbox itself is hosted on the same server, so it delivers the mail locally and ignores the MX record.

OK, it's a bit more complicated than that, but that is the general gist.
Edit:
Found a better version of the topic [sendmail and MX records when mail server is not on web host](https://stackoverflow.com/questions/322659/sendmail-and-mx-records-when-mail-server-is-not-on-web-host).
|
Yes. It will work just fine. I have a PHP script using the mail() function with the MX records set to Google Apps.
If the two scripts are on different hosts (it's a bit unclear from your post), then make sure that the host doesn't block some of the custom headers. I had issues with this when creating my script, but removing all but the From header fixed the problem.
|
Does the PHP mail() function work if I don't own the MX record
|
[
"",
"php",
"dns",
""
] |
Ok, so, my visual studio is broken. I say this NOT prematurely, as it was my first response to see where I had messed up in my code. When I add controls to the page I can't reference all of them in the code behind. Some of them I can, it seems that the first few I put on a page work, then it just stops.
I first thought it may be the type of control as initially I was trying to reference a repeater inside an update panel. I know I am correctly referencing the code behind in my aspx page. But just in case it was a screw up on my part I started to recreate the page from scratch and this time got a few more controls down before VS stopped recognizing my controls.
After creating my page twice and getting stuck I thought maybe it was still the type of controls. I created a new page and just threw some labels on it. No dice, build fails when referencing the control from the code behind.
In a possibly unrelated note when I switch to the dreaded "design" mode of the aspx pages VS 2008 errors out and restarts.
I have already put a trouble ticket in to Microsoft. I uninstalled all add-ins, I reinstalled visual studio.
Anyone that wants to see my code just ask, but I am using the straight WYSIWYG visual studio "new aspx page" nothing fancy.
I doubt anyone has run into this, but have you?
Has anyone had success trouble shooting these things with Microsoft? Any way to expedite this ticket without paying??? I have been talking to a rep from Microsoft for days with no luck yet and I am dead in the water.
---
**Jon Limjap:** I edited the title to both make it clear and descriptive *and* make sure that nobody sees it as offensive. "Foo-barred" doesn't exactly constitute a proper question title, although your question is clearly a valid one.
|
Try clearing your local VS cache: find your project's folder and delete it. Why the folder is created by VS I honestly don't understand, but I've had several occasions where clearing it and doing a re-build fixes things... hope this is all that you need as well.

Here:
```
%Temp%\VWDWebCache
```
and possibly here
```
%LocalAppData%\Microsoft\WebsiteCache
```
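If this keeps recurring, the clean-out can be scripted. A hedged Python sketch (the `clear_caches` helper is an illustrative name; the two paths are just the ones mentioned above, expanded from their environment variables):

```python
import os
import shutil

# The two cache locations named in this answer; %VAR% placeholders are
# expanded by os.path.expandvars (on Windows they resolve as expected).
CACHE_DIRS = [
    r"%Temp%\VWDWebCache",
    r"%LocalAppData%\Microsoft\WebsiteCache",
]

def clear_caches(paths=CACHE_DIRS):
    """Delete each cache folder that exists; return the paths removed."""
    removed = []
    for raw in paths:
        path = os.path.expandvars(raw)
        if os.path.isdir(path):
            shutil.rmtree(path, ignore_errors=True)
            removed.append(path)
    return removed
```

Close Visual Studio first, run the script, then re-open and rebuild.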
|
The above fix (deleting the temp files) did not work for me. I had to delete the `PageName.aspx.designer.cs` file, then right-click my page, and choose "Convert to Web Application" from the context menu.
When Visual Studio attempted to rebuild the designer file, it encountered (and revealed to me) the source of the problem. In my case, VS had lost a reference to a DLL needed by one of the controls on my page, so I had to clean out the generated bin folders in my project.
|
ASP.NET controls cannot be referenced in code-behind in Visual Studio 2008
|
[
"",
"c#",
"asp.net",
"visual-studio",
""
] |
I use the Boost Test framework for my C++ code but there are two problems with it that are probably common to all C++ test frameworks:
* There is no way to create automatic test stubs (by extracting public functions from selected classes for example).
* You cannot run a single test - you have to run the entire 'suite' of tests (unless you create lots of different test projects I guess).
Does anyone know of a better testing framework or am I forever to be jealous of the test tools available to Java/.NET developers?
|
I just responded to a [very similar question](https://stackoverflow.com/questions/91384/unit-testing-for-c-code-tools-and-methodology#92569). I ended up using Noel Llopis' UnitTest++. I liked it more than boost::test because it didn't insist on implementing the main program of the test harness with a macro - it can plug into whatever executable you create. It does suffer from the same encumbrance of boost::test in that it requires a library to be linked in. I've used CxxTest, and it does come closer than anything else in C++-land to automatically generating tests (though it requires Perl to be part of your build system to do this). C++ just does not provide the reflection hooks that the .NET languages and Java do. The MsTest tools in Visual Studio Team System - Developer's Edition will auto-generate test stubs of unmanaged C++, but the methods have to be exported from a DLL to do this, so it does not work with static libraries. Other test frameworks in the .NET world may have this ability too, but I'm not familiar with any of those. So right now we use UnitTest++ for unmanaged C++ and I'm currently deciding between MsTest and NUnit for the managed libraries.
|
I've just pushed my own framework, [CATCH](http://www.levelofindirection.com/journal/2010/12/28/unit-testing-in-c-and-objective-c-just-got-easier.html), out there. It's still under development but I believe it already surpasses most other frameworks.
Different people have different criteria but I've tried to cover most ground without too many trade-offs.
Take a look at my linked blog entry for a taster. My top five features are:
* Header only
* Auto registration of function and method based tests
* Decomposes standard C++ expressions into LHS and RHS (so you don't need a whole family of assert macros).
* Support for nested sections within a function based fixture
* Name tests using natural language - function/ method names are generated
It *doesn't* do generation of stubs - but that's a fairly specialised area. I think [Isolator++](http://www.typemock.com/isolatorpp-product-page) is the first tool to truly pull that off. Note that Mocking/ stubbing frameworks are usually independent of unit testing frameworks. CATCH works particularly well with mock objects as test state is not passed around by context.
It also has Objective-C bindings.
[update]
Just happened back across this answer of mine from a few years ago. Thanks for all the great comments!
Obviously Catch has developed on a lot in that time. It now has support for BDD style testing (given/ when/ then), tags, now in a *single* header, and loads of internal improvements and refinements (e.g. richer command line, clear and expressive output etc). [A more up-to-date blog post is here.](http://www.levelofindirection.com/journal/2013/6/28/catch-10.html)
|
C++ unit testing framework
|
[
"",
"c++",
"unit-testing",
""
] |
All I want is to update a ListViewItem's text without seeing any flickering.
This is my code for updating (called several times):
```
listView.BeginUpdate();
listViewItem.SubItems[0].Text = state.ToString(); // update the state
listViewItem.SubItems[1].Text = progress.ToString(); // update the progress
listView.EndUpdate();
```
I've seen some solutions that involve overriding the component's `WndProc():`
```
protected override void WndProc(ref Message m)
{
    if (m.Msg == (int)WM.WM_ERASEBKGND)
    {
        m.Msg = (int)IntPtr.Zero;
    }
    base.WndProc(ref m);
}
```
**They say it solves the problem, but in my case it didn't.** I believe this is because I'm using icons on every item.
|
To end this question, here is a helper class that should be called when the form is loading for each ListView or any other ListView's derived control in your form. Thanks to "Brian Gillespie" for giving the solution.
```
public enum ListViewExtendedStyles
{
/// <summary>
/// LVS_EX_GRIDLINES
/// </summary>
GridLines = 0x00000001,
/// <summary>
/// LVS_EX_SUBITEMIMAGES
/// </summary>
SubItemImages = 0x00000002,
/// <summary>
/// LVS_EX_CHECKBOXES
/// </summary>
CheckBoxes = 0x00000004,
/// <summary>
/// LVS_EX_TRACKSELECT
/// </summary>
TrackSelect = 0x00000008,
/// <summary>
/// LVS_EX_HEADERDRAGDROP
/// </summary>
HeaderDragDrop = 0x00000010,
/// <summary>
/// LVS_EX_FULLROWSELECT
/// </summary>
FullRowSelect = 0x00000020,
/// <summary>
/// LVS_EX_ONECLICKACTIVATE
/// </summary>
OneClickActivate = 0x00000040,
/// <summary>
/// LVS_EX_TWOCLICKACTIVATE
/// </summary>
TwoClickActivate = 0x00000080,
/// <summary>
/// LVS_EX_FLATSB
/// </summary>
FlatsB = 0x00000100,
/// <summary>
/// LVS_EX_REGIONAL
/// </summary>
Regional = 0x00000200,
/// <summary>
/// LVS_EX_INFOTIP
/// </summary>
InfoTip = 0x00000400,
/// <summary>
/// LVS_EX_UNDERLINEHOT
/// </summary>
UnderlineHot = 0x00000800,
/// <summary>
/// LVS_EX_UNDERLINECOLD
/// </summary>
UnderlineCold = 0x00001000,
/// <summary>
/// LVS_EX_MULTIWORKAREAS
/// </summary>
MultilWorkAreas = 0x00002000,
/// <summary>
/// LVS_EX_LABELTIP
/// </summary>
LabelTip = 0x00004000,
/// <summary>
/// LVS_EX_BORDERSELECT
/// </summary>
BorderSelect = 0x00008000,
/// <summary>
/// LVS_EX_DOUBLEBUFFER
/// </summary>
DoubleBuffer = 0x00010000,
/// <summary>
/// LVS_EX_HIDELABELS
/// </summary>
HideLabels = 0x00020000,
/// <summary>
/// LVS_EX_SINGLEROW
/// </summary>
SingleRow = 0x00040000,
/// <summary>
/// LVS_EX_SNAPTOGRID
/// </summary>
SnapToGrid = 0x00080000,
/// <summary>
/// LVS_EX_SIMPLESELECT
/// </summary>
SimpleSelect = 0x00100000
}
public enum ListViewMessages
{
First = 0x1000,
SetExtendedStyle = (First + 54),
GetExtendedStyle = (First + 55),
}
/// <summary>
/// Contains helper methods to change extended styles on ListView, including enabling double buffering.
/// Based on Giovanni Montrone's article on <see cref="http://www.codeproject.com/KB/list/listviewxp.aspx"/>
/// </summary>
public class ListViewHelper
{
private ListViewHelper()
{
}
[DllImport("user32.dll", CharSet = CharSet.Auto)]
private static extern int SendMessage(IntPtr handle, int messg, int wparam, int lparam);
public static void SetExtendedStyle(Control control, ListViewExtendedStyles exStyle)
{
ListViewExtendedStyles styles;
styles = (ListViewExtendedStyles)SendMessage(control.Handle, (int)ListViewMessages.GetExtendedStyle, 0, 0);
styles |= exStyle;
SendMessage(control.Handle, (int)ListViewMessages.SetExtendedStyle, 0, (int)styles);
}
public static void EnableDoubleBuffer(Control control)
{
ListViewExtendedStyles styles;
// read current style
styles = (ListViewExtendedStyles)SendMessage(control.Handle, (int)ListViewMessages.GetExtendedStyle, 0, 0);
// enable double buffer and border select
styles |= ListViewExtendedStyles.DoubleBuffer | ListViewExtendedStyles.BorderSelect;
// write new style
SendMessage(control.Handle, (int)ListViewMessages.SetExtendedStyle, 0, (int)styles);
}
public static void DisableDoubleBuffer(Control control)
{
ListViewExtendedStyles styles;
// read current style
styles = (ListViewExtendedStyles)SendMessage(control.Handle, (int)ListViewMessages.GetExtendedStyle, 0, 0);
// disable double buffer and border select
styles -= styles & ListViewExtendedStyles.DoubleBuffer;
styles -= styles & ListViewExtendedStyles.BorderSelect;
// write new style
SendMessage(control.Handle, (int)ListViewMessages.SetExtendedStyle, 0, (int)styles);
}
}
```
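For reference, a minimal usage sketch of the helper above (assuming `myListView` is a `ListView` placed on the form, a placeholder name):
```
// Inside the Form constructor, after InitializeComponent():
ListViewHelper.EnableDoubleBuffer(myListView);

// ...and to turn it off again later, if needed:
// ListViewHelper.DisableDoubleBuffer(myListView);
```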
|
The accepted answer works, but it is quite lengthy, and deriving from the control (as mentioned in the other answers) just to enable double buffering is also a bit overdone. Fortunately, we have reflection and can call internal methods if we like (but be sure you know what you are doing!).
By encapsulating this approach in an extension method, we get a quite short class:
```
public static class ControlExtensions
{
public static void DoubleBuffering(this Control control, bool enable)
{
var method = typeof(Control).GetMethod("SetStyle", BindingFlags.Instance | BindingFlags.NonPublic);
method.Invoke(control, new object[] { ControlStyles.OptimizedDoubleBuffer, enable });
}
}
```
Which can easily be called within our code:
```
InitializeComponent();
myListView.DoubleBuffering(true); //after the InitializeComponent();
```
And all flickering is gone.
### Update
I stumbled upon [this question](https://stackoverflow.com/questions/1967228/controlstyles-doublebuffer-vs-controlstyles-optimizeddoublebuffer), and in light of it, the extension method should (maybe) rather be:
```
public static void DoubleBuffered(this Control control, bool enable)
{
var doubleBufferPropertyInfo = control.GetType().GetProperty("DoubleBuffered", BindingFlags.Instance | BindingFlags.NonPublic);
doubleBufferPropertyInfo.SetValue(control, enable, null);
}
```
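For comparison, the subclassing approach dismissed above as overdone would look roughly like this (a minimal sketch; `BufferedListView` is a hypothetical name):
```
using System.Windows.Forms;

// A ListView that enables double buffering in its constructor.
// DoubleBuffered is protected, so a derived class can set it directly,
// avoiding reflection, at the cost of swapping the control type in the designer.
public class BufferedListView : ListView
{
    public BufferedListView()
    {
        DoubleBuffered = true;
    }
}
```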
|
How to prevent flickering in ListView when updating a single ListViewItem's text?
|
[
"",
"c#",
".net",
"winforms",
"listview",
""
] |