Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I can get the nameserver via NSLOOKUP on a Windows machine and get an AD DC via a WMI VBS script, but I can't use the VBS script from a Unix server (which can ping the nameserver).
I'd like to use Java to obtain all the Active Directory domain controllers from a Unix box.
Is this possible? | If you can query the DNS Server, can't you perform a query for its service records to discover computers that can act as Domain Controllers?
That's how Windows Workstations find their candidate Domain Controllers. | I'd suggest you look into the java.net package, which contains classes like InetAddress, URLConnection, and various socket classes that you might find useful. | How can I find the IP of Active Directory Domain Controller(s) from Unix? | [
"",
"java",
"active-directory",
""
] |
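Domain controllers advertise themselves with DNS SRV records under the well-known name `_ldap._tcp.dc._msdcs.<domain>`, so the SRV lookup described above can be done from Java with JNDI's DNS provider. A sketch (the domain `example.com` is a placeholder):

```java
import java.util.Hashtable;

import javax.naming.NamingException;
import javax.naming.directory.Attribute;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

public class FindDomainControllers {

    // Domain controllers register SRV records under this well-known DNS name.
    static String srvQueryName(String domain) {
        return "_ldap._tcp.dc._msdcs." + domain;
    }

    public static void main(String[] args) {
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put("java.naming.factory.initial", "com.sun.jndi.dns.DnsContextFactory");
        try {
            DirContext ctx = new InitialDirContext(env);
            Attribute srv = ctx.getAttributes(srvQueryName("example.com"),
                    new String[] { "SRV" }).get("SRV");
            // Each value looks like "0 100 389 dc1.example.com." --
            // priority, weight, port, and the DC's host name.
            for (int i = 0; srv != null && i < srv.size(); i++) {
                System.out.println(srv.get(i));
            }
        } catch (NamingException e) {
            System.err.println("SRV lookup failed: " + e.getMessage());
        }
    }
}
```

Resolving each returned host name to an IP is then a plain `InetAddress.getByName()` call.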
Is there a way to know the number of test methods in a test case?
What I want to do is have a test case which tests several scenarios, and for all of these I would do the data setUp() only once. Similarly, I would like to do the cleanup (tearDown()) once at the end of all the test methods.
The current approach I am using is to maintain a counter for the number of test methods present in the file, decrement it in the tearDown method, and do the cleanup when the count reaches 0. But this counter needs to be updated whenever new test methods are added. | Instead of using setUp/tearDown, you should probably use methods annotated with [@BeforeClass](http://junit.org/apidocs/org/junit/BeforeClass.html) and @AfterClass. | You can do this through @BeforeClass and @AfterClass in JUnit 4:
<http://junit.org/apidocs/org/junit/BeforeClass.html>
| To find the number of test methods in a JUnit TestCase | [
"",
"java",
"junit",
""
] |
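A sketch of the class-level lifecycle both answers describe (JUnit 4 on the classpath is assumed; `createTestData`/`deleteTestData` are placeholder names):

```java
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;

public class ScenarioTest {

    @BeforeClass
    public static void setUpOnce() {
        // runs once, before the first test method
        createTestData();
    }

    @AfterClass
    public static void tearDownOnce() {
        // runs once, after the last test method -- no counter needed
        deleteTestData();
    }

    @Test
    public void scenarioOne() { /* ... */ }

    @Test
    public void scenarioTwo() { /* ... */ }

    private static void createTestData() { /* ... */ }
    private static void deleteTestData() { /* ... */ }
}
```

Note that `@BeforeClass`/`@AfterClass` methods must be `public static`; adding a new `@Test` method requires no bookkeeping.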
So I have a collection of objects. The exact type isn't important. From it I want to extract all the unique pairs of two particular properties, like so:
```
myObjectCollection.Select(item=>new
{
Alpha = item.propOne,
Bravo = item.propTwo
}
).Distinct();
```
So my question is: Will Distinct in this case use the default object equals (which will be useless to me, since each object is new) or can it be told to do a different equals (in this case, equal values of Alpha and Bravo => equal instances)? Is there any way to achieve that result, if this doesn't do it? | Have a read through K. Scott Allen's excellent post here:
[And Equality for All ... Anonymous Types](http://odetocode.com/Blogs/scott/archive/2008/03/25/11916.aspx)
The short answer (and I quote):
> Turns out the C# compiler overrides
> Equals and GetHashCode for anonymous
> types. The implementation of the two
> overridden methods uses all the public
> properties on the type to compute an
> object's hash code and test for
> equality. If two objects of the same
> anonymous type have all the same
> values for their properties – the
> objects are equal.
So it's totally safe to use the Distinct() method on a query that returns anonymous types. | ```
public class DelegateComparer<T> : IEqualityComparer<T>
{
private Func<T, T, bool> _equals;
private Func<T, int> _hashCode;
public DelegateComparer(Func<T, T, bool> equals, Func<T, int> hashCode)
{
_equals = equals;
_hashCode = hashCode;
}
public bool Equals(T x, T y)
{
return _equals(x, y);
}
public int GetHashCode(T obj)
{
if (_hashCode != null)
return _hashCode(obj);
return obj.GetHashCode();
}
}
public static class Extensions
{
public static IEnumerable<T> Distinct<T>(this IEnumerable<T> items,
Func<T, T, bool> equals, Func<T,int> hashCode)
{
return items.Distinct(new DelegateComparer<T>(equals, hashCode));
}
public static IEnumerable<T> Distinct<T>(this IEnumerable<T> items,
Func<T, T, bool> equals)
{
return items.Distinct(new DelegateComparer<T>(equals,null));
}
}
var uniqueItems = students.Select(s => new { FirstName = s.FirstName, LastName = s.LastName })
    .Distinct((a, b) => a.FirstName == b.FirstName && a.LastName == b.LastName,
              c => c.FirstName.GetHashCode() ^ c.LastName.GetHashCode()).ToList();
```
| LINQ Select Distinct with Anonymous Types | [
"",
"c#",
"linq",
"distinct",
"anonymous-types",
"equality",
""
] |
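A quick illustration of the quoted behaviour — two anonymous instances with identical property values compare equal, so the default `Distinct()` collapses them (a sketch):

```csharp
using System;
using System.Linq;

class Program
{
    static void Main()
    {
        var items = new[]
        {
            new { Alpha = 1, Bravo = "x" },
            new { Alpha = 1, Bravo = "x" }, // same property values => equal
            new { Alpha = 2, Bravo = "y" },
        };

        // The compiler-generated Equals/GetHashCode make this value-based.
        Console.WriteLine(items.Distinct().Count()); // prints 2
    }
}
```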
How do I unit test JPA code? Is there any way to generate the unit tests themselves?
Note: I am lazy and new to unit testing. | Check this out: [Unitils](http://www.unitils.org/summary.html). Here is a [related discussion](http://www.theserverside.com/news/thread.tss?thread_id=50487), with some example code.
Here is an example showing [DBUnit, Spring and OpenJPA](http://bill.dudney.net/roller/bill/entry/20070428) together. You might not use all of them, but it can take you somewhere if you want to go with [DBUnit](http://www.dbunit.org/), I believe. | I'm in the middle of trying out OpenEJB (<http://openejb.apache.org/>) for my ongoing project. It's an embeddable container with support for EJB 3.0 (and partially 3.1). My first impression of it is fairly good. | Suggest a JPA unit test framework | [
"",
"java",
"unit-testing",
"jpa",
""
] |
Are unit test cases for JPA worthwhile? They just access the DB anyway, so why do we need them? | I disagree with most of the answers so far. For most enterprise applications, much of the business logic is embedded within database queries, most in JPQL if you're using JPA, and some can be quite complex. That is code that you're writing, and therefore you should test it.
If the queries are simple -- no complex joins or criteria -- then it's not as much of a concern, but in my experience those apps are rare, and you're probably not needing something as powerful as JPA to build them. You'll pay in terms of scalability for apps that naively attempt to keep their persistence layer free of "business logic".
For most JPA apps, it's vital that the tests run outside of a container as a part of your normal, continuously-integrated build. The typical way to do that is with an in-memory database like HSQLDB or H2.
Java EE competes with frameworks like Rails and Django that expect the developer to unit test his code against a real database dedicated especially for that purpose. JPA developers should expect no less. | You could use an embedded database, e.g. [HSQLDB](http://hsqldb.org/ "HSQLDB") or [H2 Database](http://www.h2database.com/html/main.html "H2 Database"), and:
* test whether the annotation mapping is correct (i.e., the SessionFactory can be started)
* use a "small" schema.sql to test constraints (e.g. String length, foreign keys)
* take an early look at the SQL statements to spot weird mappings
and/or use an integration database (either remote or build locally with backups from production) and :
* make some performance tests or tests with real data | Are unit test cases for JPA worthwhile? They just access the DB anyway, so why do we need them? | [
"",
"java",
"unit-testing",
"jpa",
""
] |
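For the in-memory approach above, the test `persistence.xml` can simply point the JPA provider at H2. A sketch of the persistence unit (assuming JPA 2.0 standard property names; the unit name `test-pu` is made up):

```xml
<persistence-unit name="test-pu" transaction-type="RESOURCE_LOCAL">
  <properties>
    <property name="javax.persistence.jdbc.driver" value="org.h2.Driver"/>
    <property name="javax.persistence.jdbc.url" value="jdbc:h2:mem:testdb"/>
    <property name="javax.persistence.jdbc.user" value="sa"/>
    <property name="javax.persistence.jdbc.password" value=""/>
  </properties>
</persistence-unit>
```

Because `jdbc:h2:mem:` databases vanish when the last connection closes, every test run starts from a clean slate.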
I ran into a situation today where Java was not invoking the method I expected -- Here is the minimal test case: (I'm sorry this seems contrived -- the 'real world' scenario is substantially more complex, and makes much more sense from a "why the hell would you do *that*?" standpoint.)
I'm specifically interested in why this happens, I don't care about redesign suggestions. I have a feeling this is in Java Puzzlers, but I don't have my copy handy.
See the specific question in comments within Test<T>.getValue() below:
```
public class Ol2 {
public static void main(String[] args) {
Test<Integer> t = new Test<Integer>() {
protected Integer value() { return 5; }
};
System.out.println(t.getValue());
}
}
abstract class Test<T> {
protected abstract T value();
public String getValue() {
// Why does this always invoke makeString(Object)?
// The type of value() is available at compile-time.
return Util.makeString(value());
}
}
class Util {
public static String makeString(Integer i){
return "int: "+i;
}
public static String makeString(Object o){
return "obj: "+o;
}
}
```
The output from this code is:
```
obj: 5
``` | No, the type of value is not available at compile time. Keep in mind that javac will only compile one copy of the code to be used for all possible T's. Given that, the only possible type for the compiler to use in your getValue() method is Object.
C++ is different, because it will eventually create multiple compiled versions of the code as needed. | Because the decision about which `makeString()` overload to use is made at compile time and, because T could be anything, it must be the `Object` version. Think about it: if you did `Test<String>`, it would have to call the `Object` version. As such, all instances of `Test<T>` will use `makeString(Object)`.
Now if you did something like:
```
public abstract class Test<T extends Integer> {
...
}
```
things might be different. | How does Java method dispatch work with Generics and abstract classes? | [
"",
"java",
"generics",
"overloading",
""
] |
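The compile-time nature of overload resolution is easy to see even without generics: only the *static* type of the argument matters, which is why the erased `T` (an `Object`) always picks `makeString(Object)`. A minimal, runnable illustration:

```java
public class OverloadDemo {

    static String makeString(Integer i) { return "int: " + i; }
    static String makeString(Object o)  { return "obj: " + o; }

    public static void main(String[] args) {
        Integer boxed = 5;
        Object sameValue = boxed; // same object, different static type

        // Overloads are chosen from the variable's declared type, not the runtime type.
        System.out.println(makeString(boxed));     // int: 5
        System.out.println(makeString(sameValue)); // obj: 5
    }
}
```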
I'm trying to make a Tetris-like game in XNA, and currently I'm thinking of what way would be the best to handle it.
This is what I have so far:
I have a class called Block, which has for example texture and color tint.
Then I was planning on having everything in a double array, like:
```
Block[,] blocks = new Block[10,20];
```
which would then be the full grid.
And then when the blocks move downwards, I was thinking of doing like this:
```
blocks[x,y+1] = blocks[x,y];
blocks[x,y] = null;
```
At first I thought this was a good idea, but now when I've been thinking I'm not so sure. How does it work with the memory and such? Does it create a new object every time I do that or what? Could someone please explain how it actually works when I move an object inside an array?
I'm not really looking for a Tetris-specific answer, I'm just interested in how it actually works.
Thanks. | No, you're just moving pointers around. When you say:
```
blocks[x,y+1] = blocks[x,y];
```
what you're essentially doing is copying the reference. The object stays exactly where it is, but now the slot at index x,y+1 refers to it as well. When you say
```
blocks[x,y] = null;
```
there you're removing the reference to the object at x,y, and if nothing else is holding a reference, the Garbage Collector will clean it up. | The first answer above is almost correct, but the assignment is not swapping the pointer, it is duplicating it. After the first line of code there are two references to the object originally referenced at blocks[x,y]. The null assignment removes the original reference, but you still have the new reference living at blocks[x,y+1]. Null that one and the heap object will be fair game for the GC. | Moving objects inside arrays | [
"",
"c#",
"arrays",
"memory",
"object",
"xna",
""
] |
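A small illustration of the reference semantics described above (a sketch; `Block` here is just an empty placeholder class):

```csharp
using System;

class Block { }

class Program
{
    static void Main()
    {
        var blocks = new Block[2, 2];
        blocks[0, 0] = new Block();

        blocks[0, 1] = blocks[0, 0]; // copies the reference, not the object
        Console.WriteLine(ReferenceEquals(blocks[0, 0], blocks[0, 1])); // True

        blocks[0, 0] = null;         // blocks[0,1] still refers to the object
        Console.WriteLine(blocks[0, 1] != null);                        // True
    }
}
```

No allocation happens in either assignment, which is why this move pattern is cheap enough for a per-frame game loop.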
Currently I am using an ObservableCollection within a WPF application, the application is an implementation of Conway's Game of life and works well for about 500 cells but after that it begins to slow down significantly. I originally wrote the application using a HashSet but couldn't find any way to bind the cells to a canvas.
Is there a way to get my HashSet to notify its binding object of changes? My Cell class is a simple integer X,Y pair, if the pair exists the cell is alive otherwise dead. The Cell implements INotifyPropertyChanged and overrides GetHashCode and Equals. I couldn't get the cell to display any changes, just the cells present immediately after loading. Is there any way to Bind a Hashset to items on a Canvas? | You have to implement [INotifyCollectionChanged](http://msdn.microsoft.com/en-us/library/system.collections.specialized.inotifycollectionchanged.aspx) too, and then it should all work OK. There's another relevant SO answer which [uses freezables](https://stackoverflow.com/questions/269073/observablecollection-that-also-monitors-changes-on-the-elements-in-collection) to ensure that changes in underlying entities are also handled. | I don't know if this will help, but here's a *really* simple implementation of an "observable set" that I made for a personal project. It essentially guards against inserting (or overwriting with) an item that is already in the collection.
If you wanted to you could simply return out of the methods rather than throwing an exception.
```
// ItemExistsException is a custom exception type; substitute e.g. ArgumentException if you prefer.
public class SetCollection<T> : ObservableCollection<T>
{
protected override void InsertItem(int index, T item)
{
if (Contains(item)) throw new ItemExistsException(item);
base.InsertItem(index, item);
}
protected override void SetItem(int index, T item)
{
int i = IndexOf(item);
if (i >= 0 && i != index) throw new ItemExistsException(item);
base.SetItem(index, item);
}
}
``` | How can I make an Observable Hashset in C#? | [
"",
"c#",
"wpf",
"data-binding",
""
] |
The method **concat()** does not modify the original value. It returns a new value,
like this:
```
String str = "good";
str.concat("ness");
System.out.println(str); //"good"
```
But some methods modify the original value. Why?
In Groovy:
```
def languages = ["Java", "Groovy", "JRuby"]
languages.reverse()
===> [JRuby, Groovy, Java]
println languages
===> [Java, Groovy, JRuby]
languages.sort()
===> [Groovy, JRuby, Java]
println languages
===> [Groovy, JRuby, Java]
``` | **[`String`](http://java.sun.com/javase/6/docs/api/java/lang/String.html) is immutable in Java.** Any method that "modifies" a `String` must return a new instance of `String`.
From the [Java API Specifications for the `String` class](http://java.sun.com/javase/6/docs/api/java/lang/String.html):
> Strings are constant; their values
> cannot be changed after they are
> created.
[The Java Language Specifications](http://java.sun.com/docs/books/jls/second_edition/html/jTOC.doc.html) defines this behavior in [Section 4.3.3: The Class String](http://java.sun.com/docs/books/jls/second_edition/html/typesValues.doc.html#26992).
---
**Response to the edit:**
It appears that an example in Groovy has been added. (I haven't used Groovy before, so my understanding of it may not be correct.)
From what I understand from looking at the example, there seems to be a `languages` list that is being `reverse`-ed and `sort`-ed -- those operations themselves do not modify the `String` objects contained in the list, but are acting upon the list itself.
Whether a list method returns a new list or modifies the list in place is not related to the behavior of the `String` objects themselves. | The Java API was designed by many different people, so it's hard to keep everything consistent. I believe people generally accept that immutability (i.e., that internal state should not change) is a good thing now, though, at least where value objects are concerned.
Another similar question would be, "why are indexes sometimes 0-based (most of the time), and sometimes 1-based (JDBC)?" Again, I believe it's another situation of the API being too broad, and developers of different APIs not coordinating (I could be wrong here though; if anyone knows the real reason for JDBC being 1-based, please let me know).
"",
"java",
"groovy",
""
] |
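Since `String` is immutable, the fix for the first example is simply to capture the returned value. A runnable illustration:

```java
public class ConcatDemo {
    public static void main(String[] args) {
        String str = "good";

        str.concat("ness");        // returns "goodness", but the result is discarded
        System.out.println(str);   // good

        str = str.concat("ness");  // reassign the variable to the new String
        System.out.println(str);   // goodness
    }
}
```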
How do I dynamically assign a name to a PHP object?
For example, how would I assign an object to a variable named after the id of the DB row I am using to create objects?
For example:
```
$<idnum>= new object();
```
where idnum is the id from my database. | This little snippet works for me:
```
$num=500;
${"id$num"} = 1234;
echo $id500;
```
Basically, just use curly brackets to surround your variable name and prepend a $. | You can use a double dollar sign to create a variable whose name is the value of another one, for example:
```
$idnum = "myVar";
$$idnum = new object(); // This is equivalent to $myVar = new object();
```
But make sure you really need to do that; your code can get really messy if you aren't careful or if you abuse this "feature"...
I think you can better use arrays or hash tables rather than polluting the global namespace with dynamically created variables. | Dynamic Naming of PHP Objects | [
"",
"php",
""
] |
I have an assignment in a language-independent class, and part of it is using templated functions like
```
T functionName(T &param1[], T &param2[]){
// do stuff
}
```
I would like to write this program in C#, but I've ran into a problem.
How can I make this work in C#:
```
T functionName(ref List<T> param1, ref List<U> param2){
// do stuff
}
```
? | If you intended param2 to be `List<U>`
```
T functionName<T,U>(ref List<T> param1, ref List<U> param2)
```
Otherwise:
```
T functionName<T>(ref List<T> param1, ref List<T> param2)
You've got two answers that cover generics... just an aside: it is unusual in C# to need `ref`, since you are already passing just the reference (`List<T>` is a reference-type object). The `ref` is needed only if you are assigning new lists inside the method and want the caller to see the re-assignment. Changes to the list(s) will already be seen without the `ref`.
"",
"c#",
"templates",
""
] |
I'm writing a GUI for an application using Swing, and in the interests of code maintenance and readability, I want to follow a consistent pattern throughout the whole system.
Most of the articles and books (or at least book sections) that I've read appear to provide plenty of examples on how to create and arrange various components, but ignore the bigger picture of writing a full GUI.
What are your best tips for application GUI design, and what patterns do you follow when designing or refactoring a GUI application? | Use layout managers. You might think it's simpler just to position everything with hard-coded positions now (especially if you use a graphical layout tool), but when it comes time to update the GUI, or internationalize it, your successors will hate you. (Trust me on this: I was the guy saying to use the layout managers from the start, and the successor to the guy who ignored me.) | Never derive from JDialog, JFrame or JInternalFrame for defining your forms, dialogs...
Rather, derive from JPanel. This will bring you the following advantages:
* the possibility to later change from a JFrame to a JDialog, for instance (because the user changed their mind)
* you can reuse one panel instance from one JDialog to another (JDialogs are generally not reusable because they are constructed with a reference to their "parent", a frame or another dialog)
* you can later replace JDialog with a more functional subclass from a 3rd-party framework. | What are your best Swing design patterns and tips? | [
"",
"java",
"user-interface",
"design-patterns",
"swing",
""
] |
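Combining both answers above: build the form as a `JPanel` using a layout manager, and let the caller decide whether it lives in a `JFrame` or a `JDialog`. A sketch (the class names are made up for illustration):

```java
import java.awt.BorderLayout;
import java.awt.GraphicsEnvironment;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.JPanel;
import javax.swing.JTextField;
import javax.swing.SwingUtilities;

// The form derives from JPanel, not JFrame/JDialog, so it can be hosted anywhere.
class NamePanel extends JPanel {
    NamePanel() {
        super(new BorderLayout(5, 5)); // layout manager, no hard-coded positions
        add(new JLabel("Name:"), BorderLayout.WEST);
        add(new JTextField(20), BorderLayout.CENTER);
    }
}

public class PanelDemo {
    public static void main(String[] args) {
        if (GraphicsEnvironment.isHeadless()) {
            System.out.println("Headless environment: skipping the frame demo.");
            return;
        }
        SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                JFrame frame = new JFrame("Demo"); // could just as easily be a JDialog
                frame.setContentPane(new NamePanel());
                frame.pack();
                frame.setVisible(true);
            }
        });
    }
}
```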
I recently started a C# project (VS 2008) as a 'Console' project where I wrote a few libraries, test programs, etc. Now I'd like to add a couple of WPF windows, but it looks like a console project won't let me do that. I'm coming from Java so this is a little strange. How can I add a WPF form (which I will instantiate myself from my "main" class)? | Are you sure you need a Console project? You can create a 'WPF Application' project and add references to your libraries, etc. If you try to show a WPF window from a console app you will get an exception due to differences in the threading model between console & WPF apps. | The accepted answer is not entirely true, I'm afraid; just add the [STAThread] attribute before your main method and make references to the right libraries (like System.Windows) and you're all set to add WPF windows.
EDIT: in the comments @JamesWilkins supplied me with this useful link: <http://code-phix.blogspot.be/2013/11/creating-wpf-project-from-scratch.html>
"",
"c#",
"wpf",
"console",
""
] |
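A minimal sketch of the second answer's approach: keep the console project, mark the entry point `[STAThread]`, and add references to the WPF assemblies (PresentationFramework, PresentationCore, WindowsBase):

```csharp
using System;
using System.Windows;

class Program
{
    [STAThread] // WPF requires a single-threaded apartment
    static void Main()
    {
        Console.WriteLine("Console code runs first...");

        var window = new Window { Title = "Hello from WPF", Width = 300, Height = 200 };
        new Application().Run(window); // blocks until the window closes
    }
}
```

Note `Application.Run` can only be called once per process, so create the `Application` the first time you need a window.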
How can I get the TabControl to place the tabs at the bottom of the control and not at the top? | Open the Properties window, go to the **Alignment** property, and set it to **Bottom**. | 6 years later, it is now:
```
<TabControl TabStripPlacement="Bottom"/>
``` | tabs at the bottom of TabControl | [
"",
"c#",
"winforms",
""
] |
I've been using Hibernate for a few years but have only used it with annotations, and by setting the connection parameters in my code.
Am I "missing something" by not using the XML files? Are there important capabilities available only in XML? Are there situations or patterns where it makes sense to use the XML? | I think it's pretty safe to say that you're not missing out on anything.
If there are any capabilities in XML that can't be represented in attributes (and I believe there are some rare cases) then you still have the option to use [RawXml] and write the XML in the attribute. So you can't possibly miss out on any functionality.
It might make sense to use XML if you have enough programmers in your team who simply prefer to manage separate files or if there is a genuine need to edit xml mappings on the fly. Xml mapping files are probably easier to manipulate for very complex mapping and they can contain useful information (comments on fetching strategies etc).
There is also the issue of architecture, where some people argue that separating the mapping into XML files provides a better separation between business-related code and instructions on how it is persisted. | Annotations are a great way to quickly configure Hibernate. But one of the key features of OR mappers like Hibernate is that you keep your domain model "persistence ignorant" your objects don't need to know anything about where and how they are persisted. Some may argue that using annotations breaks that. And in situations where persistence is just one of many concerns it makes sense to keep things separate.
Another reason can be that domain objects are persisted differently in different situations, you can have multiple databases used by one application or even more applications that use the same domain model. | Is there a good reason to configure hibernate with XML rather than via annotations? | [
"",
"java",
"xml",
"hibernate",
"jpa",
"annotations",
""
] |
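For comparison, the separate-file style discussed above looks like this for a trivial, hypothetical `Person` entity (a `.hbm.xml` mapping file, kept apart from the Java class):

```xml
<hibernate-mapping package="com.example">
  <class name="Person" table="person">
    <id name="id" type="long">
      <generator class="native"/>
    </id>
    <property name="name" type="string"/>
  </class>
</hibernate-mapping>
```

The class itself then carries no persistence imports at all, which is the "persistence ignorant" argument in concrete form.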
Is there another way to get the last child of a div element other than using `div:last-child`, which does not work in IE8 at least? | In jQuery it's easy:
```
$("div :last-child")
```
In pure CSS it can't be done.
Otherwise you're stuck with traversing the DOM in Javascript.
Also note the difference:
* `"div :last-child"`: every last child of a div; and
* `"div:last-child"`: every div that **is** a last child. | Unfortunately, I know not a way to do this in CSS without using :last-child, which (as you stated) fails in IE.
If you're talking about javascript, then it's possible using Node.lastChild:
```
var div = document.getElementById("mydiv");
var lastChild = div.lastChild; // note: this may be a whitespace text node rather than an element
``` | Getting last-child of a div in IE8? | [
"",
"javascript",
"jquery",
"css",
"internet-explorer-8",
"css-selectors",
""
] |
When should someone use structs instead of classes or vice versa in C++? I find myself using structs when a full-blown class managing some information seems like overkill but want to indicate the information being contained are all related. I was wondering what are some good guidelines to be able to tell when one is more appropriate than the other?
Edit:
Found these links while reading the material Stack Overflow indicated was related after the question was submitted:
[When should you use a class vs a struct in C++?](https://stackoverflow.com/questions/54585/when-should-you-use-a-class-vs-a-struct-in-c)
[What are the differences between struct and class in C++?](https://stackoverflow.com/questions/92859/what-are-the-differences-between-struct-and-class-in-c) | Technically, the only difference between the two is the default access: a struct's members and base classes are `public` by default, while a class's are `private`.
Other than that, there is no technical difference.
struct vs class then becomes a purely expressive nuance of the language.
Usually, you avoid putting complicated methods in a struct, and most of the time structs data members will stay public. In a class you want to enforce strong encapsulation.
struct = data is public, with very simple helper methods
class = strongly encapsulated, data is modified / accessed only through methods | I use structs for simple containers of types that provide no constructors or operators.
Classes for everything else. | Structs vs classes in C++ | [
"",
"c++",
"class",
""
] |
Is there a way that I can create a function that takes an `int` template parameter, and have that function give a compile time error if the value passed to the function is less than 10?
The following code does not work, but it shows what I want to accomplish:
```
template <int number1>
void reportErrorIfLessThan10()
{
#if(number1 < 10)
#error the number is less than 10
#endif
}
int main(int argc, char** argv)
{
reportErrorIfLessThan10<5>();//report an error!
reportErrorIfLessThan10<12>();//ok
return 0;
}
``` | If you don't want [Boost C++ Libraries](http://en.wikipedia.org/wiki/Boost_C%2B%2B_Libraries) magic and want bare bones...
```
template<bool> class static_check
{
};
template<> class static_check<false>
{
private: static_check();
};
#define StaticAssert(test) static_check<(test) != 0>()
```
Then use StaticAssert. It's a #define for me because I have code that needs to run in a lot of environments where C++ doesn't work right for templates and I need to just back it off to a runtime assert. :(
Also, not the best error messages. | ```
template <int number1>
typename boost::enable_if_c< (number1 >= 10) >::type
reportErrorIfLessThan10() {
// ...
}
```
The above `enable_if`, without the \_c because we have a plain bool, looks like this:
```
template<bool C, typename T = void>
struct enable_if {
typedef T type;
};
template<typename T>
struct enable_if<false, T> { };
```
[Boost](http://en.wikipedia.org/wiki/Boost_C%2B%2B_Libraries)'s `enable_if` does not take a plain bool, so they have another version, with `_c` appended, that does. You won't be able to call it for `number1` < 10: *SFINAE* will exclude that template as a possible candidate, because `enable_if` will not expose a type `::type` if the condition evaluates to `false`. If you want, for some reason, to test it in the function, then if you have the [C++1x](http://en.wikipedia.org/wiki/C%2B%2B0x) feature available, you can use `static_assert`:
```
template <int number1>
void reportErrorIfLessThan10() {
static_assert(number1 >= 10, "number1 must be >= 10");
}
```
If not, you can use BOOST\_STATIC\_ASSERT:
```
template <int number1>
void reportErrorIfLessThan10() {
BOOST_STATIC_ASSERT(number1 >= 10);
}
```
The only way to display a descriptive message is using static\_assert, though. You can more or less simulate that, using types having names that describe the error condition:
```
namespace detail {
/* chooses type A if cond == true, chooses type B if cond == false */
template <bool cond, typename A, typename B>
struct Condition {
typedef A type;
};
template <typename A, typename B>
struct Condition<false, A, B> {
typedef B type;
};
struct number1_greater_than_10;
}
template <int number1>
void reportErrorIfLessThan10() {
// number1 must be greater than 10
sizeof( typename detail::Condition< (number1 >= 10),
char,
detail::number1_greater_than_10
>::type );
}
```
It prints this here:
> error: invalid application of 'sizeof' to incomplete type 'detail::number1\_greater\_than\_10'
But I think the very first approach, using `enable_if` will do it. You will get an error message about an undeclared `reportErrorIfLessThan10`. | C++ metaprogramming - generating errors in code | [
"",
"c++",
"templates",
"metaprogramming",
""
] |
I'm trying to extract the domain name from a string in C#. You don't necessarily have to use a RegEx but we should be able to extract `yourdomain.com` from all of the following:
```
yourdomain.com
www.yourdomain.com
http://www.yourdomain.com
http://www.yourdomain.com/
store.yourdomain.com
http://store.yourdomain.com
whatever.youdomain.com
*.yourdomain.com
```
Also, any TLD is acceptable, so replace all the above with `.net`, `.org`, `'co'uk`, etc. | 1. If no scheme present (no colon in string), prepend "http://" to make it a valid URL.
2. Pass string to [Uri constructor](http://msdn.microsoft.com/en-us/library/z6c2z492.aspx).
3. Access the Uri's [Host property](http://msdn.microsoft.com/en-us/library/system.uri.host.aspx).
Now you have the hostname. What exactly you consider the ‘domain name’ of a given hostname is a debatable point. I'm guessing you don't simply mean everything after the first dot.
It's not possible to distinguish hostnames like ‘whatever.youdomain.com’ from domains-in-an-SLD like ‘warwick.ac.uk’ from just the strings. Indeed, there is even a bit of grey area about what is and isn't a public SLD, given the efforts of some registrars to carve out their own niches.
A common approach is to maintain a big list of SLDs and other suffixes used by unrelated entities. This is what web browsers do to stop unwanted public cookie sharing. Once you've found a public suffix, you could add the one nearest prefix in the host name split by dots to get the highest-level entity responsible for the given hostname, if that's what you want. Suffix lists are hell to maintain, but you can piggy-back on [someone else's efforts](http://publicsuffix.org/).
Alternatively, if your app has the time and network connection to do it, it could start sniffing for information on the hostname. eg. it could do a whois query for the hostname, and keep looking at each parent until it got a result and that would be the domain name of the lowest-level entity responsible for the given hostname.
Or, if all that's too much work, you could try just chopping off any leading ‘www.’ present! | I would recommend trying this yourself. Using regulator and a regex cheat sheet.
<http://sourceforge.net/projects/regulator/>
<http://regexlib.com/CheatSheet.aspx>
Also find some good info on Regular Expressions at [coding horror](http://www.codinghorror.com/blog/archives/001016.html). | Regular expression to extract domain name from any domain | [
"",
"c#",
"regex",
""
] |
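The three steps above, sketched in C# (the trailing `www.`-stripping is the naive option the answer mentions at the end; for anything smarter you need a public-suffix list):

```csharp
using System;

class Program
{
    static string GetHost(string input)
    {
        // 1. Prepend a scheme if none is present (no colon in the string).
        if (!input.Contains(":"))
            input = "http://" + input;

        // 2-3. Let the Uri class do the parsing, then read Host.
        string host = new Uri(input).Host;

        // Naive extra step: chop off a leading "www." if present.
        if (host.StartsWith("www.", StringComparison.OrdinalIgnoreCase))
            host = host.Substring(4);

        return host;
    }

    static void Main()
    {
        Console.WriteLine(GetHost("http://www.yourdomain.com/")); // yourdomain.com
        Console.WriteLine(GetHost("store.yourdomain.com"));       // store.yourdomain.com
    }
}
```

Note the wildcard form `*.yourdomain.com` is not a valid host name and would make the `Uri` constructor throw, so inputs like that need pre-cleaning.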
Are there any in-place editors in JS that support rich text, like TinyMCE / FCKEditor?
e.g. of in-place editing (non-rich text): <http://www.appelsiini.net/projects/jeditable/default.html> | [jEditable](http://www.appelsiini.net/2008/5/jeditable-and-tinymce) and [TinyMCE](http://sam.curren.ws/index.cfm/2008/6/12/jEditable-TinyMCE-Plugin) together:
[Google](http://www.google.com.au/search?q=jeditable+tinymce&sourceid=navclient-ff&ie=UTF-8&rlz=1B3GGGL_enAU216AU217&aq=t) is your friend! | [NicEdit](http://www.nicedit.com/) is what you want I think, there are some demos available on the website. | Any In-place Javascript Rich Text Editor available? | [
"",
"javascript",
"jquery",
"text-editor",
""
] |
By default, ASP.NET records all uncaught exceptions to the system Event Log. I'm aware of the fact that one should have a proper logging facility in place, but this is better than nothing and serves well as a temporary solution.
I would like to be able to filter the events in the log efficiently. I learned that, when logging programmatically, you can set a custom value for the Source column in the event log via:
```
EventLog eventLog = new EventLog("Application");
eventLog.Source = "My custom name";
eventLog.WriteEntry("Some error description ...", EventLogEntryType.Error);
```
However, ASP.NET sets this value to "ASP.NET" followed by its version. I briefly checked the documentation of web.config, but did not find an obvious place to change it. I wonder if it can be changed at all. | Your best bet is to use the source property as intended, but use an installer class in your installer to set up the registry at install time (under Admin), eg.:
```
using System;
using System.Collections;
using System.Collections.Generic;
using System.ComponentModel;
using System.Configuration.Install;
using System.Diagnostics;
namespace InstallerClasses
{
[RunInstaller(true)]
public partial class EventLog : Installer
{
private EventLogInstaller eventLogInstaller;
/// <summary>
/// Creates the event log for MyApp
/// </summary>
public EventLog()
{
InitializeComponent();
// Create an instance of an EventLogInstaller.
eventLogInstaller = new EventLogInstaller();
// Set the source name of the event log.
eventLogInstaller.Source = "MySource";
// Set the event log that the source writes entries to.
eventLogInstaller.Log = "Application";
// Add myEventLogInstaller to the Installer collection.
Installers.Add(eventLogInstaller);
}
}
}
```
And make sure it gets run as a Custom Action in your installer. | It seems that using the source property is not such a good idea. Originally, I thought that it was a free form text. But I just found out that it’s something which has to be registered via the RegisterEventSource(...) Win32 API function, and this appears to work only when the application runs under the admin privileges. .NET silently creates a new source for you, but if you are not an admin, it throws an exception. So overall, using an ad-hoc source name inside an ASP.NET may require some pre-registration which would introduce another step into deployment. | Custom value for the Event Log source property for ASP.NET errors | [
"",
"c#",
"asp.net",
"logging",
"event-log",
""
] |
When my script sleeps for 50sec my IDE locks up, which is very annoying. I can't switch tabs, look through my source, type code, etc. It happens in pylde and PyScripter; I haven't tried other IDEs. What can I do to fix this? I'm actually doing
```
for i in range(0, timeInSeconds): time.sleep(1)
```
hoping the IDE will update once per second, but it doesn't look that way. What can I do to fix this? | I'm assuming you are running your code from within the IDE?
Your IDE is probably blocking while running your code. Look for a setting of some sort which might control that behaviour, otherwise I think your only choice would be to change IDE. (Or, run your code from outside the IDE) | Can you configure to run your script externally? I don't know about the specific IDEs, but I would try to spawn a different process for the debugged script and not run them under the IDE. If that doesn't help, then it is a problem of the IDEs. | python sleep == IDE lock up | [
"",
"python",
"ide",
"lockup",
""
] |
I'm actually playing around with the [last.FM](http://www.lastfm.de/api) web service API, which I call via REST. When I get the response I try to convert the result into an XDocument so I can use LINQ to work with it.
But when I pass the result string to the XDocument constructor, an ArgumentException is thrown telling me that "Non white space characters cannot be added to content.". Unfortunately I'm very new to web services and XML programming, so I don't really know how to interpret this exception.
I hope someone could give me a hint how to solve this problem. | It sounds to me as though you are holding the response in a string. If that is the case, you can try to use the Parse method on XDocument which is for parsing XML out of a string.
```
string myResult = "<?xml blahblahblah>";
XDocument doc = XDocument.Parse(myResult);
```
This may or may not solve your problem. Just a suggestion that is worth a try to see if you get a different result. | Here's a sample you can use to query the service:
```
class Program
{
static void Main(string[] args)
{
using (WebClient client = new WebClient())
using (Stream stream = client.OpenRead("http://ws.audioscrobbler.com/2.0/?method=album.getinfo&api_key=b25b959554ed76058ac220b7b2e0a026&artist=Cher&album=Believe"))
using (TextReader reader = new StreamReader(stream))
{
XDocument xdoc = XDocument.Load(reader);
var summaries = from element in xdoc.Descendants()
where element.Name == "summary"
select element;
foreach (var summary in summaries)
{
Console.WriteLine(summary.Value);
}
}
}
}
``` | Problems transforming a REST response to a XDocument | [
"",
"c#",
"xml",
"rest",
""
] |
Let's say I am developing a chat application: first you come to a login window, and once you're logged in I want to use the same window but change the control. :P What would be the best way to design this?
Is there any good way to implement this? What root element should I use?
Thanks a lot!! | Take a look at Josh Smith's article in MSDN magazine (<http://msdn.microsoft.com/en-us/magazine/dd419663.aspx>). He describes an interesting method where you have a content presenter on your main window use data templates to switch out what the window is showing. | If you want to do this all within the same window, you could use a Grid as the root element and host a login element (possibly another grid for layout) and the chat window. These elements would stack on top of one another, depending upon the order in which you declare them. To hide the chat element initially, set its Visibility to `Collapsed`
You could then have the login element's Visibility set to `Collapsed` when the user submits their login details, and have the chat element's Visibility set to `Visible`.
I did something similar once and it worked well for me.
Hope that helps.
**EDIT** I knocked this together in Kaxaml for you to play with (and because I like playing with XAML):
```
<Page
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
<Grid VerticalAlignment="Stretch" HorizontalAlignment="Stretch">
<Border x:Name="_loginForm" BorderBrush="#888" BorderThickness="3" CornerRadius="5"
HorizontalAlignment="Center" VerticalAlignment="Center" Padding="10" Visibility="Visible">
<Grid>
<Grid.RowDefinitions>
<RowDefinition/>
<RowDefinition/>
<RowDefinition/>
<RowDefinition/>
</Grid.RowDefinitions>
<Grid.ColumnDefinitions>
<ColumnDefinition Width="100"/>
<ColumnDefinition Width="100"/>
</Grid.ColumnDefinitions>
<TextBlock Grid.Row="0" Grid.Column="0" Grid.ColumnSpan="2" HorizontalAlignment="Center" Height="30">Welcome to chat</TextBlock>
<TextBlock Grid.Row="1" Grid.Column="0">User Name</TextBlock>
<TextBox Grid.Row="1" Grid.Column="1" x:Name="_userName" />
<TextBlock Grid.Row="2" Grid.Column="0">Password</TextBlock>
<TextBox Grid.Row="2" Grid.Column="1" x:Name="_password"></TextBox>
<Button Grid.Row="3" Grid.Column="1">Log In</Button>
</Grid>
</Border>
<DockPanel x:Name="_chatForm" HorizontalAlignment="Stretch" VerticalAlignment="Stretch" LastChildFill="True" Visibility="Collapsed">
<DockPanel DockPanel.Dock="Bottom" LastChildFill="True" Height="70">
<Button DockPanel.Dock="Right" Width="70">_Send</Button>
<TextBox x:Name="_input" HorizontalAlignment="Stretch">Hello world</TextBox>
</DockPanel>
<ListBox x:Name="_messageHistory" />
</DockPanel>
</Grid>
</Page>
```
Initially the element `_loginForm` is visible. You'd attach a handler for the Log In button's Click event that would hide it, and show the `_chatForm` instead.
This example shows usage of several layout controls -- the Grid, Border and DockPanel.
"",
"c#",
".net",
"wpf",
"layout",
""
] |
Is there any way to verify if a Session has been disposed of by NHibernate?
I have a wrapper class on Session that has its own finalizer and IDisposable implementation; however, if the Session gets disposed before I handle it myself in my class, I end up receiving an ObjectDisposedException.
I really don't wish to wrap my clean up code with
```
try {
...
}
catch (ObjectDisposedException) { }
```
But I'm not really sure of any other way. The Session.IsOpen and Session.IsActive properties do not seem to offer any reliable information for me to acknowledge the session has been disposed of.
For full source you can view it on [Assembla](http://code.assembla.com/MarisicDotNet/subversion/nodes/StructuredWeb/StructuredWeb/Repository/NHibernateDataAccess/Conversation/Conversation.cs). | Ok, just took a peek at your code.
I don't know if this is exactly the issue, but you are calling End() from the conversation dispose method,
which in turn tries to reconnect and disposes the session.
If you have explicitly called End() before this, you will get what you get; avoid that call.
I think you shouldn't worry about rolling back the transaction before the session dispose as this is implicitly done.
I have just taken a quick look, but I think I really like your implementation. | I always thought best practice with NHibernate was "session per request", meaning that it should only live inside the 'using' scope.
```
using(Session session = new Session())
{
}
```
I'd suggest trying to prevent two people from disposing the session/conversation. If you control the creation of sessions, you could wrap it in your own ISession impl that performs its own IsAlreadyDisposed() check to prevent the exception. Still, considering that effort vs an "expected exception", the original code doesn't look so bad.
I'd also suggest watching out with your finaliser implementation. The "Session.Is().InTransaction()" goes Session->Transaction and the session might be null by the time the finaliser gets round to it. Navigating a managed relationship at finaliser time isn't guaranteed to work. | NHibernate Session ObjectDisposedException | [
"",
"c#",
"nhibernate",
"objectdisposedexception",
""
] |
I want to update a cell in a spreadsheet that is used by a chart, using the Open XML SDK 2.0 (CTP). All the code samples I have found insert new cells. I am struggling with retrieving the right worksheet.
```
public static void InsertText(string docName, string text, uint rowIndex,
string columnName)
{
// Open the document for editing.
using (SpreadsheetDocument spreadSheet =
SpreadsheetDocument.Open(docName, true))
{
Workbook workBook = spreadSheet.WorkbookPart.Workbook;
WorksheetPart worksheetPart = workBook.WorkbookPart.
WorksheetParts.First();
SheetData sheetData = worksheetPart.Worksheet.
GetFirstChild<SheetData>();
// If the worksheet does not contain a row with the specified
// row index, insert one.
Row row;
if (sheetData.Elements<Row>().Where(
r => r.RowIndex == rowIndex).Count() != 0)
// At this point I am expecting a match for a row that exists
// in sheet1 but I am not getting one
```
When I navigate the tree in Visual Studio, I am seeing three sheets, but none of them has any children. What am I missing? | Here is the working code. This is a prototype. For a larger number of changes, one might open the document only once. Also, there are some hard-coded things like sheet name and cell type that would have to be parameterized before this can be called production-ready.
<http://openxmldeveloper.org/forums/4005/ShowThread.aspx> was very helpful.
```
using System;
using System.Collections.Generic;
using System.Data;
using System.Linq;
using System.Text;
using DocumentFormat.OpenXml;
using DocumentFormat.OpenXml.Packaging;
using DocumentFormat.OpenXml.Spreadsheet;
using System.Xml;
using System.IO;
using System.Diagnostics;
namespace OpenXMLWindowsApp
{
public class OpenXMLWindowsApp
{
public void UpdateSheet()
{
UpdateCell("Chart.xlsx", "20", 2, "B");
UpdateCell("Chart.xlsx", "80", 3, "B");
UpdateCell("Chart.xlsx", "80", 2, "C");
UpdateCell("Chart.xlsx", "20", 3, "C");
ProcessStartInfo startInfo = new ProcessStartInfo("Chart.xlsx");
startInfo.WindowStyle = ProcessWindowStyle.Normal;
Process.Start(startInfo);
}
public static void UpdateCell(string docName, string text,
uint rowIndex, string columnName)
{
// Open the document for editing.
using (SpreadsheetDocument spreadSheet =
SpreadsheetDocument.Open(docName, true))
{
WorksheetPart worksheetPart =
GetWorksheetPartByName(spreadSheet, "Sheet1");
if (worksheetPart != null)
{
Cell cell = GetCell(worksheetPart.Worksheet,
columnName, rowIndex);
cell.CellValue = new CellValue(text);
cell.DataType =
new EnumValue<CellValues>(CellValues.Number);
// Save the worksheet.
worksheetPart.Worksheet.Save();
}
}
}
private static WorksheetPart
GetWorksheetPartByName(SpreadsheetDocument document,
string sheetName)
{
IEnumerable<Sheet> sheets =
document.WorkbookPart.Workbook.GetFirstChild<Sheets>().
Elements<Sheet>().Where(s => s.Name == sheetName);
if (sheets.Count() == 0)
{
// The specified worksheet does not exist.
return null;
}
string relationshipId = sheets.First().Id.Value;
WorksheetPart worksheetPart = (WorksheetPart)
document.WorkbookPart.GetPartById(relationshipId);
return worksheetPart;
}
// Given a worksheet, a column name, and a row index,
        // gets the cell at the specified column and row.
private static Cell GetCell(Worksheet worksheet,
string columnName, uint rowIndex)
{
Row row = GetRow(worksheet, rowIndex);
if (row == null)
return null;
return row.Elements<Cell>().Where(c => string.Compare
(c.CellReference.Value, columnName +
rowIndex, true) == 0).First();
}
// Given a worksheet and a row index, return the row.
private static Row GetRow(Worksheet worksheet, uint rowIndex)
{
return worksheet.GetFirstChild<SheetData>().
Elements<Row>().Where(r => r.RowIndex == rowIndex).First();
}
}
}
``` | I've been working with excel and found this helper library to be of great help (I've created my own helpers for word, would have saved at least 2 weeks if I was aware of this):
<https://www.nuget.org/packages/SimpleOOXML/>
This is what is needed to update cell (writer.PasteText(...)):
```
MemoryStream stream = SpreadsheetReader.Create();
SpreadsheetDocument doc = SpreadsheetDocument.Open(stream, true);
WorksheetPart worksheetPart = SpreadsheetReader.GetWorksheetPartByName(doc, "Sheet1");
WorksheetWriter writer = new WorksheetWriter(doc, worksheetPart);
writer.PasteText("B2", "Hello World");
//Save to the memory stream
SpreadsheetWriter.Save(doc);
byte[] result = stream.ToArray();
FileStream file = new FileStream(@"D:\x1.xlsx", FileMode.Create);
file.Write(result, 0, result.Length);
file.Close();
``` | Open XML SDK 2.0 - how to update a cell in a spreadsheet? | [
"",
"c#",
"xml",
"excel",
"sdk",
""
] |
I have a javascript file that reads another file which may contain javascript fragments that need to be eval()-ed. The script fragments are supposed to conform to a strict subset of javascript that limits what they can do and which variables they can change, but I want to know if there is some way to enforce this by preventing the eval from seeing variables in the global scope. Something like the following:
```
function safeEval( fragment )
{
var localVariable = g_Variable;
{
// do magic scoping here so that the eval fragment can see localVariable
// but not g_Variable or anything else outside function scope
eval( fragment );
}
}
```
The actual code doesn't need to look like this--I'm open to any and all weird tricks with closures, etc. But I do want to know if this is even *possible*. | **Short answer**: No. If it's in the global scope, it's available to anything.
**Long answer**: if you're `eval()`ing untrusted code that *really* wants to read or mess with your execution environment, you're screwed. But if you own and trust all code being executed, including that being `eval()`ed, you can fake it by overriding the execution context:
```
function maskedEval(scr)
{
// set up an object to serve as the context for the code
// being evaluated.
var mask = {};
// mask global properties
    for (var p in this)
mask[p] = undefined;
// execute script in private context
(new Function( "with(this) { " + scr + "}")).call(mask);
}
```
Again, I must stress:
> This will only serve to shield **trusted** code from the context in which it is executed. If you don't trust the code, DO NOT `eval()` it (or pass it to new `Function()`, or use it in any other way that behaves like `eval()`). | Shog9♦'s Answer is great. But if your code is just an expression, the code will be executed and nothing will be returned. For expressions, use
```
function evalInContext(context, js) {
return eval('with(context) { ' + js + ' }');
}
```
Here is how to use it:
```
var obj = {key: true};
evalInContext(obj, 'key ? "YES" : "NO"');
```
It will return `"YES"`.
If you are not sure if the code to be executed is expressions or statements, you can combine them:
```
function evalInContext(context, js) {
var value;
try {
// for expressions
value = eval('with(context) { ' + js + ' }');
} catch (e) {
if (e instanceof SyntaxError) {
try {
// for statements
value = (new Function('with(this) { ' + js + ' }')).call(context);
} catch (e) {}
}
}
return value;
}
``` | Restricting eval() to a narrow scope | [
"",
"javascript",
"eval",
""
] |
Why doesn't code like the following catch CTRL-C?
```
MAXVAL = 10000
STEP_INTERVAL = 10
for i in range(1, MAXVAL, STEP_INTERVAL):
try:
print str(i)
except KeyboardInterrupt:
break
print "done"
```
My expectation is -- if CTRL-C is pressed while the program is running, `KeyboardInterrupt` is supposed to leave the loop. It does not.
Any help on what I'm doing wrong? | Sounds like the program is done by the time control-c has been hit, but your operating system hasn't finished showing you all the output. | Code flow is as follows:
1. `for` grabs new object from list (generated by `range`) and sets `i` to it
2. `try`
3. `print`
4. go back to `1`
If you hit CTRL-C during step 1, it is outside the `try`/`except`, so it won't catch the exception.
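You can see this concretely with a small, self-contained simulation (the helper names below are made up for illustration; a KeyboardInterrupt is raised from the iterator to stand in for CTRL-C arriving during step 1):

```python
def per_iteration_try(n, interrupt_at):
    """Mimic the question's pattern: try/except *inside* the loop body.
    A KeyboardInterrupt raised while the for statement fetches the next
    item bypasses the inner except clause entirely."""
    def counter():
        for i in range(n):
            if i == interrupt_at:
                raise KeyboardInterrupt  # stand-in for CTRL-C between iterations
            yield i

    seen = []
    try:
        for i in counter():
            try:
                seen.append(i)
            except KeyboardInterrupt:
                seen.append("caught by inner except")  # never reached here
    except KeyboardInterrupt:
        seen.append("escaped to outer handler")
    return seen

print(per_iteration_try(5, 3))  # → [0, 1, 2, 'escaped to outer handler']
```

Only the outer try, which wraps the whole loop, ever sees the interrupt.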
Try this instead:
```
MaxVal = 10000
StepInterval = 10
try:
for i in range(1, MaxVal, StepInterval):
print i
except KeyboardInterrupt:
pass
print "done"
``` | Why is KeyboardInterrupt not working in python? | [
"",
"python",
""
] |
Is there a way to convert a `List(of Object)` to a `List(of String)` in c# or vb.net without iterating through all the items? (Behind the scenes iteration is fine – I just want concise code)
**Update:**
The best way is probably just to do a new select
```
myList.Select(Function(i) i.ToString()).ToList()
```
or
```
myList.Select(i => i.ToString()).ToList();
``` | Not possible without iterating to build a new list. You can wrap the list in a container that implements IList.
You can use LINQ to get a lazy evaluated version of `IEnumerable<string>` from an object list like this:
```
var stringList = myList.OfType<string>();
``` | This works for all types.
```
List<object> objects = new List<object>();
List<string> strings = objects.Select(s => (string)s).ToList();
``` | Convert List(of object) to List(of string) | [
"",
"c#",
"vb.net",
"generics",
""
] |
I need to write a custom "UrlRewriter" using an HttpModule. At the moment of "rewriting" I need access to the Session, and I have followed the advice from another SO thread:
[Can I access session state from an HTTPModule?](https://stackoverflow.com/questions/276355/can-i-access-session-state-from-an-httpmodule)
Everything works, except the RewritePath/Redirect part. I don't get any exceptions, but the browser takes forever to load. Is this really the best way to build a urlrewriter like this?
```
using System;
using System.Web;
using System.Web.SessionState;
using System.Diagnostics;
namespace MyCompany.Campaigns
{
public class CampaignRewriteModule : IHttpModule
{
public void Init(HttpApplication application)
{
application.PostAcquireRequestState += new EventHandler(Application_PostAcquireRequestState);
application.PostMapRequestHandler += new EventHandler(Application_PostMapRequestHandler);
}
void Application_PostMapRequestHandler(object source, EventArgs e)
{
HttpApplication app = (HttpApplication)source;
if (app.Context.Handler is IReadOnlySessionState || app.Context.Handler is IRequiresSessionState)
{
return;
}
app.Context.Handler = new MyHttpHandler(app.Context.Handler);
}
void Application_PostAcquireRequestState(object source, EventArgs e)
{
HttpApplication app = (HttpApplication)source;
MyHttpHandler resourceHttpHandler = HttpContext.Current.Handler as MyHttpHandler;
if (resourceHttpHandler != null)
{
HttpContext.Current.Handler = resourceHttpHandler.OriginalHandler;
}
Debug.Assert(app.Session != null);
string path = HttpUtils.Path();
if (!CampaignCodeMethods.IsValidCampaignCode(path)) return;
string domain = HttpUtils.Domain();
CampaignCode code = CampaignManager.RegisterCode(path, domain.Equals(Config.Instance.Domain.ToLower()) ? null : domain);
if (code != null)
{
//app.Context.RewritePath(code.CampaignCodePath.Path, false);
app.Context.Response.Redirect(code.CampaignCodePath.Path, true);
}
}
public void Dispose() { }
public class MyHttpHandler : IHttpHandler, IRequiresSessionState
{
internal readonly IHttpHandler OriginalHandler;
public MyHttpHandler(IHttpHandler originalHandler)
{
OriginalHandler = originalHandler;
}
public void ProcessRequest(HttpContext context)
{
throw new InvalidOperationException("MyHttpHandler cannot process requests.");
}
public bool IsReusable
{
get { return false; }
}
}
}
}
``` | I think I know what it is. Your module is executed on ALL requests and assigns a handler that throws an error unless there is a valid campaign code (where a rewrite/redirect occurs).
But because this is not just for your "handler campaign code" url it is causing an error to be thrown, which is causing you to be redirected to your error page, which is being caught by the module, which is assigning the handler, which is throwing an error, which is redirecting... I think you get where I'm going ;)
Otherwise I'd try a few things:
* Setup Fiddler and check for an infinite redirect loop
* Put a breakpoint on app.Context.Response.Redirect - make sure you're not in an infinite loop
* Put a breakpoint on MyHttpHandler.ProcessRequest - make sure it's not being called and the exception swallowed | I wrote a simple URL rewriter module that did something similar. The url rewriting is done in BeginRequest by comparing the requested url to a list of known urls. If we find a mach we use HttpContext.RewritePath to change the requested url.
This appears to work well with no serious side effects.
I notice that you use Response.Redirect instead of Context.RewritePath. Using Redirect will cause the user's browser to request a new page with the new url. Is this really what you want? The user will then see the new url in his browser. If this really is what you want, you could use an alternative approach where you use a custom 404 page not found error handler to redirect the user to the appropriate page.
If you set up IIS to redirect all 404 errors to a custom page, say Custom404.aspx, you can check the requested url in that page to see if it should be rewritten. If it should, you can simply set the Response.Status to "301 Moved Permanently" and write a header with the name "Location" and the new url as the value. If the url should not be rewritten, you can just output the standard 404 page not found error.
This last approach works well, but as with your Response.Redirect approach the user will see the new url in his browser. Using Context.RewritePath allows you to serve a different page than the one requested. | UrlRewriter+HttpModule+Session problem | [
"",
"c#",
"asp.net",
""
] |
I am working on a project for my company, and I need to integrate some graphs of different types and average complexity to C# in the process of studying stock markets. I found this free library on the Internet, [ZedGraph](http://zedgraph.org/wiki/index.php?title=Main_Page). If you came across it, do you recommend using it? And how well is it supported? | I can recommend ZedGraph. I have been using it with great
success for several years in [MSQuant](http://msquant.sourceforge.net), for most plots: mass
spectrum display, recalibration error plots, LC peak plots,
quantitation profiles and others.
Here are some screen-shots from MSQuant where ZedGraph has
been used:
1. [Scatter plot, with trendline](https://i.stack.imgur.com/bvUae.png)
2. [X-Y plot with the actual data points shown, line connection data points](https://i.stack.imgur.com/xyznI.png)
3. [Sticks plot, with overlayed annotation (`TextBox`es, in fact)](https://i.stack.imgur.com/6CY5H.png)
4. [Several plots in the same window, types as in 2. and 3.](https://i.stack.imgur.com/HG6Wb.png) (the two plots in the bottom half)
5. [Closer look at type 2.](https://i.stack.imgur.com/xtFna.png)
6. [Collage, type 2. and code in Visual Studio](https://i.stack.imgur.com/b94Zj.png)
The source code that is behind the first plot can be found in *[Source code for MSQuant: frmRecalibrationVisualisation.vb, MSQuant/msquant/src/GUI/forms/frmRecalibrationVisualisation.vb.](http://pmortensen.eu/1/MSQuant/MSQsrcWWW,1.5,2008-12-19/frmRecalibrationVisualisation.vb.html)*.
In contrast to many other charting libraries, ZedGraph can
also be used for scientific/math oriented plots/charts (for example,
scatter plots) and not only for business-type plots/charts.
Stock market applications may also need scatter plots.
In ZedGraph, there is built-in support for the user to zoom
in (infinite) and zoom out, pan (drag while holding down the
`Ctrl` key), save the plot to a file or copy it to the
clipboard.
There is one thing I am missing in ZedGraph: the ability for
the user to select items in the plot in order to perform
some action on those selected items (for example, computing some
number, accepting them as verified or marking them as
outliers to the application program).
Don't be put off by the state of ZedGraph's development.
ZedGraph is mature, is of very high quality, and can be used
as-is. There is supposed to be a new team behind its further
development. | ZedGraph does not appear to be supported by the original developers anymore. However, you can find it as part of other projects where updates have been made.
For example, per [this discussion](http://sourceforge.net/projects/zedgraph/forums/forum/392231/topic/3487337) on a ZedGraph project discussion list:
> So I highly optimized ZedGraph for all
> the curves, and objects. Basically, I
> optimized how it uses GDI and
> specifically made it only draw objects
> that will fit in the chart.
>
> So it scrolls and zooms now extremely
> efficiently even if I have many
> millions of objects on the chart. Plus
> it users nearly zero CPU when it's
> running in real-time as slower speeds
> for tracking financial charts.
>
> I fixed a few defects also.
You can find a fork of the repo with the changes [here](https://github.com/jspraul/TickZoomPublic/tree/integrate/Platform/ZedGraph). | Can the ZedGraph charting library for .NET be recommended? | [
"",
"c#",
"graphics",
"charts",
"zedgraph",
""
] |
This is good:
```
import string
string.capwords("proper name")
Out: 'Proper Name'
```
This is not so good:
```
string.capwords("I.R.S")
Out: 'I.r.s'
```
Is there no string method to do capwords so that it accommodates acronyms? | This might work:
```
import re
def _callback(match):
""" This is a simple callback function for the regular expression which is
in charge of doing the actual capitalization. It is designed to only
capitalize words which aren't fully uppercased (like acronyms).
"""
word = match.group(0)
if word == word.upper():
return word
else:
return word.capitalize()
def capwords(data):
    """ This function converts `data` into a capitalized version of itself. This
        function accommodates acronyms.
    """
    return re.sub(r"[\w'\-_]+", _callback, data)
```
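For what it's worth, the same idea ports directly to Python 3, where `print` is a function; a minimal sketch (the function name `capwords_acronyms` is my own):

```python
import re

def capwords_acronyms(text):
    """Capitalize each word, but leave fully-uppercase words (acronyms) alone."""
    def fix(match):
        word = match.group(0)
        return word if word == word.upper() else word.capitalize()
    return re.sub(r"[\w'\-]+", fix, text)

print(capwords_acronyms("This is an I.R.S. test."))  # → This Is An I.R.S. Test.
```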
Here is a test:
```
print capwords("This is an IRS test.") # Produces: "This Is An IRS Test."
print capwords("This is an I.R.S. test.") # Produces: "This Is An I.R.S. Test."
``` | No, there is no such method in the standard library. | is there a string method to capitalize acronyms in python? | [
"",
"python",
"string",
"acronym",
"capitalize",
""
] |
I'd like to know what practical way of constructing reports for EPSON Dot Matrix printers exists in Java. At this time, I have the LX300+II model to play with.
I know that there are basically two ways of using this printer:
1. As a typewriter, outputting directly raw ASCII data to the parallel port
2. Graphical printing, with graphical fonts and precise positioning.
How can I use both fast printing fonts(provided by **1**) and precise positioning (provided by **2**)?
I know this is possible to do because, a couple of years ago, I got to make reports for the EPSON FX 2180 that included drivers with native printing fonts installed in Windows. This allowed me to do exactly what I want here.
Now I'm using JasperReports for graphical reporting and it works fine, but I do have some reports that *need* to be printed on dot matrix printers, and *fast*, too.
What can be an alternative for that? | Would [TextPrinter](http://java4less.com/textprinter/Documentation.html) fit your needs? | If you want to print fast in dot-matrix printers, you need to do it in "plain-text" mode. The following code works for me:
```
try {
// LPT1 is the printer port
try (FileWriter out = new FileWriter("LPT1:")) {
out.write("String1\nString2\nString3\n");
out.flush();
}
} catch (IOException e) {
}
``` | Dot matrix fast printing with precise layout in Java | [
"",
"java",
"printing",
"reporting",
"legacy",
"dot-matrix",
""
] |
I just wrote an if statement along the lines of
```
if (value == value1 || value == value2 || value == value3 || value == value4)
//do something
```
and it annoys me that I always have to repeat the 'value ==' part. In my opinion this is serving no purpose other than making it difficult to read.
I wrote the following ExtensionMethod that should make the above scenario more readable:
```
public static bool IsEqualToAny<T>(this T value, params T[] objects)
{
return objects.Contains(value);
}
```
Now I can simply write
```
if (value.IsEqualToAny(value1, value2, value3, value4))
//do something
```
Is this a good usage of an ExtensionMethod?
EDIT:
Thanks for all the great answers. For the record: I have kept the method. While the suggestion that you could simply use `new []{value1,value2,value3,value4}.Contains(value)` is true, I simply prefer reading this kind of if statement from left to right (***if this value is equal to any of these*** instead of ***if these values contain this value***). Having one more method show up in intellisense on each object is not an issue for me. | You haven't added functionality that is only useful to a specific application or context; your extension is clearly named, and the behaviour is obvious without having to look at the implementation.
The answer is "Yes, it is" | It is unusual to write an extension method for an unrestricted `T`. Not least, this approach will quickly make your intellisense quite hard to use.
While valid, I'd probably avoid this *as an extension method* - perhaps just use a standard static utility method.
The C# 3 array initializer syntax might be easier?
```
bool isTrue = new[] { 1, 2, 3 }.Contains(3);
```
Of course, for large sets of data, you might want to cache a `HashSet<T>` somewhere ;-p | Is this a good use of an ExtensionMethod? | [
"",
"c#",
"extension-methods",
""
] |
I'm currently in charge of a process that seems to be very intimate with the database. My program/script/framework's goal is to make uniformity out of disparate data sources. Using a form of dependency injection, my process at a very high level works fine. The implementation of each data source type is hidden from the highest level business abstraction of what's going on. Great. My questions are two.
1) I have a long paragraph (and it's the length that's bothering me) that assembles an SQL statement in Perl-space of how to translate these different data sources into one, homogeneous end format. So the SQL string always depends on the type of data I'm working with. The WHERE clause depends, the FROM clause depends, the INSERT clause depends, it all depends. It's the high level of depending-ness that's confusing me. How do I model this process in an object-oriented way? MagicObject->buildSQL? That's essentially what I have now, but it feels like all of the parts of the code know too much, hence it's length.
2) If I have a function that does something (builds SQL?), do I pass in the business objects whole and then stringify them at the last minute? Or do I stringify them early and only let my function handle what it needs, as opposed to rendering the objects itself?
**Edit**: While I don't doubt the importance of ORMs, I do not believe we are yet in the ORM space. Imagine baseball data for the American, National, and Fictional leagues were all stored in wildly different formats with varying levels of normalization. It is the job of my process to read these data sources and put them in one unified, normalized pool. I feel the ORM space of acting on these objects happens after my process. I'm a sort of data janitor, if you will. There are essentially no business objects yet to act on because of the lack of a unified pool, which I create.
**Edit^2**: It's been brought to my attention that maybe I haven't described the problem space in enough detail. Here's an example.
Imagine you had to make a master database of all the criminals in the United States. Your company's service is selling a product which sits atop and provides access to this data in a clean, unified format.
This data is provided publicly by the 50 states, but in wildly different formats. Some are one file of data, not normalized. Others are normalized tables in CSV format. Some are Excel documents. Some are TSVs. Some records are even incomplete without manual intervention (supplemented by other, manually created data sources).
The purpose of my project is to make a "driver" for each of the 50 states and make sure the end product of the process is a master database of criminals in a perfect, relational model. Everything keyed correctly, the schema in perfect shape, etc.
```
my $user = $schema->table( 'user' );
my $q = Fey::SQL
->new_select
->select( $user->columns( 'user_id', 'username' ) )
->from( $user );
```
Now you could write a function like this:
```
sub restrict_with_group {
my ( $q, $table, @group_id ) = @_;
my $group = $schema->table( 'group' )->alias;
$q
->from( $table, $group )
->where( $group->column( 'group_id' ), 'IN', @group_id );
}
```
This will add an inner join from `user` to `group` as well as a `WHERE` condition. And voila, you can write the following in the main program:
```
restrict_with_group( $q, $user, qw( 1 2 3 ) );
```
**But this `restrict_with_group` function will work for *any* query that has a foreign key to the `group` table!** To use it, you pass the query you want to restrict and the table to which you want to apply the restriction, as well as the group IDs to which you want to restrict it.
In the end you say `$q->sql( $dbh )` and you get back a string of SQL representing the query that you have built up in the `$q` object.
So basically Fey gives you the abstractive capabilities that native SQL is missing. You can extract reusable aspects from your queries and package them as separate functions. | Please do not write your own ORM. Use something like [DBIx::Class](http://search.cpan.org/perldoc?DBIx::Class).
All of these problems that you mention have been solved, and the implementation tested in thousands of other applications. Stick to writing your app, not reimplementing libraries. You might not actually *use* DBIC in your app, but you should look at its implementation approach; especially how it incrementally builds ResultSets (which aren't sets of results, but are rather deferred queries). | How can I assemble SQL with object-oriented Perl? | [
"",
"sql",
"perl",
"oop",
""
] |
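The composability the Fey answer describes — small functions that each graft clauses onto a query object — can be sketched with a toy builder. This Python class is an illustration of the idea only, not Fey's actual interface:

```python
class Select:
    """Toy query builder: each method returns self, so clauses can be
    chained and reusable restriction helpers can be stacked on top."""
    def __init__(self):
        self.cols, self.tables, self.wheres = [], [], []

    def select(self, *cols):
        self.cols += cols
        return self

    def from_(self, *tables):
        self.tables += tables
        return self

    def where(self, cond):
        self.wheres.append(cond)
        return self

    def sql(self):
        stmt = "SELECT %s FROM %s" % (", ".join(self.cols), ", ".join(self.tables))
        if self.wheres:
            stmt += " WHERE " + " AND ".join(self.wheres)
        return stmt

def restrict_to_groups(q, group_ids):
    # Reusable aspect: applies to any query that can join to "group".
    return q.from_("group").where(
        "group.group_id IN (%s)" % ", ".join(map(str, group_ids)))

q = Select().select("user_id", "username").from_("user")
restrict_to_groups(q, [1, 2, 3])
```

The point, as in the answer, is that the restriction helper knows nothing about the particular query it is applied to — the query object is the only shared contract.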
I had been wondering for quite some time how to manage memory in my next project, which is writing a DSL in C/C++.
It can be done in any of the three ways.
1. Reference counted C or C++.
2. Garbage collected C.
3. In C++, copying class and structures from stack to stack and managing strings separately with some kind of GC.
The community probably already has a lot of experience on each of these methods. Which one will be faster? What are the pros and cons for each?
A related side question: will malloc/free be slower than allocating a big chunk at the beginning of the program and running my own memory manager over it? .NET seems to do it. But I am confused why we can't count on the OS to do this job better and faster than what we can do ourselves. | It all depends! That's a pretty open question. It needs an essay to answer it!
Hey.. here's one somebody prepared earlier:
<http://lambda-the-ultimate.org/node/2552>
<http://www.hpl.hp.com/personal/Hans_Boehm/gc/issues.html>
It depends how big your objects are, how many of them there are, how fast they're being allocated and discarded, and how much time you want to invest in optimizing and tweaking. If you know the limits of how much memory you need, for fast performance, I would think you can't really beat grabbing all the memory you need from the OS up front, and then managing it yourself.
The reason it can be slow allocating memory from the OS is that it deals with lots of processes and memory on disk and in RAM, so to get memory it's got to decide if there is enough. Possibly, it might have to page another process's memory out from RAM to disk so it can give you enough. There's lots going on. So managing it yourself (or with a GC-collected heap) can be far quicker than going to the OS for each request. Also, the OS usually deals with bigger chunks of memory, so it might round up the size of requests you make, meaning you could waste memory.
Have you got a real hard requirement for going super quick? A lot of DSL applications don't need raw performance. I'd suggest going with whatever's simplest to code. You could spend a lifetime writing memory management systems and worrying which is best. | uh ... It depends how *you* write the garbage collection system for your DSL. Neither C nor C++ comes with a garbage collection facility built-in, but either could be used to write a very efficient or a very inefficient garbage collector. Writing such a thing, by the way, is a non-trivial task.
DSLs are often written in higher level languages such as Ruby or Python specifically because the language writer can leverage the garbage collection and other facilities of the language. C and C++ are great for writing full, industrial strength languages but you certainly need to know what you are doing to use them - knowledge of [yacc](http://dinosaur.compilertools.net/yacc/) and lex is especially useful here but a good understanding of [dynamic memory management](http://www.bradrodriguez.com/papers/ms/pat4th-c.html) is important also, as you say. You could also check out [keykit](http://nosuch.com/keykit/), an open source music DSL written in C, if you still like the idea of a DSL in C/C++. | Will Garbage Collected C be Faster Than C++? | [
"",
"c++",
"c",
"optimization",
"memory-management",
"garbage-collection",
""
] |
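The "grab a big chunk up front and manage it yourself" argument in the row above can be illustrated with a minimal free-list pool. Python is used here only to show the bookkeeping; a real C/C++ pool would carve raw bytes out of one malloc'd arena:

```python
class Pool:
    """Minimal fixed-size pool: all slots are reserved once up front;
    alloc/release are O(1) free-list operations with no further OS calls."""
    def __init__(self, nslots):
        self.slots = [None] * nslots
        self.free = list(range(nslots))   # indices of unused slots

    def alloc(self, value):
        if not self.free:
            raise MemoryError("pool exhausted")
        i = self.free.pop()
        self.slots[i] = value
        return i                          # a 'pointer' into the arena

    def release(self, i):
        self.slots[i] = None
        self.free.append(i)

pool = Pool(2)
a = pool.alloc("node-a")
b = pool.alloc("node-b")
pool.release(a)
c = pool.alloc("node-c")                  # reuses the slot freed above
```

Because the pool never returns memory to the OS, allocation cost is constant and predictable — which is the trade-off the answer describes against per-request malloc/free.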
SQL has always had a great feature: cascading deletes. You plan it in advance and when it's time to delete something, BAM! No need to worry about all those dependent records.
However, nowadays it's almost taboo to actually DELETE anything. You flag it as deleted and stop showing it. Unfortunately, I haven't been able to find a solid solution to doing this when there are dependent records. I've always manually coded the complicated web of soft deletes.
Is there a better solution out there that I have completely missed? | I hate to say it but triggers are designed specifically for this kind of thing.
(The hate part is because good triggers are very hard to write and, of course, cannot be debugged) | I've come up with a solution to cascading soft-deletes recently using Postgres 9.6 that makes use of inheritance to partition entries into deleted and non-deleted ones. Here's a copy of the document that I'm writing for our project:
---
# Cascading soft-deletes
## Abstract
In this document I describe our current approach to deal with deletion of objects in our Postgres database and I present the flaws of the current implementation. For example so far we don't have the ability to have cascading soft-deletes. Then I show a method that combines the strengths of Postgres' **cascading hard-delete** and an archiving approach that is easy to implement, maintain and that brings a performance boost in all search queries.
## About soft-deletes in GORM
In the [fabric8-services/fabric8-wit](https://github.com/fabric8-services/fabric8-wit) project, which is written in Go, we are using an object-oriented mapper for our database called [GORM](http://gorm.io/).
GORM offers a way to [soft-delete](http://gorm.io/docs/delete.html#Soft-Delete) database entries:
> If model has `DeletedAt` field, it will get soft delete ability automatically! then it won’t be deleted from database permanently when call `Delete`, but only set field `DeletedAt`‘s value to current time.
Suppose you have a model definition, in other words a Go struct that looks like this:
```
// User is the Go model for a user entry in the database
type User struct {
ID int
Name string
DeletedAt *time.Time
}
```
And let's say you've loaded an existing user entry by its `ID` from the DB into an object `u`.
```
id := 123
u := User{}
db.Where("id=?", id).First(&u)
```
If you then go ahead and delete the object using GORM:
```
db.Delete(&u)
```
the DB entry will not be deleted using `DELETE` in SQL but the row will be updated and the `deleted_at` is set to the current time:
```
UPDATE users SET deleted_at="2018-10-12 11:24" WHERE id = 123;
```
## Problems with soft-deletes in GORM - Inversion of dependency and no cascade
The above mentioned soft-delete is nice for archiving individual records but it can lead to very odd results for all records that depend on it. That is because soft-deletes by GORM don't cascade as a potential `DELETE` in SQL would do if a foreign key was modelled with `ON DELETE CASCADE`.
When you model a database you typically design a table and then maybe another one that has a foreign key to the first one:
```
CREATE TABLE countries (
name text PRIMARY KEY,
deleted_at timestamp
);
CREATE TABLE cities (
name text,
country text REFERENCES countries(name) ON DELETE CASCADE,
deleted_at timestamp
);
```
Here we've modeled a list of countries and cities that reference a particular country. When you `DELETE` a country record, all of its cities will be deleted as well. But since the table has a `deleted_at` column that is carried on in the Go struct for a country or city, the GORM mapper will only soft-delete the country and leave the cities belonging to it untouched.
### Shifting responsibility from DB to user/developer
GORM thereby puts it in the hands of the developer to (soft-)delete all dependent cities. In other words, what previously was modeled as a relationship from **cities to countries** is now reversed as a relationship from **countries to cities**. That is because the user/developer is now responsible for (soft-)deleting all cities belonging to a country when the country is deleted.
## Proposal
Wouldn't it be great if we can have soft-deletes and all the benefits of a `ON DELETE CASCADE`?
It turns out that we can have it without much effort. Let's focus on a single table for now, namely the `countries` table.
### An archive table
Suppose for a second that we can have another table called `countries_archive` that has the exact **same structure** as the `countries` table. Also suppose that all future **schema migrations** that are done to `countries` are applied to the `countries_archive` table. The only exception is that **unique constraints** and **foreign keys** will not be applied to `countries_archive`.
I guess this already sounds too good to be true, right? Well, we can create such a table using what's called [Inheritance](https://www.postgresql.org/docs/9.6/static/ddl-inherit.html) in Postgres:
```
CREATE TABLE countries_archive () INHERITS (countries);
```
The resulting `countries_archive` table is meant to store all records where `deleted_at IS NOT NULL`.
Note, that in our Go code we would never directly use any `_archive` table. Instead we would query for the original table from which `*_archive` table inherits and Postgres then magically looks into the `*_archive` table automatically. A bit further below I explain why that is; it has to do with partitioning.
### Moving entries to the archive table on (soft)-DELETE
Since the two tables, `countries` and `countries_archive` look exactly alike schemawise we can `INSERT` into the archive very easily using a trigger function when
1. a `DELETE` happens on the `countries` table
2. or when a soft-delete is happening by setting `deleted_at` to a not `NULL` value.
The trigger function looks like this:
```
CREATE OR REPLACE FUNCTION archive_record()
RETURNS TRIGGER AS $$
BEGIN
-- When a soft-delete happens...
IF (TG_OP = 'UPDATE' AND NEW.deleted_at IS NOT NULL) THEN
EXECUTE format('DELETE FROM %I.%I WHERE id = $1', TG_TABLE_SCHEMA, TG_TABLE_NAME) USING OLD.id;
RETURN OLD;
END IF;
-- When a hard-DELETE or a cascaded delete happens
IF (TG_OP = 'DELETE') THEN
-- Set the time when the deletion happens
IF (OLD.deleted_at IS NULL) THEN
OLD.deleted_at := now();
END IF;
EXECUTE format('INSERT INTO %I.%I SELECT $1.*'
, TG_TABLE_SCHEMA, TG_TABLE_NAME || '_archive')
USING OLD;
END IF;
RETURN NULL;
END;
$$ LANGUAGE plpgsql;
```
To wire-up the function with a trigger we can write:
```
CREATE TRIGGER soft_delete_countries
AFTER
-- this is what is triggered by GORM
UPDATE OF deleted_at
-- this is what is triggered by a cascaded DELETE or a direct hard-DELETE
OR DELETE
ON countries
FOR EACH ROW
EXECUTE PROCEDURE archive_record();
```
## Conclusions
Originally the inheritance functionality in Postgres was developed to [partition data](https://www.postgresql.org/docs/9.1/static/ddl-partitioning.html). When you search your partitioned data using a specific column or condition, Postgres can find out which partition to search through and can thereby [improve the performance of your query](https://stackoverflow.com/a/3075248/835098).
We can benefit from this performance improvement by only searching entities in existence, unless told otherwise. Entries in existence are those where `deleted_at IS NULL` holds true. (Notice, that GORM will automatically add an `AND deleted_at IS NULL` to every query if there's a `DeletedAt` in GORM's model struct.)
Let's see if Postgres already knows how to take advantage of our separation by running an `EXPLAIN`:
```
EXPLAIN SELECT * FROM countries WHERE deleted_at IS NULL;
+-------------------------------------------------------------------------+
| QUERY PLAN |
|-------------------------------------------------------------------------|
| Append (cost=0.00..21.30 rows=7 width=44) |
| -> Seq Scan on countries (cost=0.00..0.00 rows=1 width=44) |
| Filter: (deleted_at IS NULL) |
| -> Seq Scan on countries_archive (cost=0.00..21.30 rows=6 width=44) |
| Filter: (deleted_at IS NULL) |
+-------------------------------------------------------------------------+
```
As we can see, Postgres still searches both tables, `countries` and `countries_archive`. Let's see what happens when we add a check constraint to the `countries_archive` table upon table creation:
```
CREATE TABLE countries_archive (
CHECK (deleted_at IS NOT NULL)
) INHERITS (countries);
```
Now, Postgres knows that it can skip `countries_archive` when `deleted_at` is expected to be `NULL`:
```
EXPLAIN SELECT * FROM countries WHERE deleted_at IS NULL;
+----------------------------------------------------------------+
| QUERY PLAN |
|----------------------------------------------------------------|
| Append (cost=0.00..0.00 rows=1 width=44) |
| -> Seq Scan on countries (cost=0.00..0.00 rows=1 width=44) |
| Filter: (deleted_at IS NULL) |
+----------------------------------------------------------------+
```
Notice the absence of the sequential scan of the `countries_archive` table in the aforementioned `EXPLAIN`.
## Benefits and Risks
### Benefits
1. We have regular **cascaded deletes** back and can let the DB figure out in which order to delete things.
2. At the same time, we're **archiving our data** as well. Every soft-delete also lands in the corresponding `_archive` table.
3. **No Go code changes** are required. We only have to setup a table and a trigger for each table that shall be archived.
4. Whenever we figure that we don't want this behaviour with triggers and cascaded soft-delete anymore **we can easily go back**.
5. All future **schema migrations** that are being made to the original table will be applied to the `_archive` version of that table as well. Except for constraints, which is good.
### Risks
1. Suppose you add a new table that references another existing table with a foreign key that has `ON DELETE CASCADE`. If the existing table uses the `archive_record()` function from above, your new table will receive hard `DELETE`s when something in the existing table is soft-deleted. This isn't a problem if you use `archive_record()` for your new dependent table as well. But you just have to remember it.
## Final thoughts
The approach presented here does not solve the problem of restoring individual rows. On the other hand, this approach doesn't make it harder or more complicated. It just remains unsolved.
In our application certain fields of a work item don't have a foreign key specified. A good example are the area IDs. That means when an area is `DELETE`d, an associated work item is not automatically `DELETE`d. There are two scenarios when an area is removed itself:
1. A delete is directly requested from a user.
2. A user requests to delete a space and then the area is removed due to its foreign key constraint on the space.
Notice that, in the first scenario the user's request goes through the area controller code and then through the area repository code. We have a chance in any of those layers to modify all work items that would reference a non-existing area otherwise. In the second scenario everything related to the area happens and stays on the DB layer so we have no chance of modifying the work items. The good news is that we don't have to. Every work item references a space and will therefore be deleted anyway when the space goes away.
What applies to areas also applies to iterations, labels and board columns as well.
# How to apply to our database?
Steps
1. Create "\*\_archive" tables that inherit the original tables.
2. Install a soft-delete trigger using the above `archive_record()` function.
3. Move all entries where `deleted_at IS NOT NULL` to their respective `_archive` table by doing a hard `DELETE` which will trigger the `archive_record()` function.
# Example
Here is a [fully working example](https://rextester.com/BMCB94324) in which we demonstrate a cascaded soft-delete over two tables, `countries` and `capitals`. We show how records are archived independently of the method chosen for the delete.
```
CREATE TABLE countries (
id int primary key,
name text unique,
deleted_at timestamp
);
CREATE TABLE countries_archive (
CHECK ( deleted_at IS NOT NULL )
) INHERITS(countries);
CREATE TABLE capitals (
id int primary key,
name text,
country_id int references countries(id) on delete cascade,
deleted_at timestamp
);
CREATE TABLE capitals_archive (
CHECK ( deleted_at IS NOT NULL )
) INHERITS(capitals);
CREATE OR REPLACE FUNCTION archive_record()
RETURNS TRIGGER AS $$
BEGIN
IF (TG_OP = 'UPDATE' AND NEW.deleted_at IS NOT NULL) THEN
EXECUTE format('DELETE FROM %I.%I WHERE id = $1', TG_TABLE_SCHEMA, TG_TABLE_NAME) USING OLD.id;
RETURN OLD;
END IF;
IF (TG_OP = 'DELETE') THEN
IF (OLD.deleted_at IS NULL) THEN
OLD.deleted_at := now();
END IF;
EXECUTE format('INSERT INTO %I.%I SELECT $1.*'
, TG_TABLE_SCHEMA, TG_TABLE_NAME || '_archive')
USING OLD;
END IF;
RETURN NULL;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER soft_delete_countries
AFTER
UPDATE OF deleted_at
OR DELETE
ON countries
FOR EACH ROW
EXECUTE PROCEDURE archive_record();
CREATE TRIGGER soft_delete_capitals
AFTER
UPDATE OF deleted_at
OR DELETE
ON capitals
FOR EACH ROW
EXECUTE PROCEDURE archive_record();
INSERT INTO countries (id, name) VALUES (1, 'France');
INSERT INTO countries (id, name) VALUES (2, 'India');
INSERT INTO capitals VALUES (1, 'Paris', 1);
INSERT INTO capitals VALUES (2, 'Bengaluru', 2);
SELECT 'BEFORE countries' as "info", * FROM ONLY countries;
SELECT 'BEFORE countries_archive' as "info", * FROM countries_archive;
SELECT 'BEFORE capitals' as "info", * FROM ONLY capitals;
SELECT 'BEFORE capitals_archive' as "info", * FROM capitals_archive;
-- Delete one country via hard-DELETE and one via soft-delete
DELETE FROM countries WHERE id = 1;
UPDATE countries SET deleted_at = '2018-12-01' WHERE id = 2;
SELECT 'AFTER countries' as "info", * FROM ONLY countries;
SELECT 'AFTER countries_archive' as "info", * FROM countries_archive;
SELECT 'AFTER capitals' as "info", * FROM ONLY capitals;
SELECT 'AFTER capitals_archive' as "info", * FROM capitals_archive;
``` | Cascading Soft Delete | [
"",
"sql",
"soft-delete",
""
] |
I have a program that (amongst other things) has a command line interface that lets the user enter strings, which will then be sent over the network. The problem is that I'm not sure how to connect the events, which are generated deep inside the GUI, to the network interface. Suppose for instance that my GUI class hierarchy looks like this:
GUI -> MainWindow -> CommandLineInterface -> EntryField
Each GUI object holds some other GUI objects and everything is private. Now the entryField object generates an event/signal that a message has been entered. At the moment I'm passing the signal up the class hierarchy so the CLI class would look something like this:
```
public:
sig::csignal<void, string> msgEntered;
```
And in the c'tor:
```
entryField.msgEntered.connect(sigc::mem_fun(this, &CLI::passUp));
```
The passUp function just emits the signal again for the owning class (MainWindow) to connect to until I can finally do this in the main loop:
```
gui.msgEntered.connect(sigc::mem_fun(networkInterface, &NetworkInterface::sendMSG));
```
Now this seems like a real bad solution. Every time I add something to the GUI I have to wire it up all through the class hierarchy. I do see several ways around this. I could make all objects public, which would allow me to just do this in the main loop:
```
gui.mainWindow.cli.entryField.msgEntered.connect(sigc::mem_fun(networkInterface, &NetworkInterface::sendMSG));
```
But that would go against the idea of encapsulation. I could also pass a reference to the network interface all over the GUI, but I would like to keep the GUI code as separate as possible.
It feels like I'm missing something essential here. Is there a clean way to do this?
**Note:** I'm using GTK+/gtkmm/LibSigC++, but I'm not tagging it as such because I've had pretty much the same problem with Qt. It's really a general question. | Short of having some global pub/sub hub, you aren't going to get away from passing something up or down the hierarchy. Even if you abstract the listener to a generic interface or a controller, you still have to attach the controller to the UI event somehow.
With a pub/sub hub you add another layer of indirection, but there's still a duplication - the entryField still says 'publish message ready event' and the listener/controller/network interface says 'listen for message ready event', so there's a common event ID that both sides need to know about, and if you're not going to hard-code that in two places then it needs to be passed into both files (though as global it's not passed as an argument; which in itself isn't any great advantage).
I've used all four approaches - direct coupling, controller, listener and pub-sub - and in each successor you loosen the coupling a bit, but you don't ever get away from having some duplication, even if it's only the id of the published event.
It really comes down to variance. If you find you need to switch to a different implementation of the interface, then abstracting the concrete interface as a controller is worthwhile. If you find you need to have other logic observing the state, change it to an observer. If you need to decouple it between processes, or want to plug into a more general architecture, pub/sub can work, but it introduces a form of global state, and isn't as amenable to compile-time checking.
But if you don't need to vary the parts of the system independently it's probably not worth worrying about. | The root problem is that you're treating the GUI like its a monolithic application, only the gui is connected to the rest of the logic via a bigger wire than usual.
You need to re-think the way the GUI interacts with the back-end server. Generally this means your GUI becomes a stand-alone application *that does almost nothing* and talks to the server without any direct coupling between the internals of the GUI (ie your signals and events) and the server's processing logic. ie, when you click a button you may want it to perform some action, in which case you need to call the server, but nearly all the other events need to only change the state inside the GUI and do nothing to the server - not until you're ready, or the user wants some response, or you have enough idle time to make the calls in the background.
The trick is to define an interface for the server totally independently of the GUI. You should be able to change GUIs later without modifying the server at all.
This means you will not be able to have the events sent automatically, you'll need to wire them up manually. | Keeping the GUI separate | [
"",
"c++",
"user-interface",
"signals-slots",
""
] |
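The global pub/sub hub mentioned in the accepted answer above can be sketched framework-agnostically. In this Python illustration the event-id string is exactly the shared knowledge the answer says you cannot eliminate — both sides must agree on it:

```python
class Hub:
    """Tiny publish/subscribe hub. Publisher and subscriber only share
    the event id string -- the duplication the answer points out."""
    def __init__(self):
        self.subs = {}

    def subscribe(self, event_id, fn):
        self.subs.setdefault(event_id, []).append(fn)

    def publish(self, event_id, payload):
        for fn in self.subs.get(event_id, []):
            fn(payload)

hub = Hub()
sent_messages = []

# Network side: subscribes by the shared event id.
hub.subscribe("msg-entered", sent_messages.append)

# GUI side: deep inside an entry field, with no reference to the network object.
hub.publish("msg-entered", "hello server")
```

Neither side holds a reference to the other — only to the hub — which is how this approach avoids threading the signal up through every layer of the widget hierarchy, at the cost of a form of global state.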
I'm quite fond of using GNU [**getopt**](http://www.gnu.org/software/libtool/manual/libc/Getopt.html) when programming under **Linux**. I understand that getopt() is not available under MS VC++.
**Note:**
* Win32 environment
* using Visual Studio
* No [Boost](http://www.boost.org/)
* No MFC
* Not concerned with portability
**Question:**
* How can I then port getopt() accordingly?
+ What guidelines should I be aware of while porting?
* Known port with **same** features? | You will have to check out the license requirements, but the source to the GCC libraries is freely available. Just grab getopt() from there. | This may help, it's also very easy to integrate
<http://www.codeproject.com/KB/cpp/xgetopt.aspx> | getopt() in VC++ | [
"",
"c++",
"windows",
"visual-studio-2008",
""
] |
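For reference, these are the semantics being ported: Python's standard-library `getopt` mirrors the C interface closely, so it is a convenient way to show the expected flag/argument behaviour (illustration of the semantics only — it does not help the VC++ port directly):

```python
import getopt

# "vo:" declares -v as a boolean flag and -o as an option taking an
# argument (the trailing ':'), exactly like a C optstring.
opts, args = getopt.getopt(["-v", "-o", "out.txt", "input.c"], "vo:")
```

A port taken from the GCC/glibc sources should produce the same split: recognized options with their values, plus the remaining non-option arguments.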
How do I accomplish a simple redirect (e.g. `cflocation` in ColdFusion, or `header(location:http://)` for PHP) in Django? | It's simple:
```
from django.http import HttpResponseRedirect
def myview(request):
...
return HttpResponseRedirect("/path/")
```
More info in the [official Django docs](https://docs.djangoproject.com/en/1.4/ref/request-response/#django.http.HttpResponseRedirect)
**Update: Django 1.0**
There is apparently a better way of doing this in Django now using `generic views`.
Example -
```
from django.views.generic.simple import redirect_to
urlpatterns = patterns('',
(r'^one/$', redirect_to, {'url': '/another/'}),
#etc...
)
```
There is more in the [generic views documentation](https://docs.djangoproject.com/en/1.4/ref/generic-views/#django-views-generic-simple-redirect-to).
Credit - [Carles Barrobés](https://stackoverflow.com/users/166761/carles-barrobes).
**Update #2: Django 1.3+**
In Django 1.5 *redirect\_to* no longer exists and has been replaced by [RedirectView](https://docs.djangoproject.com/en/1.5/ref/class-based-views/base/#redirectview). Credit to [Yonatan](https://stackoverflow.com/users/221917/yonatan)
```
from django.views.generic import RedirectView
urlpatterns = patterns('',
(r'^one/$', RedirectView.as_view(url='/another/')),
)
``` | Depending on what you want (i.e. if you do not want to do any additional pre-processing), it is simpler to just use Django's `redirect_to` generic view:
```
from django.views.generic.simple import redirect_to
urlpatterns = patterns('',
(r'^one/$', redirect_to, {'url': '/another/'}),
#etc...
)
```
See [documentation](https://docs.djangoproject.com/en/1.4/ref/generic-views/#django-views-generic-simple-redirect-to) for more advanced examples.
---
For Django 1.3+ use:
```
from django.views.generic import RedirectView
urlpatterns = patterns('',
(r'^one/$', RedirectView.as_view(url='/another/')),
)
``` | Python + Django page redirect | [
"",
"python",
"django",
"redirect",
"location",
""
] |
In the configuration reference for MySql's [connector J driver](http://dev.mysql.com/doc/refman/5.0/en/connector-j-reference-configuration-properties.html), a caveat emptor is issued on the use of the **autoReconnect** property. I followed the instructions and increased my server's *wait\_timeout*. Since I am using DBCP (I am considering moving to c3po after reading several posts on Stackoverflow shooting down DBCP ), is it ok to use the **autoReconnectForPools** property ? What does it actually do when enabled under DBCP or any connection pool for that matter ? | autoReconnect will throw an SQLException to the client, but will try to re-establish the connection.
autoReconnectForPools will try to ping the server before each SQL execution.
I had a lot of issues with DBCP in the past, especially disconnections. Most were solved by moving to [c3p0](http://c3p0.sourceforge.net/). Notice that the MySQL driver has a connection tester for c3p0 (com.mysql.jdbc.integration.c3p0.MysqlConnectionTester).
Also, you may want to check this out: [Connection pooling options with JDBC: DBCP vs C3P0](https://stackoverflow.com/questions/520585/connection-pooling-options-with-jdbc-dbcp-vs-c3p0) | MySQL's `autoReconnect` feature is deprecated, as it has many issues (ref: official [documentation](http://dev.mysql.com/doc/connector-j/en/connector-j-reference-configuration-properties.html)).
`autoReconnectForPools` has little to do with `autoReconnect`, it has more to do with `autoCommit` and `reconnectAtTxEnd` - when all 3 are `true`, it will ping the server at the end of each transaction and automatically reconnect if needed.
DBCP's connection validation is imperfect - even when `testOnBorrow` is set, it sometimes returns broken connections from the pool (not to mention testing a connection before every borrow is horribly inefficient).
According to [this article](https://github.com/brettwooldridge/HikariCP/wiki/Bad-Behavior:-Handling-Database-Down), HikariCP seems to be a better pool implementation, as it is able to use JDBC4 `isValid()` API which is much faster than running a test query, and is specially designed to never return broken connections to the client application. | What is difference between autoReconnect & autoReconnectForPools in MySql connector/J? | [
"",
"java",
"mysql",
"jdbc",
"apache-commons-dbcp",
""
] |
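The test-on-borrow validation discussed in the row above (DBCP's `testOnBorrow`, or JDBC4's faster `isValid()`) can be sketched abstractly. Everything in this Python sketch is an invented stand-in, not a real driver or pool API:

```python
class ValidatingPool:
    """Sketch of test-on-borrow: validate a pooled connection before
    handing it out, discarding dead ones and creating a fresh one
    if every idle connection fails the check."""
    def __init__(self, factory, size=2):
        self.factory = factory
        self.idle = [factory() for _ in range(size)]

    def borrow(self):
        while self.idle:
            conn = self.idle.pop()
            if conn.is_valid():       # stands in for SELECT 1 / isValid()
                return conn           # dead connections are simply dropped
        return self.factory()         # all idle connections were dead

    def give_back(self, conn):
        self.idle.append(conn)

class FakeConn:
    def __init__(self, ok=True):
        self.ok = ok

    def is_valid(self):
        return self.ok

pool = ValidatingPool(FakeConn)
pool.idle = [FakeConn(ok=False), FakeConn(ok=True)]
conn = pool.borrow()                  # gets the valid connection
dead_then_fresh = pool.borrow()       # only a dead one left: gets a new conn
```

The cost the row alludes to is visible here: every `borrow()` pays for a validation round-trip, which is why pinging only at transaction end (`autoReconnectForPools`) or using a cheap driver-level `isValid()` can be attractive.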
Recently I've been thinking about securing some of my code. I'm curious how one could make sure an object can never be created directly, but only via some method of a factory class. Let us say I have some "business object" class and I want to make sure any instance of this class will have a valid internal state. In order to achieve this I will need to perform some check before creating an object, probably in its constructor. This is all okay until I decide I want to make this check be a part of the business logic. So, how can I arrange for a business object to be creatable only through some method in my business logic class but never directly? The first natural desire to use a good old "friend" keyword of C++ will fall short with C#. So we need other options...
Let's try some example:
```
public MyBusinessObjectClass
{
public string MyProperty { get; private set; }
public MyBusinessObjectClass (string myProperty)
{
MyProperty = myProperty;
}
}
public MyBusinessLogicClass
{
public MyBusinessObjectClass CreateBusinessObject (string myProperty)
{
// Perform some check on myProperty
if (true /* check is okay */)
return new MyBusinessObjectClass (myProperty);
return null;
}
}
```
It's all okay until you remember you can still create MyBusinessObjectClass instance directly, without checking the input. I would like to exclude that technical possibility altogether.
So, what does the community think about this? | Looks like you just want to run some business logic before creating the object - so why dont you just create a static method inside the "BusinessClass" that does all the dirty "myProperty" checking work, and make the constructor private?
```
public BusinessClass
{
public string MyProperty { get; private set; }
private BusinessClass()
{
}
private BusinessClass(string myProperty)
{
MyProperty = myProperty;
}
public static BusinessClass CreateObject(string myProperty)
{
// Perform some check on myProperty
if (true /* all ok */)
return new BusinessClass(myProperty);
return null;
}
}
```
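For comparison, the same gatekeeping shape can be sketched in Python (illustrative only; Python has no private constructors, so routing creation through the classmethod is a convention rather than something the language enforces):

```python
class BusinessClass:
    """Created via the create() classmethod so validation always runs."""

    def __init__(self, my_property):
        # No private constructors in Python; by convention, callers
        # go through create() so the check cannot be skipped silently.
        self.my_property = my_property

    @classmethod
    def create(cls, my_property):
        # Perform some check on my_property before construction.
        if isinstance(my_property, str) and my_property:
            return cls(my_property)
        return None

print(BusinessClass.create("ok").my_property)   # ok
print(BusinessClass.create(""))                 # None
```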
Calling it would be pretty straightforward:
```
BusinessClass objBusiness = BusinessClass.CreateObject(someProperty);
``` | You can make the constructor private, and the factory a nested type:
```
public class BusinessObject
{
private BusinessObject(string property)
{
}
public class Factory
{
public static BusinessObject CreateBusinessObject(string property)
{
return new BusinessObject(property);
}
}
}
```
This works because nested types have access to the private members of their enclosing types. I know it's a bit restrictive, but hopefully it'll help... | Factory pattern in C#: How to ensure an object instance can only be created by a factory class? | [
"",
"c#",
"design-patterns",
"oop",
"factory",
""
] |
I'm relatively new to Java (specifically Swing) and have recently been making some fairly simple applications, learning as I go. The latest one has lots of form components such as JLabels, JTextFields, JButtons, etc. All were placed in NetBeans using a visual builder. I have to say I'm not really taking to NetBeans and have really just been designing the GUI and copying the code straight to Eclipse.
My question is: Is it worth getting a very good understanding of GroupLayout and coding by hand (and therefore having more control over my GUI) or just continuing on as is? | I'd say it's worth taking some time to understand `GroupLayout` if only because it's always a good idea to understand what your IDE is doing. Knowing what goes on behind the scenes actually makes you more flexible, because if you wind up in a situation where you don't have access to Netbeans, you'll know how to replicate the layout it would have produced for you. Besides, after you've gotten a decent amount of experience, you'll probably wind up saving time overall. You can often develop a simple GUI faster by hand than by using a visual editor, especially considering the time it takes you to copy and paste the code from Netbeans to Eclipse.
And of course, learning how to use `GroupLayout` by hand will also make it easier for you to transition to any of the other layout managers Java offers, which in turn can lead to simpler code and even more time saved. | [`GroupLayout`](http://java.sun.com/javase/6/docs/api/javax/swing/GroupLayout.html) wasn't really designed for hand-coding; it's much, much more verbose than any other layout manager (except possibly [`GridBagLayout`](http://java.sun.com/javase/6/docs/api/java/awt/GridBagLayout.html)). Since you say you're new to Swing, I suggest starting with the basic layout managers ([`BorderLayout`](http://java.sun.com/javase/6/docs/api/java/awt/BorderLayout.html) and [`FlowLayout`](http://java.sun.com/javase/6/docs/api/java/awt/FlowLayout.html)); you can use them in NetBeans by right-clicking the panel or frame and choosing "Set Layout". After you're familiar with those, I suggest you check out [`MiGLayout`](http://www.miglayout.com/).
To answer your other question: I think of NetBeans form designer as similar to a calculator; it's a wonderful tool if you already know the basics, but you'll never learn anything if you just use it as a crutch. | GroupLayout: Is it worth learning? | [
"",
"java",
"swing",
"user-interface",
""
] |
I have heard mixed responses on this topic, so what is a sure fire way to destroy a PHP session?
```
session_start();
if(isset($_SESSION['foo'])) {
unset($_SESSION['foo']);
...
}
session_destroy();
```
In the simplest of cases, would this be sufficient to truly terminate the session between the user and the server? | To destroy a session you should take the following steps:
* delete the session data
* invalidate the session ID
To do this, I’d use this:
```
session_start();
// resets the session data for the rest of the runtime
$_SESSION = array();
// sends as Set-Cookie to invalidate the session cookie
if (isset($_COOKIE[session_name()])) {
$params = session_get_cookie_params();
setcookie(session_name(), '', 1, $params['path'], $params['domain'], $params['secure'], isset($params['httponly']));
}
session_destroy();
```
And to be sure that the session ID is invalid, you should only allow session IDs that were initiated by your script. So set a flag and check if it is set:
```
session_start();
if (!isset($_SESSION['CREATED'])) {
// invalidate old session data and ID
session_regenerate_id(true);
$_SESSION['CREATED'] = time();
}
```
Additionally, you can use this timestamp to swap the session ID periodically to reduce its lifetime:
```
if (time() - $_SESSION['CREATED'] > ini_get('session.gc_maxlifetime')) {
session_regenerate_id(true);
$_SESSION['CREATED'] = time();
}
``` | The PHP Manual addresses this question.
You need to kill the session and also remove the session cookie (if you are using cookies).
See this page (especially the first example):
<https://www.php.net/manual/en/function.session-destroy.php> | Truly destroying a PHP Session? | [
"",
"php",
"session",
""
] |
I'm trying to add two images together using NumPy and PIL. The way I would do this in [MATLAB](http://en.wikipedia.org/wiki/MATLAB) would be something like:
```
>> M1 = imread('_1.jpg');
>> M2 = imread('_2.jpg');
>> resM = M1 + M2;
>> imwrite(resM, 'res.jpg');
```
I get something like this:
[alt text http://www.deadlink.cc/matlab.jpg](http://www.deadlink.cc/matlab.jpg)
Using a compositing program and adding the images the MATLAB result seems to be right.
In Python I'm trying to do the same thing like this:
```
from PIL import Image
from numpy import *
im1 = Image.open('/Users/rem7/Desktop/_1.jpg')
im2 = Image.open('/Users/rem7/Desktop/_2.jpg')
im1arr = asarray(im1)
im2arr = asarray(im2)
addition = im1arr + im2arr
resultImage = Image.fromarray(addition)
resultImage.save('/Users/rem7/Desktop/a.jpg')
```
and I get something like this:
[alt text http://www.deadlink.cc/python.jpg](http://www.deadlink.cc/python.jpg)
Why am I getting all those funky colors? I also tried using `ImageMath.eval("a+b", a=im1, b=im2)`, but I get an error about RGB unsupported.
I also saw that there is an `Image.blend()` but that requires an alpha.
What's the best way to achieve what I'm looking for?
Source Images (images have been removed):
[alt text http://www.deadlink.cc/\_1.jpg](http://www.deadlink.cc/_1.jpg)
[alt text http://www.deadlink.cc/\_2.jpg](http://www.deadlink.cc/_2.jpg)
Humm, OK, well I added the source images using the add image icon and they show up when I'm editing the post, but for some reason the images don't show up in the post.
(images have been removed) 2013 05 09 | As everyone suggested already, the weird colors you're observing are overflow. And as you point out in the [comment of schnaader's answer](https://stackoverflow.com/questions/524930/numpy-pil-adding-an-image/524943#524943) you **still get overflow** if you add your images like this:
```
addition=(im1arr+im2arr)/2
```
The reason for this overflow is that your NumPy arrays (*im1arr*, *im2arr*) are of the **uint8** type (i.e. 8-bit). This means each element of the array can only hold values up to 255, so when your sum exceeds 255, it wraps back around to 0:
```
>>>array([255,10,100],dtype='uint8') + array([1,10,160],dtype='uint8')
array([ 0, 20, 4], dtype=uint8)
```
To avoid overflow, your arrays should be able to contain values beyond 255. You need to **convert them to floats** for instance, perform the blending operation and **convert the result back to uint8**:
```
im1arrF = im1arr.astype('float')
im2arrF = im2arr.astype('float')
additionF = (im1arrF+im2arrF)/2
addition = additionF.astype('uint8')
```
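Both the wrap-around and the float round-trip are easy to check directly (this assumes NumPy is installed; the sample values are made up for illustration):

```python
import numpy as np

a = np.array([200, 10, 100], dtype='uint8')
b = np.array([100, 10, 160], dtype='uint8')

# uint8 arithmetic wraps modulo 256, which is what mangles the colours:
wrapped = a + b
print(wrapped.tolist())   # [44, 20, 4]

# Promote to float first, blend, then convert back down to uint8:
blended = ((a.astype('float') + b.astype('float')) / 2).astype('uint8')
print(blended.tolist())   # [150, 10, 130]
```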
You **should not** do this:
```
addition = im1arr/2 + im2arr/2
```
as you lose information by squashing the dynamic range of the image (you effectively make the images 7-bit) before you perform the blending operation.
**MATLAB note**: the reason you don't see this problem in MATLAB, is probably because MATLAB takes care of the overflow implicitly in one of its functions. | Using PIL's blend() with an alpha value of 0.5 would be equivalent to (im1arr + im2arr)/2. Blend does not require that the images have alpha layers.
Try this:
```
from PIL import Image
im1 = Image.open('/Users/rem7/Desktop/_1.jpg')
im2 = Image.open('/Users/rem7/Desktop/_2.jpg')
Image.blend(im1,im2,0.5).save('/Users/rem7/Desktop/a.jpg')
``` | NumPy, PIL adding an image | [
"",
"python",
"image-processing",
"numpy",
"python-imaging-library",
""
] |
I need to be able to read a file format that mixes binary and non-binary data. Assuming I know the input is good, what's the best way to do this? As an example, let's take a file that has a double as the first line, a newline (0x0D 0x0A) and then ten bytes of binary data afterward. I could, of course, calculate the position of the newline, then make a BinaryReader and seek to that position, but I keep thinking that there has to be a better way. | You can use System.IO.BinaryReader. The problem with this though is you *must* know what type of data you are going to be reading before you call any of the Read methods.
```
Read(byte[], int, int)
Read(char[], int, int)
Read()
Read7BitEncodedInt()
ReadBoolean()
ReadByte()
ReadBytes(int)
ReadChar()
ReadChars()
ReadDecimal()
ReadDouble()
ReadInt16()
ReadInt32()
ReadInt64()
ReadSByte()
ReadSingle()
ReadString()
ReadUInt16()
ReadUInt32()
ReadUInt64()
```
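The approach itself is language-agnostic: open the file in binary mode, scan byte-by-byte for the newline, parse the text portion, and keep reading raw bytes from where the scan stopped. Here is a Python sketch of the question's example format (a textual double, a CRLF, then ten raw bytes; the sample data is invented):

```python
import io

# Sample file: a double as text, a CRLF, then 10 bytes of raw data.
payload = b"3.14159\r\n" + bytes(range(10))
stream = io.BytesIO(payload)   # stands in for open(path, "rb")

# Scan for the newline without ever decoding the binary tail.
line = bytearray()
while True:
    ch = stream.read(1)
    if ch in (b"", b"\n"):
        break
    line += ch
value = float(line.rstrip(b"\r").decode("ascii"))

# The stream is now positioned exactly at the binary payload.
blob = stream.read(10)
print(value)      # 3.14159
print(len(blob))  # 10
```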
And of course the same methods exist for writing in System.IO.BinaryWriter. | Is this file format already fixed? If it's not, it's a really good idea to change to use a length-prefixed format for the strings. Then you can read just the right amount and convert it to a string.
Otherwise, you'll need to read chunks from the file, scan for the newline, and decode the right amount of data or (if you don't find the newline) either buffer it somewhere else (e.g. a MemoryStream) or just remember the starting point and rewind the stream appropriately. It *will* be ugly, but that's just because of the deficiency of the file format.
I would suggest you *don't* "over-decode" (i.e. decode the arbitrary binary data after the string) - while it may well not do any harm, in some encodings you could be reading an impossible sequence of binary data, which then starts getting into the realms of DecoderFallbacks and the like. | What's the best way to read mixed (i.e. text and binary) data? | [
"",
"c#",
"file-io",
""
] |
Let's say you were crazy enough to want to try to combine a number of different technologies just to show that you could do so - what kind of app would lend itself to this type of project as a demo for a potential employer?
Specifically I'm thinking of combining the following technologies:
PHP/Django/Rails/Flex
Does this sound ridiculous or could it be a useful exercise/demonstration of one's abilities? | If I were an employer, I would be much more impressed if you could implement the same sample application three times:
* Rails/Flex
* Django/Flex
* PHP/Flex
If you use REST, then the Flex side wouldn't need to change too much to support each server technology, and you would demonstrate that you're a versatile developer who can learn new technologies quickly. | [How about a blog](https://stackoverflow.com/questions/471940/why-does-every-man-and-his-dog-want-to-code-a-blogging-engine/472130#472130)?
Of course, some folks might consider it cliche, but it's also a fairly well-rounded sort of Web project -- there's some database-design in there, some app-dev in your language of choice, some SOA (Web services with SOAP or REST, maybe some RSS as well), some UI design, some UI dev in Flex, and so on.
It's also a familiar-enough idea to the kinds of folks who'll probably be interviewing you, so you won't have to spend lots of time explaining what it is before digging into the technical details, and it's non-trivial, but still something whose scope you can keep under control. I think it's a good project for learning a new Web technology in general.
In terms of which part does what, you might consider doing the CMS with Flex, the majority of the public-facing site with PHP (or Rails) and HTML/CSS, and then adding one or two Flex extras to that as well -- a photo gallery, maybe, or a video/media player. Or what the heck, be different -- just build the whole thing in Flex, like Adobe's done with [its XD site](https://xd.adobe.com/#/home).
Have fun and good luck! | PHP/Django/Rails/Flex hybrid demo - need project idea | [
"",
"php",
"ruby-on-rails",
"django",
"apache-flex",
""
] |
Is there any way to retrieve the comments from an XML file?
I have an XML file with comments in it, and I have to build a user interface based on each node in this file and the associated comments.
I can't seem to find a way to get those comments. I was able to get 'some' of them using simpleXML, but it didn't work for the root node and was acting pretty strange... some comments were put in their own node, some others were left as children, and all comments were put in the same node... Not sure this makes any sense :) The point is that simpleXML broke the structure of the comments, and it wasn't good for my needs. | You can use XMLReader to read through all of the nodes and pull out the comments. I've included some sample code to get you started; it just pulls out the comments, and doesn't take into account *where* each comment sits (inside, below or above any XML node).
```
$comments = '';
$xml =<<<EOX
<xml>
<!--data here -->
<data>
<!-- more here -->
<more />
</data>
</xml>
EOX;
$reader = new XMLReader();
$reader->XML($xml);
while ($reader->read()) {
if ($reader->nodeType == XMLReader::COMMENT) {
$comments .= "\n".$reader->value;
}
}
$reader->close();
echo "all comments below:\n-------------------".$comments;
```
The expected output is:
```
all comments below:
-------------------
data here
more here
```
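The same node walk works in other DOM and streaming APIs too. For comparison, a Python standard-library sketch over the sample document from above:

```python
from xml.dom import minidom

xml = """<xml>
<!--data here -->
<data>
<!-- more here -->
<more />
</data>
</xml>"""

def collect_comments(node, found):
    # COMMENT_NODE is node type 8 in the DOM specification.
    if node.nodeType == node.COMMENT_NODE:
        found.append(node.data)
    for child in node.childNodes:
        collect_comments(child, found)
    return found

comments = collect_comments(minidom.parseString(xml), [])
print(comments)   # ['data here ', ' more here ']
```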
So just the values of the comments (not the `<!-- -->`) will be taken, as well as whitespace. | It's simple if you use XPath. The `comment()` function matches comments. So the pattern
```
//comment()
```
finds all comments in the document.
In XSLT, for the general pattern where the comment precedes the element you're transforming, e.g.:
```
<!-- This is the comment -->
<element>...
```
you'd use a template like:
```
<xsl:template match="*[preceding-sibling::comment()]">
<xsl:variable name="comment" select="preceding-sibling::comment()[1]"/>
<!-- xsl:value-of $comment will now give you the text of the comment -->
...
``` | Is there any way to retrieve the comments from an XML file? | [
"",
"php",
"xml",
"simplexml",
""
] |
Is it the same as overloading? If not, can you please provide an example of each in C#?
I have read the responses to a similar question asked on SO, but I did not understand the responses posted to it.
Similar question asked [here](https://stackoverflow.com/questions/479923/is-c-a-single-dispatch-or-multiple-dispatch-language)
EDIT: With the new "dynamic" keyword in C# 4.0 ... would this make the language "multi dispatch" enabled? | C# uses single dispatch, which includes overloaded methods. When you have the code
```
stringBuilder.Append(parameter);
```
the dispatcher looks at all methods defined on the stringBuilder's class, and finds the correct one.
For a multiple dispatch example, let's look at Prolog (which is the first one I could think of). You can define a function in Prolog as such:
```
func(Arg1, Arg2) :- ....body....
```
This is not defined inside any class but in a global scope. Then, you can call `func(Arg1, Arg2)` on any two arguments and this function will be called. If you want something like overloading, you have to validate the argument types inside the function, and define it multiple times:
```
func(Arg1, Arg2) :- is_number(Arg1), is_string(Arg2), ....body....
func(Arg1, Arg2) :- is_string(Arg1), is_list(Arg2), ....body....
func(Arg1, Arg2) :- is_number(Arg1), is_list(Arg2), ....body....
```
Then, any two argument types you would send would both be checked - that is the multiple dispatch part.
In short, single dispatch only looks at methods defined on the first argument (in our first example, the stringBuilder), then resolves the correct overload to call using the other arguments. Multiple dispatch has methods/functions defined in the global scope and treats all arguments the same during overload resolution.
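To make the contrast concrete, here is a small Python sketch. Python, like C#, dispatches on the receiver only, so the multiple-dispatch half is simulated by hand with a table keyed on the runtime types of all arguments:

```python
# Single dispatch: the method is chosen by the runtime type of the receiver.
class Base:
    def name(self):
        return "Base"

class Derived(Base):
    def name(self):
        return "Derived"

obj = Derived()
print(obj.name())   # Derived

# Simulated multiple dispatch: look up the runtime types of *all*
# arguments in a table at call time.
_COLLIDE = {
    (Base, Base): "base/base",
    (Base, Derived): "base/derived",
    (Derived, Derived): "derived/derived",
}

def collide(a, b):
    return _COLLIDE[(type(a), type(b))]

print(collide(Base(), Derived()))     # base/derived
print(collide(Derived(), Derived()))  # derived/derived
```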
I hope I made myself clear, this is a pretty tough subject.
---
Update: I forgot to mention, multiple dispatch happens at runtime while single dispatch happens at compile time.
Update #2: Apparently, that was not true. | Multiple dispatch is a "form" of overloading...
For example, C# is single dispatch because it works out what method to call based on only one argument, the "this" pointer. When you have something like this:
```
Base b = new Derived();
b.DoSomething();
```
the method Derived.DoSomething is called even though you called it through the base pointer. Now, if we've got the following:
```
class Derived : Base
{
public override void Process(Stream stream);
public override void Process(FileStream stream);
public override void Process(MemoryStream stream);
}
```
And we do this:
```
Stream stream= new MemoryStream(...);
Base b= new Derived();
b.Process(stream);
```
Then we will call the **Process(Stream)** method from `Derived` as C# does a single dispatch on the object pointer (b) and then uses the compile time information to decide which method to call. Even though **stream** is a MemoryStream a single dispatch system will ignore this.
In a multi-dispatch system the object pointer will be looked at (as in C#) AND the runtime types of the arguments will be examined. In the above example, because **stream** is actually a MemoryStream the system would call the **Process(MemoryStream)** method. | What is - Single and Multiple Dispatch (in relation to .NET)? | [
"",
"c#",
"multiple-dispatch",
"single-dispatch",
""
] |
Given the tables:
role: roleid, name
permission: permissionid, name
role_permission: roleid, permissionid
I have a set of permissions, and I want to see if there is an existing role that has these permissions, or if I need to make a new one. Note that I already know the permissionid, so really the permission table can be ignored - I just included it here for clarity.
Is this possible to do in a SQL query? I imagine it would have to be a dynamically-generated query.
If not, is there any better way than the brute force method of simply iterating over every role, and seeing if it has the exact permissions?
Note, I'm looking for a role that has the exact set of permissions - no more, no less. | You can select all roles that have the subset of permissions you are looking for. Count the number of permissions and see if it's exactly equal to the number of permissions you need:
```
select r.roleid
from role r
where not exists (select * from role_permission rp where rp.roleid = r.roleid and rp.permissionid not in (1,2,3,4)) -- id of permissions
and (select count(*) from role_permission rp where rp.roleid = r.roleid) = 4 -- number of permissions
``` | Having made a hash of my first answer to this question, here is a slightly left field alternative which works but does involve adding data to the database.
The trick is to add a column to the permission table that holds a distinct power-of-two value for each row, so that every combination of permissions sums to a unique total.
This is a fairly common pattern and will give precise results. The downside is you have to code your way around hiding the numerical equivalents.
```
id int(10)
name varchar(45)
value int(10)
```
Then the contents will become:
```
Permission:             Role              Role_Permission
id  name    value       id  name          roleid  permissionid
--  ----    -----       --  ----          ------  ------------
1   Read    8           1   Admin         1       1
2   Write   16          2   DataAdmin     1       2
3   Update  32          3   User          1       3
4   Delete  64                            1       4
                                          2       1
                                          2       3
                                          2       4
```
Then each role's combination of permissions gives a unique value:
```
SELECT x.roleid, sum(value) FROM role_permission x
inner join permission p
on x.permissionid = p.id
Group by x.roleid
```
Giving:
```
roleid  sum(value)
------  ----------
1       120         (the sum of permissions 1+2+3+4 = 120)
2       104         (the sum of permissions 1+3+4 = 104)
```
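A runnable check of the bitmask idea, with SQLite standing in for the original database (the permission values are copied from the tables above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE permission (id INTEGER, name TEXT, value INTEGER);
    CREATE TABLE role_permission (roleid INTEGER, permissionid INTEGER);
    INSERT INTO permission VALUES
        (1, 'Read', 8), (2, 'Write', 16), (3, 'Update', 32), (4, 'Delete', 64);
    INSERT INTO role_permission VALUES
        (1, 1), (1, 2), (1, 3), (1, 4),
        (2, 1), (2, 3), (2, 4);
""")

rows = con.execute("""
    SELECT x.roleid, SUM(value)
    FROM role_permission x
    INNER JOIN permission p ON x.permissionid = p.id
    GROUP BY x.roleid
    ORDER BY x.roleid
""").fetchall()
print(rows)   # [(1, 120), (2, 104)]
```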
Now where did I leave that corkscrew... | Using SQL to search for a set in a one-to-many relationship | [
"",
"sql",
"search",
""
] |
At my company we have a group of 8 web developers for our business web site (entirely written in PHP, but that shouldn't matter). Everyone in the group is working on different projects at the same time and whenever they're done with their task, they immediately deploy it (cause business is moving fast these days).
Currently the development happens on one shared server with all developers working on the same code base (using RCS to "lock" files away from others). When deployment is due, the changed files are copied over to a "staging" server and then a sync script uploads the files to our main webserver from where it is distributed over to the other 9 servers.
Quite happily, the web dev team asked us for help in order to improve the process (after us complaining for a while) and now our idea for setting up their dev environment is as follows:
* A dev server with virtual directories, so that everybody has their own codebase,
* SVN (or any other VCS) to keep track of changes
* a central server for testing holding the latest checked in code
The question is now: How do we manage to deploy the changed files on to the server without accidentaly uploading bugs from other projects? My first idea was to simply export the latest revision from the repository, but that would not give full control over the files.
How do you manage such a situation? What kind of deployment scripts do you have in action?
(As a special challenge: the website has organically grown over the last 10 years, so the projects are not split up in small chunks, but files for one specific feature are spread all over the directory tree.) | Cassy - you obviously have a long way to go before you'll get your source code management entirely in order, but it sounds like you are on your way!
Having individual sandboxes will definitely help. Next, make sure that the website is ALWAYS just a clean checkout of a particular revision, tag or branch from Subversion.
We use git, but we have a similar setup. We tag a particular version with a version number (in git we also get to add a description to the tag; good for release notes!) and then we have a script that anyone with access to "do a release" can run that takes two parameters -- which system is going to be updated (the datacenter and if we're updating the test or the production server) and then the version number (the tag).
The script uses sudo to then run the release script in a shared account. It does a checkout of the relevant version, minimizes JavaScript and CSS [1], pushes the code to the relevant servers for the environment and then restarts what needs to be restarted. The last line of the release script connects to one of the webservers and tails the error log.
On [our](http://www.yellowbot.com/) [websites](http://www.weblocal.ca/) we include an html comment at the bottom of each page with the current server name and the version -- makes it easy to see "What's running right now?"
[1] and a bunch of other housekeeping tasks like that... | You should consider using branching and merging for individual projects (on the same codebase), if they make huge changes to the shared codebase.
We usually have a local dev environment (meaning, a webserver running locally) for testing the uncommitted code (you don't want to commit non-functioning code at all), but that dev environment could even be on a separate server using shared folders.
However, committed code should be deployed to a staging server for testing before putting it in production. | How do you deploy a website to your webservers? | [
"",
"php",
"svn",
"deployment",
"webserver",
""
] |
When using SQL, are there any benefits of using `=` in a `WHERE` clause instead of `LIKE`?
Without any special operators, `LIKE` and `=` are the same, right? | # Different Operators
`LIKE` and `=` are different operators. Most answers here focus on the wildcard support, which is not the only difference between these operators!
`=` is a comparison operator that operates on numbers and strings. When comparing strings, the comparison operator compares *whole strings*.
`LIKE` is a string operator that compares *character by character*.
To complicate matters, both operators use a [collation](http://dev.mysql.com/doc/refman/5.1/en/charset-general.html) which can have important effects on the result of the comparison.
# Motivating Example
Let us first identify an example where these operators produce obviously different results. Allow me to quote from the MySQL manual:
> Per the SQL standard, LIKE performs matching on a per-character basis, thus it can produce results different from the = comparison operator:
```
mysql> SELECT 'ä' LIKE 'ae' COLLATE latin1_german2_ci;
+-----------------------------------------+
| 'ä' LIKE 'ae' COLLATE latin1_german2_ci |
+-----------------------------------------+
| 0 |
+-----------------------------------------+
mysql> SELECT 'ä' = 'ae' COLLATE latin1_german2_ci;
+--------------------------------------+
| 'ä' = 'ae' COLLATE latin1_german2_ci |
+--------------------------------------+
| 1 |
+--------------------------------------+
```
Please note that this page of the MySQL manual is called *String Comparison Functions*, and `=` is not discussed, which implies that `=` is not strictly a string comparison function.
# How Does `=` Work?
The [SQL Standard § 8.2](http://www.contrib.andrew.cmu.edu/%7Eshadow/sql/sql1992.txt) describes how `=` compares strings:
> The comparison of two character strings is determined as follows:
>
> a) If the length in characters of X is not equal to the length
> in characters of Y, then the shorter string is effectively
> replaced, for the purposes of comparison, with a copy of
> itself that has been extended to the length of the longer
> string by concatenation on the right of one or more pad
> characters, where the pad character is chosen based on CS. If
> CS has the NO PAD attribute, then the pad character is an
> implementation-dependent character different from any
> character in the character set of X and Y that collates less
> than any string under CS. Otherwise, the pad character is a
> <space>.
>
> **b) The result of the comparison of X and Y is given by the
> collating sequence CS.**
>
> c) Depending on the collating sequence, two strings may
> compare as equal even if they are of different lengths or
> contain different sequences of characters. When the operations
> MAX, MIN, DISTINCT, references to a grouping column, and the
> UNION, EXCEPT, and INTERSECT operators refer to character
> strings, the specific value selected by these operations from
> a set of such equal values is implementation-dependent.
(Emphasis added.)
What does this mean? It means that when comparing strings, the `=` operator is just a thin wrapper around the current collation. A collation is a library that has various rules for comparing strings. Here is an example of [a binary collation from MySQL](https://github.com/mysql/mysql-server/blob/5.7/strings/ctype-bin.c):
```
static int my_strnncoll_binary(const CHARSET_INFO *cs __attribute__((unused)),
const uchar *s, size_t slen,
const uchar *t, size_t tlen,
my_bool t_is_prefix)
{
size_t len= MY_MIN(slen,tlen);
int cmp= memcmp(s,t,len);
return cmp ? cmp : (int)((t_is_prefix ? len : slen) - tlen);
}
```
This particular collation happens to compare byte-by-byte (which is why it's called "binary" — it doesn't give any special meaning to strings). Other collations may provide more advanced comparisons.
For example, here is a [UTF-8 collation](https://github.com/mysql/mysql-server/blob/5.7/strings/ctype-utf8.c#L8241) that supports case-insensitive comparisons. The code is too long to paste here, but go to that link and read the body of `my_strnncollsp_utf8mb4()`. This collation can process multiple bytes at a time and it can apply various transforms (such as case insensitive comparison). The `=` operator is completely abstracted from the vagaries of the collation.
# How Does `LIKE` Work?
The [SQL Standard § 8.5](http://www.contrib.andrew.cmu.edu/%7Eshadow/sql/sql1992.txt) describes how `LIKE` compares strings:
> The <predicate>
>
> `M LIKE P`
>
> is true if there exists a partitioning of M into substrings
> such that:
>
> i) A substring of M is a sequence of 0 or more contiguous
> <character representation>s of M and each <character
> representation> of M is part of exactly one substring.
>
> ii) If the i-th substring specifier of P is an arbitrary
> character specifier, the i-th substring of M is any single
> <character representation>.
>
> iii) If the i-th substring specifier of P is an arbitrary string
> specifier, then the i-th substring of M is any sequence of
> 0 or more <character representation>s.
>
> **iv) If the i-th substring specifier of P is neither an
> arbitrary character specifier nor an arbitrary string specifier,
> then the i-th substring of M is equal to that substring
> specifier according to the collating sequence of
> the <like predicate>, without the appending of <space>
> characters to M, and has the same length as that substring
> specifier.**
>
> v) The number of substrings of M is equal to the number of
> substring specifiers of P.
(Emphasis added.)
This is pretty wordy, so let's break it down. Items ii and iii refer to the wildcards `_` and `%`, respectively. If `P` does not contain any wildcards, then only item iv applies. This is the case of interest posed by the OP.
In this case, it compares each "substring" (individual characters) in `M` against each substring in `P` using the current collation.
# Conclusions
The bottom line is that when comparing strings, `=` compares the entire string while `LIKE` compares one character at a time. Both comparisons use the current collation. This difference leads to different results in some cases, as evidenced in the first example in this post.
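A related, easily demonstrated consequence of items (ii) and (iii) above: `=` treats the `%` wildcard as a literal character, while `LIKE` interprets it. SQLite is used here purely for a self-contained example:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employees (name TEXT)")
con.executemany("INSERT INTO employees VALUES (?)",
                [("Chris",), ("Christopher",), ("Chris%",)])

like_rows = con.execute(
    "SELECT name FROM employees WHERE name LIKE 'Chris%' ORDER BY name").fetchall()
eq_rows = con.execute(
    "SELECT name FROM employees WHERE name = 'Chris%'").fetchall()

print(like_rows)  # [('Chris',), ('Chris%',), ('Christopher',)]
print(eq_rows)    # [('Chris%',)]  -- '%' is just another character to '='
```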
Which one should you use? Nobody can tell you that — you need to use the one that's correct for your use case. Don't prematurely optimize by switching comparison operators. | The equals (=) operator is a comparison operator that "compares two values for equality." In other words, in an SQL statement, it won't return true unless both sides of the equation are equal. For example:
```
SELECT * FROM Store WHERE Quantity = 200;
```
The LIKE operator "implements a pattern match comparison" that attempts to match "a string value against a pattern string containing wild-card characters." For example:
```
SELECT * FROM Employees WHERE Name LIKE 'Chris%';
```
LIKE is generally used only with strings and equals (I believe) is faster. The equals operator treats wild-card characters as literal characters. The difference in results returned are as follows:
```
SELECT * FROM Employees WHERE Name = 'Chris';
```
And
```
SELECT * FROM Employees WHERE Name LIKE 'Chris';
```
Would return the same result, though using LIKE would generally take longer as it's a pattern match. However,
```
SELECT * FROM Employees WHERE Name = 'Chris%';
```
And
```
SELECT * FROM Employees WHERE Name LIKE 'Chris%';
```
Would return different results: using "=" returns only the rows whose name is literally "Chris%", while the LIKE operator will return anything starting with "Chris".
Some good info can be found [here](http://www.firstsql.com/tutor2.htm). | Equals(=) vs. LIKE | [
"",
"sql",
"performance",
"equals",
"sql-like",
""
] |
I need to write a select query that joins the tables based on a condition (in this case, based on a value in one of the columns). I would like to do something like this:
```
SELECT *
FROM TableA
INNER JOIN TableB ON (TableA.Column1 = TableB.Column1 OR TableA.Column1 = 0) -- Does not work!
``` | This should achieve what you need:
```
SELECT *
FROM TableA A
WHERE A.Column1 = 0
OR EXISTS(
SELECT B.Column1
FROM TableB B
WHERE B.Column1 = A.Column1
);
``` | I'm not exactly sure what you are doing but it seems like you are looking for an outer join:
```
SELECT *
FROM TableA LEFT OUTER JOIN TableB ON TableA.Column1 = TableB.Column1
WHERE TableB.Column1 IS NOT NULL
OR TableA.Column1 = 0
``` | Conditional Join possible in SQL Server 2005? | [
"",
"sql",
"sql-server",
""
] |
I'm using reflection to add some data to a private variable inside a class from a third-party library. Along the way there are about four different Exceptions that can be thrown; all of them are related to reflection, and all of them are very unlikely to occur. I'm hardcoding the name of the class and variable involved. I'm unlikely to receive any class not found or field not found errors, unless the library gets upgraded some day and has changed significantly.
I'd rather not declare all four of these exceptions for my caller to handle. He's likely to never see them. I'd like to just catch all of these and throw another exception to say "A Java reflection error has occured; it is likely that the library has been upgraded and changed in a way incompatible with this method." Is there a standard Java Exception I can throw that indicates just a general reflection error? Should I define my own? Or would it be best to just declare that this method can throw all of the possible reflection exceptions? | You can turn all the Exceptions into an AssertionError if you never expect them to occur.
InvocationTargetException can be unwrapped if you want to deal with a specific exception.
If you want to throw the actual exception thrown by the method rather than InvocationTargetException you can use this trick, but it may be more confusing than useful
```
} catch (InvocationTargetException e) {
// Throw any exception in the current thread (even if it is a checked exception)
Thread.currentThread().stop(e.getCause());
}
``` | I usually ask myself these questions:
* Can whomever calls this method handle these different exception types differently?
* ...Or would they treat them all the same?
* Can the caller/user even recover from this error?
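If the answers point to "treat them all the same, and it's unrecoverable", the usual pattern is to funnel every reflection exception into one general unchecked exception that keeps the original as its cause. A minimal sketch (class and method names are made up for illustration):

```java
import java.lang.reflect.Field;

public class ReflectionHelper {

    /** One general, unchecked exception for every kind of reflection failure. */
    public static class ReflectionFailure extends RuntimeException {
        public ReflectionFailure(String message, Throwable cause) {
            super(message, cause); // keep the original exception as the cause
        }
    }

    /** Stand-in for a class from the third-party library. */
    public static class Secret {
        private int answer = 42;
    }

    /** Reads a private field, wrapping all reflection errors in one type. */
    public static Object readField(Object target, String fieldName) {
        try {
            Field f = target.getClass().getDeclaredField(fieldName);
            f.setAccessible(true);
            return f.get(target);
        } catch (NoSuchFieldException | IllegalAccessException e) {
            throw new ReflectionFailure(
                "Reflection failed; the library may have been upgraded", e);
        }
    }

    public static void main(String[] args) {
        System.out.println(readField(new Secret(), "answer")); // prints 42
    }
}
```

Callers then only ever see `ReflectionFailure`, and the original exception is still reachable via `getCause()` for debugging.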
If the calling code is likely to treat all four of these exceptions the same (as an unrecoverable error), then it absolutely makes sense to capture each of them and re-throw a more general (single) exception. If you do, make sure you attach the generated exception as an inner exception, just to help any debugging or troubleshooting on the other end. | Catching several exceptions and rethrowing a general exception | [
"",
"java",
"exception",
""
] |
I'm trying to learn advanced sql and how to use system queries (sql server). The below query is a little confusing.
```
CREATE PROC dbo.ShowHierarchy
(
@Root int
)
AS
BEGIN
SET NOCOUNT ON
DECLARE @EmpID int, @EmpName varchar(30)
SET @EmpName = (SELECT EmpName FROM dbo.Emp WHERE EmpID = @Root)
PRINT REPLICATE('-', @@NESTLEVEL * 4) + @EmpName
SET @EmpID = (SELECT MIN(EmpID) FROM dbo.Emp WHERE MgrID = @Root)
WHILE @EmpID IS NOT NULL
BEGIN
EXEC dbo.ShowHierarchy @EmpID
SET @EmpID = (SELECT MIN(EmpID) FROM dbo.Emp WHERE MgrID = @Root AND EmpID > @EmpID)
END
END
GO
```
Taken from here:
<http://vyaskn.tripod.com/hierarchies_in_sql_server_databases.htm>
Every time the query runs, how is the @EmpId parameter incremented? Does it do itself? Also, does the @root increment on every recursion? E.g. CEO is root, go to immediate subordinate, that immediate subordinate is now @root, etc.
Thanks | > Every time the query runs, how is the @EmpId parameter incremented? Does it do itself?
This one:
```
SET @EmpID = (SELECT MIN(EmpID) FROM dbo.Emp WHERE MgrID = @Root AND EmpID > @EmpID)
```
selects minimal `EmpID` that is greater than the one already selected, and assigns it to the `@EmpID`.
Of course, if there are any, new `@EmpID` will be greater than the old one, which means that `@EmpID` is incremented. | In the last select @EmpID is set to the lowest EmpID that is larger than @EmpID. This way, @EmpID is "incremented" until there is no larger EmpID. In that case, the select min(EmpID) returns a null and the while-loop ends.
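In other words, it is a cursor-free "seek the next key" loop: fetch the smallest key, then repeatedly fetch the smallest key larger than the last one visited. A rough Python sketch of the same traversal (data and names are made up):

```python
# Hypothetical EmpID -> name data; the dict keys play the role of dbo.Emp's EmpIDs.
emp = {2: "CEO", 5: "VP", 9: "Developer"}

def next_emp_id(after=None):
    # Same idea as: SELECT MIN(EmpID) FROM dbo.Emp [WHERE EmpID > @EmpID]
    candidates = [k for k in emp if after is None or k > after]
    return min(candidates) if candidates else None

visited = []
emp_id = next_emp_id()
while emp_id is not None:          # WHILE @EmpID IS NOT NULL
    visited.append(emp_id)
    emp_id = next_emp_id(emp_id)   # seek the next-larger key

print(visited)  # [2, 5, 9]
```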
@Root is changed through the recursive call to ShowHierarchy in the EXEC-Statement. Basically, the current @EmpID becomes the new @Root in the execution that is triggered. | Confused how this query works | [
"",
"sql",
"t-sql",
""
] |
I am trying to find a simple way to layout a 3 pane window using wxPython.
I want to have a tree list in the left pane, then have a right pane that is split into two - with an edit component in the top part and a grid component in the bottom part.
Something along the lines of:
```
--------------------------------------
| | |
| | Edit |
| Tree | Control |
| Control | |
| |----------------------|
| | |
| | Grid |
| | |
--------------------------------------
```
I would like the window to be re-sizable and give the user the ability to change the (relative) size of each of the components within the windows by dragging the borders.
I figure that I need some combination of sizers and/or splitter-window components but can't find a decent example of this kind of window in the documentation or on the web. | First of all download [wxGlade](http://wxglade.sourceforge.net/) a gui builder for wxPython (alternative [XRCed](http://xrced.sourceforge.net/), i prefere wxGlade).
Then you have to decide if you want to use a [GridSizer](http://www.wxpython.org/docs/api/wx.GridSizer-class.html) or a [Splitter](http://www.wxpython.org/docs/api/wx.SplitterWindow-class.html) and you are done. Below you find both (between Tree and right side is a GridSizer -> resizes automatically). Between Edit and GridCtrl is a Sizer (manual Resize).
Regards.
One minute of work, without entering a single line of code:
```
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# generated by wxGlade 0.6.3 on Sat Feb 07 10:02:31 2009
import wx
import wx.grid
# begin wxGlade: extracode
# end wxGlade
class MyDialog(wx.Dialog):
def __init__(self, *args, **kwds):
# begin wxGlade: MyDialog.__init__
kwds["style"] = wx.DEFAULT_DIALOG_STYLE|wx.RESIZE_BORDER|wx.THICK_FRAME
wx.Dialog.__init__(self, *args, **kwds)
self.window_1 = wx.SplitterWindow(self, -1, style=wx.SP_3D|wx.SP_BORDER)
self.tree_ctrl_1 = wx.TreeCtrl(self, -1, style=wx.TR_HAS_BUTTONS|wx.TR_LINES_AT_ROOT|wx.TR_DEFAULT_STYLE|wx.SUNKEN_BORDER)
self.text_ctrl_1 = wx.TextCtrl(self.window_1, -1, "This is the Edit", style=wx.TE_MULTILINE)
self.grid_1 = wx.grid.Grid(self.window_1, -1, size=(1, 1))
self.__set_properties()
self.__do_layout()
# end wxGlade
def __set_properties(self):
# begin wxGlade: MyDialog.__set_properties
self.SetTitle("dialog_1")
self.grid_1.CreateGrid(10, 3)
# end wxGlade
def __do_layout(self):
# begin wxGlade: MyDialog.__do_layout
grid_sizer_1 = wx.FlexGridSizer(1, 2, 3, 3)
grid_sizer_1.Add(self.tree_ctrl_1, 1, wx.EXPAND, 0)
self.window_1.SplitHorizontally(self.text_ctrl_1, self.grid_1)
grid_sizer_1.Add(self.window_1, 1, wx.EXPAND, 0)
self.SetSizer(grid_sizer_1)
grid_sizer_1.Fit(self)
grid_sizer_1.AddGrowableRow(0)
grid_sizer_1.AddGrowableCol(0)
grid_sizer_1.AddGrowableCol(1)
self.Layout()
# end wxGlade
# end of class MyDialog
class MyApp(wx.App):
def OnInit(self):
wx.InitAllImageHandlers()
mainDlg = MyDialog(None, -1, "")
self.SetTopWindow(mainDlg)
mainDlg.Show()
return 1
# end of class MyApp
if __name__ == "__main__":
app = MyApp(0)
app.MainLoop()
``` | This is a very simple layout using wx.aui and three panels. I guess you can easily adapt it to suit your needs.
Orjanp...
```
import wx
import wx.aui
class MyFrame(wx.Frame):
def __init__(self, *args, **kwargs):
wx.Frame.__init__(self, *args, **kwargs)
self.mgr = wx.aui.AuiManager(self)
leftpanel = wx.Panel(self, -1, size = (200, 150))
rightpanel = wx.Panel(self, -1, size = (200, 150))
bottompanel = wx.Panel(self, -1, size = (200, 150))
self.mgr.AddPane(leftpanel, wx.aui.AuiPaneInfo().Bottom())
self.mgr.AddPane(rightpanel, wx.aui.AuiPaneInfo().Left().Layer(1))
self.mgr.AddPane(bottompanel, wx.aui.AuiPaneInfo().Center().Layer(2))
self.mgr.Update()
class MyApp(wx.App):
def OnInit(self):
frame = MyFrame(None, -1, '07_wxaui.py')
frame.Show()
self.SetTopWindow(frame)
return 1
if __name__ == "__main__":
app = MyApp(0)
app.MainLoop()
``` | How do I layout a 3 pane window using wxPython? | [
"",
"python",
"layout",
"wxpython",
"elasticlayout",
""
] |
I'm beginning to fall in love with Extension Methods, but I just don't know how to create an EM only for a determinate Object type.
I have for example:
```
public static void AddPhoneNumberToContact(this Contact contact, PhoneType type, String number)
{
lock (contact)
{
PhoneRow pr = PhoneRow.CreateNew();
pr.SetDefaults();
pr.PtypeIdx = type;
pr.PhoneNumber = number;
contact.Phones.Add(pr);
pr = null;
}
}
```
My problem is that I want to also Have this method in the Person object, and that is why I named
```
AddPhoneNumberToContact
AddPhoneNumberToPerson
```
Is there a way to have AddPhoneNumber and deal with the object that is provided?
or the solution is to have
```
public static void AddPhoneNumber(this object contact, ...
{
...
if(typeof(Contact) == contact)
((Contact)contact).Phones.Add(pr);
else if(typeof(Person) == contact)
((Person)contact).Phones.Add(pr);
}
```
Thank you. | How about writing two extension methods:
```
public static void AddPhoneNumber(this Contact contact, PhoneType type);
```
and
```
public static void AddPhoneNumber(this Person person, PhoneType type);
```
Looks cleaner to me.
If there's some common code between the two, extract that into a separate method. | Make `Contact` and `Person` implement common interface - say `IContactWithPhoneNumbers` - and then write an extension method "for this interface".
```
public interface IContactWithPhoneNumbers {}
public class Contact : IContactWithPhoneNumbers {}
public class Person : IContactWithPhoneNumbers {}
public static void AddPhoneNumber(this IContactWithPhoneNumbers obj) {}
``` | How can I apply a common extension method to multiple unrelated types in a third party SDK? | [
"",
"c#",
"extension-methods",
""
] |
If I have a class org.foobar.MyClass and want to put it in a JAR file, do I have to put it in the JAR's /org/foobar/ directory, or can I put it in /bin/org/foobar/ and somehow specify /bin/ as classpath inside the JAR itself? | The usual way that everyone does this -- so far I've never seen a JAR do something different -- is to put class org.foobar.MyClass in the JAR file at the JAR's /org/foobar/ directory. I can't imagine a good reason for doing something differently, as it would impede normal use of this JAR by anyone not doing unusual things. | you can include the Class-Path property in your Manifest, listing the jar files your app depends on. The paths will be considered relative to the location of your executable JAR.
For example if your app.jar contains this in the MANIFEST.MF:
Class-Path: lib1.jar,lib/lib2.jar
Then the JVM will expect to find lib1.jar in the same dir as app.jar and a subdirectory called lib containing lib2.jar. | Specifying classpath for classes inside the JAR itself | [
"",
"java",
"jar",
"classpath",
""
] |
I have a logger system which basically is a fancy way of writing my data to std::clog in a thread safe way.
I also redirect `std::clog` to a file like this:
```
int main() {
std::ofstream logfile(config::logname, std::ios::app);
std::streambuf *const old_buffer = std::clog.rdbuf(logfile.rdbuf());
// .. the guts of the application
std::clog.rdbuf(old_buffer);
}
```
This works great... however, my application also produces a very large amount of logs. I was wondering what would be a good way to properly rotate my log files. Is there a safe way to switch out the file via a cron task? I would guess no.
The only thing I can think of that would definitely work is if I had the application itself open a new file, and redirect the rdbuf of clog to that while holding the logging mutex. But that feels like a cheap solution, and I would need to check so see if it is time to rotate logs fairly often for it to be effective. There has got to be a better way. | You can use the built-in log rotation method configured in /etc/logrotate.conf and/or /etc/logrotate.d/ - it's common to have logrotate send your app a SIGUSR1 as a signal to close and re-open all your log files. | Or just use syslog instead of your custom logging scheme and the logs get rotated by logrotate anyway. -- depending on how it's configured but on most desktop/server systems it's already set to rotate them. | Linux/c++ log rotation scheme | [
"",
"c++",
"linux",
"logging",
"rotation",
""
] |
When you get a doGet or doPost call in a servlet you can use `getParameterXXX()` to get either the query string or the post data in one easy place.
If the call was a GET, you get data from the url/query string.
If the call was a POST, you get the post data all parsed out for you.
Except as it turns out, if you don't put an 'action' attribute in your form call.
If you specify a fully qualified or partially qualified URL for the action param, everything works great. If you don't, the browser will call the same URL as it did on the previous page submit, and if there happens to be query string data there, you'll get that as well as POST data, and there's no way to tell them apart.
Or is there?
I'm looking through the request object, I see where the post data comes from, I'm just trying to figure out where the GET data comes from, so I can erase the GET data on a post call and erase the post data on a GET call before it parses it out if possible.
Any idea what the safe way to do this is?
And lemme guess: you never tried to not put an action field in a form tag. :-) | In HTML, action is [REQUIRED](http://www.w3.org/TR/html401/interact/forms.html#h-17.3), so I guess the behavior will vary among clients. | You're right, I never tried not to put an action field in a form tag ;-) and I wouldn't, because of exactly what you're talking about. (Also, I think it's not valid HTML)
I don't know of any "clean" way to distinguish between GET and POST parameters, but you can access the raw query string using the `getQueryString()` method of `HttpServletRequest`, and you can access the raw POST data using the `getInputStream()` method of `ServletRequest`. (I'm looking at the Tomcat API docs specifically here, although I think those are both part of the standard Servlet API) Then you could parse the POST data and GET data separately if you want. They will (or should normally) both be formatted the same way, i.e.
```
name1=value1&name2=value2&...
```
though possibly with the ampersands replaced by semicolons (which you can technically do in HTTP/1.1, I didn't know that until recently) | How do I separate out query string params from POST data in a java servlet | [
"",
"java",
"servlets",
"http-post",
"http-get",
""
] |
We are trying to build a Help Desk ticketing system just for intranet. Deciding upon the ASP .NET (C#) with Visual Studio 2008 Express (think we have a full version floating around if we need it). Nothing fancy, couple of pages grabbing NTLM information, system information and storing it along with their problem in a database. Goal is to make it simple, but instead of using our SQL Server 2000 back end, the admin wants me to use MS Access. I have the GridView and connections running smooth. Can pull select queries until my heart is content. However, tying in a couple variables with a text box on a submit button into say an INSERT statement.. well I don't even know where to begin with MS Access. Every internet example is in VB .NET plus seems to be hand coding what Visual Studio has already done for me in a few clicks.
Is MS Access going to be too hard for all we want to do? If not, where do we begin to simply submit this data into the tables?
Edit: After a bunch of playing around we have the OleDb working. It's not pretty; yes, SQL Server would be awesome, but sometimes you just have to play ball.
Edit: Anyone looking for an actual coded answer, here you are. There has got to be others out there in the same boat.
```
string userIP = Request.UserHostAddress.ToString();
string userDNS = Request.UserHostName.ToString();
string duser = Request.ServerVariables["LOGON_USER"]; //NTLM Domain\Username
string computer = System.Environment.MachineName.ToString(); //Computer Name
string connectionString = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\\helpdesk.MDB;";
OleDbConnection conn = new OleDbConnection(connectionString);
conn.Open();
OleDbCommand cmd = new OleDbCommand();
cmd.Connection = conn;
cmd.CommandText = "INSERT INTO ticketing ([user], [comp], [issue]) VALUES (@duser, @computer, @col3)";
cmd.Parameters.Add("@duser", OleDbType.VarChar).Value = duser;
cmd.Parameters.Add("@computer", OleDbType.VarChar).Value = computer;
cmd.Parameters.Add("@col3", OleDbType.LongVarChar).Value = TextBox1.Text;
cmd.ExecuteNonQuery();
conn.Close();
``` | I also suggest using SQL Server, but considering your problem:
What is your problem writing an INSERT query for Access?
You should make use of the classes that you'll find in the System.Data.OleDb namespace:
* OleDbConnection
* OleDbCommand
Quick'n dirty code (not compiled whatsoever):
```
OleDbConnection conn = new OleDbConnection (connectionString);
OleDbCommand command = new OleDbCommand();
command.Connection = conn;
command.CommandText= "INSERT INTO myTable (col1, col2) VALUES (@p_col1, @p_col2)";
command.Parameters.Add ("@p_col1", OleDbType.VarChar).Value = textBox1.Text;
...
command.ExecuteNonQUery();
```
There are some caveats with the OleDb classes however (like adding the Parameters to the collection in the order that they occur in your SQL statement, for instance). | The admin is nuts. Access is an `in-process` database, and as such is not well suited for web sites where users will be creating or updating records.
But as far as creating INSERT queries go, Access is no harder than anything else. If you can't create INSERT queries for Access you'll probably have trouble with SQL Server as well. | INSERT from ASP.NET to MS Access | [
"",
"c#",
"asp.net",
"sql-server",
"visual-studio",
"ms-access",
""
] |
I want to use an `asp:LinkButton`, since it looks like a link, yet also has a server-side Click handler.
But the web-server seems unable to detect if javascript is disabled on the client, and doesn't render into a mechanism that still works.
Is it possible to have a link that looks like a link, but has server-side OnClick event handler?
---
## Answer
The answer is no, but below are some workaround ideas. Accepted the one with non-zero up-votes. | You could use CSS to style a button to look like a link, but this will be very much browser dependant, depending on the CSS implementation.
---
**Edit:** I feel compelled to complete my answer since it's been accepted.
An `asp:LinkButton` renders to an HTML link, and as such cannot **post** to a web page, but can only make **get** requests. To work around this MS use JavaScript to action the post. This is not possible however if JavaScript is disabled.
`asp:Button` and `asp:ImageButton` are different. They submit the HTML form by posting to a web page (or get depending on the form attributes) by using true HTML form elements. So these will work without JS intervention.
Some browsers will allow CSS styling to style a button to look like a link, and as such this can be used to work around the issue. | Just an idea:
Render an input button and use javascript to change it into a link. The button would work for non-javascript enabled browser and become a link for those who have javascript. | ASP.NET: asp:LinkButton with Javascript disabled? | [
"",
"javascript",
"asp.net",
"linkbutton",
""
] |
Solution to original problem (below) may have been discovered. I commented out
```
<identity>
...
</identity>
```
tag in the app.config file for the client. But I'm not sure if this is going to cause other problems, if it does, could someone let me know?
---
I've been following the [Getting Started tutorial](http://msdn.microsoft.com/en-us/library/ms730144.aspx) at MSDN for WCF.
I'm using Visual Studio.net 2008 on Vista x64. The Service program is running just fine. However, the client is having issues.
When I run the client I get this exception:
> SecurityNegotiationException was unhandled
> SOAP security negotiation with
> '<http://localhost:8000/ServiceModelSamples/Service/CalculatorService>' for target
> '<http://localhost:8000/ServiceModelSamples/Service/CalculatorService>' failed.
> See inner exception for more details.
The inner exception message says:
> "The Security Support Provider Interface (SSPI) negotiation failed."
My code is pretty much exactly the same as the example's. I've never really done anything with web services or WCF or anything similar. Anyone know how I could go about fixing this? Thanks.
Edit - I forgot to mention where the exception is thrown...
In the client's Main method:
```
CalculatorClient client = new CalculatorClient();
double value1 = 100.00;
double value2 = 15.99;
double result = client.Add(value1, value2); //This is the line that throws the error
//...
```
I added client.Open(); right after the client init, because I searched Google and someone was having problems too and that helped him, but when I do that the same exception is thrown on the new line. | SSPI failing is a Kerberos-related security negotiation failure. Are you on a domain, and you might have trouble communicating with the domain controller right now? Or is there another reason that you'd be unable to authenticate at the moment? | In your service contract attributes add following lines
```
[ServiceContract(Namespace = "http://Microsoft.ServiceModel.Samples")]
``` | "WCF Getting Started" MSDN Tutorial Problem | [
"",
"c#",
".net",
"wcf",
""
] |
I almost never hear the word CakePHP without hearing the word Rails shortly afterwards. Are these two frameworks mainly similar based on how they adhere to the MVC model or do they have other significant similarities/differences?
One of the main attractions of Rails for me is how easy it is to do Ajax. Would that also be true of CakePHP? | CakePHP is like a cheap, bastardized ripoff of Rails. It tries to be like Rails without doing any of the stuff that makes Rails great. It kinda feels similar, I guess.
CakePHP has an Ajax helper that does something similar to the Ajax-related helper methods in Rails, so yes, in some way, it's also true.
But CakePHP is really an exercise in futility: its authors wrote it so they wouldn't have to learn Ruby, even though learning Ruby and Rails together is probably easier than figuring out the monstrous mess that is CakePHP.
(This, coming from somebody who does CakePHP at his day job.)
---
Since y'all asked, my biggest complaint about CakePHP is how it manages to totally butcher the conveniences of object-oriented programming: sure, it implements the Active Record pattern just as much as Rails does, but it makes you pass around data structures.
I feel like any logical person would implement an ORM using faulting and dynamic loading of properties in to objects, which is exactly what ActiveRecord (the Rails library) does. The whole idea of setting a member variable called `$recursive` to determine which relationships to load is just plain flawed.
Being based on PHP is pretty fatal, too; you can't do anything with global state, you have to depend on `mod_rewrite`, you pay the startup penalty on every request. Sure, there's optimizations for any environment you're using, but still. People say Ruby is slow, but my own Rails apps run faster than their CakePHP equivalents, last I checked. I admit to being without data on this.
Worst of all, the bugs in CakePHP just about kill it for me. I could tell any number of stories about
* the time we spent two days figuring out why CakePHP refused to connect to the right database host
* the time half of our pages went blank because of the memory ceiling from using too many components
* the amount of code that lives in our AppController because every component load costs several **megabytes** of memory
* the black art of massaging data structures to make XML output work correctly
* how we traced down the blank `<javascript>` tag that shows up at the end of every page | Cake is laid out much like Rails and obviously takes a lot of inspiration & ideas from it. Cake is a nice introduction to MVC frameworks and rails seems pretty straightforward coming from cake experience.
Ajax is super easy with Cake using the JS helper. In fact everything is super easy. It's a great framework, especially for distributed apps (e.g. CMSs) or any other situation where the ease of hosting a PHP app is a benefit.
I would see the main advantages of rails being Ruby (and therefore the better OO implementation of rails etc) and the community. Gems (much fewer / less comprehensive cake plugins), training materials online, books (eloquent ruby anyone?) meetup groups etc. | How different is CakePHP from Ruby on Rails? | [
"",
"php",
"ruby-on-rails",
"cakephp",
"comparison",
""
] |
Is C# a superset of C in any way, like Objective-C or C++? Is there a way to compile C online with constructs such as compiler flags?
"",
"c#",
"c",
"programming-languages",
""
] |
I am having virtually the same problem as this:
[C# Update combobox bound to generic list](https://stackoverflow.com/questions/433281/c-update-combobox-bound-to-generic-list)
However, I am trying to change the displayed strings; not add, remove, or sort. I have tried the BindingList solution provided in the referenced question, but it has not helped.
I can see the combobox's DataSource property is correctly updated as I edit the items, but the contents displayed in the combobox are not those in the DataSource property.
my code looks as follows:
```
mSearchComboData = new List<SearchData>();
mSearchComboData.Add(new SearchData("", StringTable.PatientID));
mSearchComboData.Add(new SearchData("", StringTable.LastName));
mSearchComboData.Add(new SearchData("", StringTable.LastPhysician));
mSearchComboData.Add(new SearchData("", StringTable.LastExamDate));
mBindingList = new BindingList<SearchData>(mSearchComboData);
SearchComboBox.Items.Clear();
SearchComboBox.DataSource = mBindingList;
SearchComboBox.ValueMember = "Value";
SearchComboBox.DisplayMember = "Display";
...
```
When I try to update the content I do the following:
```
int idx = SearchComboBox.SelectedIndex;
mBindingList[idx].Display = value;
SearchComboBox.Refresh();
```
EDIT::
RefreshItems seems to be a private method. I just get the error message:
"'System.Windows.Forms.ListControl.RefreshItems()' is inaccessible due to its protection level"
ResetBindings has no effect. | If you were to change the entire object, meaning your entire SearchData object, then the bindinglist would have knowledge of this change, and therefore the correct events would internaly get fired, and the combobox would update. HOWEVER, since you're only updating one property, the bindinglist has no idea that something has changed.
What you need to do is have your SearchData class implement INotifyPropertyChanged. Here's a quick sample I wrote to demonstrate:
```
public class Dude : INotifyPropertyChanged
{
private string name;
private int age;
public int Age
{
        get { return this.age; } // return the backing field, not the property (avoids infinite recursion)
set
{
this.age = value;
if (this.PropertyChanged != null)
{
this.PropertyChanged(this, new PropertyChangedEventArgs("Age"));
}
}
}
public string Name
{
get
{
return this.name;
}
set
{
this.name = value;
if (this.PropertyChanged != null)
{
this.PropertyChanged(this, new PropertyChangedEventArgs("Name"));
}
}
}
public event PropertyChangedEventHandler PropertyChanged;
}
```
And here's some code to test:
```
private void button1_Click(object sender, EventArgs e)
{
//Populate the list and binding list with some random data
List<Dude> dudes = new List<Dude>();
dudes.Add(new Dude { Name = "Alex", Age = 27 });
dudes.Add(new Dude { Name = "Mike", Age = 37 });
dudes.Add(new Dude { Name = "Bob", Age = 21 });
dudes.Add(new Dude { Name = "Joe", Age = 22 });
this.bindingList = new BindingList<Dude>(dudes);
this.comboBox1.DataSource = bindingList;
this.comboBox1.DisplayMember = "Name";
this.comboBox1.ValueMember = "Age";
}
private void button3_Click(object sender, EventArgs e)
{
//change selected index to some random garbage
this.bindingList[this.comboBox1.SelectedIndex].Name = "Whatever";
}
```
Since my class now implements INotifyPropertyChanged, the binding list gets "notified" when something changes, and all this will thus work. | instead of `SearchComboBox.Refresh();`
try `SearchComboBox.RefreshItems();`
or `SearchComboBox.ResetBindings();`
I think it is really the latter that you need.
You can access the documentation for its members [here](http://msdn.microsoft.com/en-us/library/system.windows.forms.combobox_methods(VS.80).aspx).
"",
"c#",
".net",
"combobox",
"refresh",
""
] |
Doing odd/even styling with jQuery is pretty easy:
```
$(function() {
$(".oddeven tbody tr:odd").addClass("odd");
$(".oddeven tbody tr:even").addClass("even");
});
```
Now I came across an interesting problem today. What if you want to style alternating **groups** of elements? For example, alternating blocks of 3. Longhand this can be done this way:
```
$(function() {
    $(".oddeven3 tbody tr:nth-child(6n+1)").addClass("odd");
    $(".oddeven3 tbody tr:nth-child(6n+2)").addClass("odd");
    $(".oddeven3 tbody tr:nth-child(6n+3)").addClass("odd");
    $(".oddeven3 tbody tr:nth-child(6n+4)").addClass("even");
    $(".oddeven3 tbody tr:nth-child(6n+5)").addClass("even");
    $(".oddeven3 tbody tr:nth-child(6n+6)").addClass("even");
});
```
Seems a bit longwinded though. Now it can be somewhat simplified and made generic like this:
```
function oddEvenGroupStyle(groupSize) {
    var cycle = groupSize * 2;  // the odd/even pattern repeats every two groups
    for (var i=1; i<=groupSize; i++) {
        $(".oddeven" + groupSize + " tbody tr:nth-child(" + cycle + "n+" + i + ")").addClass("odd");
        $(".oddeven" + groupSize + " tbody tr:nth-child(" + cycle + "n+" + (groupSize + i) + ")").addClass("even");
    }
}
```
and:
```
$(function() {
oddEvenGroupStyle(3);
});
```
Seems like a bit of a hack to me though. Is there some more jQuery-ish way of selecting the right rows? | ```
function oddEvenGroupStyle(groupSize) {
var even = false;
$('tr').each(
function(i){
if(!(i % groupSize)) even = !even;
$(this).attr('class', (even ? 'groupEven':'groupOdd') );
})
}
``` | cletus, if you'd like to get the job done 101% jQuery-ish consider:
1. using `each` instead of classic for loop
2. [extending jQuery selector behaviour](http://james.padolsey.com/javascript/extending-jquerys-selector-capabilities/)? (just a hint) | More efficent method of styling alternating blocks of table rows with jQuery? | [
"",
"javascript",
"jquery",
"jquery-1.3",
""
] |
What is a *"stored procedure"* and how do they work?
What is the make-up of a stored procedure (things each *must* have to be a stored procedure)? | Stored procedures are a batch of SQL statements that can be executed in a couple of ways. Most major DBMs support stored procedures; however, not all do. You will need to verify with your particular DBMS help documentation for specifics. As I am most familiar with SQL Server I will use that as my samples.
To create a stored procedure the syntax is fairly simple:
```
CREATE PROCEDURE <owner>.<procedure name>
<Param> <datatype>
AS
<Body>
```
So for example:
```
CREATE PROCEDURE Users_GetUserInfo
@login nvarchar(30)=null
AS
SELECT * from [Users]
WHERE ISNULL(@login,login)=login
```
A benefit of stored procedures is that you can centralize data access logic into a single place that is then easy for DBA's to optimize. Stored procedures also have a security benefit in that you can grant execute rights to a stored procedure but the user will not need to have read/write permissions on the underlying tables. This is a good first step against SQL injection.
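For example (principal and object names are hypothetical), the permission setup described above is just a pair of T-SQL statements:

```sql
-- The caller can run the procedure, but cannot touch the table directly:
GRANT EXECUTE ON dbo.Users_GetUserInfo TO SomeAppRole;
DENY SELECT, INSERT, UPDATE, DELETE ON dbo.[Users] TO SomeAppRole;
```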
Stored procedures do come with downsides, basically the maintenance associated with your basic [CRUD](https://en.wikipedia.org/wiki/Create,_read,_update_and_delete) operation. Let's say for each table you have an Insert, Update, Delete and at least one select based on the primary key, that means each table will have 4 procedures. Now take a decent size database of 400 tables, and you have 1600 procedures! And that's assuming you don't have duplicates which you probably will.
This is where using an [ORM](https://stackoverflow.com/questions/1279613/what-is-an-orm-and-where-can-i-learn-more-about-it) or some other method to auto-generate your basic CRUD operations has a ton of merit. | A stored procedure is a set of precompiled SQL statements that are used to perform a specific task.
Example: If I have an `Employee` table
```
Employee ID Name Age Mobile
---------------------------------------
001 Sidheswar 25 9938885469
002 Pritish 32 9178542436
```
First I am retrieving the `Employee` table:
```
Create Procedure Employee_Details
As
Begin
    Select * from Employee
End
```
To run the procedure on SQL Server:
```
Execute Employee_Details
--- (Employee_Details is a user-defined name; give it any name you want)
```
Then second, I am inserting the value into the Employee Table
```
Create Procedure employee_insert
(@EmployeeID int, @Name Varchar(30), @Age int, @Mobile int)
As
Begin
Insert Into Employee
Values (@EmployeeID, @Name, @Age, @Mobile)
End
```
To run the parametrized procedure on SQL Server:
```
Execute employee_insert 003, 'xyz', 27, 1234567890
--(Parameter size must be same as declared column size)
```
Example: `@Name Varchar(30)`
In the `Employee` table the `Name` column's size must be `varchar(30)`. | What is a stored procedure? | [
"",
"sql",
"sql-server",
"t-sql",
"stored-procedures",
""
] |
[problem euler #5](http://projecteuler.net/index.php?section=problems&id=5)
I found the solution, but I don't know why this first version of the code is faster (I used 14 to try to make the code clearer). The only difference is that I eliminated the for loop and wrote one huge if instead:
```
if($num%14==0 && $num%13==0 &&$num%12==0 &&$num%11==0 &&$num%10==0 && $num%9==0 && $num%8==0 && $num%7==0 && $num%6==0 && $num%5==0 && $num%4==0 && $num%3==0 && $num%2==0 && $num%1==0){
$notFound=0;
}
```
Why is this second version so much slower than the first one? With the *for* loop I expected it to be faster. Is it the same in other languages?
```
$notFound=0;
for ( $i=14; $i>=2 && notFound==0; $i--){
if($num%$i!=0){
$notFound=1;
}
}
``` | The second code sample is simply performing more operations than the first, so I'd expect it to be slower. In this situation, you'd find that whilst offering poorer performance, a `for` loop offers significantly better readability and maintainability. | `for ( $i=14; $i>=2 && notFound==0; $i--){`
should be
`for ( $i=14; $i>=2 && $notFound==0; $i--){` | Project Euler #5 PHP question | [
"",
"php",
"performance",
"for-loop",
""
] |
I'm tooling around with [Django](http://www.djangoproject.com/) and I'm wondering if there is a simple way to create a "back" link to the previous page using the template system.
I figure that in the worst case I can get this information from the request object in the view function, and pass it along to the template rendering method, but I'm hoping I can avoid all this boilerplate code somehow.
I've checked the Django template docs and I haven't seen anything that mentions this explicitly. | Well you can enable:
```
'django.core.context_processors.request',
```
in your `settings.TEMPLATE_CONTEXT_PROCESSORS` block and hook out the referrer but that's a bit nauseating and could break all over the place.
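With that processor enabled, a template could then read the referrer roughly like this (a sketch; it relies on the browser actually sending the Referer header):

```
<a href="{{ request.META.HTTP_REFERER }}">back</a>
```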
Most places where you'd want this (eg the edit post page on SO) you have a real object to hook on to (in that example, the post) so you can easily work out what the proper previous page should be. | Actually it's `go(-1)`.
```
<input type=button value="Previous Page" onClick="javascript:history.go(-1);">
``` | How to implement a "back" link on Django Templates? | [
"",
"python",
"django",
"django-templates",
"request",
"referrer",
""
] |
I've got a JSON-encoded dataset that I retrieve via Ajax. Some of the data points I'm trying to retrieve will come back as null or empty.
However, I don't want those null or empty to be displayed to the end user, or passed on to other functions.
What I'm doing now is checking for
```
if(this.cityState!='null'){
// do some stuff here
}
```
However, for each line I find myself going through multiple if statements, and it seems very inefficient. Is there a better way to do this? | Since JSON is simply a data format, there really is no way to know which of your data members will be null unless you explicitly check them. You can always refactor your code to make it more compact and easier to read, but you will have to check each item explicitly if you do not know beforehand which will be null and which will contain data.
While I don't know what your code is supposed to do, here is an example of how you might refactor it to make it more compact:
```
var data = { Name: "John Doe", Age: 25, Address: null, CityState: "Denver, CO" };
for (var member in data) {
    if (data[member] != null) {
        // Do work here
    }
}
``` | I'm not completely sure of what you want to do... you say that you don't want to pass them on to other functions so I assume you want to delete them:
```
var data = {a:"!",b:"null", c:null, d:0, e:"", hasOwnProperty:"test"};
var y;
for (var x in data) {
if ( Object.prototype.hasOwnProperty.call(data,x)) {
y = data[x];
if (y==="null" || y===null || y==="" || typeof y === "undefined") {
delete data[x];
}
}
}
```
The check for hasOwnProperty is to make sure that it isn't some property inherited from the prototype chain. | null / empty json how to check for it and not output? | [
"",
"javascript",
"json",
""
] |
I've used [Lucene.net](http://incubator.apache.org/lucene.net/) to implement search functionality (for both database content and uploaded documents) on several small websites with no problem. Now I've got a site where I'm indexing 5000+ documents (mainly PDFs) and the querying is becoming a bit slow.
I'm assuming the best way to speed it up would be to implement caching of some kind. Can anyone give me any pointers / examples on where to start? If you've got any other suggestions aside from caching (e.g. should I be using multiple indexes?), I'd like to hear those too.
Edit:
Dumb user error responsible for the slow querying. I was creating highlights for the entire results set at once, instead of just the 'page' I was displaying. Oops. | I'm going to make a big assumption here and assume you're not hanging onto your index searchers in-between calls to query the index.
If that's true, then you should definitely share index searchers for all queries to your index. As the index becomes larger (and it doesn't really have to get very large for this to become a factor), rebuilding the index searcher will become more and more of an overhead. To make this work correctly, you'll need to synchronise access to the query parser class (it isn't thread safe).
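As a rough sketch of what "hanging onto" a searcher could look like (the `SharedSearcher` class and its members are invented here, and the `IndexSearcher(string path)` constructor is from the Lucene.Net 2.x era; verify it against your version):

```
using Lucene.Net.Search;

// Invented holder class: reuse one IndexSearcher across requests
// instead of constructing a new one per query.
public static class SharedSearcher
{
    private static readonly object SyncRoot = new object();
    private static IndexSearcher searcher;

    public static IndexSearcher Get(string indexPath)
    {
        lock (SyncRoot)
        {
            if (searcher == null)
                searcher = new IndexSearcher(indexPath);
            return searcher;
        }
    }
}
```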
BTW, the Java docs are (I've found) just as applicable to the .net version.
For more info on your problem, see here:
<http://wiki.apache.org/lucene-java/ImproveSearchingSpeed> | Lucene uses its own internal "caching" mechanism to make index retrieval a fast operation. I don't think caching is your issue here, though.
A 5,000-document index sounds trivial in size, but this largely depends on how you're constructing your index, what you're indexing/storing, how you're querying (operationally), document size, etc.
Please fill in the blanks with as much information as you can about your index. | Caching Lucene.net search results | [
"",
"c#",
"performance",
"caching",
"lucene.net",
""
] |
I open a FIFO file as `ifstream`. As soon as the object is created the thread is blocked until I send something into the FIFO (which is OK for me). Then I call `getline()` to get the data from the stream.
How do I read-block the thread again until more data is written into FIFO file?
Thanks | I haven't tested this code but I'm wondering if the FIFO is simply setting the EOF bit when you read all available data. In that case, you might be able to do this:
```
std::ifstream fifo;
std::string line;
bool done = false;
/* code to open your FIFO */
while (!done)
{
while (std::getline(fifo, line))
{
/* do stuff with line */
}
if (fifo.eof())
{
fifo.clear(); // Clear the EOF bit to enable further reading
}
else
{
done = true;
}
}
```
If you read off the end of the FIFO, reset and wait for further data. If an error occurred, you're done. I got the idea for this from [this website](http://www.usenet-forums.com/linux-general/89222-named-pipes-c.html). You might have to also close and reopen the FIFO in the same block where you do the reset. | The `getline` function provided by `<string>` returns a reference to the stream object. You can test this object for "goodness" to see if it's still open or if an error has occurred:
```
std::ifstream fifo;
std::string line;
/* code to open your FIFO */
while (std::getline(fifo, line))
{
/* do stuff with line */
}
```
When the FIFO closes, the while test will return false to exit the loop. Every loop iteration will effectively read-block the thread until more data is ready. | Blocking read from FIFO via ifstream object | [
"",
"c++",
"file-io",
"filestream",
"fifo",
""
] |
I'm not subclassing the control. Trying to trigger the event via `Control.Size = Control.Size` fails, since it does not trigger the event unless the new size is actually different. | If you are subclassing `Control`, you can call `OnResize` directly, or expose it on the API:
```
public void OnResize() {
this.OnResize(EventArgs.Empty);
}
```
However, you can't do this for arbitrary controls. You could change the `Size` to-and-fro? Alternatively, you could use reflection, but that is hacky:
```
typeof (Control).GetMethod("OnResize",
BindingFlags.Instance | BindingFlags.NonPublic)
.Invoke(myControl, new object[] {EventArgs.Empty});
``` | I always do this by calling the Control's Resize event handler:
```
control_Resize(null, null);
``` | How to trigger a Control.Resize event without actually resizing? | [
"",
"c#",
"winforms",
"events",
"resize",
"controls",
""
] |
My primary language right now is D, and I'm in the process of learning Python because it's required for a course I'm taking. While I understand why dynamic languages would be a breath of fresh air for people programming in static languages without type inference or templates (IMHO templates are to a large extent compile-time duck typing), I'm curious what the benefits are of dynamic languages even when you have those.
The bottom line is that, if I'm going to learn Python, I want to learn it in a way that really changes my thinking about programming, rather than just writing D in Python. I have not used dynamic languages since I was a fairly novice programmer and unable to appreciate the flexibility they supposedly offer, and want to learn to take full advantage of them now. What can be done easily/elegantly in a dynamically typed, interpreted language that's awkward or impossible in a static language, **even with templates, polymorphism, static type inference, and maybe runtime reflection?** | In theory, there's nothing that dynamic languages can do and static languages can't. Smart people put a lot of work into making *very good* dynamic languages, leading to a perception at the moment that dynamic languages are ahead while static ones need to catch up.
In time, this will swing the other way. Already various static languages have:
* Generics, which make static types less stupid by letting the compiler select the right type when objects are passed around, saving the programmer from having to cast them manually
* Type inference, which saves having to waste time on writing the stuff that should be obvious
* Closures, which among *many* other things help to separate mechanism from intention, letting you pull together complicated algorithms from mostly existing ingredients.
* Implicit conversions, which let you simulate "monkey patching" without the risks it usually involves.
* Code loading and easy programmatic access to the compiler, so users and third parties can script your program. Use with caution!
* Syntaxes that are more conducive to the creation of Domain Specific Languages within them.
...and no doubt more to come. The dynamic movement has spawned some interesting developments in static language design, and we all benefit from the competition. I only hope more of these features make it to the mainstream.
There's one place where I don't see the dominant dynamic language being replaced, and that's Javascript in the browser. There's just too much of an existing market to replace, so the emphasis seems to be towards making Javascript itself better instead. | [Here's Steve Yegge](http://steve-yegge.blogspot.com/2008/05/dynamic-languages-strike-back.html) on the subject.
Guido van Rossum also linked to that talk in [his take of Scala](http://neopythonic.blogspot.com/2008/11/scala.html). | Uses for Dynamic Languages | [
"",
"python",
"programming-languages",
"language-design",
"dynamic-languages",
"duck-typing",
""
] |
I just came across a C++ SDK that makes heavy use of this really weird `*new` pattern. I don't understand why they do it like that at all.
What's the point of constructing objects with \*new, e.g. `Widget wag = *new Widget();`?
Update: Interesting, they are actually doing `XPtr<T> p = *new T;` - must be the semantics of some smart pointer magic. Still doesn't make much sense. I do trust the SDK is high quality. | It constructs a new object and then makes a *copy* of it. The pointer to the original object is discarded, so there may be a memory leak.
There isn't *necessarily* a memory leak, though. It could be that `Widget` maintains a list of all its instances, and it updates that list in its constructor and destructor. There might be some other way of attaining the contents of the list.
But it seems rather pointless. The reason was probably a misunderstanding of how the code really works. Your best bet for finding out why the code was made that way is to ask the ones who wrote it. There might be clues in the code's comments. Can you reveal more specifics about what code you're looking at? | Maybe they're trying for memory leaks? With the default implementation of new, that will allocate a Widget on the heap, copy-construct wag from it, and then promptly leak the new object.
Bottom line: don't imitate. And I would regard that SDK with suspicion.
**Edit:** If this is done in the context of a smart pointer, then it's certainly possible they're saving the pointer for later deletion. If there's an XPtr template that's doing that, you should be able to look at the code and see if that's what it's doing. The SDK should also provide you some kind of documentation somewhere about that construct, since it isn't normal, and I don't really see much advantage to it. Still, I stand by what I said: even though it's not a guaranteed leak, I still wouldn't imitate it. | Widget wag = *new Widget() | [
"",
"c++",
""
] |
I have C++ code that attempts to dynamically allocate a 2D array of bytes that measures approx 151 MB in size. When I attempt to go back and index through the array, my program crashes in exactly the same place every time with an "Access violation reading location 0x0110f000" error, but the indices appear to be in range. That leads me to believe the memory at those indices wasn't allocated correctly.
1) What's the max number of bytes you can dynamically allocate using the new operator?
2) If it is the case that I'm failing to dynamically allocate memory, would it make sense that my code is crashing when attempting to access the array at exactly the same two indices every time? For some reason, I feel like they would be different every time the program is run, but what do I know ;)
3) If you don't think the problem is from an unsuccessful call to new, any other ideas what could be causing this error and crash?
Thanks in advance for all your help!
\*Edit
Here's my code to allocate the 2d array...
```
#define HD_WIDTH 960
#define HD_HEIGHT 540
#define HD_FRAMES 100
//pHDVideo is a char**
pHDVideo->VideoData = new char* [HD_FRAMES];
for(int iFrame = 0; iFrame < HD_FRAMES; iFrame++)
{
//Create the new HD frame
pHDVideo->VideoData[iFrame] = new char[HD_WIDTH * HD_HEIGHT * 3];
memset(pHDVideo->VideoData[iFrame], 0, HD_WIDTH * HD_HEIGHT * 3);
}
```
and here's a [screenshot of the crashing code and debugger](http://people.ict.usc.edu/~crotchett/images/hdcrash.jpg) (dead link) that would help.
I should add that the call to memset never fails, which to me means the allocation is successful, but I could be wrong.
*EDIT*
I found a fix everyone, thanks for all your help. Somehow, and I still need to figure out how, there was one extra horizontal line being upscaled, so I changed...
```
for(int iHeight = 0; iHeight < HD_HEIGHT; iHeight++)
```
to
```
for(int iHeight = 0; iHeight < HD_HEIGHT-1; iHeight++)
```
and it suddenly worked. Anyhow, thanks so much again! | Some possibilities to look at or things to try:
* It may be that the pHDVideo->VideoData[iFrame] or pHDVideo->VideoData is being freed somewhere. I doubt this is the case but I'd check all the places this can happen anyway. Output a debug statement each time you free on of those AND just before your crash statement.
* Something might be overwriting the pHDVideo->VideoData[iFrame] values. Print them out when allocated and just before your crash statement to see if they've changed. If 0x0110f000 isn't within the range of one of them, that's almost certainly the case.
* Something might be overwriting the pHDVideo value. Print it out when allocated and just before your crash statement to see if it's changed. This depends on what else is within your pHDVideo structure.
* Please show us the code that crashes, with a decent amount of context so we can check that out as well.
In answer to your specific questions:
1/ It's implementation- or platform-specific, **and** it doesn't matter in this case. If your new's were failing you'd get an exception or null return, not a dodgy pointer.
2/ It's not the case: see (1).
3/ See above for some possibilities and things to try.
Following addition of your screenshot:
You do realize that the error message says "Access violation **reading** ..."?
That means it's not complaining about writing to `pHDVideo->VideoData[iFrame][3*iPixel+2]` but reading from `this->VideoData[iFrame][3*iPixelIndex+2]`.
iPixelIndex is set to 25458, so can you confirm that `this->VideoData[iFrame][76376]` exists? I can't see from your screenshot how this->VideoData is allocated and populated. | How are you accessing the allocated memory? Does it always die on the same statement? It looks very much like you're running off the end of either the one dimensional array of pointers, or the one of the big blocks of chars that it's pointing to. As you say, the memset pretty much proves that the memory was allocated correctly. The total amount of memory you're allocating is around 0x9450C00 bytes, so the address you quoted is off the end of allocated memory if it was allocated continguously.
Your screenshot appears to show that iPixel is in range, but it doesn't show what the value of iFrame was. Is it outside the range 0-99?
Update: The bug isn't in allocating the memory, it's in your conversion from HD to SD coordinates. The value you're reading from on the SD buffer is out of range, because it's at coordinates (144,176), which isn't in the range (0,0)-(143,175). | C++ What's the max number of bytes you can dynamically allocate using the new operator in Windows XP using VS2005? | [
"",
"c++",
"windows",
"visual-studio-2005",
""
] |
Ok, so I'm having a hard time coming up with a 'good' way to do the following:
Group 4 objects so that if I have Object A I can easily find out whether Object D relates to Object A, without having to go through Objects B & C, and likewise for any other combination of referencing.
I am currently using SortedLists to link A to B and B to C and C to D, and doing a really sloppy job of cross-referencing them. This works, but it's really, really ugly.
I'm thinking of using a class to store the objects. But what would be the best way to reference them?
Thanks in advance
---
More info:
The objects are controls on a form. The 4 different controls are all related to each other in that they correspond to the same setting that they are used to configure. There are multiple settings that get modified on the form, each using 4 controls.
I need a way to easily reference them without having to call Setting4ControlB.Checked or Setting2ControlD.text, thus the linked lists. I'm using generic events for certain controls in order to reuse code, they all do the same thing, except they reference different objects.
The ugly portion is first, setting up the links, and second, doing the referencing from A to D or C to A, etc. I wanted to be able to do something like this:
GetSettingControls(ReferencingControl).ControlA.Checked
ReferencingControl would be the sender in the event, and ControlA is a control that is related to ReferencingControl. | First of all, if you have a relationship like A->B->C->D, the only way to see if A relates to D is to go all the way from A to D. You can nonetheless apply a few tricks.
This will involve manually keeping track of each indirect reference present in the program; that is, you'll be in charge of adding the A->D reference dynamically, which may be useful with certain logic, like being able to "carry" objects in a game.
You need to use a Dictionary of WeakReferences, so that you don't keep objects alive through references...
```
public class MyObject
{
public List<WeakReference> References = new List<WeakReference>();
// My Class Properties and Methods
}
MyObject obj = new MyObject();
obj.References.Add(new WeakReference(gun));
```
That way, if the gun is cleared from the object pool (for example the userObjects collection from your Level.Objects collection), the WeakReference will become invalid.
You can check for the object by doing
```
foreach (WeakReference wr in obj.References) {  // "ref" is a C# keyword, so renamed
    if (wr.IsAlive) {
        object tool = wr.Target;  // use tool while the target is alive
    }
}
```
This is just an example, you know, implement good accessors, properties and so on... | You could use the generic Dictionary class to specify key/value pairs between the objects.
This would allow you to look up an object based on another object.
```
System.Collections.Generic.Dictionary<object, object> myDictionary = new Dictionary<object, object>();
myDictionary.Add(objectA, objectD);
myDictionary.Add(objectD, objectA);
``` | Cross referencing multiple objects | [
"",
"c#",
".net-2.0",
""
] |
I got to thinking today: what is the best way of getting a distinct (i.e. no repeats) list of classes used in a document that (preferably) match a regular expression or (alternatively) start with a certain character sequence? jQuery can be used for this, or just straight JavaScript.
Now it should obviously cater for all legal class usages, for example:
```
<div class="class1 class2 class3">
</div>
```
And I don't want to parse the document with regular expressions. That's simply too error-prone. What I'm interested in is a JavaScript solution that walks the DOM or uses something like jQuery to do that.
Oh, this should also include any classes that have been dynamically added or removed by previous JavaScript code.
Suggestions? | Using jQuery:
```
var listClasses = function( pattern ){
var allClassesTmp = {}, allClasses = [];
var rx = pattern ? (new RegExp(pattern)) : (new RegExp(".*"));
$('*[class]').each( function(){
var cn = this.className.split(/\s+/);
for(var i=cn.length;--i>-1;){
if(rx.test(cn[i]))allClassesTmp[ cn[i] ] = 1
}
});
for(var i in allClassesTmp)allClasses.push(i);
return allClasses;
}
``` | ```
function gatherClasses() {
var tags = document.getElementsByTagName('*');
var cls, clct = {}, i, j, l = tags.length;
for( i = 0; i < l; i++ ) {
cls = tags[i].className.split(' ');
for( j = 0; j < cls.length; j++ ) {
if( !cls[j] ) continue;
clct[cls[j]] = 'dummy'; //so we only get a class once
}
}
cls = [];
for( i in clct ) {
cls.push( i );
}
return cls;
}
alert(gatherClasses())
```
Here's a version with a regexp match:
```
function gatherClasses( matchString ) {
if( matchString ) {
var rxp = new RegExp( matchString );
} else {
var rxp = /.*/;
}
var tags = document.getElementsByTagName('*');
var cls, clct = {}, i, j, l = tags.length;
for( i = 0; i < l; i++ ) {
cls = tags[i].className.split(' ');
for( j = 0; j < cls.length; j++ ) {
if( !cls[j] || !rxp.test( cls[j] ) ) {
continue;
}
clct[cls[j]] = 'dummy'; //so we only get a class once
}
}
cls = [];
for( i in clct ) {
cls.push( i );
}
return cls;
}
//find classes that match 'stack'
alert(gatherClasses('stack'))
``` | Finding all classes that match a pattern in an HTML document? | [
"",
"javascript",
"jquery",
"html",
"css",
"jquery-1.3",
""
] |
I've been using Vim for a while, and I can't get proper HTML indentation working in PHP files.
For example, what I want is for each child to be indented one tab more than its parent, as shown below.
```
<?php
if(isset($sports)) {
//Do something
?>
<div>
<label>Uniform Size</label>
<ul>
<li class="left"><label for="s" class="small">S</label><input type="radio" name="size[]" value="S" id="s" class="radio" /></li>
<li class="left"><label for="m" class="small">M</label><input type="radio" name="size[]" value="M" id="m" class="radio" /></li>
<li class="left"><label for="l" class="small">L</label><input type="radio" name="size[]" value="L" id="l" class="radio" /></li>
<li class="left"><label for="xl" class="small">XL</label><input type="radio" name="size[]" value="XL" id="xl" class="radio" /></li>
</ul>
</div>
<?php
}
?>
```
Using the [PHP-correct-Indent](http://www.vim.org/scripts/script.php?script_id=1120) script, the code results in being formatted as follows:
```
<?php
if(isset($sports)) {
//Do something
?>
<div>
<label>Uniform Size</label>
<ul>
<li class="left"><label for="s" class="small">S</label><input type="radio" name="size[]" value="S" id="s" class="radio" /></li>
<li class="left"><label for="m" class="small">M</label><input type="radio" name="size[]" value="M" id="m" class="radio" /></li>
<li class="left"><label for="l" class="small">L</label><input type="radio" name="size[]" value="L" id="l" class="radio" /></li>
<li class="left"><label for="xl" class="small">XL</label><input type="radio" name="size[]" value="XL" id="xl" class="radio" /></li>
</ul>
</div>
<?php
}
?>
```
Even when I start with indented HTML and then add PHP code to it, the indentation is ignored, and new lines of HTML code are placed without any indentation at all.
So, is there any way that I can get the indentation format that I want working with HTML within PHP files, using Vim? | This still bothers me. I only *just* decided that the best work-around (for me personally) is this:
```
:set filetype=html
```
And then highlight your text and hit `=`. BOOM! HTML formatting success. (Not ideal, I know, but at least it works.) | There is a set of vimrc instructions on the Vim Wiki called [Better indent support for PHP with HTML](http://vim.wikia.com/wiki/Better_indent_support_for_php_with_html) that will use the correct plugin depending on the block.
There is also a [Vundle/Pathogen Plugin](https://jordaneldredge.com/blog/better-indent-support-for-php-in-vim/) that uses the same code but is easier to install and keeps your `.vimrc` clean.
**Pathogen**
```
cd ~/.vim/bundle
git clone https://github.com/captbaritone/better-indent-support-for-php-with-html.git
```
**Vundle**
Place in .vimrc
```
Bundle 'captbaritone/better-indent-support-for-php-with-html'
```
Run in vim
```
:BundleInstall
``` | Correct indentation of HTML and PHP using Vim | [
"",
"php",
"html",
"vim",
"indentation",
""
] |
Allow me to start with: I am a n00b on ASP.NET MVC. I love it, but I am a n00b.
I am trying to pass "complex" data back from a LINQ query. I understand how to use the data context and then just cast that data when I send it back, but when I do a more complicated LINQ query which returns an anonymous type, things break down.
I saw someone ask a similar question ([MVC LINQ to SQL Table Join Record Display](https://stackoverflow.com/questions/278941/mvc-linq-to-sql-table-join-record-display)), and the answer seemed to be to create a new data type to capture the data from the LINQ query. What I don't get is that I can create a var in the controller and access its member fields within the controller, but if I want to pass that var back to my View, I need to create an entirely new class for that.
Here’s my Controller code:
```
var vrGoodResults1 = from records in db.Words
group records by records.word1 into g
select new
{
strWord = g.Key,
intWordCount = g.Count()
};
ViewData["GoodWords"] = vrGoodResults1;
return View();
```
And the View looks like this:
```
<% foreach (var kvp in (IEnumerable)ViewData["GoodWords"]) %>
<% { %>
<%= String.Format("{0} was used times", kvp) %> <br />
<% } %>
```
Which outputs:
```
{strWord = cool, intWordCount = 2 } was used times
{strWord = educated, intWordCount = 1 } was used times
{strWord = great, intWordCount = 1 } was used times
{strWord = smart, intWordCount = 6 } was used times
{strWord = strong, intWordCount = 2 } was used times
{strWord = super smart, intWordCount = 2 } was used times
```
So the data is getting to the View, but I cannot refer to the data by the field names I assigned in the LINQ query. When I try dumping kvp.GetType(), I get:
```
<>f__AnonymousType1`2[System.String,System.Int32]
```
All I want to do is something along the lines of:
```
<%= String.Format("{0} was used {1} times", kvp.strWord, kvp.intWordCount) %> <br />
```
But I am getting a compile error on the kvp.strWord.
```
error CS1061: 'object' does not contain a definition for 'strWord' and no extension method 'strWord' accepting a first argument of type 'object' could be found (are you missing a using directive or an assembly reference?)
```
If I insert the following code into my Controller:
```
foreach (var kvp in vrGoodResults1)
{
string strNew = kvp.strWord;
}
```
I can reference the fields of my variable kvp without a compile error. So something is getting lost when passing from the Controller to the View. Am I forgetting to include something somewhere? Perhaps a “using” or in the “<@ Page” directive, am I forgetting to inherit something?
When you are using LINQ for clear data contexts, you just set the IEnumerable<”datatype”> where “datatype” = your data context type, and you are all good. When you reduce your data set into something new in LINQ, I can't believe that the best answer is to create a new class so that I can use it in my View. | `var` is a compiler shortcut for declaring a type in the current scope. It doesn't add any dynamic functionality to the .NET runtime, so code outside the scope sees the object as "System.Object", since that is the most-specific type in the inheritance chain the view code is aware of.
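For illustration, a sketch of what a small named type could look like in place of the anonymous projection (the `WordCount` name and the in-memory stand-in for `db.Words` are invented here):

```
using System;
using System.Collections.Generic;
using System.Linq;

public class WordCount
{
    public string Word { get; set; }
    public int Count { get; set; }
}

class Demo
{
    static void Main()
    {
        var words = new[] { "smart", "cool", "smart" };  // stand-in for db.Words
        IEnumerable<WordCount> results =
            from w in words
            group w by w into g
            select new WordCount { Word = g.Key, Count = g.Count() };
        // The view can now cast ViewData["GoodWords"] to
        // IEnumerable<WordCount> and use kvp.Word / kvp.Count directly.
        foreach (var wc in results)
            Console.WriteLine("{0} was used {1} times", wc.Word, wc.Count);
    }
}
```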
You should create a real class if you want to pass tuple objects around; that's what you had to do before `var`, so it's not like you're losing anything by having to do it now :) | The best answer here simply *is* to create a new type. You can get anonymous types back from `object` ([see here](http://msmvps.com/blogs/jon_skeet/archive/2009/01/09/horrible-grotty-hack-returning-an-anonymous-type-instance.aspx)) but it is ugly and brittle. Don't do it!
You could use reflection (`kvp.GetType().GetProperty("strWord").GetValue(kvp, null)`) - but that also isn't a great idea.
In this case - perhaps use the existing `KeyValuePair<string,int>` from the original select? Or your own `Tuple<T1,T2>`? | Passing data in ASP.NET MVC using LINQ - nuttiness | [
"",
"c#",
".net",
"asp.net-mvc",
"linq",
"linq-to-sql",
""
] |
I have a class made up of several fields, and I have several constructors. I also have a constructor that doesn't take any parameters, but when I try to use it:
```
int main {
A a;
}
```
The compiler generates an error, while if I use it like this:
```
int main {
A a();
}
```
It's OK. Why is that?
Thank you | The first `main` uses A's default constructor. The second one declares a function that takes no parameters and returns an A by value, which probably isn't what you intend.
So what does the definition of A look like and what is the error that the compiler generates?
Oh, and you need to provide a parameter list in the declaration of main: `int main() { //...` , not `int main { //...` | By OK you mean it compiles or that it works? The line of code:
```
A a();
```
is a declaration (or prototype) of a function named `a` that takes no parameters and returns an object of type `A`.
I think for anyone to have a chance to help you with your real problem you'll need to post at least the declaration for `class A`. | Strange behavior in constructor | [
"",
"c++",
"parameters",
"constructor",
""
] |
I'm working in Visual Studio 2008 on a C++ programming assignment. We were supplied with files that define the following namespace hierarchy (the names are just for the sake of this post, I know "namespace XYZ-NAMESPACE" is redundant):
```
(MAIN-NAMESPACE){
a bunch of functions/classes I need to implement...
(EXCEPTIONS-NAMESPACE){
a bunch of exceptions
}
(POINTER-COLLECTIONS-NAMESPACE){
Set and LinkedList classes, plus iterators
}
}
```
The MAIN-NAMESPACE contents are split between a bunch of files, and for some reason which I don't understand the operator<< for both Set and LinkedList is entirely outside of the MAIN-NAMESPACE (but within Set and LinkedList's header file).
Here's the Set version:
```
template<typename T>
std::ostream& operator<<(std::ostream& os,
const MAIN-NAMESPACE::POINTER-COLLECTIONS-NAMESPACE::Set<T>& set)
```
Now here's the problem: I have the following data structure:
```
Set A
Set B
Set C
double num
```
It's defined to be in a class within MAIN-NAMESPACE. When I create an instance of the class, and try to print one of the sets, it tells me that:
error C2679: binary '<<' : no operator found which takes a right-hand operand of type 'const MAIN-NAMESPACE::POINTER-COLLECTIONS-NAMESPACE::Set' (or there is no acceptable conversion)
However, if I just write a main() function, and create Set A, fill it up, and use the operator- it works.
Any idea what is the problem? (note: I tried any combination of using and include I could think of). | OK I figured this out.
jpalecek's intuition about there existing another operator<< in the namespace was correct (apparently I forgot to comment it out).
The lookup rules for namespaces first start the search in the function call's namespace and search up the enclosing namespaces, right up to the global namespace (then it does the Argument dependent lookup if no match is found). However, if along the way it finds some match for operator<<, it stops the search, regardless of the fact that the types used in those functions may be incompatible, as was the case here.
The solution is either to include it into the MAIN-NAMESPACE (which I'm not allowed to), or import it from the global namespace with "using ::operator<<". | Strange - even though putting free functions associated with a type to a different namespace is a bad practice, the global namespace declarations are always visible.
The only thing I can think of is that declaration with the same name in `MAIN-NAMESPACE` would shadow the one in the global namespace - isn't there an `operator<<`, possibly for totally unrelated type, in `MAIN-NAMESPACE`? If so, you should fix that by `using ::operator<<` declaration in `MAIN-NAMESPACE`. Example:
```
namespace A
{
namespace B
{
class C{};
}
}
void f(A::B::C*);
namespace A
{
void f(int*); // try commenting
using ::f; // these two lines
void g()
{
B::C* c;
f(c);
}
}
``` | C++ compiler unable to find function (namespace related) | [
"",
"c++",
"namespaces",
"operator-keyword",
""
] |
I have run into a few cases where I have been asked to deploy an application (C#, .NET 2.0) to a server so that users can test the application over the network. I have found that the following works without any hitches, other than a warning telling you "hey, you're running this over a network; are you sure you want to do this?":
```
%systemroot%\Microsoft.NET\Framework\v2.0.50727\caspol -m -cg 1.2 -url \\<Path> FullTrust
```
Is there a better way of centralizing the application than the above? | The problem with granting FullTrust, is that [it really really means *full* trust](http://blogs.msdn.com/eugene_bobukh/archive/2005/05/06/415217.aspx). If your app has any sort of identity/security control, the granting fulltrust will bypass all of that.
I guess otherwise you're "exposing" your machine to files running from that other one, so they could potentially do damage; however, this is no worse than any other native Win32 executable running on a remote machine, so it's no big deal IMHO.
I'd recommend the best solution is to simply install the app on your user's machines. You could use something like ClickOnce to make this painless and easy for them. | Try using [ClickOnce](http://en.wikipedia.org/wiki/ClickOnce) deployment. We use it for all of our network-deployed applications. | C# Application over a Network | [
"",
"c#",
"deployment",
"networking",
""
] |
I have an excel document represented as a byte[] and I'm wanting to send it as an attachment in an email.
I'm having a bit of trouble constructing the attachment.
I can create an Attachment which has the following constructors:
```
(Stream contentStream, ContentType contentType)
(Stream contentStream, string name)
(Stream contentStream, string name, string mediaType)
```
My idea at the moment is to create a MemoryStream from the byte[] and pass it to the method which creates the attachment.
Unfortunately I can't see a way to obtain the intended filename and content type from the MemoryStream and I also can't see how to supply the correct content type. There are options for plain text, Pdf, Rtf etc but none that I can see that immediately jump out at me as the one I should use for an Excel document.
The closest I can find is [MediaTypeNames.Application.Octet](http://msdn.microsoft.com/en-us/library/system.net.mime.mediatypenames.application.octet.aspx) which states:
> The Octet member designates that the
> attachment contains generic binary
> data.
However, even if this is the one to use, unless it can be passed as a property of the Stream then my method for sending emails will only be able to send a byte[] as an Excel document...
Is there perhaps some other sort of Stream I could use? Or will I have to create my own type of Stream that has the details I need?
Surely someone out there has done this thing before and surely Microsoft would have thought this through to this level....
Any help would be much appreciated.
**Update:**
Please don't vote for any answers that use the constructors that take the filename as a string. I really need help using the ones that take a Stream... I want to avoid having to write the file to disk, email it, and then immediately delete it. Since there is a method that allows me to do that, I'd like to use that one if at all possible.
**Solution Update**
Conrad managed to find what I was looking for! Thanks heaps man!
I'll just document the suggested solution just in case something happens to the content at the supplied link.
Credit for this solution goes to [www.systemnetmail.com](http://www.systemnetmail.com/faq/3.4.2.aspx)
```
static void AttachmentFromStream()
{
//create the mail message
MailMessage mail = new MailMessage();
//set the addresses
mail.From = new MailAddress("me@mycompany.com");
mail.To.Add("you@yourcompany.com");
//set the content
mail.Subject = "This is an email";
mail.Body = "this content is in the body";
//Get some binary data
byte[] data = GetData();
//save the data to a memory stream
MemoryStream ms = new MemoryStream(data);
//create the attachment from a stream. Be sure to name the data
//with a file and
//media type that is respective of the data
mail.Attachments.Add( new Attachment( ms, "example.txt", "text/plain" ));
SmtpClient smtp = new SmtpClient("127.0.0.1");
smtp.Send(mail);
}
```
In my case, it just means I'll have to change my method to take the filename and fileformat as strings. I'll try using the Octet one...but failing that I'll just pass in the official MIME type.
All things considered, this is a pretty obvious solution...but I do appreciate the help in solving it...and the good thing is this solution will be documented for future programmers who have the same problem.
Thanks again, everyone, for your help! | The Attachment class does indeed have a constructor that does what you need. I'm assuming you're using the System.Net.MailMessage class from .NET Framework 2. If so, [read this link](http://www.systemnetmail.com/faq/3.4.2.aspx) for some sample code of what you need.
**TL;DR:** `mail.Attachments.Add(new Attachment(contentStream, "yourfilename.txt", "text/plain"));`
**Full:**
```
static void AttachmentFromStream()
{
//create the mail message
MailMessage mail = new MailMessage();
//set the addresses
mail.From = new MailAddress("me@mycompany.com");
mail.To.Add("you@yourcompany.com");
//set the content
mail.Subject = "This is an email";
mail.Body = "this content is in the body";
//Get some binary data
byte[] data = GetData();
//save the data to a memory stream
MemoryStream ms = new MemoryStream(data);
//create the attachment from a stream. Be sure to name the data with a file and
//media type that is respective of the data
mail.Attachments.Add(new Attachment(ms, "example.txt", "text/plain"));
//send the message
SmtpClient smtp = new SmtpClient("127.0.0.1");
smtp.Send(mail);
}
static byte[] GetData()
{
//this method just returns some binary data.
//it could come from anywhere, such as Sql Server
string s = "this is some text";
byte[] data = Encoding.ASCII.GetBytes(s);
return data;
}
``` | How do I add an attachment to an email using System.Net.Mail? | [
"",
"c#",
"email",
"binary",
"attachment",
"system.net.mail",
""
] |
I can't seem to find any way to add a horizontal separator in a MenuStrip. Visual Studio complains "Cannot add ToolStripSeparator to MenuStrip."
Any ideas how I can do this?
```
this.menuMain.Items.Add(new ToolStripSeparator());
```
without any trouble... What kind of error are you getting? | In the space between the two fields you want separated by the divider, type:
```
-
```
then hit enter (in the designer)
If you need to do this programmatically you can use the same trick:
```
contextMenu1.MenuItems.Add(new MenuItem("-"));
``` | Adding a horizontal separator in a MenuStrip | [
"",
"c#",
".net-3.5",
"menustrip",
""
] |
**EDITED** to show real example
How can I call a generic function from a generic type passed to a function? This seems like it should be intuitive, but I can't seem to get it to work.
For example, can I call the cache.ResetCache() function in LocalDataObjectEngine below?
The error I'm getting is 'Type T cannot be used as a parameter'
```
public interface ISimpleCache<T1>
{
...
void ResetCache<T>() where T : T1;
}
internal class LocalDataObjectEngine_Cache : ISimpleCache<IBrokeredDataObject>
{
ISimpleCache<IBrokeredDataObject> _cache;
...
public void ResetCache<T>() where T : IBrokeredDataObject
{
//logic here
}
...
}
public partial class LocalDataObjectEngine : IEngine
{
ISimpleCache<IBrokeredDataObject> _cache = new LocalDataObjectEngine_Cache();
public void ResetCache<T>() where T : IBrokeredDataObject
{
_cache.ResetCache<T>();
}
}
}
Found it! Jon Skeet's reference to removing IEngine pointed me in the right direction: there was a
```
void ResetCache<T>() where T : IDataObject
```
on IEngine (IDataObject is a base of IBrokeredDataObject), that I changed to
```
void ResetCache<T>() where T : IBrokeredDataObject
```
Thanks all for tolerating my bug, +1 to you all | I'm not sure what's going on unless there's something in your definition of `IBrokeredDataObject`. What you've written looks right and compiles fine for me.
[Edited to match the edit in the OP] | How to use a generic type in generic method | [
"",
"c#",
"generics",
""
] |
So I'm reading the "3D Math Primer For Graphics And Game Development" book, coming from pretty much a non-math background I'm finally starting to grasp vector/matrix math - which is a relief.
But, yes, there's always a but: I'm having trouble understanding the translation of an object from one coordinate space to another. In the book the author takes an example of a [gun shooting at a car (image)](http://totmacher.eu/upload/car_gun.jpg) that is turned 20 degrees (just a 2D space for simplicity) in "world space". So we have three spaces: World Space, Gun Object Space and Car Object Space - correct? The book then states this:
> *"In this figure, we have introduced a rifle that is firing a bullet at the car. As indicated by the
> coordinate space on the left, we would normally begin by knowing about the gun and the trajectory
> of the bullet in world space. Now, imagine transforming the coordinate space in line with the
> car’s object space while keeping the car, the gun, and the trajectory of the bullet still. Now we
> know the position of the gun and the trajectory of the bullet in the object space of the car, and we
> could perform intersection tests to see if and where the bullet would hit the car."*
And I follow this explanation, and when I know beforehand that the car is rotated 20 degrees in world space this isn't a problem - but how does this translate to a situation, say, where I have an archer in a game shooting from a hill down on someone else? I don't know the angle at which everything is displaced there.
And which object space is rotated here? The World or Gun space? Yeah as you can see I'm a bit confused.
I think the ideal response would be using the car and gun example using arbitrary variables for positions, angle, etc. | You should read how to [change basis](http://en.wikipedia.org/wiki/Change_of_basis) and think in *vector*, not arrays but the math ones :P | I used to be a game programmer and I did that time after time. Eventually, I got away from using angles. For every object, I had a forward-facing vector and an up vector. You can get the right-facing vector, then, from a cross-product. And all the conversions between spaces become dot products. | Transforming an object between two coordinate spaces | [
"",
"c++",
"math",
"3d",
"vector",
"matrix",
""
] |
Given the code:
```
new Thread(new BackgroundWorker()).start();
```
Intuitively it feels like the BackgroundWorker instance should be safe from GC until the thread exits, but is this the case? And why?
**Edit:**
All this heat is basically generated by me asking at least two different questions in the same post. The question in the title has one answer, the code sample leads in a different direction - with two possible outcomes depending on inlining.
The answers posted are really excellent. I will be awarding Software Monkey the green checkbox. Please note that Darron's answer is equally valid, but Software Monkey explained the problem *I* was having; it was the answer that worked for me.
Thank you all for making this a memorable affair ;) | Yes, because GC can only collect objects not reachable by any thread, and Thread must hold a reference to it's runnable (or it would not be able to invoke it). So, clearly, your Runnable object is reachable while your thread is running.
Regardless of the semantics required for *execution*, your object will not be GC'd until it is no longer reachable by this new thread or any other; that will be at least long enough to invoke your Runnable's run(), and for the entire life of the thread if that thread is able to reach the Runnable instance, so your construct is guaranteed to be safe by the JVM specification.
---
EDIT: Because Darron is beating this to death, and some seem convinced by his argument I'm going to expand upon my explanation, based on his.
> Assume for the moment that it was not legal for anyone except Thread itself to call Thread.run(),
>
> In that case it would be legal for the default implementation of Thread.run() to look like:
```
void run() {
Runnable tmp = this.myRunnable; // Assume JIT make this a register variable.
this.myRunnable = null; // Release for GC.
if(tmp != null) {
tmp.run(); // If the code inside tmp.run() overwrites the register, GC can occur.
}
}
```
I contend that in this case tmp is still a reference to the runnable reachable by the thread executing within Thread.run() and therefore is not eligible for GC.
What if (for some inexplicable reason) the code looked like:
```
void run() {
Runnable tmp = this.myRunnable; // Assume JIT make this a register variable.
this.myRunnable = null; // Release for GC.
if(tmp != null) {
tmp.run(); // If the code inside tmp.run() overwrites the register, GC can occur.
System.out.println("Executed runnable: "+tmp.hashCode());
}
}
```
Clearly, the instance referred to by tmp cannot be GC'd while tmp.run() is executing.
**I think Darron mistakenly believes that *reachable* means only those references which can be found by chasing instance references starting with all Thread instances as roots, rather than being defined as a reference which can be seen by any executing thread. Either that, or I am mistaken in believing the opposite.**
Further, Darron can assume that the JIT compiler makes any changes he likes - the compiler is ***not*** permitted to change the referential semantics of the executing code. If I write code that has a reachable reference, the compiler cannot optimize that reference away and cause my object to be collected while that reference is in scope.
I don't know the detail of how reachable objects are actually found; I am just extrapolating the logic which I think must hold. If my reasoning were not correct, then any object instantiated within a method and assigned only to a local variable in that method would be immediately eligible for GC - clearly this is not and can not be so.
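As an illustrative aside (not from the original discussion), reachability can be probed with a `WeakReference`: the JVM only clears a weak reference once no strong references remain, so if the running Thread keeps its Runnable reachable, the weak reference must survive a GC request:

```java
import java.lang.ref.WeakReference;

public class Main {
    public static void main(String[] args) throws Exception {
        Runnable worker = new Runnable() {
            public void run() {
                try { Thread.sleep(500); } catch (InterruptedException e) { }
            }
        };
        WeakReference<Runnable> ref = new WeakReference<Runnable>(worker);
        Thread t = new Thread(worker);
        t.start();
        worker = null;   // drop our only explicit strong reference
        System.gc();     // request a collection while the thread is running
        // The live Thread still references the Runnable, so it is strongly
        // reachable and the weak reference has not been cleared.
        System.out.println(ref.get() != null);
        t.join();
    }
}
```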
Furthermore, the entire debate is moot. If the only reachable reference is in the Thread.run() method, because the runnable's run does not reference it's instance and no other reference to the instance exists, including the implicit *this* passed to the run() method (in the bytecode, not as a declared argument), then *it doesn't matter whether the object instance is collected - doing so, by definition, can cause no harm since it's not needed to execute the code if the implicit* this *has been optimized away*. That being the case, even if Darron is correct, the end practical result is that the construct postulated by the OP is perfectly safe. Either way. It doesn't matter. Let me repeat that one more time, just to be clear - *in the end analysis it doesn't matter*. | Yes, it is safe. The reason why is not as obvious as you might think.
Just because code in BackgroundWorker is running does not make it safe -- the code in question may not actually reference any members of the current instance, allowing "this" to be optimized away.
However, if you carefully read the specification for the java.lang.Thread class's run() method you'll see that the Thread object must keep a reference to the Runnable in order to fulfill its contract.
EDIT: because I've been voted down several times on this answer I'm going to expand upon my explanation.
Assume for the moment that it was not legal for anyone except Thread itself to call Thread.run(),
In that case it would be legal for the default implementation of Thread.run() to look like:
```
void run() {
Runnable tmp = this.myRunnable; // Assume JIT make this a register variable.
this.myRunnable = null; // Release for GC.
if (tmp != null)
tmp.run(); // If the code inside tmp.run() overwrites the register, GC can occur.
}
```
What I keep saying is that **nothing in the JLS** prevents an object from being garbage collected just because a thread is executing an instance method. This is part of what makes getting finalization correct so hard.
For excruciating detail on this from people who understand it much better than I do, see this discussion [thread](http://cs.oswego.edu/pipermail/concurrency-interest/2009-January/005747.html) from the concurrency interest list. | Does a running thread in an object prevent it from being garbage collected in java? | [
"",
"java",
"concurrency",
"garbage-collection",
""
] |
I'm working on porting my open source particle engine test from SDL to SDL + OpenGL. I've managed to get it compiling and running, but the screen stays black no matter what I do.
main.cpp:
```
#include "glengine.h"
int WINAPI WinMain(
HINSTANCE hInstance,
HINSTANCE hPrevInstance,
LPSTR lpCmdLine,
int nCmdShow
)
{
//Create a glengine instance
ultragl::glengine *gle = new ultragl::glengine();
if(gle->init())
gle->run();
else
std::cout << "glengine initializiation failed!" << std::endl;
//If we can't initialize, or the lesson has quit we delete the instance
delete gle;
return 0;
};
```
glengine.h:
```
//we need to include window first because GLee needs to be included before GL.h
#include "window.h"
#include <math.h> // Math Library Header File
#include <vector>
#include <stdio.h>
using namespace std;
namespace ultragl
{
class glengine
{
protected:
window m_Window; ///< The window for this lesson
unsigned int m_Keys[SDLK_LAST]; ///< Stores keys that are pressed
float piover180;
virtual void draw();
virtual void resize(int x, int y);
virtual bool processEvents();
void controls();
private:
/*
* We need a structure to store our vertices in, otherwise we
* just had a huge bunch of floats in the end
*/
struct Vertex
{
float x, y, z;
Vertex(){}
Vertex(float x, float y, float z)
{
this->x = x;
this->y = y;
this->z = z;
}
};
struct particle
{
public :
double angle;
double speed;
Vertex v;
int r;
int g;
int b;
int a;
particle(double angle, double speed, Vertex v, int r, int g, int b, int a)
{
this->angle = angle;
this->speed = speed;
this->v = v;
this->r = r;
this->g = g;
this->b = b;
this->a = a;
}
particle()
{
}
};
particle p[500];
float particlesize;
public:
glengine();
~glengine();
virtual void run();
virtual bool init();
void glengine::test2(int num);
void glengine::update();
};
};
```
window.h:
```
#include <string>
#include <iostream>
#include "GLee/GLee.h"
#include <SDL/SDL.h>
#include <SDL/SDL_opengl.h>
#include <GL/glu.h>
using namespace std;
namespace ultragl
{
class window
{
private:
int w_height;
int w_width;
int w_bpp;
bool w_fullscreen;
string w_title;
public:
window();
~window();
bool createWindow(int width, int height, int bpp, bool fullscreen, const string& title);
void setSize(int width, int height);
int getHeight();
int getWidth();
};
};
```
glengine.cpp (the main one to look at):
```
#include "glengine.h"
namespace ultragl{
glengine::glengine()
{
piover180 = 0.0174532925f;
}
glengine::~glengine()
{
}
void glengine::resize(int x, int y)
{
std::cout << "Resizing Window to " << x << "x" << y << std::endl;
if (y <= 0)
{
y = 1;
}
glViewport(0,0,x,y);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(45.0f,(GLfloat)x/(GLfloat)y,1.0f,100.0f);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
}
bool glengine::processEvents()
{
SDL_Event event;
while (SDL_PollEvent(&event))//get all events
{
switch (event.type)
{
// Quit event
case SDL_QUIT:
{
// Return false because we are quitting.
return false;
}
case SDL_KEYDOWN:
{
SDLKey sym = event.key.keysym.sym;
if(sym == SDLK_ESCAPE) //Quit if escape was pressed
{
return false;
}
m_Keys[sym] = 1;
break;
}
case SDL_KEYUP:
{
SDLKey sym = event.key.keysym.sym;
m_Keys[sym] = 0;
break;
}
case SDL_VIDEORESIZE:
{
//the window has been resized so we need to set up our viewport and projection according to the new size
resize(event.resize.w, event.resize.h);
break;
}
// Default case
default:
{
break;
}
}
}
return true;
}
bool glengine::init()
{
srand( time( NULL ) );
for(int i = 0; i < 500; i++)
p[i] = particle(0, 0, Vertex(0.0f, 0.0f, 0.0f), 0, 0, 0, 0);
if (!m_Window.createWindow(640, 480, 32, false, "Paricle Test GL"))
{
return false;
}
particlesize = 0.01;
glShadeModel(GL_SMOOTH); // Enable Smooth Shading
glClearColor(0.0f, 0.0f, 0.0f, 0.5f); // Black Background
glClearDepth(1.0f); // Depth Buffer Setup
glEnable(GL_DEPTH_TEST); // Enables Depth Testing
glDepthFunc(GL_LEQUAL); // The Type Of Depth Testing To Do
glEnable(GL_BLEND);
glBlendFunc(GL_ONE , GL_ONE_MINUS_SRC_ALPHA);
return true;
}
void glengine::test2(int num)
{
glPushMatrix();
glTranslatef(p[num].v.x, p[num].v.y, p[num].v.z);
glBegin(GL_QUADS);
glColor4i(p[num].r, p[num].g, p[num].b, p[num].a); // Green for x axis
glVertex3f(-particlesize, -particlesize, particlesize);
glVertex3f( particlesize, -particlesize, particlesize);
glVertex3f( particlesize, particlesize, particlesize);
glVertex3f(-particlesize, particlesize, particlesize);
glEnd();
glPopMatrix();
}
void glengine::draw()
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Clear Screen And Depth Buffer
glLoadIdentity(); // Reset The Current Modelview Matrix
gluLookAt(0, 5, 20, 0, 0, 0, 0, 0, 0);
for(int i = 0; i < 500; i++)
test2(i);
}
void glengine::update()
{
for(int i = 0; i < 500; i++)
{
if(p[i].a <= 0)
p[i] = particle(5 + rand() % 360, (rand() % 10) * 0.1, Vertex(0.0f, 0.0f, 0.0f), 0, 255, 255, 255);
else
p[i].a -= 1;
p[i].v.x += (sin(p[i].angle * (3.14159265/180)) * p[i].speed);
p[i].v.y -= (cos(p[i].angle * (3.14159265/180)) * p[i].speed);
}
}
void glengine::run()
{
while(processEvents())
{
update();
draw();
SDL_GL_SwapBuffers();
}
}
};
```
And finally window.cpp:
```
#include "window.h"
namespace ultragl
{
window::window(): w_width(0), w_height(0), w_bpp(0), w_fullscreen(false)
{
}
window::~window()
{
SDL_Quit();
}
bool window::createWindow(int width, int height, int bpp, bool fullscreen, const string& title)
{
if( SDL_Init( SDL_INIT_VIDEO ) != 0 )
return false;
w_height = height;
w_width = width;
w_title = title;
w_fullscreen = fullscreen;
w_bpp = bpp;
//Set lowest possible values.
SDL_GL_SetAttribute(SDL_GL_RED_SIZE, 5);
SDL_GL_SetAttribute(SDL_GL_GREEN_SIZE, 5);
SDL_GL_SetAttribute(SDL_GL_BLUE_SIZE, 5);
SDL_GL_SetAttribute(SDL_GL_ALPHA_SIZE, 5);
SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 16);
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
//Set title.
SDL_WM_SetCaption(title.c_str(), title.c_str());
// Flags tell SDL about the type of window we are creating.
int flags = SDL_OPENGL;
if(fullscreen == true)
flags |= SDL_FULLSCREEN;
// Create window
SDL_Surface * screen = SDL_SetVideoMode( width, height, bpp, flags );
if(screen == 0)
return false;
//SDL doesn't trigger off a ResizeEvent at startup, but as we need this for OpenGL, we do this ourself
SDL_Event resizeEvent;
resizeEvent.type = SDL_VIDEORESIZE;
resizeEvent.resize.w = width;
resizeEvent.resize.h = height;
SDL_PushEvent(&resizeEvent);
return true;
}
void window::setSize(int width, int height)
{
w_height = height;
w_width = width;
}
int window::getHeight()
{
return w_height;
}
int window::getWidth()
{
return w_width;
}
};
```
Anyway, I really need to finish this, but I've already tried everything I could think of. I tested the glengine file many different ways; at one point it looked like this:
```
#include "glengine.h"
#include "SOIL/SOIL.h"
#include "SOIL/stb_image_aug.h"
#include "SOIL/image_helper.h"
#include "SOIL/image_DXT.h"
namespace ultragl{
glengine::glengine()
{
piover180 = 0.0174532925f;
}
glengine::~glengine()
{
}
void glengine::resize(int x, int y)
{
std::cout << "Resizing Window to " << x << "x" << y << std::endl;
if (y <= 0)
{
y = 1;
}
glViewport(0,0,x,y);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(45.0f,(GLfloat)x/(GLfloat)y,1.0f,1000.0f);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
}
bool glengine::processEvents()
{
SDL_Event event;
while (SDL_PollEvent(&event))//get all events
{
switch (event.type)
{
// Quit event
case SDL_QUIT:
{
// Return false because we are quitting.
return false;
}
case SDL_KEYDOWN:
{
SDLKey sym = event.key.keysym.sym;
if(sym == SDLK_ESCAPE) //Quit if escape was pressed
{
return false;
}
m_Keys[sym] = 1;
break;
}
case SDL_KEYUP:
{
SDLKey sym = event.key.keysym.sym;
m_Keys[sym] = 0;
break;
}
case SDL_VIDEORESIZE:
{
//the window has been resized so we need to set up our viewport and projection according to the new size
resize(event.resize.w, event.resize.h);
break;
}
// Default case
default:
{
break;
}
}
}
return true;
}
bool glengine::init()
{
srand( time( NULL ) );
for(int i = 0; i < 500; i++)
p[i] = particle(0, 0, Vertex(0.0f, 0.0f, 0.0f), 0, 0, 0, 0);
if (!m_Window.createWindow(640, 480, 32, false, "Paricle Test GL"))
{
return false;
}
particlesize = 10.01;
glShadeModel(GL_SMOOTH); // Enable Smooth Shading
glClearColor(0.0f, 0.0f, 0.0f, 0.5f); // Black Background
glClearDepth(1.0f); // Depth Buffer Setup
glEnable(GL_DEPTH_TEST); // Enables Depth Testing
glDepthFunc(GL_LEQUAL); // The Type Of Depth Testing To Do
glEnable(GL_BLEND);
glBlendFunc(GL_ONE , GL_ONE_MINUS_SRC_ALPHA);
return true;
}
void glengine::test2(int num)
{
//glPushMatrix();
//glTranslatef(p[num].v.x, p[num].v.y, p[num].v.z);
glColor4i(255, 255, 255, 255);
glBegin(GL_QUADS);
glNormal3f( 0.0f, 0.0f, 1.0f);
glVertex3f(-particlesize, -particlesize, particlesize);
glVertex3f( particlesize, -particlesize, particlesize);
glVertex3f( particlesize, particlesize, particlesize);
glVertex3f(-particlesize, particlesize, particlesize);
glEnd();
// Back Face
glBegin(GL_QUADS);
glNormal3f( 0.0f, 0.0f,-1.0f);
glVertex3f(-particlesize, -particlesize, -particlesize);
glVertex3f(-particlesize, particlesize, -particlesize);
glVertex3f( particlesize, particlesize, -particlesize);
glVertex3f( particlesize, -particlesize, -particlesize);
glEnd();
// Top Face
glBegin(GL_QUADS);
glNormal3f( 0.0f, 1.0f, 0.0f);
glVertex3f(-particlesize, particlesize, -particlesize);
glVertex3f(-particlesize, particlesize, particlesize);
glVertex3f( particlesize, particlesize, particlesize);
glVertex3f( particlesize, particlesize, -particlesize);
glEnd();
// Bottom Face
glBegin(GL_QUADS);
glNormal3f( 0.0f,-1.0f, 0.0f);
glVertex3f(-particlesize, -particlesize, -particlesize);
glVertex3f( particlesize, -particlesize, -particlesize);
glVertex3f( particlesize, -particlesize, particlesize);
glVertex3f(-particlesize, -particlesize, particlesize);
glEnd();
// Right face
glBegin(GL_QUADS);
glNormal3f( 1.0f, 0.0f, 0.0f);
glVertex3f( particlesize, -particlesize, -particlesize);
glVertex3f( particlesize, particlesize, -particlesize);
glVertex3f( particlesize, particlesize, particlesize);
glVertex3f( particlesize, -particlesize, particlesize);
glEnd();
// Left Face
glBegin(GL_QUADS);
glNormal3f(-1.0f, 0.0f, 0.0f);
glVertex3f(-particlesize, -particlesize, -particlesize);
glVertex3f(-particlesize, -particlesize, particlesize);
glVertex3f(-particlesize, particlesize, particlesize);
glVertex3f(-particlesize, particlesize, -particlesize);
glEnd();
//glPopMatrix();
}
void glengine::draw()
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Clear Screen And Depth Buffer
glLoadIdentity(); // Reset The Current Modelview Matrix
gluLookAt(0, 5, 20, 0, 0, 0, 0, 1, 0);
for(int i = 0; i < 500; i++)
test2(i);
}
void glengine::update()
{
for(int i = 0; i < 500; i++)
{
if(p[i].a <= 0)
p[i] = particle(5 + rand() % 360, (rand() % 10) * 0.1, Vertex(0.0f, 0.0f, -5.0f), 0, 255, 255, 255);
else
p[i].a -= 1;
p[i].v.x += (sin(p[i].angle * (3.14159265/180)) * p[i].speed);
p[i].v.y -= (cos(p[i].angle * (3.14159265/180)) * p[i].speed);
}
}
void glengine::run()
{
while(processEvents())
{
update();
draw();
SDL_GL_SwapBuffers();
}
}
};
```
It still didn't work. I'm really at my wits' end on this one.
`particlesize = 0.01;` should have been bigger: `particlesize = 1.01;`
`glColor4i(255, 255, 255, 255)` was turning the cube the same color as the clear color because I was using it wrong. I couldn't figure out how to use it right, so I'm using `glColor4f(0.0f,1.0f,1.0f,0.5f)` instead, and that works.
Last of all `gluLookAt(0, 5, 20, 0, 0, 0, 0, 0, 0)` needed to be `gluLookAt(0, 5, 20, 0, 0, 0, 0, 1, 0)`
Thank you all for your help, and your time. | I haven't checked your code, but one thing I always do when debugging this kind of problem is to set the clear color to something colorful like (1, 0, 1) or so.
This will help you see if the problem is that your drawn object is completely black or if it's not drawn at all.
EDIT:
As someone mentioned in the comments: it also shows whether you have a correct GL context, since the clear operation will either clear to the right color or stay black. | Why won't my OpenGL draw anything? | [
"",
"c++",
"opengl",
"sdl",
""
] |
```
<root>
<thing>
<specs>
<spec1 />
<spec3 />
<spec2 />
</specs>
<details />
<more_info>
<info1 />
<info2 />
</more_info>
</thing>
</root>
```
OK, so I have this sample XML, and the problem is that I can't seem to get the values of the inner XML.
When I use `$reader->readInnerXML()` it returns the whole string, though I'm sure that my XML is valid.
What I wanted was to get the values of spec1, spec2, and spec3 separately.
The code is pretty long, so I posted it [here](http://pastebin.com/f48b9bd38).
I've been stuck on this for 3 days now; I'd gladly accept any corrections. | It depends what you mean by "value". If you have something like
```
<spec3>Value</spec3>
```
Then readInnerXML should be giving you your value.
If your value is in an attribute,
```
<spec1 foo="my attribute" />
```
You'll need to use the getAttribute method of the XMLReader object, or explicitly tell the reader to start parsing attributes. See the code example below for a few ways to accomplish this.
Finally, if the node contains more nested XML,
```
<spec2><foo><baz thing="la de da">Value</baz></foo></spec2>
```
There's no direct way, at that point, for the reader to understand the values/elements inside it. You'd need to do one of the following:
1. Change your reader parsing code to hook into elements at those depths.
2. Take the XML chunk from readInnerXML and start parsing it with a second XMLReader instance.
3. Take the XML chunk from readInnerXML and start parsing it with another XML parsing library.
Here's some example code for parsing attributes
```
$reader = new XMLReader();
$reader->xml(trim('
<root>
<thing>
<specs>
<spec1 foo="my attribute">Value</spec1>
<spec3>
My Text
</spec3>
<spec2 foo="foo again" bar="another attribute" baz="yet another attribute" />
</specs>
<details />
<more_info>
<info1 />
<info2 />
</more_info>
</thing>
</root>
'));
$last_node_at_depth = array();
$already_processed = array();
while($reader->read()){
$last_node_at_depth[$reader->depth] = $reader->localName;
if(
$reader->depth > 0 &&
$reader->localName != '#text' &&
$last_node_at_depth[($reader->depth-1)] == 'specs' &&
!in_array ($reader->localName,$already_processed)
){
echo "\n".'Processing ' . $reader->localName . "\n";
$already_processed[] = $reader->localName;
echo '--------------------------------------------------'."\n";
echo 'The Value for the inner node ';
echo ' is [';
echo trim($reader->readInnerXML());
echo ']'."\n";
if($reader->attributeCount > 0){
echo 'This node has attributes, lets process them' . "\n";
//grab attribute by name
echo ' Value of attribute foo: ' . $reader->getAttribute('foo') . "\n";
//or use the reader to itterate through all the attributes
$length = $reader->attributeCount;
for($i=0;$i<$length;$i++){
//now the reader is pointing at attributes instead of nodes
$reader->moveToAttributeNo($i);
echo ' Value of attribute ' . $reader->localName;
echo ': ';
echo $reader->value;
echo "\n";
}
}
//echo $reader->localName . "\n";
}
}
``` | That's working [as advertised](http://www.php.net/manual/en/xmlreader.readinnerxml.php):
> **readInnerXML**
>
> Reads the contents of the current node, including child nodes and markup.
I think your confusion could be between nodes and attributes. `<spec1 />` is not an attribute - it's a node without any children. Writing `<spec1 />` is just short hand for `<spec1></spec1>`. So what you need is either to use actual attributes:
```
<root>
<thing>
<specs spec1="" spec3="" spec2="" />
<details />
<more_info info1="" info2="" />
</thing>
</root>
```
or read those nodes.
Anyway. I'm not sure if this is just because you're showing us some example code or not, but naming nodes `spec1`, `spec2`, `spec3` etc. is probably not a good idea. Node names don't need to be unique in XML. | help in reading nested xml using xmlreader in php | [
"",
"php",
"xmlreader",
""
] |
Most experienced programmers know that data alignment is important for a program's performance. I have seen programmers write code that allocates a bigger buffer than needed and then uses an aligned pointer inside it as the beginning. I am wondering whether I should do that in my own programs; I have no idea whether there is any guarantee about the alignment of addresses returned by C++'s new operator. So I wrote a little program to test:
```
for(size_t i = 0; i < 100; ++i) {
char *p = new char[123];
if(reinterpret_cast<size_t>(p) % 4) {
cout << "*";
system("pause");
}
cout << reinterpret_cast<void *>(p) << endl;
}
for(size_t i = 0; i < 100; ++i) {
short *p = new short[123];
if(reinterpret_cast<size_t>(p) % 4) {
cout << "*";
system("pause");
}
cout << reinterpret_cast<void *>(p) << endl;
}
for(size_t i = 0; i < 100; ++i) {
float *p = new float[123];
if(reinterpret_cast<size_t>(p) % 4) {
cout << "*";
system("pause");
}
cout << reinterpret_cast<void *>(p) << endl;
}
system("pause");
```
The compiler I am using is Visual C++ Express 2008. It seems that all the addresses returned by new are aligned, but I am not sure. So my question is: is there any guarantee? If there is a guarantee, I don't have to align the memory myself; if not, I do. | The alignment has the following guarantee from the standard (3.7.3.1/2):
> The pointer returned shall be suitably aligned so that it can be converted to a
> pointer of any complete object type and then used to access the object or array in the
> storage allocated (until
> the storage is explicitly deallocated by a call to a corresponding deallocation function).
**EDIT**: Thanks to [timday](https://stackoverflow.com/users/24283/timday) for highlighting a [bug](http://gcc.gnu.org/bugzilla/show_bug.cgi?id=15795) in gcc/glibc where the guarantee does not hold.
**EDIT 2**: Ben's comment highlights an intersting edge case. The requirements on the allocation routines are for those provided by the standard only. If the application has it's own version, then there's no such guarantee on the result. | This is a late answer but just to clarify the situation on Linux - on 64-bit systems
memory is always 16-byte aligned:
<http://www.gnu.org/software/libc/manual/html_node/Aligned-Memory-Blocks.html>
> The address of a block returned by malloc or realloc in the GNU system is always a
> multiple of eight (or sixteen on 64-bit systems).
The `new` operator calls `malloc` internally
(see `./gcc/libstdc++-v3/libsupc++/new_op.cc`)
so this applies to `new` as well.
The implementation of `malloc` which is part of the `glibc` basically defines
`MALLOC_ALIGNMENT` to be `2*sizeof(size_t)` and `size_t` is 32bit=4byte and 64bit=8byte
on a x86-32 and x86-64 system, respectively.
```
$ cat ./glibc-2.14/malloc/malloc.c:
...
#ifndef INTERNAL_SIZE_T
#define INTERNAL_SIZE_T size_t
#endif
...
#define SIZE_SZ (sizeof(INTERNAL_SIZE_T))
...
#ifndef MALLOC_ALIGNMENT
#define MALLOC_ALIGNMENT (2 * SIZE_SZ)
#endif
``` | Is there any guarantee of alignment of address return by C++'s new operation? | [
"",
"c++",
"performance",
"alignment",
"new-operator",
""
] |
Let's say I have the following class structure:
```
class Car;
class FooCar : public Car;
class BarCar : public Car;
class Engine;
class FooEngine : public Engine;
class BarEngine : public Engine;
```
Let's also give a `Car` a handle to its `Engine`. A `FooCar` will be created with a `FooEngine*` and a `BarCar` will be created with a `BarEngine*`. Is there a way to arrange things so a `FooCar` object can call member functions of `FooEngine` without downcasting?
Here's why the class structure is laid out the way it is right now:
1. All `Car`s have an `Engine`. Further, a `FooCar` will only ever use a `FooEngine`.
2. There are data and algorithms shared by all `Engine`s that I'd rather not copy and paste.
3. I might want to write a function that requires an `Engine` to know about its `Car`.
As soon as I typed `dynamic_cast` when writing this code, I knew I was probably doing something wrong. Is there a better way to do this?
UPDATE:
Based on the answers given so far, I'm leaning towards two possibilities:
1. Have `Car` provide a pure virtual `getEngine()` function. That would allow `FooCar` and `BarCar` to have implementations that return the correct kind of `Engine`.
2. Absorb all of the `Engine` functionality into the `Car` inheritance tree. `Engine` was broken out for maintenance reasons (to keep the `Engine` stuff in a separate place). It's a trade-off between having more small classes (small in lines of code) versus having fewer large classes.
Is there a strong community preference for one of these solutions? Is there a third option I haven't considered? | I'm assuming that Car holds an Engine pointer, and that's why you find yourself downcasting.
Take the pointer out of your base class and replace it with a pure virtual get\_engine() function. Then your FooCar and BarCar can hold pointers to the correct engine type.
**(Edit)**
Why this works:
Since the virtual function `Car::get_engine()` would return a **reference or a pointer**, C++ will allow derived classes to implement this function with a **different return type**, as long as the return type only differs by being a more derived type.
This is called [covariant return types](http://en.wikipedia.org/wiki/Covariant_return_type), and will allow each `Car` type to return the correct `Engine`. | Just one thing I wanted to add: this design already smells bad to me because of what I call **parallel trees**.
Basically if you end up with parallel class hierarchies (as you have with Car and Engine) then you're just asking for trouble.
I would rethink if Engine (and even Car) needs to have subclasses or those are all just different instances of the same respective base classes. | How can I avoid dynamic_cast in my C++ code? | [
"",
"c++",
"oop",
"inheritance",
"dynamic-cast",
"car-analogy",
""
] |
Implementing a 'sandbox' environment in Python used to be done with the rexec module (<http://docs.python.org/library/rexec.html>). Unfortunately, it has been deprecated/removed due to some security vulnerabilities. Is there an alternative?
My goal is to have Python code execute semi-trusted Python scripts. In a perfect world, calls to any functions outside of a pre-defined set would raise exceptions. From what I've read about rexec's deprecation, this may not be possible. So I'll settle for as much as I can get. I can spawn a separate process to run the scripts, which helps a lot. But they could still abuse I/O or processor/memory resources. | You might want to provide your own `__import__` to prevent inclusion of any modules you deem "abuse I/O or processor/memory resources."
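A custom `__import__` that whitelists modules can be sketched like this (the whitelist and helper names are hypothetical, and this is not a real security boundary: determined code can still escape it):

```python
# Sketch only: a whitelisting __import__ plus a restricted builtins dict.
# The ALLOWED_MODULES set and function names are hypothetical; this is NOT
# a hardened sandbox, it only blocks the obvious import paths.

ALLOWED_MODULES = {"math", "string"}

def safe_import(name, *args, **kwargs):
    if name.split(".")[0] not in ALLOWED_MODULES:
        raise ImportError("module %r is not allowed" % name)
    return __import__(name, *args, **kwargs)

def run_untrusted(source):
    # Expose only a small set of builtins to the script.
    safe_builtins = {"__import__": safe_import, "abs": abs, "len": len}
    env = {"__builtins__": safe_builtins}
    exec(source, env)
    return env

env = run_untrusted("import math\nresult = math.sqrt(16)")
assert env["result"] == 4.0
```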
You might want to start with [pypy](http://codespeak.net/pypy/dist/pypy/doc/) and create your own interpreter with limitations and constraints on resource use. | in cpython "sandboxing" for security reasons is a:
**"*don't do that at your company kids*"-thing**.
try :
* jython with java "sandboxing"
* pypy -> see Answer S.Lott
* maybe ironpython has a solution ?
see [Warning](http://www.wingware.com/psupport/python-manual/2.6/library/restricted.html):
***Warning***
In Python 2.3 these modules have been disabled due to various known and not readily fixable security holes. The modules are still documented here to help in reading old code that uses the rexec and Bastion modules. | Is there an alternative to rexec for Python sandboxing? | [
"",
"python",
"security",
"sandbox",
"rexec",
""
] |
I am trying to limit what content subscribers can see, but nothing I do seems to work. I have spent hours trawling the web and the WordPress code, all to no avail. Does anyone know how I would be able to go about this?
Ideally the code structure would look like:
```
if(get_role() = 'subscriber'){
redirect
}
```
Thanks
Incidentally I have tried get\_role($role) and that doesn't work for me. | I've used current\_user\_can for this. There's a list of roles and capabilities here:
<http://codex.wordpress.org/Roles_and_Capabilities#Capabilities:_5>
So, since everyone above the level of "subscriber" can edit posts, one way to accommodate the requirement you've outlined would be:
```
if (!current_user_can('edit_posts')){
//redirect, error, etc as you like
}
``` | **Update**: there's a function `current_user_can(capability)` which you can use to find out what a user can and cannot do. I'd imagine that you will need to add another role or capability.
try the following:
```
if ('subscriber' == get_role()) {
# do whatever
}
```
`=` is assignment
`==` is comparison
"",
"php",
"wordpress",
""
] |
I have a databound TextBlock control (which is being used inside a DataTemplate to display items in a ListBox) and I want to make all the text in the control bold. I can't seem to find a property in the properties explorer to set the whole text to bold, and all I can find online is the use of the `<Bold>` tag inside the TextBlock, but I can't put that in as the data is coming directly from the data source.
There must be a way to do this - but how? I'm very inexperienced in WPF so I don't really know where to look. | Am I missing something, or do you just need to set the FontWeight property to "Bold"?
```
<TextBlock FontWeight="Bold" Text="{Binding Foo}" />
``` | Rather than just having a TextBlock, try this:
```
<TextBlock>
<Bold>
<Run />
</Bold>
</TextBlock>
```
Then databind to the Run.TextProperty instead. | Set TextBlock to be entirely bold when DataBound in WPF | [
"",
"c#",
".net",
"wpf",
"xaml",
"textblock",
""
] |
In Java, I can validate an XML document against an XSD schema using javax.xml.validation.Validator, or against a DTD by simply parsing the document using org.xml.sax.XMLReader.
What I need though is a way of programmatically determining whether the document itself validates against a DTD (i.e. it contains a `<!DOCTYPE ...>` statement) or an XSD. Ideally I need to do this without loading the whole XML document into memory. Can anyone please help?
(Alternatively, if there's a *single* way of validating an XML document in Java that works for both XSDs and DTDs - and allows for custom resolving of resources - that would be even better!)
Many thanks,
A | There is no 100% foolproof process for determining how to validate an arbitrary XML document.
For example, this version 2.4 [web application deployment descriptor](http://en.wikipedia.org/wiki/Deployment_Descriptor) specifies a [W3 schema](http://www.w3.org/XML/Schema) to validate the document:
```
<?xml version="1.0" encoding="UTF-8"?>
<web-app id="WebApp_ID" version="2.4"
xmlns="http://java.sun.com/xml/ns/j2ee"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd">
```
However, this is an equally valid way of expressing the same thing:
```
<?xml version="1.0" encoding="UTF-8"?>
<web-app id="WebApp_ID" version="2.4"
xmlns="http://java.sun.com/xml/ns/j2ee">
```
[RELAX NG](http://relaxng.org/) doesn't seem to have a mechanism that offers *any* hints in the document that you should use it. Validation mechanisms are determined by document consumers, not producers. If I'm not mistaken, this was one of the impetuses driving the switch from DTD to more modern validation mechanisms.
In my opinion, your best bet is to tailor the mechanism detector to the set of document types you are processing, reading header information and interpreting it as appropriate. The [StAX parser](http://java.sun.com/javase/6/docs/api/javax/xml/stream/package-summary.html) is good for this - because it is a pull mechanism, you can just read the start of the file and then quit parsing on the first element.
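For illustration, a rough sketch of such a header sniff with StAX (the class name and the "DTD"/"XSD"/"UNKNOWN" labels are my own; a real detector would be tailored to your document types):

```java
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;
import java.io.StringReader;

public class SchemaHintDetector {

    /** Returns "DTD", "XSD" or "UNKNOWN" based only on the document header. */
    public static String detect(String xml) throws Exception {
        XMLInputFactory factory = XMLInputFactory.newInstance();
        XMLStreamReader reader = factory.createXMLStreamReader(new StringReader(xml));
        try {
            while (reader.hasNext()) {
                int event = reader.next();
                if (event == XMLStreamConstants.DTD) {
                    return "DTD";  // a <!DOCTYPE ...> declaration was seen
                }
                if (event == XMLStreamConstants.START_ELEMENT) {
                    // Stop at the root element: look for an xsi:* schema hint.
                    for (int i = 0; i < reader.getAttributeCount(); i++) {
                        if ("http://www.w3.org/2001/XMLSchema-instance"
                                .equals(reader.getAttributeNamespace(i))) {
                            return "XSD";
                        }
                    }
                    return "UNKNOWN";
                }
            }
        } finally {
            reader.close();
        }
        return "UNKNOWN";
    }

    public static void main(String[] args) throws Exception {
        if (!"DTD".equals(detect("<!DOCTYPE root [<!ELEMENT root EMPTY>]><root/>")))
            throw new AssertionError("expected DTD");
        if (!"XSD".equals(detect("<root xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:noNamespaceSchemaLocation=\"a.xsd\"/>")))
            throw new AssertionError("expected XSD");
    }
}
```

Because the reader quits at the first start element, only the header is ever parsed, regardless of document size.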
[Link to more of the same and sample code and whatnot.](http://illegalargumentexception.blogspot.com/2009/02/java-finding-validation-mechanism-for.html) | See the package description for [javax.xml.validation](http://java.sun.com/javase/6/docs/api/javax/xml/validation/package-summary.html). It contains information about and examples for validating both XSDs and DTDs | How to tell whether an XML document validates against a DTD or XSD? | [
"",
"java",
"xml",
"validation",
"xsd",
"dtd",
""
] |
Consider following class
```
class test
{
public:
test(int x){ cout<< "test \n"; }
};
```
Now I want to create an array of 50 objects of class test. I cannot change class test.
Objects can be created on the heap or on the stack.
Creating the objects as a plain stack array is not possible in this case, since we don't have a default constructor in the class:
```
test objs(1)[50]; /// Error...
```
Now we may think of creating the objects on the heap like this:
```
test ** objs = NULL;
objs = (test **) malloc( 50 * sizeof (test *));
for (int i =0; i<50 ; ++ i)
{
objs[i] = new test(1);
}
```
I don't want to use malloc. Is there any other way?
If you guys can think of some more solutions, please post them... | You cannot create an array of objects, as in **Foo foo [N]**, without a default constructor. It's part of the language spec.
**Either** do:
```
test * objs[50];
for (int i = 0; i < 50; ++i) objs[i] = new test(1);
```
You don't need malloc(). You can just declare an array of pointers.
```
c++decl> explain int * objs [50]
declare objs as array 50 of pointer to int
```
But you probably ought to have some sort of automatic RAII-type destruction attached.
---
**OR** subclass *test* publicly:
```
class TempTest : public test
{
public:
TempTest() : test(1) {}
TempTest(int x) : test(x) {}
TempTest(const test & theTest ) : test(theTest) {}
TempTest(const TempTest & theTest ) : test(theTest) {}
test & operator=( const test & theTest ) { return test::operator=(theTest); }
test & operator=( const TempTest & theTest ) { return test::operator=(theTest); }
virtual ~TempTest() {}
};
```
and then:
```
TempTest array[50];
```
You can treat every *TempTest* object as a *test* object.
Note: operator=() & copy constructor are not inherited, so respecify as necessary. | Why do you need array?
```
std::vector<test*> v(50);
```
Or as `@j_random_hacker` suggested in the comments:
```
std::vector<test> v(50, test(1));
```
An example:
```
/** g++ -Wall -o vector_test *.cpp && vector_test */
#include <algorithm>
#include <iostream>
#include <iterator>
#include <vector>
struct Test {
int value;
Test(int x) : value(x)
{
std::cout << "Test(" << value << ")" << " ";
}
operator int() const
{
std::cout << "int(" << value << ")" << " ";
return value;
}
};
int main()
{
using namespace std;
vector<Test> v(5, Test(1));
cout << endl;
copy(v.begin(), v.end(), ostream_iterator<int>(cout, " "));
cout << endl;
v[1] = 2;
v[2].value = 3;
cout << endl;
copy(v.begin(), v.end(), ostream_iterator<int>(cout, " "));
cout << endl;
return 0;
}
```
Output:
```
Test(1)
int(1) 1 int(1) 1 int(1) 1 int(1) 1 int(1) 1
Test(2)
int(1) 1 int(2) 2 int(3) 3 int(1) 1 int(1) 1
``` | Create an array of class objs | [
"",
"c++",
""
] |
It's easy to set `CssClass` in the code-behind, but this runs the risk of overwriting existing classes.
I need to set certain elements to `ReadOnly = true;` and I'd like to apply a style as a visual cue that the item cannot be altered...easy enough:
```
.CssClass += " ReadOnlyStyle";
```
But at times I will *also* need to change the same element to `ReadOnly = false;` which means that I will need to remove the CSS class that I set without removing any other styles that I might have assigned.
What's the best way to do this? | I've taken AnthonyWJones original code and amended it so that it works no matter what scenario:
```
static class WebControlsExtensions
{
public static void AddCssClass(this WebControl control, string cssClass)
{
List<string> classes = control.CssClass.Split(new char[] { ' ' }, StringSplitOptions.RemoveEmptyEntries).ToList();
classes.Add(cssClass);
control.CssClass = classes.ToDelimitedString(" ");
}
public static void RemoveCssClass(this WebControl control, string cssClass)
{
List<string> classes = control.CssClass.Split(new char[] { ' ' }, StringSplitOptions.RemoveEmptyEntries).ToList();
classes.Remove(cssClass);
control.CssClass = classes.ToDelimitedString(" ");
}
}
static class StringExtensions
{
public static string ToDelimitedString(this IEnumerable<string> list, string delimiter)
{
StringBuilder sb = new StringBuilder();
foreach (string item in list)
{
if (sb.Length > 0)
sb.Append(delimiter);
sb.Append(item);
}
return sb.ToString();
}
}
``` | In C# 3 you can add some extension methods.
```
static class WebControlsExtensions
{
public static void AddCssClass (this WebControl control, string cssClass)
{
control.CssClass += " " + cssClass;
}
public static void RemoveCssClass (this WebControl control, string cssClass)
{
control.CssClass = control.CssClass.Replace(" " + cssClass, "");
}
}
```
Usage:-
```
ctl.AddCssClass("ReadOnly");
ctl.RemoveCssClass("ReadOnly");
```
Note that RemoveCssClass is designed to remove only those classes added by AddCssClass, and has the limitation that where 2 additional class names are added, the shorter name should not match exactly the start of the longer name. E.g., if you added "test" and "test2" you can't remove "test" without corrupting the CssClass. This could be improved with RegEx, but I expect the above to be adequate for your needs.
Note if you don't have C#3 then remove the `this` keyword from the first parameter and use the static methods in the conventional manner. | Change CSS classes from code | [
"",
"c#",
"css",
"asp.net",
"webforms",
""
] |
I am learning how to use NUnit. I have my main project in its solution, and created a separate project in that same solution which will hold my unit tests, with its own namespace. From that project I add a reference to the main project and add a
```
using MainProjectNamespace;
```
to the top of it.
When I go to NUnit, any tests I have that don't reference the main project work. These are tests I set up just to get used to NUnit, and they are pretty much useless. When NUnit runs the real tests, the test throws this exception:
> TestLibrary.Test.TestMainProject:
> System.IO.FileNotFoundException :
> Could not load file or assembly
> 'WpfApplication2, Version = 1.0.0.0,
> Culture=neutral, PublicKeyToken=null'
> or one of its dependencies. The
> system cannot find the specified file.
Why am I getting this exception?
EDIT:
Now when I try to load the assembly into NUnit, it won't even load (so I can't even get a chance to run the tests)
This is the exception that comes up, and the stack trace:
System.IO.DirectoryNotFoundException: Could not find a part of the path 'LONG PATH HERE I DON'T WANT TO TYPE'
```
System.IO.DirectoryNotFoundException...
Server stack trace:
at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
at System.IO.Directory.SetCurrentDirectory(String path)
at NUnit.Core.DirectorySwapper..ctor(String directoryName)
at NUnit.Core.Builders.TestAssemblyBuilder.Load(String path)
at NUnit.Core.Builders.TestAssemblyBuilder.Build(String assemblyName, Boolean autoSuites)
at NUnit.Core.Builders.TestAssemblyBuilder.Build(String assemblyName, String testName, Boolean autoSuites)
at NUnit.Core.TestSuiteBuilder.Build(TestPackage package)
at NUnit.Core.SimpleTestRunner.Load(TestPackage package)
at NUnit.Core.ProxyTestRunner.Load(TestPackage package)
at NUnit.Core.ProxyTestRunner.Load(TestPackage package)
at NUnit.Core.RemoteTestRunner.Load(TestPackage package)
at System.Runtime.Remoting.Messaging.StackBuilderSink._PrivateProcessMessage(IntPtr md, Object[] args, Object server, Int32 methodPtr, Boolean fExecuteInContext, Object[]& outArgs)
at System.Runtime.Remoting.Messaging.StackBuilderSink.PrivateProcessMessage(RuntimeMethodHandle md, Object[] args, Object server, Int32 methodPtr, Boolean fExecuteInContext, Object[]& outArgs)
at System.Runtime.Remoting.Messaging.StackBuilderSink.SyncProcessMessage(IMessage msg, Int32 methodPtr, Boolean fExecuteInContext)
Exception rethrown at [0]:
at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
at NUnit.Core.TestRunner.Load(TestPackage package)
at NUnit.Util.TestDomain.Load(TestPackage package)
at NUnit.Util.TestLoader.LoadTest(String testName)
```
EDIT2:
The above path clearly IS on my hard drive.
EDIT3:
I just switched from Debug to Release, on NUnit, and loaded the dll from the release folder of TestingLibrary... And it loaded! 1 of the 3 namespace-specific tests worked. Getting somewhere, I am.
EDIT4:
Welllllllll... I can actually run the tests now, but I am back to the original error: it is not finding the assembly for the main project. | The compiler removes all unused references, and doesn't deploy the DLL unnecessarily. A `using` directive (by itself) does not count as a use. Either mark the DLL for deployment via the "Copy to output directory" setting, or add some code that *really* uses types declared in the DLL. | Did you rename the output assembly or the namespace in the source project?
Looks like your source file is "WPFApplication1" & I am speculating that you might have changed the output type from dll to exe? | Why am I getting a FileNotFound exception when referencing another project from the same solution? | [
"",
"c#",
".net",
"nunit",
""
] |