| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I'm looking for a Delphi component / library to open and read from an mdb (MS Access) database. I will not be writing to the db or displaying the data; just need to read the db using whatever sql Access supports.
This is for a personal side-project (programming is not my paying job), so I need a free or a very inexpensive solution that works with any of Delphi 6, Delphi 2007 or Delphi 2009 (Professional editions all). Performance doesn't matter, simplicity does :) | <http://www.teachitza.com/delphi/databasehowto.htm>
It is a really simple and easy task, needing only 5-10 lines of code.
That was very useful for me when I needed to read some data from MS Access files.
To get started, you can use a simple connection string like this:
```
DataSource := 'Provider=Microsoft.Jet.OLEDB.4.0;Data Source=' + Filename +
';Persist Security Info=False';
ADOConnection1.ConnectionString := DataSource;
ADOConnection1.LoginPrompt := False;
ADOConnection1.Connected := true;
// ADOConnection1.GetTableNames(listbox1.items);
AdoTable1.Connection := ADOConnection1;
AdoTable1.ReadOnly := True; //set to False if you want to make changes
ADOTable1.active := false;
ADOTable1.TableName := 'B2777'; //table name
ADOTable1.active := true;
```
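(As an aside, not from the original answer: the Jet connection string is plain text and can be assembled in any language; a tiny sketch with a made-up file path:)

```python
def jet_connection_string(filename: str) -> str:
    # Mirrors the ADO connection string used in the Delphi snippet above.
    return ("Provider=Microsoft.Jet.OLEDB.4.0;"
            f"Data Source={filename};"
            "Persist Security Info=False")

print(jet_connection_string(r"C:\data\mydb.mdb"))
```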
Filename is your mdb file's path + name. That is what I use for very simple tasks. | I use the ADO components included with Delphi for this ("Microsoft Jet 4.0 OLE Provider"). It requires MDAC installed on the client, which is already included in XP and newer systems. | Reading MS Access mdb files in Delphi (for free)? | [
"",
"sql",
"delphi",
"ms-access",
"components",
"ms-jet-ace",
""
] |
I'm getting this error when trying to access my webservice running inside tomcat.
> Caused by: java.lang.LinkageError: JAXB 2.0 API is being loaded from the bootstrap classloader, but this RI (from jar:file:/C:/software/tomcat6/webapps/messaging/WEB-INF/lib/jaxb-impl-2.1.5.jar!/com/sun/xml/bind/v2/model/impl/ModelBuilder.class) needs 2.1 API. Use the endorsed directory mechanism to place jaxb-api.jar in the bootstrap classloader. (See <http://java.sun.com/j2se/1.5.0/docs/guide/standards/>)
I googled for the error and did what should solve it (I put jaxb-api.jar, version 2.1 in JDK/lib/endorsed and JDK/jre/lib/endorsed) but it doesn't appear to have any effect.
I didn't have it before, and I'm not sure what was changed. I use JDK 6u10. | Java 6u10 includes JAXB 2.1, so there is no need to include it at all (it has been included since 6u4).
Right now it looks like you have a conflict between the JAXB included with your webapp and the bundled JAXB that comes with the JRE. You could try removing the JAXB jar from your webapp (`C:/software/tomcat6/webapps/messaging/WEB-INF/lib/jaxb-impl-2.1.5.jar`) and rely on the built-in version. | The recommended way is to define the JAVA\_ENDORSED\_DIRS environment variable. Then Tomcat will use this Java property:
```
-Djava.endorsed.dirs=$JAVA_ENDORSED_DIRS
```
in order to load the endorsed jars. So, create a folder, put jaxb-api.jar v2.1 there, and define the environment variable to point at that folder.
This is better than using the global endorsed directory, because it doesn't affect the whole JVM and you don't have to repeat the process, when the JRE is updated. | LinkageError JAXB 2.0 -> 2.1 (Tomcat) | [
"",
"java",
"tomcat",
"jaxb",
"endorsed",
""
] |
I need to decide whether a file name fits a file mask. The file mask can contain \* or ? characters. Is there any simple solution for this?
```
bool bFits = Fits("myfile.txt", "my*.txt");
private bool Fits(string sFileName, string sFileMask)
{
??? anything simple here ???
}
``` | Try this:
```
private bool FitsMask(string sFileName, string sFileMask)
{
Regex mask = new Regex(sFileMask.Replace(".", "[.]").Replace("*", ".*").Replace("?", "."));
return mask.IsMatch(sFileName);
}
``` | I appreciate finding Joel's answer--saved me some time as well ! I did, however, have to make a few changes to make the method do what most users would expect:
* I removed the 'this' keyword preceding the first argument. It does nothing here (though it could be useful if the method is intended to be an extension method, in which case it needs to be public and contained within a static class and itself be a static method).
* I made the regular expression case-independent to match standard Windows wildcard behavior (so e.g. "c\*.\*" and "C\*.\*" both return the same result).
* I added starting and ending anchors to the regular expression, again to match standard Windows wildcard behavior (so e.g. "stuff.txt" would be matched by "stuff\*" or "s\*" or "s\*.\*" but not by just "s").
---
```
private bool FitsMask(string fileName, string fileMask)
{
Regex mask = new Regex(
'^' +
fileMask
.Replace(".", "[.]")
.Replace("*", ".*")
.Replace("?", ".")
+ '$',
RegexOptions.IgnoreCase);
return mask.IsMatch(fileName);
}
```
## 2009.11.04 Update: Match one of several masks
For even more flexibility, here is a plug-compatible method built on top of the original. This version lets you pass multiple masks (hence the plural on the second parameter name *fileMasks*) separated by line breaks, commas, vertical bars, or spaces. I wanted it so that I could let the user put as many choices as desired in a ListBox and then select all files matching *any* of them. Note that some controls (like a ListBox) use CR-LF for line breaks while others (e.g. RichTextBox) use just LF--that is why both "\r\n" and "\n" show up in the Split list.
```
private bool FitsOneOfMultipleMasks(string fileName, string fileMasks)
{
return fileMasks
.Split(new string[] {"\r\n", "\n", ",", "|", " "},
StringSplitOptions.RemoveEmptyEntries)
.Any(fileMask => FitsMask(fileName, fileMask));
}
```
## 2009.11.17 Update: Handle fileMask inputs more gracefully
The earlier version of FitsMask (which I have left in for comparison) does a fair job but since we are treating it as a regular expression it will throw an exception if it is not a valid regular expression when it comes in. The solution is that we actually want any regex metacharacters in the input fileMask to be considered literals, not metacharacters. But we still need to treat period, asterisk, and question mark specially. So this improved version of FitsMask safely moves these three characters out of the way, transforms all remaining metacharacters into literals, then puts the three interesting characters back, in their "regex'ed" form.
One other minor improvement is to allow for case-independence, per standard Windows behavior.
```
private bool FitsMask(string fileName, string fileMask)
{
string pattern =
'^' +
Regex.Escape(fileMask.Replace(".", "__DOT__")
.Replace("*", "__STAR__")
.Replace("?", "__QM__"))
.Replace("__DOT__", "[.]")
.Replace("__STAR__", ".*")
.Replace("__QM__", ".")
+ '$';
return new Regex(pattern, RegexOptions.IgnoreCase).IsMatch(fileName);
}
```
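As an aside (not from the original answer): Python's standard library ships this exact wildcard-to-regex translation as `fnmatch.translate`, which makes a handy cross-check for the C# version above:

```python
import fnmatch
import re

def fits_mask(file_name: str, file_mask: str) -> bool:
    # fnmatch.translate turns a *? wildcard mask into an anchored regex;
    # IGNORECASE mirrors Windows wildcard behaviour.
    pattern = fnmatch.translate(file_mask)
    return re.match(pattern, file_name, re.IGNORECASE) is not None

print(fits_mask("stuff.txt", "s*.*"))  # True
print(fits_mask("stuff.txt", "s"))     # False
```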
## 2010.09.30 Update: Somewhere along the way, passion ensued...
I have been remiss in not updating this earlier but these references will likely be of interest to readers who have made it to this point:
* I embedded the **FitsMask** method as the heart of a WinForms user control aptly called a **FileMask**--see the API [here](http://cleancode.sourceforge.net/api/csharp/html/T_CleanCode_GeneralComponents_Controls_FileMask.htm).
* I then wrote an article featuring the FileMask control published on Simple-Talk.com, entitled [Using LINQ Lambda Expressions to Design Customizable Generic Components](http://www.simple-talk.com/dotnet/.net-framework/using-linq-lambda-expressions-to-design-customizable-generic-components/). (While the method itself does not use LINQ, the FileMask user control does, hence the title of the article.) | How to determine if a File Matches a File Mask? | [
"",
"c#",
".net",
"regex",
""
] |
POCO = Plain Old CLR (or better: Class) Object
DTO = Data Transfer Object
In this [post](http://rlacovara.blogspot.com/2009/03/what-is-difference-between-dto-and-poco.html) there is a difference, but frankly most of the blogs I read describe POCO in the way DTO is defined: DTOs are simple data containers used for moving data between the layers of an application.
Are POCO and DTO the same thing? | A POCO follows the rules of OOP. It should (but doesn't have to) have state *and* behavior. POCO comes from POJO, coined by Martin Fowler [[anecdote here](http://www.martinfowler.com/bliki/POJO.html)]. He used the term POJO as a way to make it more sexy to reject the framework heavy EJB implementations. POCO should be used in the same context in .Net. Don't let frameworks dictate your object's design.
A DTO's only purpose is to transfer state, and should have no behavior. See Martin Fowler's [explanation of a DTO](http://martinfowler.com/eaaCatalog/dataTransferObject.html) for an example of the use of this pattern.
Here's the difference: **POCO describes an approach to programming** (good old fashioned object oriented programming), where **DTO is a pattern** that is used to "transfer data" using objects.
While you can treat POCOs like DTOs, you run the risk of creating an [anemic domain model](http://www.martinfowler.com/bliki/AnemicDomainModel.html) if you do so. Additionally, there's a mismatch in structure, since DTOs should be designed to transfer data, not to represent the true structure of the business domain. The result of this is that DTOs tend to be more flat than your actual domain.
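To make the distinction concrete, here is a minimal sketch (in Python purely for brevity; the class names are invented): the domain object carries behaviour, while the DTO is a flat, behaviour-free container.

```python
from dataclasses import dataclass

class Account:
    """Domain object in the POCO spirit: state *and* behavior."""
    def __init__(self, owner: str, balance: float) -> None:
        self.owner = owner
        self.balance = balance

    def withdraw(self, amount: float) -> None:
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

@dataclass
class AccountDto:
    """DTO: flat state only, no behavior."""
    owner: str
    balance: float

def to_dto(account: Account) -> AccountDto:
    # The translation layer flattens the domain shape for transfer.
    return AccountDto(owner=account.owner, balance=account.balance)
```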
In a domain of any reasonable complexity, you're almost always better off creating separate domain POCOs and translating them to DTOs. DDD (domain driven design) defines the [anti-corruption layer](http://books.google.com/books?id=7dlaMs0SECsC&lpg=PA366&ots=ulHUZZRdr2&dq=anti%20corruption%20layer&pg=PA364#v=onepage&q&f=false) (another link [here](http://moffdub.wordpress.com/2008/09/21/anatomy-of-an-anti-corruption-layer-part-1/), but best thing to do is [buy the book](https://rads.stackoverflow.com/amzn/click/com/0321125215)), which is a good structure that makes the segregation clear. | It's probably redundant for me to contribute since I already stated my position in my blog article, but the final paragraph of that article kind of sums things up:
*So, in conclusion, learn to love the POCO, and make sure you don’t spread any misinformation about it being the same thing as a DTO. DTOs are simple data containers used for moving data between the layers of an application. POCOs are full fledged business objects with the one requirement that they are Persistence Ignorant (no get or save methods). Lastly, if you haven’t checked out Jimmy Nilsson’s book yet, pick it up from your local university stacks. It has examples in C# and it’s a great read.*
BTW, Patrick I read the POCO as a Lifestyle article, and I completely agree, that is a fantastic article. It's actually a section from the Jimmy Nilsson book that I recommended. I had no idea that it was available online. His book really is the best source of information I've found on POCO / DTO / Repository / and other DDD development practices. | Plain Old CLR Object vs Data Transfer Object | [
"",
"c#",
".net",
"poco",
"dto",
""
] |
I'm heavily geared towards C++ thinking and need some guidance on a specific C# matter. Let's assume we have the following class:
```
public class Foo
{
private IList<Bar> _bars = new List<Bar>(); // Note IList<> vs List<>.
public IList<Bar> Bars
{
get { return _bars; }
set
{
...
}
}
}
```
Now, in place of the `...`, I'm leaning towards clearing `_bars` and `AddRange` the items from the `set` argument `value`, instead of just assigning `value` to `_bars`. The way I see it, is that I want to keep referencing the same *items* that the `value` items references, not the actual `IList<Bar>` that `value` references.
Is that wrong thinking on my side? How do you think here?
Edit: After some comments, I realized I must add that I want to be able to use an existing collection of Bars in the Foo ctor and initialize \_bars from that collection. So, with that revision and the comments so far, this feels better:
```
public class Foo
{
private readonly List<Bar> _bars = new List<Bar>();
public Foo(IEnumerable<Bar> bars)
{
_bars.AddRange(bars);
}
public IList<Bar> Bars
{
get { return _bars; }
}
}
```
Is it better? | Absolutely, there is no need to expose a setter for the Bars property. Internally you hold a reference to the collection, which as Kent suggests could be marked as Readonly. Through the getter the caller can do what they want with the collection with the methods available (Add, Remove, Clear, AddRange, etc), but crucially they can never change the internal reference you hold to the collection object.
This then allows you to control what methods are allowed. As Jamie suggests, having the property return type `IEnumerable` would result in the Bars property exposing a readonly collection. Exposing `IList` means the contents of the collection could be modified. A setter on the property would leave it wide open for the caller to do what they want and you are no longer in control.
**Edit**
Following the question edit above. It really depends on how the `Foo` object will be used.
Since your main concern is to initialise `Foo` from an existing list of `Bar` objects...
```
IList<Bar> bars = ...some list of Bars previously constructed...
```
Your latest code example for `Foo` forces the caller to initialise via the constructor but then also allows them to change the collection via the property
```
Foo foo = new Foo(bars);
...
foo.Bars.Clear();
foo.Bars.Add(new Bar()); // note: IList<Bar> has no AddRange, but Add/Remove/Clear work
```
When allowing an object to be initialised via the constructor you need to ask yourself why you are doing this. Is it...
1. for the caller's convenience? To allow the calling code to supply values that can be subsequently changed via properties.
2. because you want to restrict how the object is used? Forcing values (or certain combinations of values) to be set on object construction and remain fixed throughout the lifetime of the object.
You need to ask yourself – Do you want the caller to be able to change the contents of the Bars collection after the `Foo` object has been constructed?
If No – Make the Bars property expose a read only collection.
If Yes – Add a default constructor to the `Foo` object so that the caller doesn’t have to supply a list to initialise it. But they will have the option to do so if they so choose via the overloaded constructor. | If we're talking about business (domain) objects, I expose `IEnumerable<T>` and whatever methods are appropriate for the collection, typically Add, Remove, and Clear. I almost never expose a setter for a collection. | Best practice when assigning a collection reference to a property | [
"",
"c#",
"properties",
""
] |
I am converting some code from another language to python. That code reads a rather large file into a string and then manipulates it by array indexing like:
```
str[i] = 'e'
```
This does not work directly in Python due to strings being immutable. What is the preferred way of doing this in Python?
I have seen the `string.replace()` function, but it returns a copy of the string which does not sound very optimal as the string in this case is an entire file. | ```
l = list(str)
l[i] = 'e'
str = ''.join(l)
``` | Assuming you're not using a variable-length text encoding such as UTF-8, you can use `array.array`:
```
>>> import array
>>> a = array.array('c', 'foo')
>>> a[1] = 'e'
>>> a
array('c', 'feo')
>>> a.tostring()
'feo'
```
But since you're dealing with the contents of a file, [`mmap`](http://docs.python.org/library/mmap.html) should be more efficient:
```
>>> f = open('foo', 'r+')
>>> import mmap
>>> m = mmap.mmap(f.fileno(), 0)
>>> m[:]
'foo\n'
>>> m[1] = 'e'
>>> m[:]
'feo\n'
>>> exit()
% cat foo
feo
```
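(Side note, not part of the original answer: the snippets above are Python 2 idioms; the `'c'` typecode and `tostring()` are gone in Python 3, where `bytearray` covers the same in-place mutation:)

```python
data = bytearray(b"foo")  # mutable byte buffer
data[1] = ord("e")        # index assignment works, unlike str
print(bytes(data))        # b'feo'
```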
Here's a quick benchmarking script (you'll need to replace dd with something else for non-Unix OSes):
```
import os, time, array, mmap
def modify(s):
for i in xrange(len(s)):
s[i] = 'q'
def measure(func):
start = time.time()
func(open('foo', 'r+'))
print func.func_name, time.time() - start
def do_split(f):
l = list(f.read())
modify(l)
return ''.join(l)
def do_array(f):
a = array.array('c', f.read())
modify(a)
return a.tostring()
def do_mmap(f):
m = mmap.mmap(f.fileno(), 0)
modify(m)
os.system('dd if=/dev/random of=foo bs=1m count=5')
measure(do_mmap)
measure(do_array)
measure(do_split)
```
Output I got on my several-year-old laptop matches my intuition:
```
5+0 records in
5+0 records out
5242880 bytes transferred in 0.710966 secs (7374304 bytes/sec)
do_mmap 1.00865888596
do_array 1.09792494774
do_split 1.20163106918
```
So mmap is slightly faster but none of the suggested solutions is particularly different. If you're seeing a huge difference, try using [cProfile](http://docs.python.org/library/profile.html#module-pstats) to see what's taking the time. | String manipulation in Python | [
"",
"python",
"string",
"replace",
""
] |
I have an array being returned but as I'm new to programming I can't work out how to read each value. Can someone please point me in the right direction?
My code is below.
```
private void testToolStripMenuItem_Click(object sender, EventArgs e)
{
foreach (int name in getNames()) //<----I'm wrong here I guess
{
MessageBox.Show(name.ToString());
}
}
private ArrayList getNames()
{
//some code...
..
...
return names;
}
```
Thanks all | Firstly, you're not returning an array, but rather an `ArrayList` instance. Secondly, why would a name be an `int`? Surely you mean `string`? Finally, if you're on .NET 2.0 or later, you can use generics to improve your code. So, use `List<string>` rather than `ArrayList`.
```
private void testToolStripMenuItem_Click(object sender, EventArgs e)
{
foreach (string name in GetNames())
{
MessageBox.Show(name);
}
}
private List<string> GetNames()
{
var names = new List<string>();
names.Add("Kent");
return names;
}
``` | The issue here is that getNames returns an arraylist *of objects*--not ints.
You could rewrite your code with this change:
```
foreach (object nameobj in getNames())
{
string name = (string)nameobj;
MessageBox.Show(name);
}
```
Alternatively, you could (and should) use generics to keep everything nice and tidy. That might look like this (untested) code:
```
private void testToolStripMenuItem_Click(object sender, EventArgs e)
{
foreach (string name in getNames())
{
MessageBox.Show(name);
}
}
private IList<string> getNames()
{
//some code...
List<string> names = new List<string>();
names.Add("Scott");
..
...
return names;
}
```
Add a using for `System.Collections.Generic`. Also, as others noticed, I changed it to `string` from `int`. | C# Reading each value of a returned array | [
"",
"c#",
"arrays",
""
] |
I want to implement an Observer of a Model class which does not change the Model. Thus, it should be able to use a const reference to access the Model. But the registering of the Observer prohibits this.
Here is how the observer pattern is implemented in my Project:
```
//Attributes of type Observable are used by classes that want to notify others
//of state changes. Observing Objects register themselves with AddObserver.
//The Observable Object calls NotifyObservers when necessary.
class Notifier
{
public:
void AddObserver(Observer*);
void RemoveObserver(Observer*);
void NotifyObservers();
};
class Model
{
public:
Notifier& GetNotifier() //Is non const because it needs to return a non-const
{ //reference to allow Observers to register themselves.
return m_Notifier;
}
int QueryState() const;
void ChangeModel(int newState)
{
m_Notifier.NotifyObservers();
}
private:
Notifier m_Notifier;
};
//This View does not Modify the Model.
class MyNonModifingView : public Observer
{
public:
void SetModel(Model* aModel) //should be const Model* aModel...
{
m_Model = aModel;
m_Model->GetNotifier().AddObserver(this); //...but can't because
//SetModel needs to call GetNotifier and add itself, which requires
//non-const AddObserver and GetNotifier methods.
}
void Update() //Part of Observer-Interface, called by Notifiers
{
m_Model->QueryState();
}
};
```
The only place where a non-modifying observer needs to "change" the Model is when it wants to register with it. I feel that I can not avoid a const\_cast here, but I wanted to know if there are better solutions.
Sidenote:
Put another way, I don't consider the "List of Observers" which a model object manages to be part of the state of the Model. C++ can not tell the difference and lumps state and observers together, forcing both to be const or non-const.
Cheers, Felix | If you consider the Notifier object not to be part of the Model object which owns it, so that modifying the Notifier doesn't "count" as modifying the Model, then make getNotifier a const method returning a non-const reference:
```
Notifier& GetNotifier() const //Is const but returns a non-const
{ //reference to allow Observers to
//register themselves.
return m_Notifier;
}
```
You will then have to either mark m\_Notifier as mutable, or else own it by pointer (or smart pointer) or reference rather than by inclusion. Either way, you avoid a const\_cast. It is usually preferable to embed objects rather than point/refer to them, but if this is a case where a Notifier isn't considered part of the Model which uses it, then embedding is not essential. Owning it by reference forces you to initialize the reference when Model is constructed, which leads to dependency injection, which is no bad thing. Owning by smart pointer means that, as with embedding, you don't have to do anything about cleanup.
There may be other ways to design things (such as Vinay's addition of another class), but your comment "Is non-const because it needs to return a non-const reference" suggests to me that you can do exactly what you originally wanted, you just don't realise you can. | I'm not clear from your code, but if you have a member that is logically const, but physically non-const, the usual solution is to make it **mutable**. | Const-correct Notifier in Observer Pattern | [
"",
"c++",
"constants",
"observer-pattern",
"const-correctness",
""
] |
I have few types that derive from simplified `Base` as shown below.
I am not sure whether to use base class's constructor or `this` constructor when overloading constructors.
`ConcreteA` overloads constructors purely using `base` constructors, while
`ConcreteB` overloads using `this` for the first two overloads.
What would be a better way of overloading constructors?
```
public abstract class Base
{
public string Name { get; set; }
public int? Age { get; set; }
protected Base() : this(string.Empty) {}
protected Base(string name) : this(name, null) {}
protected Base(string name, int? age)
{
Name = name;
Age = age;
}
}
public class ConcreteA : Base
{
public ConcreteA(){}
public ConcreteA(string name) : base(name) {}
public ConcreteA(string name, int? age) : base(name, age)
{
}
}
public class ConcreteB : Base
{
public ConcreteB() : this(string.Empty, null){}
public ConcreteB(string name): this(name, null){}
public ConcreteB(string name, int? age) : base(name, age)
{
}
}
```
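As a side observation (not part of the question): in languages without constructor overloading, the whole chain collapses into default arguments, which can make the intent of the chaining clearer. A Python sketch of the `Base` chain:

```python
from typing import Optional

class Base:
    # The three chained constructors collapse into one signature
    # with default values.
    def __init__(self, name: str = "", age: Optional[int] = None) -> None:
        self.name = name
        self.age = age

b = Base("Tom")
print(b.name, b.age)  # Tom None
```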
**[Edit]**
It looks like what Ian Quigley has suggested in his [answer](https://stackoverflow.com/questions/751575/overload-base-contructor-or-this-contructor/751590#751590) seemed to make sense.
If I were to have a call that initialize validators, `ConcreteA(string)` will never initialize validators in following case.
```
public class ConcreteA : Base
{
public ConcreteA(){}
public ConcreteA(string name) : base(name) {}
public ConcreteA(string name, int? age) : base(name, age)
{
InitializeValidators();
}
private void InitializeValidators() {}
}
``` | **This**. Because if you ever place code in ConcreteB(string, int?) then you want the string only constructor to call it. | In general, I'd call "this" rather than "base". You'll probably reuse more code that way, if you expand your classes later on. | Overload "base" constructor or "this" constructor? | [
"",
"c#",
".net",
"overloading",
"constructor-overloading",
""
] |
I've got a class with a bunch of [ColumnName("foo")] NHibernate attributes.
Is there an easy way to ask NHibernate to list all of the ColumnNames for a given class?
It sounds like it should be really easy but I'm just not seeing any kind of inspection in the NHibernate docs (or maybe I'm just blind today). | Use LINQ and reflection:
```
var columns = typeof(TheClass).GetProperties()
.Where(property => property.GetCustomAttributes(typeof(ColumnNameAttribute), false).Any())
.Select(property => property.Name);
``` | I had this same problem, but found IClassMetadata doesn't have any column information, just property types, names, identifier, and table information.
What worked for me:
```
PersistentClass persistentClass = cfg.GetClassMapping(typeof(MyEntity));
Property property = persistentClass.GetProperty(propertyName);
var columns = property.ColumnIterator; // <-- the column(s) for the property
``` | How to enumerate column names with NHibernate? | [
"",
"c#",
"nhibernate",
"inspection",
""
] |
This is the situation: I have one webservice without SSL, which provides two pages for the other web application. When the user submits these pages, an XML file with private information is sent to the webservice.
How can I provide the necessary privacy protection on the XML file? Is the one certificate good enough to give the appropriate security?
I'm not sure about this one, and am in the preparation phase of a project... So need to know the involved work on this part... | As an alternative to SSL you could encrypt the file yourself using any of the algorithms available in using System.Security.Cryptography but then you have to work out a mechanism to exchange your key(s).
However by far the easiest way will be to have both web services using SSL endpoints. That will take care of all your confidentiality, integrity and identity considerations in one fell swoop. | Certificates are tied to the hostname of the server (or, with wildcard certificates, all the hosts in a domain). So if the two services are on the same host, then both can use the same certificate.
If they are not on the same host there will be no transport security on the non-SSL service unless this is added separately. WCF has support for message (or part of message) encryption. | Communication between web applications, 1 SSL certificate, other has none | [
"",
"c#",
"asp.net",
"xml",
"ssl",
""
] |
I am parsing out some emails. Mobile Mail, iPhone and I assume iPod touch append a signature as a separate boundary, making it simple to remove. Not all mail clients do, and just use '--' as a signature delimiter.
I need to chop off the '--' from a string, but only the last occurrence of it.
Sample copy
```
hello, this is some email copy-- check this out
--
Tom Foolery
```
I thought about splitting on '--', removing the last part, and I would have it, but `explode()` and `split()` neither seem to return great values for letting me know if it did anything, in the event there is not a match.
I can not get `preg_replace()` to go across more than one line. I have standardized all line endings to `\n`.
What is the best suggestion to end up with `hello, this is some email copy-- check this out`, noting that there will be cases where there is no signature, and there are of course going to be cases where I cannot cover all the cases. | I think in the interest of being more bulletproof, I will take the non-regex route:
```
echo substr($body, 0, strrpos($body, "\n--")); // note: yields '' when "\n--" is absent, since strrpos() returns false
``` | Actually [correct signature delimiter](https://www.rfc-editor.org/rfc/rfc3676#section-4.3) is `"-- \n"` (note the space before newline), thus the delimiter regexp should be `'^-- $'`. Although you might consider using `'^--\s*$'`, so it'll work with OE, which gets it wrong. | Remove all characters starting from last occurrence of specific sequence of characters | [
"",
"php",
"regex",
"parsing",
"split",
"explode",
""
] |
I have some extra functionality I need to add, which includes adding a new property to an object and extending methods in another class that handles this object. I'm dealing with the source code of a product (in C# 2.0) we use which I really don't want to modify; I just want to extend it.
I'd ideally like to have a separate assembly to do this so that it's clear it's our code. It seems like partial classes and delegates may be the way to go, but I'm not sure if this is possible to do. Has anyone done something similar, or know of any good articles? | Check out the [Decorator](http://www.dofactory.com/Patterns/PatternDecorator.aspx#_self1) pattern. This is exactly what you are looking for. You can inherit the existing classes and extend them with any additional logic you have. There's also a [Proxy](http://www.dofactory.com/Patterns/PatternProxy.aspx) pattern if you want to hide some base functionality. | Like hfcs101 said.
See if you can extend the class given some existing OO design patterns.
Take a look at:
* [Adapter patten](http://www.fluffycat.com/Java-Design-Patterns/Adapter/)
* [Decorator pattern](http://www.fluffycat.com/Java-Design-Patterns/Decorator/)
* [Facade pattern](http://www.fluffycat.com/Java-Design-Patterns/Facade/)
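As a hedged illustration of how the Decorator pattern applies here (Python only for brevity; the class names are made up), wrap the vendor's class behind the same interface and put the new behavior in the wrapper:

```python
class Product:
    """Stands in for the vendor class you can't modify."""
    def describe(self) -> str:
        return "base product"

class AuditedProduct:
    """Decorator: same interface as Product, plus the new behavior."""
    def __init__(self, inner: Product) -> None:
        self._inner = inner
        self.audit_log = []  # the new property added by the wrapper

    def describe(self) -> str:
        self.audit_log.append("describe called")  # the extra functionality
        return self._inner.describe()             # forward to the original

p = AuditedProduct(Product())
print(p.describe())  # base product
```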
See if you can find a pattern that closely matches your problem. Decorator looks like the best candidate. | Extending c# code with separate assembly | [
"",
"c#",
".net",
"delegates",
"c#-2.0",
"partial-classes",
""
] |
I'm writing a client/server that will allow Customer Data to be shared between our head office and on-the-move sales folks within the company.
The server downloads and writes the customer data in XML files but also keeps the data in memory so that it can act as a local client as it were.
I'm planning to Serialize the ArrayList so that the customer data can be easily sent across the internet. How secure is this? Should I look into some form encryption before I transmit the Serialized object? | I wouldn't perform the the encryption as part of the Serialisation.
There are two issues here:
1. Putting the object in a form that can be transmitted, i.e. the serialisation.
2. Making sure the transmission is secure.
1 and 2 are separate problems and combining them into a single solution will only create problems for yourself in future.
I would use out of the box Serialisation and then use a secure transmission channel, like [TLS](http://en.wikipedia.org/wiki/Transport_Layer_Security). | My guess, look into HTTPS. | Secure Serialization | [
"",
"c#",
".net",
"serialization",
"tcp",
"rmi",
""
] |
I have a web application running in cluster mode with a load balancer.
It consists of two Tomcats (T1 and T2) addressing only one DB.
T2 is NFS-mounted to T1. This is the only difference between the nodes.
I have a Java method generating some files. If the request
runs on T1 there is no problem, but if the request runs on node 2
I get an exception as follows:
```
java.io.IOException: Invalid argument
at java.io.FileOutputStream.close0(Native Method)
at java.io.FileOutputStream.close(FileOutputStream.java:279)
```
The corresponding code is as follows:
```
for (int i = 0; i < dataFileList.size(); i++) {
outputFileName = outputFolder + fileNameList.get(i);
FileOutputStream fileOut = new FileOutputStream(outputFileName);
fileOut.write(dataFileList.get(i), 0, dataFileList.get(i).length);
fileOut.flush();
fileOut.close();
}
```
The exception appears at the fileOut.close()
Any hint?
Luis | Finally I found the reason.
First, I noticed that this exception does NOT always occur
at the same point.
Sometimes it was
```
java.io.IOException: Invalid argument
at java.io.FileOutputStream.close0(Native Method)
at java.io.FileOutputStream.close(FileOutputStream.java:279)
```
and sometimes it was
```
java.io.IOException: Invalid argument
at java.io.FileOutputStream.writeBytes(Native Method)
at java.io.FileOutputStream.write(FileOutputStream.java:260)
```
Therefore the problem is NOT a Java problem, and not even an NFS problem.
The problem **is** the underlying file system type, which is a DRBD
file system.
Testing at a shell, writing across the nodes works if one is writing a small
file. I.e., at the NFS-mounted node:
```
cd /tmp
date > /shared/path-to-some-not-mounted-dir/today
will work
```
but
```
cat myBigFile > /shared/path-to-some-not-mounted-dir/today
```
will deliver the following error
```
cat: write error: Invalid argument
```
Therefore the solution is to use another type of file system, GFS for example. | Setting this line in the `.profile` resolved the issue:
```
ulimit -n 2048
``` | java.io.IOException: Invalid argument | [
"",
"java",
"cluster-computing",
"ioexception",
""
] |
I'm working on a small web form that requires the user to input (among other things), the scheduled backup time of whatever server they're adding to the system. The problem is, I'm struggling to find out the best way to take the user input and build a DateTime object (which is what the database requires).
I only really care about the Day of Week, Time of Day (12 or 24 hour clock).
I thought about just creating an empty DateTime object and then just adding my input values from the user, but you can only get, not set, the day of week, time of day, etc.
I've been looking at the Calendar ASP.NET control, which would work for the day-of-week selection, but I can't seem to find any support for time of day.
Thanks. | I don't think you want to use a DateTime for a recurring event such as a backup. A DateTime is useful for storing a particular date and time, but not a "template" for a recurring event. Instead I'd use separate columns to store the day-of-week value (0-6) and time of day (minutes after midnight) for the event. | If you're going to use a datepicker, [here](http://sowkot.blogspot.com/2008/08/jquery-is-excellent-javascript-library.html) is one great sample for adding the jQuery date picker using C#. That helped me include it in my project even though I didn't know anything about jQuery and JavaScript at all. | Building a Datetime Object for SQL database insert | [
"",
"c#",
"asp.net",
""
] |
I have seen a lot of C# programs that use the `[]`, for example `[STAThread]` and then the code follows. Another classic example is `[DLLImport]`.
I know what `STAThread` means but my question is what is the significance of the square brackets, essentially what do they tell the compiler? | It's an attribute. Attributes are a form of metadata that you can attach to various code elements: classes, methods, assemblies etc.
Some attributes have special meaning to the C# compiler, for instance `[Serializable]` probably tells the compiler to emit some code that can serialize an instance of the class (I say 'probably' since I do not know the inner workings of the C# compiler).
You can also create your own attributes (by inheriting `System.Attribute`). Using reflection you could then at run-time extract information from the attributes.
A simple example would be to create an attribute to specify what kind of input field to use in a HTML form when displaying an object's property.
Some links:
* [Book chapter on attributes](http://oreilly.com/catalog/progcsharp/chapter/ch18.html)
* [Attributes overview (MSDN)](http://msdn.microsoft.com/en-us/library/xtwkdas5.aspx)
* <https://stackoverflow.com/search?q=C%23+attributes>
* <http://www.google.com/search?q=C%23+attributes> | These are [attributes](http://msdn.microsoft.com/en-us/library/z0w1kczw.aspx).
Attributes have many uses - `[Obsolete]` marks a method as obsolete and the compiler will warn you. Others like `[DebuggerNonUserCode]` tell nothing to the compiler and are there to let the debugger know that the code in the marked method is auto-generated.
You can also create your own attributes and use them to mark any kind of metadata. For example, your Customer object might have an attribute `[MarketingInformation("Customer is rich! Milk him good!")].` | Meaning of text between square brackets | [
"",
"c#",
"attributes",
""
] |
I've been using Java almost since it first came out but have over the last five years gotten burnt out with how complex it's become to get even the simplest things done. I'm starting to learn Ruby at the recommendation of my psychiatrist, uh, I mean my coworkers (younger, cooler coworkers - they use Macs!). Anyway, one of the things they keep repeating is that Ruby is a "flexible" language compared to older, more beaten-up languages like Java but I really have no idea what that means. Could someone explain what makes one language "more flexible" than another? Please. I kind of get the point about dynamic typing and can see how that could be of benefit for conciseness. And the Ruby syntax is, well, beautiful. What else? Is dynamic typing the main reason? | Dynamic typing doesn't come close to covering it. For one big example, Ruby makes metaprogramming easy in a lot of cases. In Java, metaprogramming is either painful or impossible.
For example, take Ruby's normal way of declaring properties:
```
class SoftDrink
attr_accessor :name, :sugar_content
end
# Now we can do...
can = SoftDrink.new
can.name = 'Coke' # Not a direct ivar access — calls can.name=('Coke')
can.sugar_content = 9001 # Ditto
```
This isn't some special language syntax — it's a method on the Module class, and it's easy to implement. Here's a sample implementation of `attr_accessor`:
```
class Module
def attr_accessor(*symbols)
symbols.each do |symbol|
define_method(symbol) {instance_variable_get "@#{symbol}"}
      define_method("#{symbol}=") {|val| instance_variable_set("@#{symbol}", val)}
end
end
end
```
This kind of functionality allows you a lot of, yes, flexibility in how you express your programs.
A lot of what seem like language features (and which would be language features in most languages) are just normal methods in Ruby. For another example, here we dynamically load dependencies whose names we store in an array:
```
dependencies = %w(yaml haml hpricot sinatra couchfoo)
block_list = %w(couchfoo)  # Wait, we don't really want CouchDB!
dependencies.each {|mod| require mod unless block_list.include? mod}
``` | It's also because it's a classless (in the Java sense) but totally object oriented (properties pattern) so you can call any method, even if not defined, and you still get a last chance to dynamically respond to the call, for example creating methods as necessarry on the fly. Also Ruby doesn't need compilation so you can update a running application easily if you wanted to. Also an object can suddenly inherit from another class/object at anytime during it's lifetime through mixins so it's another point of flexibility. Anyways I agree with the kids that this language called Ruby , which has actually been around as long as Java, is very flexible and great in many ways, but I still haven't been able to agree it's beatiful (syntax wise), C is more beatiful IMHO (I'm a sucker for brackets), but beauty is subjective, the other qualities of Ruby are objective | Besides dynamic typing, what makes Ruby "more flexible" than Java? | [
"",
"java",
"ruby",
"duck-typing",
"dynamic-languages",
""
] |
How can I add external library into a project built by Qt Creator RC1 (version 0.9.2)? For example, the win32 function `EnumProcesses()` requires `Psapi.lib` to be added in the project to build. | The proper way to do this is like this:
```
LIBS += -L/path/to -lpsapi
```
This way it will work on all platforms supported by Qt. The idea is that you have to separate the directory from the library name (without the extension and without any 'lib' prefix). Of course, if you are including a Windows specific lib, this really doesn't matter.
In case you want to store your lib files in the project directory, you can reference them with the `$$_PRO_FILE_PWD_` variable, e.g.:
```
LIBS += -L"$$_PRO_FILE_PWD_/3rdparty/libs/" -lpsapi
``` | Are you using `qmake` projects? If so, you can add an external library using the [`LIBS`](https://doc.qt.io/qt-5/qmake-variable-reference.html#libs) variable. E.g:
```
win32:LIBS += path/to/Psapi.lib
``` | Adding external library into Qt Creator project | [
"",
"c++",
"winapi",
"qt",
"qt-creator",
""
] |
I have a control in which we show some links to different sites based on some business rules. Currently all the business logic to build the link list is in the control.
I plan to move the business logic out of the control.
What would be a good design for this?
can I use any design pattern? | You shouldn't get too caught up in thinking about patterns. Most of the time they are overkill and add too much complexity. Particularly with a trivial scenario like this.
Just utilize good object-oriented practices and you'll be fine. Encapsulate your business logic in another class and provide public properties for your control to access it. Keep it simple! | How about the [Model-View-Presenter](http://en.wikipedia.org/wiki/Model_View_Presenter) pattern?
Another good choice might be the [Mediator pattern](http://en.wikipedia.org/wiki/Mediator_pattern). | Design Question | [
"",
"c#",
"asp.net",
""
] |
I have an URL:
> ```
> /Account.aspx/Confirm/34a1418b-4ff3-4237-9c0b-9d0235909d76
> ```
and a form:
```
<% using (Html.BeginForm())
{ %>
<fieldset>
<p>
<label for="password" class="instructions">
Contraseña:</label>
<%= Html.Password("password") %>
<%= Html.ValidationMessage("password", "*") %>
</p>
<p>
<input type="submit" value="Validar" />
</p>
</fieldset>
<% } %>
```
In the controller's action:
```
[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Confirm(string id, string password)
{
//code
}
```
I want to obtain the value of the GUID in the URL (the part after Confirm) and the value of the input password.
How can I do that?
**EDIT:**
I have registered this routes:
```
public static void RegisterRoutes(RouteCollection routes)
{
routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
routes.IgnoreRoute("Error.aspx/{*pathInfo}");
routes.IgnoreRoute("Admin/{*pathInfo}");
routes.MapRoute(
"Default",
"{controller}.aspx/{action}/{id}",
new { controller = "Home", action = "Index", id = "" }
);
routes.MapRoute("Root", "",
new { controller = "Home", action = "Index", id = "" });
}
```
I don't think anything is odd, but in the id parameter of the Confirm action I'm getting an empty string. | Isn't this because you are POSTing your form, which doesn't have the ID inside the form?
What you could do is on your Form page set the Model as the ID passed in then assign a hidden input value to this ID...
```
<%@ Page Language="C#" MasterPageFile="~/Views/Shared/Site.Master"
Inherits="System.Web.Mvc.ViewPage<string>" %>
```
```
public ActionResult Confirm(string id)
{
    return View((object)id); // cast needed: a bare string would pick the view-name overload
}
```
Now the ViewData.Model will contain your ID. Put this inside your form.
```
<input type="hidden" id="id" name="id" value="<%= ViewData.Model %>" />
```
Then when your form is submitted it will pass the ID through.
```
[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Confirm(string id, string password)
{
//...now you have access to ID and password
}
``` | In your Global.asax.cs, define a route for Confirm that specifies the last part of the URL is a parameter.
Eg:
```
"Account/Confirm/{id}"
```
The MVC framework will automatically parse the URL and return the GUID in the id parameter | How to get query string's value in a form post | [
"",
"c#",
"asp.net-mvc",
""
] |
I am compiling classes at run-time using the `CodeDomProvider` class. This works fine for classes only using the `System` namespace:
```
using System;
public class Test
{
public String HelloWorld()
{
return "Hello World!";
}
}
```
If I try to compile a class using `System.Web.UI.WebControls` though, I get this error:
> {error CS0006: Metadata file 'System.Web.UI.WebControls' could not be found}
> System.CodeDom.Compiler.CompilerError
Here's a snippet of my code:
```
var cp = new CompilerParameters();
cp.ReferencedAssemblies.Add("System.Web.UI.WebControls");
```
How do I reference the `System.Web.UI.WebControls` namespace? | You reference assemblies, not namespaces. You should use MSDN to look up the name of the assembly that contains the classes you need to use: in this case it's going to be:
```
var cp = new CompilerParameters();
cp.ReferencedAssemblies.Add("System.Web.dll");
``` | You can loop through all the currently loaded assemblies:
```
var assemblies = AppDomain.CurrentDomain
.GetAssemblies()
.Where(a => !a.IsDynamic)
.Select(a => a.Location);
cp.ReferencedAssemblies.AddRange(assemblies.ToArray());
``` | CompilerParameters.ReferencedAssemblies -- Add reference to System.Web.UI.WebControls | [
"",
"c#",
"compiler-construction",
"assemblies",
""
] |
I was wondering what the best or most widely used apis are to convert a java object to xml. I'm fairly new on the subject. Is there some sort of api call you can make to pass in an object and return xml? Or is it much more tedious where as you need to construct the document manually by pulling out object values?
I have been reading about xerces, sax, and jaxb. I would like to continue along this open source route.
Thanks! | **JAXB** is definitely **the** solution.
Why? Well, it's **inside the JDK 6**, so you'll never find it unmaintained.
It uses Java annotations to declare XML-related properties for classes, methods and fields.
[**Tutorial 1**](http://java.sun.com/webservices/docs/1.6/tutorial/doc/JAXBWorks.html)
[**Tutorial 2**](http://jaxb.java.net/guide/)
Note: JAXB also enables you to easily 'unmarshal' XML data
(which was previously marshalled from Java object instances) back
to object instances.
One more great thing about JAXB is: It is supported by other Java-related
technologies, such as **JAX-RS** (a Java RESTful API, which is available
as part of **Java EE 6**). JAX-RS can serve and receive JAXB
objects **on the fly**, without the need of marshalling/unmarshalling them.
You might want to check out [Netbeans](http://netbeans.org), which contains
out-of-the-box support for JAX-RS. Read [this tutorial](http://www.netbeans.org/kb/60/websvc/rest.html) for getting started.
**edit:**
To marshall/unmarshall 'random' (or foreign) Java objects, JAXB
offers a fairly simple possibility: one can declare an **XmlAdapter**
and 'wrap' existing Java classes to be JAXB-compatible.
Usage of such XmlAdapter is done by using the **@XmlJavaTypeAdapter**-annotation. | You might want to look at XStream: <http://x-stream.github.io> | What is the best way to convert a java object to xml with open source apis | [
"",
"java",
"xml",
"serialization",
""
] |
I have an xsd that looks like this (snippet):
```
<xs:complexType name="IDType">
<xs:choice minOccurs="1" maxOccurs="2">
<xs:element name="FileID" minOccurs="0" maxOccurs="1" type="an..35" />
<xs:element name="IDNumber1" minOccurs="0" maxOccurs="1" type="an..35" />
<xs:element name="Number" minOccurs="0" maxOccurs="1" type="an..35" />
<xs:element name="PNumber" minOccurs="0" maxOccurs="1" type="an..35" />
<xs:element name="SS" minOccurs="0" maxOccurs="1" type="an..35" />
<xs:element name="Player" minOccurs="0" maxOccurs="1" type="an..35" />
<xs:element name="Prior" minOccurs="0" maxOccurs="1" type="an..35" />
<xs:element name="BIN" minOccurs="0" maxOccurs="1" type="an..35" />
<xs:element name="Mutual" minOccurs="0" maxOccurs="1" type="an..35" />
</xs:choice>
</xs:complexType>
<xs:simpleType name="an..35">
<xs:restriction base="an">
<xs:maxLength value="35" />
</xs:restriction>
</xs:simpleType>
<xs:simpleType name="an">
<xs:restriction base="xs:string">
<xs:pattern value="[ !-~]*" />
</xs:restriction>
</xs:simpleType>
```
For some reason this is the Java code that gets generated:
```
@XmlAccessorType(XmlAccessType.FIELD)
@XmlType(name = "IDType", propOrder = {
"fileID"
})
public class PatientIDType {
@XmlElementRefs({
        @XmlElementRef(name = "FileID", namespace = "http://www.surescripts.com/messaging", type = JAXBElement.class),
@XmlElementRef(name = "IDNumber1", namespace = "http://www.surescripts.com/messaging", type = JAXBElement.class),
@XmlElementRef(name = "Number", namespace = "http://www.surescripts.com/messaging", type = JAXBElement.class),
@XmlElementRef(name = "PNumber", namespace = "http://www.surescripts.com/messaging", type = JAXBElement.class),
@XmlElementRef(name = "SS", namespace = "http://www.surescripts.com/messaging", type = JAXBElement.class),
        @XmlElementRef(name = "Player", namespace = "http://www.surescripts.com/messaging", type = JAXBElement.class),
@XmlElementRef(name = "Prior", namespace = "http://www.surescripts.com/messaging", type = JAXBElement.class),
@XmlElementRef(name = "BIN", namespace = "http://www.surescripts.com/messaging", type = JAXBElement.class),
@XmlElementRef(name = "Mutual", namespace = "http://www.surescripts.com/messaging", type = JAXBElement.class)
})
protected List<JAXBElement<String>> fileID;
/**
* Gets the value of the fileID property.
*
* <p>
* This accessor method returns a reference to the live list,
* not a snapshot. Therefore any modification you make to the
* returned list will be present inside the JAXB object.
* This is why there is not a <CODE>set</CODE> method for the fileID property.
*
* <p>
* For example, to add a new item, do as follows:
* <pre>
* getFileID().add(newItem);
* </pre>
*
*
* <p>
* Objects of the following type(s) are allowed in the list
* {@link JAXBElement }{@code <}{@link String }{@code >}
* {@link JAXBElement }{@code <}{@link String }{@code >}
* {@link JAXBElement }{@code <}{@link String }{@code >}
* {@link JAXBElement }{@code <}{@link String }{@code >}
* {@link JAXBElement }{@code <}{@link String }{@code >}
* {@link JAXBElement }{@code <}{@link String }{@code >}
* {@link JAXBElement }{@code <}{@link String }{@code >}
* {@link JAXBElement }{@code <}{@link String }{@code >}
* {@link JAXBElement }{@code <}{@link String }{@code >}
*/
public List<JAXBElement<String>> getFileID() {
if (fileID == null) {
fileID = new ArrayList<JAXBElement<String>>();
}
return this.fileID;
}
```
Why is the class generated like this and simply not some kind of string array? I really don't want to have to create JAXBElements every time I want to create something.
How can I have it generate classes for each of the types that simply represent string or something like that?
Thanks in advance,
Ian | This code is generated because your complex type, IDType, contains a choice with a maxOccurrence greater than one, here:
`<xs:choice minOccurs="1" maxOccurs="2">`
The contents of this list are elements with different *names* but the same *type*. This has no equivalent in standard object-oriented models. JAXB then uses the `JAXBElement` class to work around this: a `JAXBElement` wraps a simple object that contains the data and assigns it a `QName`.
Thus you can read from this list and write to the list unambiguously by providing:
* The data object (in your case a String since all restrictions from String are represented as a Java String)
* A `JAXBElement` whose `getValue()` method returns the string
The JAXB specification contains fairly detailed and reasonably complex explanations of how to deal with repeating choices and repeating sequences. Note that if your sequence contained objects of different types as well as different names, you would end up with `List<Object>`.
That was the long explanation, now here are some options:
* If you can modify the schema, put a wrapper element around the 8 items, say, "Wrapper". Wrapper will contain a single choice of the elements; Then make IDType contain a sequence of Wrapper elements with minOccurs = 1 and maxOccurs = 2.
* Create yourself some helper functions to quickly create the JAXBElements. JAXB puts a Factory class in your target package that can help you - for example, it contains references to the Schema namespace, etc. | Your ObjectFactory class should have the create methods for those values, something like
```
@XmlElementDecl(namespace = "http://www.surescripts.com/messaging", name = "SocialSecurity", scope = PatientIDType.class)
public JAXBElement<String> createPatientIDTypeSocialSecurity(String value) {
return new JAXBElement<String>(_PayerIDTypeSocialSecurity_QNAME, String.class, PatientIDType.class, value);
}
``` | JAXB Types problem | [
"",
"java",
"xml",
"code-generation",
"jaxb",
""
] |
Is it possible to access & control external devices in PHP?
For example, would it be capable of changing the speed of a USB fan or change the direction of a wireless toy car.
I'm trying to go beyond web dev and I'm wondering if the programming language I use is capable of handling my ideas or should I consider changing the environment.
If these would be possible, I'd very much appreciate any pointers to reading materials or suggestions on other languages that might be more suitable.
Thanks! | On Linux it surely is possible by accessing /dev/ files. But it'll be very tedious. I'd recommend switching to Python, Ruby, Lua or Java.
For example there are bindings for [libusb](http://libusb.wiki.sourceforge.net/) for [Python](http://pyusb.berlios.de/), [Ruby](http://www.a-k-r.org/ruby-usb/), [Lua](http://luaforge.net/projects/lualibusb/) and [Java](http://libusbjava.sourceforge.net/wp/res/doc/ch/ntb/usb/LibusbJava.html). | You could write an external program, then use PHP's exec (or was it system?) function to interact with the executable or script.
Seems like the most sane way to do it. Another good alternative is to build a program or script that controls an external device that can communicate with a RESTfull type API exposed via HTTP - and then use lib\_curl in PHP land to talk back and forth between it. Believe me, building a basic HTTP server in C++ that can be used to be remote controlled with PHP (or JS for that matter) is very simple.
**Wait**
I think I read the question wrong ;)
If you want to get into really cool stuff, I say that you learn C++. C++ is a great language that not only opens a lot of doors, but also provides a good learning experience. C++ is lots and lots of fun.
**In response to comment**
In the case of USB it's a bit different and more complicated (as USB has an established protocol and such) but serial is as easy as dumping data into a handle.
You should be able to pick up C++ to get to that point fairly soon. Either way it's a great experience. | PHP complex uses - Interact with external devices | [
"",
"php",
""
] |
What is the most efficient way to solve this problem:
I've traversed a XML file and created the following set of linked (String) lists:
* a > b > c
* a > b > d
* a > f > [i]
and now I'm trying to rebuild the XML into its original structure:
```
<a>
<b>
<c/><d/>
</b>
<f>i</f>
</a>
```
Any help would really be appreciated! | You probably don't want to use Lists as a data structure for this. You might be better off creating a `Node` type or something similar, which can contain text and child nodes, so that you can store the data in a tree / hierarchy of nodes. Something simple like this should do the trick:
```
import java.util.ArrayList;
import java.util.List;

public class Node {
private String text;
private List<Node> children = new ArrayList<Node>();
public String getText() {
return text;
}
public void setText(String text) {
this.text = text;
}
public List<Node> getChildren() {
return children;
}
}
```
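To make the round trip concrete, here is a toy sketch of building the tree from the linked lists in the question and writing it back out (in Python purely for brevity — a Java version using the `Node` class above transliterates directly; note this hypothetical serializer renders leaves as empty elements, so `i` comes out as `<i/>` rather than the text content `<f>i</f>`):

```python
class Node:
    def __init__(self, text):
        self.text = text
        self.children = []

    def child(self, text):
        # Reuse an existing child with the same text, or create a new one.
        for c in self.children:
            if c.text == text:
                return c
        node = Node(text)
        self.children.append(node)
        return node

    def to_xml(self):
        # Leaves become empty elements; inner nodes wrap their children.
        if not self.children:
            return f"<{self.text}/>"
        inner = "".join(c.to_xml() for c in self.children)
        return f"<{self.text}>{inner}</{self.text}>"

# The three linked lists from the question: a>b>c, a>b>d, a>f>i
paths = [["a", "b", "c"], ["a", "b", "d"], ["a", "f", "i"]]
root = Node(paths[0][0])
for path in paths:
    node = root
    for part in path[1:]:
        node = node.child(part)

print(root.to_xml())  # <a><b><c/><d/></b><f><i/></f></a>
```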
It should then be trivial to create a tree of these `Node`s when you read in the file, and to use the same structure to write it back out. | You'll need to store more information about the structure of the original XML. Those 3 lists don't have information on the order of child nodes, for instance. | Java: Traversed Tree to Tree | [
"",
"java",
"xml",
"parsing",
""
] |
Trying to do the following:
```
order.ExpirationDate =(DateTime) ( ExpMonth + "/" + ExpYear);
```
ExpMonth, Expyear are both ints. | This is going to be better for you:
```
order.ExpirationDate = new DateTime(ExpYear, ExpMonth, 1)
``` | You need to use:
```
DateTime.Parse(ExpMonth.ToString() + "/" + ExpYear.ToString());
``` | Cannot convert type 'string' to 'System.DateTime' | [
"",
"c#",
".net",
"datetime",
""
] |
When using a `Guid` as an index for a `Dictionary`, is it better to use the `Guid` object, or the string representation of the Guid?
I just refactored some code which was using string to use the object, because there were `new Guid()` calls all over the place. But that left me wondering what the performance issues might be. (The collections are fairly small, but they get iterated lots of times.) | The `Guid` should be quicker, as the comparison is simpler - just a few direct bytes. The string involves a dereference and lots more work.
Of course - you could profile ;-p
Evidence:
```
Searching for 7f9b349f-f36f-94de-ad96-04279ddf6ecf
As guid: 466; -1018643328
As string: 512; -1018643328
Searching for 870ba465-08f2-c872-cfc9-b3cc1ffa09de
As guid: 470; 1047183104
As string: 589; 1047183104
Searching for d2376f8a-b8c9-4633-ee8e-9679bb30f918
As guid: 423; 1841649088
As string: 493; 1841649088
Searching for 599889e8-d5fd-3618-4c4f-cb620e6f81bb
As guid: 488; -589561792
As string: 493; -589561792
Searching for fb64821e-c541-45f4-0fd6-1c772189dadf
As guid: 450; 1389733504
As string: 511; 1389733504
Searching for 798b9fe5-ba15-2753-357a-7637161ee48a
As guid: 415; 779298176
As string: 504; 779298176
Searching for 12ba292e-8e59-e5d0-7d04-e811a237dc21
As guid: 457; 558250944
As string: 564; 558250944
Searching for 05b3ce14-dfbf-4d3a-1503-ced515decb81
As guid: 413; 1658205056
As string: 504; 1658205056
Searching for 8db4a556-0a65-d8cb-4d0d-0104245d18b8
As guid: 415; 696231936
As string: 506; 696231936
Searching for c49cf80c-5537-fba5-eebd-8ad21bba09c4
As guid: 459; 2100976384
As string: 557; 2100976384
```
based on:
```
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
static class Program
{
static void Main()
{
Random rand = new Random(123456);
int COUNT = 1000;
Dictionary<Guid, int> guids = new Dictionary<Guid, int>(COUNT);
Dictionary<string, int> strings = new Dictionary<string, int>(
COUNT, StringComparer.Ordinal);
byte[] buffer = new byte[16];
for (int i = 0; i < COUNT; i++)
{
rand.NextBytes(buffer);
Guid guid = new Guid(buffer);
int val = rand.Next();
guids.Add(guid, val);
strings.Add(guid.ToString(), val);
}
for(int i = 0 ; i < 10 ; i++) {
int index = rand.Next(COUNT);
Guid guid = guids.Keys.Skip(index).First();
Console.WriteLine("Searching for " + guid);
int chk = 0;
const int LOOP = 5000000;
Stopwatch watch = Stopwatch.StartNew();
for (int j = 0; j < LOOP; j++)
{
chk += guids[guid];
}
watch.Stop();
Console.WriteLine("As guid: " + watch.ElapsedMilliseconds
+ "; " + chk);
string key = guid.ToString();
chk = 0;
watch = Stopwatch.StartNew();
for (int j = 0; j < LOOP; j++)
{
chk += strings[key];
}
watch.Stop();
Console.WriteLine("As string: " + watch.ElapsedMilliseconds
+ "; " + chk);
}
Console.ReadLine();
}
}
``` | > The collections are fairly small, but they get iterated lots of times
If you are iterating, there are no key to key comparisons. If you are adding/modifying or looking up by key, then keys will be hashed and the hashes compared; only if the hashes are equal will the keys be compared.
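Python's `dict` happens to follow the same two-step protocol (hash first, equality only on a hash match), so the behaviour described above is easy to observe directly — a toy illustration of the mechanism, not .NET code:

```python
class Key:
    eq_calls = 0  # counts how often equality is actually checked

    def __init__(self, id):
        self.id = id

    def __hash__(self):
        return self.id  # distinct ids -> distinct hashes

    def __eq__(self, other):
        Key.eq_calls += 1
        return self.id == other.id

d = {Key(1): "x"}

Key.eq_calls = 0
_ = d[Key(1)]        # hash matches a stored entry -> one equality check
hit_calls = Key.eq_calls

Key.eq_calls = 0
found = Key(2) in d  # hash finds no candidate -> equality never runs
miss_calls = Key.eq_calls

print(hit_calls, miss_calls, found)  # 1 0 False
```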
Therefore, unless you are performing a lot of key-based operations on huge dictionaries with many hash collisions, the speed of key-to-key comparisons will not be a major factor. | Performance - using Guid object or Guid string as Key | [
"",
"c#",
"performance",
"guid",
""
] |
See Also: [How to enable a timer from a different thread/class](https://stackoverflow.com/questions/725735/how-to-enable-a-timer-from-a-different-thread-class)
The timer is assigned to a form and I'd like to enable it at a specific point, but from another class. *I don't want to make it public.*
This is the code I use to access memo
```
public string TextValue
{
set
{
if (this.Memo.InvokeRequired)
{
this.Invoke((MethodInvoker)delegate
{
this.Memo.Text += value + "\n";
});
}
else
{
this.Memo.Text += value + "\n";
}
}
}
public static void addtxt(string txt)
{
var form = Form.ActiveForm as Form1;
if(form != null)
form.TextValue = txt;
}
How would you like to enable the timer? What action is undertaken in order to enable it?
Is it possible to add an event to the class from which you want to enable the timer, and, on the form which contains the timer, subscribe to that event?
In the event-handler for that event, you can then enable the timer.
When the other class raises the event, the event handler will enable the timer.
```
public class SomeOtherClassThatDoesStuff
{
public event EventHandler SomethingHappened;
public void DoStuff()
{
...
if( SomethingHappened != null )
            SomethingHappened(this, EventArgs.Empty);
...
}
}
public class Form1
{
private void Button1_Click(object sender, EventArgs e )
{
SomeOtherClassThatDoesStuff o = new SomeOtherClassThatDoesStuff();
o.SomethingHappened += new EventHandler(EnableTimer);
o.DoStuff();
}
private void EnableTimer(object sender, EventArgs e )
{
myTimer.Enabled = true;
}
}
```
Something like this. (I haven't tested, nor did I even compile it, but I think you'll catch the drift :) ). | If you don't want to expose the timer itself, expose a `public` method or property that you can call to enable the timer. Obviously the `Form` that enables the Timer will need a reference to the `Form` that owns the `Timer`. | How to access a timer from another class in C# | [
"",
"c#",
"winforms",
"timer",
""
] |
Firstly, is there a way to use `document.write()` inside of JQuery's `$(document).ready()` method? If there is, please clue me in because that will resolve my issue.
Otherwise, I have someone's code that I'm supposed to make work with mine. The catch is that I am not allowed to alter his code in any way. The part that doesn't work looks something like this:
```
document.write('<script src=\"http://myurl.com/page.aspx?id=1\"></script>');
```
The script tag is referencing an aspx page that does a series of tests and then spits out something like so:
```
document.write('<img src=\"/image/1.jpg\" alt=\"Second image for id 1\">')
```
The scripts are just examples of what is actually going on. The problem here is that I've got a `document.write()` in the initial script and a `document.write()` in the script that gets appended to the first script, and I've got to somehow make this work within JQuery's `$(document).ready()` function, without changing his code.
I have no idea what to do. Help? | **With the requirements given**, no, you can't use document.write without really hosing up the document. If you're really bent on not changing the code, you can override the functionality of document.write() like so and tack on the result later:
```
var phbRequirement = "";
$(function() {
document.write = function(evil) {
phbRequirement += evil;
}
document.write("Haha, you can't change my code!");
$('body').append(phbRequirement);
});
```
Make sure you overwrite the document.write function before it is used. You can do it at any time.
The other answers are boring; this is fun, but pretty much doing it the wrong way for the sake of fulfilling the requirements given. | picardo has the approach I would've used. To expand on the concept, take a read:
```
$('<script/>')
.attr('src', 'http://myurl.com/page.aspx?id=1')
.appendTo('body');
```
Alternate style:
```
var imgnode = $('<img alt="Second image for id 1"/>')
.attr('src', "image1.jpg");
$('#id1').append(imgnode);
```
Be sure to use the `attr` method to set any dynamic attributes. No need to escape special symbols that way.
Also, I'm not sure about the effectiveness of dynamically generating script tags; I've never tried it. Though it's expected that they contain or reference client-side script; my assumption is that's what `page.aspx` will return. Your question is a little vague about what you're trying to do there. | JQuery $(document).ready() and document.write() | [
"",
"javascript",
"jquery",
""
] |
What is the closest thing to groovy/java combo in the C# .net world?
If I am writing an app with static and dynamic parts, what is the Groovy-like language for the dynamic part on the .NET runtime?
There are other [CLI](http://en.wikipedia.org/wiki/List_of_CLI_Languages) languages too. | Iron Python or any other language targeting [DLR](http://en.wikipedia.org/wiki/Dynamic_Language_Runtime).. | Is there an equivalent to Groovy in C#? | [
"",
"c#",
".net",
"dynamic",
"groovy",
""
] |
I have the following function:
```
def my_func():
"""My docstring is both funny and informative"""
pass
```
How do I get access to the docstring? | Interactively, you can display it with
```
help(my_func)
```
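For example, with the function from the question, a quick self-contained check looks like this:

```python
def my_func():
    """My docstring is both funny and informative"""
    pass

help(my_func)  # prints the function signature followed by the docstring
```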
Or from code you can retrieve it with (surround it with `print(.)` to get a formatted output):
```
my_func.__doc__
``` | You can also use [`inspect.getdoc`](http://docs.python.org/library/inspect.html#inspect.getdoc). It cleans up the `__doc__` by normalizing tabs to spaces and left shifting the doc body to remove common leading spaces. | Getting the docstring from a function | [
"",
"python",
""
] |
After preparing an answer for [this question](https://stackoverflow.com/questions/761044/sql-query-help) I found I couldn't verify my answer.
In my first programming job I was told that a query within the `IN ()` predicate gets executed for every row contained in the parent query, and therefore using `IN` should be avoided.
For example, given the query:
```
SELECT count(*) FROM Table1 WHERE Table1Id NOT IN (
SELECT Table1Id FROM Table2 WHERE id_user = 1)
```
```
Table1 Rows | # of "IN" executions
----------------------------------
10 | 10
100 | 100
1000 | 1000
10000 | 10000
```
Is this correct? How does the `IN` predicate actually work? | The warning you got about subqueries executing for each row is true -- for **correlated subqueries.**
```
SELECT COUNT(*) FROM Table1 a
WHERE a.Table1id NOT IN (
SELECT b.Table1Id FROM Table2 b WHERE b.id_user = a.id_user
);
```
Note that the subquery references the `id_user` column of the outer query. The value of `id_user` on each row of `Table1` may be different. So the subquery's result will likely be different, depending on the current row in the outer query. The RDBMS *must* execute the subquery many times, once for each row in the outer query.
The example you tested is a **non-correlated subquery**. Most modern RDBMS optimizers worth their salt should be able to tell when the subquery's result *doesn't* depend on the values in each row of the outer query. In that case, the RDBMS runs the subquery a single time, caches its result, and uses it repeatedly for the predicate in the outer query.
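The behaviour is easy to observe from code. Here's a sketch using Python's built-in sqlite3 with the tables from the question (the row values are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Table1 (Table1Id INTEGER)")
cur.execute("CREATE TABLE Table2 (Table1Id INTEGER, id_user INTEGER)")
cur.executemany("INSERT INTO Table1 VALUES (?)", [(1,), (2,), (3,)])
cur.executemany("INSERT INTO Table2 VALUES (?, ?)", [(1, 1), (3, 2)])

# Non-correlated: the subquery doesn't mention the outer row, so the
# engine can evaluate it once, cache the result, and reuse it.
cur.execute(
    "SELECT count(*) FROM Table1 WHERE Table1Id NOT IN "
    "(SELECT Table1Id FROM Table2 WHERE id_user = 1)"
)
print(cur.fetchone()[0])  # -> 2 (rows 2 and 3 are not referenced by user 1)
```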
PS: In SQL, `IN()` is called a "predicate," not a statement. A predicate is a part of the language that evaluates to either true or false, but cannot necessarily be executed independently as a statement. That is, you can't just run this as an SQL query: "2 IN (1,2,3);" Although this is a valid predicate, it's not a valid statement. | It will entirely depend on the database you're using, and the exact query.
Query optimisers are very smart at times - in your sample query, I'd expect the better databases to be able to use the same sort of techniques that they do with a join. More naive databases may just execute the same query many times. | How does the IN predicate work in SQL? | [
"",
"sql",
"performance",
"optimization",
""
] |
Ok, someone tell me which would be better. I need to |= the elements of one vector with another. That is, I want to
```
void orTogether(vector<char>& v1, const vector<char>& v2)
{
typedef vector<char>::iterator iter;
typedef vector<char>::const_iterator citer;
iter i = v1.begin();
citer j = v2.begin();
for ( ; i != v1.end(); ++i, ++j)
*i |= *j;
}
```
I can't use for\_each due to needing to process 2 collections. I suppose I could do something like
```
struct BitWiseOr
{
char operator()(const char& a, const char& b) {return a | b;}
};
void orTogether2(vector<char>& v1, const vector<char>& v2)
{
transform(v1.begin(), v1.end(), v2.begin(),
v1.begin(), BitWiseOr());
}
```
Is this a more efficient solution even though the top one is in place, but the bottom is an assign? This is right in the middle of a processing loop and I need the fastest code possible.
Edit: Added (obvious?) code for BitwiseOr. Also, I'm getting a lot of comments on non-related things like checking the lengths of v2 and changing the names. This is just an example, the real code is more complicated.
**Well, I profiled both. orTogether2 is much faster than orTogether, so I'll be going with the transform method.** I was surprised, orTogether2 was about 4 times faster in MSVC9 release mode. I ran it twice, changing the order the second time to make sure it wasn't some sort of cache issue, but same results. Thanks for the help everyone. | The bottom one will compile to effectively the same as the first, your OR functor is going to be inlined for sure. So the second idiom is more flexible if you ever need to add more flexibility or debugging frameworks or whatever.
Since there's no benefit to the first, use the transform method. Once you get into that habit you'll stop even considering the explicit loop choice for all your apps since it's unnecessary. The **only** advantage to the first method is it's easier to explain to beginner C++ programmers who are more comfortable with raw C. | Grab your watch and measure | should I use the algorithm or hand-code it in this case? | [
"",
"c++",
"algorithm",
"stl",
"transform",
""
] |
I need to close the tab which displays my webpage, by the click of a button. But Firefox does not allow closing a window via JavaScript as long as that window was not opened by JavaScript.
If I set the value of `dom.allow_scripts_to_close_windows` to be "true", then even normal `window.close()` works too. But that is not a good solution. :(
I tried the following workaround suggested in one of the forums:
```
<script language="javascript" type="text/javascript">
function closeWindow() {
window.open('','_parent','');
window.close();
}
</script>
```
It is supposed to fool the browser into thinking that it was actually opened by javascript, but this does not work in Firefox 3.
Can anyone please suggest a workaround? | ```
function closeWindow() {
netscape.security.PrivilegeManager.enablePrivilege("UniversalBrowserWrite");
alert("This will close the window");
window.open('','_self');
window.close();
}
closeWindow();
``` | For security reasons, your script cannot close a window/tab that it did not open.
The solution is to present the age prompt at an earlier point in the navigation history. Then, you can choose to allow them to enter your site or not based on their input.
Instead of closing the page that presents the prompt, you can simply say, "Sorry", or perhaps redirect the user to their homepage. | How can I close a window with Javascript on Mozilla Firefox 3? | [
"",
"javascript",
"firefox",
""
] |
I have a list which stores a lot of integers.
I don't like how the default List.Sort() works, as I want the list to be ordered by the size of the actual int.
So far I have this:
Oh, and the ints are stored in strings, e.g "1234". It is something I can not change.
```
public class IntComparer : IComparer<string>
{
public int Compare(string x, string y)
{
if (x == null)
{
if (y == null)
{
// If x is null and y is null, they're
// equal.
return 0;
}
else
{
// If x is null and y is not null, y
// is greater.
return -1;
}
}
else
{
// If x is not null...
//
if (y == null)
// ...and y is null, x is greater.
{
return 1;
}
else
{
// ...and y is not null, compare the
// numeric values of the two strings.
//
int xInt = Convert.ToInt32(x);
int yInt = Convert.ToInt32(y);
if (xInt > yInt)
{
// x's numeric value is greater.
//
return 1;
}
else if (xInt == yInt)
{
return 0;
}
else
{
// y's numeric value is greater.
//
return -1;
}
}
}
}
```
But to my knowledge, this is bubble-sort, correct?
What should I implement instead? Quicksort? Also, I might need help writing it.
Oh, and my list contains just short of 2 thousand elements, which store numbers as strings.
Also, I call my IComparer like this:
```
IntComparer intSort = new IntComparer();
myList.Sort(intSort); // where myList is the List<string> being sorted
``` | Assuming you want to sort by the value of the integer stored as a string, you can simply do something like this:
```
numbers.Sort((x,y) => Int32.Parse(x).CompareTo(Int32.Parse(y)));
``` | You should be aware that the comparer and the sort algorithm do not determine each other. So this comparer can be used with bubble-sort as well as with quicksort, heapsort or any other sort algorithm. The built-in sort algorithm of List.Sort is quicksort, according to MSDN. | Optimize a list<T>.Sort(Comparer) | [
"",
"c#",
"generics",
"sorting",
"collections",
""
] |
I'd like to know if it is possible to use an expression as a variable/parameter in C#. I would like to do something like this:
```
int x = 0;
public void g()
{
bool greaterThan = f("x>2");
bool lessThan = f("x<2");
}
public bool f(Expression expression)
{
if(expression)
return true;
else
return false;
}
```
Here's what I don't want to do:
```
int x = 0;
public void g()
{
bool greaterThan = f(x, '<', 2);
}
public bool f(int x, char c, int y)
{
if(c == '<')
return x < y;
if(c == '>')
return x > y;
}
```
Really what I'm getting at is a way to get around using a switch or series of if statements for each of: < > <= >= == !=. Is there a way to do it?
Edit: Suppose that the expression is a string, like "x < 2". Is there a way to go from the string to a predicate without using a series of if statements on the condition? | It's very possible, just not in the exact syntax you have.
```
int x = 0;
public void g()
{
bool greaterThan = f(i => i > 2, x);
bool lessThan = f(i => i < 2, x);
}
public bool f(Func<int,bool> expression, int value)
{
return expression(value);
}
```
Actually, this should be closer to what you want.
```
int x = 0;
public void g()
{
bool greaterThan = f(() => x > 2);
bool lessThan = f(() => x < 2);
}
public bool f(Func<bool> expression)
{
return expression();
}
```
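(If it helps to see the capture behaviour in isolation, the same idea in Python, purely as an illustration — the lambda closes over `x` just like the C# `Func<bool>` does:)

```python
x = 0

def f(expression):
    # expression is a zero-argument callable that captured x
    return expression()

greater_than = f(lambda: x > 2)
less_than = f(lambda: x < 2)
print(greater_than, less_than)  # -> False True
```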
---
Reply to Edit
If you want to be able to say `f("x < 2")`, it's going to be almost impossible. Ignoring parsing it (which could get nasty), you have to capture the value of x, but it's just a character to `f`, which makes it pretty much impossible. | If you really want to pass around code for this, you want a Predicate:
```
int x = 0;
public void g()
{
bool greaterThan = f(i => i>2, x);
bool lessThan = f(i => i<2, x);
}
public bool f(Predicate<int> expression, int value)
{
return expression(value);
}
```
Otherwise, if you just substitute `bool` for `Expression` in your first example your code would compile just fine:
```
int x = 0;
public void g()
{
bool greaterThan = f(x>2);
bool lessThan = f(x<2);
}
public bool f(bool expression)
{
if(expression)
return true;
else
return false;
}
``` | C#: Is there a way to use expressions as a variable/parameter? | [
"",
"c#",
""
] |
I have the following tables:
```
ALERT (ID,Name)
1 | Alert A
2 | Alert B
ALERT_BRAND_XREF (ALERT_ID, BRAND_ID)
1 | 1
1 | 2
2 | 1
BRAND (ID, NAME)
1 | Brand A
2 | Brand B
```
I am trying to write one statement to return a list of alerts with the applicable brands as a CSV list in one field. Desired results:
```
Alert A | Brand A, Brand B
Alert B | Brand A
```
Is there a way to do this without writing a separate function? I would like to do it in one self-contained SQL statement if possible.
This is Oracle 9i. | In MySQL this would be easy with the `GROUP_CONCAT()` function, but it looks like to do the equivalent in Oracle it's a little messy:
[Oracle group\_concat() updated (again)](http://halisway.blogspot.com/2006/08/oracle-groupconcat-updated-again.html) | Look at [this solution](http://blog.lishman.com/2008/05/how-to-pivot-column-to-csv-list-in.html); it's very useful, using SYS\_CONNECT\_BY\_PATH and analytic functions. | Oracle in-line method to produce CSV for relation | [
"",
"sql",
"oracle",
""
] |
Relating to another [question I asked yesterday](https://stackoverflow.com/questions/744693/how-do-i-detect-if-im-running-in-the-console) with regards to logging I was introduced to TraceListeners which I'd never come across before and sorely wish I had. I can't count the amount of times I've written loggers needlessly to do this and nobody had ever pointed this out or asked my why I didn't use the built in tools. This leads me to wonder what other features I've overlooked and written into my applications needlessly because of features of .NET that I'm unaware of.
Does anyone else have features of .NET that would've completely changed the way they wrote applications or components of their applications had they only known that .NET already had a built in means of supporting it?
It would be handy if other developers posted scenarios where they frequently come across components or blocks of code that are completely needless in hindsight had the original developer only known of a built in .NET component - such as the TraceListeners that I previously noted.
This doesn't necessarily include *newly* added features of 3.5 per se, but could if pertinent to the scenario.
**Edit** - As per previous comments, I'm not really interested in the "Hidden Features" of the language which I agree have been documented before - I'm looking for often overlooked framework components that through my own (or the original developer's) ignorance have written/rewritten their own components/classes/methods needlessly. | Also:
* [Hidden Features of ASP.NET](https://stackoverflow.com/questions/54929/hidden-features-of-asp-net)
* [Hidden Features of VB.NET?](https://stackoverflow.com/questions/102084/hidden-features-of-vb-net)
* [Hidden Features of F#](https://stackoverflow.com/questions/181613/hidden-features-of-f) | The yield keyword changed the way I wrote code. It is an AMAZING little keyword that has a ton of implications for writing really great code.
Yield creates a "deferred invocation" over the data that allows you to string together several operations, but only ever traverse the list once. In the following example, with yield, you would only ever create one list, and traverse the data set once.
```
FindAllData().Filter("keyword").Transform(MyTransform).ToList()
```
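The same deferred-evaluation idea can be seen in Python generators, shown here purely for illustration (the names are made up):

```python
def find_all_data():
    for i in range(5):
        yield i  # values are produced one at a time, on demand

def keep_even(seq):
    for n in seq:
        if n % 2 == 0:
            yield n  # nothing runs until the result is consumed

# The data set is only traversed once, when list() finally pulls values through.
result = list(keep_even(find_all_data()))
print(result)  # -> [0, 2, 4]
```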
The yield keyword is what the majority of LINQ extensions were built on, and it is what gives you the performance that LINQ has. | Rewriting Existing Functionality in the .NET Base Class Library | [
"",
"c#",
".net",
"asp.net",
"vb.net",
""
] |
Strange how I can do it in C++, but not in C#.
To make it clear, I'll paste the two functions in C++ and then in C# and mark the problematic lines in the C# code with a comment "//error".
What the two functions do is encode the parameter and then add it into a global variable named byte1seeds.
These are the functions in C++
```
//Global var:
unsigned char byte1seeds[3];
unsigned long GenerateValue( unsigned long * Ptr )
{
unsigned long val = *Ptr;
for( int i = 0; i < 32; i++ )
val = (((((((((((val >> 2)^val) >> 2)^val) >> 1)^val) >> 1)^val) >> 1)^val)&1)|((((val&1) << 31)|(val >> 1))&0xFFFFFFFE);
return ( *Ptr = val );
}
void SetupCountByte( unsigned long seed )
{
if( seed == 0 ) seed = 0x9ABFB3B6;
unsigned long mut = seed;
unsigned long mut1 = GenerateValue( &mut );
unsigned long mut2 = GenerateValue( &mut );
unsigned long mut3 = GenerateValue( &mut );
GenerateValue( &mut );
unsigned char byte1 = (mut&0xFF)^(mut3&0xFF);
unsigned char byte2 = (mut1&0xFF)^(mut2&0xFF);
if( !byte1 ) byte1 = 1;
if( !byte2 ) byte2 = 1;
byte1seeds[0] = byte1^byte2;
byte1seeds[1] = byte2;
byte1seeds[2] = byte1;
}
```
Now the C# code:
I've changed the function GenerateValue. Instead of having a pointer as a parameter, it has a ulong parameter.
To call it and change both values I use:
1. ulong mut1 = GenerateValue(mut);
2. mut = mut1;
Here are the translated functions (the problematic lines are marked with "//error"):
```
//Global var:
public static byte[] byte1seeds = new byte[3];
public static ulong GenerateValue(ulong val)
{
for( int i = 0; i < 32; i++ )
val = (((((((((((val >> 2)^val) >> 2)^val) >> 1)^val) >> 1)^val) >> 1)^val)&1)|((((val&1) << 31)|(val >> 1))&0xFFFFFFFE);
return val ;
}
public static void SetupCountByte( uint seed )
{
if( seed == 0 ) seed = 0x9ABFB3B6;
ulong mut = seed;
ulong mut1 = GenerateValue(mut);
mut = mut1;
ulong mut2 = GenerateValue(mut);
mut = mut2;
ulong mut3 = GenerateValue(mut);
mut = mut3;
mut = GenerateValue(mut);
byte byte1 = (mut & 0xFF) ^ (mut3 & 0xFF); //error
byte byte2 = (mut1 & 0xFF) ^ (mut2 & 0xFF); //error
if( byte1 == 0 )
byte1 = 1;
if( byte2 == 0 )
byte2 = 1;
byte1seeds[0] = byte1^byte2; //error
byte1seeds[1] = byte2;
byte1seeds[2] = byte1;
}
```
The error is:
Cannot implicitly convert type 'ulong' to 'byte'. An explicit conversion exists (are you missing a cast?)
Edit: the error at problematic line 3 is:
Cannot implicitly convert type 'int' to 'byte'. An explicit conversion exists (are you missing a cast?)
Here comes the question: How to solve those errors?
Thanks in advance! | Add a `(byte)` to cast it. As you could lose precision, you have to tell the compiler that the value will fit into a byte, i.e.
```
byte byte1 = (byte)((mut & 0xFF) ^ (mut3 & 0xFF));
byte byte2 = (byte)((mut1 & 0xFF) ^ (mut2 & 0xFF));
``` | ## Symptom
The following code compiles in C++ but is rejected by a C# compiler, with the third line reporting type incompatibility.
```
ulong mut = 5;
ulong mut3 = 6;
byte foo = (mut & 0xFF) ^ (mut3 & 0xFF);
```
## Explanation
The expression `(mut & 0xFF) ^ (mut3 & 0xFF)` is of type `ulong` and cannot be assigned to a variable of type `byte`.
The variable `mut` is a `ulong`. All overloads of `&` require operand type symmetry, so in the expression `(mut & 0xFF)`, the value `0xFF` is promoted to `ulong`, and the result of the operation has the type `ulong`.
While a similar process does also give the second subexpression the type `ulong`, this is incidental because in the larger expression `A ^ B`, the fact that expression `A` has the type `ulong` would cause expression `B` to be promoted.
Thus, the expression `(mut & 0xFF) ^ (mut3 & 0xFF)` is of type `ulong` and requires an explicit cast before it can be assigned to a variable of type `byte`.
## Solution
Explicitly typecast the entire expression prior to assignment.
## Remarks
People turn off warnings instead of thinking about them because most C++ libraries are riddled with defects. If you switch warnings back on you get so many it's useless trying to wade through them, even though somewhere in the mess there'll be a note to the effect of "a potentially lossy implicit typecast was required".
If you read the C# language specification, with particular regard to operators, you will learn a great many useful things. For example, this code will fail when the operands are not compile-time constants, because `|` on two `byte` values yields an `int`:
```
byte x = 0xF0, y = 0x0E;
byte b = x | y; //b should contain 0xFE, but this does not compile
```
but these will succeed:
```
byte b1 = (byte)(x | y); //typecast fixes it
byte b2 = x;
b2 |= y; //compound assignment casts back to the target type
``` | C#: Cannot convert from ulong to byte | [
"",
"c#",
"byte",
"long-integer",
"unsigned",
""
] |
I am having trouble with a session id not being defined.
The session variable is being called on a page that a form posts to via an ajax/jquery query.
The session variable is called on every other page without a problem. The variable is set when a user logs in, and is checked by an auth.php file. The auth.php file finds the session variable just fine, however, the insert.php file spits out an error: "Undefined index: var....".
Is there a reason for a session variable to get lost?
**EDIT:**
I am calling session\_start in my auth.php file. I get an error message if I try to start another session.
Is it possible that the session variable is not being found because the file is posted to via an ajax request?
**EDIT/ANSWER:**
I found out what was wrong. I was using a flash uploader, and sessions are lost when making an upload with flash. Oh what a night.
Thanks for all your help! | I found out what was wrong. I was using a flash uploader, and sessions are lost when making an upload with flash. | Does the page 'insert.php' have the session started? That may be the reason.
Try to start it if the PHP setting is not set to auto start.
```
session_start();
```
I also find printing out the $_SESSION handy sometimes.
```
print_r($_SESSION);
```
From my experience, there is another possibility, which is that the other file is called from a different host/servername/IP number, which causes the session variables not to be found. Although this is quite a remote possibility.
In your case, try to check it out just in case. | PHP: Possible reasons for $_SESSION['var'] to be mysteriously undefined with flash upload | [
"",
"php",
"ajax",
"session",
""
] |
Given an xml document that looks like this here:
```
<parentRecords>
<parentRecord field1="foo" field2="bar">
<childRecord field1="test" field2="text" />
<childRecord field1="test2" field2="text2" />
</parentRecord>
<parentRecord field1="foo2" field2="bar2">
<childRecord field1="test3" field2="text3" />
<childRecord field1="test4" field2="text4" />
</parentRecord>
</parentRecords>
```
What would be the fastest way in SQL Server 2005 to pass this document into a stored procedure which would insert the parent and child records into a set of tables, where the parent table has an identity column, and the child table refers to the parent by a foreign key?
```
ParentTable
-----------
ParentID identity PK int
Field1 nvarchar
Field2 nvarchar
ChildTable
----------
ChildID identity PK int
ParentID FK int
Field1 nvarchar
Field2 nvarchar
```
I'm using ADO.NET and .NET 3.5 with C#. I have the option of sending the data as an xml parameter type or a text type. I can use the new-fangled sql 2005 XQuery stuff or the oldschool SQL 2000 OPENXML style. Or if it's actually possible to accomplish these inserts using SqlBulkInsert or something like that, I'm down with whatever is the fastest (performance is important in this situation.) Thanks for your help!
---
**EDIT:**
Looks like inserting parent/child sets is indeed as difficult as it seems. I was not in a position to try learning LINQ to SQL and integrating that framework into my product (we're in a dev cycle here!) and I wasn't able to get much traction with the Xml Bulk Insert tool although it appears it could be used for this purpose. In the end I restructured the tables to use GUID primary keys on both tables, and generated the full records in the API. Then I was able to use ADO 2.0 SqlBulkInsert to send the data down at high speed. Answer awarded to Daniel Miller because SQL Server Bulk Load had the best chance of success without re-architecting my application altogether. | Sounds like you need [SQL Server XML Bulk Load](http://msdn.microsoft.com/en-us/library/ms171769.aspx) | Something like this will extract parents followed by children with parent fields
```
DECLARE @fooxml xml
SET @fooxml = N'<parentRecords>
<parentRecord field1="foo" field2="bar">
<childRecord field1="test" field2="text" />
<childRecord field1="test2" field2="text2" />
</parentRecord>
<parentRecord field1="foo2" field2="bar2">
<childRecord field1="test3" field2="text3" />
<childRecord field1="test4" field2="text4" />
</parentRecord>
</parentRecords>'
SELECT
x.item.value('@field1', 'varchar(100)') AS field1,
x.item.value('@field2', 'varchar(100)') AS field2
FROM
@fooxml.nodes('/parentRecords/parentRecord') x(item)
SELECT
x.item.value('@field1', 'varchar(100)') AS field1,
x.item.value('@field2', 'varchar(100)') AS field2,
y.item.value('@field2', 'varchar(100)') AS childfield2,
y.item.value('@field2', 'varchar(100)') AS childfield2
FROM
@fooxml.nodes('/parentRecords/parentRecord') x(item)
CROSS APPLY
x.item.nodes('./childRecord') AS y(item)
``` | SQL Server 2005 Insert parent/child xml data | [
"",
"c#",
"sql-server",
"xml",
"ado.net",
"bulkinsert",
""
] |
This is a C# question.
I was just wondering if anyone has tried storing their data (like config files, for example) in the \*.cs file, instead of XML?
I really hate XML. The whole idea of parsing html-like structure every time you need some data seems stupid.
Are there lightweight alternatives (like YAML) to XML?
In my case I actually need to store some data for a (game) Level, where there are a lot of different objects, terrain types, etc... in a flat file, so I can read it later.
I guess I could serialize my **Level** class, but I want the data-file to be EDITABLE in any text editor for example.
Any help would be welcome, thanks!
PS: Regarding .cs files, I forgot to mention that I want the ability to create (and save) new levels in game, through an editor. | Since you don't want to use XML, I'd recommend creating your own file format that is text editable.
```
PROPERTIES
name = First Level
number = 1
END
MONSTERS
Barbarian
pos = (3, 6)
health = 2
Dragon
pos = (10, 10)
health = 8
END
```
and so on so forth for whatever else you need.
Creating a parser yourself is really straightforward. Pretty much just read in the file line by line.
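For instance, a line-by-line parser for the sketch above could look like this (written in Python for brevity; the section names come from the example, and the exact rules are up to you):

```python
def parse_level(text):
    """Parse SECTION ... END blocks of 'key = value' lines into a dict."""
    sections, current = {}, None
    for raw in text.splitlines():
        line = raw.strip()
        if not line:
            continue
        if current is None:
            current = line            # a header line like PROPERTIES
            sections[current] = {}
        elif line == "END":
            current = None            # close the current section
        elif "=" in line:
            key, value = (part.strip() for part in line.split("=", 1))
            sections[current][key] = value
    return sections

data = parse_level("PROPERTIES\nname = First Level\nnumber = 1\nEND\n")
print(data)  # -> {'PROPERTIES': {'name': 'First Level', 'number': '1'}}
```

This sketch only handles flat key = value sections; nested records like the monster entries would need one more level of parsing.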
At the first line, you see the title is PROPERTIES so you then read in properties until you read in the END line. Same thing for any other headers you'd have. There are many ways you can vary this, so you can make the syntax however you want it to be. | I've been working in the game industry for years, and we use XML because it's got the best tool support. Yaml or Ini files are used for configs, but game data is stored in XML. Of course we normally have tools that convert XML into binary code, for shipping games, but during development and custom game tools development XML simply rocks.
Sometimes XML is the right hammer for your needs, don't place prejudice on technologies just because they aren't the best in some fields (say Ajax(transport) vs Data Definition) | Store data in C# source files vs. XML etc...? | [
"",
"c#",
"xml",
"serialization",
"xna",
"yaml",
""
] |
What's the best open source Java library to add Facebook functionality to a Java web app? | Since facebook stopped supporting a Java API, the mantle of trying to provide one has been taken up by a google coding group:
<http://code.google.com/p/facebook-java-api/>
The API is provided under the [MIT License](http://www.opensource.org/licenses/mit-license.php). | For the simplest and most updated API, I'm liking RestFB so far.
<http://restfb.com/> | Best open source Java library to incorporate Facebook functionality into a Java web app? | [
"",
"java",
"web-applications",
"open-source",
"facebook",
""
] |
I have a date in mm/dd/yyyy format in the database. I want to display it in the form as dd/mm/yyyy.
Can anybody help? I want to get the time along with the date. | CONVERT(VARCHAR(10), YourField, 103)
Per your comment - you want the time as well.
```
select (CONVERT(VARCHAR(10), YourField, 103) + ' ' + CONVERT(VARCHAR(15), YourField, 108)) as DateTime
```
[<http://msdn.microsoft.com/en-us/library/aa226054.aspx>](http://msdn.microsoft.com/en-us/library/aa226054.aspx) | A date value doesn't have a format at all. It gets its format when you convert it to a string.
You can use the `convert` function to convert the value in the database, but you should rather leave that to the code in the user interface. | To change date format in sql | [
"",
"sql",
"sql-server",
"sql-server-2005",
"t-sql",
""
] |
With generated Java source code, like
* code generated with Hibernate tools
* code generated with JAXB schema binding (xjc)
* code generated with WSDL2Java (cxf)
all generated classes are "value object" types, without business logic. And if I add methods to the generated source code, I will lose these methods if I repeat the source code generation.
Do these Java code generation tools offer ways to "extend" the generated code?
For example,
* to override the ToString method (for logging)
* to implement the visitor pattern (for data analysis / validation) | For JAXB, see [Adding Behaviours](https://jaxb.java.net/guide/Adding_behaviors.html).
Basically, you configure JAXB to return a custom instance of the object you'd normally expect. In the below example you create a new object PersonEx which extends the JAXB object Person. This mechanism works well in that you're deriving from the generated classes, and not altering the JAXB classes or schemas at all.
```
package org.acme.foo.impl;
class PersonEx extends Person {
@Override
public void setName(String name) {
if(name.length()<3) throw new IllegalArgumentException();
super.setName(name);
}
}
@XmlRegistry
class ObjectFactoryEx extends ObjectFactory {
@Override
Person createPerson() {
return new PersonEx();
}
}
```
Note that the @Override directive is important in case your JAXB object changes - it will prevent your customisation becoming *orphaned*. | As for Hibernate you may tweak the template files used in code generation to change their behaviour. If you want to tweak the Hibernate Tools you can edit, for example: ***dao/daohome.ftl***
You may even add fields to the "toString()" output editing the ***.hbm.xml*** files
```
...
<property name="note" type="string">
<meta attribute="use-in-tostring">true</meta>
<column name="note" />
</property>
...
```
Both for logging and validation you may consider using [AOP with AspectJ](http://en.wikipedia.org/wiki/AspectJ) (I don't recommend messing with the generated code, since you might want to build that from scratch many times over). | How can I extend Java code generated by JAXB, CXF or Hibernate tools? | [
"",
"java",
"hibernate",
"code-generation",
"jaxb",
"cxf",
""
] |
How do I round off decimal values?
Example :
decimal Value = " 19500.98"
I need to display this value in a textbox rounded off, like " 19501 "
if decimal value = " 19500.43"
then
value = " 19500 " | Look at [`Math.Round(decimal)`](http://msdn.microsoft.com/en-us/library/3s2d3xkk.aspx) or [the overload which takes a `MidpointRounding` argument](http://msdn.microsoft.com/en-us/library/ms131274.aspx).
Of course, you'll need to parse and format the value to get it from/to text. If this is input entered by the user, you should probably use `decimal.TryParse`, using the return value to determine whether or not the input was valid.
```
string text = "19500.55";
decimal value;
if (decimal.TryParse(text, out value))
{
value = Math.Round(value);
text = value.ToString();
// Do something with the new text value
}
else
{
// Tell the user their input is invalid
}
``` | Math.Round( value, 0 ) | Round Off decimal values in C# | [
"",
"c#",
"decimal",
"numbers",
"rounding",
""
] |
I'm starting a fairly complex Swing application that is heavily graphics-oriented, with about 1000 separate jpegs, 30+ different forms, and timers keeping track of the rate of user-interactions throughout.
My question is from a practical programming perspective, after I've already written a storyboard for the entire project and got it approved by the client, where's the best place (code-wise) to begin programming this massive project and in what order should I program the elements?
(Example Answer: first begin coding the declare and init statements of all the necessary pieces, then write skeleton versions of all the methods, then deal with swing design and layout manager (gridbag), and then deal with Events and Listeners)
Thanks for the advice everyone, oh and btw I really love StackOverflow! | You're describing a "waterfall" development approach - completing some level of the program first, then completing the next level etc. It's indeed one possible approach, but many people find the so-called [tracer bullet](http://www.artima.com/intv/tracer.html) approach better; first make something *functional*, then learn from it, adjust what's needed and proceed. It's especially useful if you're working with a client, because by showing prototypes you can get feedback and avoid misunderstandings. | I'd pick up a book on TDD and even if you're not going to write automated tests, it will be full of good advice on how to approach your project.
After that I'd pick a single piece of functionality that slices all the way through your application vertically and implement that end-to-end. This should allow you to get any infrastructure/frameworks in place and spot any gotcha's that may get thrown up out of your design.
If your client has the time free, show them each piece of functionality along the way and make sure that every piece you do adds some value to the product.
--EDIT
In addition I'd take a look at using a graphical designer for your screens instead of using the GridBagLayout. That will just waste time and can quickly become a maintainence nightmare when changing screens. I personally prefer the ones that work in a resource file type way, where the screen is essentially "compiled" and you just load it into your code | Best programming process for creating a graphically-complex Java Swing Application? | [
"",
"java",
"user-interface",
"design-patterns",
"swing",
""
] |
I'm writing a compressor for a long stream of 128 bit numbers. I would like to store the numbers as differences -- storing only the difference between the numbers rather than the numbers themselves; since the differences are smaller, they pack into fewer bytes.
However, for compression then I need to subtract these 128 bit values, and for decompression I need to add these values. Maximum integer size for my compiler is 64 bits wide.
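For a concrete feel for the two-word carry/borrow arithmetic involved, here is a sketch cross-checked in Python (illustrative only — Python is used because its integers are arbitrary-precision; the actual target is C/C++):

```python
MASK64 = (1 << 64) - 1

def add128(a, b):
    # a and b are (high, low) pairs of unsigned 64-bit words.
    low = (a[1] + b[1]) & MASK64
    carry = 1 if low < a[1] else 0           # low wrapped around
    high = (a[0] + b[0] + carry) & MASK64
    return (high, low)

def sub128(a, b):
    low = (a[1] - b[1]) & MASK64
    borrow = 1 if a[1] < b[1] else 0         # low would underflow
    high = (a[0] - b[0] - borrow) & MASK64
    return (high, low)

# Cross-check against Python's arbitrary-precision integers.
def to_int(x):
    return (x[0] << 64) | x[1]

a, b = (3, MASK64), (0, 1)                   # an addition that carries
assert to_int(add128(a, b)) == (to_int(a) + to_int(b)) & ((1 << 128) - 1)
assert sub128(add128(a, b), b) == a
```

The same wrap-around comparisons (`low < a[1]` for carry, `a[1] < b[1]` for borrow) are what a C/C++ implementation on unsigned 64-bit words relies on.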
Anyone have any ideas for doing this efficiently? | If all you need is addition and subtraction, and you already have your 128-bit values in binary form, a library might be handy but isn't strictly necessary. This math is trivial to do yourself.
I don't know what your compiler uses for 64-bit types, so I'll use INT64 and UINT64 for signed and unsigned 64-bit integer quantities.
```
class Int128
{
public:
...
Int128 operator+(const Int128 & rhs)
{
Int128 sum;
sum.high = high + rhs.high;
sum.low = low + rhs.low;
// check for overflow of low 64 bits, add carry to high
if (sum.low < low)
++sum.high;
return sum;
}
Int128 operator-(const Int128 & rhs)
{
Int128 difference;
difference.high = high - rhs.high;
difference.low = low - rhs.low;
// check for borrow out of low 64 bits, subtract it from high
if (difference.low > low)
--difference.high;
return difference;
}
private:
INT64 high;
UINT64 low;
};
``` | Take a look at [GMP](http://gmplib.org/manual/).
```
#include <stdio.h>
#include <gmp.h>
int main (int argc, char** argv) {
mpz_t x, y, z;
char *xs, *ys, *zs;
int i;
int base[4] = {2, 8, 10, 16};
/* setting the value of x in base 10 */
mpz_init_set_str(x, "100000000000000000000000000000000", 10);
/* setting the value of y in base 16 */
mpz_init_set_str(y, "FF", 16);
/* just initializing the result variable */
mpz_init(z);
mpz_sub(z, x, y);
for (i = 0; i < 4; i++) {
xs = mpz_get_str(NULL, base[i], x);
ys = mpz_get_str(NULL, base[i], y);
zs = mpz_get_str(NULL, base[i], z);
/* print all three in the current base */
printf("x = %s\ny = %s\nz = %s\n\n", xs, ys, zs);
free(xs);
free(ys);
free(zs);
}
return 0;
}
```
The output is
```
x = 10011101110001011010110110101000001010110111000010110101100111011111000000100000000000000000000000000000000
y = 11111111
z = 10011101110001011010110110101000001010110111000010110101100111011111000000011111111111111111111111100000001
x = 235613266501267026547370040000000000
y = 377
z = 235613266501267026547370037777777401
x = 100000000000000000000000000000000
y = 255
z = 99999999999999999999999999999745
x = 4ee2d6d415b85acef8100000000
y = ff
z = 4ee2d6d415b85acef80ffffff01
``` | How can I add and subtract 128 bit integers in C or C++ if my compiler does not support them? | [
"",
"c++",
"integer",
"128-bit",
""
] |
I am using Struts 2, and my struts.xml file contains code like this:
```
<?xml version="1.0" encoding="UTF-8" ?>
```
```
<constant name="struts.enable.DynamicMethodInvocation" value="false" />
<constant name="struts.devMode" value="true" />
<include file="strutsAuthentication.xml"/>
<package name="default" extends="struts-default">
<interceptors>
<interceptor-stack name="crudStack">
<interceptor-ref name="checkbox" />
<interceptor-ref name="params" />
<interceptor-ref name="static-params" />
<interceptor-ref name="defaultStack" />
</interceptor-stack>
</interceptors>
</package>
```
And I have specified all the required actions inside strutsAuthentication.xml. That code is:
```
<struts>
<package name="authentication" extends="default" namespace="/authentication">
<action name="saveCountry" class="saveCountryAction">
<interceptor-ref name="defaultStack" />
<result name="success">/savecountry.jsp</result>
<result name="error">/error.jsp</result>
</action>
</package>
</struts>
```
When I deploy my application to Tomcat, it gives me this warning:
WARN (org.apache.struts2.components.Form:308) - No configuration found for the specified action: 'saveCountry' in namespace: ''. Form action defaulting to 'action' attribute's literal value.
It means struts.xml can't include strutsAuthentication.xml. Does anyone have a solution? Thanks in advance. | Got the solution. For the above problem, I had made a mistake in calling the action from the JSP page: the namespace name "authentication" must be included when calling the action class. The final solution is: "authentication/saveCountry.action". | I don't know what version of Struts 2 you're using, but if you're using the 2.1.x branch you should look at the convention plugin <http://cwiki.apache.org/S2PLUGINS/convention-plugin.html>. You can get rid of 99% of the XML configuration.
"",
"java",
"struts2",
""
] |
I just want the equivalent of closing and reopening my main program. I want to invoke it when a "new"-like option from a drop-down menu is clicked on. Something like calling root.destroy() and then re-initiating the mainloop.
How can I get this done? | You could take all your GUI building logic and initial state code out of the mainloop and put it into functions. Call these functions from the mainloop (something like: buildgui() & initstate()) and then, when the user clicks your menu icon, just call initstate() to set it back like it was when the application first started. | There are at least three ways you can solve this.
Method one: **the head fake**. When you create your app, don't put all the widgets in the root window. Instead, hide the root window and create a new toplevel that represents your application. When you restart it's just a matter of destroying that new toplevel and re-running all your start-up logic.
Method two: **nuke and pave**. Similar in concept but slightly different in execution. In this model, when you want to restart you simply delete all the widgets in the main window, reset the geometry to null (so the window will once again resize itself based on its contents) and then run the logic that draws all the other widgets.
Method three: **if it worked the first time...** As suggested by Martin v. Löwis, simply have your program exec a new instance of the program, then exit.
The first two methods are potentially faster and have the (dis?)advantage of preserving the current environment. For example you could save the copy of the clipboard, column widths, etc. The third method absolutely guarantees a blank slate. | Resetting the main GUI window | [
"",
"python",
"tkinter",
""
] |
> Duplicate: [How to truncate a date in
> .net?](https://stackoverflow.com/questions/252493/how-to-truncate-a-date-in-net/252494#252494)
I have a datetime field containing '4/1/2009 8:00:00AM'. I want to get '4/1/2009' without the time. | Use the [Date](http://msdn.microsoft.com/en-us/library/system.datetime.date.aspx) property of the datetime field (if you need to do this on the client). | DateTime.Date will give you just the date portion of the datetime if you want to pass it around your application. | How to get date part from datetime? | [
"",
"asp.net",
"sql",
"vb.net",
""
] |
Are there any PHP scripts out there that can unzip regular zip files? Our requirements are that we must run with safe\_mode on and we're not allowed to install any extensions to enable this functionality. Thus, everything must be done in plain PHP scripts.
Any thoughts?
---
EDIT: Unfortunately, neither of the posted solutions works for us. We may not change the safe\_mode requirement unless switching hosts, which is currently not an option. | pclzip.lib.php is a fairly popular and well-used PHP class that is able to extract zip files using pure PHP.
<http://www.phpconcept.net/pclzip/index.en.php> | The usual answer here is [this](http://www.php.net/manual/en/function.zip-open.php) or [this](http://www.php.net/manual/en/function.ziparchive-open.php). However, according to the [PHP documentation](http://www.php.net/manual/en/zip.requirements.php):
> ## PHP 4
>
> The bundled PHP 4 version requires [» ZZIPlib](http://zziplib.sourceforge.net/), by Guido Draheim, version 0.10.6 or later
>
> ## PHP 5.2.0 or later
>
> This extension uses the functions of [» zlib](http://www.zlib.net/) by Jean-loup Gailly and Mark Adler.
So, both need external dependencies. | How to unzip files using PHP if safe mode is on and you can't install extensions? | [
"",
"php",
"zip",
""
] |
**Is it possible to store a .Net object in the registry?**
I want to store a generic `List<>` in the registry, and then retrieve it and parse it back to the List.
Is this possible, or do I need to manually serialize it and then deserialize it back?
**[UPDATE]**
Following the posted answers, I am going to serialize the object and save it in the *Current User's AppData folder*. | Why the registry?
The appropriate place to store these kind of serialised objects is usually in the users' Application Data folder or [Isolated Storage](http://msdn.microsoft.com/en-us/library/3ak841sy(VS.80).aspx). Though of course the method of serialisation is up to you. XML, binary etc, it's essentially a file on disk.
You could consider things like "Local Database" or SQL Server Express, depending on your data and concurrency needs.
Some applications do store a Most Recently Used (MRU) list in the registry, but that's just by iterating all the values of a given key. I don't recommend that approach. | It's *possible*, **if** the type included in the list is serializable. If that's the case, you can serialize it and store it in a string value.
But it's probably a *very bad idea*. The registry gets big enough as it is. Instead, put this kind of thing on the *file system*, in the All Users Application Data folder. | C#: Storing a .Net Object in the Registry | [
"",
"c#",
"registry",
""
] |
```
typedef pair<double, double> dd;
const double epsilon = 1e-6;
struct sort_by_polar_angle {
dd center;
// Constructor of any type
// Just find and store the center
template<typename T> sort_by_polar_angle(T b, T e) {
int count = 0;
center = dd(0,0);
while(b != e) {
center.first += b->first;
center.second += b->second;
b++;
count++;
}
double k = count ? (1.0/count) : 0;
center.first *= k;
center.second *= k;
}
// Compare two points, return true if the first one is earlier
// than the second one looking by polar angle
// Remember, that when writing comparator, you should
// override not ‘operator <’ but ‘operator ()’
bool operator () (const dd& a, const dd& b) const {
double p1 = atan2(a.second-center.second, a.first-center.first);
double p2 = atan2(b.second-center.second, b.first-center.first);
return p1 + epsilon < p2;
}
};
// ...
vector < dd > points;
sort(all(points), sort_by_polar_angle(all(points)));
```
When sort\_by\_polar\_angle() is called, does it function as a constructor?
How is the overloaded operator () used correctly? | When you call `sort_by_polar_angle()` in the `sort()` function, you are creating a temporary object of type `sort_by_polar_angle` (i.e. its constructor is called). Inside the sort algorithm, the functor object you passed is used as something like `functor()`, which is translated into `functor.operator()`. | It won't work for points on a straight line from the center. The 'angle' of these is equal, so their order isn't determined. Sorting these points will return undetermined results.
That's because the comparison used for sorting must be a 'strict' ordering: if a < b, then it must not also be the case that b < a. (Actually the [definition](http://www.sgi.com/tech/stl/StrictWeakOrdering.html) is somewhat more complicated.)
```
dd a0 = { 0, 1 };
dd b0 = { 0, 2 };
assert( sort_by_polar_angle ()( a0, b0 ) && ! sort_by_polar_angle () ( b0, a0 ) );
``` | Why this sample works? | [
"",
"c++",
"stl",
"functor",
""
] |
We're running a fairly complex app as a portlet on Websphere Portal Server 5.1 on AIX using IBM JDK 1.4.2. On our production system I can see a strange behaviour in the verbose GC logs. After a period of normal behaviour the system can start rapidly allocating larger and larger blocks. The system starts to spend > 1000 ms to complete each GC, but blocks are being allocated so quickly that there is only a 30 ms gap between allocation failures.
* Each allocation failure is slightly larger than the last by some integer amount x 1024 bytes. E.g. you might have 5 MB, and then a short while later 5 MB + 17 \* 1024.
* This can go on for up to 10 minutes.
* The blocks tend to grow up to 8 to 14 MB in size before it stops.
* It's a quad-core system, and I assume that it's now spending >95% of its time doing GC with three cores waiting for the other core to complete GC. For 10 minutes. Ouch.
* Obviously system performance dies at this point.
* We have JSF, hibernate & JDBC, web services calls, log4j output and not much else.
I interpret this as likely to be something infrastructural rather than our application code. If it were a bad string concatenation inside a loop we would expect more irregular growth than blocks of 1024. If it were StringBuffer or ArrayList growth we would see the block sizes doubling. The growth is making me think of log buffering or something else. I can't think of anything in our app that allocates even 1 MB, let alone 14. Today I looked for logging backing up in memory before being flushed to disk, but the volume of logging statements over this period of GC thrashing was nowhere near the MB range.
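To make the growth-pattern reasoning concrete, here is a quick illustrative simulation (Python, not from the original post) contrasting doubling growth with small fixed-step growth:

```python
# Doubling growth (StringBuffer/ArrayList-style) yields few,
# exponentially spaced allocation sizes; fixed-step growth yields
# many allocations of slowly increasing size -- the pattern seen in
# the GC logs (e.g. 5 MB, then 5 MB + 17 * 1024, ...).
def doubling_sizes(start, limit):
    sizes, n = [], start
    while n < limit:
        sizes.append(n)
        n *= 2
    return sizes

def fixed_step_sizes(start, step, count):
    return [start + i * step for i in range(count)]

print(doubling_sizes(1024, 20000))  # [1024, 2048, 4096, 8192, 16384]
print(fixed_step_sizes(5 * 2**20, 17 * 1024, 3))
```

With a few dozen fixed-step allocations, the GC sees a steady stream of near-identical large requests, which matches the 30 ms allocation-failure spacing described above.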
Clearly the problem is with the excessive memory allocation rather than with the garbage collection, which is just doing its best to keep up. Something is allocating a large block and trying to grow it inefficiently in increments that are far too small.
Any ideas what might be causing all this when the system is under load? Anybody seen anything similar with Portal Server?
Note: for anybody who's interested it's starting to look like the cause is an occasional but enormous database query. It seems the culprit is either Hibernate or the JDBC driver. | Depending on the exact version of the IBM JDK you are using, there are various options for tracking "large allocations". The differences are mainly in the implementation, and the result is a logging Java stack trace when an allocation over a certain size is made (which should help you track down the culprit).
"Sovereign" 1.4.2 SR4+:
<http://www-01.ibm.com/support/docview.wss?uid=swg21236523>
"J9" 1.4.2 (if Java is running under -Xj9 option):
You need to get hold of a JVMPI / JVMTI agent for the same purpose, I can't find a link for this one right now. | Not sure what could cause the problem, but here is an idea on how to investigate more:
The IBM JDK is great because it can be configured to do a heap dump when it receives a SIGQUIT signal.
In a previous project, it was not our JDK, but we would use it whenever we had memory issues to investigate.
Here's how to enable the heapdump:
<http://publib.boulder.ibm.com/infocenter/javasdk/v1r4m2/index.jsp?topic=/com.ibm.java.doc.diagnostics.142j9/html/enabling_a_heapdump.html>
Then there's a tool called heaproot that will allow you to see what's in these dumps.
Finding the type of objects should lead you to the culprit. | Strange garbage collection behaviour with Websphere Portal Server | [
"",
"java",
"garbage-collection",
"websphere",
"portal-server",
""
] |
Is it true that *const\_cast* is just a way to tell the compiler "stop moaning, treat this as a non-const pointer"? Are there any cases when *const\_cast* itself is translated into actual machine code? | Conceivably, there could be architectures where a const pointer had a different representation to a non-const one, in which case the compiler would have to emit some code. I'm not aware of any such architectures, however. | No, it just removes the const attribute at compile time. | Does const_cast ever cause actual code emission? | [
"",
"c++",
"constants",
"memory-model",
"const-cast",
"data-representation",
""
] |
I'm working on a mod\_wsgi script. At the beginning of it is:
```
sys.path.extend(map(os.path.abspath, ['/media/server/www/webroot/']))
```
But I've noticed, that every time I update the script the sys.path var keeps growing with duplicates of this extension:
```
['/usr/lib64/python25.zip'
'/usr/lib64/python2.5'
'/usr/lib64/python2.5/plat-linux2'
'/usr/lib64/python2.5/lib-tk'
'/usr/lib64/python2.5/lib-dynload'
'/usr/lib64/python2.5/site-packages'
'/usr/lib64/python2.5/site-packages/Numeric'
'/usr/lib64/python2.5/site-packages/gtk-2.0'
'/usr/lib64/python2.5/site-packages/scim-0.1'
'/usr/lib/python2.5/site-packages'
'/media/server/www/webroot'
'/media/server/www/webroot'
'/media/server/www/webroot'
'/media/server/www/webroot']
```
It resets every time I restart Apache. Is there any way to make sure this doesn't happen? I want the module path to be loaded only once. | No need to worry about checking or using abspath yourself. Use the ‘site’ module's built-in [addsitedir](http://docs.python.org/library/site.html#site.addsitedir) function. It will take care of these issues and others (e.g. pth files) automatically:
```
import site
site.addsitedir('/media/server/www/webroot/')
```
(This function is only documented in Python 2.6, but it has pretty much always existed.) | One fairly simple way to do this is to check to see if the path has already been extended before extending it::
```
path_extension = map(os.path.abspath,['/media/server/www/webroot/'])
if path_extension[0] not in sys.path:
sys.path.extend(path_extension)
```
This has the disadvantage, however, of always scanning through most of `sys.path` when checking to see if it's been extended. A faster, though more complex, version is below::
```
path_extension = map(os.path.abspath,['/media/server/www/webroot/'])
if path_extension[-1] not in reversed(sys.path):
sys.path.extend(path_extension)
```
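Either variant of the guard can be wrapped in a small idempotent helper — a sketch (not from the original answers) that can be verified directly:

```python
import os
import sys

def extend_path_once(*paths):
    # Append each absolute path only if it is not already present,
    # so repeated reloads of the WSGI script cannot grow sys.path.
    for p in map(os.path.abspath, paths):
        if p not in sys.path:
            sys.path.append(p)

extend_path_once("/media/server/www/webroot/")
extend_path_once("/media/server/www/webroot/")  # second call is a no-op
assert sys.path.count(os.path.abspath("/media/server/www/webroot/")) == 1
```

Note that `os.path.abspath` normalizes away the trailing slash, so the membership test compares like with like.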
A better solution, however, is probably to either add the path extensions to your `PYTHONPATH` environment variable or put a `.pth` file into your `site-packages` directory:
<http://docs.python.org/install/index.html> | mod_wsgi/python sys.path.extend problems | [
"",
"python",
"apache",
"mod-wsgi",
""
] |
I am dynamically creating a PDF file. After creating it, I want to open the PDF file. For that I am using this code:
```
System.Diagnostics.Process p = new System.Diagnostics.Process();
p = new System.Diagnostics.Process();
p.StartInfo.FileName = CreatePDF(); // method that creates my PDF and returns the full path
try
{
if (!p.Start())
Controller.Error = "Opening acrobat failed..";
}
catch(Exception ex)
{
Controller.Error = "Create PDF::" + ex.Message;
}
```
When executing this code, nothing happens and I don't get any errors. What am I doing wrong? | **UPDATE:**
Since this is an ASP.NET app, this code will not work. It cannot interact with the desktop of the server hosting ASP.NET.
If the intention is to display the PDF for the users accessing from a browser, then the code for that is completely different. | Asp.net? What I would do is to take the memory stream and write it to the response stream like follows:
```
Response.ContentType = "application/pdf";
Response.AddHeader("content-disposition", string.Format("attachment; filename={0}", file.FileName));
Response.BinaryWrite(file.FileBytes);
Response.Flush();
Response.End();
```
For windows forms I'd take a look at using Foxit Reader instead. I have a [blog post](http://blog.zoolutions.se/post/2009/01/27/Printing-PDFe28099s-programmatically-from-C.aspx) about printing directly from foxit. You can open similarly.
**EDIT:** To create an attachment you add a reference to System.Net.Mail and do something like:
```
var stream = GetTheFileAsStream();
var attachment = new Attachment(stream);
``` | C# Process object doesn't open PDF | [
"",
"c#",
"pdf",
""
] |
The docs for the subprocess module state that 'If *shell* is True, the specified command will be executed through the shell'. What does this mean in practice, on a Windows OS? | When you execute an external process, the command you want may look something like "foo arg1 arg2 arg3". If "foo" is an executable, that is what gets executed and given the arguments.
However, often it is the case that "foo" is actually a script of some sort, or maybe a command that is built-in to the shell and not an actual executable file on disk. In this case the system can't execute "foo" directly because, strictly speaking, these sorts of things aren't executable. They need some sort of "shell" to execute them. On \*nix systems this shell is typically (but not necessarily) /bin/sh. On Windows it will typically be cmd.exe (or whatever is stored in the COMSPEC environment variable).
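A quick way to see the difference from Python itself (a sketch run on a POSIX system, where the shell is /bin/sh rather than cmd.exe, and using the modern `subprocess.run` API):

```python
import subprocess

# With shell=True the command string is handed to the shell, so shell
# features such as pipes work:
out = subprocess.run("echo hello | tr a-z A-Z", shell=True,
                     capture_output=True, text=True).stdout.strip()
print(out)  # HELLO

# With shell=False (the default) the command is executed directly and
# must be given as an argument list; no shell features are available:
out2 = subprocess.run(["echo", "hello"],
                      capture_output=True, text=True).stdout.strip()
print(out2)  # hello
```

On Windows the first call would go through `cmd.exe /c` instead, per the COMSPEC mechanism described above.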
This parameter lets you define what shell you wish to use to execute your command, for the relatively rare case when you don't want the default. | It means that the command will be executed using the program specified in the `COMSPEC` environment variable. Usually `cmd.exe`.
To be exact, subprocess calls the [`CreateProcess`](http://msdn.microsoft.com/en-us/library/ms682425.aspx) windows api function, passing `"cmd.exe /c " + args` as the `lpCommandLine` argument.
If shell==False, the `lpCommandLine` argument to CreateProcess is simply `args`. | What does the 'shell' argument in subprocess mean on Windows? | [
"",
"python",
"subprocess",
""
] |
My code won't compile due to the error below:
The call is ambiguous between the following methods or properties: 'System.Math.Round(double, int)' and 'System.Math.Round(decimal, int)
My code is
```
Math.Round(new FileInfo(strFilePath).Length / 1024, 1)
```
How can I fix this?
Thanks | The problem is that you make an integer division (results also in an `int`) and a `int` can be implicitly converted to both `double` and `decimal`. Therefore, you need to make sure the expression results in one of those; `double` is probably what you want.
```
Math.Round(new FileInfo(strFilePath).Length / 1024.0, 1)
``` | ```
Math.Round(new FileInfo(strFilePath).Length / 1024d, 1)
``` | C# The call is ambiguous between the following methods or properties: 'System.Math.Round(double, int)' and 'System.Math.Round(decimal, int) | [
"",
"c#",
"ambiguity",
""
] |
This is all asp.net c#.
I have an enum
```
public enum ControlSelectionType
{
NotApplicable = 1,
SingleSelectRadioButtons = 2,
SingleSelectDropDownList = 3,
MultiSelectCheckBox = 4,
MultiSelectListBox = 5
}
```
The numerical value of this is stored in my database. I display this value in a datagrid.
```
<asp:boundcolumn datafield="ControlSelectionTypeId" headertext="Control Type"></asp:boundcolumn>
```
The ID means nothing to a user so I have changed the boundcolumn to a template column with the following.
```
<asp:TemplateColumn>
<ItemTemplate>
<%# Enum.Parse(typeof(ControlSelectionType), DataBinder.Eval(Container.DataItem, "ControlSelectionTypeId").ToString()).ToString()%>
</ItemTemplate>
</asp:TemplateColumn>
```
This is a lot better... However, it would be great if there was a simple function I can put around the Enum to split it by Camel case so that the words wrap nicely in the datagrid.
Note: I am fully aware that there are better ways of doing all this. This screen is purely used internally and I just want a quick hack in place to display it a little better. | Indeed a regex replace is the way to go, as described in the other answer; however, this might also be of use to you if you want to go in a different direction.
```
using System.ComponentModel;
using System.Reflection;
```
...
```
public static string GetDescription(System.Enum value)
{
FieldInfo fi = value.GetType().GetField(value.ToString());
DescriptionAttribute[] attributes = (DescriptionAttribute[])fi.GetCustomAttributes(typeof(DescriptionAttribute), false);
if (attributes.Length > 0)
return attributes[0].Description;
else
return value.ToString();
}
```
this will allow you define your Enums as
```
public enum ControlSelectionType
{
[Description("Not Applicable")]
NotApplicable = 1,
[Description("Single Select Radio Buttons")]
SingleSelectRadioButtons = 2,
[Description("Completely Different Display Text")]
SingleSelectDropDownList = 3,
}
```
Taken from
<http://www.codeguru.com/forum/archive/index.php/t-412868.html> | I used:
```
public static string SplitCamelCase(string input)
{
return System.Text.RegularExpressions.Regex.Replace(input, "([A-Z])", " $1", System.Text.RegularExpressions.RegexOptions.Compiled).Trim();
}
```
Taken from <http://weblogs.asp.net/jgalloway/archive/2005/09/27/426087.aspx>
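For a quick sanity check of the regex idea outside .NET, here is the same substitution in Python (illustrative, not from the original answers):

```python
import re

def split_camel_case(s):
    # Insert a space before each capital letter, then strip the
    # space that lands in front of the first word.
    return re.sub(r"([A-Z])", r" \1", s).strip()

print(split_camel_case("SingleSelectRadioButtons"))  # Single Select Radio Buttons
```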
vb.net:
```
Public Shared Function SplitCamelCase(ByVal input As String) As String
Return System.Text.RegularExpressions.Regex.Replace(input, "([A-Z])", " $1", System.Text.RegularExpressions.RegexOptions.Compiled).Trim()
End Function
```
Here is a [dotnet Fiddle](https://dotnetfiddle.net/2aiv7g) for online execution of the c# code. | Splitting CamelCase | [
"",
"c#",
"asp.net",
"string",
""
] |
Does anyone know if/how I can convert a binary formatted Mac OS X plist file to a plain XML string in C#?
I know there are some plist editors for Windows available that say they support binary formatted plist files, but I need to do this inline in my own application. | I realize this is super old, but I'm posting my solution for posterity.
I couldn't find anything usable sort of launching an external process when I embarked on binary plist serialization a few weeks ago, so I had to roll my own.
For others looking for C#/.NET binary plist serialization, you can find my implementation at <https://github.com/ChadBurggraf/plists-cs>.
Hopefully this helps some folks out. | A quick Google search reveals [plutil.pl](http://scw.us/iPhone/plutil/), but that will only work if Perl is installed (which I'm fairly certain is not the default on Windows). | Convert a Mac OS X binary formatted plist to readable format in C# | [
"",
"c#",
".net",
"macos",
"binary",
"plist",
""
] |
We're using SQL Server 2005 to track a fair amount of constantly incoming data (5-15 updates per second). We noticed after it has been in production for a couple months that one of the tables has started to take an obscene amount of time to query.
The table has 3 columns:
* `id` -- autonumber (clustered)
* `typeUUID` -- GUID generated before the insert happens; used to group the types together
* `typeName` -- The type name (duh...)
One of the queries we run is a distinct on the `typeName` field:
```
SELECT DISTINCT [typeName] FROM [types] WITH (nolock);
```
The `typeName` field has a non-clustered, non-unique ascending index on it. The table contains approximately 200M records at the moment. When we run this query, the query took 5m 58s to return! Perhaps we're not understanding how the indexes work... But I didn't think we misunderstood them *that* much.
To test this a little further, we ran the following query:
```
SELECT DISTINCT [typeName] FROM (SELECT TOP 1000000 [typeName] FROM [types] WITH (nolock)) AS [subtbl]
```
This query returns in about 10 seconds; as I would expect, it's scanning the table.
Is there something we're missing here? Why does the first query take so long?
**Edit:** Ah, my apologies, the first query returns 76 records, thank you ninesided.
**Follow up:** Thank you all for your answers; it makes more sense to me now (I don't know why it didn't before...). Without an index, it's doing a table scan across 200M rows; with an index, it's doing an index scan across 200M rows...
SQL Server does prefer the index, and it does give a little bit of a performance boost, but nothing to be excited about. Rebuilding the index did take the query time down to just over 3m instead of 6m, an improvement, but not enough. I'm just going to recommend to my boss that we normalize the table structure.
Once again, thank you all for your help!! | You do misunderstand the index. Even if it did use the index it would still do an index scan across 200M entries. This is going to take a long time, plus the time it takes to do the DISTINCT (causes a sort) and it's a bad thing to run. Seeing a DISTINCT in a query always raises a red flag and causes me to double check the query. In this case, perhaps you have a normalization issue? | There is an issue with the SQL Server optimizer when using the `DISTINCT` keyword. The solution was to force it to keep the same query plan by breaking out the distinct query separately.
So we took queries such as:
```
SELECT DISTINCT [typeName] FROM [types] WITH (nolock);
```
and break it up into the following:
```
SELECT typeName INTO #tempTable1 FROM types WITH (NOLOCK)
SELECT DISTINCT typeName FROM #tempTable1
```
Another way to get around it is to use a `GROUP BY`, which gets a different optimization plan. | Slow distinct query in SQL Server over large dataset | [
"",
"sql",
"sql-server",
"sql-server-2005",
""
] |
I want to make the results of a foreach loop into a string variable I can use later all over (so I don't need to paste the foreach loop everywhere). I have this:
```
foreach($pairs as $d=>$m) {
$orderedpairs .= "[".$d."],[".$m."]"+"<br />";
}
echo $orderedpairs;
```
If I substitute the assignment operator with "echo", it works fine, so the loop is ok, I think it's just the variable assignment that's at issue. Thanks! | The plus sign is causing your concatenation to fail - change it to a `.`
*Contrary to what others are saying*, the scope of your variable is not the problem. You **CAN declare them inside a loop and access them after it**. PHP variables are not scoped like Java, C#, and other languages. | You have a + in there for concatenation. You need .
Also, you should define $orderedpairs as an empty string before the loop. | Why won't this PHP iteration work? | [
"",
"php",
"iteration",
"concatenation",
""
] |
Here is my issue. I am trying to call a page, foo.php?docID=bar, and return a PDF, which is stored as a BLOB in the DB, to the screen.
Here is the portion of my code which actually returns the PDF:
```
$docID = isset($_REQUEST['docID']) ? $_REQUEST['docID'] : null;
if ($docID == null){
die("Document ID was not given.");
}
$results = getDocumentResults($docID);
if (verifyUser($user, $results['ProductId'])){
header('Content-type: application/pdf');
// this is the BLOB data from the results.
print $results[1];
}
else{
die('You are not allowed to view this document.');
}
```
This works perfectly fine in Firefox.
However, in IE, it doesn't show anything at all. If I'm on another page (e.g. google.com) and I type in the URL to go to this page, it will say it's done, but I will still have google.com on my screen.
I checked the headers for the responses from both firefox and IE. They are identical.
Does anyone have any suggestions? Need more information?
**EDIT**: If it helps at all, here's the response header and the first line of the content:
```
HTTP/1.1 200 OK
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Content-Length: 349930
Content-Type: application/pdf
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Server: Microsoft-IIS/6.0
X-Powered-By: PHP/5.1.2
Set-Cookie: PHPSESSID=cql3n3oc13crv3r46h2q04dvq4; path=/; domain=.example.com
Content-Disposition: inline; filename='downloadedFile.pdf'
X-Powered-By: ASP.NET
Date: Tue, 21 Apr 2009 16:35:59 GMT
%PDF-1.4
```
**EDIT**: Also, the page which pulls out the pdf file actually uses HTTPS instead of HTTP.
Thanks in advance,
~Zack | I figured out what the issue was. It's a bug involving IE, HTTPS, and add-ons. (See [here](http://simonwillison.net/2008/May/30/radiacnet/).)
It was a caching issue. When I set:
```
header("Cache-Control: max-age=1");
header("Pragma: public");
```
(see [here](http://us.php.net/manual/en/function.header.php#83219)), the PDF was in cache long enough for the Adobe Reader add-on to grab it. | I had this issue too; I used the following, which seems to work fine:
```
header("Content-type: application/pdf");
header("Content-Length: $length");
header("Content-Disposition: inline; filename='$filename'");
``` | IE (HTTPS): generating pdf from php file doesn't work | [
"",
"php",
"internet-explorer",
"pdf",
"https",
""
] |
I am watching the ASP.NET learn videos on asp.net/learn. In this tutorial, they are building a quiz engine. At one point, the narrator explains that we are going to use the Session object to maintain the state between each page (each page contains a question and four answers). He says that "since this is a low traffic website" it is okay to use Session and that he doesn't have the time to implement a more sophisticated method.
I am just wondering what alternate method(s) is he hinting at? And why is session a bad choice for a high traffic website? | Storing data in a database, or in cookies or some other method that is not directly tying up web server memory.
In addition to load, session also raises issues with the ability to use farms since you would either need to synchronize the session across the farm, or make sessions sticky, which can impact scalability. | For alternatives you can read the article [Nine Options for Managing Persistent User State in Your ASP.NET Application](http://msdn.microsoft.com/en-us/magazine/cc300437.aspx).
In the articles the author explains the pros and cons of each method.
From the summary:
> ASP.NET provides many different ways
> to persist data between user requests.
> You can use the Application object,
> cookies, hidden fields, the Session or
> Cache objects, and lots of other
> methods. Deciding when to use each of
> these can sometimes be difficult. This
> article will introduce the
> aforementioned techniques and present
> some guidelines on when to use them.
> Although many of these techniques
> existed in classic ASP, best practices
> for when to use them have changed with
> the introduction of the .NET
> Framework. To persist data in ASP.NET,
> you'll have to adjust what you learned
> previously about handling state in
> ASP. | Why is it a bad idea to use Session to store state in high traffic websites? | [
"",
"c#",
".net",
"asp.net",
""
] |
What's the best way to lazily initialize a collection? I'm specifically looking at Java. I've seen some people decide to do this in modification methods (which seems a bit yucky), as follows:
```
public void addApple(final Apple apple) {
if (this.apples == null) {
apples = new LinkedList<Apple>();
}
this.apples.add(apple);
}
```
You could refactor the initialization into a method and call it from add/delete/update etc... but it seems a bit yuck. It's often compounded by the fact that people also expose the collection itself via:
```
public Collection<Apple> getApples() {
return apples;
}
```
which breaks encapsulation and leads to people accessing the Collection directly.
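One standard way to keep the collection private while still letting callers read it is to hand back an unmodifiable view. A small sketch of the idea (my addition for illustration; I use `String` in place of the `Apple` type so the snippet is self-contained):

```
import java.util.Collection;
import java.util.Collections;
import java.util.LinkedList;
import java.util.List;

class AppleBasket {
    private List<String> apples; // lazily created, as in the snippets above

    public void addApple(final String apple) {
        if (this.apples == null) {
            this.apples = new LinkedList<String>();
        }
        this.apples.add(apple);
    }

    // Callers can read and iterate, but any mutation attempt on the
    // returned view throws UnsupportedOperationException.
    public Collection<String> getApples() {
        if (this.apples == null) {
            return Collections.emptyList();
        }
        return Collections.unmodifiableList(this.apples);
    }
}
```

This keeps the lazy initialization an internal detail: callers never see the null, and they cannot bypass `addApple` to mutate the list directly.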
The purpose for lazy initialization is purely performance related.
I'm curious to see what other people's proposed approaches are for this. Any ideas? | I put the lazy instantiation into the getter for a given function. Usually I instantiate a list lazily to avoid the DB hit if possible. Example:
```
public final Collection<Apple> getApples() {
    if (apples == null) {
        // findApples would call the DB, or whatever it needs to do
        apples = findApples();
    }
    return apples;
}
public void addApple(final Apple apple) {
//we are assured that getApples() won't return
//null since it's lazily instantiated in the getter
getApples().add(apple);
}
```
This approach means that other functions (say, removeApples()) won't need to worry about instantiation either. They, too, would just call getApples(). | To safely lazily initialize a member in a multi-threaded environment, you need some concurrency mechanism to make the initialization atomic and visible to other threads. This cost is paid both during initialization *and* each time the lazily initialized member is accessed. This ongoing expense can significantly undermine performance. It is very important to profile the effect of lazy initialization. The right choice is going to vary widely depending on the application. | Lazily Initialization of a Collection | [
"",
"java",
"collections",
""
] |
I wrote this SQL query to search in a table:
```
SELECT * FROM TableName WHERE Name LIKE '%spa%'
```
The table contains these rows, for example:
1. Space Company.
2. Spa resort.
3. Spa hotel.
4. Spare Parts.
5. WithoutTheKeyword.
I want to know how to edit this query so it returns the results sorted like this:
> 2 Spa resort
>
> 3 Spa hotel
>
> 1 Space Company
>
> 4 Spare Parts
Meaning the items which contain the exact word come first, then the partial matches. | Something like
```
Select * from TableName where Name Like 'Spa%'
ORDER BY case when soundex(name) = soundex('Spa') then '1' else soundex(name) end
```
should work ok.
actually this will work better
```
Select * from TableName where Name Like 'Spa%'
ORDER BY DIFFERENCE(name, 'Spa') desc;
```
FWIW I did some quick tests and if 'Name' is in a NONCLUSTERED INDEX, SQL will use the index and doesn't do a table scan. Also, LIKE seems to use fewer resources than charindex (which returns less desirable results). Tested on SQL 2000. | You realize, I presume, that your schema just about eliminates any usefulness of indexes for these kinds of queries?
A big problem is your "LIKE '%spa%'". Any "LIKE" key starting with a wildcard is an automatic table scan.
---
EDIT:
I read your question to say that there is a single field, Name, with field values something like "1 Space Company", "2 Spa resort", etc. with a number followed by words. And you needed the wild card in front of your search key to get past the number part. (This is to clarify my first comment.) Am I guessing correctly or not? | How to make a sql search query more powerful? | [
"",
"sql",
"mysql",
"sql-server",
"sql-server-2005",
"t-sql",
""
] |
I dynamically load a UserControl into a View that's in a MultiView control. Although the UserControl adds an event handler, the event never fires.
What am I missing here?
Thanks!
**Containing ASPX page:**
```
protected override void OnPreRender(EventArgs e)
{
if (MultiView1.ActiveViewIndex == 2) //If the tab is selected, load control
{
Control Presenter = LoadControl("Presenter.ascx");
MultiView1.Views[2].Controls.Add(Presenter);
}
base.OnPreRender(e);
}
```
**Presenter.ascx.cs**
```
override protected void OnInit(EventArgs e)
{
Retry.Click += this.Retry_Click; //This is a .Net 2.0 project
base.OnInit(e);
}
protected void Retry_Click(object sender, EventArgs e)
{
//This never fires
}
``` | I am thinking it is not firing because you are loading the control in your page's prerender event. Upon postback, the control is being lost because there is no view state for it. Therefore there is no control to fire its event. Try to load the control in the page's init event. Let us know what happens! | Postback event handling is done before rendering so the control is not present in the page in your case.
The life cycle events are fired in this order (skipped a few):
1. Init
2. Load
3. PreRender
4. Unload
And event handling is done between Load and PreRender (in case some events change the way the page should be rendered, it makes sense).
So just move your code that loads the Retry control to Load or Init.
Reference: [Asp.Net Page Life Cycle Overview](http://msdn.microsoft.com/en-us/library/ms178472.aspx) | UserControl's Event Handler not firing | [
"",
"c#",
"events",
"user-controls",
"multiview",
""
] |
What am I doing wrong here?
```
import re
x = "The sky is red"
r = re.compile ("red")
y = r.sub(x, "blue")
print x # Prints "The sky is red"
print y # Prints "blue"
```
How do I get it to print "The sky is blue"? | The problem with your code is that there are two sub functions in the re module. One is the general one and there's one tied to regular expression objects. Your code is not following either one:
The two methods are:
`re.sub(pattern, repl, string[, count])` [(docs here)](http://docs.python.org/library/re.html#re.sub)
Used like so:
```
>>> y = re.sub(r, 'blue', x)
>>> y
'The sky is blue'
```
And for when you compile it beforehand, as you tried, you can use:
`RegexObject.sub(repl, string[, count=0])` [(docs here)](http://docs.python.org/library/re.html#re.RegexObject.sub)
Used like so:
```
>>> z = r.sub('blue', x)
>>> z
'The sky is blue'
``` | You read the API wrong
<http://docs.python.org/library/re.html#re.sub>
pattern.sub(repl, string[, count])
```
r.sub(x, "blue")
# should be
r.sub("blue", x)
``` | What is wrong with my attempt to do a string replace operation in Python? | [
"",
"python",
""
] |
**Platform**: Visual Studio 2008 SP1 with Resharper 4.1, .NET 3.5
I have a class with a static method, `GetProperty<T>` that returns a property value lazily.
```
private static T GetProperty<T>(T backingField, Func<T> factory)
where T : class
{
if (backingField == null)
backingField = factory();
return backingField;
}
```
But when I use the above method to return a property,
I am getting two warnings that say that the private backing fields are not assigned.
But they are assigned later on only when they are needed.

Is this warning *ignorable*?
-- Or --
Is my approach to loading a property flawed? | Your method is flawed. To take this approach you need to make `backingField` a `ref` parameter.
```
private static T GetProperty<T>(ref T backingField, Func<T> factory)
```
Then on `GetProperty`, pass `ref _ImagXpress` or `ref _PdfXpress`.
The way you're doing it now just assigns a new value to the parameter, not to the actual backing field. | Your approach is flawed. Your fields will never be set to anything. The `backingField` argument is being set in the `GetProperty<T>` method, but this does not update the field you're passing in. You'll want to pass in that argument with the `ref` keyword attached to it like this:
```
private static T GetProperty<T>(ref T backingField, Func<T> factory)
``` | Is my approach to lazy loading flawed? | [
"",
"c#",
"resharper",
"lazy-loading",
"warnings",
""
] |
This is probably pretty basic, but to save me an hour or so of grief can anyone tell me how you can work out the number of bits required to represent a given positive integer in Java?
e.g. I get a decimal 11, (1011). I need to get the answer, 4.
I figured if I could work out how to set all the bits other than the most significant bit to 0, and then >>> it, I'd get my answer. But... I can't. | Well, you can just count how many times you shift right before you're left with just zero:
```
int value = 11;
int count = 0;
while (value > 0) {
count++;
value = value >> 1;
}
``` | Well, the answer is pretty simple. If you have an int value:
```
int log2(int value) {
return Integer.SIZE - Integer.numberOfLeadingZeros(value);
}
```
The same exists for Long...
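As a quick cross-check (my addition, not part of either answer — the class and method names are invented for the example), the shift-counting loop from the first answer and this formula agree for all non-negative inputs:

```
public class BitLength {
    // Count how many right-shifts it takes to reach zero.
    static int bitsByLoop(int value) {
        int count = 0;
        while (value > 0) {
            count++;
            value = value >> 1;
        }
        return count;
    }

    // Formula from this answer; Integer.numberOfLeadingZeros needs Java 5+.
    static int bitsByFormula(int value) {
        return Integer.SIZE - Integer.numberOfLeadingZeros(value);
    }

    public static void main(String[] args) {
        // Verify the two approaches never disagree on a range of values.
        for (int v = 0; v <= 100000; v++) {
            if (bitsByLoop(v) != bitsByFormula(v)) {
                throw new AssertionError("mismatch at " + v);
            }
        }
        System.out.println(bitsByFormula(11)); // 11 is 1011 in binary -> 4
    }
}
```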
[Edit]
If shaving milliseconds is an issue here, Integer.numberOfLeadingZeros(int) is reasonably efficient, but still does 15 operations... Expanding a reasonable amount of memory (300 bytes, static) you could shave that to between 1 and 8 operations, depending on the range of your integers. | Find out number of bits needed to represent a positive integer in binary? | [
"",
"java",
"bit-manipulation",
"bit",
""
] |
I'm building an app that will store some of our clients' details, things like usernames / passwords, information that we need to remember and keep secure.
What's the best method for storing this information securely? | Such an open-ended question with not a lot of detail to go on. I'd suggest reading Chris Shiflett's excellent "[Essential PHP Security](https://rads.stackoverflow.com/amzn/click/com/059600656X)" before you go any further. It's short, to the point and very practical.
There's also a reasonable amount of the advice available from the book's website too at <http://phpsecurity.org/> | Pretty simple actually. Set up a quick MySQL database, and a user table. In that user table, store the usernames in a column and a hashed version of the password in another column.
As added security, I like to generate a random 8-character string and store that as well in each row - I call that column the "Keycode". When the user signs in with a correct username / password, I store their authentication in session variables AS WELL as the matching "Keycode".
That way, the session authentication can not only look for the right username / password, but quickly query the db, and check to see if the "Keycode" stored in the session variable is the same as the keycode in the row.
It works well because not even the user knows their keycode. | Best way to secure data PHP + MYSQL | [
"",
"php",
"mysql",
"security",
"administration",
""
] |
I've got some very large XML files which I read using a `System.Xml.Serialization.XmlSerializer`. It's pretty fast (well, fast enough), but I want it to pool strings, as some long strings occur very many times.
The XML looks somewhat like this:
```
<Report>
<Row>
<Column name="A long column name!">hey</Column>
<Column name="Another long column name!!">7</Column>
<Column name="A third freaking long column name!!!">hax</Column>
<Column name="Holy cow, can column names really be this long!?">true</Column>
</Row>
<Row>
<Column name="A long column name!">yo</Column>
<Column name="Another long column name!!">53</Column>
<Column name="A third freaking long column name!!!">omg</Column>
<Column name="Holy cow, can column names really be this long!?">true</Column>
</Row>
<!-- ... ~200k more rows go here... -->
</Report>
```
And the classes the XML is deserialized into look somewhat like this:
```
class Report
{
public Row[] Rows { get; set; }
}
class Row
{
public Column[] Columns { get; set; }
}
class Column
{
public string Name { get; set; }
public string Value { get; set; }
}
```
When the data is imported, a new string is allocated for every column name. I can see why that is so, but according to my calculations, that means a few duplicated strings make up some ~50% of the memory used by the imported data. I'd consider it a very good trade-off to spend some extra CPU cycles to cut memory consumption in half. Is there some way to have the `XmlSerializer` pool strings, so that duplicates are discarded and can be reclaimed the next time a gen0 GC occurs?
---
Also, some final notes:
* I can't change the XML schema. It's an exported file from a third-party vendor.
* I know I could (theoretically) make a faster parser using an `XmlReader` instead, and it would not only allow me to do my own string pooling, but also to process data during mid-import so that not all 200k lines have to be saved in RAM until I've read the entire file. Still, I'd rather not spend the time writing and debugging a custom parser. The real XML is a bit more complicated than the example, so it's quite a non-trivial task. And as mentioned above - the `XmlSerializer` really does perform well enough for my purposes, I'm just wondering if there is an easy way to tweak it a little.
* I could write a string pool of my own and use it in the `Column.Name` setter, but I'd rather not as (1) that means fiddling with auto-generated code, and (2) it opens up for a slew of problems related to concurrency and memory leaks.
* And no, by "pooling", I don't mean "interning" as that can cause memory leaks. | Personally, I wouldn't hesitate to hand-crank the entities - either by assuming ownership of the generated code, or doing it manually (and getting rid of the arrays ;-p).
Re concurrency - you could perhaps have a thread-static pool? AFAIK, `XmlSerializer` just uses the one thread, so this should be fine. It would also allow you to throw the pool away when you're done. So then you could have something *like* a static pool, but per thread. Then perhaps tweak the setters:
```
class Column
{
private string name, value;
public string Name {
get { return this.name; }
set { this.name= MyPool.Get(value); }
}
public string Value{
get { return this.value; }
set { this.value = MyPool.Get(value); }
}
}
```
where the static `MyPool.Get` method talks to a static field (`HashSet<string>`, presumably) decorated with `[ThreadStatic]`. | I know it's an old thread, but I found a nice way to do it:
Create an `XmlReader` that overrides the `Value` property so that, before the value is returned, you check whether it already exists in your string pool and return the pooled instance.
The `Value` property of `XmlReader` from [msdn](https://learn.microsoft.com/en-us/dotnet/api/system.xml.xmlreader.value?view=netframework-4.7.2):
> The value returned depends on the NodeType of the node. The following
> table lists node types that have a value to return. All other node
> types return String.Empty.
For example, for `Attribute` `NodeType` it returned the value of the attribute.
Hence the implementation will look like this:
```
public class StringPoolXmlTextReader : XmlTextReader
{
private readonly Dictionary<string, string> stringPool = new Dictionary<string, string>();
internal StringPoolXmlTextReader(Stream stream)
: base(stream)
{
}
public override string Value
{
get
{
if (this.NodeType == XmlNodeType.Attribute)
return GetOrAddFromPool(base.Value);
return base.Value;
}
}
private string GetOrAddFromPool(string str)
{
if (str == null)
return null;
if (stringPool.TryGetValue(str, out var res) == false)
{
res = str;
stringPool.Add(str, str);
}
return res;
}
}
```
How to use:
```
using (var stream = File.Open(@"..\..\Report.xml", FileMode.Open))
{
var reader = new StringPoolXmlTextReader(stream);
var ser = new XmlSerializer(typeof(Report));
var data = (Report)ser.Deserialize(reader);
}
```
**Performance:** I have checked the performance for 200K rows with random column values and I found that the deserialize time was the same and the `Report` memory went down from 78,551,460 bytes to 48,890,016 bytes (decreased by ~38%).
**Notes:**
1. The example inherits from `XmlTextReader`, but you can inherit from any `XmlReader`
2. You can also use the string pool for the column values by overriding the `Value` property like this: `public override string Value => GetOrAddFromPool(base.Value);` but it can increase the deserialization time by about 20% when the values are not duplicated (like in my test, where they are random). | Can an XmlSerializer pool strings to avoid large duplicate strings? | [
"",
"c#",
".net",
"xml-serialization",
"memory-management",
""
] |
I have hardly ever noticed a Python program that uses else in a for loop.
I recently used it to perform an action on exit based on the loop variable, since it is still in scope.
What is the pythonic way to use an else in a for loop? Are there any notable use cases?
And, yeah, I dislike using the break statement; I'd rather make the looping condition more complex. Would I be able to get any benefit out of it if I don't like to use the break statement anyway?
Worth noting that the for loop has had an else since the language's inception, the first ever version. | What could be more pythonic than PyPy?
Look at what I discovered starting at line 284 in ctypes\_configure/configure.py:
```
for i in range(0, info['size'] - csize + 1, info['align']):
if layout[i:i+csize] == [None] * csize:
layout_addfield(layout, i, ctype, '_alignment')
break
else:
raise AssertionError("unenforceable alignment %d" % (
info['align'],))
```
And here, from line 425 in pypy/annotation/annrpython.py ([clicky](http://codespeak.net/pypy/dist/pypy/annotation/annrpython.py))
```
if cell.is_constant():
return Constant(cell.const)
else:
for v in known_variables:
if self.bindings[v] is cell:
return v
else:
raise CannotSimplify
```
In pypy/annotation/binaryop.py, starting at line 751:
```
def is_((pbc1, pbc2)):
thistype = pairtype(SomePBC, SomePBC)
s = super(thistype, pair(pbc1, pbc2)).is_()
if not s.is_constant():
if not pbc1.can_be_None or not pbc2.can_be_None:
for desc in pbc1.descriptions:
if desc in pbc2.descriptions:
break
else:
s.const = False # no common desc in the two sets
return s
```
A non-one-liner in pypy/annotation/classdef.py, starting at line 176:
```
def add_source_for_attribute(self, attr, source):
"""Adds information about a constant source for an attribute.
"""
for cdef in self.getmro():
if attr in cdef.attrs:
# the Attribute() exists already for this class (or a parent)
attrdef = cdef.attrs[attr]
s_prev_value = attrdef.s_value
attrdef.add_constant_source(self, source)
# we should reflow from all the reader's position,
# but as an optimization we try to see if the attribute
# has really been generalized
if attrdef.s_value != s_prev_value:
attrdef.mutated(cdef) # reflow from all read positions
return
else:
# remember the source in self.attr_sources
sources = self.attr_sources.setdefault(attr, [])
sources.append(source)
# register the source in any Attribute found in subclasses,
# to restore invariant (III)
# NB. add_constant_source() may discover new subdefs but the
# right thing will happen to them because self.attr_sources
# was already updated
if not source.instance_level:
for subdef in self.getallsubdefs():
if attr in subdef.attrs:
attrdef = subdef.attrs[attr]
s_prev_value = attrdef.s_value
attrdef.add_constant_source(self, source)
if attrdef.s_value != s_prev_value:
attrdef.mutated(subdef) # reflow from all read positions
```
Later in the same file, starting at line 307, an example with an illuminating comment:
```
def generalize_attr(self, attr, s_value=None):
# if the attribute exists in a superclass, generalize there,
# as imposed by invariant (I)
for clsdef in self.getmro():
if attr in clsdef.attrs:
clsdef._generalize_attr(attr, s_value)
break
else:
self._generalize_attr(attr, s_value)
``` | Basically, it simplifies any loop that uses a boolean flag like this:
```
found = False # <-- initialize boolean
for divisor in range(2, n):
if n % divisor == 0:
found = True # <-- update boolean
break # optional, but continuing would be a waste of time
if found: # <-- check boolean
print(n, "is divisible by", divisor)
else:
print(n, "is prime")
```
and allows you to skip the management of the flag:
```
for divisor in range(2, n):
if n % divisor == 0:
print(n, "is divisible by", divisor)
break
else:
print(n, "is prime")
```
Note that there is already a natural place for code to execute when you do find a divisor - right before the `break`. The only new feature here is a place for code to execute when you tried all divisors and did not find any.
*This helps only in conjunction with `break`*. You still need booleans if you can't break (e.g. because you're looking for the last match, or have to track several conditions in parallel).
Oh, and BTW, this works for while loops just as well.
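A minimal `while`/`else` illustration of that point (my addition; the function name is invented for the example):

```
def largest_power_of_two_up_to(limit):
    """Return the largest power of two <= limit, or None if limit < 1."""
    candidate = 1
    while candidate <= limit:
        if candidate * 2 > limit:
            break        # found it -- skips the else
        candidate *= 2
    else:
        return None      # condition failed without a break: limit < 1
    return candidate

print(largest_power_of_two_up_to(100))  # 64
print(largest_power_of_two_up_to(0))    # None
```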
## any/all
If the only purpose of the loop is a yes-or-no answer, `any()`/`all()` functions with a generator or generator expression can be utilized:
```
if any(n % divisor == 0
for divisor in range(2, n)):
print(n, "is composite")
else:
print(n, "is prime")
```
Note the elegance! The code is 1:1 what you want to say!
[This has similar efficiency to a loop with a `break`, because the `any()` function is short-circuiting, only running the generator expression until it yields `True`.]
But that won't give you the actual divisor, as `any()` always returns exactly `True` or `False`. A loop with `else:` is hard to beat when you need both (A) access to current value that was "found" (B) separate code paths for "found" vs. "not found" cases. | Pythonic ways to use 'else' in a for loop | [
"",
"for-loop",
"python",
""
] |
I've just started to play around with [Eclipse GMF](http://www.eclipse.org/modeling/gmf/).
* Has anyone used the framework?
* Any good or bad experiences you had using it?
* Any alternatives for graphical modeling you could suggest?
* **EDIT:** What good examples are available? | **Has anyone used the framework?** Yes, I am using it right now. It works, but it is typically quite a bit of coding for the graphical figures. I currently am struggling to leverage the IBM RSA/RSM UML editparts/figures/nodes etc built on top of GMF.
**Any good or bad experiences you had using it?** Looking back on my initial dives into GMF/EMF/GEF etc., I can say for certain: study the examples. There are important patterns that you have to pick up on from the examples and not the documentation. I would also suggest a new book ([Eclipse Modeling Project: A Domain-Specific Language (DSL)](https://rads.stackoverflow.com/amzn/click/com/0321534077)) specific to GMF modeling in Eclipse. I paged through it and it seemed to be the missing manual for some of the more basic concepts. Why the book is good is that it focuses on the key to making UML/models useful: constraining them to a specific domain and providing a tool that only allows valid models to be created. There is not a lot of documentation online and the API only tells you so much. WATCH OUT for repaint/paint loops caused by calling setBounds() or other set methods on children; it crashes the Eclipse instance, not fun. Oh yes, and the APIs are split between Eclipse help documentation versions or **not included at all.**
**Any alternatives for graphical modeling you could suggest?**
Consider UML profiles with custom images and icons rather than full shape generation. It takes about 2 hours to put together a pretty good Image/Icon editor building on top of UML graphical objects and UML profiles. The IBM RSM tool UML Profile tooling project does this quickly. There is a lot you can do with constrained UML profiles (via Eclipse plug-ins or OCL). Entering GMF land is more than an order of magnitude effort increase, from 1 to 10 hours no problem.
Consider pure DSL (Domain specific language) tools out there. Google will provide a good list. From what I have seen the main reason to use GMF is eclipse integration and leveraging existing ecore/UML models, this is why I use GMF.
Ask yourself: do I need a model (which is easy), or do I need a tool for creating instances of this model? If there are only 1-5 expert users, there may not be a need for a sleek tool. | The [TOPCASED](http://topcased.org) project makes use of GMF. It provides various graphical editors for UML and other diagrams. | Does anyone have any experiences with Eclipse GMF? | [
"",
"java",
"modeling",
"eclipse-gmf",
""
] |
I have a project for school where we need to use flex and bison. I want to use C++ so that I have access to STL and my own classes that I wrote. We were provided with the following Makefile:
```
CC = gcc
CFLAGS = -g
OBJs = parse.tab.o symtab.o attr.o lex.yy.o
default: parser
parser: ${OBJs}
${CC} ${CFLAGS} ${OBJs} -o parser -lfl
lex.yy.c: scan.l parse.tab.h attr.h
flex -i scan.l
parse.tab.c: parse.y attr.h symtab.h
bison -dv parse.y
parse.tab.h: parse.tab.c
clean:
rm -f parser lex.yy.c *.o parse.tab.[ch] parse.output
depend:
makedepend -I. *.c
```
scan.l and parse.y have some initial flex/bison stuff to generate the scanner and parser. I need to add my own stuff to those files. symtab.{h, c} is supposed to be an implementation of a symbol table. attr.{h, c} are for some attribute magic. I want to make symtab.c a .cc file so I can use STL. I also have other reasons for wanting to use C++.
I tried to use a parse.ypp file, so that a .cpp file would be generated. But the problem is that I'm not getting the right .h file. I changed the Makefile to look like this:
```
CC = g++ # Change gcc to g++
CFLAGS = -g
OBJs = lex.yy.o parse.tab.o symtab.o attr.o
default: lex.yy.c parser # added lex.yy.c so I could just keep lex stuff in C since I don't really need C++ there
parser: ${OBJs}
${CC} ${CFLAGS} ${OBJs} -o parser -lfl
lex.yy.o: scan.l parse.tab.h attr.h # added this rule to use gcc instead of g++
gcc -c -o lex.yy.o lex.yy.c
lex.yy.c: scan.l parse.tab.h attr.h
flex -i scan.l
parse.tab.cpp: parse.ypp attr.h symtab.h
bison -dv parse.ypp
parse.tab.h: parse.tab.cpp # I want a parse.tab.h but I get parse.tab.hpp
clean:
rm -f parser lex.yy.c *.o parse.tab.cpp parse.tab.h parse.output
depend:
makedepend -I. *.c
```
Can someone tell me what I need to add or do to get C++ working? It should be noted I added some stuff in the .y (or .ypp) file to deal with moving from C to C++. In particular, I had to declare some stuff as extern. My main problem is that when I run make, scan.l has a bunch of syntax errors, and they seem to be because it cannot include parse.tab.h (because it is never generated). | You don't need to do anything with flex or bison to use C++, I have done it many times. You just have to make sure you use g++, not gcc.
Your problems are with the Makefile, not the code. | ```
For using flex with C++:
1: read the flex docs:
2: use flex -+ -o file.cc parser.ll
3: In the .ll file:
%option c++
%option yyclass="Your_class_name"
%option batch
4: In your .hh file, derive Your_class_name from public yyFlexLexer
5: you can then use your_class_instance.yylex()
``` | How do I use C++ in flex and bison? | [
"",
"c++",
"c",
"bison",
"flex-lexer",
""
] |
As the title says, when should I use `List` and when should I use `ArrayList`?
Thanks | The main time to use `ArrayList` is in .NET 1.1
Other than that, `List<T>` all the way (for your local `T`)...
For those (rare) cases where you don't know the type up-front (and can't use generics), even `List<object>` is more helpful than `ArrayList` (IMO). | You should always use `List<TypeOfChoice>` (introduced in .NET 2.0 with generics) since it is type-safe and faster than `ArrayList` (no unnecessary boxing/unboxing).
The only case I can think of where an ArrayList could be handy is if you need to interface with old stuff (.NET 1.1) or you need an array of objects of different types and you load everything up as object - but you could do the latter with `List<Object>`, which is generally better. | c# When should I use List and when should I use arraylist? | [
"",
"c#",
"list",
"arraylist",
""
] |
I wonder if there are any suggestions for declarative GUI programming in Java. (I abhor visual-based GUI creator/editor software, but am getting a little tired of manually instantiating JPanels and Boxes and JLabels and JLists etc.)
That's my overall question, but I have two specific questions for approaches I'm thinking of taking:
1. JavaFX: is there an example somewhere of a realistic GUI display (e.g. not circles and rectangles, but listboxes and buttons and labels and the like) in JavaFX, which can interface with a Java sourcefile that accesses and updates various elements?
2. Plain Old Swing with something to parse XUL-ish XML: has anyone invented a declarative syntax (like XUL) for XML for use with Java Swing? I suppose it wouldn't be hard to do, to create some code based on STaX which reads an XML file, instantiates a hierarchy of Swing elements, and makes the hierarchy accessible through some kind of object model. But I'd rather use something that's well-known and documented and tested than to try to invent such a thing myself.
3. [JGoodies Forms](http://www.jgoodies.com/freeware/forms/) -- not exactly declarative, but kinda close & I've had good luck with JGoodies Binding. But their syntax for Form Layout seems kinda cryptic.
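To make point 2 concrete, here is a rough sketch of the idea — walk a tiny XML description and instantiate a matching Swing hierarchy. Everything here (the element names, the `TinyXmlSwingBuilder` class) is my own invention; a real implementation would also need layouts, attribute mapping, and error handling:

```
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.swing.JComponent;
import javax.swing.JLabel;
import javax.swing.JPanel;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;

public class TinyXmlSwingBuilder {
    // Recursively map <label> elements to JLabels; anything else becomes a JPanel.
    static JComponent build(Element e) {
        if ("label".equals(e.getTagName())) {
            return new JLabel(e.getAttribute("text"));
        }
        JPanel panel = new JPanel();
        for (Node n = e.getFirstChild(); n != null; n = n.getNextSibling()) {
            if (n instanceof Element) {
                panel.add(build((Element) n));
            }
        }
        return panel;
    }

    static JComponent fromXml(String xml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
            return build(doc.getDocumentElement());
        } catch (Exception ex) {
            throw new RuntimeException(ex); // keep the sketch simple
        }
    }

    public static void main(String[] args) {
        JComponent root = fromXml(
            "<panel><label text='Name:'/><label text='Address:'/></panel>");
        System.out.println(root.getComponentCount()); // 2
    }
}
```

Libraries like javabuilders and CookSwing (mentioned in the answers below in this dataset row) are essentially much more complete versions of this mapping.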
**edit:** lots of great answers here! (& I added #3 above) I'd be especially grateful for hearing any experiences any of you have had with using one of these frameworks for real-world applications.
p.s. I did try a few google searches ("java gui declarative"), just didn't quite know what to look for. | You might have a look at [javabuilders](http://code.google.com/p/javabuilders/); it uses [YAML](http://en.wikipedia.org/wiki/YAML) to build Swing UIs.
A simple example from the [manual](http://github.com/jacek99/javabuilders/raw/39a2f1854c1bd53162e53e845f5ce663d7214d22/org.javabuilders.docs/javabuilders/out/pdf/swing.javabuilder.pdf) [PDF]:
```
JFrame:
name: myFrame
title: My Frame
content:
- JLabel:
name: myLabel2
text: My First Label
- JLabel:
name: myLabel2
text: My Second Label
```
Alternatively:
```
JFrame:
name: myFrame
title: My Frame
content:
- JLabel: {name: myLabel2, text: My First Label}
- JLabel: {name: myLabel2, text: My Second Label}
```
Or even:
```
JFrame(name=myFrame,title=My Frame):
- JLabel(name=myLabel2, text=My First Label)
- JLabel(name=myLabel2, text=My Second Label)
``` | As the author of CookSwing, a tool that does what you need, I've given this subject a long hard look before doing the actual implementation. I made a living writing Java Swing GUI applications.
IMO, if you are going to use any kind of imperative programming language to describe Java Swing components, you might as well just use Java. Groovy etc. only adds complications without much simplification.
Declarative languages are much better, because even non-programmers can make sense out of it, especially when you need to delegate the task of fine tuning of specific layouts to artists. XML is perfect for declarative languages (over other choices) because of simplicity, readability, and plenty of editors/transformation tools etc available.
Here are the problems faced in declarative GUI programming, not in any particular order. These issues have been addressed in CookSwing.
1. Readability and simplicity. (JavaFX is not any simpler than XML. The closing tags of XML help reading quite a bit, and don't add much extra typing since XML editors usually do it for you.)
2. Extensibility. Very important, because custom Swing components will come up for any non-trivial projects.
3. GUI layouts. Also very important. Being able to handle BorderLayout, GridBagLayout, JGoodies FormsLayout, etc are practically a must.
4. Simplicity of copy/paste. In the course of designing the layout, it is necessary to try out different ones. So one needs to be able to copy/paste and move things around. XML is better because the hierarchy of components and layouts is easy to see. JavaFX is somewhat problematic due to multi-line attributes and indentation issues. Having a good editor is a must, and there are plenty of good XML editors.
5. Templates (i.e. being able to include another layout file) is very useful for consistent look. For example, one might want to have a consistent look of dialogs, button panels, etc.
6. Interactions with Java code. This is crucial. Some GUI components can only be created with Java code (for whatever the reason). It is thus necessary to be able load these objects. It is also necessarily being able to directly hook up listeners and other Java objects/components within the XML code. Using ids to hook them up later WILL not work well, as it is very tedious.
7. Internationalization (i18n). Being able to load text / string from a resource bundle rather than hard coded text. This feature can be crucial for some applications.
8. Localization (l10n). The advantage of declarative programming (particularly with XML) is that you can just switch to a different GUI form for a specific locale and that's it. If you code with Java or any other imperative languages, it is not so easy.
9. Error check / tolerance. Initial designs often will contain errors here and there. Sometimes the error might be because the corresponding Java code hasn't been designed yet. Or an icon resource is missing. Dealing with errors with imperative coding is extremely tedious. Thus it is desirable to be able to locate the errors, yet at the same time being error tolerant, so the preview of the GUI layout can be made as early as possible.
10. GUI component replacement. That is, replace textfield which used to have JTextField with some fancier version of components. Replace the meaning of dialog with some fancy UI dialogs (such as JIDE's) instead of JDialog. This feature can save significant amount of efforts. XML itself is also useful due to XSLT and other transformation tools.
11. Beyond Swing: sooner or later you will find many component configurations use object types such as arrays, icons, images, vectors, etc. | suggestions for declarative GUI programming in Java | [
"",
"java",
"swing",
"user-interface",
"layout",
"declarative",
""
] |
How would you build your domain objects and create their respective NHibernate mapping files for a multi-language application? The UI part is stored in resource files, but user data needs to go into the database.
I want to do the following:
```
Product p = DALProduct.getByID(2)
p.name //results in the language of the current UICulture
```
I have found the following article which is really close:
<http://ayende.com/Blog/archive/2006/12/26/LocalizingNHibernateContextualParameters.aspx>
As I am new to NHibernate I am not sure if that will perfectly work for enterprise solutions.
Do you have other suggestions? How do you solve such a scenario?
It should be flexible enough to support:
* Inserting, Updating and selecting
* Collections | Ayende's post is a great start on how it should be designed.
It will perfectly work for enterprise solutions. The names in a separate table are like any other list of values. The special thing is that they are filtered in the mapping.
Edit - Options:
**Use another entity to edit the data**
There is a Product entity that has all the names as a list; LocalizedProduct has only the current language's name.
Get the filtered entity
* by either mapping it as described in the blog, with the filter.
* by selecting it with a result transformer (Transformers.AliasToBean) or with 'select new LocalizedProduct(id, name, prize ...)'. LocalizedProduct would not be mapped in this case. Should be second-level-cache-friendly.
If you have many references to Product, it is probably not so nice to have two classes, because you don't know which class the reference should have.
**Use the same entity for editing and displaying**
```
class Product
{
string LocalizedName
{
get { return AllProductNames[Thread.CurrentThread.CurrentCulture.LCID]; }
}
IDictionary<int, string> AllProductNames { get; private set; }
}
```
There are properties for a localized product name (get) *and* all product names.
* don't filter them at all :-) There is a bit of network overhead. If you have only 3 to 5 languages, it is not so bad. If you have 20 or more, it's probably better to filter the names.
* use an (optional) filter as described in the blog, but on the product names
* use an (optional) ResultTransformer (CriteriaUtil.AliasToEntityMap) to filter the names.
To be honest, I would choose an option that does not need filters in the mapping files. Filters belong to queries, where they are more maintainable. | [Here](http://blog.hibernate.org/1592.lace) is Gavin King's post, which seems to offer a different solution. | Globalization with NHibernate | [
"",
"c#",
"nhibernate",
"model",
"relational",
""
] |
Quite often in my [LINQ to SQL](http://en.wikipedia.org/wiki/Language_Integrated_Query#LINQ_to_SQL_.28formerly_called_DLINQ.29) code, I need to "find or create" an entity as such:
```
var invoiceDb = ctx.Invoices.FirstOrDefault(a => a.InvoicerId == InvoicerId &&
a.Number == invoiceNumber);
if (invoiceDb == null)
{
invoiceDb = new Invoice();
invoiceDb.Number = invoiceNumber;
ctx.Invoices.InsertOnSubmit(invoiceDb);
}
```
I'm looking to make this a generic method... Any good ideas? | I came up with these extension methods, which seem to work well for me.
```
public static T FindOrCreate<T>(this Table<T> table, Func<T, bool> find, Action<T> create) where T : class, new()
{
T val = table.FirstOrDefault(find);
if (val == null)
{
val = new T();
create(val);
table.InsertOnSubmit(val);
}
return val;
}
public static T FindOrCreate<T>(this Table<T> table, Func<T, bool> find) where T : class, new()
{
return FindOrCreate(table, find, a => { });
}
```
And it's used like so:
```
var invoiceDb = ctx.Invoices.FindOrCreate(a => a.InvoicerId == InvoicerId &&
a.Number == invoiceNumber);
invoiceDb.Number = invoiceNumber;
```
Or
```
var invoiceDb = ctx.Invoices.FindOrCreate(a => a.InvoicerId == InvoicerId &&
a.Number == invoiceNumber,
a => a.Number = invoiceNumber);
``` | VB.NET version:
```
Module dbi
<System.Runtime.CompilerServices.Extension()> _
Public Function FindOrCreate( _
Of T As {Class, New})(ByVal table As Data.Linq.Table(Of T), _
ByVal find As Func(Of T, Boolean), _
ByVal create As Action(Of T)) _
As T
Dim val As T = table.FirstOrDefault(find)
If val Is Nothing Then
val = New T()
create(val)
table.InsertOnSubmit(val)
End If
Return val
End Function
<System.Runtime.CompilerServices.Extension()> _
Public Function FindOrCreate( _
Of T As {Class, New})(ByVal table As Data.Linq.Table(Of T), _
ByVal find As Func(Of T, Boolean)) _
As T
        Return FindOrCreate(table, find, Function(a) a) ' relaxed delegate: no-op create action
End Function
End Module
``` | Generic way to find or create an object in LINQ to SQL? | [
"",
"c#",
"linq",
"linq-to-sql",
""
] |
Are there any Java code generation persistence frameworks?
Something that is "database first" rather than object model first?
I know this is probably heresy in the Java world, I'd just like to know if any frameworks like this exist.
I'm more of a .NET guy these days. So on the .NET side, tools like .NET Tiers or CSLA come to mind. | Sure, Hibernate and NetBeans, for example, can reverse engineer a database. You may want to look at:
* [Hibernate Tools](http://tools.hibernate.org/); site in maintenance
* NetBeans; with recent versions of NetBeans you can create JPA entities from a JDBC connection
My 2 cents. | The [Apache Cayenne](http://cayenne.apache.org/) ORM framework is able to reverse engineer a database. See [here](http://cayenne.apache.org/doc12/reverse-engineer-database.html). | Java, Code Generation, and Persistence Frameworks | [
"",
"java",
"database",
"persistence",
"orm",
""
] |
```
private ArrayList<String> colors = new ArrayList<String>();
```
Looking at the example above, it seems the main point of generics is to enforce type on a collection. So, instead of having an array of "Objects", which need to be cast to a String at the programmer's discretion, I enforce the type "String" on the collection in the ArrayList. This is new to me but I just want to check that I'm understanding it correctly. Is this interpretation correct? | Yeah, that's basically it. Before generics, one had to create an ArrayList of Objects. This meant that one could add *any* type of Object to the list - even if you only meant for the ArrayList to contain Strings.
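To make that concrete, here is a small illustrative sketch (class and variable names are made up) contrasting a raw list with a generic one:

```java
import java.util.ArrayList;
import java.util.List;

public class Main {
    // With generics, no cast is needed and no wrong type can get in.
    static String firstColor(List<String> colors) {
        return colors.get(0);
    }

    public static void main(String[] args) {
        // Pre-generics style: a raw list accepts anything.
        List raw = new ArrayList();
        raw.add("red");
        raw.add(Integer.valueOf(42)); // compiles fine, but plants a runtime bug
        boolean failedAtRuntime = false;
        try {
            String s = (String) raw.get(1); // ClassCastException here
        } catch (ClassCastException e) {
            failedAtRuntime = true;
        }
        System.out.println("raw list failed at runtime: " + failedAtRuntime);

        // Generic style: the same mistake would be a compile-time error.
        List<String> colors = new ArrayList<String>();
        colors.add("red");
        // colors.add(Integer.valueOf(42)); // would not compile
        System.out.println("first color: " + firstColor(colors));
    }
}
```

The raw version only blows up when the bad element is actually read; the generic version never lets it in at all.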
All generics do is add type safety. That is, the compiler will make sure that any object in the list is a String, and prevent you from adding a non-String object to the list. Even better: this check is done at compile time. | That's by far not the **only** use of generics, but it's definitely the most visible one.
Generics can be (and are) used in many different places to ensure static type safety, not just with collections.
I'd just like to mention that, because you'll come across places where generics could be useful, but if you're stuck with the generics/collections association, then you might overlook that fact. | Are Java generics mainly a way of forcing static type on elements of a collection? | [
"",
"java",
"generics",
""
] |
I'm looking at changing our Auditing process for our SQL Server 2005 databases and I came across Change Data Capture in SQL Server 2008.
This looks like a good idea, and I'm tempted to try it, but before I do has anyone used it in a commercial environment and what are your thoughts?
I noticed when I was reading about CDC in the MS help, it said that audit data would usually be kept for a couple of days. That's not possible here, I'd like to keep the data indefinitely, does anyone know of problems with this approach?
If this isn't a good solution for reasons I'm unaware of, have you any other solutions for auditing of data changes. I'm keen to use a system that can be implemented across the board on any tables I desire.
I'm after the basic: "Before, After, Who By, When" information for any changes. | CDC is just a means to an end, in my opinion. I have implemented audit trail solutions in the past and they have involved the use of triggers. This got to be very messy and performance-intensive for highly transactional databases.
What the CDC gives you is the ability to log the audit data without the use of triggers, but you still need a means to take that data into a permanent table. This can be done with a mirror table for each table to be audited or a single table that tracks all the changes to all the tables (I have done the latter).
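For reference, the trigger-based approach mentioned above boils down to something like this minimal T-SQL sketch (all table and column names here are made up):

```sql
-- Hypothetical: capture before/after, who and when for UPDATEs on one table
CREATE TRIGGER trg_Customer_Audit ON dbo.Customer
AFTER UPDATE
AS
    INSERT INTO dbo.AuditLog (TableName, KeyValue, OldName, NewName, ChangedBy, ChangedAt)
    SELECT 'Customer', d.CustomerID, d.Name, i.Name, SUSER_SNAME(), GETDATE()
    FROM deleted d
    JOIN inserted i ON i.CustomerID = d.CustomerID;
```

Multiply that by every audited table (or generate the triggers) and the maintenance cost described above becomes clear.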
**Here are some links with additional information on how it was done using triggers:**
[SQL Audit Trail](http://www.simple-talk.com/sql/database-administration/pop-rivetts-sql-server-faq-no.5-pop-on-the-audit-trail/)
[sql-server-history-table-populate-through-sp-or-trigger](https://stackoverflow.com/questions/349524/sql-server-history-table-populate-through-sp-or-trigger)
Here's an open source audit tracking solution that uses LINQ: [DoddleAudit](http://www.codeplex.com/DoddleAudit) | *Quite late but hopefully it will be useful for other readers…*
Below are several different techniques for auditing, with their pros and cons. There is no “right” solution that fits all; it depends on the requirements and the system being audited.
**Triggers**
* **Advantages**: relatively easy to implement, a lot of flexibility on what is audited and how is audit data stored because you have full control
* **Disadvantages**: It gets messy when you have a lot of tables and even more triggers. Maintenance can get heavy unless there is some third party tool to help. Also, depending on the database it can cause a performance impact.
**CDC**
* **Advantages**: Very easy to implement, natively supported
* **Disadvantages**: Only available in enterprise edition, not very robust – if you change the schema your data will be lost. I wouldn’t recommend this for keeping a long term audit trail
**Traces**
* **Advantages**: a lot of flexibility on what is being audited. Even select statements can be audited.
* **Disadvantages**: You would need to create a separate application in order to parse trace files and gather useful information from these.
**Reading transaction log**
* **Advantages**: all you need to do is to put the database in full recovery mode and all info will be stored in transaction log
* **Disadvantages**: You need a third party log reader in order to read this effectively
I’ve worked with several auditing tools from [ApexSQL](http://www.apexsql.com/) but there are also good tools from [Idera](http://www.idera.com/) (compliance manager) and [Krell](http://krell-software.com/) software (omni audit)
[ApexSQL Audit](http://www.apexsql.com/sql_tools_audit.aspx) – Trigger-based auditing tool. Generates and manages auditing triggers
[ApexSQL Log](http://www.apexsql.com/sql_tools_log.aspx) – Allows auditing by reading transaction log | Auditing SQL Server data changes | [
"",
"sql",
"sql-server-2005",
"sql-server-2008",
""
] |
If you have for example > 5 left joins in a query is that a code smell that there is ...
* something wrong with your design?
* you're doing too much in one query?
* your database is too normalized? | It's a perfectly legitimate solution for some designs.
Say you have a hierarchy of one-to-many relations like `Customer` - `Order` - `Basket` - `Item` - `Price`, etc., which can be unfilled on any level: a `Customer` may have no `Orders`, an `Order` can have no `Baskets`, etc.
In this case you issue something like:
```
SELECT *
FROM Customer c
LEFT OUTER JOIN
Order o
ON o.CustomerID = c.ID
LEFT OUTER JOIN
Basket b
ON      b.OrderID = o.ID
…
```
Note that it may be inefficient in some cases, and may be replaced with `EXISTS` or `NOT EXISTS` (if you only want to figure out that the corresponding records exist or do not exist in other tables).
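For instance, if you only need the customers that have no orders at all, the outer join can be replaced with a sketch like:

```sql
SELECT  c.*
FROM    Customer c
WHERE   NOT EXISTS
        (
        SELECT  *
        FROM    Order o
        WHERE   o.CustomerID = c.ID
        )
```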
See this article in my blog for performance details:
* [**Finding incomplete orders**](http://explainextended.com/2009/03/13/finding-incomplete-orders/) - how to benefit from replacing `LEFT JOIN`'s with `NOT EXISTS` | In the sense that it's something you could/should investigate I'd say yes. It's likely you can get better utility and maintenance by factoring some views out of that.
In the sense that it's "bad code", no; this could quite easily be reasonable, especially for larger DBs, and modern databases will likely optimise any inefficiencies out. | Is too many Left Joins a code smell? | [
"",
"sql",
"sql-server-2005",
"left-join",
""
] |
I'm working on a large C++ system built with ant+[cpptasks](http://ant-contrib.sourceforge.net/cpptasks/index.html). It works well enough, but the build.xml file is getting out of hand, due to standard operating procedure for adding a new library or executable target being to copy-and-paste another lib/exe's rules (which are already quite large). If this was "proper code", it'd be screaming out for refactoring, but being an ant newbie (more used to make or VisualStudio solutions) I'm not sure what the options are.
What are ant users' best practices for stopping ant build files exploding?
One obvious option would be to produce the build.xml via XSLT, defining our own tags for commonly recurring patterns. Does anyone do that, or are there better ways? | You may be interested in:
* [<import>](http://ant.apache.org/manual/Tasks/import.html)
* [<macrodef>](http://ant.apache.org/manual/Tasks/macrodef.html)
* [<subant>](http://ant.apache.org/manual/Tasks/subant.html)
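As a sketch of what `<macrodef>` buys you here (the macro name and directory layout below are invented, using the cpptasks `<cc>` task):

```xml
<macrodef name="build-lib">
  <attribute name="name"/>
  <attribute name="srcdir" default="src/@{name}"/>
  <sequential>
    <cc outtype="static" outfile="lib/@{name}" objdir="obj/@{name}">
      <fileset dir="@{srcdir}" includes="**/*.cpp"/>
    </cc>
  </sequential>
</macrodef>

<!-- adding a library is now one line instead of a copy-pasted rule block -->
<build-lib name="parser"/>
<build-lib name="network"/>
```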
Check also this article on "[ant features for big projects](http://web.archive.org/web/20060822050723/http://www.oracle.com/technology/pub/articles/bodewig_ant1.6.html)". | If the rules are repetitive then you can factor them into an ant macro using macrodef and reuse that macro.
If it is the sheer size of the file that is unmanageable, then you can perhaps break it into smaller files and have the main build.xml call targets within those files.
If it's neither of these, then you may want to consider using a build system. Even though I have not used Maven myself, I hear it can solve many issues of large and unmanageable build files. | How do you "refactor" ant build.xml files? | [
"",
"c++",
"configuration",
"ant",
"build-process",
""
] |
How can I set cookies in PHP server-side with Ajax and read cookies in the real-time with JavaScript?
**Example:**
After the `ok` button is pressed, the client side makes an Ajax call to the PHP server to collect data. While producing that data, the PHP server side also assigns a value to a cookie. At the same time, I want another function (called from the Ajax code when it starts calling the PHP server) to read the assigned cookie value in real time and display it on the client side.
I tried this many times, but it seems the function can only get the updated cookie value after the Ajax process is completed. | How would you be able to read a cookie on the client before it has arrived there?
1. you make an (ajax, or otherwise) request to the server
2. the server assembles a response, adding a cookie to it
3. the client receives the response
4. after the response is there, it can examine the cookie value
To me your question sounds like you are trying to read the cookie right in step 1. This won't be possible.
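In other words, read the cookie in the Ajax *completion* callback, not while the request is still in flight. A browser-side sketch (jQuery-style; the PHP URL is made up):

```javascript
// The Set-Cookie header only takes effect once the response has arrived,
// so document.cookie reflects the new value inside the callback (step 4).
$.get('getdata.php', function (data) {
    alert(document.cookie); // the value PHP set is visible here, not earlier
});
```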
If that's not what you are trying to do, then your question needs some re-wording. :) | Cookies exist only on the client side. They're included with each HTTP request, allowing the server to perform actions on them. Javascript can set these cookies for you if you need:
```
// http://www.quirksmode.org/js/cookies.html#script
function createCookie(name,value,days) {
if (days) {
var date = new Date();
date.setTime(date.getTime()+(days*24*60*60*1000));
var expires = "; expires="+date.toGMTString();
}
else var expires = "";
document.cookie = name+"="+value+expires+"; path=/";
}
``` | How can I set cookies in php server-side with Ajax and read cookies in the real time with Ajax or JavaScript? | [
"",
"javascript",
"ajax",
""
] |
Given a seed string, I want to find its neighbors that differ in at most 2 positions. Only four digits are involved in generating a string (i.e. 0, 1, 2, 3). This is an example of what I mean:
```
# In this example, 'first' column
# are neighbors with only 1 position differ.
# The rest of the columns are 2 positions differ
Seed = 000
100 110 120 130 101 102 103
200 210 220 230 201 202 203
300 310 320 330 301 302 303
010 011 012 013
020 021 022 023
030 031 032 033
001
002
003
Seed = 001
101 111 121 131 100 102 103
201 211 221 231 200 202 203
301 311 321 331 300 302 303
011 010 012 013
021 020 022 023
031 030 032 033
000
003
002
Hence given a tag of length L
we will have 3*L + 9L(L-1)/2 neighbors
```
But why does this code of mine fail to generate it correctly? Especially when the seed string is **other than** "000".
Other approaches are also welcome, especially with speed improvements, since
we will be processing millions of seed tags of length 34 to 36.
```
#include <iostream>
#include <vector>
#include <fstream>
#include <sstream>
using namespace std;
string ConvertInt2String(int IntVal) {
std::string S;
std::stringstream out;
out << IntVal;
S = out.str();
return S;
}
string Vec2Str (vector <int> NTg) {
string StTg = "";
for (unsigned i = 0; i < NTg.size(); i++) {
StTg += ConvertInt2String(NTg[i]);
}
return StTg;
}
template <typename T> void prn_vec(const std::vector < T >&arg, string sep="")
{
for (unsigned n = 0; n < arg.size(); n++) {
cout << arg[n] << sep;
}
return;
}
vector <int> neighbors(vector<int>& arg, int posNo, int baseNo) {
// pass base position and return neighbors
vector <int> transfVec;
transfVec = arg;
//modified according to strager's first post
transfVec[posNo % arg.size()] = baseNo;
return transfVec;
}
int main () {
vector <int> numTag;
numTag.push_back(0);
numTag.push_back(0);
numTag.push_back(1); // If "000" this code works, but not 001 or others
// Note that in actual practice numTag can be greater than 3
int TagLen = static_cast<int>(numTag.size());
for ( int p=0; p< TagLen ; p++ ) {
// First loop is to generate tags 1 position differ
for ( int b=1; b<=3 ; b++ ) {
int bval = b;
if (numTag[p] == b) {
bval = 0;
}
vector <int> nbnumTag = neighbors(numTag, p, bval);
string SnbnumTag = Vec2Str(nbnumTag);
cout << SnbnumTag;
cout << "\n";
// Second loop for tags in 2 position differ
for (int l=p+1; l < TagLen; l++) {
for (int c=1; c<=3; c++) {
int cval = c;
if (nbnumTag[l] == c) {
cval = c;
}
vector <int> nbnumTag2 = neighbors(nbnumTag, l, cval);
string SnbnumTag2 = Vec2Str(nbnumTag2);
cout << "\t" << SnbnumTag2;
cout << "\n";
}
}
}
}
return 0;
}
``` | Would this do it? It enumerates the tree of possible strings, pruning all with >2 differences from the original.
```
#include <stdio.h>

void walk(char* s, int i, int ndiff){
    char c = s[i];
    if (ndiff > 2) return;
    if (c == '\0'){
        if (ndiff > 0) puts(s);
    }
    else {
        s[i] = '0'; walk(s, i+1, (s[i]==c ? ndiff : ndiff+1));
        s[i] = '1'; walk(s, i+1, (s[i]==c ? ndiff : ndiff+1));
        s[i] = '2'; walk(s, i+1, (s[i]==c ? ndiff : ndiff+1));
        s[i] = '3'; walk(s, i+1, (s[i]==c ? ndiff : ndiff+1));
        s[i] = c;
    }
}

char seed[] = "000";

int main(void){
    walk(seed, 0, 0);
    return 0;
}
``` | Here's one way to do it that should work for any number of characters and length of string:
```
#define countof(a) (sizeof(a) / sizeof((a)[0])) // 'countof' is not standard; define it
string base = "000";
char values[] = {'0', '1', '2', '3' };
for (int i = 0; i < base.length(); ++i)
{
for (int j = 0; j < countof(values); ++j)
{
if (base[i] != values[j])
{
string copy = base;
copy[i] = values[j];
cout << copy << endl;
for (int k = i+1; k < base.length(); ++k)
{
for (int l = 0; l < countof(values); ++l)
{
if (copy[k] != values[l])
{
string copy2 = copy;
                        copy2[k] = values[l];
cout << copy2 << endl;
}
}
}
}
}
}
``` | Finding Strings Neighbors By Up To 2 Differing Positions | [
"",
"c++",
"algorithm",
"string",
"enumeration",
"combinatorics",
""
] |
How do I easily edit the style of the selected text in a JTextPane? There don't seem to be many resources on this. Even if you can just direct me to a good resource, I'd greatly appreciate it.
Also, how do I get the current style of the selected text? I tried `styledDoc.getLogicalStyle(textPane.getSelectionStart());` but it doesn't seem to be working. | Take a look at the following code in this pastebin:
<http://pbin.oogly.co.uk/listings/viewlistingdetail/d6fe483a52c52aa951ca15762ed3d3>
The example is from here:
<http://www.java2s.com/Code/Java/Swing-JFC/JTextPaneStylesExample3.htm>
It looks like you can change the style using the following in an action listener:
```
// sc is a StyleContext and doc the pane's StyledDocument in the linked example
final Style boldStyle = sc.addStyle("MainStyle", defaultStyle);
StyleConstants.setBold(boldStyle, true);
doc.setCharacterAttributes(0, 10, boldStyle, true);
```
It sets the style of the text between the given offset and length to a specific style.
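To apply this to the current selection (which is what the question asks about), the same call works with the selection offsets. A self-contained sketch (the component setup here is only for demonstration):

```java
import javax.swing.JTextPane;
import javax.swing.text.SimpleAttributeSet;
import javax.swing.text.StyleConstants;
import javax.swing.text.StyledDocument;

public class Main {
    public static void main(String[] args) throws Exception {
        JTextPane textPane = new JTextPane();
        textPane.setText("Hello World");
        textPane.select(0, 5); // pretend the user selected "Hello"

        int start = textPane.getSelectionStart();
        int length = textPane.getSelectionEnd() - start;

        SimpleAttributeSet bold = new SimpleAttributeSet();
        StyleConstants.setBold(bold, true);

        // Style only the selected range
        StyledDocument doc = textPane.getStyledDocument();
        doc.setCharacterAttributes(start, length, bold, false);

        // Reading the style back at the selection start:
        boolean isBold = StyleConstants.isBold(
                doc.getCharacterElement(start).getAttributes());
        System.out.println("selection is bold: " + isBold);
    }
}
```

`getCharacterElement(pos).getAttributes()` is also the usual way to inspect the current style of the selection, rather than `getLogicalStyle`.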
See the full pastebin for more details. That should fix your problem though. | Here's a code snippet to insert a formatted "Hello World!" string in a `JEditorPane`:
```
Document doc = yourEditorPane.getDocument();
StyleContext sc = new StyleContext();
Style style = sc.addStyle("yourStyle", null);
Font font = new Font("Arial", Font.BOLD, 18);
StyleConstants.setForeground(style, Color.RED);
StyleConstants.setFontFamily(style, font.getFamily());
StyleConstants.setBold(style, true);
doc.insertString(doc.getLength(), "Hello World!", style);
``` | How do I easily edit the style of the selected text in a JTextPane? | [
"",
"java",
"swing",
"jtextpane",
""
] |
I'm creating an Access DB for use in a C# application at school. I've not had much experience working with DBs, so if this sounds stupid just ignore it. I want the user to be able to select **all** the classes that a certain student has had in our IT department. We have about 30 in all and the maximum that a person can take in 4 years of high school is 15. Right now my DB has 15 different columns for each class that a user could have. How can I compress this down to one column (if there is a way)? | Excellent question Lucas, and this delves into the act of database normalization.
The fact that you recognized why having multiple columns to represent classes is bad already shows that you have great potential.
What if we wanted to add a new class? Now we have to add a whole new column. There is little flexibility for this.
So what can be done?
We create THREE tables.
One table is for students:
```
Student
|-------------------------|
| StudentID | Student_Name|
|-------------------------|
| 1 | John |
| 2 | Sally |
| 3 | Stan |
---------------------------
```
One table is for Classes:
```
Class
------------------------
| ClassID | Class_Name|
------------------------
| 1 | Math |
| 2 | Physics |
------------------------
```
And finally, one table holds the relationship between Students and Classes:
```
Student_Class
-----------------------
| StudentID | ClassID |
-----------------------
```
If we wanted to enroll John into Physics, we would insert a row into the Student\_Class table.
```
INSERT INTO Student_Class (StudentID, ClassID) VALUES (1, 2);
```
Now, we have a record saying that Student #1 (John) is attending Class #2 (Physics). Let's make Sally attend Math, and Stan attend Physics and Math.
```
INSERT INTO Student_Class (StudentID, ClassID) VALUES (2, 1);
INSERT INTO Student_Class (StudentID, ClassID) VALUES (3, 1);
INSERT INTO Student_Class (StudentID, ClassID) VALUES (3, 2);
```
To pull that data back in a readable fashion, we join the three tables together:
```
SELECT Student.Student_Name,
Class.Class_Name
FROM Student,
Class,
Student_Class
WHERE Student.StudentID = Student_Class.StudentID
AND Class.ClassID = Student_Class.ClassID;
```
This would give us a result set like this:
```
------------------------------
| Student_Name | Class_Name |
------------------------------
| John | Physics |
| Sally | Math |
| Stan | Physics |
| Stan | Math |
------------------------------
```
And that is how database normalization works in a nutshell. | It sounds like you need to think about [normalizing](http://en.wikipedia.org/wiki/Database_normalization) your database schema.
There is a [many-to-many relationship](http://www.databasedev.co.uk/many_to_many.html) between students and classes such that many students can take many classes and many classes can be taken by many students. The most common approach to handling this scenario is to use a [junction table](http://en.wikipedia.org/wiki/Junction_table).
Something like this
```
Student Table
-------------
id
first_name
last_name
dob
Class Table
-----------
id
class_name
academic_year
Student_Class Table
-------------------
student_id
class_id
year_taken
```
Then your queries would join on the tables, for example,
```
SELECT
s.last_name + ', ' + s.first_name AS student_name,
c.class_name,
sc.year_taken
FROM
student s
INNER JOIN
student_class sc
ON
s.id = sc.student_id
INNER JOIN
class c
ON
sc.class_id = class.id
ORDER BY
s.last_name, sc.year_taken
```
One word of advice that I would mention is that Access requires you to use parentheses when joining more than one table in a query; I believe this is because it requires you to specify an order in which to join them. Personally, I find this awkward, particularly when I am used to writing a lot of SQL without designers. Within Access, I would recommend using the designer to join tables, then modifying the generated SQL for your purposes. | How to get rid of multiple columns in a database? | [
"",
"c#",
"database",
"database-design",
"oledb",
""
] |
So I have two forms, mainform and extraform.
extraform is set to always move to the right of mainform when mainform initializes.
Sometimes mainform takes up both monitors and extraform is pushed off the screen never to be seen again. I would like to prevent this if possible. How can I do so? It must support dual monitors, that may or may not have distance between them (i.e. screen 1 is 20px to the left of screen 2).
How can I do this? | You can use the Screen class to work out where your window is relative to the desktop. The Screen class has a FromRectangle method, so you can figure out which screen you should be positioning your Form on (by passing your form's Bounds property in).
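A sketch of that idea (the form and variable names here are assumed, not from the question):

```csharp
// Clamp extraForm so it stays inside the working area of whichever
// screen is nearest to where it was about to be placed.
Rectangle bounds = extraForm.Bounds;
Screen target = Screen.FromRectangle(bounds);
Rectangle area = target.WorkingArea;

int x = Math.Max(area.Left, Math.Min(bounds.X, area.Right - bounds.Width));
int y = Math.Max(area.Top, Math.Min(bounds.Y, area.Bottom - bounds.Height));
extraForm.Location = new Point(x, y);
```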
Each Screen object has a Bounds property, which you can use to compare to the location and size of your window, and adjust them accordingly. | It depends on what you want to happen when extraform is pushed beyond the bounds of the screen(s).
However, to find out whether or not it's being pushed off, it's quite simple using the System.Windows.Forms.Screen class. Then you can do bounds checking like so:
```
foreach (var screen in Screen.AllScreens)
{
if(screen.Bounds.Contains(this.Bounds))
{
Console.WriteLine("Device "+screen.DeviceName+" contains form!");
}
}
```
Code assumes being in a form. Note that this code only prints that a screen contains the form if the form is completely contained on the screen. But this should be rather simple to fix, depending on your needs. | How can I tell if my form is displayed beyond the screen? | [
"",
"c#",
"winforms",
".net-3.5",
"screen-positioning",
""
] |
I have a requirement to build a new rule engine. I have rules like this one, stored in an Excel sheet in tabular format:
```
<If> "Name = Nita "
<Value> "200"
<else> "Name = Gita"
<value> "300"
<LookInto> "/Name/@Income"
```
I have two files, say 1 and 2. I need to check in the first file whether Name is Nita or Gita; based on that evaluation I will get a value. I then need to compare this result value against the second file, which is an XML file whose path has been defined in `<LookInto>`.
Can anybody suggest anything in C# that I can use to develop this effectively, or how I can achieve this with C#? I need some ideas for the class design.
I am using .Net 1.1. | Firstly, you need to create classes that can define your rules. Once you have this structure in place, you will be able to build an engine that utilises those classes.
I would look at creating something like:
```
class RuleEngine
{
public RuleMatch[] RuleMatches { get; set; }
public void RunEngine(inputdata...)
{
// do processing in here
}
}
class RuleMatch
{
public Rule[] Rules { get; set; }
public Object ValueIfMatched { get; set; }
}
class Rule
{
public String FieldName { get; set; }
public MatchType Match { get; set; }
public Object Value { get; set; }
}
enum MatchType
{
Equal = 1,
NotEqual = 2,
GreaterThan = 4,
LessThan = 8,
Like = 16
}
```
Then go on from there....
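To illustrate the kind of processing `RunEngine` might do, here is a sketch of evaluating one `Rule` against a field value (illustrative only, and subject to the C# 3.0 caveat at the end):

```csharp
// Sketch: evaluate a single Rule against the actual field value.
static bool Evaluate(Rule rule, IComparable actual)
{
    int cmp = actual.CompareTo(rule.Value);
    switch (rule.Match)
    {
        case MatchType.Equal:       return cmp == 0;
        case MatchType.NotEqual:    return cmp != 0;
        case MatchType.GreaterThan: return cmp > 0;
        case MatchType.LessThan:    return cmp < 0;
        default:                    return false; // Like would need string handling
    }
}
```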
This structure would be better if it allowed a group of rules to be added to another group of rules, for example (a AND b) OR (c AND d). I'll leave this to you to think about for now.
---
Please note I'm using some C# 3.0 constructs here, you will need to create full private properties in 1.1. | You could create a DSL with boo, but I don't know if that will work with 1.1 | how can i build rule engine which can execute user defined if else rules in C#.net | [
"",
"c#",
".net",
"rule-engine",
""
] |
**Update:** Here's a [similar question](https://stackoverflow.com/questions/419019/split-list-into-sublists-with-linq)
---
Suppose I have a `DataTable` with a few thousand `DataRows` in it.
I'd like to break up the table into chunks of smaller rows for processing.
I thought C#3's improved ability to work with data might help.
This is the skeleton I have so far:
```
DataTable Table = GetTonsOfData();
// Chunks should be any IEnumerable<Chunk> type
var Chunks = ChunkifyTableIntoSmallerChunksSomehow; // ** help here! **
foreach(var Chunk in Chunks)
{
// Chunk should be any IEnumerable<DataRow> type
ProcessChunk(Chunk);
}
```
Any suggestions on what should replace `ChunkifyTableIntoSmallerChunksSomehow`?
I'm really interested in how someone would do this with access to C#3 tools. If attempting to apply these tools is inappropriate, please explain!
---
Update 3 (revised chunking as I really want tables, not ienumerables; going with an extension method--thanks Jacob):
Final implementation:
Extension method to handle the chunking:
```
public static class HarenExtensions
{
public static IEnumerable<DataTable> Chunkify(this DataTable table, int chunkSize)
{
for (int i = 0; i < table.Rows.Count; i += chunkSize)
{
DataTable Chunk = table.Clone();
foreach (DataRow Row in table.Select().Skip(i).Take(chunkSize))
{
Chunk.ImportRow(Row);
}
yield return Chunk;
}
}
}
```
Example consumer of that extension method, with sample output from an ad hoc test:
```
class Program
{
static void Main(string[] args)
{
DataTable Table = GetTonsOfData();
foreach (DataTable Chunk in Table.Chunkify(100))
{
Console.WriteLine("{0} - {1}", Chunk.Rows[0][0], Chunk.Rows[Chunk.Rows.Count - 1][0]);
}
Console.ReadLine();
}
static DataTable GetTonsOfData()
{
DataTable Table = new DataTable();
Table.Columns.Add(new DataColumn());
for (int i = 0; i < 1000; i++)
{
DataRow Row = Table.NewRow();
Row[0] = i;
Table.Rows.Add(Row);
}
return Table;
}
}
``` | This seems like an ideal use-case for Linq's Skip and Take methods, depending on what you want to achieve with the chunking. This is completely untested, never entered in an IDE code, but your method might look something like this.
```
private List<List<DataRow>> ChunkifyTable(DataTable table, int chunkSize)
{
List<List<DataRow>> chunks = new List<List<DataRow>>();
for (int i = 0; i < table.Rows.Count; i += chunkSize)
{
chunks.Add(table.Rows.Cast<DataRow>().Skip(i).Take(chunkSize).ToList());
}
return chunks;
}
``` | This is quite readable and only iterates through the sequence once, perhaps saving you the rather bad performance characteristics of repeated redundant `Skip()` / `Take()` calls:
```
public IEnumerable<IEnumerable<DataRow>> Chunkify(DataTable table, int size)
{
List<DataRow> chunk = new List<DataRow>(size);
foreach (DataRow row in table.Rows)
{
chunk.Add(row);
if (chunk.Count == size)
{
yield return chunk;
chunk = new List<DataRow>(size);
}
}
if(chunk.Any()) yield return chunk;
}
``` | What's a clean way to break up a DataTable into chunks of a fixed size with Linq? | [
"",
"c#",
"linq",
"c#-3.0",
"ienumerable",
""
] |
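Both answers above express the same two ideas: fixed-size chunks via Skip/Take, or a single pass with a running buffer. The single-pass version is sketched here in Python for illustration; the function name is mine, not from either answer:

```python
def chunkify(rows, size):
    """Yield lists of at most `size` items, walking `rows` exactly once.

    The final chunk may be shorter than `size`, matching the behaviour
    of the single-pass C# version above.
    """
    chunk = []
    for row in rows:
        chunk.append(row)
        if len(chunk) == size:
            yield chunk      # emit a full chunk
            chunk = []
    if chunk:                # emit the final, possibly short, chunk
        yield chunk
```

Because it is a generator, each chunk can be processed and discarded before the next one is built.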
I need to perform some date operations in ASP.net using C#.
1. The date I would enter should be of the format 'Jul-05' (MMM-yy format, type **string**).
How can I check for this?
Or how can I validate whatever the user is entering as a string?
2. After validating that, I need to compare it with a value in the database (say a column named *buy\_period* which has a value of (say) 04/31/2007).
How can I write a query for comparing both? (as both dates would be of different formats)
Can you please help me with this? | Use the `TryParseExact` method to validate the string and parse it to a `DateTime` value:
```
DateTime month;
if (DateTime.TryParseExact(input, "MMM-yy", CultureInfo.InvariantCulture, DateTimeStyles.None, out month)) {
// parsing was successful
}
```
The `DateTime` value will use the first day of month and the time 0:00 to fill up a complete value, so a string like `"jul-05"` will be parsed into a complete `DateTime` value like `2005-07-01 00:00:00.0000`, so it will be the starting point of that month.
To compare this to a date in the database you also need the starting point of the next month, which you get with:
```
DateTime nextMonth = month.AddMonths(1);
```
Now you can just compare a date to the starting and ending point of the month in this manner:
```
where date >= @Month and date < @NextMonth
``` | ```
DateTime myDateTime = DateTime.ParseExact(input, "MMM-yy", CultureInfo.InvariantCulture);
```
You can then happily pass it to a stored procedure (etc.) as a parameter to do your comparison on the server (or just use the DateTime returned as the result of an existing query) | String to mmm-yy format of time in C# | [
"",
"c#",
"datetime",
""
] |
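The accepted answer's approach above, parse 'MMM-yy', add one month, then compare with `>= start` and `< next`, is language-agnostic. The same idea sketched in Python; the function names are illustrative:

```python
from datetime import datetime

def month_range(text):
    """Parse a 'Jul-05'-style string into the half-open [start, end)
    range covering that whole month. Raises ValueError if invalid."""
    start = datetime.strptime(text, "%b-%y")
    # First day of the following month, handling the December rollover.
    if start.month == 12:
        end = start.replace(year=start.year + 1, month=1)
    else:
        end = start.replace(month=start.month + 1)
    return start, end

def in_month(date, text):
    """True if `date` falls inside the month named by `text`."""
    start, end = month_range(text)
    return start <= date < end
```

The half-open comparison avoids having to compute "the last moment of the month", exactly as in the SQL `where date >= @Month and date < @NextMonth` shown above.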
I'm using the Zend framework and the openid selector from <http://code.google.com/p/openid-selector/> - however, I find I can't log in using sites like Google and Yahoo, as they use a direct-identity-based login system whereby one is just redirected to a URL, as opposed to entering a unique URL of their own for authentication.
I've checked out many options and hacks but none of them seem to work. How can I get this to work here? By the way, how is it implemented at Stack Overflow? I could really use all the help here, guys.
---
Edit
Well, the issue here is that, from what I have noticed, the Zend OpenID class doesn't support OpenID 2.0. A typical OpenID provider gives you a unique URL such as your-name.openid-provider.com or openid-provider.com/your-name, and the Zend OpenID class just parses that URL and then redirects you to the provider's website, where upon authentication you are redirected back.
In the case of Yahoo and Google, you don't enter a unique URL; instead you are redirected to the provider's login site, and upon login and authentication you are redirected back. So basically what's happening is that when the Zend_OpenId object parses the URL to tell who the provider is, it fails to tell from the general URL itself. For example, when you click on the Google link it redirects you to <https://www.google.com/accounts/o8/id>
It's more an issue with the Zend OpenID object here, and there isn't any help on Zend-related forums, so I was wondering if someone had already hacked it or had an alteration I could make to the class to accomplish this. Sorry if I'm missing something, but I'm kind of new to this and to programming with OpenID, and have just started to get my feet wet.
---
Thanks for the follow-up - I did check into RPX a while back and they do have a PHP class, but I wasn't able to check it out. Plus, for now I really just want to get the code selector used on Stack Overflow to work with Yahoo and Google authentication. There has to be some kind of way to tweak the parsing which the Zend OpenID class uses, as it runs a series of regular expression checks to make a discovery. | Little late to the game but I was able to get this working with some hacks I found around the interwebs.
First, Yahoo. To get Yahoo working all I had to do was change the JavaScript to use **me.yahoo.com** instead of just **yahoo.com**, and it worked perfectly with the version of the Zend Framework I'm using. Unfortunately Google still wasn't working, so some hacking was in order.
All of these changes go in `Zend/OpenId/Consumer.php`
First, in the `_discovery` method, add the following to the series of preg\_match checks that starts at around line 740.
```
} else if (preg_match('/<URI>([^<]+)<\/URI>/i', $response, $r)) {
$version = 2.0;
$server = $r[1];
```
I added this right before the `return false;` statement that's in the else {} block.
Second, in the `_checkId` method you'll need to add 3 new blocks (I haven't dug around enough to know what causes each of these three cases to be called, so I covered all of them to be on the safe side).
Inside the $version <= 2.0 block, you'll find an if/else if/else block. In the first if statement `($this->_session !== null)` add this to the end:
```
if ($server == 'https://www.google.com/accounts/o8/ud') {
$this->_session->identity = 'http://specs.openid.net/auth/2.0/identifier_select';
$this->_session->claimed_id = 'http://specs.openid.net/auth/2.0/identifier_select';
}
```
In the else if (defined('SID')) block add this to the end:
```
if ($server == 'https://www.google.com/accounts/o8/ud') {
$_SESSION['zend_openid']['identity'] = 'http://specs.openid.net/auth/2.0/identifier_select';
$_SESSION['zend_openid']['claimed_id'] = 'http://specs.openid.net/auth/2.0/identifier_select';
}
```
And then after the else block (so outside the if/else if/else block all together, but still inside the $version <= 2.0 block) add this:
```
if ($server == 'https://www.google.com/accounts/o8/ud') {
$params['openid.identity'] = 'http://specs.openid.net/auth/2.0/identifier_select';
$params['openid.claimed_id'] = 'http://specs.openid.net/auth/2.0/identifier_select';
}
```
[Link to the bug in Zend Framework Issue Tracker](http://framework.zend.com/issues/browse/ZF-6905) | I need to use Google's OpenID stuff, and I tried Steven's code and couldn't get it to work as-is. I've made some modifications.
The \_discovery change method is still the same:
Zend/OpenId/Consumer.php, line 765, add:
```
} else if (preg_match('/<URI>([^<]+)<\/URI>/i', $response, $r)) {
$version = 2.0;
$server = $r[1];
```
The rest is different, though:
Zend/OpenId/Consumer.php, line 859 (after making the above change), add:
```
if (stristr($server, 'https://www.google.com/') !== false) {
$id = 'http://specs.openid.net/auth/2.0/identifier_select';
$claimedId = 'http://specs.openid.net/auth/2.0/identifier_select';
}
```
This is right before:
```
$params['openid.identity'] = $id;
$params['openid.claimed_id'] = $claimedId;
```
And to get it to return the ID, once authorized:
Zend/Auth/Adapter/OpenId.php, line 278:
```
if(isset($_REQUEST['openid_identity']))
{
$this->_id = $_REQUEST['openid_identity'];
$id = $this->_id;
}
```
This is right before:
```
return new Zend_Auth_Result(
Zend_Auth_Result::SUCCESS,
$id,
array("Authentication successful"));
```
Note that I have not thoroughly tested this code. The code below is even more shaky.
I have spent more time and I've gotten it to work with my Google Apps domain with the following changes, in addition to the above:
Zend/OpenId/Consumer.php, line 734
```
$discovery_url = $id;
if(strpos($discovery_url, '/', strpos($discovery_url, '//')+2) !== false) {
$discovery_url = substr($discovery_url, 0, strpos($discovery_url, '/', strpos($discovery_url, '//')+2));
}
$discovery_url .= '/.well-known/host-meta';
$response = $this->_httpRequest($discovery_url, 'GET', array(), $status);
if ($status === 200 && is_string($response)) {
if (preg_match('/Link: <([^><]+)>/i', $response, $r)) {
$id = $r[1];
}
}
```
This is right after:
```
/* TODO: OpenID 2.0 (7.3) XRI and Yadis discovery */
```
I believe that was the only change I had to make. I'm pretty sure there's supposed to be some checking involved with the above for security reasons, but I haven't looked far enough into it to see what they would be. | How do I implement Direct Identity based OpenID authentication with Zend OpenID | [
"",
"php",
"zend-framework",
"authentication",
"openid",
""
] |
So yeah, I'm working on a Windows system, and while this works locally, I know it will break on other people's servers. What's a cross-platform way to do the same as this?
```
function fetch($get,$put){
file_put_contents($put,file_get_contents($get));
}
``` | Here would be the solution using simple file operations:
```
<?php
$file = "http://www.domain.com/thisisthefileiwant.zip";
$hostfile = fopen($file, 'r');
$fh = fopen("thisisthenameofthefileiwantafterdownloading.zip", 'w');
while (!feof($hostfile)) {
$output = fread($hostfile, 8192);
fwrite($fh, $output);
}
fclose($hostfile);
fclose($fh);
?>
```
Ensure your directory has write permissions enabled. (CHMOD)
Therefore, a replacement for your fetch($get, $put) would be:
```
function fetch($get, $put) {
$hostfile = fopen($get, 'r');
$fh = fopen($put, 'w');
while (!feof($hostfile)) {
$output = fread($hostfile, 8192);
fwrite($fh, $output);
}
fclose($hostfile);
fclose($fh);
}
```
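The same read-a-block/write-a-block loop, sketched in Python for comparison. The helper name and the 8192-byte block size mirror the PHP above; the scheme check for URLs is my own assumption, not part of the answer:

```python
def fetch(get, put, block_size=8192):
    """Copy `get` (a local path or an http(s) URL) to the local file `put`
    in fixed-size blocks, so the payload never sits in memory at once."""
    from urllib.request import urlopen
    if get.startswith(("http://", "https://")):
        src = urlopen(get)       # remote source
    else:
        src = open(get, "rb")    # local source
    with src, open(put, "wb") as dst:
        while True:
            block = src.read(block_size)
            if not block:        # end of stream
                break
            dst.write(block)
```

As with the PHP version, the destination directory needs write permission.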
Hope it helped! =)
---
Cheers,
KrX | I don't see why that would fail unless the other computer is on PHP4. What you would need to do to make that **backwards** compatible is add functionality to provide replacements for file\_get\_contents & file\_put\_contents:
```
if(version_compare(phpversion(),'5','<')) {
function file_get_contents($file) {
// mimick functionality here
}
function file_put_contents($file,$data) {
// mimick functionality here
}
}
``` | Whats a better way to write this function. Its gets a remote file and copys it localy, in php | [
"",
"php",
"windows",
"linux",
"http",
"winapi",
""
] |
I hope that in this post, I can get people's opinions on best practices for the interface between JSF pages and backing beans.
One thing that I never can settle on is the structure of my backing beans. Furthermore, I have never found a good article on the subject.
What properties belong on which backing beans? When is it appropriate to add more properties to a given bean as opposed to creating a new bean and adding the properties onto it? For simple applications, does it make sense to just have a single backing bean for the whole page, considering the complexity involved with injecting one bean into another? Should the backing bean contain any actual business logic, or should it strictly contain data?
Feel free to answer these questions and any others that may come up.
---
As for reducing coupling between the JSF page and the backing bean, I never allow the JSF page to access any backing bean property's properties. For example, I never allow something such as:
```
<h:outputText value="#{myBean.anObject.anObjectProperty}" />
```
I always require something like:
```
<h:outputText value="#{myBean.theObjectProperty}" />
```
with a backing bean value of:
```
public String getTheObjectProperty()
{
return anObject.getAnObjectProperty();
}
```
When I loop over a collection, I use a wrapper class to avoid drilling down into an object in a data table, for instance.
In general, this approach feels "right" to me. It avoids any coupling between the view and the data. Please correct me if I'm wrong. | You might want to check this out: [making distinctions between different kinds of JSF managed beans](http://java.dzone.com/articles/making-distinctions-between).
Here is a description of the different bean types, as defined in the above article by Neil Griffin:
* **Model Managed-Bean**: *Normally session scope.* This type of managed-bean participates in the "Model" concern of the MVC design pattern. When you see the word "model" -- think DATA. A JSF model-bean should be a POJO that follows the JavaBean design pattern with getters/setters encapsulating properties. The most common use case for a model bean is to be a database entity, or to simply represent a set of rows from the result set of a database query.
* **Backing Managed-Bean**: *Normally request scope.* This type of managed-bean participates in the "View" concern of the MVC design pattern. The purpose of a backing-bean is to support UI logic, and has a 1::1 relationship with a JSF view, or a JSF form in a Facelet composition. Although it typically has JavaBean-style properties with associated getters/setters, these are properties of the View -- not of the underlying application data model. JSF backing-beans may also have JSF actionListener and valueChangeListener methods.
* **Controller Managed-Bean**: *Normally request scope.* This type of managed-bean participates in the "Controller" concern of the MVC design pattern. The purpose of a controller bean is to execute some kind of business logic and return a navigation outcome to the JSF navigation-handler. JSF controller-beans typically have JSF action methods (and not actionListener methods).
* **Support Managed-Bean**: *Normally session or application scope.* This type of bean "supports" one or more views in the "View" concern of the MVC design pattern. The typical use case is supplying an ArrayList to JSF h:selectOneMenu drop-down lists that appear in more than one JSF view. If the data in the dropdown lists is particular to the user, then the bean would be kept in session scope. However, if the data applies to all users (such as a dropdown lists of provinces), then the bean would be kept in application scope, so that it can be cached for all users.
* **Utility Managed-Bean**: *Normally application scope.* This type of bean provides some type of "utility" function to one or more JSF views. A good example of this might be a FileUpload bean that can be reused in multiple web applications. | Great question. I suffered a lot with the same dilemma when I moved to JSF. It really depends on your application. I'm from the Java EE world so I would recommend to have as little business logic in your backing beans as possible. If the logic is purely related to the presentation of your page, then it is fine to have it in the backing bean.
I believe one of the (many) strengths of JSF is actually the fact that you can expose domain objects directly on the managed beans. I would therefore strongly recommend the `<:outputText value="#{myBean.anObject.anObjectProperty}" />` approach, otherwise you end up making too much work for yourself in manually exposing each property. Furthermore it would be a bit of a mess when inserting or updating data if you encapsulated all the properties. There are situations where a single domain object may not be sufficient. In those cases I prepare a *ValueObject* before exposing it on the bean.
*EDIT: Actually, if you are going to encapsulate every object property that you want to expose, I would recommend that you instead bind UI components to the backing bean and then inject the content directly into the value of the component.*
In terms of bean structure the turning point for me was when I forcefully ignored all the stuff I knew about building web applications and started treating it as a GUI application instead. JSF mimics Swing a lot and therefore the best practices for developing Swing applications would mostly also apply to building JSF applications. | JSF backing bean structure (best practices) | [
"",
"java",
"jsf",
""
] |
This might be a bit of an anti-pattern, but is it possible for a property on a C# class to accept multiple value types?
For example, say I have a public int property and I always want it to return an int, but I would like to be able to have the property set by assigning a decimal, an integer or some other data type. So my question is: is it possible for properties to accept multiple types? | I think what you mean to ask is: **How does [implicit](http://msdn.microsoft.com/en-us/library/z5z9kes2.aspx) and [explicit](http://msdn.microsoft.com/en-us/library/xhbhezf4.aspx) casting work for `int` and `decimal`?**
You are asking about implicit casting, which automatically converts one object to another defined type. You will not be able to do this for an `int` and `decimal` because they are already defined in the framework and you are not able to reduce the scope of the `decimal` by casting it to an `int`. But if you were using that as an example for actual objects that you created, you can use the [implicit](http://msdn.microsoft.com/en-us/library/z5z9kes2.aspx) link above to learn more about how this works and how to implement it.
But you can always use the convert method to convert them to the right type;
```
public int MyProperty { get; set; }
...
obj.MyProperty = Convert.ToInt32(32.0M);
obj.MyProperty = Convert.ToInt32(40.222D);
obj.MyProperty = Convert.ToInt32("42");
``` | **Edit:** This method can't be used since the OP is specifically bound to properties.
I do not believe this is possible with the robustness that you describe. In this case you would likely be better off using an **overloaded method** (polymorphism).
This is what is typically known as a *setter* (or mutator) and you can overload the method to accept multiple different types of parameters. Each will perform differently if you wish. The way I have them set up might not be syntactically correct but that is the general idea you're looking for I believe.
```
public class MyClass {
private Int32 mySomeValue;
public void setSomeValue(Double value) { this.mySomeValue = Convert.ToInt32(value); }
public void setSomeValue(Int32 value) { this.mySomeValue = value; }
}
``` | Can a C# property accept multiple values? | [
"",
"c#",
".net",
"properties",
"anti-patterns",
""
] |
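The accepted answer's `Convert.ToInt32` pattern, one storage type with conversion at the boundary, can also live inside the property itself. A sketch of that idea in Python; the class and property names are illustrative assumptions:

```python
class Order:
    """A property whose setter accepts an int, a float, or a numeric
    string, but always stores and returns an int."""

    def __init__(self):
        self._quantity = 0

    @property
    def quantity(self):
        return self._quantity

    @quantity.setter
    def quantity(self, value):
        # Roughly mirrors Convert.ToInt32: strings are parsed first,
        # then any numeric value is rounded to an integer.
        if isinstance(value, str):
            value = float(value)
        self._quantity = int(round(value))
```

Callers can then assign `"42"`, `40.2`, or `7` and always read back an int, which is the same trade-off as converting at each call site, just centralised.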
If I have an array of image filenames,
```
var preload = ["a.gif", "b.gif", "c.gif"];
```
and I want to preload them in a loop, is it necessary to create an image object each time? Will all the methods listed below work? Is one better?
A.
```
var image = new Image();
for (i = 0; i < preload.length; i++) {
image.src = preload[i];
}
```
B.
```
var image;
for (i = 0; i < preload.length; i++) {
image = new Image();
image.src = preload[i];
}
```
C.
```
var images = [];
for (i = 0; i < preload.length; i++) {
images[i] = new Image();
images[i].src = preload[i];
}
```
Thanks! | **EDIT**:
Actually, I just put it to the test, and Method A does not work as intended:
Check it out: <http://www.rootspot.com/stackoverflow/preload.php>
If you click on the 2nd image when the page is finished loading, it should appear instantaneously because it was preloaded, but the first one doesn't because it didn't have time to load before the source was changed. Interesting. With this new development, I'd just go ahead and use Method C. | The following code seems to work for me. Its based on [A]
JQuery:
```
var gallery= ['img/1.png','img/2.png','img/3.png','img/4.png','img/5.png'];
var preload_image_object=new Image();
```
//Solution:
```
$.each(gallery,function(i,c){preload_image_object.src=c})
```
OR
```
$.each(['img/1.png','img/2.png','img/3.png','img/4.png','img/5.png'],function(i,c){preload_image_object.src=c})
``` | What is the best way to preload multiple images in JavaScript? | [
"",
"javascript",
""
] |