| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
On a question of just performance, how does Python 3 compare to Python 2.x?
|
3.0 is slower than 2.5 on official benchmarks. From ["What’s New in Python 3.0"](http://docs.python.org/3.0/whatsnew/3.0.html#performance):
> The net result of the 3.0
> generalizations is that Python 3.0
> runs the pystone benchmark around 10%
> slower than Python 2.5. Most likely
> the biggest cause is the removal of
> special-casing for small integers.
> There’s room for improvement, but it
> will happen after 3.0 is released!
|
I'd say any difference will be trivial at most. For example, looping over a list will be exactly the same.
The idea behind Python 3 is to clean up the language syntax itself - removing ambiguous constructs like `except Exception1, Exception2` and tidying up the standard modules (no more urllib, urllib2, httplib, etc.).
There really isn't much you can do to improve its performance. I imagine things like the garbage collection and memory management code will have had some tweaks, but it's not going to be a "wow, my database statistic generation code completes in half the time!" improvement - that's something you get by improving the code, rather than the language!
Really, the performance of the language is irrelevant - all interpreted languages function at basically the same speed.
Why I find Python "faster" is all the built-in modules, and the nice-to-write syntax - something that *has* been improved in Python 3, so I guess in those terms, yes, Python 3's performance is better than Python 2.x.
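If you want to check claims like this yourself, `timeit` makes a quick comparison easy. A minimal sketch (absolute numbers depend entirely on the interpreter and machine; only the ratio between two interpreters is interesting):

```
import timeit

# Time a simple list-summing loop; run the same script on 2.x and 3.x
# and compare the printed figures.
setup = "data = list(range(1000))"
stmt = "total = 0\nfor x in data:\n    total += x"

seconds = timeit.timeit(stmt, setup=setup, number=1000)
print("%.4f seconds for 1000 iterations" % seconds)
```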
|
Performance: Python 3.x vs Python 2.x
|
[
"python",
"performance",
"python-3.x",
"python-2.x"
] |
According to the [feedparser documentation](http://feedparser.org/docs/introduction.html), I can turn an RSS feed into a parsed object like this:
```
import feedparser
d = feedparser.parse('http://feedparser.org/docs/examples/atom10.xml')
```
but I can't find anything showing how to go the other way; I'd like to be able to manipulate `d` and then output the result as XML:
```
print d.toXML()
```
but there doesn't seem to be anything in feedparser for going in that direction. Am I going to have to loop through d's various elements, or is there a quicker way?
|
Appended is a not hugely elegant, but working, solution - it uses feedparser to parse the feed, lets you modify the entries, and then passes the data to PyRSS2Gen. It preserves *most* of the feed info (the important bits anyway; there are some things that will need extra conversion, the parsed\_feed['feed']['image'] element for example).
I put this together as part of a [little feed-processing framework](http://github.com/dbr/pyfeedproc) I'm fiddling about with. It may be of some use (it's pretty short - should be less than 100 lines of code in total when done).
```
#!/usr/bin/env python
import datetime
# http://www.feedparser.org/
import feedparser
# http://www.dalkescientific.com/Python/PyRSS2Gen.html
import PyRSS2Gen
# Get the data
parsed_feed = feedparser.parse('http://reddit.com/.rss')
# Modify the parsed_feed data here
items = [
    PyRSS2Gen.RSSItem(
        title = x.title,
        link = x.link,
        description = x.summary,
        guid = x.link,
        pubDate = datetime.datetime(
            x.modified_parsed[0],
            x.modified_parsed[1],
            x.modified_parsed[2],
            x.modified_parsed[3],
            x.modified_parsed[4],
            x.modified_parsed[5])
    )
    for x in parsed_feed.entries
]

# make the RSS2 object
# Try to grab the title, link, language etc from the orig feed
rss = PyRSS2Gen.RSS2(
    title = parsed_feed['feed'].get("title"),
    link = parsed_feed['feed'].get("link"),
    description = parsed_feed['feed'].get("description"),
    language = parsed_feed['feed'].get("language"),
    copyright = parsed_feed['feed'].get("copyright"),
    managingEditor = parsed_feed['feed'].get("managingEditor"),
    webMaster = parsed_feed['feed'].get("webMaster"),
    pubDate = parsed_feed['feed'].get("pubDate"),
    lastBuildDate = parsed_feed['feed'].get("lastBuildDate"),
    categories = parsed_feed['feed'].get("categories"),
    generator = parsed_feed['feed'].get("generator"),
    docs = parsed_feed['feed'].get("docs"),
    items = items
)

print rss.to_xml()
```
|
If you're looking to read in an XML feed, modify it and then output it again, there's [a page on the main python wiki indicating that the RSS.py library might support what you're after](http://wiki.python.org/moin/RssLibraries) (it reads most RSS and is able to output RSS 1.0). I've not looked at it in much detail though.
|
How do I turn an RSS feed back into RSS?
|
[
"python",
"rss"
] |
I agree that programming against interfaces is a good practice. In Java, "interface" in this sense usually means the language construct: you write an interface and an implementation class, and you use the interface instead of the implementation class most of the time.
I wonder if this is a good practice for writing domain models as well. So, for example if you've got a domain class Customer and each customer may have a list of Orders, would you *generally* also write interfaces ICustomer and IOrder. And also would Customer have a list of IOrders instead of Orders? Or would you use interfaces in the domain model, only if it is really driven by the domain, e.g. you've got at least two different types of Orders? In other words, would you use interfaces because of only technical needs in the domain model, or only when it is really appropriate with respect to the actual domain?
|
Writing interfaces "just because" strikes me as a waste of time and energy, not to mention a violation of the KISS-principle.
I write them when they are actually useful in representing common behavior of related classes, not just as a fancy header file.
|
Don't over-design your system. If you find out that you have several types of Orders and think it's appropriate to declare an interface for Orders, then refactor when the need arises. For domain models, the probability is high that the specific interface will change a lot over the lifetime of development, so it is rarely useful to write an interface early.
|
Programming against interfaces: Do you write interfaces for all your domain classes?
|
[
"java",
"model",
"dns",
"interface"
] |
I'm interested in actual examples of using fixed point combinators (such as the [y-combinator](https://stackoverflow.com/questions/93526/what-is-a-y-combinator)) in C++. Have you ever used a fixed point combinator with [egg](http://p-stade.sourceforge.net/boost/libs/egg/doc/html/boost_egg/function_adaptors.html#boost_egg.function_adaptors.fix) or [bind](http://www.boost.org/doc/libs/1_36_0/libs/bind/bind.html) in real, live code?
I found this example in egg a little dense:
```
void egg_example()
{
    using bll::_1;
    using bll::_2;

    int r =
        fix2(
            bll::ret<int>(
                // \(f,a) -> a == 0 ? 1 : a * f(a-1)
                bll::if_then_else_return( _2 == 0,
                    1,
                    _2 * lazy(_1)(_2 - 1)
                )
            )
        )(5);

    BOOST_CHECK(r == 5*4*3*2*1);
}
```
Can you explain how this all works?
Is there a nice, simple example, perhaps using bind, with fewer dependencies than this one?
|
Here is the same code converted to `boost::bind`; notice the Y-combinator and its application site in the main function. I hope this helps.
```
#include <boost/function.hpp>
#include <boost/bind.hpp>
#include <iostream>
// Y-combinator compatible factorial
int fact(boost::function<int(int)> f, int v)
{
    if (v == 0)
        return 1;
    else
        return v * f(v - 1);
}

// Y-combinator for the int type
boost::function<int(int)>
y(boost::function<int(boost::function<int(int)>,int)> f)
{
    return boost::bind(f, boost::bind(&y, f), _1);
}

int main(int argc, char** argv)
{
    boost::function<int(int)> factorial = y(fact);
    std::cout << factorial(5) << std::endl;
    return 0;
}
```
|
```
#include <functional>
#include <iostream>
template <typename Lambda, typename Type>
auto y (std::function<Type(Lambda, Type)> f) -> std::function<Type(Type)>
{
    return std::bind(f, std::bind(&y<Lambda, Type>, f), std::placeholders::_1);
}

int main(int argc, char** argv)
{
    std::cout << y<std::function<int(int)>, int>([](std::function<int(int)> f, int x) {
        return x == 0 ? 1 : x * f(x - 1);
    })(5) << std::endl;
    return 0;
}
```
|
Fixed point combinators in C++
|
[
"c++",
"bind",
"y-combinator"
] |
I have a UI widget that needs to be put in an IFRAME both for performance reasons and so we can syndicate it out to affiliate sites easily. The UI for the widget includes tool-tips that display over the top of other page content. See screenshot below or **[go to the site](http://www.bookabach.co.nz/)** to see it in action. Is there any way to make content from within the IFRAME overlap the parent frame's content?
*(screenshot of the widget's tooltip omitted)*
|
No, it's not possible. Ignoring any historical reasons, nowadays it would be considered a security vulnerability - e.g. many sites put untrusted content into iframes (the iframe source being a different origin, so it cannot modify the parent frame, per the same-origin policy).
If such untrusted content had a mechanism to place content outside of the bounds of the iframe it could (for example) place an "identical" login div (or whatever) over a parent frame's real login fields, and could thus steal username/password information. Which would suck.
|
I couldn't find a way to make the content of the frame flow out of the frame, but I did find a way to hack around it, by moving the tooltip into the parent document and placing it above (z-index) the iframe.
The approach was:
1) find the iframe in the parent document
2) remove the tooltip element from where it is in the DOM, and add it to the parent document inside the element that contains your iframe.
3) you probably need to adjust the z-index and positioning, depending on how you were doing that in the first place.
You can access the parent document of an iframe using parent.document.
```
jQuery(tooltip).remove();
var iframeParent = jQuery("#the_id_of_the_iframe", parent.document)[0].parentNode;
iframeParent.appendChild(tooltip);
//adjust z-index, positioning
```
|
Is there a way to have content from an IFRAME overflow onto the parent frame?
|
[
"javascript",
"css",
"iframe",
"tooltip"
] |
Is there a library function that performs a binary search on a list/tuple and returns the position of the item if found, and 'False' (-1, None, etc.) if not?
I found the functions bisect\_left/right in the [bisect module](http://docs.python.org/library/bisect.html), but they still return a position even if the item is not in the list. That's perfectly fine for their intended usage, but I just want to know if an item is in the list or not (don't want to insert anything).
I thought of using `bisect_left` and then checking if the item at that position is equal to what I'm searching, but that seems cumbersome (and I also need to do bounds checking if the number can be larger than the largest number in my list). If there is a nicer method I'd like to know about it.
**Edit** To clarify what I need this for: I'm aware that a dictionary would be very well suited for this, but I'm trying to keep memory consumption as low as possible. My intended usage is a sort of two-way look-up table. I have in the table a list of values, and I need to be able to access the values based on their index. I also want to be able to find the index of a particular value, or None if the value is not in the list.
Using a dictionary for this would be the fastest way, but would (approximately) double the memory requirements.
I was asking this question thinking that I may have overlooked something in the Python libraries. It seems I'll have to write my own code, as Moe suggested.
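A rough way to see that trade-off (a sketch; `sys.getsizeof` is shallow and measures only the container itself, and exact numbers vary by Python version, but the ratio is indicative):

```
import sys

values = list(range(10000))

# A sorted list stores each value slot once; a dict mapping value -> index
# additionally keeps a hash table of keys, so its container footprint
# is substantially larger.
list_size = sys.getsizeof(values)
dict_size = sys.getsizeof({v: i for i, v in enumerate(values)})

print(list_size, dict_size)
```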
|
`bisect_left` finds the first position `p` at which an element could be inserted in a given sorted range while maintaining the sorted order. That will be the position of `x` if `x` exists in the range. If `p` is the past-the-end position, `x` wasn't found; otherwise, we can test whether `x` is actually at that position.
```
from bisect import bisect_left

def binary_search(a, x, lo=0, hi=None):
    if hi is None:
        hi = len(a)
    pos = bisect_left(a, x, lo, hi)                  # find insertion position
    return pos if pos != hi and a[pos] == x else -1  # don't walk off the end
```
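For instance (the function is repeated here so the snippet runs standalone):

```
from bisect import bisect_left

def binary_search(a, x, lo=0, hi=None):
    if hi is None:
        hi = len(a)
    pos = bisect_left(a, x, lo, hi)                  # find insertion position
    return pos if pos != hi and a[pos] == x else -1  # don't walk off the end

data = [1, 3, 5, 7, 9]
print(binary_search(data, 7))    # 3  (found at index 3)
print(binary_search(data, 4))    # -1 (not present)
print(binary_search(data, 10))   # -1 (larger than the max; no IndexError)
```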
|
Why not look at the code for bisect\_left/right and adapt it to suit your purpose?
Like this:
```
def binary_search(a, x, lo=0, hi=None):
    if hi is None:
        hi = len(a)
    while lo < hi:
        mid = (lo + hi) // 2
        midval = a[mid]
        if midval < x:
            lo = mid + 1
        elif midval > x:
            hi = mid
        else:
            return mid
    return -1
```
|
Binary search (bisection) in Python
|
[
"python",
"binary-search",
"bisection"
] |
I have an embedded webserver that has a total of 2 Megs of space on it. Normally you gzip files for the client's benefit, but this would save us space on the server. I read that you can just gzip the js file and save it on the server. I tested that on IIS and didn't have any luck at all. What exactly do I need to do at every step of the process to make this work?
This is what I imagine it will be like:
1. gzip foo.js
2. change link in html to point to foo.js.gz instead of just .js
3. Add some kind of header to the response?
Thanks for any help at all.
-fREW
**EDIT**: My webserver can't do anything on the fly. It's not Apache or IIS; it's a binary on a ZiLog processor. I know that you can compress streams; I just heard that you can also compress the files once and leave them compressed.
|
As others have mentioned, mod\_deflate does that for you, but I guess you need to do it manually since it is an embedded environment.
First of all, you should keep the name of the file foo.js after you gzip it. You should not change anything in your HTML files, since the file is still named foo.js.
In the response header for the (gzipped) foo.js, you send the header
```
Content-Encoding: gzip
```
This should do the trick. The client asks for foo.js and receives `Content-Encoding: gzip` followed by the gzipped file, which it automatically ungzips before parsing.
Of course, this assumes you are sure the client understands gzip encoding; if you are not sure, you should only send gzipped data when the request header contains
```
Accept-Encoding: gzip
```
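Since the webserver here is a custom binary, the logic can be sketched language-neutrally; a minimal Python version (the file contents and the `choose_response` helper are made up for illustration; the point is the one-time compression plus the header choice at request time):

```
import gzip

# One-time step at build/deploy time: store foo.js pre-compressed.
source = b"function add(a, b) { return a + b; }\n"
compressed = gzip.compress(source)

# At request time: pick the representation based on Accept-Encoding.
def choose_response(accept_encoding):
    if "gzip" in accept_encoding:
        return {"Content-Encoding": "gzip"}, compressed
    return {}, source  # fall back to the raw bytes

headers, body = choose_response("gzip, deflate")
print(headers.get("Content-Encoding"))   # gzip
print(gzip.decompress(body) == source)   # True
```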
|
Using gzip compression on a webserver usually means compressing the output from it to conserve your bandwidth - not quite what you have in mind.
[Look at this description](http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/25d2170b-09c0-45fd-8da4-898cf9a7d568.mspx?mfr=true)
or
[This example](http://www.keylimetie.com/Blog/2008/5/20/How-to-enable-HTTP-Compression-on-Windows-Server-2003/)
|
How do I set up gzip compression on a web server?
|
[
"javascript",
"compression",
"gzip"
] |
I am trying to create a multidimensional array using this syntax:
```
$x[1] = 'parent';
$x[1][] = 'child';
```
I get the error: `[] operator not supported for strings` because it is evaluating the `$x[1]` as a string as opposed to returning the array so I can append to it.
What is the correct syntax for doing it this way? The overall goal is to create this multidimensional array in an iteration that will append elements to a known index.
The syntax `${$x[1]}[]` does not work either.
|
The parent has to be an array!
```
$x[1] = array();
$x[1][] = 'child';
```
|
```
$x = array();
$x[1] = array();
$x[1][] = 'child';
```
|
Error: [] operator not supported for strings
|
[
"php",
"arrays"
] |
From the Java 6 [Pattern](http://java.sun.com/javase/6/docs/api/java/util/regex/Pattern.html) documentation:
> Special constructs (non-capturing)
>
> `(?:`*X*`)` *X*, as a non-capturing group
>
> …
>
> `(?>`*X*`)` *X*, as an independent, non-capturing group
Between `(?:X)` and `(?>X)` what is the difference? What does the **independent** mean in this context?
|
It means that the grouping is [atomic](http://www.regular-expressions.info/atomic.html), and it throws away backtracking information for a matched group. So, this expression is possessive; it won't back off even if doing so is the only way for the regex as a whole to succeed. It's "independent" in the sense that it doesn't cooperate, via backtracking, with other elements of the regex to ensure a match.
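The backtracking half of this is easy to demonstrate in Python's `re` module (a sketch; atomic groups themselves only reached the stdlib `re` in Python 3.11, and are also available in the third-party `regex` package):

```
import re

# An ordinary group cooperates via backtracking: after the alternation
# picks 'bc' and the trailing 'c' fails on "abc", the engine backs up
# and retries the alternation with just 'b'.
print(re.fullmatch(r"a(?:bc|b)c", "abc") is not None)   # True
print(re.fullmatch(r"a(?:bc|b)c", "abcc") is not None)  # True

# An atomic group a(?>bc|b)c discards that retry point, so it would
# fail on "abc" while still matching "abcc" (Python 3.11+ or 'regex').
```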
|
I think [this tutorial](https://www.regular-expressions.info/atomic.html) explains exactly what an "independent, non-capturing group", or "atomic grouping", is:
> The regular expression `a(bc|b)c` (capturing group) matches **abcc**
> and **abc**. The regex `a(?>bc|b)c` (atomic group) matches **abcc**
> but not **abc**.
>
> When applied to **abc**, both regexes will match `a` to **a**, `bc` to
> **bc**, and then `c` will fail to match at the end of the string. Here their paths diverge. The regex with the ***capturing group*** has
> remembered a backtracking position for the alternation. The group will
> give up its match, `b` then matches **b** and `c` matches **c**. Match
> found!
>
> The regex with the ***atomic group***, however, exited from an atomic
> group after `bc` was matched. At that point, all backtracking
> positions for tokens inside the group are discarded. In this example,
> the alternation's option to try `b` at the second position in the
> string is discarded. As a result, when `c` fails, the regex engine has
> no alternatives left to try.
|
What is a regex "independent non-capturing group"?
|
[
"java",
"regex"
] |
It's been a while since I've programmed a GUI program, so this may end up being super simple, but I can't find the solution anywhere online.
Basically my problem is that when I maximize my program, all the things inside of the window (buttons, textboxes, etc.) stay in the same position in the window, which results in a large blank area near the bottom and right side.
Is there a way to make the elements in the program stretch to scale?
|
You want to check and properly set the Anchor and Dock properties on each control in the Form. The Anchor property on a control tells which sides of the form (top, bottom, left, right) the control is 'anchored' to. When the form is resized, the distance between the control and its anchors will stay the same. This lets you make a control stay in the bottom right corner for example.
The Dock property instructs the control to fill the entire parent form or to fill one side of it (again top, bottom, left or right).
|
Anchor and Dock properties
|
Stuff in Windows Form Move When Maximized - C#
|
[
"c#",
"windows",
"user-interface"
] |
How do I pick a random element from a set?
I'm particularly interested in picking a random element from a
HashSet or a LinkedHashSet, in Java.
|
```
int size = myHashSet.size();
int item = new Random().nextInt(size); // In real life, the Random object should be rather more shared than this
int i = 0;
for (Object obj : myHashSet)
{
    if (i == item)
        return obj;
    i++;
}
```
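For comparison, the same walk-to-a-random-index technique sketched in Python (the variable names are illustrative), along with the shortcut of materializing the set into a sequence first:

```
import random

s = {"apple", "banana", "cherry"}

# Walk the set until the randomly chosen index is reached,
# mirroring the loop above.
item = random.randrange(len(s))
chosen = None
for i, obj in enumerate(s):
    if i == item:
        chosen = obj
        break

# Or simply copy the set into a sequence and index it directly.
also_chosen = random.choice(tuple(s))

print(chosen in s, also_chosen in s)   # True True
```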
|
A somewhat related Did You Know:
There are useful methods in [`java.util.Collections`](https://docs.oracle.com/javase/8/docs/api/java/util/Collections.html) for shuffling whole collections: [`Collections.shuffle(List<?>)`](https://docs.oracle.com/javase/8/docs/api/java/util/Collections.html#shuffle-java.util.List-) and [`Collections.shuffle(List<?> list, Random rnd)`](https://docs.oracle.com/javase/8/docs/api/java/util/Collections.html#shuffle-java.util.List-java.util.Random-).
|
Picking a random element from a set
|
[
"java",
"algorithm",
"random",
"set"
] |
From Wikipedia, the free encyclopedia: **Closure (computer science)**
> In computer science, a closure is a
> function that is evaluated in an
> environment containing one or more
> bound variables. When called, the
> function can access these variables.
> The explicit use of closures is
> associated with functional programming
> and with languages such as ML and
> Lisp. Constructs such as objects in
> other languages can also be modeled
> with closures.
To use this inside of JavaScript, can someone point me to an example of how this applies?
|
(using an example from jQuery)
```
function SetClassOnHover(className) {
    $("td").hover(
        function () {
            $(this).addClass(className);
        },
        function () {
            $(this).removeClass(className);
        }
    );
}
```
The closure comes into play when the variable className is used inside the scope of each function. When SetClassOnHover exits, both functions must retain a handle on className in order to access its value when the functions are called. That's what the closure enables.
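The same mechanism, stripped of jQuery, can be sketched in a few lines of Python (a hypothetical stand-in: a set of CSS class names takes the place of the element; both inner functions keep `class_name` alive after the outer one returns):

```
def make_hover_handlers(class_name):
    # Both inner functions close over class_name, so it remains
    # accessible after make_hover_handlers has returned.
    def on_enter(css_classes):
        css_classes.add(class_name)

    def on_leave(css_classes):
        css_classes.discard(class_name)

    return on_enter, on_leave

classes = set()
enter, leave = make_hover_handlers("highlight")
enter(classes)
print(classes)   # {'highlight'}
leave(classes)
print(classes)   # set()
```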
|
Searching for "javascript closures" gave plenty of encouraging-looking links. The top three were these two (the third link was a reformatted version of the second):
* [Javascript closures](http://www.jibbering.com/faq/faq_notes/closures.html)
* [JavaScript closures for dummies](http://web.archive.org/web/20101113013100/http://blog.morrisjohns.com/javascript_closures_for_dummies.html)
If these didn't help you, please explain why so we're in a better position to actually help. If you didn't search before asking the question, well - please do so next time :)
|
JavaScript - How do I learn about "closures" usage?
|
[
"javascript",
"closures"
] |
I'm a Java developer looking to learn some C#/ASP.NET. One thing I've never liked about .NET from the get-go was that it didn't have support for MVC. But now it does! So I was wondering if anybody knew where to get started learning C# MVC.
Also, do you need the non-free version of developer-studio to do this?
|
Just to add to Will's answer, Scott has a series on ASP.NET MVC development:
[ASP.NET MVC Framework Part 1](http://weblogs.asp.net/scottgu/archive/2007/11/13/asp-net-mvc-framework-part-1.aspx)
[ASP.NET MVC Framework (Part 2): URL Routing](http://weblogs.asp.net/scottgu/archive/2007/12/03/asp-net-mvc-framework-part-2-url-routing.aspx)
[ASP.NET MVC Framework (Part 3): Passing ViewData from Controllers to Views](http://weblogs.asp.net/scottgu/archive/2007/12/06/asp-net-mvc-framework-part-3-passing-viewdata-from-controllers-to-views.aspx)
[ASP.NET MVC Framework (Part 4): Handling Form Edit and Post Scenarios](http://weblogs.asp.net/scottgu/archive/2007/12/09/asp-net-mvc-framework-part-4-handling-form-edit-and-post-scenarios.aspx)
|
Keep an eye on [Scott Guthrie](http://weblogs.asp.net/Scottgu/)'s and [Phil Haack](http://haacked.com/)'s blogs. They are the primary source of documentation right now.
Be wary, as most posts about MVC are about previous versions and don't apply anymore (anything that uses a lambda is right out, unfortunately).
Of course, you've got a pretty good resource here as well. Haack occasionally answers questions about MVC...
|
Where can I get some information on starting C# programming with MVC / ASP.NET?
|
[
"c#",
"asp.net-mvc"
] |
Scenario:
I'm currently writing a layer to abstract 3 similar webservices into one usable class. Each webservice exposes a set of objects that share commonality. I have created a set of intermediary objects which exploit the commonality. However, in my layer I need to convert between the web service objects and my objects.
I've used reflection to create the appropriate type at run time before I make the call to the web service like so:
```
public static object[] CreateProperties(Type type, IProperty[] properties)
{
    //Empty so return null
    if (properties == null || properties.Length == 0)
        return null;

    //Check the type is allowed
    CheckPropertyTypes("CreateProperties(Type,IProperty[])", type);

    //Convert the array of intermediary IProperty objects into
    // the passed service type e.g. Service1.Property
    object[] result = new object[properties.Length];
    for (int i = 0; i < properties.Length; i++)
    {
        IProperty fromProp = properties[i];
        object toProp = ReflectionUtility.CreateInstance(type, null);
        ServiceUtils.CopyProperties(fromProp, toProp);
        result[i] = toProp;
    }
    return result;
}
```
Here's my calling code, from one of my service implementations:
```
Property[] props = (Property[])ObjectFactory.CreateProperties(typeof(Property), properties);
_service.SetProperties(folderItem.Path, props);
```
So each service exposes a different "Property" object which I hide behind my own implementation of my IProperty interface.
The reflection code works in unit tests producing an array of objects whose elements are of the appropriate type. But the calling code fails:
> System.InvalidCastException: Unable to
> cast object of type 'System.Object[]'
> to type
> 'MyProject.Property[]
Any ideas?
I was under the impression that any cast from Object will work as long as the contained object is convertible?
|
Alternative answer: generics.
```
public static T[] CreateProperties<T>(IProperty[] properties)
    where T : class, new()
{
    //Empty so return null
    if (properties == null || properties.Length == 0)
        return null;

    //Check the type is allowed
    CheckPropertyTypes("CreateProperties(Type,IProperty[])", typeof(T));

    //Convert the array of intermediary IProperty objects into
    // the passed service type e.g. Service1.Property
    T[] result = new T[properties.Length];
    for (int i = 0; i < properties.Length; i++)
    {
        result[i] = new T();
        ServiceUtils.CopyProperties(properties[i], result[i]);
    }
    return result;
}
```
Then your calling code becomes:
```
Property[] props = ObjectFactory.CreateProperties<Property>(properties);
_service.SetProperties(folderItem.Path, props);
```
Much cleaner :)
|
Basically, no. There are a few, limited, uses of array covariance, but it is better to simply know which type of array you want. There is a generic Array.ConvertAll that is easy enough (at least, it is easier with C# 3.0):
```
Property[] props = Array.ConvertAll(source, prop => (Property)prop);
```
The C# 2.0 version (identical in meaning) is much less eyeball-friendly:
```
Property[] props = Array.ConvertAll<object,Property>(
source, delegate(object prop) { return (Property)prop; });
```
Or just create a new Property[] of the right size and copy manually (or via `Array.Copy`).
As an example of the things you *can* do with array covariance:
```
Property[] props = new Property[2];
props[0] = new Property();
props[1] = new Property();
object[] asObj = (object[])props;
```
Here, "asObj" is *still* a `Property[]` - it is simply accessible as `object[]`. In C# 2.0 and above, generics usually make a better option than array covariance.
|
Unable to cast object of type 'System.Object[]' to 'MyObject[]', what gives?
|
[
"c#",
".net",
"arrays",
"reflection",
"casting"
] |
Is there a way to use .NET reflection to capture the values of all parameters/local variables?
|
You could get at this information using the [CLR debugging API](http://msdn.microsoft.com/en-us/library/bb384548.aspx) though it won't be a simple couple of lines to extract it.
|
Reflection is not used to capture information from the stack. It reads the Assembly.
You might want to take a look at StackTrace
<http://msdn.microsoft.com/en-us/library/system.diagnostics.stacktrace.aspx>
Good article here:
<http://www.codeproject.com/KB/trace/customtracelistener.aspx>
|
Capturing method state using Reflection
|
[
"c#",
".net",
"reflection"
] |
I have a combo box on a WinForms app in which an item may be selected, but it is not mandatory. I therefore need an 'Empty' first item to indicate that no value has been set.
The combo box is bound to a DataTable being returned from a stored procedure (I offer no apologies for Hungarian notation on my UI controls :p ):
```
DataTable hierarchies = _database.GetAvailableHierarchies(cmbDataDefinition.SelectedValue.ToString()).Copy();//Calls SP
cmbHierarchies.DataSource = hierarchies;
cmbHierarchies.ValueMember = "guid";
cmbHierarchies.DisplayMember = "ObjectLogicalName";
```
How can I insert such an empty item?
I do have access to change the SP, but I would really prefer not to 'pollute' it with UI logic.
**Update:** It was the DataTable.NewRow() that I had blanked on, thanks. I have upmodded you all (all 3 answers so far anyway). I am trying to get the Iterator pattern working before I decide on an 'answer'
**Update:** I think this edit puts me in Community Wiki land, I have decided not to specify a single answer, as they all have merit in context of their domains. Thanks for your collective input.
|
There are two things you can do:
1. Add an empty row to the `DataTable` that is returned from the stored procedure.
```
DataRow emptyRow = hierarchies.NewRow();
emptyRow["guid"] = "";
emptyRow["ObjectLogicalName"] = "";
hierarchies.Rows.Add(emptyRow);
```
Create a DataView and sort it using ObjectLogicalName column. This will make the newly added row the first row in DataView.
```
DataView newView = new DataView(
    hierarchies,                   // source table
    "",                            // filter
    "ObjectLogicalName",           // sort by column
    DataViewRowState.CurrentRows); // rows with state to display
```
Then set the dataview as `DataSource` of the `ComboBox`.
2. If you really don't want to add a new row as mentioned above, you can allow the user to set the `ComboBox` value to null by simply handling the "Delete" keypress event. When the user presses the Delete key, set the `SelectedIndex` to -1. You should also set `ComboBox.DropDownStyle` to `DropDownList`, as this will prevent the user from editing the values in the `ComboBox`.
|
```
cmbHierarchies.SelectedIndex = -1;
```
|
How to insert 'Empty' field in ComboBox bound to DataTable
|
[
"c#",
"combobox"
] |
I wonder what the best way to make an entire tr clickable would be?
The most common (and only?) solution seems to be using JavaScript: `onclick="javascript:document.location.href('bla.htm');"` (not forgetting to set a proper cursor with onmouseover/onmouseout).
While that works, it is a pity that the target URL is not visible in the status bar of a browser, unlike normal links.
So I just wonder if there is any room for optimization? Is it possible to display the URL that will be navigated to in the status bar of the browser? Or is there even a non-JavaScript way to make a tr clickable?
|
Fortunately or unfortunately, most modern browsers do not let you control the status bar anymore (it was possible and popular back in the day) because of fraudulent intentions.
Your better bet would be a title attribute or a [javascript tooltip](http://www.google.com/search?q=javascript+tooltip).
|
If you don't want to use javascript, you can do what Chris Porter suggested by wrapping each td element's content in matching anchor tags. Then set the anchor tags to `display: block` and set the `height` and `line-height` to be the same as the td's height. You should then find that the td's touch seamlessly and the effect is that the whole row is clickable. Watch out for padding on the td, which will cause gaps in the clickable area. Instead, apply padding to the anchor tags as it will form part of the clickable area if you do that.
I also like to set the row up to have a highlight effect by applying a different background color on tr:hover.
### Example
For the latest Bootstrap (version 3.0.2), here's some quick CSS to show how this can be done:
```
table.row-clickable tbody tr td {
    padding: 0;
}

table.row-clickable tbody tr td a {
    display: block;
    padding: 8px;
}
```
Here's a sample table to work with:
```
<table class="table table-hover row-clickable">
    <tbody>
        <tr>
            <td><a href="#">Column 1</a></td>
            <td><a href="#">Column 2</a></td>
            <td><a href="#">Column 3</a></td>
        </tr>
    </tbody>
</table>
```
Here's [an example](http://jsbin.com/AgosUpe/1) showing this in action.
|
Making a Table Row clickable
|
[
"javascript",
"html"
] |
Given a week number, e.g. `date -u +%W`, how do you calculate the days in that week starting from Monday?
Example rfc-3339 output for week 40:
```
2008-10-06
2008-10-07
2008-10-08
2008-10-09
2008-10-10
2008-10-11
2008-10-12
```
|
**PHP**
```
$week_number = 40;
$year = 2008;

for ($day = 1; $day <= 7; $day++)
{
    echo date('m/d/Y', strtotime($year."W".$week_number.$day))."\n";
}
```
---
Below post was because I was an idiot who didn't read the question properly, but it will get the dates in a week starting from Monday, given the date, not the week number.
**In PHP**, adapted from [this post](https://www.php.net/manual/en/function.date.php#85258) on the [PHP date manual page](https://www.php.net/manual/en/function.date.php):
```
function week_from_monday($date) {
// Assuming $date is in format DD-MM-YYYY
  list($day, $month, $year) = explode("-", $date);
// Get the weekday of the given date
$wkday = date('l',mktime('0','0','0', $month, $day, $year));
switch($wkday) {
case 'Monday': $numDaysToMon = 0; break;
case 'Tuesday': $numDaysToMon = 1; break;
case 'Wednesday': $numDaysToMon = 2; break;
case 'Thursday': $numDaysToMon = 3; break;
case 'Friday': $numDaysToMon = 4; break;
case 'Saturday': $numDaysToMon = 5; break;
case 'Sunday': $numDaysToMon = 6; break;
}
// Timestamp of the monday for that week
$monday = mktime('0','0','0', $month, $day-$numDaysToMon, $year);
$seconds_in_a_day = 86400;
// Get date for 7 days from Monday (inclusive)
for($i=0; $i<7; $i++)
{
$dates[$i] = date('Y-m-d',$monday+($seconds_in_a_day*$i));
}
return $dates;
}
```
Output from `week_from_monday('07-10-2008')` gives:
```
Array
(
[0] => 2008-10-06
[1] => 2008-10-07
[2] => 2008-10-08
[3] => 2008-10-09
[4] => 2008-10-10
[5] => 2008-10-11
[6] => 2008-10-12
)
```
|
If you've got Zend Framework you can use the Zend\_Date class to do this:
```
require_once 'Zend/Date.php';
$date = new Zend_Date();
$date->setYear(2008)
->setWeek(40)
->setWeekDay(1);
$weekDates = array();
for ($day = 1; $day <= 7; $day++) {
    if ($day > 1) {
        // day 1 is already set; for later days move to the next day in the week
        $date->addDay(1);
    }
$weekDates[] = date('Y-m-d', $date->getTimestamp());
}
echo '<pre>';
print_r($weekDates);
echo '</pre>';
```
|
Calculating days of week given a week number
|
[
"",
"php",
"date",
""
] |
I've worked with a couple of Visual C++ compilers (VC97, VC2005, VC2008) and I haven't really found a clear-cut way of adding external libraries to my builds. I come from a Java background, and in Java, libraries are everything!
I understand from compiling open-source projects on my Linux box that all the source code for the library seems to need to be included, with the exception of those .so files.
Also I've heard of the .lib static libraries and .dll dynamic libraries, but I'm still not entirely sure how to add them to a build and make them work. How does one go about this?
|
I think you might be asking about the mechanics of how to add a lib to a project/solution in the IDEs...
In 2003, 2005 and 2008 it is something similar to:
* from the Solution Explorer, right-click on the project
* select Properties (typically the last one)
* I usually select All Configurations at the top
* go to Linker, then Input
* additional dependencies go in there
I wish I could do a screen capture for this.
In VC6 it is different - bear with me, as this is all from memory: open the project settings or properties, then go to the Link tab and find where the libs can be added.
Please excuse the haphazard nature of this post. I think that is what you want though.
|
Libraries in C++ are also considered helpful, but the way you integrate them is different to Java because the compiler only has to see the *interface* of the library, which is usually declared in header files. In Java, the compiler will have to inspect the actual libraries because Java doesn't have this distinction between an externally visible header file and the generated object code providing the implementation.
What you normally do is build the libraries separately, once, and put the generated lib/dll files plus the header files into a place that projects requiring the library can access. A common idiom is to put the header files into `include`, the static libraries into `lib` and the dynamic libraries into `bin` subdirectories for your compiled library.
The reason you have found that most C++ libraries are provided in source code form and not in precompiled form is that every C++ compiler has a certain freedom as to how to mangle symbol names etc and the resulting object code isn't portable across compilers, let alone operating systems. So shipping the compiled code doesn't make sense for a lot of applications. You'll occasionally find it with closed-source C++ libraries on Windows (C libraries are an entirely different matter), but then the vendor will have to provide a compiled version for each and every build type (Release, Debug, 32 bit, 64 bit etc) and target compiler (various versions of Visual Studio require different binaries, then there is Borland and a bunch of other compilers) and it quickly becomes a nightmare to support...
When you take a library and build it as a dynamic library on Windows (ie, a DLL), the compiler/linker will normally generate a static 'import' library for it (same name, just with a .lib extension). When you link your project against the dynamic library, you specify the .lib file as a library dependency. Linking your application against said import library allows the linker to record the dependency on the .dll file and also which symbols it should expect the library to provide.
Making them work - in the sense of your program finding them on Windows - usually requires that the .dll file is either in the same directory as the executable or accessible via the 'PATH' environment variable and its equivalent in Visual C++.
|
How do you add external libraries for compilation in VC++?
|
[
"",
"c++",
"visual-studio-2008",
"visual-c++",
"visual-studio-2005",
""
] |
I have done a little Django development, but it has all been in a text editor. I was curious what more advanced development tools others are using in their Django development.
I am used to using Visual Studio for development and really like the [IntelliSense](https://en.wikipedia.org/wiki/Intelligent_code_completion#IntelliSense), code completion, and file organization it provides and would like to find something (or a combination of tools) that would provide some of this in the Django/Python environment.
|
I use Eclipse and a plain vanilla [PyDev](https://en.wikipedia.org/wiki/PyDev). There isn't any specific Django functionality. The best I came up with was setting up a run profile to run the development web server.
If you add the web tools project (WTP), you'll get syntax highlighting in your templates, but nothing that relates to the specific template language. PyDev is a decent plugin, and if you are already familiar with Eclipse and use it for other projects it is a good way to go.
I recall NetBeans starting to get Python support, but I have no idea where that is right now. Lots of people rave about NetBeans 6, but in the Java world Eclipse still reigns as the king of the OSS IDEs.
Update: LiClipse is also fantastic for Django.
[](https://i.stack.imgur.com/lBgOz.png)
Install it, use [this method](https://nrecursions.blogspot.com/2019/03/which-ide-to-choose-for-python.html) to get the icon into Ubuntu's menu. Start LiClipse and in File > New > Project ..., select PyDev and PyDev Django project. You may have to set up your Python interpreter etc, but that you'll be able to figure out on your own. Once the Django project is created, you can right-click on the project and the menu will have a "Django" option, which allows various things like creating a Django app or running migrations etc. LiClipse is good because it consumes far less memory than PyCharm and supports refactoring and autocomplete reasonably well.
|
There is [PyCharm](http://www.jetbrains.com/pycharm/index.html) from JetBrains which supports Django and [Google Apps](http://en.wikipedia.org/wiki/Google_Apps). It looks promising.
Note: You need to buy a license for the Professional version if you want Django support. The Community version doesn't support Django.
|
Django development IDE
|
[
"",
"python",
"django",
"ide",
""
] |
I'm writing a WinForms app that needs to set Internet Explorer's proxy settings and then open a new browser window. At the moment, I'm applying the proxy settings by going into the registry:
```
RegistryKey registry = Registry.CurrentUser.OpenSubKey("Software\\Microsoft\\Windows\\CurrentVersion\\Internet Settings", true);
registry.SetValue("ProxyEnable", 1);
registry.SetValue("ProxyServer", "127.0.0.1:8080");
```
Is going into the registry the *best* way to do this, or is there a more recommended approach? I'd like to avoid registry changes if there's an alternative solution.
|
This depends somewhat on your exact needs. If you are writing a C# app and simply want to set the default proxy settings that your app will use, use the class System.Net.GlobalProxySelection (<http://msdn.microsoft.com/en-us/library/system.net.globalproxyselection.aspx>). You can also set the proxy for any particular connection with System.Net.WebProxy (<http://msdn.microsoft.com/en-us/library/system.net.webproxy.aspx>).
If you actually want to update the proxy settings in the registry, I believe that you'll need to use P/Invoke to call the WinAPI function WinHttpSetDefaultProxyConfiguration (<http://msdn.microsoft.com/en-us/library/aa384113.aspx>).
|
from: <http://social.msdn.microsoft.com/Forums/en/csharpgeneral/thread/19517edf-8348-438a-a3da-5fbe7a46b61a>
Add these lines at the beginning of your code:
```
using System.Runtime.InteropServices;
using Microsoft.Win32;
```
```
[DllImport("wininet.dll")]
public static extern bool InternetSetOption(IntPtr hInternet, int dwOption, IntPtr lpBuffer, int dwBufferLength);
public const int INTERNET_OPTION_SETTINGS_CHANGED = 39;
public const int INTERNET_OPTION_REFRESH = 37;
bool settingsReturn, refreshReturn;
```
And apply the code (opening the registry key first, since `RegKey` isn't defined above):
```
RegistryKey RegKey = Registry.CurrentUser.OpenSubKey("Software\\Microsoft\\Windows\\CurrentVersion\\Internet Settings", true);
RegKey.SetValue("ProxyServer", YOURPROXY);
RegKey.SetValue("ProxyEnable", 1);
// These lines use the InternetSetOption import declared at the beginning of the program
// They cause the OS to refresh the settings, so the proxy change really takes effect
settingsReturn = InternetSetOption(IntPtr.Zero, INTERNET_OPTION_SETTINGS_CHANGED, IntPtr.Zero, 0);
refreshReturn = InternetSetOption(IntPtr.Zero, INTERNET_OPTION_REFRESH, IntPtr.Zero, 0);
```
|
Programmatically Set Browser Proxy Settings in C#
|
[
"",
"c#",
"proxy",
"registry",
""
] |
I have a stored procedure that returns multiple tables. How can I execute and read both tables?
I have something like this:
```
SqlConnection conn = new SqlConnection(CONNECTION_STRING);
SqlCommand cmd = new SqlCommand("sp_mult_tables",conn);
cmd.CommandType = CommandType.StoredProcedure;
IDataReader rdr = cmd.ExecuteReader();
```
I'm not sure how to read it... what's the best way to handle this type of query? I'm guessing I should read the data into a DataSet; what's the best way to do this?
Thanks.
|
Adapted from [MSDN](http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqldataadapter(VS.71).aspx):
```
using (SqlConnection conn = new SqlConnection(connection))
{
    DataSet dataset = new DataSet();
    SqlDataAdapter adapter = new SqlDataAdapter();
adapter.SelectCommand = new SqlCommand(query, conn);
adapter.Fill(dataset);
return dataset;
}
```
|
If you want to read the results into a DataSet, you'd be better using a DataAdapter.
But with a DataReader, first iterate through the first result set, then call NextResult to advance to the second result set.
|
How can I read multiple tables into a dataset?
|
[
"",
"c#",
"ado.net",
"dataset",
""
] |
As far as I can understand, when I new up a *Linq to SQL class*, it is the equivalent of new'ing up a *SqlConnection object*.
Suppose I have an object with two methods: `Delete()` and `SubmitChanges()`. Would it be wise of me to new up the *Linq to SQL class* in each of the methods, or would a private variable holding the *Linq to SQL class* - new'ed up by the constructor - be the way to go?
What I'm trying to avoid is a time-out.
**UPDATE:**
```
namespace Madtastic
{
public class Comment
{
private Boolean _isDirty = false;
private Int32 _id = 0;
private Int32 _recipeID = 0;
private String _value = "";
private Madtastic.User _user = null;
public Int32 ID
{
get
{
return this._id;
}
}
public String Value
{
get
{
return this._value;
}
set
{
this._isDirty = true;
this._value = value;
}
}
public Madtastic.User Owner
{
get
{
return this._user;
}
}
public Comment()
{
}
public Comment(Int32 commentID)
{
Madtastic.DataContext mdc = new Madtastic.DataContext();
var comment = (from c in mdc.Comments
where c.CommentsID == commentID
select c).FirstOrDefault();
if (comment != null)
{
this._id = comment.CommentsID;
this._recipeID = comment.RecipesID;
this._value = comment.CommentsValue;
this._user = new User(comment.UsersID);
}
mdc.Dispose();
}
public void SubmitChanges()
{
Madtastic.DataContext mdc = new Madtastic.DataContext();
var comment = (from c in mdc.Comments
where c.CommentsID == this._id
select c).FirstOrDefault();
if (comment != null && this._isDirty)
{
comment.CommentsValue = this._value;
}
else
{
Madtastic.Entities.Comment c = new Madtastic.Entities.Comment();
c.RecipesID = this._recipeID;
c.UsersID = this._user.ID;
c.CommentsValue = this._value;
mdc.Comments.InsertOnSubmit(c);
}
mdc.SubmitChanges();
mdc.Dispose();
}
public void Delete()
{
Madtastic.DataContext mdc = new Madtastic.DataContext();
var comment = (from c in mdc.Comments
where c.CommentsID == this._id
select c).FirstOrDefault();
if (comment != null)
{
mdc.Comments.DeleteOnSubmit(comment);
mdc.SubmitChanges();
this._isDirty = false;
this._id = 0;
this._recipeID = 0;
this._value = "";
this._user = null;
}
mdc.Dispose();
}
}
}
```
**REFACTORED CODE (according to Grank's spec):**
```
namespace Madtastic
{
public sealed class CommentNew : IDisposable
{
private Madtastic.DataContext _mdc;
private Madtastic.Entities.Comment _comment;
private Madtastic.User _user;
public Int32 ID
{
get
{
return this._comment.CommentsID;
}
}
public String Value
{
get
{
return this._comment.CommentsValue;
}
set
{
this._comment.CommentsValue = value;
}
}
public Madtastic.User Owner
{
get
{
return this._user;
}
}
        public CommentNew(Int32 commentID)
{
this._mdc = new Madtastic.DataContext();
this._comment = (from c in _mdc.Comments
where c.CommentsID == commentID
select c).FirstOrDefault();
if (this._comment == null)
{
this._comment = new Madtastic.Entities.Comment();
this._mdc.Comments.InsertOnSubmit(this._comment);
}
else
{
this._user = new Madtastic.User(this._comment.User.UsersID);
}
}
public void SubmitChanges()
{
this._mdc.SubmitChanges();
}
public void Delete()
{
this._mdc.Comments.DeleteOnSubmit(this._comment);
this.SubmitChanges();
}
void IDisposable.Dispose()
{
this._mdc.Dispose();
}
}
}
```
|
Having now reviewed the code sample you edited to post, I would definitely refactor your class to take advantage of LINQ-to-SQL's built in functionality. (I won't edit my previous comment because it's a better answer to the general question)
Your class's fields appear to be a pretty direct mapping of the columns on the Comments table in the database. Therefore you don't need to do most of what you're doing manually in this class. Most of the functionality could be handled by just having a private member of type Madtastic.Entities.Comment (and just mapping your properties to its properties if you have to maintain how this class interacts with the rest of the project). Then your constructor can just initialize a private member Madtastic.DataContext and set your private member Madtastic.Entities.Comment to the result of the LINQ query on it. If the comment is null, create a new one and call InsertOnSubmit on the DataContext. (but it doesn't make sense to submit changes yet because you haven't set any values for this new object anyway)
In your SubmitChanges, all you should have to do is call SubmitChanges on the DataContext. It keeps its own track of whether or not the data needs to be updated, it won't hit the database if it doesn't, so you don't need \_isDirty.
In your Delete(), all you should have to do is call DeleteOnSubmit on the DataContext.
You may in fact find with a little review that you don't need the Madtastic.Comment class at all, and the Madtastic.Entities.Comment LINQ-to-SQL class can act directly as your data access layer. It seems like the only practical differences are the constructor that takes a commentID, and the fact that the Entities.Comment has a UsersID property where your Madtastic.Comment class has a whole User. (However, if User is also a table in the database, and UsersID is a foreign key to its primary key, you'll find that LINQ-to-SQL has created a User object on the Entities.Comment object that you can access directly with comment.User)
If you find you can eliminate this class entirely, it might mean that you can further optimize your DataContext's life cycle by bubbling it up to the methods in your project that make use of Comment.
Edited to post the following example refactored code (apologies for any errors, as I typed it in notepad in a couple seconds rather than opening visual studio, and I wouldn't get intellisense for your project anyway):
```
namespace Madtastic
{
public class Comment
{
private Madtastic.DataContext mdc;
private Madtastic.Entities.Comment comment;
public Int32 ID
{
get
{
return comment.CommentsID;
}
}
public Madtastic.User Owner
{
get
{
return comment.User;
}
}
public Comment(Int32 commentID)
{
mdc = new Madtastic.DataContext();
comment = (from c in mdc.Comments
where c.CommentsID == commentID
select c).FirstOrDefault();
if (comment == null)
{
comment = new Madtastic.Entities.Comment();
mdc.Comments.InsertOnSubmit(comment);
}
}
public void SubmitChanges()
{
mdc.SubmitChanges();
}
public void Delete()
{
mdc.Comments.DeleteOnSubmit(comment);
SubmitChanges();
}
}
}
```
You will probably also want to implement IDisposable/using as a number of people have suggested.
|
Depends on to what you refer by a "LINQ-to-SQL class", and what the code in question looks like.
If you're talking about the DataContext object, and your code is a class with a long lifetime or your program itself, I believe it would be best to initialize it in the constructor. It's not really like creating and/or opening a new SqlConnection, it's actually very smart about managing its database connection pool and concurrency and integrity so that you don't need to think about it, that's part of the joy in my experience so far with LINQ-to-SQL. I've never seen a time-out problem occur.
One thing you should know is that it's very difficult to share table objects across DataContext scope, and it's really not recommended if you can avoid it. Detach() and Attach() can be bitchy. So if you need to pass around a LINQ-to-SQL object that represents a row in a table on your SQL database, you should try to design the life cycle of the DataContext object to encompass all the work you need to do on any object that comes out of it.
Furthermore, there's a lot of overhead that goes into instantiating a DataContext object, and a lot of overhead that is managed by it... If you're hitting the same few tables over and over it would be best to use the same DataContext instance, as it will manage its connection pool, and in some cases cache some things for efficiency. However, it's recommended to not have every table in your database loaded into your DataContext, only the ones you need, and if the tables being accessed are very separate in very separate circumstances, you can consider splitting them into multiple DataContexts, which gives you some options on when you initialize each one if the circumstances surrounding them are different.
|
Linq to SQL class lifespan
|
[
"",
"c#",
"linq-to-sql",
"oop",
"c#-3.0",
""
] |
Recent versions of PHP have a cache of filenames for knowing the real path of files, and `require_once()` and `include_once()` can take advantage of it.
There's a value you can set in your *php.ini* to set the size of the cache, but I have no idea how to tell what the size should be. The default value is 16k, but I see no way of telling how much of that cache we're using. The docs are vague:
[Determines the size of the realpath cache to be used by PHP. This value should be increased on systems where PHP opens many files, to reflect the quantity of the file operations performed.](https://www.php.net/manual/en/ini.core.php#ini.realpath-cache-size)
Yes, I can jack up the amount of cache allowed, and run tests with `ab` or some other testing, but I'd like something with a little more introspection than just timing from a distance.
|
You've probably already found this, but for those who come across this question, you can use `realpath_cache_size()` and `realpath_cache_get()` to figure out how much of the realpath cache is being used on your site and tune the settings accordingly.
|
Though I can't offer anything specific to your situation, my understanding is that 16k is pretty low for most larger PHP applications (particularly ones that use a framework like the [Zend Framework](http://framework.zend.com)). I'd say at least double the cache size if your application uses lots of includes and see where to go from there. You might also want to increase the TTL as long as your directory structure is pretty consistent.
|
How can I tune the PHP realpath cache?
|
[
"",
"php",
"optimization",
"require-once",
"realpath",
""
] |
I'm writing an article about editing pages in order to hand pick what you really want to print. There are many tools (like "Print What you like") but I also found this script. Anyone knows anything about it? I haven't found any kind of documentation or references.
```
javascript:document.body.contentEditable='true'; document.designMode='on'; void 0
```
Thanks!
|
The contentEditable property is what you want -- It's supported by IE, Safari (and by chrome as a byproduct), and I *think* firefox 3 (alas not FFX2). And hey, it's also part of HTML5 :D
Firefox 2 supports designMode, but that is restricted to individual frames, whereas the contentEditable property applies to individual elements, so you can have your editable content play more nicely with your page :D
[Edit (olliej): Removed example as contentEditable attribute doesn't get past SO's output filters (despite working in the preview) :( ]
[Edit (olliej): I've banged up a very simple [demo](http://www.nerget.com/contentEditableDemo.html) to illustrate how it behaves]
[Edit (olliej): So yes, the contentEditable attribute in the linked demo works fine in IE, Firefox, and Safari. Alas resizing is a css3 feature only webkit seems to support, and IE is doing its best to fight almost all of the CSS. *sigh*]
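As a minimal sketch of what toggling this looks like (the function name and the `doc` parameter are my own choices, made so the logic can run outside a browser; in a page you would pass the real `document`):

```javascript
// Switch a document in and out of edit mode using both mechanisms
// discussed above: per-element contentEditable and document-wide designMode.
function setEditable(doc, on) {
  doc.body.contentEditable = on ? 'true' : 'false';
  doc.designMode = on ? 'on' : 'off';
  return doc.designMode;
}

// Usage from a browser console or bookmarklet:
// setEditable(document, true);  // start editing the page
// setEditable(document, false); // lock it again
```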
|
document.designMode is supported in IE 4+ (which started it apparently) and FireFox 1.3+.
You turn it on and you can edit the content right in the browser, it's pretty trippy.
I've never used it before but it sounds like it would be pretty perfect for hand picking printable information.
Edited to say: It also appears to work in Google Chrome. I've only tested it in Chrome and Firefox, as those are the browsers in which I have a javascript console, so I can't guarantee it works in Internet Explorer as I've never personally used it. My understanding is that this was an IE-only property that the other browsers picked up and isn't currently in any standards, so I'd be surprised if Firefox and Chrome support it but IE stopped.
|
Java Script to edit page content on the fly
|
[
"",
"javascript",
"html",
"editing",
""
] |
Is there a Java equivalent to .NET's App.Config?
If not is there a standard way to keep you application settings, so that they can be changed after an app has been distributed?
|
For WebApps, web.xml can be used to store application settings.
Other than that, you can use the [Properties](http://java.sun.com/javase/6/docs/api/java/util/Properties.html) class to read and write properties files.
You may also want to look at the [Preferences](http://java.sun.com/javase/6/docs/api/java/util/prefs/Preferences.html) class, which is used to read and write system and user preferences. It's an abstract class, but you can get appropriate objects using the `userNodeForPackage(ClassName.class)` and `systemNodeForPackage(ClassName.class)`.
|
To put @Powerlord's suggestion (+1) of using the `Properties` class into example code:
```
public class SomeClass {
public static void main(String[] args){
String dbUrl = "";
String dbLogin = "";
String dbPassword = "";
if (args.length<3) {
//If no inputs passed in, look for a configuration file
URL configFile = SomeClass.class.getResource("/Configuration.cnf");
try {
InputStream configFileStream = configFile.openStream();
Properties p = new Properties();
p.load(configFileStream);
configFileStream.close();
dbUrl = (String)p.get("dbUrl");
dbLogin = (String)p.get("dbUser");
dbPassword = (String)p.get("dbPassword");
} catch (Exception e) { //IO or NullPointer exceptions possible in block above
System.out.println("Useful message");
System.exit(1);
}
} else {
//Read required inputs from "args"
dbUrl = args[0];
dbLogin = args[1];
dbPassword = args[2];
}
//Input checking on the three items here
//Real work here.
}
}
```
Then, at the root of the container (e.g. top of a jar file) place a file `Configuration.cnf` with the following content:
```
#Comments describing the file
#more comments
dbUser=username
dbPassword=password
dbUrl=jdbc\:mysql\://servername/databasename
```
This doesn't feel perfect (I'd be interested to hear improvements), but it's good enough for my current needs.
|
Java equivalent to app.config?
|
[
"",
"java",
"configuration-files",
""
] |
Anyone know a simple method to swap the background color of a webpage using JavaScript?
|
Modify the JavaScript property `document.body.style.background`.
For example:
```
function changeBackground(color) {
document.body.style.background = color;
}
window.addEventListener("load",function() { changeBackground('red') });
```
Note: this does depend a bit on how your page is put together, for example if you're using a DIV container with a different background colour you will need to modify the background colour of that instead of the document body.
|
You don't need AJAX for this, just some plain JavaScript setting the `background-color` property of the body element, like this:
```
document.body.style.backgroundColor = "#AA0000";
```
If you want to do it as if it was initiated by the server, you would have to poll the server and then change the color accordingly.
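A hedged sketch of that polling idea follows; the `/bgcolor` endpoint and the 5-second interval are assumptions of mine, not something the answer specifies:

```javascript
// Apply a colour (e.g. one fetched from the server) to the page background.
// The doc parameter stands in for the browser's `document` object.
function applyBackground(doc, color) {
  doc.body.style.backgroundColor = color;
  return doc.body.style.backgroundColor;
}

// In a browser you might poll like this (endpoint name is made up):
// setInterval(function () {
//   fetch('/bgcolor').then(r => r.text()).then(c => applyBackground(document, c));
// }, 5000);
```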
|
How do I change the background color with JavaScript?
|
[
"",
"javascript",
"css",
""
] |
I need to find out the time a function takes, in order to compute the performance of the application / function.
Are there any open-source Java APIs for doing this?
|
You're in luck as there are quite a few [open source Java profilers](http://java-source.net/open-source/profilers) available for you.
|
Take a look at the official [TPTP plugin](http://www.eclipse.org/tptp/) for Eclipse. This pretty much does all you describe and a (frikkin') whole lot more. I can really recommend it.
|
computing performance
|
[
"",
"java",
"performance",
"profiling",
""
] |
After discovering [Clojure](http://clojure.org) I have spent the last few days immersed in it.
What project types lend themselves to Java over Clojure, vice versa, and in combination?
What are examples of programs which you would have never attempted before Clojure?
|
Clojure lends itself well to [concurrent programming](http://clojure.org/concurrent_programming). It provides such wonderful tools for dealing with threading as Software Transactional Memory and mutable references.
As a demo for the Western Mass Developer's Group, Rich Hickey made an ant colony simulation in which each ant was its own thread and all of the variables were immutable. Even with a very large number of threads things worked great. This is not only because Rich is an amazing programmer, it's also because he didn't have to worry about locking while writing his code. You can check out his [presentation on the ant colony here](http://blip.tv/file/812787).
|
If you are going to try concurrent programming, then I think clojure is much better than what you get from Java out of the box. Take a look at this presentation to see why:
<http://blip.tv/file/812787>
I documented my first 20 days with Clojure on my blog
<http://loufranco.com/blog/files/category-20-days-of-clojure.html>
I started with the SICP lectures and then built a parallel prime number sieve. I also played around with macros.
|
How can I transition from Java to Clojure?
|
[
"",
"java",
"functional-programming",
"clojure",
"use-case",
""
] |
Many times I've seen links like these in HTML pages:
```
<a href='#' onclick='someFunc(3.1415926); return false;'>Click here !</a>
```
What's the effect of the `return false` in there?
Also, I don't usually see that in buttons.
Is this specified anywhere? In some spec in w3.org?
|
The return value of an event handler determines whether or not the default browser behaviour should take place as well. In the case of clicking on links, this would be following the link, but the difference is most noticeable in form submit handlers, where you can cancel a form submission if the user has made a mistake entering the information.
I don't believe there is a W3C specification for this. All the ancient JavaScript interfaces like this have been given the nickname "DOM 0", and are mostly unspecified. You may have some luck reading old Netscape 2 documentation.
The modern way of achieving this effect is to call `event.preventDefault()`, and this is specified in [the DOM 2 Events specification](http://www.w3.org/TR/DOM-Level-2-Events/events.html#Events-flow-cancelation).
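To sketch what that modern form looks like (a minimal illustration; the handler name and return value are my own, added only so the behaviour can be checked):

```javascript
// A click handler in the modern style: cancel the browser's default action
// explicitly with preventDefault() instead of relying on the return value.
function onLinkClick(event) {
  event.preventDefault(); // stops the browser from following the link's href
  // ...custom behaviour goes here...
  return 'handled';
}

// In a browser: link.addEventListener('click', onLinkClick);
```

Unlike `return false`, `preventDefault()` cancels only the default action; it does not also stop the event propagating (that would be `stopPropagation()`).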
|
You can see the difference with the following example:
```
<a href="http://www.google.co.uk/" onclick="return (confirm('Follow this link?'))">Google</a>
```
Clicking "Okay" returns true, and the link is followed. Clicking "Cancel" returns false and doesn't follow the link. If javascript is disabled the link is followed normally.
|
What's the effect of adding 'return false' to a click event listener?
|
[
"",
"javascript",
"html",
""
] |
This question is for C# 2.0 Winform.
For the moment I use checkboxes to select the days, like this: Monday [x], Tuesday [x], ... etc.
It works fine, but **is there a better way to get the day of the week?** (More than one day can be picked.)
|
Checkboxes are the standard UI component to use when selection of multiple items is allowed. From UI usability guru [Jakob Nielsen's](http://www.useit.com/jakob/) article on
[Checkboxes vs. Radio Buttons](http://www.useit.com/alertbox/20040927.html):
> "Checkboxes are used when there are lists of options and the user may select any number of choices, including zero, one, or several. In other words, each checkbox is independent of all other checkboxes in the list, so checking one box doesn't uncheck the others."
When designing a UI, it is important to use standard or conventional components for a given task. [Using non-standard components generally causes confusion](http://www.useit.com/alertbox/20040913.html). For example, it would be possible to use a combo box which would allow multiple items to be selected. However, this would require the user to use Ctrl + click on the desired items, an action which is not terribly intuitive for most people.
|
checkbox seems appropriate.
|
C# Day from Week picker component
|
[
"",
"c#",
"winforms",
""
] |
The problem is in the title - IE is misbehaving and is saying that there is a script running slowly - FF and Chrome don't have this problem.
How can I find the problem? There's a lot of JS on that page, and checking by hand is not a good idea.
**EDIT :** It's a page from a project i'm working on... but I need a tool to find the problem.
**End :** It turned out to be the UpdatePanel - somehow it would get "confused" and would take too long to process something. I just threw it out the window - will only use jQuery from now on :D.
And I'm selecting Remy Sharp's answer because I really didn't know about the tool and it seems pretty cool.
|
Get yourself a copy of the IBM Page Profiler:
<https://www.ibm.com/developerworks/community/groups/service/html/communityview?communityUuid=61d74777-1701-4014-bfc0-96067ed50156>
It's free (always a win). Start it up in the background, give it a few seconds, then refresh the page in IE. Go back to the profiler and it will list out all the resources used on the page and give you detailed profile information - in particular where JavaScript is taking a long time to execute.
It should be a good start to finding the source of your problem.
If the script tags are inline, I'd suggest creating a local copy of the file and separating out the script tags to separate files if you can.
|
Long running scripts are detected differently by different browsers:
* IE will raise the warning once 5 million statements have been executed ([more info on MSDN](http://support.microsoft.com/kb/175500))
* Firefox will warn if the script takes longer than 10 seconds ([more info on MDN](http://support.mozilla.com/en-US/kb/Warning%20Unresponsive%20script))
* Safari will warn if the script takes longer than 5 seconds
* Chrome (1.0) has no set limit and will simply keep trying until an OutOfMemory exception at which point it crashes
* Opera will just continue to run forever, without warning.
[Nicholas Zakas has written an excellent article covering this topic.](http://www.nczonline.net/blog/2009/01/05/what-determines-that-a-script-is-long-running/)
As such - the best way to avoid these problems is by reducing looping, recursion and DOM manipulation.
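When a loop genuinely has too much work to do, the usual technique Zakas describes is to split it into chunks and yield back to the event loop between chunks, so no single execution trips the browser's watchdog. A minimal sketch (the names `processInChunks` and `chunkSize` are illustrative, not from the article):

```javascript
// Process a large array in small chunks, yielding to the event loop
// between chunks so the browser never sees one long-running script.
function processInChunks(items, processFn, chunkSize, done) {
  var i = 0;
  function next() {
    var end = Math.min(i + chunkSize, items.length);
    for (; i < end; i++) {
      processFn(items[i]);
    }
    if (i < items.length) {
      setTimeout(next, 0); // yield, then continue with the next chunk
    } else if (done) {
      done();
    }
  }
  next();
}
```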
|
a script on this page is causing ie to run slowly
|
[
"",
"javascript",
"internet-explorer",
""
] |
If I have $var defined in Page1.php and in Page2.php I have
```
//Page2.php
include('Page1.php');
echo $var;
```
For what reasons will it not print the value of $var to the screen? The files are in the same directory so paths shouldn't be the issue. I've checked the php.ini file and nothing really jumps out at me. Any ideas?
|
If it were a path problem you would see a warning in your error log. You could also change to require instead of include and it would become obvious.
```
echo getcwd();
```
You can also print your working directory to figure out what's wrong.
Is $var created in a function? If so, make sure you have
```
global $var;
```
before the first assignment in that function.
|
Possible causes:
* The current working path isn't always the same as the file's. For example, if Page2.php is being included at a higher level, that higher level will be the path. Either make sure you've loaded Page2.php directly or move Page1.php accordingly.
* Make sure `$var` is really what you expect it to be. Echo it in Page1.php to confirm. (this also checks the right file is being included)
* If the source isn't really this simple, make sure you're not undefining/clearing `$var` anywhere.
|
In PHP, for what reasons can I not include a variable from external file?
|
[
"",
"php",
""
] |
Take this simple *C# LINQ* query, and imagine that `db.Numbers` is an *SQL* table with one column `Number`:
```
var result =
from n in db.Numbers
where n.Number < 5
select n.Number;
```
This will run very efficiently in *C#*, because it generates an *SQL* query something like
```
select Number from Numbers where Number < 5
```
What it **doesn't** do is select *all* the numbers from the database, and then filter them in *C#*, as it might appear to do at first.
*Python* supports a similar syntax:
```
result = [n.Number for n in Numbers if n.Number < 5]
```
But the `if` clause here does the filtering on the client side, rather than the server side, which is much less efficient.
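To make the client-side filtering concrete, here is a minimal, self-contained sketch (the `Row` class and in-memory list are stand-ins for a real table):

```python
# Stand-in for a database table: every "row" already lives in memory.
class Row:
    def __init__(self, number):
        self.Number = number

Numbers = [Row(n) for n in range(10)]

# The comprehension visits every row and applies the predicate in
# Python itself -- nothing is pushed down to a server.
result = [n.Number for n in Numbers if n.Number < 5]
print(result)  # [0, 1, 2, 3, 4]
```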
Is there something as efficient as *LINQ* in *Python*? (I'm currently evaluating *Python* vs. *IronPython* vs. *Boo*, so an answer that works in any of those languages is fine.)
|
I believe that when IronPython 2.0 is complete, it will have LINQ support (see [this thread](http://groups.google.com/group/ironpy/browse_thread/thread/eb6b9eb2241cc68e) for some example discussion). Right now you should be able to write something like:
```
Queryable.Select(Queryable.Where(someInputSequence, somePredicate), someFuncThatReturnsTheSequenceElement)
```
Something better might have made it into IronPython 2.0b4 - there's a lot of [current discussion](http://ironpython-urls.blogspot.com/2008/09/dlr-namespace-change-fire-drill.html) about how naming conflicts were handled.
|
[sqlsoup](http://www.sqlalchemy.org/trac/wiki/SqlSoup) in sqlalchemy gives you the quickest solution in Python, I think, if you want a clear(ish) one-liner. Look at the page to see.
It should be something like...
```
result = [n.Number for n in db.Numbers.filter(db.Numbers.Number < 5).all()]
```
|
Can you do LINQ-like queries in a language like Python or Boo?
|
[
"",
"python",
"linq",
"linq-to-sql",
"ironpython",
"boo",
""
] |
I have 2 classes with a LINQ association between them i.e.:
```
Table1: Table2:
ID ID
Name Description
ForiegnID
```
The association here is between **Table1.ID -> Table2.ForiegnID**
I need to be able to change the value of Table2.ForiegnID, however I can't and think it is because of the association (as when I remove it, it works).
Therefore, does anyone know how I can change the value of the associated field Table2.ForiegnID?
|
Check out the designer.cs file. This is the key's property
```
[Column(Storage="_ParentKey", DbType="Int")]
public System.Nullable<int> ParentKey
{
get
{
return this._ParentKey;
}
set
{
if ((this._ParentKey != value))
{
//This code is added by the association
if (this._Parent.HasLoadedOrAssignedValue)
{
throw new System.Data.Linq.ForeignKeyReferenceAlreadyHasValueException();
}
//This code is present regardless of association
this.OnParentKeyChanging(value);
this.SendPropertyChanging();
this._ParentKey = value;
this.SendPropertyChanged("ParentKey");
this.OnServiceAddrIDChanged();
}
}
}
```
And this is the associations property.
```
[Association(Name="Parent_Child", Storage="_Parent", ThisKey="ParentKey", IsForeignKey=true, DeleteRule="CASCADE")]
public Parent Parent
{
get
{
return this._Parent.Entity;
}
set
{
Parent previousValue = this._Parent.Entity;
if (((previousValue != value)
|| (this._Parent.HasLoadedOrAssignedValue == false)))
{
this.SendPropertyChanging();
if ((previousValue != null))
{
this._Parent.Entity = null;
previousValue.Exemptions.Remove(this);
}
this._Parent.Entity = value;
if ((value != null))
{
value.Exemptions.Add(this);
this._ParentKey = value.ParentKey;
}
else
{
this._ParentKey = default(Nullable<int>);
}
this.SendPropertyChanged("Parent");
}
}
}
```
It's best to assign changes through the association instead of the key. That way, you don't have to worry about whether the parent is loaded.
|
```
Table1: Table2:
ID ID
Name Description
ForeignID
```
If you set `Table2.ForeignID = 2` directly, you receive an error.
Example: you can change the ForeignID field in Table2 like this:
```
Table2 table = dataContext.Table2.Single(d => d.ID == Id);
table.Table1 = dataContext.Table1.Single(d => d.ID == newId);
```
Where the variable `newId` is the ID of the record in Table1 that you would like to associate with the record in Table2.
|
How to change the value of associated field
|
[
"",
"c#",
"linq",
"linq-to-sql",
"associations",
""
] |
Just wondering why people like case sensitivity in a programming language? I'm not trying to start a flame war, just curious, that's all.
Personally I have never really liked it because I find my productivity goes down whenever I have tried a language that has case sensitivity, mind you I am slowly warming up/getting used to it now that I'm using C# and F# a lot more than I used to.
So why do you like it?
Cheers
|
Consistency. Code is more difficult to read if "foo", "Foo", "fOO", and "fOo" are considered to be identical.
SOME PEOPLE WOULD WRITE EVERYTHING IN ALL CAPS, MAKING EVERYTHING LESS READABLE.
Case sensitivity makes it easy to use the "same name" in different ways, according to a capitalization convention, e.g.,
```
Foo foo = ... // "Foo" is a type, "foo" is a variable with that type
```
|
An advantage of VB.NET is that although it is not case-sensitive, the IDE automatically re-formats everything to the "official" case for an identifier you are using - so it's easy to be consistent, easy to read.
Disadvantage is that I hate VB-style syntax, and much prefer C-style operators, punctuation and syntax.
In C# I find I'm always hitting Ctrl-Space to save having to use the proper type.
Just because you can name things which differ only by case doesn't mean it's a good idea; it can lead to misunderstandings if a lot of that leaks out to larger scopes. I recommend steering clear of it at the application or subsystem level, and allowing it only internally to a function, method or class.
|
Why do people like case sensitivity?
|
[
"",
"c#",
"case-sensitive",
""
] |
Is there a good way to have a `Map<String, ?>` get and put ignoring case?
|
TreeMap extends Map and supports custom comparators.
String provides a default case insensitive comparator.
So:
```
final Map<String, ...> map = new TreeMap<>(String.CASE_INSENSITIVE_ORDER);
```
The comparator does not take locale into account. Read more about it in its JavaDoc.
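For illustration, a minimal usage sketch (the keys and values here are arbitrary):

```java
import java.util.Map;
import java.util.TreeMap;

public class CaseInsensitiveDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new TreeMap<>(String.CASE_INSENSITIVE_ORDER);
        map.put("Content-Type", 1);

        // Lookups ignore case...
        System.out.println(map.get("content-type"));

        // ...and a put with different casing replaces the same entry.
        map.put("CONTENT-TYPE", 2);
        System.out.println(map.size());
    }
}
```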
|
You could use [CaseInsensitiveMap](http://commons.apache.org/proper/commons-collections/javadocs/api-release/org/apache/commons/collections4/map/CaseInsensitiveMap.html) from Apache's Commons Collections.
|
Is there a good way to have a Map<String, ?> get and put ignoring case?
|
[
"",
"java",
"dictionary",
"case-insensitive",
""
] |
Which is the most correct code?
```
if (HttpContext.Current.Response.Cookies[authCookieName] != null) {
HttpContext.Current.Response.Cookies[authCookieName].Value = "New Value";
}
```
or
```
if (HttpContext.Current != null)
if (HttpContext.Current.Response != null)
if (HttpContext.Current.Response.Cookies != null)
if (HttpContext.Current.Response.Cookies[authCookieName] != null)
HttpContext.Current.Response.Cookies[authCookieName].Value = "New Value";
```
|
If any one of HttpContext, HttpContext.Current, HttpContext.Current.Response, or Http.Current.Response.Cookies is null, you're already in trouble. Let the exception happen and fix your web server.
|
Both are good, assuming that you have already checked everything else that needs to be checked first. E.g.:
```
private bool CheckSuspendersAndBelt()
{
try
{
//ensure that true is true...
if (true == true)
{
//...and that false is false...
if (false == false)
{
//...and that true and false are not equal...
if (false != true)
{
//don't proceed if we don't have at least one processor
if (System.Environment.ProcessorCount > 0)
{
//and if there is no system directory then something is wrong
if (System.Environment.SystemDirectory != null)
{
//hopefully the code is running under some version of the CLR...
if (System.Environment.Version != null)
{
//we don't want to proceed if we're not in a process...
if (System.Diagnostics.Process.GetCurrentProcess() != null)
{
//and code running without a thread would not be good...
if (System.Threading.Thread.CurrentThread != null)
{
//finally, make sure instantiating an object really results in an object...
if (typeof(System.Object) == (new System.Object()).GetType())
{
//good to go
return true;
}
}
}
}
}
}
}
}
}
return false;
}
catch
{
return false;
}
}
```
(sorry, couldn't resist... :) )
|
Which is correct way to check for Null exception?
|
[
"",
"c#",
"asp.net",
""
] |
It looks like I had a fundamental misunderstanding about C++ :<
I like the polymorphic container solution. Thank you SO, for bringing that to my attention :)
---
So, we have a need to create a relatively generic container type object. It also happens to encapsulate some business related logic. However, we need to store essentially arbitrary data in this container - everything from primitive data types to complex classes.
Thus, one would immediately jump to the idea of a template class and be done with it. However, I have noticed C++ polymorphism and templates do not play well together. Being that there is some complex logic that we are going to have to work, I would rather just stick with either templates OR polymorphism, and not try to fight C++ by making it do both.
Finally, given that I want to do one or the other, I would prefer polymorphism. I find it much easier to represent constraints like "this container contains Comparable types" - a la java.
Bringing me to the topic of question: At the most abstract, I imagine that I could have a "Container" pure virtual interface that has something akin to "push(void\* data) and pop(void\* data)" (for the record, I am not actually trying to implement a stack).
However, I don't really like void\* at the top level, not to mention the signature is going to change every time I want to add a constraint to the type of data a concrete container can work with.
Summarizing: We have relatively complex containers that have various ways to retrieve elements. We want to be able to vary the constraints on the elements that can go into the containers. Elements should work with multiple kinds of containers (so long as they meet the constraints of that particular container).
Edit: I should also mention that the containers themselves need to be polymorphic. That is my primary reason for not wanting to use templated C++.
So - should I drop my love for Java type interfaces and go with templates? Should I use void\* and statically cast everything? Or should I go with an empty class definition "Element" that declares nothing and use that as my top level class in the "Element" hierarchy?
One of the reasons why I love Stack Overflow is that many of the responses provide some interesting insight on other approaches that I hadn't even considered. So thank you in advance for your insights and comments.
|
Can you not have a root Container class that contains elements:
```
template <typename T>
class Container
{
public:
// You'll likely want to use shared_ptr<T> instead.
virtual void push(T *element) = 0;
virtual T *pop() = 0;
virtual void InvokeSomeMethodOnAllItems() = 0;
};
template <typename T>
class List : public Container<T>
{
iterator begin();
iterator end();
public:
virtual void push(T *element) {...}
virtual T* pop() { ... }
virtual void InvokeSomeMethodOnAllItems()
{
for(iterator currItem = begin(); currItem != end(); ++currItem)
{
T* item = *currItem;
item->SomeMethod();
}
}
};
```
These containers can then be passed around polymorphically:
```
class Item
{
public:
virtual void SomeMethod() = 0;
};
class ConcreteItem : public Item
{
public:
virtual void SomeMethod()
{
// Do something
}
};
void AddItemToContainer(Container<Item> &container, Item *item)
{
container.push(item);
}
...
List<Item> listInstance;
AddItemToContainer(listInstance, new ConcreteItem());
listInstance.InvokeSomeMethodOnAllItems();
```
This gives you the Container interface in a type-safe generic way.
If you want to add constraints to the type of elements that can be contained, you can do something like this:
```
class Item
{
public:
virtual void SomeMethod() = 0;
typedef int CanBeContainedInList;
};
template <typename T>
class List : public Container<T>
{
typedef typename T::CanBeContainedInList ListGuard;
// ... as before
};
```
|
You can look at using a standard container of [boost::any](http://www.boost.org/doc/libs/1_36_0/doc/html/any.html) if you are storing truly arbitrary data into the container.
It sounds more like you would rather have something like a [boost::ptr\_container](http://www.boost.org/doc/libs/1_36_0/libs/ptr_container/doc/ptr_container.html) where anything that *can* be stored in the container has to derive from some base type, and the container itself can only give you reference's to the base type.
|
C++ alternatives to void* pointers (that isn't templates)
|
[
"",
"c++",
"templates",
"pointers",
"polymorphism",
""
] |
At the moment, I'm creating an XML file in Java and displaying it in a JSP page by transforming it with XSL/XSLT. Now I need to take that XML file and display the same information in a PDF. Is there a way I can do this by using some kind of XSL file?
I've seen the [iText](http://www.lowagie.com/iText/) Java-PDF library, but I can't find any way to use it with XML and a stylesheet.
Any assistance would be much appreciated. Thanks in advance!
|
You can use XSL Formatting objects.
Here are some good articles on how to do it:
* <http://www.xml.com/pub/a/2001/01/17/xsl-fo/index.html>
* <http://www.xml.com/pub/a/2001/01/24/xsl-fo/index.html?page=1>
* <http://www.javaworld.com/javaworld/jw-04-2006/jw-0410-html.html>
|
### A - Explanation
You should use the **Apache FOP** framework to generate **pdf** output. You simply provide data in **xml** format and render the page with an **xsl-fo** file, specifying parameters like *margin* and *page layout* in this **xsl-fo** file.
I'll provide a simple demo; I use the **maven** build tool to gather the needed jar files. Please note that at the end of the page there is an svg graphic embedded in the pdf - I also want to demonstrate that you can embed svg graphics inside a pdf.
### B - Sample XML input data
```
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="application/xml"?>
<users-data>
<header-section>
<data-type id="019">User Bill Data</data-type>
<process-date>Thursday December 9 2016 00:04:29</process-date>
</header-section>
<user-bill-data>
<full-name>John Doe</full-name>
<postal-code>34239</postal-code>
<national-id>123AD329248</national-id>
<price>17.84</price>
</user-bill-data>
<user-bill-data>
<full-name>Michael Doe</full-name>
<postal-code>54823</postal-code>
<national-id>942KFDSCW322</national-id>
<price>34.50</price>
</user-bill-data>
<user-bill-data>
<full-name>Jane Brown</full-name>
<postal-code>66742</postal-code>
<national-id>ABDD324KKD8</national-id>
<price>69.36</price>
</user-bill-data>
</users-data>
```
### C - The XSL-FO Template
```
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:fo="http://www.w3.org/1999/XSL/Format" version="1.0">
<xsl:output encoding="UTF-8" indent="yes" method="xml" standalone="no" omit-xml-declaration="no"/>
<xsl:template match="users-data">
<fo:root language="EN">
<fo:layout-master-set>
<fo:simple-page-master master-name="A4-portrail" page-height="297mm" page-width="210mm" margin-top="5mm" margin-bottom="5mm" margin-left="5mm" margin-right="5mm">
<fo:region-body margin-top="25mm" margin-bottom="20mm"/>
<fo:region-before region-name="xsl-region-before" extent="25mm" display-align="before" precedence="true"/>
</fo:simple-page-master>
</fo:layout-master-set>
<fo:page-sequence master-reference="A4-portrail">
<fo:static-content flow-name="xsl-region-before">
<fo:table table-layout="fixed" width="100%" font-size="10pt" border-color="black" border-width="0.4mm" border-style="solid">
<fo:table-column column-width="proportional-column-width(20)"/>
<fo:table-column column-width="proportional-column-width(45)"/>
<fo:table-column column-width="proportional-column-width(20)"/>
<fo:table-body>
<fo:table-row>
<fo:table-cell text-align="left" display-align="center" padding-left="2mm">
<fo:block>
Bill Id:<xsl:value-of select="header-section/data-type/@id"/>
, Date: <xsl:value-of select="header-section/process-date"/>
</fo:block>
</fo:table-cell>
<fo:table-cell text-align="center" display-align="center">
<fo:block font-size="150%">
<fo:basic-link external-destination="http://www.example.com">XXX COMPANY</fo:basic-link>
</fo:block>
<fo:block space-before="3mm"/>
</fo:table-cell>
<fo:table-cell text-align="right" display-align="center" padding-right="2mm">
<fo:block>
<xsl:value-of select="data-type"/>
</fo:block>
<fo:block display-align="before" space-before="6mm">Page <fo:page-number/> of <fo:page-number-citation ref-id="end-of-document"/>
</fo:block>
</fo:table-cell>
</fo:table-row>
</fo:table-body>
</fo:table>
</fo:static-content>
<fo:flow flow-name="xsl-region-body" border-collapse="collapse" reference-orientation="0">
<fo:block>MONTHLY BILL REPORT</fo:block>
<fo:table table-layout="fixed" width="100%" font-size="10pt" border-color="black" border-width="0.35mm" border-style="solid" text-align="center" display-align="center" space-after="5mm">
<fo:table-column column-width="proportional-column-width(20)"/>
<fo:table-column column-width="proportional-column-width(30)"/>
<fo:table-column column-width="proportional-column-width(25)"/>
<fo:table-column column-width="proportional-column-width(50)"/>
<fo:table-body font-size="95%">
<fo:table-row height="8mm">
<fo:table-cell>
<fo:block>Full Name</fo:block>
</fo:table-cell>
<fo:table-cell>
<fo:block>Postal Code</fo:block>
</fo:table-cell>
<fo:table-cell>
<fo:block>National ID</fo:block>
</fo:table-cell>
<fo:table-cell>
<fo:block>Payment</fo:block>
</fo:table-cell>
</fo:table-row>
<xsl:for-each select="user-bill-data">
<fo:table-row>
<fo:table-cell>
<fo:block>
<xsl:value-of select="full-name"/>
</fo:block>
</fo:table-cell>
<fo:table-cell>
<fo:block>
<xsl:value-of select="postal-code"/>
</fo:block>
</fo:table-cell>
<fo:table-cell>
<fo:block>
<xsl:value-of select="national-id"/>
</fo:block>
</fo:table-cell>
<fo:table-cell>
<fo:block>
<xsl:value-of select="price"/>
</fo:block>
</fo:table-cell>
</fo:table-row>
</xsl:for-each>
</fo:table-body>
</fo:table>
<fo:block id="end-of-document">
<fo:instream-foreign-object>
<svg width="200mm" height="150mm" version="1.1" xmlns="http://www.w3.org/2000/svg">
<path d="M153 334
C153 334 151 334 151 334
C151 339 153 344 156 344
C164 344 171 339 171 334
C171 322 164 314 156 314
C142 314 131 322 131 334
C131 350 142 364 156 364
C175 364 191 350 191 334
C191 311 175 294 156 294
C131 294 111 311 111 334
C111 361 131 384 156 384
C186 384 211 361 211 334
C211 300 186 274 156 274" style="fill:yellow;stroke:red;stroke-width:2"/>
</svg>
</fo:instream-foreign-object>
</fo:block>
</fo:flow>
</fo:page-sequence>
</fo:root>
</xsl:template>
</xsl:stylesheet>
```
### D - Project Directory Structure
[](https://i.stack.imgur.com/f8dV5.png)
### E - Pom file
```
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.levent.fopdemo</groupId>
<artifactId>apache-fop-demo</artifactId>
<packaging>jar</packaging>
<version>1.0-SNAPSHOT</version>
<name>apache-fop-demo</name>
<url>http://maven.apache.org</url>
<properties>
<fop.version>2.1</fop.version>
</properties>
<dependencies>
<!-- https://mvnrepository.com/artifact/org.apache.xmlgraphics/fop -->
<dependency>
<groupId>org.apache.xmlgraphics</groupId>
<artifactId>fop</artifactId>
<version>${fop.version}</version>
</dependency>
</dependencies>
<build>
<finalName>Apache Fop Demo</finalName>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.5.1</version>
<configuration>
<source>1.8</source>
<target>1.8</target>
</configuration>
</plugin>
</plugins>
</build>
</project>
```
### F - Demo Code: PdfGenerationDemo.java
```
package com.levent.fopdemo;
import java.io.File;
import java.io.IOException;
import java.io.OutputStream;
import javax.xml.transform.Result;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerException;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.sax.SAXResult;
import javax.xml.transform.stream.StreamSource;
import org.apache.fop.apps.FOPException;
import org.apache.fop.apps.FOUserAgent;
import org.apache.fop.apps.Fop;
import org.apache.fop.apps.FopFactory;
import org.apache.fop.apps.MimeConstants;
public class PdfGenerationDemo
{
public static final String RESOURCES_DIR;
public static final String OUTPUT_DIR;
static {
RESOURCES_DIR = "src//main//resources//";
OUTPUT_DIR = "src//main//resources//output//";
}
public static void main( String[] args )
{
try {
convertToPDF();
} catch (FOPException | IOException | TransformerException e) {
e.printStackTrace();
}
}
public static void convertToPDF() throws IOException, FOPException, TransformerException {
// the XSL FO file
File xsltFile = new File(RESOURCES_DIR + "//template.xsl");
// the XML file which provides the input
StreamSource xmlSource = new StreamSource(new File(RESOURCES_DIR + "//data.xml"));
// create an instance of fop factory
FopFactory fopFactory = FopFactory.newInstance(new File(".").toURI());
// a user agent is needed for transformation
FOUserAgent foUserAgent = fopFactory.newFOUserAgent();
// Setup output
OutputStream out;
out = new java.io.FileOutputStream(OUTPUT_DIR + "//output.pdf");
try {
// Construct fop with desired output format
Fop fop = fopFactory.newFop(MimeConstants.MIME_PDF, foUserAgent, out);
// Setup XSLT
TransformerFactory factory = TransformerFactory.newInstance();
Transformer transformer = factory.newTransformer(new StreamSource(xsltFile));
// Resulting SAX events (the generated FO) must be piped through to
// FOP
Result res = new SAXResult(fop.getDefaultHandler());
// Start XSLT transformation and FOP processing
// That's where the XML is first transformed to XSL-FO and then
// PDF is created
transformer.transform(xmlSource, res);
} finally {
out.close();
}
}
}
```
### G - Sample Output: output.pdf
[](https://i.stack.imgur.com/yQ4In.png)
|
How do you create a PDF from XML in Java?
|
[
"",
"java",
"xml",
"pdf",
"xslt",
"itext",
""
] |
I’m having an issue where a drop down list in IE 6/7 is behaving as such:

You can see that the drop down `width` is not wide enough to display the whole text without expanding the overall drop down list.
However in Firefox, there is no issue as it `expands the width` accordingly. This is the behaviour we want in IE 6/7:

We’ve looked at various ways to utilize the `onfocus, onblur, onchange, keyboard and mouse events` to attempt to solve the problem but still some issues.
I was wondering if anyone has solved this issue in IE 6/7 without using any toolkits/frameworks (YUI, Ext-JS, jQuery, etc…).
|
[This guy](http://www.dougboude.com/blog/1/2008/05/Viewing-Option-Text-in-IE7-thats-Wider-than-the-Select-List.cfm) had the same problem as you and he came up with a solution. It is a bit of a hack and depends on how you have your UI setup, but it is an option. I hope it helps.
**edit**
The link that I started off looking for was actually [this one](http://www.hedgerwow.com/360/dhtml/ui_select_with_fixed_width/demo.php), which is the same one Tim suggested. I think it is a better solution than my original find. **2nd edit** This solution is actually dependent on the YUI framework, but I wouldn't imagine replicating the main idea behind it being too hard. Otherwise, the 1st link is alright too, and much simpler.
Good luck.
|
Would something like this be doable in your situation?
<http://www.hedgerwow.com/360/dhtml/ui_select_with_fixed_width/demo.php>
The width of the dropdown grows/shrinks during a mouseover.
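The basic trick behind both links can be sketched in a few lines of plain JavaScript (assumption: the select has a fixed CSS width to restore; the function name is illustrative):

```javascript
// Classic IE6/7 workaround: let the <select> grow to fit its widest
// option while it has focus or the mouse is over it, then snap back
// to the fixed layout width afterwards.
function makeSelfExpanding(sel, fixedWidth) {
  sel.onfocus = sel.onmouseover = function () {
    this.style.width = 'auto';     // grow to the widest option
  };
  sel.onblur = sel.onmouseout = function () {
    this.style.width = fixedWidth; // restore the layout width
  };
}
```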
|
Drop Down List Issue
|
[
"",
"javascript",
"internet-explorer",
"drop-down-menu",
"cross-browser",
"html-select",
""
] |
How do you save data from ExtJS form? Load data from the business layer into form or grid?
|
I think I found the best solution:
<http://developmentalmadness.blogspot.com/2008/07/using-extjs-with-wcf.html>
|
For me, I have used an ASHX page to push straight XML - and then use the ExtJS data reader to read.. then, say using a form etc, I push the form data straight back to another ASHX page to interrogate/post to the DB.. darned if I know the best way - but it suited me, and seems very quick and stable, and most importantly it is easier to follow/debug.
Here is some code example if it helps... hopefully not to hinder!
## GETTING DATA
As you will see, the URL to get the data is getting a simple ASHX (generic handler) .NET page which will return straight XML...
```
// Define the core service page to get the data (we use this when reloading)
var url = '/pagedata/getbizzbox.ashx?duration=today';
var store = new Ext.data.GroupingStore(
{
// Define the source for the bizzbox grid (see above url def). We can pass (via the slider)
// the days param as necessary to reload the grid
url: url,
// Define an XML reader to read /pagedata/getbizzbox.ashx XML results
reader: new Ext.data.XmlReader(
{
// Define the RECORD node (i.e. in the XML <record> is the main row definition), and we also
// need to define what field is the ID (row index), and what node returns the total records count
record: 'record',
id: 'inboxID',
totalRecords: 'totalrecords'
},
// Setup mapping of the fields
['inboxID', 'messageCreated', 'subject', 'message', 'messageOpened', 'messageFrom', 'messageFromID', 'groupedMessageDate']),
// Set the default sort scenario, and which column will be grouped
sortInfo: { field: 'groupedMessageDate', direction: "DESC" },
groupField: 'groupedMessageDate'
}); // end of Ext.data.store
```
## DATA TO THE EXTJS GRID
Ok, I have some extra code here that creates a toolbar in the top part of the grid which you can ignore...
```
var grid = new Ext.grid.GridPanel(
{
// Define the store we are going to use - i.e. from above definition
store: store,
// Define column structs
// { header: "Received", width: 180, dataIndex: 'messageCreated', sortable: true, renderer: Ext.util.Format.dateRenderer('d-M-Y'), dataIndex: 'messageCreated' },
columns: [
{ header: "ID", width: 120, dataIndex: 'inboxID', hidden: true },
{ header: "Received", width: 180, dataIndex: 'messageCreated', sortable: true },
{ header: "Subject", width: 115, dataIndex: 'subject', sortable: false },
{ header: "Opened", width: 100, dataIndex: 'messageOpened', hidden: true, renderer: checkOpened },
{ header: "From", width: 100, dataIndex: 'messageFrom', sortable: true },
{ header: "FromID", width: 100, dataIndex: 'messageFromID', hidden: true },
{ header: "Received", width: 100, dataIndex: 'groupedMessageDate', hidden: true }
],
// Set the row selection model to use
gridRowModel: new Ext.grid.RowSelectionModel({ singleSelect: true }),
// Set the grouping configuration
view: new Ext.grid.GroupingView(
{
forceFit: true,
groupTextTpl: '{text} ({[values.rs.length]} {[values.rs.length > 1 ? "Messages" : "Message"]})'
}),
// Render the grid with sizing/title etc
frame: true,
collapsible: false,
title: 'BizzBox',
iconCls: 'icon-grid',
renderTo: 'bizzbox',
width: 660,
height: 500,
stripeRows: true,
// Setup the top bar within the message grid - this hosts the various buttons we need to create a new
// message, delete etc
tbar: [
// New button pressed - show the NEW WINDOW to allow a new message be created
{
text: 'New',
handler: function()
{
// We need to load the contacts, howver we only load the contacts ONCE to save
// bandwidth - if new contacts are added, this page would have been destroyed anyway.
if(contactsLoaded==false)
{
contactStore.load();
contactsLoaded=true;
}
winNew.show();
}
},
// Delete button pressed
// We need to confirm deletion, then get the ID of the message to physically delete from DB and grid
{
text: 'Delete', handler: function()
{
Ext.MessageBox.confirm('Delete message', 'are you sure you wish to delete this message?', function(btn) {
// If selected YES, get a handle to the row, and delete
if (btn == 'yes')
{
// Get the selected row
var rec = grid.getSelectionModel().getSelected();
if(rec==null)
{
Ext.Msg.show(
{
title:'No message selected',
msg: 'please ensure you select a message by clicking once on the required message before selecting delete',
buttons: Ext.Msg.OK,
icon: Ext.MessageBox.QUESTION
});
}
// Proceed to delete the selected message
else
{
var mesID = rec.get('inboxID');
// AJAX call to delete the message
Ext.Ajax.request(
{
url: '/postdata/bizzbox_message_delete.ashx',
params: { inboxID: mesID },
// Check any call failures
failure: function()
{
Ext.Msg.show(
{
title: 'An error has occured',
msg: 'Having a problem deleting.. please try again later',
buttons: Ext.Msg.OK,
icon: Ext.MessageBox.ERROR
})
}, // end of failure check
// Success check
success: function()
{
// Need to remove the row from the datastore (which doesn't impact
// a reload of the data)
store.remove(rec);
}
}); // end if delete ajax call
} // end of ELSE for record selected or not
} // end of YES button click
})
} // end of delete button pressed
}] // end of tbar (toolbar def)
}); // end of grid def
```
## POSTING DATA FROM A FORM TO THE BACKEND
Again, note the url in the first part of the definition.. going to send the posted form data back to another ASHX page to then send to the DB...
```
// ---------------------------------------------------------------------------------------------
// DEFINE THE REPLY FORM
// This is used to show the existing message details, and allows the user to respond
// ---------------------------------------------------------------------------------------------
var frmReply = new Ext.form.FormPanel(
{
baseCls: 'x-plain',
labelWidth: 55,
method: 'POST',
url: '/postdata/bizzbox_message_reply.ashx',
items: [
{
xtype: 'textfield',
readOnly: true,
fieldLabel: 'From',
name: 'messageFrom',
value: selectedRow.get('messageFrom'),
anchor: '100%' // anchor width by percentage
},
{
xtype: 'textfield',
readOnly: true,
fieldLabel: 'Sent',
name: 'messageCreated',
value: selectedRow.get('messageCreated'),
anchor: '100%' // anchor width by percentage
},
{
xtype: 'textarea',
selectOnFocus: false,
hideLabel: true,
name: 'msg',
value: replyMessage,
anchor: '100% -53' // anchor width by percentage and height by raw adjustment
},
// The next couple of fields are hidden, but provide FROM ID etc which we need to post a new/reply
// message to
{
xtype: 'textfield',
readOnly: true,
fieldLabel: 'subject',
name: 'subject',
hidden: true,
hideLabel: true,
value: selectedRow.get('subject')
},
{
xtype: 'textfield',
readOnly: true,
fieldLabel: 'FromID',
name: 'messageFromID',
hidden: true,
hideLabel: true,
value: selectedRow.get('messageFromID')
},
{
xtype: 'textfield',
readOnly: true,
fieldLabel: 'InboxID',
name: 'inboxID',
hidden: true,
hideLabel: true,
value: selectedRow.get('inboxID')
}]
}); // end of frmReply
```
## THE LAST BIT TO ACTUALLY SUBMIT THE ABOVE FORM TO THE BACKEND...
This window uses the form definition above to actually submit the data. In the ASHX page the data simply comes through as a posted form - i.e. you can access it via the normal Request.Form object. I know there is a way to essentially post the form data to the ASHX page as XML, although for my purpose it wasn't required - it's quite a simple form.
```
// ---------------------------------------------------------------------------------------------
// REPLY WINDOW - uses the frmReply as defined previously on stargate atlantis
// ---------------------------------------------------------------------------------------------
var win = new Ext.Window(
{
title: selectedRow.get("subject"),
width: 500,
height: 300,
minWidth: 300,
minHeight: 200,
layout: 'fit',
plain: false,
bodyStyle: 'padding:5px;',
buttonAlign: 'right',
items: frmReply,
// Add the action buttons for the message form
buttons: [
{
// When the user replies, we send the form results to the posting ashx which updates
// the DB etc, and returns the result
text: 'reply',
handler: function()
{
frmReply.getForm().submit({ waitMsg: 'Sending your message now...' });
}
},
{
text: 'close',
handler: function()
{
// We need to close the message window
win.close();
}
}]
});
// Show the message detail window
win.show();
```
Again, hope this helps somewhat - took me a few weeks to get to that!! getting too old for coding perhaps!
|
What is the best practice to use ExtJS with Asp.net and WCF in .NET 3.5?
|
[
"",
"javascript",
"asp.net-ajax",
"extjs",
""
] |
I'm trying to write some SQL that will delete files of type '.7z' that are older than 7 days.
Here's what I've got that's not working:
```
DECLARE @DateString CHAR(8)
SET @DateString = CONVERT(CHAR(8), DATEADD(d, -7, GETDATE()), 1)
EXECUTE master.dbo.xp_delete_file 0,
N'e:\Database Backups',N'7z', @DateString, 1
```
I've also tried changing the '1' at the end to a '0'.
This returns 'success', but the files aren't getting deleted.
I'm using SQL Server 2005, Standard, w/SP2.
|
Had a similar problem, found various answers. Here's what I found.
You can't delete 7z files with xp\_delete\_file. This is an undocumented extended stored procedure that's a holdover from SQL 2000. It checks the first line of the file to be deleted to verify that it is either a SQL backup file or a SQL report file. It doesn't check based on the file extension. From what I gather its intended use is in maintenance plans to cleanup old backups and plan reports.
Here's a sample based on Tomalak's link to delete backup files older than 7 days. What trips people up is the 'sys' schema, the trailing slash in the folder path, and no dot in the file extension to look for. The user that SQL Server runs as also needs to have delete permissions on the folder.
```
DECLARE @DeleteDate datetime
SET @DeleteDate = DateAdd(day, -7, GetDate())
EXECUTE master.sys.xp_delete_file
0, -- FileTypeSelected (0 = FileBackup, 1 = FileReport)
N'D:\SQLbackups\', -- folder path (trailing slash)
N'bak', -- file extension which needs to be deleted (no dot)
@DeleteDate, -- date prior which to delete
1 -- subfolder flag (1 = include files in first subfolder level, 0 = not)
```
Note that xp\_delete\_file is broken in SP2 and won't work on report files; there's a hotfix for it at <http://support.microsoft.com/kb/938085>. I have not tested it with SP3.
Since it's undocumented, xp\_delete\_file may go away or change in future versions of SQL Server. Many sites recommend a shell script to do the deletions instead.
|
AFAIK `xp_delete_file` only deletes files recognized by SQL Server 2005 (backup files, transaction logs, ...). Perhaps you can try something like this:
```
xp_cmdshell 'del <filename>'
```
|
SQL Server xp_delete_file not deleting files
|
[
"",
"sql",
"sql-server",
"maintenance-plan",
""
] |
I have a function that is effectively a replacement for print, and I want to call it without parentheses, just like calling print.
```
# Replace
print $foo, $bar, "\n";
# with
myprint $foo, $bar, "\n";
```
In Perl, you can create subroutines with parameter templates and it allows exactly this behavior if you define a subroutine as
```
sub myprint(@) { ... }
```
Anything similar in PHP?
|
print is not a [variable function](http://ca.php.net/manual/en/functions.variable-functions.php)
> Because this is a language construct
> and not a function, it cannot be
> called using variable functions
And :
> Variable functions
>
> PHP supports the concept of variable
> functions. This means that if a
> variable name has parentheses appended
> to it, PHP will look for a function
> with the same name as whatever the
> variable evaluates to, and will
> attempt to execute it. Among other
> things, this can be used to implement
> callbacks, function tables, and so
> forth.
|
Only by editing the PHP codebase and adding a new language construct.
-Adam
|
Can I create a PHP function that I can call without parentheses?
|
[
"",
"php",
"function",
""
] |
In a C++ Linux app, what is the simplest way to get the functionality that the [Interlocked](http://msdn.microsoft.com/en-us/library/system.threading.interlocked.aspx) functions on Win32 provide? Specifically, a lightweight way to atomically increment or add 32 or 64 bit integers?
|
Upon further review, [this](http://gcc.gnu.org/onlinedocs/gcc-4.1.0/gcc/Atomic-Builtins.html) looks promising. Yay stack overflow.
|
Just a few notes to clarify the issue, which has nothing to do with *Linux*.
**RMW** (read-modify-write) operations, and those that do not execute in a single step, need hardware support to execute *atomically*; among them increments and decrements, fetch\_and\_add, etc.
For some architectures (including I386, AMD\_64 and IA64) gcc has built-in support for atomic memory access, therefore no external library is required. [Here](http://gcc.gnu.org/onlinedocs/gcc-4.3.0/gcc/Atomic-Builtins.html) you can read some information about the API.
|
Interlocked equivalent on Linux
|
[
"",
"c++",
"linux",
"multithreading",
"atomic",
""
] |
I am putting together a Samba-based server as a Primary Domain Controller, and ran into a cute little problem that should have been solved many times over. But a number of searches did not yield a result. I need to be able to remove an existing user from an existing group with a command line script. It appears that the usermod easily allows me to add a user to a supplementary group with this command:
```
usermod -a -G supgroup1,supgroup2 username
```
Without the "-a" option, if the user is currently a member of a group which is not listed, the user will be removed from the group. Does anyone have a perl (or Python) script that allows the specification of a user and group for removal? Am I missing an obvious existing command, or well-known solution for this? Thanks in advance!
Thanks to J.J. for the pointer to the Unix::GroupFile module, which is part of Unix-ConfigFile. It looks like the command deluser would do what I want, but was not in any of my existing repositories. I went ahead and wrote the perl script using the Unix::GroupFile module. Here is the script for your sysadmining pleasure.
```
#!/usr/bin/perl
#
# Usage: removegroup.pl login group
# Purpose: Removes a user from a group while retaining current primary and
# supplementary groups.
# Notes: There is a Debian specific utility that can do this called deluser,
# but I did not want any cross-distribution dependencies
#
# Date: 25 September 2008
# Validate Arguments (correct number, format etc.)
if ( ($#ARGV < 1) || (2 < $#ARGV) ) {
print "\nUsage: removegroup.pl login group\n\n";
print "EXIT VALUES\n";
print " The removeuser.pl script exits with the following values:\n\n";
print " 0 success\n\n";
print " 1 Invalid number of arguments\n\n";
print " 2 Login or Group name supplied greater than 16 characters\n\n";
print " 3 Login and/or Group name contains invalid characters\n\n";
exit 1;
}
# Check for well formed group and login names
if ((16 < length($ARGV[0])) ||(16 < length($ARGV[1])))
{
print "Usage: removegroup.pl login group\n";
print "ERROR: Login and Group names must be less than 16 Characters\n";
exit 2;
}
if ( ( $ARGV[0] !~ m{^[a-z_]+[a-z0-9_-]*$}) || ( $ARGV[1] !~ m{^[a-z_]+[a-z0-9_-]*$} ) )
{
print "Usage: removegroup.pl login group\n";
print "ERROR: Login and/or Group name contains invalid characters\n";
exit 3;
}
# Set some variables for readability
$login=$ARGV[0];
$group=$ARGV[1];
# Requires the GroupFile interface from perl-Unix-Configfile
use Unix::GroupFile;
$grp = new Unix::GroupFile "/etc/group";
$grp->remove_user("$group", "$login");
$grp->commit();
undef $grp;
exit 0;
```
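Since the title says Perl *or* Python, here is the core of the same edit sketched in Python. It only transforms a single /etc/group-style line (the function name and sample data are made up for illustration, not a drop-in tool); a production script should lock /etc/group and rewrite it atomically, as Unix::GroupFile does.

```python
def remove_user_from_group_line(line, user):
    """Drop `user` from one /etc/group-style line.

    e.g. "supgroup1:x:1001:alice,bob" -> "supgroup1:x:1001:bob"
    """
    name, passwd, gid, members = line.rstrip("\n").split(":", 3)
    kept = [m for m in members.split(",") if m and m != user]
    return "%s:%s:%s:%s" % (name, passwd, gid, ",".join(kept))

print(remove_user_from_group_line("supgroup1:x:1001:alice,bob", "alice"))
# supgroup1:x:1001:bob
```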
|
Web Link: <http://www.ibm.com/developerworks/linux/library/l-roadmap4/>
To add members to the group, use the gpasswd command with the -a switch and the user id you wish to add:
gpasswd -a userid mygroup
Remove users from a group with the same command, but a -d switch rather than -a:
gpasswd -d userid mygroup
"man gpasswd" for more info...
I looked for ages to find this. Sometimes it takes too much effort not to reinvent the wheel...
|
I found [This](http://search.cpan.org/~ssnodgra/Unix-ConfigFile-0.06/GroupFile.pm) for you. It should do what you need. As far as I can tell Perl does not have any built in functions for removing users from a group. It has several for seeing the group id of a user or process.
|
Perl or Python script to remove user from group
|
[
"",
"python",
"perl",
"system-administration",
"centos",
"redhat",
""
] |
Here's my issue, I'd like to mock a class that creates a thread at initialization and closes it at destruction. There's no reason for my mock class to actually create and close threads. But, to mock a class, I have to inherit from it. When I create a new instance of my mock class, the base class's constructor is called, creating the thread. When my mock object is destroyed, the base class's destructor is called, attempting to close the thread.
How does one mock an RAII class without having to deal with the actual resource?
|
You instead make an interface that describes the type, and have both the real class and the mock class inherit from that. So if you had:
```
class RAIIClass {
public:
RAIIClass(Foo* f);
~RAIIClass();
bool DoOperation();
private:
...
};
```
You would make an interface like:
```
class MockableInterface {
public:
MockableInterface(Foo* f);
virtual ~MockableInterface();
virtual bool DoOperation() = 0;
};
```
And go from there.
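The same idea is easier to see in a duck-typed language, where no explicit base class is needed: the code under test depends only on the interface, so a fake that never touches the real resource can stand in. A Python sketch (all names hypothetical):

```python
import threading

class Worker:
    """Stand-in for the real class: starts a thread on construction."""
    def __init__(self):
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._stop.wait)
        self._thread.start()

    def do_operation(self):
        return True

    def close(self):
        # The "destructor": stop and reap the worker thread.
        self._stop.set()
        self._thread.join()

class FakeWorker:
    """Same interface, but never touches a real thread."""
    def do_operation(self):
        return True

    def close(self):
        pass

def run_job(worker):
    # Code under test depends only on the interface, so either works.
    try:
        return worker.do_operation()
    finally:
        worker.close()

print(run_job(FakeWorker()))  # True
```

In C++ the `MockableInterface` above plays the role that duck typing plays here.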
|
First of all, it is not necessarily an unreasonable thing that your classes might be well designed for their use, but poorly designed for testing. Not everything is easy to test.
Presumably you want to use another function or class which makes use of the class which you want to mock (otherwise the solution is trivial). Let's call the former "User" and the latter "Mocked". Here are some possibilities:
1. Change User to use an abstract version of Mocked (you get to choose what kind of abstraction to use: inheritance, callback, templates, etc....).
2. Compile a different version of Mocked for your testing code (for example, #def out the RAII code when you compile your tests).
3. Have Mocked accept a constructor flag to turn off its behavior. I personally would avoid doing this.
4. Just suck up the cost of allocating the resource.
5. Skip the test.
The last two may be your only recourse if you can not modify User or Mocked. If you can modify User and you believe that designing your code to be testable is important, then you should explore the first option before any of the others. Note that there can be a trade off between making your code generic/flexible and keeping it simple, both of which are admirable qualities.
|
How do you mock classes that use RAII in c++
|
[
"",
"c++",
"unit-testing",
"mocking",
""
] |
Should a method that implements an interface method be annotated with `@Override`?
The [javadoc of the `Override` annotation](http://java.sun.com/javase/6/docs/api/java/lang/Override.html) says:
> Indicates that a method declaration is intended to override a method declaration in a superclass. If a method is annotated with this annotation type but does not override a superclass method, compilers are required to generate an error message.
I don't think that an interface is technically a superclass. Or is it?
|
You should use `@Override` whenever possible. It prevents simple mistakes from being made. Example:
```
class C {
@Override
public boolean equals(SomeClass obj){
// code ...
}
}
```
This doesn't compile because it doesn't properly override [`public boolean equals(Object obj)`](https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/lang/Object.html#equals(java.lang.Object)).
The same will go for methods that implement an interface (**1.6 and above only**) or override a Super class's method.
|
I believe that javac behaviour has changed - with 1.5 it prohibited the annotation, with 1.6 it doesn't. The annotation provides an extra compile-time check, so if you're using 1.6 I'd go for it.
|
Should we @Override an interface's method implementation?
|
[
"",
"java",
"oop",
"interface",
"annotations",
""
] |
I'm working professionally on a php web application which contains contacts, among other data. I was wondering how hard it would be to make this data available to external programs using the LDAP protocol.
Are there specific tools out there for this? I couldn't really find anything, but I can't imagine I'm the first to think about this.
---
Edit 1:
What I'm looking for is a way to have an application (like a mail client) to be able to use a standard ldap lookup to find contacts from my data.
There are no limitations on using third party software or a separate ldap server on my side, but I want the clients to simply be able to use the built-in ldap connectivity of their application of choice.
What I could see is an ldap server which uses my database or service in my application for serving data as if my application itself is an ldap server. I'd prefer a solution like this, because I don't feel it's right to bloat the application with ldap functionality if I can use an external server for this.
|
The LDAP server protocol is big, and I don't think there is a PHP server in existence. I would suggest running an LDAP server and pushing records to it.
I like OpenLDAP, but there are plenty of servers out there, depending on your platform. Even ActiveDirectory is an LDAP server.
Typically, LDAP objects have internal tracking attributes. For example, OpenLDAP has `modifyTimestamp` which looks something like `20080306214429Z`. These don't get returned by the server unless you specifically ask for them in your query.
It would be easy enough to write a crontab to compare modified times in LDAP and your database. If the LDAP `modifyTimestamp` was older than in your database, update the record.
|
Rather than trying to get an LDAP server to use your database, I'd recommend storing your data in the LDAP server. You can create custom fields for anything that doesn't fit in the standard LDAP fields.
|
Implementing LDAP compliance
|
[
"",
"php",
"ldap",
""
] |
Templates are a pretty healthy business in established programming languages, but are there any good ones that can be processed in JavaScript?
By "template" I mean a document that accepts a data object as input, inserts the data into some kind of serialized markup language, and outputs the markup. Well-known examples are [JSP](http://en.wikipedia.org/wiki/JavaServer_Pages), the original PHP, [XSLT](http://en.wikipedia.org/wiki/XSLT).
By "good" I mean that it's declarative and easy for an HTML author to write, that it's robust, and that it's supported in other languages too. Something better than the options I know about. Some examples of "not good":
---
String math:
```
element.innerHTML = "<p>Name: " + data.name
+ "</p><p>Email: " + data.email + "</p>";
```
clearly too unwieldy, HTML structure not apparent.
---
XSLT:
```
<p><xsl:text>Name: </xsl:text><xsl:value-of select="//data/name"/></p>
<p><xsl:text>Email: </xsl:text><xsl:value-of select="//data/email"/></p>
```
// Structurally this works well, but let's face it, XSLT confuses HTML developers.
---
Trimpath:
```
<p>Name: ${data.name}</p><p>Email: ${data.email}</p>
```
// This is nice, but the processor is only supported in JavaScript, and the language is sort of primitive (<http://code.google.com/p/trimpath/wiki/JavaScriptTemplateSyntax>).
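For what it's worth, the `${...}` substitution part of that syntax is small enough to reimplement in other languages. A minimal Python sketch (illustrative only - real Trimpath also has loops and conditionals):

```python
import re

def render(template, data):
    """Substitute ${dotted.path} placeholders from a nested dict.

    A toy sketch of Trimpath-style interpolation, not the real engine.
    """
    def lookup(match):
        value = data
        for key in match.group(1).split("."):
            value = value[key]
        return str(value)
    return re.sub(r"\$\{([\w.]+)\}", lookup, template)

print(render("<p>Name: ${data.name}</p>", {"data": {"name": "Ada"}}))
# <p>Name: Ada</p>
```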
---
I'd love to see a subset of JSP or ASP or PHP ported to the browser, but I haven't found that.
What are people using these days in JavaScript for their templating?
## Addendum 1 (2008)
After a few months there have been plenty of workable template languages posted here, but most of them aren't usable in any other language. Most of these templates couldn't be used outside a JavaScript engine.
The exception is Microsoft's -- you can process the same ASP either in the browser or in any other ASP engine. That has its own set of portability problems, since you're bound to Microsoft systems. I marked that as the answer, but am still interested in more portable solutions.
## Addendum 2 (2020)
Dusting off this old question, it's ten years later, and Mustache is widely supported in dozens of languages. It is now the current answer, in case anyone is still reading this.
|
You might want to check out [Mustache](https://mustache.github.io/) - it's really portable and simple template language with javascript support among other languages.
|
[John Resig](http://ejohn.org/) has a mini javascript templating engine at [http://ejohn.org/blog/javascript-micro-templating/](https://web.archive.org/web/20160314000941/http://ejohn.org/blog/javascript-micro-templating/)
|
What good template language is supported in JavaScript?
|
[
"",
"javascript",
"templates",
""
] |
I'm not exactly sure how to ask this question really, and I'm no where close to finding an answer, so I hope someone can help me.
I'm writing a Python app that connects to a remote host and receives back byte data, which I unpack using Python's built-in struct module. My problem is with the strings, as they include multiple character encodings. Here is an example of such a string:
"^LThis is an example ^Gstring with multiple ^Jcharacter encodings"
Where the different encoding starts and ends is marked using special escape chars:
* ^L - Latin1
* ^E - Central Europe
* ^T - Turkish
* ^B - Baltic
* ^J - Japanese
* ^C - Cyrillic
* ^G - Greek
And so on... I need a way to convert this sort of string into Unicode, but I'm really not sure how to do it. I've read up on Python's codecs and string.encode/decode, but I'm none the wiser really. I should mention as well, that I have no control over how the strings are outputted by the host.
I hope someone can help me with how to get started on this.
|
There's no built-in functionality for decoding a string like this, since it is really its own custom codec. You simply need to split up the string on those control characters and decode it accordingly.
Here's a (very slow) example of such a function that handles latin1 and shift-JIS:
```
latin1 = "latin-1"
japanese = "Shift-JIS"
control_l = "\x0c"
control_j = "\n"
encodingMap = {
control_l: latin1,
control_j: japanese}
def funkyDecode(s, initialCodec=latin1):
output = u""
accum = ""
currentCodec = initialCodec
for ch in s:
if ch in encodingMap:
output += accum.decode(currentCodec)
currentCodec = encodingMap[ch]
accum = ""
else:
accum += ch
output += accum.decode(currentCodec)
return output
```
A faster version might use str.split, or regular expressions.
(Also, as you can see in this example, "^J" is the control character for "newline", so your input data is going to have some interesting restrictions.)
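For instance, a regular-expression version that splits on the control characters in one pass might look like this (a Python 3 sketch covering just the two codecs above; extend `ENCODING_MAP` for the rest):

```python
import re

# Control byte -> codec, mirroring the map above (extend as needed).
ENCODING_MAP = {
    b"\x0c": "latin-1",   # ^L
    b"\n": "shift_jis",   # ^J
}

# Split on the control bytes; the capture group keeps the delimiters.
SEGMENT_RE = re.compile(b"([\x0c\n])")

def funky_decode(data, initial_codec="latin-1"):
    codec = initial_codec
    out = []
    for part in SEGMENT_RE.split(data):
        if part in ENCODING_MAP:
            codec = ENCODING_MAP[part]
        elif part:
            out.append(part.decode(codec))
    return "".join(out)

print(funky_decode(b"\x0cHello \n" + "今日は".encode("shift_jis")))
# Hello 今日は
```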
|
Here's a relatively simple example of how do it...
```
# -*- coding: utf-8 -*-
import re
# Test Data
ENCODING_RAW_DATA = (
('latin_1', 'L', u'Hello'), # Latin 1
('iso8859_2', 'E', u'dobrý večer'), # Central Europe
('iso8859_9', 'T', u'İyi akşamlar'), # Turkish
('iso8859_13', 'B', u'Į sveikatą!'), # Baltic
('shift_jis', 'J', u'今日は'), # Japanese
('iso8859_5', 'C', u'Здравствуйте'), # Cyrillic
('iso8859_7', 'G', u'Γειά σου'), # Greek
)
CODE_TO_ENCODING = dict([(chr(ord(code)-64), encoding) for encoding, code, text in ENCODING_RAW_DATA])
EXPECTED_RESULT = u''.join([line[2] for line in ENCODING_RAW_DATA])
ENCODED_DATA = ''.join([chr(ord(code)-64) + text.encode(encoding) for encoding, code, text in ENCODING_RAW_DATA])
FIND_RE = re.compile('[\x00-\x1A][^\x00-\x1A]*')
def decode_single(bytes):
return bytes[1:].decode(CODE_TO_ENCODING[bytes[0]])
result = u''.join([decode_single(bytes) for bytes in FIND_RE.findall(ENCODED_DATA)])
assert result==EXPECTED_RESULT, u"Expected %s, but got %s" % (EXPECTED_RESULT, result)
```
|
Dealing with a string containing multiple character encodings
|
[
"",
"python",
"string",
"unicode",
"encoding",
""
] |
I have code to create another "row" (div with inputs) on a button click. I am creating new input elements and everything works fine, however, I can't find a way to access these new elements.
Example: I have an input element (name\_1 below). Then I create another input element (name\_2 below), by using JavaScript's `createElement` function.
```
<input type='text' id='name_1' name="name_1" />
<input type='text' id='name_2' name="name_2" />
```
Again, I create the element fine, but I want to be able to access the value of name\_2 after it has been created and modified by the user. Example: `document.getElementById('name_2');`
This doesn't work. How do I make the DOM recognize the new element? Is it possible?
My code sample (utilizing jQuery):
```
function addName(){
var parentDiv = document.createElement("div");
$(parentDiv).attr( "id", "lp_" + id );
var col1 = document.createElement("div");
var input1 = $( 'input[name="lp_name_1"]').clone(true);
$(input1).attr( "name", "lp_name_" + id );
$(col1).attr( "class", "span-4" );
$(col1).append( input1 );
$(parentDiv).append( col1 );
$('#main_div').append(parentDiv);
}
```
I have used both jQuery and JavaScript selectors. Example: `$('#lp_2').html()` returns null. So does `document.getElementById('lp_2');`
|
You have to create the element AND add it to the DOM using functions such as appendChild. See [here](http://www.w3schools.com/htmldom/dom_methods.asp) for details.
My guess is that you called createElement() but never added it to your DOM hierarchy.
|
If it's properly added to the dom tree you will be able to query it with document.getElementById. However browser bugs may cause troubles, so use a JavaScript toolkit like jQuery that works around browser bugs.
|
Accessing created DOM elements
|
[
"",
"javascript",
"jquery",
"html",
"dom",
""
] |
The problem is you can't tell the user how many characters are allowed in the field because the escaped value has more characters than the unescaped one.
I see a few solutions, but none looks very good:
* One whitelist for each field *(too much work and doesn't quite solve the problem)*
* One blacklist for each field *(same as above)*
* Use a field length that could hold the data even if all characters are escaped *(bad)*
* Uncap the size for the database field *(worse)*
* Save the data hex-unescaped and pass the responsibility entirely to output filtering *(not very good)*
* Let the user guess the maximum size *(worst)*
Are there other options? Is there a "best practice" for this case?
Sample code:
```
$string = 'javascript:alert("hello!");';
echo strlen($string);
// outputs 27
$escaped_string = filter_var('javascript:alert("hello!");', FILTER_SANITIZE_ENCODED);
echo strlen($escaped_string);
// outputs 41
```
If the length of the database field is, say, 40, the escaped data will not fit.
|
Don't build your application around the database - build the database for the application!
Design how you want the interface to work for the user first, work out the longest acceptable field length, and use that.
In general, don't escape before storing in the database - store raw data in the database and format it for display.
If something is going to be output many times, then store the processed version.
Remember disk space is relatively cheap - don't waste effort trying to make your database compact.
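To illustrate why designing around the escaped size is a losing game, compare the raw and percent-encoded lengths of the question's sample string (a Python equivalent of the PHP snippet; the exact growth depends on the data):

```python
from urllib.parse import quote

raw = 'javascript:alert("hello!");'
escaped = quote(raw, safe="")

print(len(raw))      # 27
print(len(escaped))  # 41 -- each escaped character costs two extra bytes
```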
|
making some wild assumptions about the context here:
* if the field can hold 32 characters, that is 32 unescaped characters
* let the user enter 32 characters
* escape/unescape is not the user's problem
* why is this an issue?
+ if this is form data-entry it won't matter, and
+ if you are for some reason escaping the data and passing it back then unescape it before storage
without further context, it looks like you are fighting a problem that doesn't really exist, or that doesn't need to exist
|
HTML Data exceeds field length after being hex-sanitized
|
[
"",
"php",
"html",
"validation",
""
] |
I have found an interesting issue in Windows which allows me to cause the Windows clock (but not the hardware clock) to run fast - as much as 8 seconds every minute. I am doing some background research to work out how Windows calculates and updates its internal time (not how it syncs with NTP servers). Any information anyone has or any documents you can point me to would be greatly appreciated!
Also, if anyone knows how \_ftime works please let me know.
|
[This MSDN article](http://msdn.microsoft.com/en-us/library/ms724961(VS.85).aspx) gives a very brief description of how the system time is handled: "When the system first starts, it sets the system time to a value based on the real-time clock of the computer and then regularly updates the time." Another interesting function is [GetSystemTimeAdjustment](http://msdn.microsoft.com/en-us/library/ms724394.aspx), which has this to say:
> A value of TRUE [for lpTimeAdjustmentDisabled] indicates that periodic time adjustment is disabled. At each clock interrupt, the system merely adds the interval between clock interrupts to the time-of-day clock. The system is free, however, to adjust its time-of-day clock using other techniques. Such other techniques may cause the time-of-day clock to noticeably jump when adjustments are made.
Finally, in regard to \_ftime, it appears to be implemented using [GetSystemTimeAsFileTime](http://msdn.microsoft.com/en-us/library/ms724397(VS.85).aspx). So it would wrap directly onto the same built-in time facilities as would be used everywhere else.
|
Regarding how `_ftime()` works:
If you have Microsoft Visual C++ installed, you probably have the C runtime source installed as well. `_ftime()` is defined in `%ProgramFiles%\Microsoft Visual Studio <version>\VC\crt\src\ftime.c` and `ftime64.c`.
|
How does _ftime / Windows internal time work?
|
[
"",
"c++",
"windows",
"winapi",
"time",
""
] |
I am developing a WebPart (it will be used in a SharePoint environment, although it does not use the Object Model) that I want to expose AJAX functionality in. Because of the nature of the environment, Adding the Script Manager directly to the page is not an option, and so must be added programmatically. I have attempted to add the ScriptManager control to the page in my webpart code.
```
protected override void CreateChildControls()
{
if (ScriptManager.GetCurrent(Page) == null)
{
ScriptManager sMgr = new ScriptManager();
// Ensure the ScriptManager is the first control.
Page.Form.Controls.AddAt(0, sMgr);
}
}
```
However, when this code is executed, I get the following error message:
> "The control collection cannot be modified during DataBind, Init, Load, PreRender or Unload phases."
Is there another way to add the ScriptManager to the page from a WebPart, or am I going to have to just add the ScriptManager to each page (or master page) that will use the WebPart?
|
I was able to get this to work by using the Page's Init event:
```
protected override void OnInit(EventArgs e)
{
Page.Init += delegate(object sender, EventArgs e_Init)
{
if (ScriptManager.GetCurrent(Page) == null)
{
ScriptManager sMgr = new ScriptManager();
Page.Form.Controls.AddAt(0, sMgr);
}
};
base.OnInit(e);
}
```
|
I had the same basic issue the rest of you had. I was creating a custom ascx control and wanted to be able to not worry about whether or not the calling page had the ScriptManager declared. I got around the issue by adding the following to the ascx control itself.
**to the ascx page -**
`<asp:PlaceHolder runat="server" ID="phScriptManager"></asp:PlaceHolder>`
in the update panel itself - `oninit="updatePanel1_Init"`
**to the ascx.cs file -**
```
protected void updatePanel1_Init(object sender, EventArgs e)
{
if (ScriptManager.GetCurrent(this.Page) == null)
{
ScriptManager sManager = new ScriptManager();
sManager.ID = "sManager_" + DateTime.Now.Ticks;
phScriptManager.Controls.AddAt(0, sManager);
}
}
```
Thank you to everyone else in this thread who got me started.
|
Add ScriptManager to Page Programmatically?
|
[
"",
"c#",
"sharepoint-2007",
"web-parts",
"scriptmanager",
""
] |
I'm writing some JavaScript code to parse user-entered functions (for spreadsheet-like functionality). Having parsed the formula I *could* convert it into JavaScript and run `eval()` on it to yield the result.
However, I've always shied away from using `eval()` if I can avoid it because it's evil (and, rightly or wrongly, I've always thought it is even more evil in JavaScript, because the code to be evaluated might be changed by the user).
So, when it is OK to use it?
|
I'd like to take a moment to address the premise of your question - that eval() is "*evil*". The word "*evil*", as used by programming language people, usually means "dangerous", or more precisely "able to cause lots of harm with a simple-looking command". So, when is it OK to use something dangerous? When you know what the danger is, and when you're taking the appropriate precautions.
To the point, let's look at the dangers in the use of eval(). There are probably many small hidden dangers just like everything else, but the two big risks - the reason why eval() is considered evil - are performance and code injection.
* Performance - eval() runs the interpreter/compiler. If your code is compiled, then this is a big hit, because you need to call a possibly-heavy compiler in the middle of run-time. However, JavaScript is still mostly an interpreted language, which means that calling eval() is not a big performance hit in the general case (but see my specific remarks below).
* Code injection - eval() potentially runs a string of code under elevated privileges. For example, a program running as administrator/root would never want to eval() user input, because that input could potentially be "rm -rf /etc/important-file" or worse. Again, JavaScript in a browser doesn't have that problem, because the program is running in the user's own account anyway. Server-side JavaScript could have that problem.
On to your specific case. From what I understand, you're generating the strings yourself, so assuming you're careful not to allow a string like "rm -rf something-important" to be generated, there's no code injection risk (but please remember, it's *very very hard* to ensure this in the general case). Also, if you're running in the browser then code injection is a pretty minor risk, I believe.
As for performance, you'll have to weigh that against ease of coding. It is my opinion that if you're parsing the formula, you might as well compute the result during the parse rather than run another parser (the one inside eval()). But it may be easier to code using eval(), and the performance hit will probably be unnoticeable. It looks like eval() in this case is no more evil than any other function that could possibly save you some time.
|
`eval()` isn't evil. Or, if it is, it's evil in the same way that reflection, file/network I/O, threading, and IPC are "evil" in other languages.
If, *for your purpose*, `eval()` is faster than manual interpretation, or makes your code simpler, or more clear... then you should use it. If neither, then you shouldn't. Simple as that.
|
When is JavaScript's eval() not evil?
|
[
"",
"javascript",
"coding-style",
"eval",
""
] |
I was recommended a book called:
Accelerated C++ Practical Programming by Example
by Andrew Koenig and Barbara E. Moo
Addison-Wesley, 2000
ISBN 0-201-70353-X
The basis of this book is that object-oriented programming is highly wasteful memory-wise, and that most source code should not be written this way; rather, you should use inline function calls and procedural programming.
I know most programming books have about the same shelf life as milk, but if you're coding a client/server application (database, server and all; not a device driver or a video game), is it really worth the hassle of having unmaintainable code just for a speed boost?
Or is it worth it just to make the application run on a client's really old machine? Or to be able to run more servers on a single box?
|
I haven't read the book, but I have trouble believing that they wrote a book whose "basis ...is that Object Oriented Programming is highly wasteful memory-wise" (full disclosure: Andy & Barbara are friends of mine).
Andy would never say that OOP is wasteful of memory. He WOULD say that a particular algorithm or technique is wasteful, and might recommend a less OO approach in some cases, but he would be the first to argue that as a general rule OO designs are no more or less wasteful than any other style of programming.
The argument that OO designs are wasteful largely came from the fact that the EXEs of C++ "hello world" programs tend to be larger than the EXEs of C "hello world" programs. This is mostly because iostreams is larger than printf (but then, iostreams does more).
|
Wow, no.
Modern C++ compilers are excellent. Massive memory usage is more of a symptom of a poor design or large memory data set. The overhead needed for C++ classes is minimal and really not a problem these days.
Object oriented programming is a way to write components in such a way that they can logically group actions related to a single concept (ie, all actions for a 'car' or all actions for a 'cat'). That's not to say it can't be misused to write spaghetti objects, but as they say, you can write COBOL in any language.
As a further example, it's quite possible and accepted these days to write for embedded software platforms with C++ and objects. The slight speed decrease and memory usage increase (if any) is repaid a thousand times over by increased maintainability and code usability.
|
Do the concepts in Accelerated C++ Practical Programming by Example still hold up today?
|
[
"",
"c++",
"performance",
"oop",
"procedural-programming",
""
] |
How do I format a number in Java?
What are the "Best Practices"?
Will I need to round a number before I format it?
> `32.302342342342343` => `32.30`
>
> `.7323` => `0.73`
etc.
|
From [this thread](http://bytes.com/forum/thread16212.html), there are different ways to do this:
```
double r = 5.1234;
System.out.println(r); // r is 5.1234
int decimalPlaces = 2;
BigDecimal bd = new BigDecimal(r);
// setScale is immutable
bd = bd.setScale(decimalPlaces, BigDecimal.ROUND_HALF_UP);
r = bd.doubleValue();
System.out.println(r); // r is 5.12
```
---
```
f = (float) (Math.round(n*100.0f)/100.0f);
```
---
```
DecimalFormat df2 = new DecimalFormat( "#,###,###,##0.00" );
double dd = 100.2397;
double dd2dec = new Double(df2.format(dd)).doubleValue();
// The value of dd2dec will be 100.24
```
The [DecimalFormat()](http://docs.oracle.com/javase/7/docs/api/java/text/DecimalFormat.html) seems to be the most dynamic way to do it, and it is also very easy to understand when reading others code.
|
You and `String.format()` will be new best friends!
<https://docs.oracle.com/javase/1.5.0/docs/api/java/util/Formatter.html#syntax>
```
String.format("%.2f", (double)value);
```
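Applying this to the examples from the question (a sketch; `Locale.US` is passed explicitly so the decimal separator is a point regardless of the default locale, and `%.2f` rounds using half-up):

```java
import java.util.Locale;

// Sketch: String.format with an explicit locale, so the output is
// stable ("32.30", not "32,30") on machines with other default locales.
public class FormatDemo {
    public static void main(String[] args) {
        System.out.println(String.format(Locale.US, "%.2f", 32.302342342342343)); // 32.30
        System.out.println(String.format(Locale.US, "%.2f", 0.7323));             // 0.73
    }
}
```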
|
How do I format a number in Java?
|
[
"",
"java",
"number-formatting",
""
] |
One of the guys I work with needs a custom control that would work like a multiline DDL, since such a thing does not exist as far as we have been able to discover.
Does anyone have any ideas, or has anyone created such a thing before?
We have a couple of ideas, but they involve too much database usage.
We prefer that it be FREE!!!
|
We use a custom modified version of [suckerfish](http://www.google.com/search?q=suckerfish) at work. DB performance isn't an issue for us because we cache the control.
The control renders out nested `UL`/`LI`s either for all nodes in the `web.sitemap` or for a certain set of pages pulled from the DB. We then use [jQuery](http://jquery.com/) to do all the cool javascript stuff. Because it uses such basic HTML, it's pretty easy to have multi-line or wrapped long items once you style it with CSS.
|
Have a look at [EasyListBox](http://www.easylistbox.com/home.aspx). I used on a project and while a bit quirky at first, got the job done.
|
Multiline ddl Custom Control
|
[
"",
"c#",
"asp.net",
"custom-controls",
""
] |
When hiring a front-end developer, what specific skills and practices should you test for? What is a good metric for evaluating their skill in HTML, CSS and Javascript?
Obviously, table-less semantic HTML and pure CSS layout are probably the key skills. But what about specific techniques? Should he/she be able to effortlessly mock up a multi-column layout? CSS sprites? Equal height (or faux) columns? Does HTML tag choice matter (ie, relying too heavily on `<div>`)? Should they be able to explain (in words) how floats work?
And what about javascript skills? How important is framework experience (jQuery, Prototype, etc). today?
Obviously, the details of the position and the sites they'll be working on are the best indication of what skills are needed. But I'm wondering what specific skills people might consider to be deal-breakers (or makers) when creating tests for candidates.
|
When I interview people for a position of Client-Side developer I try to figure out:
1. Understanding the DOM (what it is, how it is related to HTML, etc.)
2. Understanding XML/namespaces
3. Understanding JavaScript (is it object-oriented? what is it otherwise?)
4. Knowing approaches to componentization (XBL, HTC) - a plus
5. Understanding OO principles
6. JavaScript closures
7. Memory leaks in browsers
The only simple test case I give right away:
```
<script type="text/javascript">
var a = 1;
</script>
```
I ask the interviewee to explain that line in technical terms.
I also check overall awareness of the current state of web technologies. Among other questions, I ask the interviewee to design a web browser: to pick the technologies he would put into his brand-new creation, and to suggest ones that might be missing from the current client-side platform.
|
I can suggest an online test that covers HTML, CSS and JavaScript together:
<http://tests4geeks.com/test/html-css-javascript>
It has 60 questions (20 for each subject), and you will receive a report via email when the candidate finishes the test.
|
Testing HTML/CSS/Javascript skills when hiring
|
[
"",
"javascript",
"html",
"css",
"frontend",
""
] |
How can I detect mouse clicks regardless of the window the mouse is in?
Preferably in Python, but if someone can explain it in any language I might be able to figure it out.
I found this on microsoft's site:
<http://msdn.microsoft.com/en-us/library/ms645533(VS.85).aspx>
But I don't see how I can detect or pick up the notifications listed.
I tried using pygame's `pygame.mouse.get_pos()` function as follows:
```
import pygame
pygame.init()
while True:
print pygame.mouse.get_pos()
```
This just returns 0,0.
I'm not familiar with pygame; is something missing?
In any case, I'd prefer a method without the need to install a 3rd party module
(other than pywin32 <http://sourceforge.net/projects/pywin32/> )
|
The only way to detect mouse events outside your program is to install a Windows hook using [SetWindowsHookEx](http://msdn.microsoft.com/en-us/library/ms644990(VS.85).aspx). The [pyHook](http://www.cs.unc.edu/Research/assist/developer.shtml) module encapsulates the nitty-gritty details. Here's a sample that will print the location of every mouse click:
```
import pyHook
import pythoncom
def onclick(event):
print event.Position
return True
hm = pyHook.HookManager()
hm.SubscribeMouseAllButtonsDown(onclick)
hm.HookMouse()
pythoncom.PumpMessages()
hm.UnhookMouse()
```
You can check the **example.py** script that is installed with the module for more info about the **event** parameter.
pyHook might be tricky to use in a pure Python script, because it requires an active message pump. From the [tutorial](https://web.archive.org/web/20100501173949/http://mindtrove.info/articles/monitoring-global-input-with-pyhook/):
> Any application that wishes to receive
> notifications of global input events
> must have a Windows message pump. The
> easiest way to get one of these is to
> use the PumpMessages method in the
> Win32 Extensions package for Python.
> [...] When run, this program just sits
> idle and waits for Windows events. If
> you are using a GUI toolkit (e.g.
> wxPython), this loop is unnecessary
> since the toolkit provides its own.
|
I use win32api. It works when clicking on any window.
```
# Code to check if left or right mouse buttons were pressed
import win32api
import time
state_left = win32api.GetKeyState(0x01) # Left button down = 0 or 1. Button up = -127 or -128
state_right = win32api.GetKeyState(0x02) # Right button down = 0 or 1. Button up = -127 or -128
while True:
a = win32api.GetKeyState(0x01)
b = win32api.GetKeyState(0x02)
if a != state_left: # Button state changed
state_left = a
print(a)
if a < 0:
print('Left Button Pressed')
else:
print('Left Button Released')
if b != state_right: # Button state changed
state_right = b
print(b)
if b < 0:
print('Right Button Pressed')
else:
print('Right Button Released')
time.sleep(0.001)
```
|
Detecting Mouse clicks in windows using python
|
[
"",
"python",
"windows",
"mouse",
""
] |
What are differences between declaring a method in a base type "`virtual`" and then overriding it in a child type using the "`override`" keyword as opposed to simply using the "`new`" keyword when declaring the matching method in the child type?
|
The "new" keyword doesn't override, it signifies a new method that has nothing to do with the base class method.
```
public class Foo
{
public bool DoSomething() { return false; }
}
public class Bar : Foo
{
public new bool DoSomething() { return true; }
}
public class Test
{
public static void Main ()
{
Foo test = new Bar ();
Console.WriteLine (test.DoSomething ());
}
}
```
**This prints false, if you used override it would have printed true.**
(Base code taken from Joseph Daigle)
So, if you are doing real polymorphism you **SHOULD ALWAYS OVERRIDE**. The only place where you need to use "new" is when the method is not related in any way to the base class version.
|
I always find things like this more easily understood with pictures:
Again, taking Joseph Daigle's code:
```
public class Foo
{
public /*virtual*/ bool DoSomething() { return false; }
}
public class Bar : Foo
{
public /*override or new*/ bool DoSomething() { return true; }
}
```
If you then call the code like this:
```
Foo a = new Bar();
a.DoSomething();
```
*NOTE: The important thing is that our object is actually a `Bar`, but we are **storing it in a variable of type `Foo`** (this is similar to casting it)*
Then the result will be as follows, depending on whether you used `virtual`/`override` or `new` when declaring your classes.
[](https://i.stack.imgur.com/4NrQk.jpg)
|
C# - Keyword usage virtual+override vs. new
|
[
"",
"c#",
"syntax",
"overriding",
"method-hiding",
"member-hiding",
""
] |
Every class that wants to use java.util.logging generally needs to declare a logger like this:
```
public class MyClass {
private static Logger _log = Logger.getLogger(MyClass.class.getName());
}
```
How do you avoid this MyClass.class.getName() boilerplate code?
|
I have a template set up in Eclipse so that I only have to type a portion of the declaration, and then Eclipse will auto-complete the rest for me.
```
${:import(org.apache.log4j.Logger)}
private final static Logger log = Logger.getLogger(${enclosing_type}.class);
${cursor}
```
So, I only have to type `logger`, hit `Ctrl+Space`, followed by `Enter`, and Eclipse fills in the rest for me and adds the import declaration as well.
This won't cut down on the amount of boilerplate code, but at least it cuts down on the amount of keystrokes.
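If you are on Java 7 or later, one more option: `java.lang.invoke.MethodHandles` can discover the enclosing class at the declaration site, so the logger line is identical in every class and survives copy-paste. A sketch, with a hypothetical class name:

```java
import java.lang.invoke.MethodHandles;
import java.util.logging.Logger;

// Sketch: the class name is resolved where the field is declared, so
// this exact line can be pasted into any class unchanged.
public class MyService {
    private static final Logger LOG =
            Logger.getLogger(MethodHandles.lookup().lookupClass().getName());

    public static void main(String[] args) {
        System.out.println(LOG.getName()); // MyService
    }
}
```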
|
Have a look at using pointcuts in your code.
I have not looked back since using them with Spring.
Here is an article on using AspectJ:
<http://www.developer.com/java/other/article.php/3109831>
|
How do you reduce Java logging boilerplate code?
|
[
"",
"java",
"logging",
""
] |
I am in the process of writing a Bluetooth scanner that locates and identifies mobile devices in the local vicinity. Is this something that I can accomplish using C#, or do I need to drop down into the C/C++ APIs? My application is targeting Windows XP and Vista. Pointers are appreciated.
Thanks!
|
One problem with Bluetooth on the PC is that there are several BT stacks in use and you can never quite know which one is available on a given machine. The most common ones are Widcomm (now Broadcom) and Microsoft (appeared in XP, maybe one of the service packs). However, some BT hardware vendors package BlueSoleil and some use Toshiba. Most dongles will work with the MS stack so the .NET libs I've seen tend to use that.
Each of the stacks has a totally different way of doing the discovery part where you browse for nearby devices and inquire their services.
If I had to pick one approach today I'd probably do the discovery in C++ and add an interface for .NET.
The 32feet.net stuff worked pretty well when I tried it but didn't support the Widcomm stack.
|
There is also Peter Foot's 32feet.net
<http://inthehand.com/content/32feet.aspx>
I've played around with this back when it was v1.5 and it worked well.
|
Bluetooth APIs in Windows/.Net?
|
[
"",
"c#",
".net",
"windows",
"windows-xp",
"bluetooth",
""
] |
What's the best way to create a non-NULL constraint in MySQL such that fieldA and fieldB can't both be NULL. I don't care if either one is NULL by itself, just as long as the other field has a non-NULL value. And if they both have non-NULL values, then it's even better.
|
MySQL 5.5 introduced [SIGNAL](http://dev.mysql.com/doc/refman/5.5/en/signal.html), so we don't need the extra column in Bill Karwin's answer any more. Bill pointed out you also need a trigger for update so I've included that too.
```
CREATE TABLE foo (
FieldA INT,
FieldB INT
);
DELIMITER //
CREATE TRIGGER InsertFieldABNotNull BEFORE INSERT ON foo
FOR EACH ROW BEGIN
IF (NEW.FieldA IS NULL AND NEW.FieldB IS NULL) THEN
SIGNAL SQLSTATE '45000'
SET MESSAGE_TEXT = '\'FieldA\' and \'FieldB\' cannot both be null';
END IF;
END//
CREATE TRIGGER UpdateFieldABNotNull BEFORE UPDATE ON foo
FOR EACH ROW BEGIN
IF (NEW.FieldA IS NULL AND NEW.FieldB IS NULL) THEN
SIGNAL SQLSTATE '45000'
SET MESSAGE_TEXT = '\'FieldA\' and \'FieldB\' cannot both be null';
END IF;
END//
DELIMITER ;
INSERT INTO foo (FieldA, FieldB) VALUES (NULL, 10); -- OK
INSERT INTO foo (FieldA, FieldB) VALUES (10, NULL); -- OK
INSERT INTO foo (FieldA, FieldB) VALUES (NULL, NULL); -- gives error
UPDATE foo SET FieldA = NULL; -- gives error
```
|
This isn't an answer directly to your question, but some additional information.
When dealing with multiple columns and checking if all are null or one is not null, I typically use `COALESCE()` - it's brief, readable and easily maintainable if the list grows:
```
COALESCE(a, b, c, d) IS NULL -- True if all are NULL
COALESCE(a, b, c, d) IS NOT NULL -- True if any one is not null
```
This can be used in your trigger.
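Worth noting for readers on a newer server: MySQL 8.0.16 and later actually enforce CHECK constraints, so the trigger pair can be replaced by a single declarative constraint built on this COALESCE trick. A sketch of the idea, run here against SQLite purely as a stand-in (it shares the COALESCE/CHECK semantics):

```python
import sqlite3

# Sketch: declarative version of the "not both NULL" rule. The CHECK
# syntax is the same on MySQL 8.0.16+; SQLite is only a stand-in here.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE foo (
        FieldA INTEGER,
        FieldB INTEGER,
        CHECK (COALESCE(FieldA, FieldB) IS NOT NULL)
    )
""")

conn.execute("INSERT INTO foo (FieldA, FieldB) VALUES (NULL, 10)")  # OK
conn.execute("INSERT INTO foo (FieldA, FieldB) VALUES (10, NULL)")  # OK
try:
    conn.execute("INSERT INTO foo (FieldA, FieldB) VALUES (NULL, NULL)")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True  # both NULL: the CHECK constraint fires
print(rejected)
```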
|
Either OR non-null constraints in MySQL
|
[
"",
"sql",
"mysql",
"null",
""
] |
I have two web applications running in the same Tomcat Instance. In one of these applications the user will have the ability to upload files such as images and pdf files. I would like the uploaded files to be available to the second application.
Is there a best practice for such a scenario? Or just a pointer to a technology would be fine.
I considered using web services but wondered if it was overkill.
Thanks
Vincent
|
Cheap, bad answer - have both applications softlink to a shared directory. This has the benefit of being stupid-simple to do but has evil transactional-type issues. Since you say that only one application is changing the data, and the other is read-only you might be able to get away with it, as long as the second app can't observe files in a partially created state.
Using a db is transactionally safe but is going to be pretty unpleasant as the files get larger.
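One way to avoid the partially-created-state problem with the shared-directory approach is to have the uploading application publish each file atomically: write to a temporary file in the shared directory, then rename it into place. A sketch (paths and file names are hypothetical):

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Sketch: the writer publishes atomically, so the reading webapp sees
// either no file or a complete file, never a half-written one.
public class AtomicPublish {
    public static void publish(Path sharedDir, String name, byte[] data)
            throws Exception {
        Path tmp = Files.createTempFile(sharedDir, name, ".part");
        Files.write(tmp, data);
        // Rename within the same filesystem is an atomic operation.
        Files.move(tmp, sharedDir.resolve(name),
                StandardCopyOption.ATOMIC_MOVE);
    }

    public static void main(String[] args) throws Exception {
        Path dir = Files.createTempDirectory("shared");
        publish(dir, "upload.txt", "hello".getBytes(StandardCharsets.UTF_8));
        System.out.println(new String(
                Files.readAllBytes(dir.resolve("upload.txt")),
                StandardCharsets.UTF_8));
    }
}
```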
|
I'd say it depends on how robust you need the file storage to be, and how transactional. The simplest way would be a shared directory that's on the classpath of both apps. A database would be a more robust, but more complex, solution.
|
Transferring files between web applications running in the same Tomcat Instance
|
[
"",
"java",
"tomcat",
""
] |
I have an Open Source app and I have it working on Windows, Linux and Macintosh ( it's in C++ and built with gcc ). I've only tested it on a few different flavors of Linux so I don't know if it compiles and runs on all different Linux versions. Is there a place where I can upload my code and have it tested across a bunch of different systems like other Linux flavors and things like, Solaris, FreeBSD and other operating systems?
What would be great is if I can have it directly connect to my svn repository and grab the latest code and then email me back any compile errors generated and what the OS was that it had a problem with.
I would be happy just to know it compiles, as it is a GUI-based app, so I wouldn't expect it to actually be run and tested.
|
There are a few options but there don't appear to be many (any?) free services like this, which isn't surprising considering the amount of effort and resources it requires. Sourceforge used to operate a compile farm like what you describe but it shut down a year or so ago. You might look into some of the following. If you're inclined to pay for a service or roll your own, then some of these links may be useful. If you're just looking for a free open source compile/build farm that covers multiple platforms it looks like you're pretty much out of luck.
### [OpenSuse Build Service](http://en.opensuse.org/Build_Service)
Mentioned by [Ted](https://stackoverflow.com/users/954/ted-percival) first, worth repeating - only for Linux currently but does support a number of distros.
### [GCC Compile Farm](http://gcc.gnu.org/wiki/CompileFarm)
Mainly focused on testing builds for GCC but does also host a few other projects such as coLinux, BTG BitTorrent client, ClamAV, and others. May be something you can take advantage of, though I don't see what OSes are in the compile farm (contains at least Linux and Solaris based on the page notes).
### [BuildLocker](http://www.projectlocker.com/site/build_locker)
*BuildLocker is a Web-based continuous integration solution for Java and .NET projects. BuildLocker is a virtual dedicated build machine that helps teams find bugs earlier in the development cycle, saving time and money. BuildLocker manages scheduled automated builds of source code in your ProjectLocker Source Control repository. Just check in the source code, and scheduled builds validate the integrity of the code. BuildLocker can even run automated tests, and can alert you anytime a test fails.*
### [CruiseControl](http://cruisecontrol.sourceforge.net/)
*CruiseControl is a framework for a continuous build process. It includes, but is not limited to, plugins for email notification, Ant, and various source control tools. A web interface is provided to view the details of the current and previous builds.*
Interesting side note, CruiseControl is actually used by StackOverflow's dev team for automated build testing as well, according to the podcast.
### [Hudson](http://hudson.gotdns.com/wiki/display/HUDSON/Meet+Hudson)
*Hudson monitors executions of repeated jobs, such as building a software project or jobs run by cron.*
### [RunCodeRun](http://runcoderun.com/)
Mentioned in the other linked question, only supports Ruby projects and is in private beta currently. However, if your project is in Ruby, it might be worth keeping an eye on RunCodeRun.
### [CI Feature Matrix](http://confluence.public.thoughtworks.org/display/CC/CI+Feature+Matrix)
*There are many Continuous Integration systems available. This page is an attempt to keep an unbiased comparison of as many as possible of them.*
|
Take a look at the [OpenSuSE build service](http://en.opensuse.org/Build_Service), it includes a fairly wide variety of Linux distros (not just SuSE/OpenSuSE).
|
Is there build farm for checking open source apps against different OS'es?
|
[
"",
"c++",
"build",
"operating-system",
""
] |
My database contains three tables called `Object_Table`, `Data_Table` and `Link_Table`. The link table just contains two columns, the identity of an object record and an identity of a data record.
I want to copy the data from `Data_Table` where it is linked to one given object identity, and insert corresponding records into `Data_Table` and `Link_Table` for a different given object identity.
I **can** do this by selecting into a table variable and then looping through, doing two inserts for each iteration.
Is this the best way to do it?
**Edit**: I want to avoid a loop for two reasons: the first is that I'm lazy, and a loop/temp table requires more code; more code means more places to make a mistake. The second reason is a concern about performance.
I can copy all the data in one insert but how do get the link table to link to the new data records where each record has a new id?
|
The following sets up the situation I had, using table variables.
```
DECLARE @Object_Table TABLE
(
Id INT NOT NULL PRIMARY KEY
)
DECLARE @Link_Table TABLE
(
ObjectId INT NOT NULL,
DataId INT NOT NULL
)
DECLARE @Data_Table TABLE
(
Id INT NOT NULL Identity(1,1),
Data VARCHAR(50) NOT NULL
)
-- create two objects '1' and '2'
INSERT INTO @Object_Table (Id) VALUES (1)
INSERT INTO @Object_Table (Id) VALUES (2)
-- create some data
INSERT INTO @Data_Table (Data) VALUES ('Data One')
INSERT INTO @Data_Table (Data) VALUES ('Data Two')
-- link all data to first object
INSERT INTO @Link_Table (ObjectId, DataId)
SELECT Objects.Id, Data.Id
FROM @Object_Table AS Objects, @Data_Table AS Data
WHERE Objects.Id = 1
```
Thanks to another [answer](https://stackoverflow.com/questions/175066/in-sql-server-is-it-possible-to-insert-into-two-tables-at-the-same-time#175136) that pointed me towards the OUTPUT clause I can demonstrate a solution:
```
-- now I want to copy the data from from object 1 to object 2 without looping
INSERT INTO @Data_Table (Data)
OUTPUT 2, INSERTED.Id INTO @Link_Table (ObjectId, DataId)
SELECT Data.Data
FROM @Data_Table AS Data INNER JOIN @Link_Table AS Link ON Data.Id = Link.DataId
INNER JOIN @Object_Table AS Objects ON Link.ObjectId = Objects.Id
WHERE Objects.Id = 1
```
It turns out, however, that it is not that simple in real life, because of the following error:
> the OUTPUT INTO clause cannot be on
> either side of a (primary key, foreign
> key) relationship
I can still `OUTPUT INTO` a temp table and then finish with normal insert. So I can avoid my loop but I cannot avoid the temp table.
|
In one *statement*: No.
In one *transaction*: Yes
```
BEGIN TRANSACTION
DECLARE @DataID int;
INSERT INTO DataTable (Column1 ...) VALUES (....);
SELECT @DataID = scope_identity();
INSERT INTO LinkTable VALUES (@ObjectID, @DataID);
COMMIT
```
The good news is that the above code is also guaranteed to be *atomic*, and can be sent to the server from a client application with one sql string in a single function call as if it were one statement. You could also apply a trigger to one table to get the effect of a single insert. However, it's ultimately still two statements and you probably don't want to run the trigger for *every* insert.
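The same pattern can be sketched from application code; here it is against SQLite from Python, purely as an illustrative stand-in: `cursor.lastrowid` plays the role of `scope_identity()`, and the table names are the hypothetical ones from above:

```python
import sqlite3

# Sketch of the "one transaction, two inserts" pattern: insert the data
# row, capture its generated id, then insert the link row, atomically.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE DataTable (Id INTEGER PRIMARY KEY, Data TEXT NOT NULL);
    CREATE TABLE LinkTable (ObjectId INTEGER NOT NULL, DataId INTEGER NOT NULL);
""")

object_id = 2
with conn:  # one atomic transaction: commit on success, rollback on error
    cur = conn.execute("INSERT INTO DataTable (Data) VALUES (?)", ("Data One",))
    conn.execute("INSERT INTO LinkTable (ObjectId, DataId) VALUES (?, ?)",
                 (object_id, cur.lastrowid))

links = conn.execute("SELECT ObjectId, DataId FROM LinkTable").fetchall()
print(links)  # [(2, 1)]
```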
|
SQL Server: Is it possible to insert into two tables at the same time?
|
[
"",
"sql",
"sql-server",
"insert",
"temp-tables",
""
] |
I am trying to add an add-on system to my Windows .NET application using reflection, but it fails when an add-on has a dependency.
An add-on class has to implement an interface 'IAddon' and have an empty constructor.
The main program loads the add-on using reflection:
```
Assembly assembly = Assembly.LoadFile(@"C:\Temp\TestAddon\Addon.dll");
Type t = assembly.GetType("Test.MyAddon");
ConstructorInfo ctor = t.GetConstructor(new Type[] { });
IAddon addon= (IAddon) ctor.Invoke(new object[] { });
addon.StartAddon();
```
It works great when the add-on has no dependencies.
But if my add-on references and uses another DLL (C:\Temp\TestAddon\MyTools.dll) that is saved next to the add-on on disk, it fails:
*System.IO.FileNotFoundException: Could not load file or assembly 'MyTools.dll' or one of its dependencies.*
I do not want to copy the add-on DLLs next to my executable. How can I tell the .NET runtime to search in "C:\Temp\TestAddon\" for any dependency?
Note that adding
```
Assembly assembly = Assembly.LoadFile(@"C:\Temp\TestAddon\MyTools.dll");
```
does not change anything.
|
If *MyTools.dll* is located in the same directory as *Addon.dll*, all you need to do is call `Assembly.LoadFrom` instead of `Assembly.LoadFile` to make your code work. Otherwise, handling the `AppDomain.AssemblyResolve` event is the way to go.
|
Assembly.LoadFrom worked well until I tried to use a web service in my add-on; I got an "*Unable to cast object of type 'X' to type 'X'*" exception.
It's ugly, but I will use Assembly.LoadFile with AppDomain.AssemblyResolve.
Thanks guys.
|
Reflection.Net: how to load dependencies?
|
[
"",
"c#",
"reflection",
"dependencies",
"add-on",
""
] |
I work on a web project using [jQuery](http://jquery.com/) and CakePHP. I use [jeditable](http://www.appelsiini.net/projects/jeditable) as an in-place edit plugin. For textareas I extend it using the [autogrow plugin](http://www.appelsiini.net/2008/4/autogrow-textarea-for-jeditable).
Well, I have two problems with this:
* First, autogrow only works in Firefox, not in IE, Safari, Opera or Chrome.
* Second, I need a callback event for jeditable, fired when it has finished showing the edit component, so I can recalculate the [scrollbar](http://kelvinluck.com/assets/jquery/jScrollPane/jScrollPane.html)
I'm not so familiar with JavaScript, so I can't extend/correct these two libraries on my own. Has anyone used another JS library for in-place editing with auto-growing textareas (no complete editors like TinyMCE; I need a solution for plain text)?
I also found [Growfield](http://plugins.jquery.com/project/Growfield); it would work for other browsers, but there's no jeditable integration...
*(sorry for my english)*
|
I didn't see any problems using Autogrow with jeditable in any browsers but here is an implementation of Growfield with jeditable. It works much in the same way that the Autogrow plugin for jeditable does. You create a special input type for jeditable and just apply .growfield() to it. The necessary javascript is below, a demo can be [found here](http://alexcpendletonapps.appspot.com/editauto/growfield.html "found here").
```
<script type="text/javascript">
/* This is the growfield integration into jeditable
You can use almost any field plugin you'd like if you create an input type for it.
It just needs the "element" function (to create the editable field) and the "plugin"
function which applies the effect to the field. This is very close to the code in the
jquery.jeditable.autogrow.js input type that comes with jeditable.
*/
$.editable.addInputType('growfield', {
element : function(settings, original) {
var textarea = $('<textarea>');
if (settings.rows) {
textarea.attr('rows', settings.rows);
} else {
textarea.height(settings.height);
}
if (settings.cols) {
textarea.attr('cols', settings.cols);
} else {
textarea.width(settings.width);
}
// will execute when textarea is rendered
textarea.ready(function() {
// implement your scroll pane code here
});
$(this).append(textarea);
return(textarea);
},
plugin : function(settings, original) {
// applies the growfield effect to the in-place edit field
$('textarea', this).growfield(settings.growfield);
}
});
/* jeditable initialization */
$(function() {
$('.editable_textarea').editable('postto.html', {
type: "growfield", // tells jeditable to use your growfield input type from above
submit: 'OK', // this and below are optional
tooltip: "Click to edit...",
onblur: "ignore",
growfield: { } // use this to pass options to the growfield that gets created
});
})
```
|
**Mika Tuupola**: If you are interested in my modified jeditable (I added two callback events), you can [get it here](http://knightkiller.ch/jquery.jeditable.js). It would be great if you would provide these events in your official version of jeditable!
Here is my (simplified) integration code. I use the events for more than just the hover effect; this is just one use case.
```
$('.edit_memo').editable('/cakephp/efforts/updateValue', {
id : 'data[Effort][id]',
name : 'data[Effort][value]',
type : 'growfield',
cancel : 'Abort',
submit : 'Save',
tooltip : 'click to edit',
indicator : "<span class='save'>saving...</span>",
onblur : 'ignore',
placeholder : '<span class="hint"><click to edit></span>',
loadurl : '/cakephp/efforts/getValue',
loadtype : 'POST',
loadtext : 'loading...',
width : 447,
onreadytoedit : function(value){
$(this).removeClass('edit_memo_hover'); //remove css hover effect
},
onfinishededit : function(value){
$(this).addClass('edit_memo_hover'); //add css hover effect
}
});
```
|
Problems using jeditable and autogrow
|
[
"",
"javascript",
"jquery",
"ajax",
"jeditable",
"autogrow",
""
] |
I have a two dimensional array that I need to load data into. I know the width of the data (22 values) but I do not know the height (estimated around 4000 records, but variable).
I have it declared as follows:
```
float[,] _calibrationSet;
....
int calibrationRow = 0;
while (recordsToRead)
{
for (int i = 0; i < SensorCount; i++)
{
_calibrationSet[calibrationRow, i] = calibrationArrayView.ReadFloat();
}
calibrationRow++;
}
```
This causes a NullReferenceException, so when I try to initialize it like this:
```
_calibrationSet = new float[,];
```
I get an "Array creation must have array size or array initializer."
Thank you,
Keith
|
You can't use an array.
Or rather, you would need to pick a size, and if you ended up needing more, then you would have to allocate a new, larger array, copy the data from the old one into the new one, and continue on as before (until you exceed the size of the new one...).
Generally, you would go with one of the collection classes - ArrayList, List<>, LinkedList<>, etc. - which one depends a lot on what you're looking for; List will give you the closest thing to what I described initially, while LinkedList<> will avoid the problem of frequent re-allocations (at the cost of slower access and greater memory usage).
Example:
```
List<float[]> _calibrationSet = new List<float[]>();
// ...
while (recordsToRead)
{
float[] record = new float[SensorCount];
for (int i = 0; i < SensorCount; i++)
{
record[i] = calibrationArrayView.ReadFloat();
}
_calibrationSet.Add(record);
}
// access later: _calibrationSet[record][sensor]
```
Oh, and it's worth noting (as [Grauenwolf](https://stackoverflow.com/questions/50558/how-do-you-initialize-a-2-dimensional-array-when-you-do-not-know-the-size#50591) did), that what I'm doing here doesn't give you the same memory structure as a single, multi-dimensional array would - under the hood, it's an array of references to other arrays that actually hold the data. This speeds up building the array a good deal by making reallocation cheaper, but can have an impact on access speed (and, of course, memory usage). Whether this is an issue for you depends a lot on what you'll be doing with the data after it's loaded... and whether there are two hundred records or two million records.
|
You can't create an array in .NET (as opposed to declaring a reference to it, which is what you did in your example) without specifying its dimensions, either explicitly, or implicitly by specifying a set of literal values when you initialize it. (e.g. int[,] array4 = { { 1, 2 }, { 3, 4 }, { 5, 6 }, { 7, 8 } };)
You need to use a variable-size data structure first (a generic list of 22-element 1-d arrays would be the simplest) and then allocate your array and copy your data into it after your read is finished and you know how many rows you need.
|
How do you initialize a 2 dimensional array when you do not know the size
|
[
"",
"c#",
"array-initialize",
""
] |
Is it possible at runtime to programmatically check the name of the Thread that is holding the lock of a given object?
|
You can only tell whether the current thread holds a normal lock ([`Thread.holdsLock(Object)`](http://docs.oracle.com/javase/7/docs/api/java/lang/Thread.html#holdsLock%28java.lang.Object%29)). You can't get a reference to the thread that has the lock without native code.
However, if you're doing anything complicated with threading, you probably want to familiarize yourself with the java.util.concurrent packages. The [`ReentrantLock`](http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/locks/ReentrantLock.html) does allow you to get its owner (but it's a protected method, so you'd have to extend this). Depending on your application, it may well be that by using the concurrency packages, you'll find that you don't need to get the lock's owner after all.
There are non-programmatic methods to find the lock owners, such as signaling the JVM to issue a thread dump to stderr, that are useful to determine the cause of deadlocks.
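If extending `ReentrantLock` is not an option, the usual workaround is to track the owner yourself in a small wrapper. A rough sketch of the idea in Python (illustrative only; the `OwnedLock` name is made up, this is not a Java API):

```python
import threading

class OwnedLock:
    """Lock wrapper that records which thread currently holds it."""
    def __init__(self):
        self._lock = threading.Lock()
        self.owner = None  # name of the holding thread, or None

    def acquire(self):
        self._lock.acquire()
        # only set after a successful acquire, so readers see a consistent owner
        self.owner = threading.current_thread().name

    def release(self):
        self.owner = None
        self._lock.release()

lock = OwnedLock()

def worker():
    lock.acquire()
    try:
        holder = lock.owner  # another thread could inspect this field too
    finally:
        lock.release()

t = threading.Thread(target=worker, name="worker-1")
t.start()
t.join()
print(lock.owner)  # None again after release
```

The same pattern translates directly to a Java class wrapping a `Lock` and a volatile `Thread` field.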
|
You can get at the locks held by threads through the JMX management interface (`ThreadMXBean`). This only works with Java 1.6 or later.
```
ThreadMXBean bean = ManagementFactory.getThreadMXBean();
ThreadInfo[] ti = bean.getThreadInfo(bean.getAllThreadIds(), true, true);
```
Each of these ThreadInfo objects carries LockInfo objects; you can compare their identityHashCode to that of the lock in question.
|
Programmatically determine which Java thread holds a lock
|
[
"",
"java",
"multithreading",
"concurrency",
""
] |
When I started writing database queries I didn't know the JOIN keyword yet and naturally I just extended what I already knew and wrote queries like this:
```
SELECT a.someRow, b.someRow
FROM tableA AS a, tableB AS b
WHERE a.ID=b.ID AND b.ID= $someVar
```
Now that I know that this is the same as an INNER JOIN I find all these queries in my code and ask myself if I should rewrite them. Is there something smelly about them or are they just fine?
---
**My answer summary**: There is nothing wrong with this query BUT using the keywords will most probably make the code more readable/maintainable.
**My conclusion**: I will not change my old queries but I will correct my writing style and use the keywords in the future.
|
Filtering joins solely using `WHERE` can be extremely inefficient in some common scenarios. For example:
```
SELECT * FROM people p, companies c
WHERE p.companyID = c.id AND p.firstName = 'Daniel'
```
Most databases will execute this query quite literally, first taking the [Cartesian product](http://en.wikipedia.org/wiki/Cartesian_product) of the `people` and `companies` tables and *then* filtering by those which have matching `companyID` and `id` fields. While the fully-unconstrained product does not exist anywhere but in memory and then only for a moment, its calculation does take some time.
A better approach is to group the constraints with the `JOIN`s where relevant. This is not only subjectively easier to read but also far more efficient. Thusly:
```
SELECT * FROM people p JOIN companies c ON p.companyID = c.id
WHERE p.firstName = 'Daniel'
```
It's a little longer, but the database is able to look at the `ON` clause and use it to compute the fully-constrained `JOIN` directly, rather than starting with *everything* and then limiting down. This is faster to compute (especially with large data sets and/or many-table joins) and requires less memory.
I change every query I see which uses the "comma `JOIN`" syntax. In my opinion, the only purpose for its existence is conciseness. Considering the performance impact, I don't think this is a compelling reason.
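The cost difference the answer describes can be sketched with a toy in-memory join in Python (hypothetical rows, not real SQL execution):

```python
from itertools import product

people = [{"id": 1, "companyID": 10, "firstName": "Daniel"},
          {"id": 2, "companyID": 20, "firstName": "Alice"}]
companies = [{"id": 10, "name": "Acme"}, {"id": 20, "name": "Globex"}]

# Literal "comma join": materialize the full cross product, then filter.
cross = [(p, c) for p, c in product(people, companies)]  # len = 2 * 2 rows
naive = [(p, c) for p, c in cross
         if p["companyID"] == c["id"] and p["firstName"] == "Daniel"]

# Constrained join: index one side first, never build the product.
by_id = {c["id"]: c for c in companies}
joined = [(p, by_id[p["companyID"]]) for p in people
          if p["firstName"] == "Daniel" and p["companyID"] in by_id]

assert naive == joined  # same rows, very different amount of work
```

With thousands of rows per table, the intermediate `cross` list grows as the product of the table sizes, which is exactly the waste the `ON` clause lets the database avoid.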
|
The more verbose `INNER JOIN, LEFT OUTER JOIN, RIGHT OUTER JOIN, FULL OUTER JOIN` are from the ANSI SQL/92 syntax for joining. For me, this verbosity makes the join more clear to the developer/DBA of what the intent is with the join.
|
Is there something wrong with joins that don't use the JOIN keyword in SQL or MySQL?
|
[
"",
"sql",
"mysql",
"join",
""
] |
I have a web application using JPA and JTA with Spring. I would like to support both JBoss and Tomcat. When running on JBoss, I'd like to use JBoss' own TransactionManager, and when running on Tomcat, I'd like to use JOTM.
I have both scenarios working, but I now find that I seem to need two separate Spring configurations for the two cases. With JOTM, I need to use Spring's `JotmFactoryBean`:
```
<bean id="transactionManager"
class="org.springframework.transaction.jta.JtaTransactionManager">
<property name="userTransaction">
<bean class="org.springframework.transaction.jta.JotmFactoryBean"/>
</property>
</bean>
```
In JBoss, though, I just need to fetch "TransactionManager" from JNDI:
```
<bean id="transactionManager"
class="org.springframework.transaction.jta.JtaTransactionManager">
<property name="transactionManager">
<bean class="org.springframework.jndi.JndiObjectFactoryBean">
<property name="resourceRef" value="true" />
<property name="jndiName" value="TransactionManager" />
<property name="expectedType"
value="javax.transaction.TransactionManager" />
</bean>
</property>
</bean>
```
Is there a way to configure this so that the appropriate TransactionManager - JBoss or JOTM - is used, without the need for two different configuration files?
|
I think you have missed the point of JNDI. JNDI was pretty much written to solve the problem you have!
I think you can take it up a level: instead of using the "userTransaction" or "transactionManager from JNDI" depending on your situation, why not add the "JtaTransactionManager" to JNDI? That way you push the configuration to the JNDI where it is supposed to be instead of creating even more configuration files [ like there aren't enough already ;) ].
|
You can use PropertyConfigurerPlaceholder to inject bean references as well as simple values.
For example if you call your beans 'jotm' and 'jboss' then you could inject your TM like:
```
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_OVERRIDE"/>
<property name="location" value="classpath:/path/to/application.properties"/>
</bean>
<bean id="jotm">...</bean>
<bean id="jboss">...</bean>
<bean id="bean-requiring-transaction-manager">
<property name="transactionManager" ref="${transaction.strategy}"/>
</bean>
```
Then you can swap transaction managers using
* *transaction.strategy=jotm* in a properties file
* *-Dtransaction.strategy=jotm* as a system property
This is one possible approach. See my [blog](http://toolkits.blogspot.com/2008/04/changing-spring-beans-using-properties.html) for a more complete example.
Hope this helps.
|
Spring JTA TransactionManager config: Supporting both Tomcat and JBoss
|
[
"",
"java",
"spring",
"jboss",
"jta",
"jotm",
""
] |
I've found a few samples online but I'd like to get feedback from people who use PHP daily as to potential security or performance considerations and their solutions.
Note that I am only interested in uploading a single file at a time.
Ideally no browser plugin would be required (Flash/Java), although it would be interesting to know the benefits of using a plugin.
I would like to know both the best HTML form code and PHP processing code.
|
# File Upload Tutorial
## HTML
```
<form enctype="multipart/form-data" action="action.php" method="POST">
<input type="hidden" name="MAX_FILE_SIZE" value="1000000" />
<input name="userfile" type="file" />
<input type="submit" value="Go" />
</form>
```
* `action.php` is the name of a PHP file that will process the upload (shown below)
* `MAX_FILE_SIZE` must appear immediately before the input with type `file`. This value can easily be manipulated on the client so should not be relied upon. Its main benefit is to provide the user with early warning that their file is too large, before they've uploaded it.
* You can change the name of the input with type `file`, but make sure it doesn't contain any spaces. You must also update the corresponding value in the PHP file (below).
## PHP
```
<?php
$uploaddir = "/www/uploads/";
$uploadfile = $uploaddir . basename($_FILES['userfile']['name']);
echo '<pre>';
if (move_uploaded_file($_FILES['userfile']['tmp_name'], $uploadfile)) {
echo "Success.\n";
} else {
echo "Failure.\n";
}
echo 'Here is some more debugging info:';
print_r($_FILES);
print "</pre>";
?>
```
The upload-to folder should not be located in a place that's accessible via HTTP, otherwise it would be possible to upload a PHP script and execute it upon the server.
Printing the value of `$_FILES` can give a hint as to what's going on. For example:
```
Array
(
[userfile] => Array
(
[name] => Filename.ext
[type] =>
[tmp_name] =>
[error] => 2
[size] => 0
)
)
```
This structure gives some information as to the file's name, MIME type, size and error code.
## Error Codes
> 0 Indicates that there was no errors and file has been uploaded successfully
> 1 Indicates that the file exceeds the maximum file size defined in php.ini. If you would like to change the maximum file size, you need to open your php.ini file, identify the line which reads: upload\_max\_filesize = 2M and change the value from 2M (2MB) to whatever you need
> 2 Indicates that the maximum file size defined manually, within an on page script has been exceeded
> 3 Indicates that file has only been uploaded partially
> 4 Indicates that the file hasn't been specified (empty file field)
> 5 Not defined yet
> 6 Indicates that there's no temporary folder
> 7 Indicates that the file cannot be written to the disk
## `php.ini` Configuration
When running this setup with larger files you may receive errors. Check your `php.ini` file for these keys:
`max_execution_time = 30`
`upload_max_filesize = 2M`
Increasing these values as appropriate may help. When using Apache, changes to this file require a restart.
The maximum memory permitted value (set via `memory_limit`) does not play a role here as the file is written to the tmp directory as it is uploaded. The location of the tmp directory is optionally controlled via `upload_tmp_dir`.
## Checking file mimetypes
You should check the filetype of what the user is uploading - the best practice is to validate against a list of allowed filetypes. A potential risk of allowing any file is that **a user could potentially upload PHP code to the server and then run it**.
You can use the very useful [`fileinfo`](http://www.php.net/manual/en/ref.fileinfo.php) extension (that supersedes the older [`mime_content_type`](http://www.php.net/manual/en/function.mime-content-type.php) function) to validate mime-types.
```
// FILEINFO_MIME set to return MIME types, will return string of info otherwise
$fileinfo = new finfo(FILEINFO_MIME);
$file = $fileinfo->file($_FILES['userfile']['tmp_name']);
$allowed_types = array('image/jpeg', 'image/png');
if(!in_array($file, $allowed_types))
{
    die('Files of type ' . $file . ' are not allowed to be uploaded.');
}
// Continue
```
## More Information
You can read more on handling file uploads at the [PHP.net manual](http://docs.php.net/manual/en/features.file-upload.php).
## For PHP 5.3+
```
//For those who are using PHP 5.3, the code varies.
$fileinfo = new finfo(FILEINFO_MIME_TYPE);
$file = $fileinfo->file($_FILES['userfile']['tmp_name']);
$allowed_types = array('image/jpeg', 'image/png');
if(!in_array($file, $allowed_types))
{
    die('Files of type ' . $file . ' are not allowed to be uploaded.');
}
// Continue
```
## More Information
You can read more on FILEINFO\_MIME\_TYPE at the [PHP.net documentation](http://php.net/manual/en/fileinfo.constants.php).
|
Have a read of [this introduction](http://docs.php.net/manual/en/features.file-upload.php) which should tell you everything you need to know. The user comments are fairly useful as well.
|
What's the best way to create a single-file upload form using PHP?
|
[
"",
"php",
"upload",
"file-upload",
""
] |
There is a long running habit here where I work that the connection string lives in the web.config, a Sql Connection object is instantiated in a using block with that connection string and passed to the DataObjects constructor (via a CreateInstance Method as the constructor is private). Something like this:
```
using(SqlConnection conn = new SqlConnection(ConfigurationManager.ConnectionStrings["ConnectionString"].ConnectionString))
{
DataObject foo = DataObject.CreateInstance(conn);
foo.someProperty = "some value";
foo.Insert();
}
```
This all smells to me.. I don't know. Shouldn't the DataLayer class library be responsible for Connection objects and Connection strings? I'd be grateful to know what others are doing or any good online articles about these kind of design decisions.
Consider that the projects we work on are always Sql Server backends and that is extremely unlikely to change. So factory and provider pattern is not what I'm after. It's more about where responsibility lies and where config settings should be managed for data layer operation.
|
I like to code the classes in my data access layer so that they have one constructor that takes an IDbConnection as a parameter, and another that takes a (connection) string.
That way the calling code can either construct its own SqlConnection and pass it in (handy for integration tests), mock an IDbConnection and pass that in (handy for unit tests) or read a connection string from a configuration file (eg web.config) and pass that in.
|
Hm, I think I agree that the datalayer should be responsible for managing such connection strings so the higher layers don't need to worry about this. However, I do not think that the SQLConnection should worry where the connection string comes from.
I think I would have a datalayer which provides certain DataInputs, that is, things that take a condition and return DataObjects. Such a DataInput now knows "hey, these DataObjects are stored in THAT Database, and using the Configurations, I can use some connection-string to get an SQL-Connection from over there."
That way you have encapsulated the entire process of "How and where do the data objects come from?" and the internals of the datalayer can still be tested properly. (And, as a side effect, you can easily use different databases, or even multiple different databases at the same time. Such flexibility that just pops up is a good sign(tm))
|
In a layered design with a separate DataAccess layer in .NET where should connection string be managed?
|
[
"",
"c#",
"asp.net",
"database",
""
] |
The NUnit documentation doesn't tell me when to use a method with a `TestFixtureSetUp` attribute and when to do the setup in the constructor.
```
public class MyTest
{
private MyClass myClass;
public MyTest()
{
myClass = new MyClass();
}
[TestFixtureSetUp]
public void Init()
{
myClass = new MyClass();
}
}
```
Are there any good/bad practices about `TestFixtureSetUp` versus the default constructor, or isn't there any difference?
|
I think this has been one of the issues that hasn't been addressed by the nUnit team. However, there is the excellent [xUnit project](http://www.codeplex.com/xunit) that saw this exact issue and decided that constructors were a good thing to use on [test fixture initialization](http://www.codeplex.com/xunit/Wiki/View.aspx?title=Comparisons&referringTitle=Home#attributes).
For nunit, my best practice in this case has been to use the `TestFixtureSetUp`, `TestFixtureTearDown`, `SetUp`, and `TearDown` methods as described in the documentation.
I think it also helps me when I don't think of an nUnit test fixture as a normal class, even though you are defining it with that construct. I think of them as fixtures, and that gets me over the mental hurdle and allows me to overlook this issue.
|
Why would you need to use a constructor in your test classes?
I use `[SetUp]` and `[TearDown]` marked methods for code to be executed before and after each test, and similarly `[TestFixtureSetUp]` and `[TestFixtureTearDown]` marked methods for code to be executed only once before and after all test in the fixture have been run.
I guess you could probably substitute the `[TestFixtureSetUp]` for a constructor (although I haven't tried), but this only seems to break from the clear convention that the marked methods provide.
|
When do I use the TestFixtureSetUp attribute instead of a default constructor?
|
[
"",
"c#",
"unit-testing",
"nunit",
""
] |
Any suggestions on how to write repeatable unit tests for code that may be susceptible to deadlocks and race conditions?
Right now I'm leaning towards skipping unit tests and focusing on stress tests. The problem with that is you can run a stress test 5 times and see five different results.
EDIT: I know its probably just a dream, but if there were a way to control individual threads and cause them to execute one instruction at a time then I might get somewhere.
|
Take a look at [TypeMock](http://www.typemock.com/typemock-racer-product/) Racer (it's in Beta)
edit: actually Alpha
<http://www.typemock.com/Typemock_software_development_tools.html>
|
It is usually possible to force *foreseen* race-conditions and deadlocks by using things like ManualResetEvent to get each thread into the expected state before releasing it - i.e. get thread A to have the lock and wait for a signal... get thread B to request the lock, etc...
However - you might typically write such a test to investigate a suspected bug, to prove when it is fixed and that it doesn't re-surface. You would generally design around race conditions (but test them as best as is pragmatic).
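The same signalling technique is available outside .NET; a Python sketch using `threading.Event` in place of `ManualResetEvent` to force the contended state deterministically (illustrative, not the .NET API):

```python
import threading

lock = threading.Lock()
a_has_lock = threading.Event()
b_tried = threading.Event()
results = []

def thread_a():
    with lock:
        a_has_lock.set()   # A now provably holds the lock
        b_tried.wait()     # keep holding it until B has made its attempt
    results.append("A released")

def thread_b():
    a_has_lock.wait()                      # wait until contention is guaranteed
    got_it = lock.acquire(blocking=False)  # must fail: A still holds the lock
    results.append("B blocked" if not got_it else "B got lock")
    if got_it:
        lock.release()
    b_tried.set()

ta = threading.Thread(target=thread_a)
tb = threading.Thread(target=thread_b)
ta.start(); tb.start()
ta.join(); tb.join()
assert results == ["B blocked", "A released"]  # forced on every run
```

Because each thread waits for an explicit signal before proceeding, the interleaving is the same on every run, which is exactly what makes the test repeatable.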
|
Unit Testing, Deadlocks, and Race Conditions
|
[
"",
"c#",
".net",
"vb.net",
"parallel-processing",
""
] |
I'm trying to test if a given default constraint exists. I don't want to use the sysobjects table, but the more standard INFORMATION\_SCHEMA.
I've used this to check for tables and primary key constraints before, but I don't see default constraints anywhere.
Are they not there? (I'm using MS SQL Server 2000).
EDIT: I'm looking to get by the name of the constraint.
|
As I understand it, default value constraints aren't part of the ISO standard, so they don't appear in INFORMATION\_SCHEMA. INFORMATION\_SCHEMA seems like the best choice for this kind of task because it is cross-platform, but if the information isn't available one should use the object catalog views (sys.\*) instead of system table views, which are deprecated in SQL Server 2005 and later.
Below is pretty much the same as @user186476's answer. It returns the name of the default value constraint for a given column. (For non-SQL Server users, you need the name of the default in order to drop it, and if you don't name the default constraint yourself, SQL Server creates some crazy name like "DF\_TableN\_Colum\_95AFE4B5". To make it easier to change your schema in the future, always explicitly name your constraints!)
```
-- returns name of a column's default value constraint
SELECT
default_constraints.name
FROM
sys.all_columns
INNER JOIN
sys.tables
ON all_columns.object_id = tables.object_id
INNER JOIN
sys.schemas
ON tables.schema_id = schemas.schema_id
INNER JOIN
sys.default_constraints
ON all_columns.default_object_id = default_constraints.object_id
WHERE
schemas.name = 'dbo'
AND tables.name = 'tablename'
AND all_columns.name = 'columnname'
```
|
You can use the following to narrow the results even more by specifying the Table Name and Column Name that the Default Constraint correlates to:
```
select * from sysobjects o
inner join syscolumns c
on o.id = c.cdefault
inner join sysobjects t
on c.id = t.id
where o.xtype = 'D'
and c.name = 'Column_Name'
and t.name = 'Table_Name'
```
|
How do I find a default constraint using INFORMATION_SCHEMA?
|
[
"",
"sql",
"sql-server",
"t-sql",
"information-schema",
""
] |
I would like to know if there is an easy way to detect if the text on the clipboard is in ISO 8859 or UTF-8 ?
Here is my current code:
```
COleDataObject obj;
if (obj.AttachClipboard())
{
if (obj.IsDataAvailable(CF_TEXT))
{
HGLOBAL hmem = obj.GetGlobalData(CF_TEXT);
CMemFile sf((BYTE*) ::GlobalLock(hmem),(UINT) ::GlobalSize(hmem));
CString buffer;
LPSTR str = buffer.GetBufferSetLength((int)::GlobalSize(hmem));
sf.Read(str,(UINT) ::GlobalSize(hmem));
::GlobalUnlock(hmem);
//this is my string class
s->SetEncoding(ENCODING_8BIT);
s->SetString(buffer);
}
}
```
|
Check out the definition of CF\_LOCALE at [this Microsoft page](http://msdn.microsoft.com/en-us/library/ms649013(VS.85).aspx). It tells you the locale of the text in the clipboard. Better yet, if you use CF\_UNICODETEXT instead, Windows will convert to UTF-16 for you.
|
UTF-8 has a defined structure for non-ASCII bytes. You can scan for bytes >= 128, and if any are detected, check if they form a valid UTF-8 string.
The valid UTF-8 byte formats can be found on [Wikipedia](http://en.wikipedia.org/wiki/UTF-8):
```
Unicode Byte1 Byte2 Byte3 Byte4
U+000000-U+00007F 0xxxxxxx
U+000080-U+0007FF 110xxxxx 10xxxxxx
U+000800-U+00FFFF 1110xxxx 10xxxxxx 10xxxxxx
U+010000-U+10FFFF 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx
```
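A structural scan following that table can be sketched in Python (note this checks byte patterns only; it does not reject overlong encodings):

```python
def looks_like_utf8(data: bytes) -> bool:
    """Structural check following the UTF-8 byte-pattern table above."""
    i = 0
    while i < len(data):
        b = data[i]
        if b < 0x80:                 # 0xxxxxxx: plain ASCII
            n = 0
        elif 0xC0 <= b < 0xE0:       # 110xxxxx: 2-byte sequence
            n = 1
        elif 0xE0 <= b < 0xF0:       # 1110xxxx: 3-byte sequence
            n = 2
        elif 0xF0 <= b < 0xF8:       # 11110xxx: 4-byte sequence
            n = 3
        else:                        # bare continuation byte or invalid lead
            return False
        # every following byte must match 10xxxxxx
        if i + n >= len(data) or any((data[i + j] & 0xC0) != 0x80
                                     for j in range(1, n + 1)):
            return False
        i += n + 1
    return True

print(looks_like_utf8("héllo".encode("utf-8")))   # True
print(looks_like_utf8("héllo".encode("latin-1"))) # False: 0xE9 lead with no continuations
```

The same loop translates mechanically into C++ over the raw clipboard bytes; if it fails, treating the buffer as ISO 8859 is the usual fallback.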
---
old answer:
You don't have to -- all ASCII text is valid UTF-8, so you can just decode it as UTF-8 and it will work as expected.
To test if it contains non-ASCII characters, you can scan for bytes >= 128.
|
How to tell if text on the windows clipboard is ISO 8859 or UTF-8 in C++?
|
[
"",
"c++",
"windows",
"utf-8",
"clipboard",
""
] |
What is the best way to get a list of all files in a directory, sorted by date [created | modified], using python, on a windows machine?
|
*Update*: to sort `dirpath`'s entries by modification date in Python 3:
```
import os
from pathlib import Path
paths = sorted(Path(dirpath).iterdir(), key=os.path.getmtime)
```
(put [@Pygirl's answer](https://stackoverflow.com/a/58772122/4279) here for greater visibility)
If you already have a list of filenames `files`, then to sort it inplace by creation time on Windows (make sure that list contains absolute path):
```
files.sort(key=os.path.getctime)
```
The list of files you could get, for example, using `glob` as shown in [@Jay's answer](https://stackoverflow.com/a/168424/4279).
---
old answer
Here's a more verbose version of [`@Greg Hewgill`'s answer](https://stackoverflow.com/questions/168409/how-do-you-get-a-directory-listing-sorted-by-creation-date-in-python/168435#168435). It is the most conforming to the question requirements. It makes a distinction between creation and modification dates (at least on Windows).
```
#!/usr/bin/env python
from stat import S_ISREG, ST_CTIME, ST_MODE
import os, sys, time
# path to the directory (relative or absolute)
dirpath = sys.argv[1] if len(sys.argv) == 2 else r'.'
# get all entries in the directory w/ stats
entries = (os.path.join(dirpath, fn) for fn in os.listdir(dirpath))
entries = ((os.stat(path), path) for path in entries)
# leave only regular files, insert creation date
entries = ((stat[ST_CTIME], path)
for stat, path in entries if S_ISREG(stat[ST_MODE]))
#NOTE: on Windows `ST_CTIME` is a creation date
# but on Unix it could be something else
#NOTE: use `ST_MTIME` to sort by a modification date
for cdate, path in sorted(entries):
print time.ctime(cdate), os.path.basename(path)
```
Example:
```
$ python stat_creation_date.py
Thu Feb 11 13:31:07 2009 stat_creation_date.py
```
|
I've done this in the past for a Python script to determine the last updated files in a directory:
```
import glob
import os
search_dir = "/mydir/"
# remove anything from the list that is not a file (directories, symlinks)
# thanks to J.F. Sebastion for pointing out that the requirement was a list
# of files (presumably not including directories)
files = list(filter(os.path.isfile, glob.glob(search_dir + "*")))
files.sort(key=lambda x: os.path.getmtime(x))
```
That should do what you're looking for based on file mtime.
**EDIT**: Note that you can also use os.listdir() in place of glob.glob() if desired - the reason I used glob in my original code was that I was wanting to use glob to only search for files with a particular set of file extensions, which glob() was better suited to. To use listdir here's what it would look like:
```
import os
search_dir = "/mydir/"
os.chdir(search_dir)
files = filter(os.path.isfile, os.listdir(search_dir))
files = [os.path.join(search_dir, f) for f in files] # add path to each file
files.sort(key=lambda x: os.path.getmtime(x))
```
|
How do you get a directory listing sorted by creation date in python?
|
[
"",
"python",
"windows",
"directory",
""
] |
In C++, on Linux, how can I write a function to return a temporary filename that I can then open for writing?
The filename should be as unique as possible, so that another process using the same function won't get the same name.
|
Use one of the standard library "mktemp" functions: mktemp/mkstemp/mkstemps/mkdtemp.
Edit: plain mktemp can be insecure - mkstemp is preferred.
|
tmpnam(), or anything else that gives you a name, is going to be vulnerable to race conditions. Use something designed for this purpose that returns a handle, such as tmpfile():
```
#include <stdio.h>
FILE *tmpfile(void);
```
|
How can I create a temporary file for writing in C++ on a Linux platform?
|
[
"",
"c++",
"linux",
"temporary-files",
""
] |
Two main ways to deploy a J2EE/Java Web app (in a very simplistic sense):
## Deploy assembled artifacts to production box
Here, we create the `.war` (or whatever) elsewhere, configure it for production (possibly creating numerous artifacts for numerous boxes) and place the resulting artifacts on the production servers.
* **Pros**: No dev tools on production boxes, can re-use artifacts from testing directly, staff doing deployment doesn't need knowledge of build process
* **Cons**: two processes for creating and deploying artifacts; potentially complex configuration of pre-built artifacts could make the process hard to script/automate; have to version binary artifacts
## Build the artifacts **on** the production box
Here, the same process used day-to-day to build and deploy locally on developer boxes is used to deploy to production.
* **Pros**: One process to maintain; and it's heavily tested/validated by frequent use. Potentially easier to customize configuration at artifact creation time rather than customize a pre-built artifact afterward; no versioning of binary artifacts needed.
* **Cons**: Potentially complex development tools needed on all production boxes; deployment staff needs to understand build process; you **aren't** deploying what you tested
I've mostly used the second process, admittedly out of necessity (no time/priority for another deployment process). Personally I don't buy arguments like "the production box has to be clean of all compilers, etc.", but I **can** see the logic in deploying what you've tested (as opposed to building another artifact).
However, Java Enterprise applications are so sensitive to configuration, it feels like asking for trouble having two processes for configuring artifacts.
Thoughts?
## Update
Here's a concrete example:
We use OSCache, and enable the disk cache. The configuration file must be inside the .war file and it references a file path. This path is different on every environment. The build process detects the user's configured location and ensures that the properties file placed in the war is correct for his environment.
If we were to use the build process for deployment, it would be a matter of creating the right configuration for the production environment (e.g. `production.build.properties`).
If we were to follow the "deploy assembled artifacts to the production box", we would need an additional process to extract the (incorrect) OSCache properties and replace it with one appropriate to the production environment.
This creates two processes to accomplish the same thing.
So, the questions are:
* Is this avoidable without "compiling on production"?
* If not, is this worth it? It the value of "no compiling on production" greater than "Don't Repeat Yourself"?
|
I'm firmly against building on the production box, because it means you're using a different build than you tested with. It also means every deployment machine has a different JAR/WAR file. If nothing else, do a unified build just so that when bug tracking you won't have to worry about inconsistencies between servers.
Also, you don't need to put the builds into version control if you can easily map between a build and the source that created it.
Where I work, our deployment process is as follows. (This is on Linux, with Tomcat.)
1. Test changes and check into Subversion. (Not necessarily in that order; we don't require that committed code is tested. I'm the only full-time developer, so the SVN tree is essentially my development branch. Your mileage may vary.)
2. Copy the JAR/WAR files to a production server in a shared directory named after the Subversion revision number. The web servers only have read access.
3. The deployment directory contains relative symlinks to the files in the revision-named directories. That way, a directory listing will always show you what version of the source code produced the running version. When deploying, we update a log file which is little more than a directory listing. That makes roll-backs easy. (One gotcha, though; Tomcat checks for new WAR files by the modify date of the real file, not the symlink, so we have to touch the old file when rolling back.)
Our web servers unpack the WAR files onto a local directory. The approach is scalable, since the WAR files are on a single file server; we could have an unlimited number of web servers and only do a single deployment.
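The revision-directory-plus-symlink layout described above can be sketched as follows (paths and file contents are made up for illustration; POSIX only):

```python
import os
import tempfile

root = tempfile.mkdtemp()                 # stands in for the shared file server
deploy = os.path.join(root, "deploy")     # directory the web servers watch
os.makedirs(deploy)

def release(revision: str, contents: bytes) -> None:
    """Store the artifact under a revision-named dir, then repoint the symlink."""
    rev_dir = os.path.join(root, revision)
    os.makedirs(rev_dir, exist_ok=True)
    war = os.path.join(rev_dir, "app.war")
    with open(war, "wb") as f:
        f.write(contents)
    link = os.path.join(deploy, "app.war")
    tmp = link + ".tmp"
    os.symlink(os.path.relpath(war, deploy), tmp)  # relative link, as in the answer
    os.replace(tmp, link)                          # atomic repoint

release("r100", b"build 100")
release("r101", b"build 101")

# Roll back: repoint at r100 and touch the real file so Tomcat notices.
target = os.path.join(root, "r100", "app.war")
link = os.path.join(deploy, "app.war")
os.remove(link)
os.symlink(os.path.relpath(target, deploy), link)
os.utime(target)  # the "touch" mentioned in the gotcha

print(open(link, "rb").read())  # b'build 100'
```

Listing `deploy` shows which revision is live, and every previous build stays on disk under its revision directory, which is what makes roll-backs a one-symlink operation.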
|
Most of the places I've worked have used the first method with environment specific configuration information deployed separately (and updated much more rarely) outside of the war/ear.
|
Java Web Deployment: build code, or deploy .war?
|
[
"",
"java",
"deployment",
"jakarta-ee",
""
] |
I've seen some Python programmers use the following style fairly consistently (we'll call it style 1):
```
import some_module
# Use some_module.some_identifier in various places.
```
For support of this style, you can cite the ["explicit is better than implicit"](http://www.python.org/dev/peps/pep-0020/) maxim. I've seen other programmers use this style (style 2):
```
from some_module import some_identifier
# Use some_identifier in various places.
```
The primary benefit that I see in style 2 is maintainability -- especially with [duck typing](http://en.wikipedia.org/wiki/Duck_typing) ideals I may want to swap some\_module for some\_other\_module. I also feel style 2 wins points with the ["readability counts"](http://www.python.org/dev/peps/pep-0020/) maxim. Although I tend to disagree, one can always argue that search-and-replace is just as good an option when using the first style.
**Addendum:** It was noted that you could use `as` to solve the switch from `some_module` to `some_other_module` in style 1. I forgot to mention that it is also common to decide to implement `some_identifier` in your *current* module, which makes creation of an equivalent `some_module` container slightly awkward.
|
There are uses for both cases, so I don't think this is an either-or issue.
I'd consider using `from module import x, y, z` when:
* There are a fairly small number of things to import
* The purpose of the functions imported is obvious when divorced from the module name. If the names are fairly generic, they may clash with others and tell you little. eg. seeing `remove` tells you little, but `os.remove` will probably hint that you're dealing with files.
* The names don't clash. Similar to the above, but more important. **Never** do something like:
```
from os import open
```
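To make the clash concrete, here is a small sketch of what that import actually does: it binds the name `open` in your namespace to `os.open`, shadowing the built-in `open()`:

```python
# Demonstrate that "from os import open" shadows the built-in open().
import builtins
from os import open  # the name "open" is now os.open, not the built-in

shadowed = open is builtins.open
print(shadowed)  # False: calling open("f.txt") here would hit os.open,
                 # which expects integer flags and returns a file descriptor
```

Any later call to `open("file.txt")` in this module would raise a confusing error, since `os.open` has a completely different signature.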
`import module [as renamed_module]` has the advantage that it gives a bit more context about what is being called when you use it. It has the disadvantage that this is a bit more cluttered when the module isn't really giving more information, and is slightly less performant (2 lookups instead of 1).
It also has advantages when testing however (eg. replacing os.open with a mock object, without having to change every module), and should be used when using mutable modules, e.g.
```
import config
config.dburl = 'sqlite:///test.db'
```
If in doubt, I'd always go with the `import module` style.
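The testing advantage mentioned above comes from the fact that `module.attr` is looked up at call time, so patching the module attribute in one place affects every caller. A minimal sketch (the `mylib`/`fetch` names are purely illustrative, using a stand-in module object rather than a real library):

```python
# Sketch of why "import module" style makes mocking easy: callers look up
# the attribute on the module at call time, so one patch reaches them all.
import types

mylib = types.ModuleType("mylib")            # stand-in for a real module
mylib.fetch = lambda url: "real network call"

def handler():
    # attribute lookup happens here, at call time
    return mylib.fetch("http://example.com")

mylib.fetch = lambda url: "mocked!"          # patch once; handler() unchanged
print(handler())  # mocked!
```

Had `handler` been written with `from mylib import fetch`, it would hold its own reference to the original function and the patch would not be seen.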
|
With the existence of the following syntax:
```
import some_other_module as some_module
```
the maintainability argument of style 2 is no longer relevant.
I tend to use style 1. Normally, I find that I explicitly reference the imported package name only a few times in a typical Python program. Everything else is methods on the object, which of course don't need to reference the imported package.
|
'from X import a' versus 'import X; X.a'
|
[
"",
"python",
"python-import",
"maintainability",
"duck-typing",
""
] |
I would like to use databinding when displaying data in a TextBox. I'm basically doing like:
```
public void ShowRandomObject(IRandomObject randomObject) {
Binding binding = new Binding {Source = randomObject, Path = new PropertyPath("Name")};
txtName.SetBinding(TextBox.TextProperty, binding);
}
```
I can't seem to find a way to unset the binding. I will be calling this method with a lot of different objects but the TextBox will remain the same. Is there a way to remove the previous binding or is this done automatically when I set the new binding?
|
When available
```
BindingOperations.ClearBinding(txtName, TextBox.TextProperty)
```
For older Silverlight versions, but not reliable as stated in comments:
```
txtName.SetBinding(TextBox.TextProperty, null);
```
C# 6.0 features enabled
```
this.btnFinish.ClearBinding(ButtonBase.CommandProperty);
```
|
Alternately:
```
BindingOperations.ClearBinding(txtName, TextBox.TextProperty)
```
|
Remove binding in WPF using code
|
[
"",
"c#",
"wpf",
""
] |
This question was inspired by a similar question: [How does delete[] “know” the size of the operand array?](https://stackoverflow.com/questions/197675/how-does-delete-know-the-size-of-the-operand-array)
My question is a little different: **Is there any way to determine the size of a C++ array programmatically? And if not, why?** Every function I've seen that takes an array also requires an integer parameter to give it the size. But as the linked question pointed out, `delete[]` must know the size of the memory to be deallocated.
Consider this C++ code:
```
int* arr = new int[256];
printf("Size of arr: %d\n", sizeof(arr));
```
This prints "`Size of arr: 4`", which is just the size of the pointer. It would be nice to have some function which prints 256, but I don't think one exists in C++. (Again, part of the question is why it doesn't exist.)
**Clarification**: I know that if I declared the array on the stack instead of the heap (i.e. "`int arr[256];`") that the `sizeof` operator would return 1024 (`array length * sizeof(int)`).
|
`delete []` does know the size that was allocated. However, that knowledge resides in the runtime or in the operating system's memory manager, meaning that it is not available to the compiler during compilation. And `sizeof()` is not a real function, it is actually evaluated to a constant by the compiler, which is something it cannot do for dynamically allocated arrays, whose size is not known during compilation.
Also, consider this example:
```
int *arr = new int[256];
int *p = &arr[100];
printf("Size: %d\n", sizeof(p));
```
How would the compiler know what the size of `p` is? The root of the problem is that arrays in C and C++ are not first-class objects. They decay to pointers, and there is no way for the compiler or the program itself to know whether a pointer points to the beginning of a chunk of memory allocated by `new`, or to a single object, or to some place in the middle of a chunk of memory allocated by `new`.
One reason for this is that C and C++ leave memory management to the programmer and to the operating system, which is also why they do not have garbage collection. Implementation of `new` and `delete` is not part of the C++ standard, because C++ is meant to be used on a variety of platforms, which may manage their memory in very different ways. It may be possible to let C++ keep track of all the allocated arrays and their sizes if you are writing a word processor for a windows box running on the latest Intel CPU, but it may be completely infeasible when you are writing an embedded system running on a DSP.
|
Well there is actually a way to determine the size, but it's not "safe" and will be different from compiler to compiler... **so it shouldn't be used at all**.
When you do:
```
int* arr = new int[256];
```
the 256 is irrelevant; you will be given 256 \* sizeof(int) bytes, in this case assume 1024, and this value will probably be stored at `(arr - 4)`.
So to give you the number of "items":
```
int* p_iToSize = arr - 4;
printf("Number of items %d", *p_iToSize / sizeof(int));
```
For every malloc, new, whatever, there is also a reserved space allocated just before the continuous memory block that you receive, containing some information about the block of memory you were given.
|
Can you obtain the size of an array allocated with new T[]?
|
[
"",
"c++",
"arrays",
"pointers",
"memory-management",
"new-operator",
""
] |
```
Object o = new Long[0];
System.out.println( o.getClass().isArray() );
System.out.println( o.getClass().getName() );
Class ofArray = ???;
```
Running the first 3 lines emits;
```
true
[Ljava.lang.Long;
```
How do I get ??? to be `Long`? I could parse the string and call `Class.forName()`, but that's grotty. What's the easy way?
|
Just write
```
Class ofArray = o.getClass().getComponentType();
```
From [the JavaDoc](http://docs.oracle.com/javase/8/docs/api/java/lang/Class.html#getComponentType--):
> `public Class<?> getComponentType()`
>
> Returns the `Class` representing the component type of an array. If this class does not represent an array class this method returns `null`.
|
[http://java.sun.com/j2se/1.5.0/docs/api/java/lang/Class.html#getComponentType()](http://docs.oracle.com/javase/1.5.0/docs/api/java/lang/Class.html#getComponentType()):
> ```
> public Class<?> getComponentType()
> ```
>
> Returns the `Class` representing the component type of an array. If this class does not represent an array class this method returns null...
|
In Java, how do I dynamically determine the type of an array?
|
[
"",
"java",
""
] |
In the following example should I expect that `values.size()` will be called every time around the loop? In which case it might make sense to introduce a temporary `vectorSize` variable. Or should a modern compiler be able to optimize the calls away by recognising that the vector size cannot change.
```
double sumVector(const std::vector<double>& values) {
    double sum = 0.0;
    for (size_t ii = 0; ii < values.size(); ++ii) {
        sum += values.at(ii);
    }
    return sum;
}
```
Note that I don't care if there are more efficient methods to sum the contents of a vector, this question is just about the use of size() in a for construct.
|
Here's one way to do it that makes it explicit - size() is called only once.
```
for (size_t ii = 0, count = values.size(); ii < count; ++ii)
```
**Edit:** I've been asked to actually answer the question, so here's my best shot.
A compiler generally won't optimize a function call, because it doesn't know if it will get a different return value from one call to the next. It also won't optimize if there are operations inside the loop that it can't predict the side effects of. Inline functions might make a difference, but nothing is guaranteed. Local variables are easier for the compiler to optimize.
Some will call this premature optimization, and I agree that there are few cases where you will ever notice a speed difference. But if it doesn't make the code any harder to understand, why not just consider it a best practice and go with it? It certainly can't hurt.
P.S. I wrote this before I read [Benoit's](https://stackoverflow.com/questions/166641/is-using-size-for-the-2nd-expression-in-a-for-construct-always-bad#166654) answer carefully, I believe we're in complete agreement.
|
It all depends on how the vector's size() is implemented, how aggressive the compiler is, and whether it listens to/uses inline directives.
I would be more defensive and introduce the temporary as you don't have any guarantees about how efficient your compiler will be.
Of course, if this routine is called once or twice and the vector is small, it really doesn't matter.
If it will be called thousands of times, then I would use the temporary.
Some might call this premature optimization, but I would tend to disagree with that assessment.
While you are trying to optimize the code, you are not investing time or obfuscating the code in the name of performance.
I have a hard time considering what is a refactoring to be an optimization. But in the end, this is along the lines of "you say tomato, I say tomato"...
|
Is using size() for the 2nd expression in a for construct always bad?
|
[
"",
"c++",
""
] |
One of the few annoying things about the Eclipse Java plug-in is the absence of a keyboard shortcut to build the project associated with the current resource. Anyone know how to go about it?
|
In the Preferences dialog box, under the General section is a dialog box called "Keys". This lets you attach key bindings to many events, including Build Project.
|
You can assign a keyboard binding to *Build Project* doing the following
1. Open up the Keys preferences, Window> Preferences >General>Keys
2. Filter by type Build Project
3. Highlight the binding field.
You can then choose the binding you want
e.g. `Ctrl`+`Alt`+`B`, `P`
|
How do I add a shortcut key to Eclipse 3.2 Java plug-in to build the current project?
|
[
"",
"java",
"eclipse",
"eclipse-3.2",
""
] |
If I have a datetime field, how do I get just records created later than a certain time, ignoring the date altogether?
It's a logging table, it tells when people are connecting and doing something in our application. I want to find out how often people are on later than 5pm.
(Sorry - it is SQL Server. But this could be useful for other people for other databases)
|
For SQL Server:
```
select * from myTable where datepart(hh, myDateField) > 17
```
See <http://msdn.microsoft.com/en-us/library/aa258265(SQL.80).aspx>.
|
What database system are you using? Date/time functions vary widely.
For Oracle, you could say
```
SELECT * FROM TABLE
WHERE TO_CHAR(THE_DATE, 'HH24:MI:SS') BETWEEN '17:00:00' AND '23:59:59';
```
Also, you probably need to roll-over into the next day and also select times between midnight and, say, 6am.
|
How to get records after a certain time using SQL datetime field
|
[
"",
"sql",
"sql-server",
"datetime",
""
] |
I'm subclassing a native window (the edit control of a combobox...)
```
oldWndProc = SetWindowLong(HandleOfCbEditControl, GWL_WNDPROC, newWndProc);
```
In my subclassing wndproc, I'll have code like this, right, but I can't figure out the syntax for calling the oldWndProc.
```
int MyWndProc(int Msg, int wParam, int lParam)
{
if (Msg == something I'm interested in...)
{
return something special
}
else
{
return result of call to oldWndProc <<<< What does this look like?***
}
}
```
EDIT: The word "subclassing" in this question refers to the WIN32 API meaning, not C#. Subclassing here doesn't mean overriding the .NET base class behavior. It means telling WIN32 to call your function pointer instead of the windows current callback. It has nothing to do with inheritence in C#.
|
You'll call [CallWindowProc](http://msdn.microsoft.com/en-us/library/aa452919.aspx) by P/Invoke. Just define the parameters as int variables (as it looks like that's how you defined them in the SetWindowLong call), so something like this:
```
[DllImport("user32.dll")]
public static extern int CallWindowProc(int previousProc, int nativeControlHandle, int msg, int wParam, int lParam);
```
Remember, that for marshaling, int, uint and IntPtr are all identical.
|
You should use CallWindowProc to call that oldWndProc pointer.
```
[DllImport("user32")]
private static extern int CallWindowProc(IntPtr lpPrevWndFunc, IntPtr hWnd, int Msg, int wParam, int lParam);
```
|
In C#, I have a IntPtr to a WIN32 WndProc. What is the syntax for calling it?
|
[
"",
"c#",
".net",
"winapi",
""
] |
I just figured out that I can actually **store objects in the $\_SESSION** and I find it quite cool because when I jump to another page I still have my object. Now before I start using this approach I would like to find out if it is really such a good idea or if there are **potential pitfalls** involved.
I know that if I had a single point of entry I wouldn't need to do that but I'm not there yet so I don't have a single point of entry and I would really like to keep my object because I don't lose my state like that. (Now I've also read that I should program stateless sites but I don't understand that concept yet.)
So **in short**: Is it ok to store objects in the session, are there any problems with it?
---
Edit:
*Temporary summary*: By now I understand that it is probably **better to recreate** the object even if it involves querying the database again.
Further answers could maybe **elaborate on that aspect** a bit more!
|
I know this topic is old, but this issue keeps coming up and has not been addressed to my satisfaction:
Whether you save objects in $\_SESSION, or reconstruct them whole cloth based on data stashed in hidden form fields, or re-query them from the DB each time, you are using state. HTTP is stateless (more or less; but see GET vs. PUT) but almost everything anybody cares to do with a web app requires state to be maintained somewhere. Acting as if pushing the state into nooks and crannies amounts to some kind of theoretical win is just wrong. State is state. If you use state, you lose the various technical advantages gained by being stateless. This is not something to lose sleep over unless you know in advance that you ought to be losing sleep over it.
I am especially flummoxed by the blessing received by the "double whammy" arguments put forth by Hank Gay. Is the OP building a distributed and load-balanced e-commerce system? My guess is no; and I will further posit that serializing his $User class, or whatever, will not cripple his server beyond repair. My advice: use techniques that are sensible to your application. Objects in $\_SESSION are fine, subject to common sense precautions. If your app suddenly turns into something rivaling Amazon in traffic served, you will need to re-adapt. That's life.
|
It's OK as long as, by the time the session\_start() call is made, the class declaration/definition has already been encountered by PHP or can be found by an already-installed autoloader. Otherwise it would not be able to deserialize the object from the session store.
|
PHP: Storing 'objects' inside the $_SESSION
|
[
"",
"php",
"session",
"object",
""
] |
My current project uses NUnit for unit tests and to drive UATs written with Selenium. Developers normally run tests using ReSharper's test runner in VS.Net 2003 and our build box kicks them off via NAnt.
We would like to run the UAT tests in parallel so that we can take advantage of Selenium Grid/RCs so that they will be able to run much faster.
Does anyone have any thoughts on how this might be achieved? and/or best practices for testing Selenium tests against multiple browsers environments without writing duplicate tests automatically?
Thank you.
|
There hasn't been a lot of work on this subject. I didn't find anything really relevent.
However, your point is well taken. Most machines nowadays have many less-powerful cores, rather than a single powerful one.
So I did find something on a Microsoft blog. The technology is called PUnit and is made especially for testing multi-threaded environment.
It's as close as possible to what you requested that I could find :)
You can visit it the appropriate blog post right there: <http://blogs.microsoft.co.il/blogs/eyal/archive/2008/07/09/punit-parallel-unit-testing-in-making.aspx>
**Update: Link is not valid anymore. The project on CodePlex has been removed.**
**Update2: This is on the roadmap for NUnit 2.5. [Reference](http://www.nunit.org/index.php?p=roadmap)**
|
I struggled with both these problems myself. In the end I developed a custom NUnit test runner that is capable of running multiple tests in parallel. This, combined with the Taumuon.Rakija extension for NUnit, allowed the tests to be dynamically created depending on which browser you want the test to run on.
I'm now in a position where I can launch my test suite against as many browser types on as many operating systems as I wish, in parallel.
Unfortunately there doesn't seem to be a good solution to these problems already so you'll probably have to solve them yourself for your particular environment.
|
How can I run NUnit(Selenium Grid) tests in parallel?
|
[
"",
"c#",
"selenium",
"testing",
"nunit",
"selenium-grid",
""
] |
I've done some Python but have just now starting to use Ruby
I could use a good explanation of the difference between "self" in these two languages.
**Obvious on first glance:**
Self is not a keyword in Python, but there is a "self-like" value no matter what you call it.
Python methods receive self as an explicit argument, whereas Ruby does not.
Ruby sometimes has methods explicitly defined as part of self using dot notation.
**Initial Googling reveals**
<http://rubylearning.com/satishtalim/ruby_self.html>
<http://www.ibiblio.org/g2swap/byteofpython/read/self.html>
|
Python is designed to support more than just object-oriented programming. Preserving the same interface between methods and functions lets the two styles interoperate more cleanly.
Ruby was built from the ground up to be object-oriented. Even the literals are objects (evaluate 1.class and you get Fixnum). The language was built such that self is a reserved keyword that returns the current instance wherever you are.
If you're inside an instance method of one of your class, self is a reference to said instance.
If you're in the definition of the class itself (not in a method), self is the class itself:
```
class C
puts "I am a #{self}"
def instance_method
puts 'instance_method'
end
def self.class_method
puts 'class_method'
end
end
```
At class definition time, 'I am a C' will be printed.
The straight 'def' defines an instance method, whereas the 'def self.xxx' defines a class method.
```
c=C.new
c.instance_method
#=> instance_method
C.class_method
#=> class_method
```
|
Despite webmat's claim, Guido [wrote](http://markmail.org/message/n6fs5pec5233mbfg) that explicit self is "not an implementation hack -- it is a semantic device".
> The reason for explicit self in method
> definition signatures is semantic
> consistency. If you write
>
> ```
> class C:
>     def foo(self, x, y): ...
> ```
>
> This really *is* the same as writing
>
> ```
> class C: pass
>
> def foo(self, x, y): ...
> C.foo = foo
> ```
This was an intentional design decision, not a result of introducing OO behaviour at a later date.
Everything in Python -is- an object, including literals.
See also [Why must 'self' be used explicitly in method definitions and calls?](http://effbot.org/pyfaq/why-must-self-be-used-explicitly-in-method-definitions-and-calls.htm)
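A runnable sketch of the equivalence Guido describes (class and function names are illustrative): a method really is just a plain function stored as a class attribute, with the instance passed explicitly as `self`.

```python
# Demonstrates that a Python method is a function assigned to a class attribute.
class C:
    pass

def foo(self, x, y):
    # 'self' is an ordinary first parameter holding the instance
    return (self, x + y)

C.foo = foo                # attach the plain function to the class afterwards

c = C()
obj, total = c.foo(2, 3)   # Python passes 'c' as 'self' automatically
print(obj is c, total)     # True 5
```

Defining `foo` inside the `class` block would produce exactly the same behaviour, which is the "semantic consistency" being argued for.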
|
What is the difference between Ruby and Python versions of"self"?
|
[
"",
"python",
"ruby",
"language-features",
""
] |
I've used lex and yacc (more usually bison) in the past for various projects, usually translators (such as a subset of EDIF streamed into an EDA app). Additionally, I've had to support code based on lex/yacc grammars dating back decades. So I know my way around the tools, though I'm no expert.
I've seen positive comments about Antlr in various fora in the past, and I'm curious as to what I may be missing. So if you've used both, please tell me what's better or more advanced in Antlr. My current constraints are that I work in a C++ shop, and any product we ship will not include Java, so the resulting parsers would have to follow that rule.
|
### Update/warning: This answer may be out of date!
---
One major difference is that ANTLR generates an LL(\*) parser, whereas YACC and Bison both generate parsers that are LALR. This is an important distinction for a number of applications, the most obvious being operators:
```
expr ::= expr '+' expr
| expr '-' expr
| '(' expr ')'
| NUM ;
```
ANTLR is entirely incapable of handling this grammar as-is. To use ANTLR (or any other LL parser generator), you would need to convert this grammar to something that is not left-recursive. However, Bison has no problem with grammars of this form. You would need to declare '+' and '-' as left-associative operators, but that is not strictly required for left recursion. A better example might be dispatch:
```
expr ::= expr '.' ID '(' actuals ')' ;
actuals ::= actuals ',' expr | expr ;
```
Notice that both the `expr` and the `actuals` rules are left-recursive. This produces a much more efficient AST when it comes time for code generation because it avoids the need for multiple registers and unnecessary spilling (a left-leaning tree can be collapsed whereas a right-leaning tree cannot).
In terms of personal taste, I think that LALR grammars are a lot easier to construct and debug. The downside is you have to deal with somewhat cryptic errors like shift-reduce and (the dreaded) reduce-reduce. These are errors that Bison catches when generating the parser, so it doesn't affect the end-user experience, but it can make the development process a bit more interesting. ANTLR is generally considered to be easier to use than YACC/Bison for precisely this reason.
|
The most significant difference between YACC/Bison and ANTLR is the type of grammars these tools can process. YACC/Bison handle LALR grammars, ANTLR handles LL grammars.
Often, people who have worked with LALR grammars for a long time, will find working with LL grammars more difficult and vice versa. That does not mean that the grammars or tools are inherently more difficult to work with. Which tool you find easier to use will mostly come down to familiarity with the type of grammar.
As far as advantages go, there are aspects where LALR grammars have advantages over LL grammars and there are other aspects where LL grammars have advantages over LALR grammars.
YACC/Bison generate table driven parsers, which means the "processing logic" is contained in the parser program's data, not so much in the parser's code. The pay off is that even a parser for a very complex language has a relatively small code footprint. This was more important in the 1960s and 1970s when hardware was very limited. Table driven parser generators go back to this era and small code footprint was a main requirement back then.
ANTLR generates recursive descent parsers, which means the "processing logic" is contained in the parser's code, as each production rule of the grammar is represented by a function in the parser's code. The pay off is that it is easier to understand what the parser is doing by reading its code. Also, recursive descent parsers are typically faster than table driven ones. However, for very complex languages, the code footprint will be larger. This was a problem in the 1960s and 1970s. Back then, only relatively small languages like Pascal for instance were implemented this way due to hardware limitations.
ANTLR generated parsers are typically in the vicinity of 10,000 lines of code and more. Handwritten recursive descent parsers are often in the same ballpark. Wirth's Oberon compiler is perhaps the most compact one with about 4,000 lines of code including code generation, but Oberon is a very compact language with only about 40 production rules.
As somebody has pointed out already, a big plus for ANTLR is the graphical IDE tool, called ANTLRworks. It is a complete grammar and language design laboratory. It visualises your grammar rules as you type them and if it finds any conflicts it will show you graphically what the conflict is and what causes it. It can even automatically refactor and resolve conflicts such as left-recursion. Once you have a conflict free grammar, you can let ANTLRworks parse an input file of your language and build a parse tree and AST for you and show the tree graphically in the IDE. This is a very big advantage because it can save you many hours of work: You will find conceptual errors in your language design before you start coding! I have not found any such tool for LALR grammars, it seems there isn't any such tool.
Even to people who do not wish to generate their parsers but hand code them, ANTLRworks is a great tool for language design/prototyping. Quite possibly the best such tool available. Unfortunately, that doesn't help you if you want to build LALR parsers. Switching from LALR to LL simply to take advantage of ANTLRworks may well be worthwhile, but for some people, switching grammar types can be a very painful experience. In other words: YMMV.
|
Advantages of Antlr (versus say, lex/yacc/bison)
|
[
"",
"c++",
"antlr",
"yacc",
"bison",
""
] |
While researching this issue, I found multiple mentions of the following scenario online, invariably as unanswered questions on programming forums. I hope that posting this here will at least serve to document my findings.
First, the symptom: While running pretty standard code that uses waveOutWrite() to output PCM audio, I sometimes get this when running under the debugger:
```
ntdll.dll!_DbgBreakPoint@0()
ntdll.dll!_RtlpBreakPointHeap@4() + 0x28 bytes
ntdll.dll!_RtlpValidateHeapEntry@12() + 0x113 bytes
ntdll.dll!_RtlDebugGetUserInfoHeap@20() + 0x96 bytes
ntdll.dll!_RtlGetUserInfoHeap@20() + 0x32743 bytes
kernel32.dll!_GlobalHandle@4() + 0x3a bytes
wdmaud.drv!_waveCompleteHeader@4() + 0x40 bytes
wdmaud.drv!_waveThread@4() + 0x9c bytes
kernel32.dll!_BaseThreadStart@8() + 0x37 bytes
```
While the obvious suspect would be a heap corruption somewhere else in the code, I found out that that's not the case. Furthermore, I was able to reproduce this problem using the following code (this is part of a dialog based MFC application:)
```
void CwaveoutDlg::OnBnClickedButton1()
{
WAVEFORMATEX wfx;
wfx.nSamplesPerSec = 44100; /* sample rate */
wfx.wBitsPerSample = 16; /* sample size */
wfx.nChannels = 2;
wfx.cbSize = 0; /* size of _extra_ info */
wfx.wFormatTag = WAVE_FORMAT_PCM;
wfx.nBlockAlign = (wfx.wBitsPerSample >> 3) * wfx.nChannels;
wfx.nAvgBytesPerSec = wfx.nBlockAlign * wfx.nSamplesPerSec;
waveOutOpen(&hWaveOut,
WAVE_MAPPER,
&wfx,
(DWORD_PTR)m_hWnd,
0,
CALLBACK_WINDOW );
ZeroMemory(&header, sizeof(header));
header.dwBufferLength = 4608;
header.lpData = (LPSTR)GlobalLock(GlobalAlloc(GMEM_MOVEABLE | GMEM_SHARE | GMEM_ZEROINIT, 4608));
waveOutPrepareHeader(hWaveOut, &header, sizeof(header));
waveOutWrite(hWaveOut, &header, sizeof(header));
}
afx_msg LRESULT CwaveoutDlg::OnWOMDone(WPARAM wParam, LPARAM lParam)
{
HWAVEOUT dev = (HWAVEOUT)wParam;
WAVEHDR *hdr = (WAVEHDR*)lParam;
waveOutUnprepareHeader(dev, hdr, sizeof(WAVEHDR));
GlobalFree(GlobalHandle(hdr->lpData));
ZeroMemory(hdr, sizeof(*hdr));
hdr->dwBufferLength = 4608;
hdr->lpData = (LPSTR)GlobalLock(GlobalAlloc(GMEM_MOVEABLE | GMEM_SHARE | GMEM_ZEROINIT, 4608));
waveOutPrepareHeader(hWaveOut, hdr, sizeof(WAVEHDR));
waveOutWrite(hWaveOut, hdr, sizeof(WAVEHDR));
return 0;
}
```
Before anyone comments on this, yes - the sample code plays back uninitialized memory. Don't try this with your speakers turned all the way up.
Some debugging revealed the following information: waveOutPrepareHeader() populates header.reserved with a pointer to what appears to be a structure containing at least two pointers as its first two members. The first pointer is set to NULL. After calling waveOutWrite(), this pointer is set to a pointer allocated on the global heap. In pseudo code, that would look something like this:
```
struct Undocumented { void *p1, *p2; } /* This might have more members */
MMRESULT waveOutPrepareHeader( handle, LPWAVEHDR hdr, ...) {
hdr->reserved = (Undocumented*)calloc(sizeof(Undocumented));
/* Do more stuff... */
}
MMRESULT waveOutWrite( handle, LPWAVEHDR hdr, ...) {
/* The following assignment fails rarely, causing the problem: */
hdr->reserved->p1 = malloc( /* chunk of private data */ );
/* Probably more code to initiate playback */
}
```
Normally, the header is returned to the application by waveCompleteHeader(), a function internal to wdmaud.dll. waveCompleteHeader() tries to deallocate the pointer allocated by waveOutWrite() by calling GlobalHandle()/GlobalUnlock() and friends. Sometimes, GlobalHandle() bombs, as shown above.
Now, the reason that GlobalHandle() bombs is not due to a heap corruption, as I suspected at first - it's because waveOutWrite() returned without setting the first pointer in the internal structure to a valid pointer. I suspect that it frees the memory pointed to by that pointer before returning, but I haven't disassembled it yet.
This only appears to happen when the wave playback system is low on buffers, which is why I'm using a single header to reproduce this.
At this point I have a pretty good case against this being a bug in my application - after all, my application is not even running. Has anyone seen this before?
I'm seeing this on Windows XP SP2. The audio card is from SigmaTel, and the driver version is 5.10.0.4995.
Notes:
To prevent confusion in the future, I'd like to point out that the answer suggesting that the problem lies with the use of malloc()/free() to manage the buffers being played is simply wrong. You'll note that I changed the code above to reflect the suggestion, to prevent more people from making the same mistake - it doesn't make a difference. The buffer being freed by waveCompleteHeader() is not the one containing the PCM data, the responsibility to free the PCM buffer lies with the application, and there's no requirement that it be allocated in any specific way.
Also, I make sure that none of the waveOut API calls I use fail.
I'm currently assuming that this is either a bug in Windows, or in the audio driver. Dissenting opinions are always welcome.
|
You're not alone with this issue:
<http://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=100589>
|
> Now, the reason that GlobalHandle()
> bombs is not due to a heap corruption,
> as I suspected at first - it's because
> waveOutWrite() returned without
> setting the first pointer in the
> internal structure to a valid pointer.
> I suspect that it frees the memory
> pointed to by that pointer before
> returning, but I haven't disassembled
> it yet.
I can reproduce this with your code on my system. I see something similar to what Johannes reported. After the call to WaveOutWrite, hdr->reserved normally holds a pointer to allocated memory (which appears to contain the wave out device name in unicode, among other things).
But occasionally, after returning from WaveOutWrite(), the byte pointed to by `hdr->reserved` is set to 0. This is normally the least significant byte of that pointer. The rest of the bytes in `hdr->reserved` are ok, and the block of memory that it normally points to is still allocated and uncorrupted.
It probably is being clobbered by another thread - I can catch the change with a conditional breakpoint immediately after the call to WaveOutWrite(). And the system debug breakpoint is occurring in another thread, not the message handler.
However, I can't cause the system debug breakpoint to occur if I use a callback function instead of the windows message pump. (`fdwOpen = CALLBACK_FUNCTION` in WaveOutOpen() )
When I do it this way, my OnWOMDone handler is called by a different thread - possibly the one that's otherwise responsible for the corruption.
So I think there is a bug, either in windows or the driver, but I think you can work around by handling WOM\_DONE with a callback function instead of the windows message pump.
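A minimal sketch of that workaround (illustrative only; assumes a Win32 build linked against `winmm.lib`, error handling omitted). Note that per the `waveOutProc` documentation, only a small set of system calls is safe inside the callback, so the handler just signals the application thread and the cleanup happens there:

```
#include <windows.h>
#include <mmsystem.h>
#pragma comment(lib, "winmm.lib")

static HANDLE g_done;  /* auto-reset event, created by the app at startup */

/* winmm invokes this on its own internal thread. The docs forbid calling
   most system functions here (SetEvent/PostMessage are among the allowed
   ones), so do NOT unprepare/free the header in the callback itself. */
static void CALLBACK OnWaveOut(HWAVEOUT hwo, UINT uMsg, DWORD_PTR dwInstance,
                               DWORD_PTR dwParam1, DWORD_PTR dwParam2)
{
    if (uMsg == WOM_DONE)
        SetEvent(g_done);
}

/* Open with CALLBACK_FUNCTION instead of CALLBACK_WINDOW:

   waveOutOpen(&hWaveOut, WAVE_MAPPER, &wfx,
               (DWORD_PTR)OnWaveOut, 0, CALLBACK_FUNCTION);

   then WaitForSingleObject(g_done, ...) on the application thread and
   call waveOutUnprepareHeader()/free() there. */
```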
|
Why would waveOutWrite() cause an exception in the debug heap?
|
[
"",
"c++",
"windows",
"audio",
"waveoutwrite",
""
] |
I have a textarea with many lines of input, and a JavaScript event fires that necessitates I scroll the textarea to line 345.
`scrollTop` sort of does what I want, except that, as far as I can tell, it works at the pixel level, and I want something that operates at the line level. What also complicates things is that, again afaik, it's not possible to stop textareas from line-wrapping.
|
You can stop wrapping with the wrap attribute. It is not part of HTML 4, but most browsers support it.
You can compute the height of a line by dividing the height of the area by its number of rows.
```
<script type="text/javascript" language="JavaScript">
function Jump(line)
{
var ta = document.getElementById("TextArea");
var lineHeight = ta.clientHeight / ta.rows;
var jump = (line - 1) * lineHeight;
ta.scrollTop = jump;
}
</script>
<textarea name="TextArea" id="TextArea"
rows="40" cols="80" title="Paste text here"
wrap="off"></textarea>
<input type="button" onclick="Jump(98)" title="Go!" value="Jump"/>
```
Tested OK in FF3 and IE6.
|
If you use .scrollHeight instead of .clientHeight, it will work properly for textareas that are shown with a limited height and a scrollbar:
```
<script type="text/javascript" language="JavaScript">
function Jump(line)
{
var ta = document.getElementById("TextArea");
var lineHeight = ta.scrollHeight / ta.rows;
var jump = (line - 1) * lineHeight;
ta.scrollTop = jump;
}
</script>
<textarea name="TextArea" id="TextArea"
rows="40" cols="80" title="Paste text here"
wrap="off"></textarea>
<input type="button" onclick="Jump(98)" title="Go!" value="Jump"/>
```
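The arithmetic both snippets rely on can be isolated into a plain function (the name and parameters here are illustrative, not part of any API): given the textarea's full content height and its total line count, the target `scrollTop` for a 1-based line number is just lines-above times pixels-per-line.

```
// Compute the scrollTop (in pixels) that brings a given 1-based line
// to the top of the textarea, assuming wrap="off" so each logical line
// occupies exactly one rendered row.
function lineToScrollTop(line, contentHeight, totalLines) {
  var lineHeight = contentHeight / totalLines;
  return (line - 1) * lineHeight;
}

// e.g. a 2000px-tall content area holding 100 lines: 20px per line,
// so line 98 starts 1940px down.
console.log(lineToScrollTop(98, 2000, 100)); // 1940
```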
|
Cross browser "jump to"/"scroll" textarea
|
[
"",
"javascript",
"html",
"css",
""
] |
I'm trying to get a PHP site working in IIS on Windows Server with MySQL.
I'm getting this error…
Fatal error: Call to undefined function mysql\_connect() in C:\inetpub...\adodb\drivers\adodb-mysql.inc.php on line 363
---
Update…
This link outlines the steps I followed to install PHP on my server:
[How do I get PHP and MySQL working on IIS 7.0 ?](https://stackoverflow.com/questions/11919/how-do-i-get-php-and-mysql-working-on-iis-70#94341)
(note especially steps 6 and 8 regarding php.ini and php\_mysql.dll).
Only two lines in the phpinfo report reference SQL:
```
<?php
phpinfo();
?>
```
> Configure Command:
> cscript /nologo configure.js "--enable-snapshot-build" "--enable-mysqlnd"
>
> sql.safe\_mode:
> Local Value Off, Master Value Off
[PHP Configure Command http://img79.imageshack.us/img79/2373/configurecommandmw8.gif](http://img79.imageshack.us/img79/2373/configurecommandmw8.gif)
[PHP sql.safe\_mode http://img49.imageshack.us/img49/3066/sqlsafemoderu6.gif](http://img49.imageshack.us/img49/3066/sqlsafemoderu6.gif)
---
Update…
I found the solution: [How do I install MySQL modules within PHP?](https://stackoverflow.com/questions/158279/how-do-i-install-mysql-modules-within-php#160746)
|
I found the solution: [How do I install MySQL modules within PHP?](https://stackoverflow.com/questions/158279/how-do-i-install-mysql-modules-within-php#160746)
|
Check out phpinfo to see if the mysql functions are compiled with your PHP
```
<?php
phpinfo();
?>
```
In some versions of PHP, the MySQL extension isn't enabled by default in the install.
**Edit for the Update:**
You should have a full MySQL category in your phpinfo();
See this for example: <https://secure18.easycgi.com/phpinfo.php> (googled example)
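If you have shell access to the server, the same check can be done from the command line: `php -m` lists the modules the PHP binary has loaded (this assumes the `php` CLI is on the PATH).

```
# List loaded PHP modules and look for the mysql extension.
# Prints the matching module names, or a notice if none are loaded.
php -m | grep -i mysql || echo "no mysql extension loaded"
```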
|
How do I get PHP to work with ADOdb and MySQL?
|
[
"",
"php",
"mysql",
"windows",
"iis",
"adodb",
""
] |
I have a set of configuration items I need to persist to a "human readable" file. These items are in a hierarchy:
```
Device 1
Name
Channel 1
Name
Size
...
Channel N
Name
...
Device M
Name
Channel 1
```
Each of these items could be stored in a Dictionary with a string key and a value. They could also be in a structure/DTO.
I don't care about the format of the file as long as it's human readable. It could be XML, or it could be something more like INI format:
```
[Header]
Key=value
Key2=value
...
```
Is there a way to minimize the amount of boilerplate code I would need to write to manage storing/reading configuration items?
Should I just create Data Transfer Objects (DTOs)/structures and mark them serializable? (Does that generate bloated XML, or is it still human readable?)
Are there other suggestions?
Edit: Note that the software has to **write** as well as **read** the config. That leaves app.config out.
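As a sketch of what I mean by the DTO route (class and member names are illustrative): `XmlSerializer` emits one element per public property, and its default `TextWriter` output is indented, so the file stays fairly readable.

```
using System.Collections.Generic;
using System.IO;
using System.Xml.Serialization;

public class Channel
{
    public string Name { get; set; }
    public int Size { get; set; }
}

public class Device
{
    public Device() { Channels = new List<Channel>(); }
    public string Name { get; set; }
    public List<Channel> Channels { get; set; }
}

public class AppConfig
{
    public AppConfig() { Devices = new List<Device>(); }
    public List<Device> Devices { get; set; }

    public void Save(string path)
    {
        var ser = new XmlSerializer(typeof(AppConfig));
        using (var w = new StreamWriter(path))
            ser.Serialize(w, this);
    }

    public static AppConfig Load(string path)
    {
        var ser = new XmlSerializer(typeof(AppConfig));
        using (var r = new StreamReader(path))
            return (AppConfig)ser.Deserialize(r);
    }
}
```

The result is nested `<Device>`/`<Channel>` elements mirroring the hierarchy above, which remains hand-editable.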
|
See the [FileHelpers](http://filehelpers.sourceforge.net/) library. It's got tons of stuff for reading from and writing to a lot of different formats - and all you have to do is mark up your objects with attributes and call Save(). Sort of like ORM for flat files.
|
[YAML for .NET](http://www.codeplex.com/yaml)
|
Persisting configuration items in .net
|
[
"",
"c#",
".net",
"configuration",
""
] |
I'd like to find the base url of my application, so I can automatically reference other files in my application tree...
So given a file config.php in the base of my application, if a file in a subdirectory includes it, knows what to prefix a url with.
```
application/config.php
application/admin/something.php
application/css/style.css
```
So given that `http://www.example.com/application/admin/something.php` is accessed, I want it to be able to know that the css file is in `$approot/css/style.css`. In this case, `$approot` is "`/application`" but I'd like it to know if the application is installed elsewhere.
I'm not sure if it's possible; many applications (phpMyAdmin, and I think SquirrelMail) require you to set a config variable up front. It would be more user-friendly if the application just knew.
|
I use the following in a homebrew framework... Put this in a file in the root folder of your application and simply include it.
```
define('ABSPATH', str_replace('\\', '/', dirname(__FILE__)) . '/');
$tempPath1 = explode('/', str_replace('\\', '/', dirname($_SERVER['SCRIPT_FILENAME'])));
$tempPath2 = explode('/', substr(ABSPATH, 0, -1));
$tempPath3 = explode('/', str_replace('\\', '/', dirname($_SERVER['PHP_SELF'])));
for ($i = count($tempPath2); $i < count($tempPath1); $i++)
array_pop ($tempPath3);
$urladdr = $_SERVER['HTTP_HOST'] . implode('/', $tempPath3);
if ($urladdr[strlen($urladdr) - 1] == '/')
define('URLADDR', 'http://' . $urladdr);
else
define('URLADDR', 'http://' . $urladdr . '/');
unset($tempPath1, $tempPath2, $tempPath3, $urladdr);
```
The above code defines two constants. ABSPATH contains the absolute path to the root of the application (local file system) while URLADDR contains the fully qualified URL of the application. It does work in mod\_rewrite situations.
|
You can find the base url with the following code:
```
define('SITE_BASE_PATH','http://'.preg_replace('/[^a-zA-Z0-9]/i','',$_SERVER['HTTP_HOST']).'/'.str_replace('\\','/',substr(dirname(__FILE__),strlen($_SERVER['DOCUMENT_ROOT']))).'/');
```
Short and best.
|
How can I find an application's base url?
|
[
"",
"php",
"web-applications",
""
] |