| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
Is it possible to terminate a running thread without setting/checking any flags/semaphores/etc.?
|
It is generally a bad pattern to kill a thread abruptly, in Python and in any language. Think of the following cases:
* the thread is holding a critical resource that must be closed properly
* the thread has created several other threads that must be killed as well.
The nice way of handling this, if you can afford it (if you are managing your own threads), is to have an `exit_request` flag that each thread checks at regular intervals to see if it is time for it to exit.
**For example:**
```
import threading

class StoppableThread(threading.Thread):
    """Thread class with a stop() method. The thread itself has to check
    regularly for the stopped() condition."""

    def __init__(self, *args, **kwargs):
        super(StoppableThread, self).__init__(*args, **kwargs)
        self._stop_event = threading.Event()

    def stop(self):
        self._stop_event.set()

    def stopped(self):
        return self._stop_event.is_set()
```
In this code, you should call `stop()` on the thread when you want it to exit, and wait for the thread to exit properly using `join()`. The thread should check the stop flag at regular intervals.
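A minimal usage sketch of this pattern (the `Worker` subclass and the sleep interval are illustrative):

```python
import threading
import time

class StoppableThread(threading.Thread):
    """Thread with a stop() method; run() must poll stopped()."""
    def __init__(self, *args, **kwargs):
        super(StoppableThread, self).__init__(*args, **kwargs)
        self._stop_event = threading.Event()

    def stop(self):
        self._stop_event.set()

    def stopped(self):
        return self._stop_event.is_set()

class Worker(StoppableThread):
    def run(self):
        # Check the stop flag between units of work
        while not self.stopped():
            time.sleep(0.01)  # stand-in for one chunk of real work

t = Worker()
t.start()
t.stop()   # request exit...
t.join()   # ...and wait for the thread to honor it
print(t.is_alive())  # False
```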
There are cases, however, when you really need to kill a thread. An example is when you are wrapping an external library that is busy with long calls, and you want to interrupt it.
The following code allows (with some restrictions) to raise an Exception in a Python thread:
```
import ctypes
import inspect
import threading

def _async_raise(tid, exctype):
    '''Raises an exception in the thread with id tid'''
    if not inspect.isclass(exctype):
        raise TypeError("Only types can be raised (not instances)")
    res = ctypes.pythonapi.PyThreadState_SetAsyncExc(ctypes.c_long(tid),
                                                     ctypes.py_object(exctype))
    if res == 0:
        raise ValueError("invalid thread id")
    elif res != 1:
        # "if it returns a number greater than one, you're in trouble,
        # and you should call it again with exc=NULL to revert the effect"
        ctypes.pythonapi.PyThreadState_SetAsyncExc(ctypes.c_long(tid), None)
        raise SystemError("PyThreadState_SetAsyncExc failed")

class ThreadWithExc(threading.Thread):
    '''A thread class that supports raising an exception in the thread from
    another thread.
    '''
    def _get_my_tid(self):
        """Determines this (self's) thread id.

        CAREFUL: this function is executed in the context of the caller
        thread, to get the identity of the thread represented by this
        instance.
        """
        if not self.is_alive():  # Note: self.isAlive() on older versions of Python
            raise threading.ThreadError("the thread is not active")

        # do we have it cached?
        if hasattr(self, "_thread_id"):
            return self._thread_id

        # no, look for it in the _active dict
        for tid, tobj in threading._active.items():
            if tobj is self:
                self._thread_id = tid
                return tid

        # TODO: in python 2.6, there's a simpler way to do this: self.ident
        raise AssertionError("could not determine the thread's id")

    def raise_exc(self, exctype):
        """Raises the given exception type in the context of this thread.

        If the thread is busy in a system call (time.sleep(),
        socket.accept(), ...), the exception is simply ignored.

        If you are sure that your exception should terminate the thread,
        one way to ensure that it works is:

            t = ThreadWithExc( ... )
            ...
            t.raise_exc( SomeException )
            while t.is_alive():
                time.sleep( 0.1 )
                t.raise_exc( SomeException )

        If the exception is to be caught by the thread, you need a way to
        check that your thread has caught it.

        CAREFUL: this function is executed in the context of the
        caller thread, to raise an exception in the context of the
        thread represented by this instance.
        """
        _async_raise( self._get_my_tid(), exctype )
```
(Based on [Killable Threads](http://tomerfiliba.com/recipes/Thread2/) by Tomer Filiba. The quote about the return value of `PyThreadState_SetAsyncExc` appears to be from an [old version of Python](https://svn.python.org/projects/stackless/Python-2.4.3/dev/Python/pystate.c).)
As noted in the documentation, this is not a magic bullet because if the thread is busy outside the Python interpreter, it will not catch the interruption.
A good usage pattern of this code is to have the thread catch a specific exception and perform the cleanup. That way, you can interrupt a task and still have proper cleanup.
|
A [`multiprocessing.Process`](https://docs.python.org/library/multiprocessing.html#multiprocessing.Process) can [`p.terminate()`](https://docs.python.org/library/multiprocessing.html#multiprocessing.Process.terminate)
In the cases where I want to kill a thread, but do not want to use flags/locks/signals/semaphores/events/whatever, I promote the threads to full blown processes. For code that makes use of just a few threads the overhead is not that bad.
E.g., this comes in handy to easily terminate helper "threads" which execute blocking I/O.
The conversion is trivial: in the related code, replace all `threading.Thread` with `multiprocessing.Process` and all `queue.Queue` with `multiprocessing.Queue`, and add the required calls of `p.terminate()` to the parent process that wants to kill its child `p`.
See the [Python documentation for `multiprocessing`](https://docs.python.org/library/multiprocessing.html).
Example:
```
import multiprocessing

proc = multiprocessing.Process(target=your_proc_function, args=())
proc.start()

# Terminate the process
proc.terminate()  # sends a SIGTERM
proc.join()       # reap the terminated child
```
|
Is there any way to kill a Thread?
|
[
"",
"python",
"multithreading",
"python-multithreading",
"kill",
"terminate",
""
] |
I'm using .NET Regular Expressions to strip HTML code.
Using something like:
```
<title>(?<Title>[\w\W]+?)</title>[\w\W]+?<div class="article">(?<Text>[\w\W]+?)</div>
```
This works for 99% of the time, but sometimes, when parsing...
```
Regex.IsMatch(HTML, Pattern)
```
The parser just blocks and it will continue on this line of code for several minutes or indefinitely.
What's going on?
|
Your regex will work just fine when your HTML string actually contains HTML that fits the pattern. But when your HTML does not fit the pattern, e.g. if the last tag is missing, your regex will exhibit what I call "[catastrophic backtracking](http://www.regular-expressions.info/catastrophic.html)". Click that link and scroll down to the "Quickly Matching a Complete HTML File" section. It describes your problem exactly. `[\w\W]+?` is a complicated way of saying `.+?` with `RegexOptions.Singleline`.
|
With some effort, you can make regex work on html - however, have you looked at the [HTML agility pack](http://www.codeplex.com/htmlagilitypack)? This makes it much easier to work with html as a DOM, with support for xpath-type queries etc (i.e. "//div[@class='article']").
|
.NET Regular Expressions in Infinite Cycle
|
[
"",
"c#",
"vb.net",
"visual-studio",
"regex",
""
] |
Is it possible to do a simple `COUNT(*)` query in a PHP script while another PHP script is doing an `INSERT ... SELECT ...` query?
The situation is that I need to create a table with ~1M or more rows from another table, and while inserting, I do not want the user to feel the page is freezing, so I am trying to keep updating the count. But by issuing a `SELECT COUNT(*) FROM table` while the insert runs in the background, I get only 0 until the insert is completed.
So is there any way to ask MySQL to return partial results first? Or is there a fast way to do a series of inserts with data fetched from a previous select query while having about the same performance as an `INSERT ... SELECT ...` query?
The environment is PHP 4.3 and MySQL 4.1.
|
If you're doing a single INSERT...SELECT, then no, you won't be able to get intermediate results. In fact this would be a Bad Thing, as users should never see a database in an intermediate state showing only a partial result of a statement or transaction. For more information, read up on [ACID](http://en.wikipedia.org/wiki/ACID "ACID") compliance.
That said, the MyISAM engine may play fast and loose with this. I'm pretty sure I've seen MyISAM commit some but not all of the rows from an INSERT...SELECT when I've aborted it part of the way through. You haven't said which engine your table is using, though.
|
Without reducing performance? Not likely. With a little performance loss, maybe...
But why are you regularly creating tables and inserting millions of rows? If you do this only very seldom, can't you just warn the admin (presumably the only one allowed to do such a thing) that it takes a long time? If you're doing this *all the time*, are you really sure you're not doing it *wrong*?
|
Is it possible to do count(*) while doing insert...select... query in mysql/php?
|
[
"",
"php",
"mysql",
"insert",
""
] |
I am in the process of researching/comparing CXF and Spring-WS for web services? I need to function both as a provider and a consumer of WS. In a nutshell, I have been told that Spring-WS is more configurable, but CXF is easier to get up and running. This question is subjective, but will help direct me in my research.
* What experience do you have with either of these frameworks?
* Have you run into any pitfalls with either framework?
* Have you found any useful features provided by either that is possibly not provided by the other?
|
I think the biggest difference is Spring-WS is ***only*** 'contract-first' whilst I believe CXF is normally 'contract-last'.
<http://static.springsource.org/spring-ws/sites/1.5/reference/html/why-contract-first.html>
Contract-last starts with Java code, so it is usually easier to get started with.
However, the WSDL it creates tends to be more fragile.
|
About Apache CXF:
* CXF supports several standards including SOAP, the WSI Basic Profile, WSDL, WS-Addressing, WS-Policy, WS-ReliableMessaging, WS-Security, WS-SecurityPolicy, and WS-SecureConversation.
* Apache CXF offers both contract-last (starting with Java) and Contract-first (starting with the WSDL) approaches.
* Apache CXF implements JAX-WS and JAX-RS.
About Spring WS:
* Spring WS offers "only" contract-first, starting from an XSD Schema.
* Spring WS supports SOAP, WS-Security, WS-Addressing.
So, in the end, I see Spring WS as a **minimal** web services framework, but consider that it doesn't (in my opinion) have any advantages over Apache CXF (which integrates extremely well with Spring). Between the two, I'd pick Apache CXF.
|
Which framework is better CXF or Spring-WS?
|
[
"",
"java",
"web-services",
"cxf",
"spring-ws",
""
] |
How can you find the number of occurrences of a particular character in a string using sql?
Example: I want to find the number of times the letter ‘d’ appears in this string.
```
declare @string varchar(100)
select @string = 'sfdasadhfasjfdlsajflsadsadsdadsa'
```
|
Here you go:
```
declare @string varchar(100)
select @string = 'sfdasadhfasjfdlsajflsadsadsdadsa'
SELECT LEN(@string) - LEN(REPLACE(@string, 'd', '')) AS D_Count
```
|
If you want to make it a little more general, you should divide by the length of the thing you're looking for. Like this:
```
declare @searchstring varchar(10);
set @searchstring = 'Rob';

select original_string,
       (len(original_string) - len(replace(original_string, @searchstring, '')))
           / len(@searchstring) as occurrences
from someTable;
```
This is because each time you find 'Rob', you remove three characters. So when you remove six characters, you've found 'Rob' twice.
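The same length-difference trick is easy to sanity-check outside SQL; in Python, for example:

```python
s = 'sfdasadhfasjfdlsajflsadsadsdadsa'

# Single character: the length difference equals the number of occurrences
d_count = len(s) - len(s.replace('d', ''))
assert d_count == s.count('d')

# Multi-character needle: divide by the needle's length,
# since each hit removes len(needle) characters
needle = 'sa'
occurrences = (len(s) - len(s.replace(needle, ''))) // len(needle)
assert occurrences == s.count(needle)

print(d_count, occurrences)  # 7 5
```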
|
How can you find the number of occurrences of a particular character in a string using sql?
|
[
"",
"sql",
"string",
"search",
""
] |
Is there a way of getting a website's absolute URL (<http://www.domain.com/>) using Java? I've googled a bit, but I only come across solutions that require making 2 or 3 classes to create that function.
### Update:
The thing is, I am trying to create a crawler that will give me some information, and among that I'd like to get the URL of the webpage it's getting the information from. I'm developing this in Java, and what I meant to say was that I was wondering if there was some `getUrl();` or any method like that to get the URL, because I know it can be done, but I've only done it by writing a whole other class to retrieve the URL and then inheriting it and using it further. Hope that makes it clearer.
|
I'm assuming you just want the domain from a JSP, however you may find you need the entire URL including the prefix, domain, path and parameters. The easiest way to get this quickly is to use the Request object and build it. Have a look here for more info:
<http://www.exforsys.com/tutorials/jsp/jsp-request-object.html>
Here is Sun's API on the HttpServletRequest interface:
<http://java.sun.com/j2ee/sdk_1.3/techdocs/api/javax/servlet/http/HttpServletRequest.html>
|
The question is not really clear, but I'll make the assumption that you are trying to get the path from within a Servlet.
```
String realPath = getServletConfig().getServletContext().getRealPath(relativePath);
```
|
getUrl using Java
|
[
"",
"java",
"url",
"geturl",
""
] |
I have a form with a `<textarea>` and I want to capture any line breaks in that textarea on the server-side, and replace them with a `<br/>`.
Is that possible?
I tried setting `white-space:pre` on the `textarea`'s CSS, but it's still not enough.
|
Have a look at the [`nl2br()`](https://www.php.net/manual/en/function.nl2br.php) function. It should do exactly what you want.
|
The [`nl2br()`](http://php.net/nl2br) function exists to do exactly this:
However, this function adds `<br />` tags but does not actually remove the newlines. This usually isn't an issue, but if you want to completely strip them and catch carriage returns as well, you should use a [`str_replace`](http://php.net/str_replace) or [`preg_replace`](http://php.net/preg_replace).
I think `str_replace` would be slightly faster, but I have not benchmarked:
```
$val = str_replace( array("\r\n", "\r", "\n"), '<br />', $val );  // longest sequence first, so "\r\n" gives one <br />
```
or
```
$val = preg_replace( "#\r\n|\r|\n#", '<br />', $val );
```
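The order-sensitivity is easy to demonstrate; replacing the longest newline sequence first (sketched here in Python) keeps `\r\n` from producing two tags:

```python
val = "one\r\ntwo\rthree\nfour"

# Replace the longest newline sequence first so "\r\n" yields a single <br />
for nl in ("\r\n", "\r", "\n"):
    val = val.replace(nl, "<br />")
print(val)  # one<br />two<br />three<br />four

# Doing "\n" or "\r" first splits "\r\n" into two replacements:
bad = "one\r\ntwo"
for nl in ("\n", "\r"):
    bad = bad.replace(nl, "<br />")
print(bad)  # one<br /><br />two
```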
|
Capturing linebreaks (newline,linefeed) characters in a textarea
|
[
"",
"php",
"html",
"textarea",
"line-breaks",
""
] |
What are some *common*, *real world examples* of using the Builder Pattern? What does it buy you? Why not just use a Factory Pattern?
|
The key difference between a builder and a factory, IMHO, is that a builder is useful when you need to do lots of things to build an object. For example, imagine a DOM. You have to create plenty of nodes and attributes to get your final object. A factory is used when the factory can easily create the entire object within one method call.
One example of using a builder is building an XML document. I've used this model when building HTML fragments; for example, I might have a Builder for building a specific type of table, and it might have the following methods **(parameters are not shown)**:
```
BuildOrderHeaderRow()
BuildLineItemSubHeaderRow()
BuildOrderRow()
BuildLineItemSubRow()
```
This builder would then spit out the HTML for me. This is much easier to read than walking through a large procedural method.
Check out [Builder Pattern on Wikipedia](http://en.wikipedia.org/wiki/Builder_pattern).
|
Below are some reasons arguing for the use of the pattern and example code in Java, but it is an implementation of the Builder Pattern covered by the Gang of Four in *Design Patterns*. The reasons you would use it in Java are also applicable to other programming languages as well.
As Joshua Bloch states in [Effective Java, 2nd Edition](http://www.amazon.co.uk/Effective-Java-Second-Joshua-Bloch/dp/0321356683):
> The builder pattern is a good choice when designing classes whose constructors or static factories would have more than a handful of parameters.
We've all at some point encountered a class with a list of constructors where each addition adds a new option parameter:
```
Pizza(int size) { ... }
Pizza(int size, boolean cheese) { ... }
Pizza(int size, boolean cheese, boolean pepperoni) { ... }
Pizza(int size, boolean cheese, boolean pepperoni, boolean bacon) { ... }
```
**This is called the Telescoping Constructor Pattern.** The problem with this pattern is that once constructors are 4 or 5 parameters long it becomes **difficult to remember** the required **order of the parameters** as well as what particular constructor you might want in a given situation.
One **alternative** you have to the Telescoping Constructor Pattern is the **JavaBean Pattern** where you call a constructor with the mandatory parameters and then call any optional setters after:
```
Pizza pizza = new Pizza(12);
pizza.setCheese(true);
pizza.setPepperoni(true);
pizza.setBacon(true);
```
**The problem here is that because the object is created over several calls it may be in an inconsistent state partway through its construction.** This also requires a lot of extra effort to ensure thread safety.
**The better alternative is to use the Builder Pattern.**
```
public class Pizza {
    private int size;
    private boolean cheese;
    private boolean pepperoni;
    private boolean bacon;

    public static class Builder {
        // required
        private final int size;

        // optional
        private boolean cheese = false;
        private boolean pepperoni = false;
        private boolean bacon = false;

        public Builder(int size) {
            this.size = size;
        }

        public Builder cheese(boolean value) {
            cheese = value;
            return this;
        }

        public Builder pepperoni(boolean value) {
            pepperoni = value;
            return this;
        }

        public Builder bacon(boolean value) {
            bacon = value;
            return this;
        }

        public Pizza build() {
            return new Pizza(this);
        }
    }

    private Pizza(Builder builder) {
        size = builder.size;
        cheese = builder.cheese;
        pepperoni = builder.pepperoni;
        bacon = builder.bacon;
    }
}
```
Note that **Pizza is immutable and that parameter values are all in a single location**. Because the Builder's setter methods return the Builder object they are **able to be chained**.
```
Pizza pizza = new Pizza.Builder(12)
        .cheese(true)
        .pepperoni(true)
        .bacon(true)
        .build();
```
**This results in code that is easy to write and very easy to read and understand.** In this example, the **build method could be modified** to check parameters after they have been copied from the builder to the Pizza object and **throw an IllegalStateException if an invalid parameter value has been supplied.** This pattern is flexible and it is easy to add more parameters to it in the future. It is really only useful if you are going to have more than 4 or 5 parameters for a constructor. That said, it might be worthwhile in the first place **if you suspect you may be adding more parameters in the future.**
I have borrowed heavily on this topic from the book *Effective Java, 2nd Edition* by Joshua Bloch. To learn more about this pattern and other effective Java practices **I highly recommend it.**
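For comparison, the same fluent-builder shape sketched in Python (illustrative only; in Python, keyword arguments often make a full builder unnecessary):

```python
class Pizza:
    def __init__(self, size, cheese=False, pepperoni=False, bacon=False):
        self.size = size
        self.cheese = cheese
        self.pepperoni = pepperoni
        self.bacon = bacon

class PizzaBuilder:
    def __init__(self, size):              # size is the one required parameter
        self._size = size
        self._cheese = self._pepperoni = self._bacon = False

    def cheese(self, value=True):
        self._cheese = value
        return self                        # returning self enables chaining

    def pepperoni(self, value=True):
        self._pepperoni = value
        return self

    def bacon(self, value=True):
        self._bacon = value
        return self

    def build(self):
        return Pizza(self._size, self._cheese, self._pepperoni, self._bacon)

pizza = PizzaBuilder(12).cheese().pepperoni().bacon().build()
print(pizza.size, pizza.cheese)  # 12 True
```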
|
When would you use the Builder Pattern?
|
[
"",
"java",
"design-patterns",
"builder",
""
] |
When coding, what is a good rule of thumb to keep in mind with respect to performance? There are endless ways to optimize for a specific platform and compiler, but I'm looking for answers that apply equally well (or almost) across compilers and platforms.
|
A famous quote comes to mind:
> "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil." (Knuth, Donald. Structured Programming with go to Statements, ACM Journal Computing Surveys, Vol 6, No. 4, Dec. 1974. p. 268.)
But maybe you should not pass large data structures by value anyway... :-)
Edit: And maybe also avoid *O*(N^2) or more complex algorithms...
|
The #1 performance tip is to profile your code early and often. There are a lot of general "don't do this" tips, but it's really hard to guarantee they will impact the performance of your application. Why? Every application is different. It's easy to say that passing a vector by value is bad if you have a lot of elements, but does your program even use a vector (it probably should, but...)?
Profiling is the only way to understand the performance of your application. I've been in way too many situations where people "optimized" the code but never profiled. The "optimizations" turned out to introduce many bugs and not even be in a hot spot in the code path. A waste of everyone's time.
EDIT:
A couple of people have commented on the "early" part of my answer. I don't think you should be profiling from day 1. However, you should not wait until a month before ship either.
I usually first profile once I have a couple of definitive end-to-end scenarios, or in a larger project, a mostly functional component. I take a day or two (usually working with QA) to put together some large scenarios and throw them at the code. This is a great spot check to find obvious performance problems early. Fixing them at this point is a bit easier.
On a typical project I find that I have code meeting these criteria 30%-40% of the way through the project (100% being in customers' hands). I loosely classify this time as early.
|
C++ performance tips and rules of thumb anyone?
|
[
"",
"c++",
"performance",
"cross-platform",
""
] |
I have the following very simple Javascript-compatible regular expression:
```
<script type="text/javascript" id="(.+)" src="([^"]+)">
```
I am trying to match on script tags and gather both the ID and src attributes. I'd like to make the order of the attributes irrelevant, so that the following will still match:
```
<script id="..." type="text/javascript" src="...">
<script src="..." id="..." type="text/javascript">
<script id="..." src="..." type="text/javascript">
```
Is it possible to allow the attributes to appear in any order without compromising its ability to collect the matching ID and src?
*edit* The string to match on is coming from innerHTML, making DOM traversal impossible. Also, I cannot use any third party libraries for this specific application.
|
Disclaimer: Be careful with regular expressions and HTML source code. They are brittle and therefore easily broken or circumvented; you should not even think of using them to validate user input.
If you are sure about the source data and know it conforms to the rules of well-formed HTML, you can use this:
```
var html = "variable/property holding your html source";
var re_script = /<script\s.+?>/ig;
var re_id = /id="(.*?)"/i;
var re_src = /src="(.*?)"/i;
var scriptTag = null;
while (scriptTag = re_script.exec(html))
{
var matchId = re_id.exec(scriptTag);
var matchSrc = re_src.exec(scriptTag);
if (matchId && matchSrc)
{
var scriptId = matchId[1];
var scriptSrc = matchSrc[1];
alert('Found script ID="' + scriptId + '", SRC="' + scriptSrc + '"');
}
}
```
Basically, this is what jQuery's `$("script").each()` would do, just without the jQuery and without needing the DOM.
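The same two-pass idea (match each tag, then probe it for the attributes in any order) ports to other regex engines; a Python sketch with made-up input:

```python
import re

html = '''<script id="a" type="text/javascript" src="one.js"></script>
<script src="two.js" id="b" type="text/javascript"></script>'''

# Pass 1: find each script tag; pass 2: probe it for attributes in any order
re_script = re.compile(r'<script\s[^>]+>', re.I)
re_id = re.compile(r'id="(.*?)"', re.I)
re_src = re.compile(r'src="(.*?)"', re.I)

found = []
for tag in re_script.findall(html):
    m_id, m_src = re_id.search(tag), re_src.search(tag)
    if m_id and m_src:
        found.append((m_id.group(1), m_src.group(1)))

print(found)  # [('a', 'one.js'), ('b', 'two.js')]
```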
|
That sounds like a nasty regex. IMO, you might be better off using XPath to query the DOM. Or, you could also use the jQuery JavaScript library to select the elements you need.
|
How to write regex so that patterns can appear in any order?
|
[
"",
"javascript",
"regex",
""
] |
I want to, from JavaScript, access as a variable the file that is loaded as an image in an img tag.
## I don't want to access its name, but the actual data.
The reason for this is that I want to be able to copy it to and from variables so that I can, among other things, change the image without reloading it.
Can this be done? If so, how?
Note: I'm slightly more interested in *reading* the data than writing it.
|
```
// Download the image data using AJAX, I'm using jQuery
var imageData = $.ajax({ url: "MyImage.gif", async: false }).responseText;
// Image data updating magic
imageDataChanged = ChangeImage(imageData);
// Encode to base64, maybe try the webtoolkit.base64.js library
imageDataEncoded = Base64Encode(imageDataChanged);
// Write image data out to browser (FF seems to support this)
document.write('<img src="data:image/gif;base64,' + imageDataEncoded + '">');
```
|
If you are using Firefox (and I think Opera and maybe Safari; I can't check right now), you can draw the image on a canvas element and use getImageData.
It would work kind of like this:
```
var img = document.getElementById("img_id");
var canvas = document.getElementById("canvas_id");
var context = canvas.getContext("2d");
context.drawImage(img, 0, 0);  // copy the image onto the canvas first
var imageData = context.getImageData(0, 0, canvas.width, canvas.height);
// Now imageData is an object with width, height, and data properties.
// imageData.data is an array of pixel values (4 values per pixel in RGBA order)

// Change the top left pixel to red
imageData.data[0] = 255; // red
imageData.data[1] = 0;   // green
imageData.data[2] = 0;   // blue
imageData.data[3] = 255; // alpha

// Update the canvas
context.putImageData(imageData, 0, 0);
```
Once you get the image data, you can calculate the starting index for each pixel:
```
var index = (y * imageData.width + x) * 4;
```
and add the offset for each channel (0 for red, 1 for green, 2 for blue, 3 for alpha)
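That index arithmetic can be sketched as a small helper (Python here, just to illustrate the arithmetic; the layout is the same flat RGBA array described above):

```python
def pixel_index(x, y, width):
    """Starting index of pixel (x, y) in a flat RGBA array (4 values per pixel)."""
    return (y * width + x) * 4

# Channel offsets within one pixel: 0 = red, 1 = green, 2 = blue, 3 = alpha
width = 100
assert pixel_index(0, 0, width) == 0    # top-left pixel, red channel
assert pixel_index(1, 0, width) == 4    # one pixel to the right
assert pixel_index(0, 1, width) == 400  # first pixel of the second row
print(pixel_index(2, 3, width) + 2)     # blue channel of pixel (2, 3): 1210
```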
|
Can I access the datafile of an img tag from JavaScript
|
[
"",
"javascript",
"html",
"image",
""
] |
I'm aware of the Substance look and feels and that they have a Office 2007 look-a-like look and feel. But this look and feel doesn't look like the Office 2007 design at all, the colors are a lot different for example.
Are there other look and feels which mimic the Office 2007 more accurately?
|
Look!
<http://www.pushing-pixels.org/?p=1010>
|
I suggest looking at the JIDE Software components at <http://www.jidesoft.com/>.
|
Is there any (real) Office 2007 look and feel for Java/Swing?
|
[
"",
"java",
"swing",
"office-2007",
"look-and-feel",
""
] |
I am having some difficulty writing a function that will search through a directory for a file that matches a specific regular expression (which I have compiled using 're.compile'). So my question is: How do I search through a directory (I plan to use os.walk) for a file that matches a specific regular expression? An example would be very much appreciated. Thanks in advance.
|
This will find all files starting with two digits and ending in gif, you can add the files into a global list, if you wish:
```
import re
import os

r = re.compile(r'\d{2}.+gif$')
for root, dirs, files in os.walk('/home/vinko'):
    l = [os.path.join(root, x) for x in files if r.match(x)]
    if l:
        print(l)  # Or append to a global list, whatever
```
|
1. Read about the [RE](http://www.python.org/doc/2.5.2/lib/module-re.html) pattern's `match` method.
2. Read all answers to [How do I copy files with specific file extension to a folder in my python (version 2.5) script](https://stackoverflow.com/questions/296173/how-do-i-copy-files-with-specific-file-extension-to-a-folder-in-my-python-versi)?
3. Pick one that uses `fnmatch`. Replace `fnmatch` with `re.match`. This requires careful thought. It's not a cut-and-paste.
4. Then, ask specific questions.
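A sketch of step 3 (hypothetical helper; note that `re.match` anchors at the start of the filename, unlike a full `fnmatch` glob):

```python
import os
import re
import tempfile

def find_matching(top, pattern):
    """Yield paths under `top` whose basename matches the regex pattern."""
    rx = re.compile(pattern)
    for root, dirs, files in os.walk(top):
        for name in files:
            if rx.match(name):
                yield os.path.join(root, name)

# Illustrative check against a throwaway directory
with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, "a.py"), "w").close()
    open(os.path.join(d, "b.txt"), "w").close()
    hits = [os.path.basename(p) for p in find_matching(d, r".*\.py$")]

print(hits)  # ['a.py']
```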
|
How do I search through a folder for the filename that matches a regular expression using Python?
|
[
"",
"python",
"regex",
""
] |
I am currently faced with a difficult sorting problem. I have a collection of events that need to be sorted against each other (a [comparison sort](http://en.wikipedia.org/wiki/Comparison_sort)) and against their relative position in the list.
In the simplest terms I have list of events that each have a priority (integer), a duration (seconds), and an earliest occurrence time that the event can appear in the list. I need to sort the events based on priority, but no event can appear in the list before its earliest occurrence time. Here's an example to (hopefully) make it clearer:
```
// Pseudo C# code
class Event { int priority; double duration; double earliestTime; }
void Example()
{
Event a = new Event { priority = 1, duration = 4.0, earliestTime = 0.0 };
Event b = new Event { priority = 2, duration = 5.0, earliestTime = 6.0 };
Event c = new Event { priority = 3, duration = 3.0, earliestTime = 0.0 };
Event d = new Event { priority = 4, duration = 2.0, earliestTime = 0.0 };
// assume list starts at 0.0 seconds
List<Event> results = Sort( new List<Event> { a, b, c, d } );
assert( results[ 0 ] == a ); // 4.0 seconds elapsed
assert( results[ 1 ] == c ); // 7.0 seconds elapsed
assert( results[ 2 ] == b ); // 12.0 seconds elapsed
assert( results[ 3 ] == d ); // 14.0 seconds elapsed
}
```
Item "b" has to come last because it isn't allowed to start until 6.0 seconds into the list, so it is deferred and "c" gets to go before "b" even though its priority is lower. (Hopefully the above explains my problem, if not let me know and I'll edit it.)
My current idea is to use an [insertion sort](http://en.wikipedia.org/wiki/Insertion_sort) to manage the sorting process. Unlike many of the other common sorting algorithms, insertion sort decides the order of the list one at a time and in order. So for each index I should be able to find the next lowest priority event whose earliest occurrence time will be satisfied.
I'm hoping to find resources about sorting algorithms and data structures to help me design a good solution for this "sort" of problem. My real problem is actually more complex than this: hierarchical sorting, variable buffers between events, multiple non-constant time constraints, so the more information or ideas the better. Speed and space are not really a concern. Accuracy in sorting and maintainability of the code are a concern.
**Edit:** Clarifications (based on comments)
* Events consume their entire duration (that is there is no overlap of events allowed)
* Events **must** occur at or after their earliestTime, they cannot occur before their earliestTime.
* Events can occur later than their earliestTime if lower priority events exist
* Events cannot be interrupted
* There is a maximum duration that the sum of all events in a list may fill. This is not shown above. (In reality the total duration of all events will be far greater than the list's maximum duration.)
* There cannot be any gaps. (There are no holes to try and back fill.)
**Edit:** Answer
While David Nehme gave the answer I selected, I wanted to point out that his answer is an insertion sort at heart, and several other people provided insertion-sort-type answers. This confirms for me that a specialized insertion sort is probably the way to go. Thanks to all of you for your answers.
|
This is actually more than a sorting problem. It's a single-machine scheduling problem with release dates. Depending on what you are trying to do, the problem might be NP-hard. For example, if you are trying to minimize the weighted sum of the completion times (the weight being inversely proportional to the priority), then the problem is [categorized](http://www.lix.polytechnique.fr/~durr/query/search.php?a1=1&a2=%3B&a4=%3B&a3=&b1=r_i&b3=&b7=&b4=&b5=&b6=&b8=&c=sum+w_iC_i&problem=1|r_i|sum+w_iC_i) as
```
1 | r_i; pmtn | Σ w_i C_i
```
and is NP-hard. There are numerous [papers](http://www-math.mit.edu/~goemans/GoemansQSSW-2002-SingleMachineSchedulingWithReleaseDates.pdf) on this topic, but they might be more than what you need.
In your case, you never want a solution with gaps, so all you might need is a simple discrete-event simulation, which runs in O(n log n) time. You need to store `released_jobs` in a priority queue.
```
unreleased_jobs = jobs  // sorted list of jobs, by release date
released_jobs = {}      // priority queue of jobs, by priority
scheduled_jobs = {}     // simple list
t = 0
while (!unreleased_jobs.empty() || !released_jobs.empty()) {
  // release every job whose earliest time has passed
  while (!unreleased_jobs.empty() && unreleased_jobs.top().earliestTime <= t) {
    released_jobs.push(unreleased_jobs.pop())
  }
  if (!released_jobs.empty()) {
    next_job = released_jobs.pop()
    scheduled_jobs.push_back(next_job)
    t = t + next_job.duration
  } else {
    // we have a gap
    t = unreleased_jobs.top().earliestTime
  }
}
```
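A runnable translation of the sketch above (names and tuple layout are illustrative); run on the question's four events, it reproduces the expected order:

```python
import heapq

# Jobs from the example: name -> (priority, duration, earliest_time)
jobs = {
    "a": (1, 4.0, 0.0),
    "b": (2, 5.0, 6.0),
    "c": (3, 3.0, 0.0),
    "d": (4, 2.0, 0.0),
}

def schedule(jobs):
    # unreleased: sorted by release date (earliest_time)
    unreleased = sorted(jobs.items(), key=lambda kv: kv[1][2])
    released = []          # heap ordered by priority (lower number = higher priority)
    order, t = [], 0.0
    while unreleased or released:
        # release every job whose earliest_time has passed
        while unreleased and unreleased[0][1][2] <= t:
            name, (prio, dur, _) = unreleased.pop(0)
            heapq.heappush(released, (prio, name, dur))
        if released:
            prio, name, dur = heapq.heappop(released)
            order.append(name)
            t += dur
        else:
            t = unreleased[0][1][2]  # nothing runnable: jump over the gap
    return order

print(schedule(jobs))  # ['a', 'c', 'b', 'd']
```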
One problem is that you might have a low-priority job with a release time just before a short, high-priority job, but it will produce a schedule with the property that there are no gaps (if a schedule with no gaps is possible).
|
I think:
1. Sort tasks by priority
2. Fit tasks into a time-line, taking the first available slot after their earliestTime that has a hole big enough for the task.
Convert the time-line into a list of tasks and waits (for the gaps).
Questions:
1. Are gaps allowed?
2. Can tasks be split?
3. Given the tasks as in the question: is it better to delay b to complete c, or do d so that b can start on time?
Edit:
So the answers to my questions are:
1. No (ish - if there is nothing to run I guess we could have a gap)
2. No
3. Still not clear, but I guess the example suggests run c and delay b.
In this case the algorithm might be:
1. Sort by priority
2. Keep a counter for the current 'time' starting with t=0
3. Search through the sorted list for the highest-priority item that can be started at t.
4. Add the item to the running order, and add its duration to t.
5. Repeat 3 & 4 until the list is exhausted. If there are no tasks runnable at t, and there are tasks remaining pending, stick a 1-second sleep task in the running order.
This algorithm is also O(n^2).
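The five steps above can be sketched in Python like this (the task tuple layout `(priority, earliest_time, duration, name)` is assumed for illustration; lower priority number means more important):

```python
def running_order(tasks):
    pending = sorted(tasks)            # step 1: sort by priority
    order = []
    t = 0                              # step 2: current time counter
    while pending:
        for i, (prio, earliest, dur, name) in enumerate(pending):
            if earliest <= t:          # step 3: highest-priority runnable task
                order.append(name)     # step 4: schedule it, advance the clock
                t += dur
                del pending[i]
                break
        else:
            order.append("sleep")      # step 5: nothing runnable, sleep 1 second
            t += 1
    return order

# "b" runs first (only runnable task), then a 1s sleep until "a" is released.
print(running_order([(1, 0, 2, "b"), (0, 3, 1, "a")]))
```

The inner scan over `pending` inside the outer loop is what makes this O(n^2).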
|
Sorting algorithm for a non-comparison based sort problem?
|
[
"",
"c#",
"algorithm",
"sorting",
"comparison",
"mathematical-optimization",
""
] |
What is the standard encoding of C++ source code? Does the C++ standard even say something about this? Can I write C++ source in Unicode?
For example, can I use non-ASCII characters such as Chinese characters in comments? If so, is full Unicode allowed or just a subset of Unicode? (e.g., that 16-bit first page or whatever it's called.)
Furthermore, can I use Unicode for strings? For example:
```
std::wstring str = L"Strange chars: â Țđ ě €€";
```
|
Encoding in C++ is quite a bit complicated. Here is my understanding of it.
Every implementation has to support characters from the *basic source character set*. These include common characters listed in §2.2/1 (§2.3/1 in C++11). These characters should all fit into one `char`. In addition, implementations have to support a way to name other characters, called *universal-character-names*, which look like `\uffff` or `\Uffffffff` and can be used to refer to Unicode characters. A subset of them are usable in identifiers (listed in Annex E).
This is all nice, but the mapping from characters in the file, to source characters (used at compile time) is implementation defined. This constitutes the encoding used. Here is what it says literally (C++98 version):
> Physical source file characters are
> mapped, in an implementation-defined
> manner, to the basic source character
> set (introducing new-line characters
> for end-of-line indicators) if
> necessary. Trigraph sequences (2.3)
> are replaced by corresponding
> single-character internal
> representations. Any source file
> character not in the basic source
> character set (2.2) is replaced by the
> universal-character-name that des-
> ignates that character. (An
> implementation may use any internal
> encoding, so long as an actual
> extended character encountered in the
> source file, and the same extended
> character expressed in the source file
> as a universal-character-name (i.e.
> using the \uXXXX notation), are
> handled equivalently.)
For gcc, you can change it using the option `-finput-charset=charset`. Additionally, you can change the execution character set used to represent values at runtime. The proper option for this is `-fexec-charset=charset` for `char` (it defaults to `utf-8`) and `-fwide-exec-charset=charset` (which defaults to either `utf-16` or `utf-32`, depending on the size of `wchar_t`).
|
In addition to litb's post, MSVC++ supports Unicode too. I understand it gets the Unicode encoding from the BOM. It definitely supports code like `int (*♫)();` or `const std::set<int> ∅;`
If you're really into code obfuscation:
```
typedef void ‼; // Also known as \u203C
class ooɟ {
operator ‼() {}
};
```
|
Using Unicode in C++ source code
|
[
"",
"c++",
"unicode",
"character-encoding",
"standards",
""
] |
Most of my PHP apps have an ob\_start at the beginning, runs through all the code, and then outputs the content, sometimes with some modifications, after everything is done.
```
ob_start();
// Business logic, etc.
$header->output();
echo apply_post_filter(ob_get_clean());
$footer->output();
```
This ensures that PHP errors get displayed within the content part of the website, and that errors don't interfere with `header` and `session_*` calls.
My only problem is that with some large pages PHP runs out of memory. How do I stop this from happening?
Some ideas:
1. Write all of the buffered content to a temporary file and output that.
2. When the buffer reaches a certain size, output it. Although this might interfere with the post filter.
3. Raise the memory limit (thanks @troelskn).

What are the drawbacks of each of these approaches? Especially raising the memory limit?
|
Can't you raise the memory limit? Sounds like the best solution to me.
Edit: Obviously, raising the memory limit just because a script tops out should raise some red flags, but it sounds to me like this is a legitimate case - eg. the script is actually producing rather large chunks of output. As such, you have to store the output *somewhere*, and memory seems to be the best pick, for both performance and convenience reasons.
I should also note that the memory limit setting is just that - a limit. Scripts that don't consume much memory won't consume more just because you raise the limit. The main reason for its existence is to prevent misbehaving/buggy scripts from taking down the entire server. This is important if you have a lot of amateurs hacking away on a shared host (something PHP has been used a lot for). So if this is your own server, or at least you generally know what you're doing, there isn't really any benefit to having a low memory limit.
|
You should raise the memory limit before anything, especially if your only other solution is to go through a temporary file.
There are all kinds of downsides to using temporary files (mainly, they're slower), and if you really need a way to store the buffer before outputting it, go look for [memcached](https://www.php.net/manual/en/book.memcache.php) or [APC cache](https://www.php.net/manual/en/book.apc.php). This would let you do roughly the same as a file, except you have the fast access of RAM.
I must say this is a terrible idea overall, though. If the buffer currently doesn't work right, there's likely something you could build differently in order to make your site work better.
|
How to stop a PHP output buffer from going over the memory limit?
|
[
"",
"php",
"memory",
""
] |
In my master pages I have `<form ... action="" ...>`. In pre-SP1, if I viewed the source, the action attribute would be an empty string. In SP1 the action attribute is overridden with "MyPage.aspx?MyParams". Unfortunately, this causes my postbacks to fail, as I have additional path info in the URL (i.e. MyPage.aspx/CustomerData?MyParams). I have checked the action attribute in the OnLoad event and it is still blank at this time, so somewhere SP1 is overriding it :(.
Sorry, I just realized that part of my post was missing since I did not use the markdown correctly.
|
Great solution from MrJavaGuy, but there is a typo in the code because pasting code in the box here doesn't always work correctly. There was a duplication of the WriteAttribute method; the corrected code is as follows -
```
public class HtmlFormAdapter : ControlAdapter
{
protected override void Render(HtmlTextWriter writer)
{
HtmlForm form = this.Control as HtmlForm;
if (form == null)
{
throw new InvalidOperationException("Can only use HtmlFormAdapter as an adapter for an HtmlForm control");
}
base.Render(new CustomActionTextWriter(writer));
}
public class CustomActionTextWriter : HtmlTextWriter
{
public CustomActionTextWriter(HtmlTextWriter writer) : base(writer)
{
this.InnerWriter = writer.InnerWriter;
}
public override void WriteAttribute(string name, string value, bool fEncode)
{
if (name == "action")
{
value = "";
}
base.WriteAttribute(name, value, fEncode);
}
}
}
```
|
Maybe you can find the solution here in [this ASP.NET Forum post](http://forums.asp.net/t/1305800.aspx) (Known Issues / Breaking Changes for ASP.NET in .NET 3.5 Service Pack 1).
## Issue
The HtmlForm action attribute is now honored when defined in declarative markup.
## Reason
3.5 SP1 added a settable Action property to the HtmlForm type. This new feature makes it much easier for developers to explicitly set the form’s action attribute for scenarios where a developer wants to use a different Url than the normal postback-generated Url. However this change also means that if the action attribute has been set in an .aspx page’s declarative markup, ASP.NET will use the setting from the markup when rendering a `<form />` element.
## Workaround
Previous versions of ASP.NET always ignored the action attribute if it was present in the declarative markup for a `<form />` element. Developers should remove the action attribute from their declarative markup to return to the original behavior where ASP.NET renders the postback Url.
## Example
Before (the action attribute was ignored by ASP.NET as dead code):
```
<form name="form1" method="post" runat="server" action="test.aspx"></form>
```
3.5 SP1 (remove the action attribute to have ASP.NET render the postback Url):
```
<form name="form1" method="post" runat="server"></form>
```
|
How to prevent ASP.NET 3.5 SP1 from overriding my action?
|
[
"",
"c#",
".net",
"asp.net",
".net-3.5",
""
] |
What is the best way to change the height and width of an ASP.NET control from a client-side Javascript function?
Thanks,
Jeff
|
Because of the name mangling introduced by ASP.NET, I use the function at the bottom to find ASP controls. Once you have the control, you can set the height/width as needed.
```
example usage:
<input type='button' value='Expand' onclick='setSize("myDiv", 500, 500);' />
...
function setSize(ctlName, height, width ) {
var ctl = asp$( ctlName, 'div' );
if (ctl) {
ctl.style.height = height + 'px';
ctl.style.width = width + 'px';
}
}
function asp$( id, tagName ) {
var idRegexp = new RegExp( id + '$', 'i' );
var tags = new Array();
if (tagName) {
tags = document.getElementsByTagName( tagName );
}
else {
tags = document.getElementsByName( id );
}
var control = null;
for (var i = 0; i < tags.length; ++i) {
var ctl = tags[i];
if (idRegexp.test(ctl.id)) {
control = ctl;
break;
}
}
if (control) {
return $(control.id);
}
else {
return null;
}
}
```
|
You can use the controls .ClientID and some javascript and change it that way.
You can do it through CSS height/width or on some controls directly on the control itself.
|
Can you change the height/width of an ASP.NET control from a Javascript function?
|
[
"",
"asp.net",
"javascript",
""
] |
I've written an image processing script in php which is run as a cron scheduled task (in OSX). To avoid overloading the system, the script checks the system load (using 'uptime') and only runs when load is below a predefined threshold.
I've now ported this over to Windows (2003 Server) and am looking for a similar command line function to report system load as an integer or float.
|
You can try `wmic CPU` at a Windows command line. This works in XP and gives you a lot of other info too.
|
## Don't use load...
The system load is not a good indicator in this case. On Unix it essentially tells you, how many processes are ready and waiting to be executed at the moment. But since there are numerous reasons for why a process might have to wait, your script may actually be able to run without costing another process any performance, even if the "load" on the system is high.
## ... use nice
You should use `nice(1)` on Unix and the equivalent on Windows ("Process Priority"?), so the OS can decide, when to run your script!
If you set the priority of your script to the absolute lowest possible priority then it will only be executed by the scheduler when there is nothing else to do at the moment.
Probably you will have to implement some kind of mechanism to prevent more than one instance of your script from running at the same time, in case it takes longer to execute than the interval between two invocations.
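To illustrate the Unix side of this (shown in Python rather than PHP; `os.nice` is POSIX-only and not available on Windows), a script can lower its own priority at startup so the scheduler only favours it when the system is idle:

```python
import os

# POSIX only: make this process "nicer" so the OS scheduler deprioritises it.
if hasattr(os, "nice"):
    before = os.nice(0)   # an increment of 0 just reads the current niceness
    after = os.nice(5)    # be 5 steps nicer (i.e. lower priority, max is 19)
    print("niceness raised from", before, "to", after)
```

Raising one's own niceness never requires privileges; lowering it again (a negative increment) generally does.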
|
Putting a value on the load of a windows system
|
[
"",
"php",
"windows",
"macos",
"load",
"uptime",
""
] |
I've seen some posts where jQuery has been favored vs ExtJS. I haven't looked at jQuery in detail, but from what I read so far, jQuery doesn't provide the kind of UI which comes with ExtJS. Am I correct? Why would some of you prefer jQuery in ASP.NET?
Thanks
|
There are two schools of JavaScript frameworks: ones that focus on widgets (YUI, Ext, etc.) and ones that focus on behavior (jQuery, Prototype, MooTools, etc.).

jQuery just makes it easier to build dynamic, sexy sites. If you are just doing system.draggy.droppy ASP development, you can ignore both, since you probably aren't really touching JavaScript at all. But if you do use JavaScript, it is worth your time to learn one of the frameworks that are out there, and jQuery is currently the most popular.
|
Why not use both? ExtJS does allow you to use jQuery as well. In fact, you can easily configure ExtJS to use jQuery for its core functionality. I've done this before and it works quite well.
This way you can happily use the best of both worlds.
<http://extjs.com/forum/showthread.php?t=29702&highlight=jquery>
|
ExtJS and jQuery in ASP.NET
|
[
"",
"javascript",
"jquery",
"asp.net",
"user-interface",
"extjs",
""
] |
Assuming a largish template library with around 100 files containing around 100 templates with overall more than 200,000 lines of code. Some of the templates use multiple inheritance to make the usage of the library itself rather simple (i.e. inherit from some base templates and only having to implement certain business rules).
All that exists (grown over several years), "works" and is used for projects.
However, compilation of projects using that library consumes a growing amount of time and it takes quite some time to locate the source for certain bugs. Fixing often causes unexpected side effects or is quite difficult, because some interdependent templates need changing. Testing is nearly impossible due to the sheer amount of functions.
Now, I would really like to simplify the architecture to use less templates and more specialized smaller classes.
Is there any proven way to go about that task? What would be a good place to start?
|
I'm not sure I see how/why templates are the problem, and why plain non-templated classes would be an improvement. Wouldn't that just mean even *more* classes, less type safety, and so a larger potential for bugs?
I can understand simplifying the architecture, refactoring and removing dependencies between the various classes and templates, but automatically assuming that "fewer templates will make the architecture better" is flawed imo.
I'd say that templates *potentially* allow you to build a much cleaner architecture than you'd get without them, simply because you can make separate classes *totally* independent. Without templates, functions which call into another class must know about that class, or an interface it inherits, in advance. With templates, this coupling isn't necessary.
Removing templates would only lead to *more* dependencies, not fewer.
The added type-safety of templates can be used to detect a lot of bugs at compile-time (Sprinkle your code liberally with static\_assert's for this purpose)
Of course, the added compile-time may be a valid reason to avoid templates in some cases, and if you only have a bunch of Java programmers, who are used to thinking in "traditional" OOP terms, templates might confuse them, which can be another valid reason to avoid templates.
But from an architecture point of view, I think avoiding templates is a step in the wrong direction.
Refactor the application, sure, it sounds like that's needed. But don't throw away one of the most useful tools for producing extensible and robust code just because the original version of the app misused it. Especially if you're already concerned with the amount of code, removing templates will most likely lead to *more* lines of code.
|
You need automated tests; that way, in ten years' time when your successor has the same problem, he can refactor the code (probably to add more templates, because he thinks it will simplify usage of the library) and know it still meets all test cases. Similarly, the side effects of any minor bug fixes will be immediately visible (assuming your test cases are good).

Other than that: "divide and conquer".
|
How best to switch from template mess to clean classes architecture (C++)?
|
[
"",
"c++",
"templates",
"simplify",
""
] |
1. In a simple WinForms application, I call a function that endlessly creates files on a button click event. I add Application.DoEvents() to the loop.
2. I press the red X to close the form.
3. The form closes, but files continue to be created...

I think it's running on the button's thread, but shouldn't it be a background one? Trying to change `Thread.CurrentThread.IsBackground` to `true` in the loop function does not help.

Ideas?
|
The fact that you're using `Application.DoEvents` is the first sign of a problem: it shows that you're doing too much in the UI thread. It's almost never appropriate in a well-structured program. The UI thread is not meant to have any long-running tasks. (Admittedly if it takes a long time to draw your UI you have little choice - but that suggests you should simplify your UI... and it's not applicable in this case, I suspect.)
Instead, you should be performing the long-running task (creating the files) in a separate thread. [BackgroundWorker](http://msdn.microsoft.com/en-us/library/system.componentmodel.backgroundworker.aspx) is a perfect fit for this - you can use it to report progress back to the UI, and the UI can call [`CancelAsync`](http://msdn.microsoft.com/en-us/library/system.componentmodel.backgroundworker.cancelasync.aspx) method to request that it stops. You need to check the [`CancellationPending`](http://msdn.microsoft.com/en-us/library/system.componentmodel.backgroundworker.cancellationpending.aspx) property from within the worker thread, to see whether cancellation has been requested, and stop appropriately.
EDIT: Just to clarify what I believe is happening - I suspect your form is closing, but the program won't terminate until the event loop has finished. You're keeping the event loop going with your file-creation loop, hence the problem.
Note that there isn't a thread for the button - there's just one for your whole UI. (In certain cases you may need more than one UI thread, but that's rare - and you'd know it if you'd done it.)
|
Add this form-level variable to your form's code:
```
private bool _StillOpen = true;
```
Then wrap the endless-loop code in your button click like this:
```
while (_StillOpen)
{
// do whatever your method does
Application.DoEvents();
}
```
Finally, add this code to your form's FormClosing event:
```
_StillOpen = false;
```
This will allow your form to close when you click the close button. Ideally you would want something like this to execute on a background thread, but this flag approach may be a quick fix to your current problem.
|
stopping a function executed on a winform button click
|
[
"",
"c#",
"winforms",
"multithreading",
""
] |
Is it possible for a JPA entity class to contain two embedded (`@Embedded`) fields? An example would be:
```
@Entity
public class Person {
@Embedded
public Address home;
@Embedded
public Address work;
}
public class Address {
public String street;
...
}
```
In this case a `Person` can contain two `Address` instances - home and work. I'm using JPA with Hibernate's implementation. When I generate the schema using Hibernate Tools, it only embeds one `Address`. What I'd like is two embedded `Address` instances, each with its column names distinguished or pre-pended with some prefix (such as home and work). I know of `@AttributeOverrides`, but this requires that each attribute be individually overridden. This can get cumbersome if the embedded object (`Address`) gets big as each column needs to be individually overridden.
|
If you want to have the same embeddable object type twice in the same entity, the column name defaulting will not work: at least one of the columns will have to be explicit. Hibernate goes beyond the EJB3 spec and allows you to enhance the defaulting mechanism through the NamingStrategy. DefaultComponentSafeNamingStrategy is a small improvement over the default EJB3NamingStrategy that allows embedded objects to be defaulted even if used twice in the same entity.
From Hibernate Annotations Doc: <http://docs.jboss.org/hibernate/stable/annotations/reference/en/html_single/#d0e714>
|
The generic JPA way to do it is with @AttributeOverride. This should work in both EclipseLink and Hibernate.
```
@Entity
public class Person {
@AttributeOverrides({
@AttributeOverride(name="street",column=@Column(name="homeStreet")),
...
})
@Embedded public Address home;
@AttributeOverrides({
@AttributeOverride(name="street",column=@Column(name="workStreet")),
...
})
@Embedded public Address work;
}
@Embeddable public class Address {
@Basic public String street;
...
}
```
|
JPA Multiple Embedded fields
|
[
"",
"java",
"hibernate",
"jpa",
"jakarta-ee",
""
] |
What functionality does the [`yield`](https://docs.python.org/3/reference/simple_stmts.html#yield) keyword in Python provide?
For example, I'm trying to understand this code**1**:
```
def _get_child_candidates(self, distance, min_dist, max_dist):
if self._leftchild and distance - max_dist < self._median:
yield self._leftchild
if self._rightchild and distance + max_dist >= self._median:
yield self._rightchild
```
And this is the caller:
```
result, candidates = [], [self]
while candidates:
node = candidates.pop()
distance = node._get_dist(obj)
if distance <= max_dist and distance >= min_dist:
result.extend(node._values)
candidates.extend(node._get_child_candidates(distance, min_dist, max_dist))
return result
```
What happens when the method `_get_child_candidates` is called?
Is a list returned? A single element? Is it called again? When will subsequent calls stop?
---
1. This piece of code was written by Jochen Schulz (jrschulz), who made a great Python library for metric spaces. This is the link to the complete source: [Module mspace](https://well-adjusted.de/~jrspieker/mspace/).
|
To understand what [`yield`](https://docs.python.org/3/reference/simple_stmts.html#yield) does, you must understand what *[generators](https://docs.python.org/3/glossary.html#term-generator)* are. And before you can understand generators, you must understand *[iterables](https://docs.python.org/3/glossary.html#term-iterable)*.
## Iterables
When you create a list, you can read its items one by one. Reading its items one by one is called iteration:
```
>>> mylist = [1, 2, 3]
>>> for i in mylist:
... print(i)
1
2
3
```
`mylist` is an *iterable*. When you use a list comprehension, you create a list, and so an iterable:
```
>>> mylist = [x*x for x in range(3)]
>>> for i in mylist:
... print(i)
0
1
4
```
Everything you can use "`for... in...`" on is an iterable; `lists`, `strings`, files...
These iterables are handy because you can read them as much as you wish, but you store all the values in memory and this is not always what you want when you have a lot of values.
## Generators
Generators are *[iterators](https://docs.python.org/3/glossary.html#term-iterator)*, a kind of iterable **you can only iterate over once**. Generators do not store all the values in memory, **they generate the values on the fly**:
```
>>> mygenerator = (x*x for x in range(3))
>>> for i in mygenerator:
... print(i)
0
1
4
```
It is just the same except you used `()` instead of `[]`. BUT, you **cannot** perform `for i in mygenerator` a second time since generators can only be used once: they calculate 0, then forget about it and calculate 1, and end after calculating 4, one by one.
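You can see the one-shot behaviour directly by draining the same generator twice:

```python
mygenerator = (x * x for x in range(3))

first_pass = list(mygenerator)    # consumes every value
second_pass = list(mygenerator)   # the generator is now exhausted

print(first_pass)    # [0, 1, 4]
print(second_pass)   # []
```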
## Yield
`yield` is a keyword that is used like `return`, except the function will return a generator.
```
>>> def create_generator():
... mylist = range(3)
... for i in mylist:
... yield i*i
...
>>> mygenerator = create_generator() # create a generator
>>> print(mygenerator) # mygenerator is an object!
<generator object create_generator at 0xb7555c34>
>>> for i in mygenerator:
... print(i)
0
1
4
```
Here it's a useless example, but it's handy when you know your function will return a huge set of values that you will only need to read once.
To master `yield`, you must understand that **when you call the function, the code you have written in the function body does not run.** The function only returns the generator object; this is a bit tricky.
Then, your code will continue from where it left off each time `for` uses the generator.
Now the hard part:
The first time the `for` calls the generator object created from your function, it will run the code in your function from the beginning until it hits `yield`, then it'll return the first value of the loop. Then, each subsequent call will run another iteration of the loop you have written in the function and return the next value. This will continue until the generator is considered empty, which happens when the function runs without hitting `yield`. That can be because the loop has come to an end, or because an `if/else` condition is no longer satisfied.
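You can watch this happen by driving the generator by hand with `next()` instead of a `for` loop (same `create_generator` as above):

```python
def create_generator():
    mylist = range(3)
    for i in mylist:
        yield i * i

mygenerator = create_generator()  # no body code has run yet

print(next(mygenerator))   # runs the body up to the first yield: 0
print(next(mygenerator))   # resumes right after that yield: 1
print(next(mygenerator))   # 4
# One more next(mygenerator) would raise StopIteration:
# the loop ended without hitting another yield.
```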
---
## Your code explained
*Generator:*
```
# Here you create the method of the node object that will return the generator
def _get_child_candidates(self, distance, min_dist, max_dist):
# Here is the code that will be called each time you use the generator object:
# If there is still a child of the node object on its left
# AND if the distance is ok, return the next child
if self._leftchild and distance - max_dist < self._median:
yield self._leftchild
# If there is still a child of the node object on its right
# AND if the distance is ok, return the next child
if self._rightchild and distance + max_dist >= self._median:
yield self._rightchild
# If the function arrives here, the generator will be considered empty
# There are no more than two values: the left and the right children
```
*Caller:*
```
# Create an empty list and a list with the current object reference
result, candidates = list(), [self]
# Loop on candidates (they contain only one element at the beginning)
while candidates:
# Get the last candidate and remove it from the list
node = candidates.pop()
# Get the distance between obj and the candidate
distance = node._get_dist(obj)
# If the distance is ok, then you can fill in the result
if distance <= max_dist and distance >= min_dist:
result.extend(node._values)
# Add the children of the candidate to the candidate's list
# so the loop will keep running until it has looked
# at all the children of the children of the children, etc. of the candidate
candidates.extend(node._get_child_candidates(distance, min_dist, max_dist))
return result
```
This code contains several smart parts:
* The loop iterates on a list, but the list expands while the loop is being iterated. It's a concise way to go through all these nested data even if it's a bit dangerous since you can end up with an infinite loop. In this case, `candidates.extend(node._get_child_candidates(distance, min_dist, max_dist))` exhausts all the values of the generator, but `while` keeps creating new generator objects which will produce different values from the previous ones since it's not applied on the same node.
* The `extend()` method is a list object method that expects an iterable and adds its values to the list.
Usually, we pass a list to it:
```
>>> a = [1, 2]
>>> b = [3, 4]
>>> a.extend(b)
>>> print(a)
[1, 2, 3, 4]
```
But in your code, it gets a generator, which is good because:
1. You don't need to read the values twice.
2. You may have a lot of children and you don't want them all stored in memory.
And it works because Python does not care if the argument of a method is a list or not. Python expects iterables so it will work with strings, lists, tuples, and generators! This is called duck typing and is one of the reasons why Python is so cool. But this is another story, for another question...
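A small sketch of that duck typing in action - `extend()` accepts any iterable, not just a list:

```python
a = [1, 2]
a.extend([3, 4])                     # a list...
a.extend(x * 10 for x in range(2))   # ...a generator...
a.extend("hi")                       # ...even a string: all just iterables

print(a)   # [1, 2, 3, 4, 0, 10, 'h', 'i']
```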
You can stop here, or read a little bit to see an advanced use of a generator:
## Controlling a generator exhaustion
```
>>> class Bank(): # Let's create a bank, building ATMs
... crisis = False
... def create_atm(self):
... while not self.crisis:
... yield "$100"
>>> hsbc = Bank() # When everything's ok the ATM gives you as much as you want
>>> corner_street_atm = hsbc.create_atm()
>>> print(corner_street_atm.next())
$100
>>> print(corner_street_atm.next())
$100
>>> print([corner_street_atm.next() for cash in range(5)])
['$100', '$100', '$100', '$100', '$100']
>>> hsbc.crisis = True # Crisis is coming, no more money!
>>> print(corner_street_atm.next())
<type 'exceptions.StopIteration'>
>>> wall_street_atm = hsbc.create_atm() # It's even true for new ATMs
>>> print(wall_street_atm.next())
<type 'exceptions.StopIteration'>
>>> hsbc.crisis = False # The trouble is, even post-crisis the ATM remains empty
>>> print(corner_street_atm.next())
<type 'exceptions.StopIteration'>
>>> brand_new_atm = hsbc.create_atm() # Build a new one to get back in business
>>> for cash in brand_new_atm:
... print cash
$100
$100
$100
$100
$100
$100
$100
$100
$100
...
```
**Note:** For Python 3, use `print(corner_street_atm.__next__())` or `print(next(corner_street_atm))`
It can be useful for various things like controlling access to a resource.
## Itertools, your best friend
The `itertools` module contains special functions to manipulate iterables. Ever wish to duplicate a generator?
Chain two generators? Group values in a nested list with a one-liner? `Map / Zip` without creating another list?
Then just `import itertools`.
An example? Let's see the possible orders of arrival for a four-horse race:
```
>>> horses = [1, 2, 3, 4]
>>> races = itertools.permutations(horses)
>>> print(races)
<itertools.permutations object at 0xb754f1dc>
>>> print(list(itertools.permutations(horses)))
[(1, 2, 3, 4),
(1, 2, 4, 3),
(1, 3, 2, 4),
(1, 3, 4, 2),
(1, 4, 2, 3),
(1, 4, 3, 2),
(2, 1, 3, 4),
(2, 1, 4, 3),
(2, 3, 1, 4),
(2, 3, 4, 1),
(2, 4, 1, 3),
(2, 4, 3, 1),
(3, 1, 2, 4),
(3, 1, 4, 2),
(3, 2, 1, 4),
(3, 2, 4, 1),
(3, 4, 1, 2),
(3, 4, 2, 1),
(4, 1, 2, 3),
(4, 1, 3, 2),
(4, 2, 1, 3),
(4, 2, 3, 1),
(4, 3, 1, 2),
(4, 3, 2, 1)]
```
## Understanding the inner mechanisms of iteration
Iteration is a process involving iterables (implementing the `__iter__()` method) and iterators (implementing the `__next__()` method).
Iterables are any objects you can get an iterator from. Iterators are objects that let you iterate on iterables.
There is more about it in this article about [how `for` loops work](https://web.archive.org/web/20201109034340/http://effbot.org/zone/python-for-statement.htm).
|
## Shortcut to understanding `yield`
When you see a function with `yield` statements, apply this easy trick to understand what will happen:
1. Insert a line `result = []` at the start of the function.
2. Replace each `yield expr` with `result.append(expr)`.
3. Insert a line `return result` at the bottom of the function.
4. Yay - no more `yield` statements! Read and figure out the code.
5. Compare the function to the original definition.
This trick may give you an idea of the logic behind the function, but what actually happens with `yield` is significantly different than what happens in the list-based approach. In many cases, the yield approach will be a lot more memory efficient and faster too. In other cases, this trick will get you stuck in an infinite loop, even though the original function works just fine. Read on to learn more...
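For example, applying the trick to a small generator function (the function names here are hypothetical):

```python
# The original generator function
def squares(n):
    for i in range(n):
        yield i * i

# The same function after applying steps 1-3 of the trick
def squares_list(n):
    result = []                  # step 1
    for i in range(n):
        result.append(i * i)     # step 2: yield expr -> result.append(expr)
    return result                # step 3

print(list(squares(4)))   # [0, 1, 4, 9]
print(squares_list(4))    # [0, 1, 4, 9] -- same values, but built all at once
```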
## Don't confuse your iterables, iterators, and generators
First, the **iterator protocol** - when you write
```
for x in mylist:
...loop body...
```
Python performs the following two steps:
1. Gets an iterator for `mylist`:
Call `iter(mylist)` -> this returns an object with a `next()` method (or `__next__()` in Python 3).
[This is the step most people forget to tell you about]
2. Uses the iterator to loop over items:
Keep calling the `next()` method on the iterator returned from step 1. The return value from `next()` is assigned to `x` and the loop body is executed. If an exception `StopIteration` is raised from within `next()`, it means there are no more values in the iterator and the loop is exited.
The truth is Python performs the above two steps anytime it wants to *loop over* the contents of an object - so it could be a for loop, but it could also be code like `otherlist.extend(mylist)` (where `otherlist` is a Python list).
Here `mylist` is an *iterable* because it implements the iterator protocol. In a user-defined class, you can implement the `__iter__()` method to make instances of your class iterable. This method should return an *iterator*. An iterator is an object with a `next()` method. It is possible to implement both `__iter__()` and `next()` on the same class, and have `__iter__()` return `self`. This will work for simple cases, but not when you want two iterators looping over the same object at the same time.
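A minimal sketch of that two-class split (class names invented for illustration, using Python 3's method names): the iterable hands out a fresh iterator on every `__iter__()` call, so two loops can walk the same object independently:

```python
class Countdown:
    """Iterable: each call to __iter__() returns a fresh iterator."""
    def __init__(self, start):
        self.start = start

    def __iter__(self):
        return CountdownIterator(self.start)

class CountdownIterator:
    """Iterator: holds the loop state and implements __next__()."""
    def __init__(self, current):
        self.current = current

    def __iter__(self):
        return self            # iterators are themselves iterable

    def __next__(self):        # next() in Python 2
        if self.current <= 0:
            raise StopIteration
        self.current -= 1
        return self.current + 1

c = Countdown(3)
print(list(c))   # [3, 2, 1]
print(list(c))   # [3, 2, 1] again - a fresh iterator each time
```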
So that's the iterator protocol, many objects implement this protocol:
1. Built-in lists, dictionaries, tuples, sets, and files.
2. User-defined classes that implement `__iter__()`.
3. Generators.
Note that a `for` loop doesn't know what kind of object it's dealing with - it just follows the iterator protocol, and is happy to get item after item as it calls `next()`. Built-in lists return their items one by one, dictionaries return the *keys* one by one, files return the *lines* one by one, etc. And generators return... well that's where `yield` comes in:
```
def f123():
    yield 1
    yield 2
    yield 3

for item in f123():
    print(item)
```
Instead of `yield` statements, if you had three `return` statements in `f123()` only the first would get executed, and the function would exit. But `f123()` is no ordinary function. When `f123()` is called, it *does not* return any of the values in the yield statements! It returns a generator object. Also, the function does not really exit - it goes into a suspended state. When the `for` loop tries to loop over the generator object, the function resumes from its suspended state at the very next line after the `yield` it previously returned from, executes the next line of code, in this case, a `yield` statement, and returns that as the next item. This happens until the function exits, at which point the generator raises `StopIteration`, and the loop exits.
So the generator object is sort of like an adapter - at one end it exhibits the iterator protocol, by exposing `__iter__()` and `next()` methods to keep the `for` loop happy. At the other end, however, it runs the function just enough to get the next value out of it and puts it back in suspended mode.
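That suspend/resume behaviour can be observed by driving the generator by hand instead of with a `for` loop (same `f123()` as above, using Python 3's built-in `next()`):

```python
def f123():
    yield 1
    yield 2
    yield 3

gen = f123()      # no body code has run yet; we just hold a generator object
print(next(gen))  # runs until the first yield -> 1
print(next(gen))  # resumes after that yield -> 2
print(next(gen))  # -> 3
# A fourth next(gen) would raise StopIteration, which is what
# makes a for loop over f123() stop.
```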
## Why use generators?
Usually, you can write code that doesn't use generators but implements the same logic. One option is to use the temporary list 'trick' I mentioned before. That will not work in all cases, e.g. if you have infinite loops, and it may make inefficient use of memory when you have a really long list. The other approach is to implement a new iterable class `SomethingIter` that keeps the state in instance members and performs the next logical step in its `next()` (or `__next__()` in Python 3) method. Depending on the logic, the code inside the `next()` method may end up looking very complex and prone to bugs. Here generators provide a clean and easy solution.
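For example, an infinite sequence is exactly the case where the list 'trick' would loop forever, while a generator works fine because the consumer pulls only what it needs (`evens()` is a made-up example):

```python
import itertools

def evens():
    """Generates 0, 2, 4, ... forever - no list could hold this."""
    n = 0
    while True:
        yield n
        n += 2

# A consumer takes only as much as it needs:
first_five = list(itertools.islice(evens(), 5))
print(first_five)   # [0, 2, 4, 6, 8]
```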
|
What does the "yield" keyword do in Python?
|
[
"",
"python",
"iterator",
"generator",
"yield",
""
] |
I'm writing C# code that needs to connect to COM events. I implemented the use of
IConnectionPointContainer and IConnectionPoint thus:
```
IConnectionPointContainer connectionPointContainer = internalGenerator as IConnectionPointContainer;
if (connectionPointContainer == null)
{
    Debug.Fail("The script generator doesn't support the required interface - IConnectionPointContainer");
    throw new InvalidCastException("The script generator doesn't support the required interface - IConnectionPointContainer");
}

Guid IID_IScriptGeneratorEvents = typeof(IScriptGeneratorCallback).GUID;
connectionPointContainer.FindConnectionPoint(ref IID_IScriptGeneratorEvents, out m_connectionPoint);
m_connectionPoint.Advise(this, out m_cookie);
```
The problem is that when the COM server is actually implemented in .Net (say, C#), after .Net creates it, it handles it as a .Net object, not a COM object. Since the .Net object doesn't implement the IConnectionPointContainer interface, I get null when trying to cast the object to that interface.
Any idea how I can work around this?
I can of course implement IConnectionPointContainer myself in the C# COM server, but I would like a simpler solution that I can easily explain to other developers who need to implement the COM server.
P.S I must use IConnectionPointContainer as the COM server may be implemented in non-.Net (C++, Java).
Thanks,
Inbar
|
I didn't find a way to do this.
Eventually I will define another interface in .Net and will write 2 code paths, one for .Net objects and one for real COM objects.
|
IConnectionPointContainer is implemented on the CCW (COM callable wrapper) that .NET automatically generates when exposing your .NET object as a COM object externally.
Try calling [Marshal.GetComInterfaceForObject](http://msdn.microsoft.com/en-us/library/system.runtime.interopservices.marshal.getcominterfaceforobject.aspx) on the .NET object to get a COM interface for IConnectionPointContainer rather than just casting it.
**update** ... and if that doesn't work [Marshal.GetIUnknownForObject](http://msdn.microsoft.com/en-us/library/system.runtime.interopservices.marshal.getiunknownforobject.aspx) must return something, and maybe that will then support a [Marshal.QueryInterface](http://msdn.microsoft.com/en-us/library/system.runtime.interopservices.marshal.queryinterface.aspx) call.
|
Connect to COM events in C# - support both managed and unmanaged servers
|
[
"",
"c#",
"com",
"interop",
"iconnectionpoint",
""
] |
I've created a small python script to toggle between two files I'm using for testing.
My question is, what is a good Python format style for the following code:
```
import filecmp
import shutil
local = "local.txt"
remote = "remote.txt"
config_file = r"C:\some\path\file.txt"
shutil.copyfile( remote if( filecmp.cmp(local, config_file ) ) else local, config_file )
```
Or
```
shutil.copyfile( remote
                 if( filecmp.cmp(local, config_file ) )
                 else local,
                 config_file )
```
Or
```
tocopy = remote if( filecmp.cmp( local, config_file ) ) else local
shutil.copyfile( tocopy, config_file )
```
Or what?
Also, what is the preferred way to name a variable in Python for many-word names: is it "to_copy", "tocopy", "toCopy", or "ToCopy"?
|
For the conditional statement, I would probably go with:
```
if filecmp.cmp(local, config_file):
    shutil.copyfile(remote, config_file)
else:
    shutil.copyfile(local, config_file)
```
There's little need to use the inline `y if x else z` in this case, since the surrounding code is simple enough.
|
The most common naming I've seen is underscore-separated words, to_copy.
As for the format style, I've seen no such agreement. I find
```
source = remote if filecmp.cmp(local, config_file) else local
shutil.copyfile(source, config_file)
```
to be the clearest among your options.
And seeing that everyone prefers to split the if I'd, at the very least, encapsulate the copyfile call in case you someday wish to change it:
```
def copy_to(source, destination):
    shutil.copyfile(source, destination)

if filecmp.cmp(local, config_file):
    copy_to(remote, config_file)
else:
    copy_to(local, config_file)
```
|
What is a good Python format style
|
[
"",
"python",
"coding-style",
""
] |
Why is it called a single in VB.net? I'm sure there is a good reason but it doesn't seem intuitive to a non formally trained programmer like me.
|
BPAndrew's question seems to be really "why float in C# and Single in VB.NET", which no one actually answered, so here's my 2p...
The use of "float" in C# seems to be a throwback to its C/C++ heritage. "float" still maps to the System.Single type in C#, so the keyword just exists for convenience. You could just as well declare the variable as "Single" in C# the same as you do in VB.NET.
(And as stated above, naming them Single/Double actually makes more sense as they are single/double precision floating-point numbers.)
|
As others have said, they map to "single" and "double" precision binary floating point types. Personally I think it was a sideways step to just name `System.Single` and `System.Double` - why not `System.Float32` and `System.Float64` to match the integer types?
|
C# Float vs. VB.net Single - Namin' complainin'
|
[
"",
"c#",
"vb.net",
"naming-conventions",
""
] |
I'm looking for a syntax highlighter cum code formatter for my blog on BlogSpot. I've tried several today, but the all want to include `<style>` tags, or reference a stylesheet. I'm looking for one that is ideal for segments of code, and includes styling inline. Any suggestions?
|
You can always copy the extra few bytes of styling for, say, a [Pygments](http://pygments.org/) highlighter (which really is quite excellent) into the `<head>` of your blog. You don't even need to install any software; just copy the HTML from the online service.
|
The project referenced above has moved:
<http://alexgorbatchev.com/wiki/SyntaxHighlighter>
Instructions for using it on Blogger:
mlawire.blogspot.com/2009/07/blogger-syntax-highlighting.html
|
C# syntax highlighter for blogging on BlogSpot
|
[
"",
"c#",
"syntax-highlighting",
"blogs",
"blogspot",
""
] |
I have a BasePage class which all other pages derive from:
```
public class BasePage
```
This BasePage has a constructor which contains code which must always run:
```
public BasePage()
{
    // Important code here
}
```
I want to force derived classes to call the base constructor, like so:
```
public MyPage()
    : base()
{
    // Page specific code here
}
```
**How can I enforce this (preferably at compile time)?**
|
The base constructor will always be called at some point. If you call `this(...)` instead of `base(...)` then that calls into another constructor in the same class - which again will have to either call yet another sibling constructor or a parent constructor. Sooner or later you will always get to a constructor which either calls `base(...)` explicitly or implicitly calls a parameterless constructor of the base class.
See [this article](http://pobox.com/~skeet/csharp/constructors.html) for more about constructor chaining, including the execution points of the various bits (such as variable initializers).
|
The base class constructor taking no arguments is automatically run if you don't call any other base class constructor taking arguments explicitly.
|
How can I force the base constructor to be called in C#?
|
[
"",
"c#",
"asp.net",
"oop",
"constructor",
"c#-2.0",
""
] |
What are my options for running Java 6 on OS X?
I have an MacBook Pro Intel Core Duo running Mac OS X 10.4. Do I have any options for running Java 6 on this hardware and OS?
Related questions: Which Macs (either current or forthcoming) support 64-bit computing? Are there any Apple laptops (either current or forthcoming) that will have a 64 bit architecture?
Any relevant links would be appreciated.
|
Since the Core **2** Duo, all available Intel Macs are 64-bit capable. If you are an early adopter and, like me, have a Core Duo-based Intel Mac (note the missing 2), your computer is not 64-bit capable (see <http://en.wikipedia.org/wiki/Core_duo>).
The first Macbook (Pro) and Mac mini are examples for that.
However, as mentioned above, with SoyLatte (<http://landonf.bikemonkey.org/static/soylatte/>) it is rather simple to get Java 6 running on your Mac. I've walked through the installation steps and I could successfully run Java 6 applications (e.g. Osmosis <http://wiki.openstreetmap.org/index.php/Osmosis>).
The Apple Java 1.6 is only deployed on 64-bit capable Macs, i.e. even if you have Leopard you won't have Java 6 on your Core Duo Mac.
|
People out there are working on getting [OpenJDK](http://openjdk.java.net/) 6 to work on 10.4. I've never tried it myself, but [soy latte](http://landonf.bikemonkey.org/static/soylatte/), a Mac OS port of the [BSD openjdk port](http://openjdk.java.net/projects/bsd-port/), looks promising. The 10.4 version appears to be 32-bit only.
|
What are my options for running Java 6 on OS X?
|
[
"",
"java",
"macos",
""
] |
If I understand correctly the .net runtime will always clean up after me. So if I create new objects and I stop referencing them in my code, the runtime will clean up those objects and free the memory they occupied.
Since this is the case why then do some objects need to have a destructor or dispose method? Won’t the runtime clean up after them when they are not referenced anymore?
|
Finalizers are needed to guarantee the release of scarce resources back into the system, like file handles, sockets, kernel objects, etc. Since the finalizer always runs at the end of the object's life, it's the designated place to release those handles.
The `Dispose` pattern is used to provide deterministic destruction of resources. Since the .net runtime garbage collector is non-deterministic (which means you can never be sure when the runtime will collect old objects and call their finalizer), a method was needed to ensure the deterministic release of system resources. Therefore, when you implement the `Dispose` pattern properly you provide deterministic release of the resources and in cases where the consumer is careless and does not dispose the object, the finalizer will clean up the object.
A simple example of why `Dispose` is needed might be a quick and dirty log method:
```
public void Log(string line)
{
    var sw = new StreamWriter(File.Open(
        "LogFile.log", FileMode.OpenOrCreate, FileAccess.Write, FileShare.None));
    sw.WriteLine(line);
    // Since we don't close the stream the FileStream finalizer will do that for
    // us but we don't know when that will be and until then the file is locked.
}
```
In the above example, the file will remain locked until the garbage collector calls the finalizer on the `StreamWriter` object. This presents a problem since, in the meantime, the method might be called again to write a log, but this time it will fail because the file is still locked.
The correct way is to dispose the object when we are done using it:
```
public void Log(string line)
{
    using (var sw = new StreamWriter(File.Open(
        "LogFile.log", FileMode.OpenOrCreate, FileAccess.Write, FileShare.None))) {
        sw.WriteLine(line);
    }
    // Since we use the using block (which conveniently calls Dispose() for us)
    // the file will be closed at this point.
}
```
BTW, technically finalizers and destructors mean the same thing; I prefer to call C# destructors 'finalizers' since otherwise they tend to be confused with C++ destructors, which unlike C#'s, are deterministic.
|
The previous answers are good but let me emphasize the important point here once again. In particular, you said that
> If I understand correctly the .net runtime will always clean up after me.
This is only partly correct. In fact, **.NET *only* offers automatic management for one particular resource**: main memory. All other resources need manual cleanup.1)
Oddly, main memory gets special status in almost all discussions about program resources. There's of course a good reason for this – main memory is often the scarcest resource. But it's worth remembering that there are other types of resources as well, that also need managing.
---
1) The usual attempted solution is to couple the lifetime of other resources to the lifetime of memory locations or identifiers in the code – hence the existence of finalizers.
|
Since .NET has a garbage collector why do we need finalizers/destructors/dispose-pattern?
|
[
"",
"c#",
".net",
"memory",
"memory-management",
"garbage-collection",
""
] |
I saw something about needing to have the assembly available for the type of the first argument passed to the function. I think it is available, but I can't figure out what I am missing.
This code is in a service. I was running the service under the 'NETWORK SERVICES' user account, when I changed the account to that of the session I was logged on with it worked ok. But, what's the difference, and how can I get it to work for the NETWORK SERVICES user.
|
I finally found an answer: it appears that the type given to ApplicationHost.CreateApplicationHost() must be in an assembly located in the GAC.
Simple, and stupid :)
|
Copying your binary to the bin folder of your web app will also fix this.
|
Why does System.Web.Hosting.ApplicationHost.CreateApplicationHost throw System.IO.FileNotFoundException?
|
[
"",
"c#",
""
] |
Imagine I have the following:
```
inFile = "/adda/adas/sdas/hello.txt"
# that instruction gives me hello.txt
Name = inFile.split("/")[-1]
# that one gives me the name I want - just hello
Name1 = Name.split(".")[0]
```
Is there any chance to simplify that doing the same job in just one expression?
|
You can get what you want platform independently by using [os.path.basename](http://docs.python.org/library/os.path.html#os.path.basename) to get the last part of a path and then use [os.path.splitext](http://docs.python.org/library/os.path.html#os.path.splitext) to get the filename without extension.
```
from os.path import basename, splitext

pathname = "/adda/adas/sdas/hello.txt"
name, extension = splitext(basename(pathname))
print(name)  # --> "hello"
```
Using [os.path.basename](http://docs.python.org/library/os.path.html#os.path.basename) and [os.path.splitext](http://docs.python.org/library/os.path.html#os.path.splitext) instead of str.split or re.split is more proper (and therefore received more points than any other answer) because it does not break down on other [platforms that use different path separators (you would be surprised how varied this can be)](http://en.wikipedia.org/wiki/Path_(computing)#Representations_of_paths_by_operating_system_and_shell).
It also carries most points because it answers your question for "one line" precisely and is aesthetically more pleasing than your example (even though that is debatable, as are all questions of taste).
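For what it's worth, on Python 3.4+ the same result is also available through `pathlib` (which didn't exist when this answer was written); `Path.stem` is the final path component without its suffix:

```python
from pathlib import Path

name = Path("/adda/adas/sdas/hello.txt").stem
print(name)  # --> "hello"
```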
|
Answering the question in the topic rather than trying to analyze the example...
You really want to use [Florians](https://stackoverflow.com/questions/324132/split-twice-in-the-same-expression#324141) solution if you want to split paths, but if you promise not to use this for path parsing...
You can use [re.split()](https://docs.python.org/library/re.html#module-contents) to split using several separators by or:ing them with a '|', have a look at this:
```
import re

inFile = "/adda/adas/sdas/hello.txt"
print(re.split(r'\.|/', inFile)[-2])
```
|
split twice in the same expression?
|
[
"",
"python",
""
] |
I have a PHP web application which uses a MySQL database for object tagging, in which I've used the tag structure accepted as the answer to [this SO question](https://stackoverflow.com/questions/20856/how-do-you-recommend-implementing-tags-or-tagging).
I'd like to implement a tag hierarchy, where each tag can have a unique parent tag. Searches for a parent tag T would then match all descendants of T (i.e. T, tags whos parent is T (children of T), grandchildren of T, etc.).
The easiest way of doing this seems to be to add a ParentID field to the tag table, which contains the ID of a tag's parent tag, or some magic number if the tag has no parent. Searching for descendants, however, then requires repeated full searches of the database to find the tags in each 'generation', which I'd like to avoid.
A (presumably) faster, but less normalised way of doing this would be to have a table containing all the children of each tag, or even all the descendants of each tag. This however runs the risk of inconsistent data in the database (e.g. a tag being the child of more than one parent).
Is there a good way to make queries to find descendants fast, while keeping the data as normalised as possible?
|
Ali's answer has a link to [Joe Celko's Trees and Hierarchies in SQL for Smarties](http://books.google.co.uk/books?id=6mAAFPTB-9cC), which confirms my suspicion - there isn't a simple database structure that offers the best of all worlds. The best for my purpose seems to be the "Frequent Insertion Tree" detailed in this book, which is like the "Nested Set Model" of Ali's link, but with non-consecutive indexing. This allows O(1) insertion (*a la* unstructured BASIC line numbering), with occasional index reorganisation as and when needed.
|
I implemented it using two columns. I simplify it here a little, since I had to keep the tag name in a separate field/table in order to localize it for different languages:
* tag
* path
Look at these rows for example:
```
tag path
--- ----
database database/
mysql database/mysql/
mysql4 database/mysql/mysql4/
mysql4-1 database/mysql/mysql4-1/
oracle database/oracle/
sqlserver database/sqlserver/
sqlserver2005 database/sqlserver/sqlserver2005/
sqlserver2008 database/sqlserver/sqlserver2008/
```
etc.
Using the `like` operator on the path field you can easily get all needed tag rows:
```
SELECT * FROM tags WHERE path LIKE 'database/%'
```
There are some implementation details like when you move a node in the hierarchy you have to change all children too etc., but it's not hard.
Also make sure that your path column is long enough - in my case I used another field rather than the tag name for the path, to make sure the paths don't get too long.
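The descendant lookup is just a string-prefix test, which is why the `LIKE 'prefix%'` query works; here is a small Python sketch of the same idea (hypothetical data mirroring the rows above):

```python
# tag -> materialized path, as in the table above
tags = {
    "database": "database/",
    "mysql":    "database/mysql/",
    "mysql4":   "database/mysql/mysql4/",
    "oracle":   "database/oracle/",
}

def descendants(parent_path, tags):
    """All tags whose path starts with parent_path -
    the same result as: SELECT * FROM tags WHERE path LIKE 'parent_path%'."""
    return sorted(t for t, p in tags.items() if p.startswith(parent_path))

print(descendants("database/mysql/", tags))   # ['mysql', 'mysql4']
```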
|
Hierarchical tagging in SQL
|
[
"",
"sql",
"mysql",
"database",
"tags",
"normalizing",
""
] |
I'm having troubles with HttpWebRequest/HttpWebResponse and cookies/CookieContainer/CookieCollection.
The thing is, if the web server does not send/use a "path" in the cookie, Cookie.Path equals the path-part of the request URI instead of "/" or being empty in my application.
Therefore, those cookies do not work for the whole domain, which it actually does in proper web browsers.
Any ideas how to solve this issue?
Thanks in advance
|
Ah, I see what you mean. Generally what browsers *really* do is take the folder containing the document as the path; for ‘/login.php’ that would be ‘/’ so it would effectively work across the whole domain. ‘/potato/login.php’ would be limited to ‘/potato/’; anything with trailing path-info parts (eg. ‘/login.php/’) would not work.
In this case the Netscape spec could be considered wrong or at least misleading in claiming that path defaults to the current document path... depending on how exactly you read ‘path’ there. However the browser behaviour is consistent back as far as the original Netscape version. Netscape never were that good at writing specs...
If .NET's HttpWebRequest is really defaulting CookieContainer.Path to the *entire* path of the current document, I'd file a bug against it.
Unfortunately the real-world behaviour is not actually currently described in a standards document... there is RFC 2965, which does get the path thing right, but makes several other changes not representative of real-world browser behaviour, so that's not wholly reliable either. :-(
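For what it's worth, the folder-based default bobince describes can be sketched as a tiny helper (a hypothetical illustration of browser behaviour, not of what .NET does):

```python
def default_cookie_path(request_path):
    """Browser-style default cookie path: the directory of the request URI.

    /login.php        -> /
    /potato/login.php -> /potato/
    """
    if not request_path.startswith("/"):
        return "/"
    # strip everything after the last slash
    return request_path[: request_path.rfind("/") + 1]

print(default_cookie_path("/login.php"))         # /
print(default_cookie_path("/potato/login.php"))  # /potato/
```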
|
Seems like I cannot go any further with the default cookie handler, so I got annoyed and I did it the hard way. Haha. So parsing response.Headers["Set-Cookie"] myself is my solution. Not my preferred one but it works. And I simply eliminated the problem with splitting at the wrong comma using regular expressions.
If I could give out points here, I would give you some of them, bobince, because you gave me valuable information. I would also vote up if I could (need higher rep. score), but since this behavior probably is a bug, as you mentioned, I will accept that as an answer.
Thank you. :)
|
Problems with HttpWebRequest/HttpWebResponse and cookies
|
[
"",
"c#",
".net",
""
] |
Sometimes you have strings that must fit within a certain pixel width. This function attempts to do so efficiently. Please post your suggestions or refactorings below :)
```
function fitStringToSize(str,len) {
    var shortStr = str;
    var f = document.createElement("span");
    f.style.display = 'hidden';
    f.style.padding = '0px';
    document.body.appendChild(f);

    // on first run, check if string fits into the length already.
    f.innerHTML = str;
    diff = f.offsetWidth - len;

    // if string is too long, shorten it by the approximate
    // difference in characters (to make for fewer iterations).
    while(diff > 0)
    {
        shortStr = substring(str,0,(str.length - Math.ceil(diff / 5))) + '…';
        f.innerHTML = shortStr;
        diff = f.offsetWidth - len;
    }

    while(f.lastChild) {
        f.removeChild(f.lastChild);
    }
    document.body.removeChild(f);

    // if the string was too long, put the original string
    // in the title element of the abbr, and append an ellipsis
    if(shortStr.length < str.length)
    {
        return '<abbr title="' + str + '">' + shortStr + '</abbr>';
    }
    // if the string was short enough in the first place, just return it.
    else
    {
        return str;
    }
}
```
UPDATE:
@some's solution below is much better; please use that.
Update 2:
Code now posted as a [gist](https://gist.github.com/24261/7fdb113f1e26111bd78c0c6fe515f6c0bf418af5); feel free to fork and submit patches :)
|
There are a couple of problems with your code.
* Why `/ 5` ? The width of the characters depends on `font-family` and `font-size`.
* You must escape `str` in the abbr title (or else an " will make the code invalid).
* `diff` is not declared and ends up in the global scope
* The `substring` is not supposed to work like that. What browser are you using?
* `hidden` is not a valid value of `style.display`. To hide it you should use the value `none`, but then the browser doesn't calculate the `offsetWidth`. Use `style.visibility="hidden"` instead.
* The search for the right length is very inefficient.
* Must escape `</abbr>`
I rewrote it for you and added `className` so you can use a style to set the `font-family` and `font-size`. Mr Fooz suggested that you use a mouseover to show the whole string. That is not necessary since modern browsers do that for you (tested with FF, IE, Opera and Chrome)
```
function fitStringToSize(str,len,className) {
    var result = str; // set the result to the whole string as default
    var span = document.createElement("span");
    span.className=className; //Allow a classname to be set to get the right font-size.
    span.style.visibility = 'hidden';
    span.style.padding = '0px';
    document.body.appendChild(span);

    // check if the string doesn't fit
    span.innerHTML = result;
    if (span.offsetWidth > len) {
        var posStart = 0, posMid, posEnd = str.length;
        while (true) {
            // Calculate the middle position
            posMid = posStart + Math.ceil((posEnd - posStart) / 2);
            // Break the loop if this is the last round
            if (posMid==posEnd || posMid==posStart) break;

            span.innerHTML = str.substring(0,posMid) + '…';

            // Test if the width at the middle position is
            // too wide (set new end) or too narrow (set new start).
            if ( span.offsetWidth > len ) posEnd = posMid; else posStart=posMid;
        }
        //Escape quotes
        var title = str.replace("\"","&quot;");
        //Escape < and >
        var body = str.substring(0,posStart).replace("<","&lt;").replace(">","&gt;");
        result = '<abbr title="' + title + '">' + body + '…<\/abbr>';
    }
    document.body.removeChild(span);
    return result;
}
```
Edit:
While testing a little more I found a couple of bugs.
* I used `Math.ceil` instead of the
intended `Math.floor` (I blame this on
that English isn't my native
language)
* If the input string had html-tags
then the result would be undefined
(it's not good to truncate a tag in
the middle or to leave open tags)
Improvements:
* Escape the string that is copied to the span in all places. You can still use html-entities, but no tags are allowed (`<` and `>` will be displayed)
* Rewrote the `while`-statement (it is a
little faster, but the main reason
was to get rid of the bug that
caused extra rounds and to get rid
of the break-statement)
* Renamed the function to `fitStringToWidth`
Version 2:
```
function fitStringToWidth(str,width,className) {
    // str        A string where html-entities are allowed but no tags.
    // width      The maximum allowed width in pixels
    // className  A CSS class name with the desired font-name and font-size. (optional)
    // ----
    // _escTag is a helper to escape 'less than' and 'greater than'
    function _escTag(s){ return s.replace("<","&lt;").replace(">","&gt;");}

    //Create a span element that will be used to get the width
    var span = document.createElement("span");
    //Allow a classname to be set to get the right font-size.
    if (className) span.className=className;
    span.style.display='inline';
    span.style.visibility = 'hidden';
    span.style.padding = '0px';
    document.body.appendChild(span);

    var result = _escTag(str); // default to the whole string
    span.innerHTML = result;

    // Check if the string will fit in the allowed width. NOTE: if the width
    // can't be determined (offsetWidth==0) the whole string will be returned.
    if (span.offsetWidth > width) {
        var posStart = 0, posMid, posEnd = str.length, posLength;

        // Calculate (posEnd - posStart) integer division by 2 and
        // assign it to posLength. Repeat until posLength is zero.
        while (posLength = (posEnd - posStart) >> 1) {
            posMid = posStart + posLength;
            //Get the string from the beginning up to posMid;
            span.innerHTML = _escTag(str.substring(0,posMid)) + '…';

            // Check if the current width is too wide (set new end)
            // or too narrow (set new start)
            if ( span.offsetWidth > width ) posEnd = posMid; else posStart=posMid;
        }

        result = '<abbr title="' +
            str.replace("\"","&quot;") + '">' +
            _escTag(str.substring(0,posStart)) +
            '…<\/abbr>';
    }
    document.body.removeChild(span);
    return result;
}
```
|
At a quick glance, it looks good to me. Here are some minor suggestions:
* Use a binary search to find the optimal size instead of a linear one.
* (optionally) add a mouseover so that a tooltip would give the full string.
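The binary-search suggestion can be sketched in Python with a stand-in `measure` function (in the browser this would be the hidden span's `offsetWidth`; the function and parameter names here are made up for illustration):

```python
def fit_string(s, max_width, measure):
    """Longest prefix of s (plus an ellipsis) whose measured width fits.

    measure is any function mapping a string to a pixel width; keeping it
    a parameter makes the search logic testable outside a browser.
    """
    if measure(s) <= max_width:
        return s
    lo, hi = 0, len(s)              # invariant: a prefix of length lo fits
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if measure(s[:mid] + "…") <= max_width:
            lo = mid                # fits: search longer prefixes
        else:
            hi = mid                # too wide: search shorter prefixes
    return s[:lo] + "…"

# Pretend every character is 5px wide:
print(fit_string("hello world", 30, lambda t: 5 * len(t)))   # hello…
```

Compared with the linear "shorten by diff / 5" loop in the question, this needs only O(log n) measurements and makes no assumption about character width.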
|
Truncate a string nicely to fit within a given pixel width
|
[
"",
"javascript",
"text",
"reference",
"refactoring",
""
] |
It looks like the run-time compiler doesn't support the same language as the command-line compiler so if you want to use lambda expressions, extensions methods or LINQ, well, you're stuck.
There's more detail here:
<http://metadatalabs.com/blog/>
Is this correct or is there a work-around? (Short of spawning the command-line compiler, of course.)
|
This guy's blog seems to have the answer
[CodeDomProviders](http://andersnoras.com/blogs/anoras/archive/2008/04/13/codedomproviders-and-compiler-magic.aspx)
Looks like the factory defaults the instance it returns to 2.0.
This seems like a pretty crazy technique. Somewhere Paul Graham is crying.
|
I've been using this, and it seems to work when compiling using .Net 3.5
```
CodeDomProvider provider = new CSharpCodeProvider(new Dictionary<string, string> { { "CompilerVersion", "v3.5" } });
```
|
Does the .Net run-time compiler support C# 3.0?
|
[
"",
"c#",
".net",
""
] |
What is the best way to randomize the order of a generic list in C#? I've got a finite set of 75 numbers in a list I would like to assign a random order to, in order to draw them for a lottery type application.
|
Shuffle any `(I)List` with an extension method based on the [Fisher-Yates shuffle](http://en.wikipedia.org/wiki/Fisher-Yates_shuffle):
```
private static Random rng = new Random();

public static void Shuffle<T>(this IList<T> list)
{
    int n = list.Count;
    while (n > 1) {
        n--;
        int k = rng.Next(n + 1);
        T value = list[k];
        list[k] = list[n];
        list[n] = value;
    }
}
```
Usage:
```
List<Product> products = GetProducts();
products.Shuffle();
```
The code above uses the much criticised System.Random method to select swap candidates. It's fast but not as random as it should be. If you need a better quality of randomness in your shuffles use the random number generator in System.Security.Cryptography like so:
```
using System.Security.Cryptography;
...
public static void Shuffle<T>(this IList<T> list)
{
    RNGCryptoServiceProvider provider = new RNGCryptoServiceProvider();
    int n = list.Count;
    while (n > 1)
    {
        byte[] box = new byte[1];
        do provider.GetBytes(box);
        while (!(box[0] < n * (Byte.MaxValue / n)));
        int k = (box[0] % n);
        n--;
        T value = list[k];
        list[k] = list[n];
        list[n] = value;
    }
}
```
A simple comparison is available [at this blog](https://web.archive.org/web/20150801085341/http://blog.thijssen.ch/2010/02/when-random-is-too-consistent.html) (WayBack Machine).
Edit: Since writing this answer a couple years back, many people have commented or written to me, to point out the big silly flaw in my comparison. They are of course right. There's nothing wrong with System.Random if it's used in the way it was intended. In my first example above, I instantiate the rng variable inside of the Shuffle method, which is asking for trouble if the method is going to be called repeatedly. Below is a fixed, full example based on a really useful comment received today from @weston here on SO.
Program.cs:
```
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
namespace SimpleLottery
{
class Program
{
private static void Main(string[] args)
{
var numbers = new List<int>(Enumerable.Range(1, 75));
numbers.Shuffle();
Console.WriteLine("The winning numbers are: {0}", string.Join(", ", numbers.GetRange(0, 5)));
}
}
public static class ThreadSafeRandom
{
[ThreadStatic] private static Random Local;
public static Random ThisThreadsRandom
{
get { return Local ?? (Local = new Random(unchecked(Environment.TickCount * 31 + Thread.CurrentThread.ManagedThreadId))); }
}
}
static class MyExtensions
{
public static void Shuffle<T>(this IList<T> list)
{
int n = list.Count;
while (n > 1)
{
n--;
int k = ThreadSafeRandom.ThisThreadsRandom.Next(n + 1);
T value = list[k];
list[k] = list[n];
list[n] = value;
}
}
}
}
```
|
If we only need to shuffle items in a completely random order (just to mix the items in a list), I prefer this simple yet effective code that orders items by GUID...
```
var shuffledcards = cards.OrderBy(_ => Guid.NewGuid()).ToList();
```
---
As people have pointed out in the comments, GUIDs are not guaranteed to be random, so we should be using a real random number generator instead:
```
private static Random rng = new Random();
...
var shuffledcards = cards.OrderBy(_ => rng.Next()).ToList();
```
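The same decorate-and-sort idea, sketched in Python (illustrative only): each element is assigned one random key up front and an ordinary sort runs on those keys, which is safe, unlike handing the sort a comparator that answers randomly.

```python
import random

rng = random.Random()

def shuffled(cards):
    # Pair every card with a single random key, sort on the keys,
    # then strip the keys back off.
    return [card for _, card in sorted((rng.random(), card) for card in cards)]

deck = shuffled(list(range(52)))
```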
|
Randomize a List<T>
|
[
"",
"c#",
"generic-list",
""
] |
I've been using MVC frameworks for a short while now and I really like how the concerns are separated out. I've got into a bad habit of letting the controllers do quite a bit of work. So I'm really looking for some advice.
When I first started using MVC I quite often had the controller doing manipulation on the models after database work had been done. I knew this was bad so I moved that work into the models. However, I'm not happy with that either, as I want my models to be very lean.
I've done a bit of reading and I see that people are keeping their controllers and models lean by having a service layer, which I like the look of.
I'm just trying to understand how a service layer and repository should all work together. Here are my assumptions, can you please let me know if this is a good way of working?
1. The controller can call the repository directly if no manipulation needs to be done on the data and as such a service layer does not need to get involved
2. Once any work needs to be done to data (business logic) then this should be done in the service layer and the controller will make a simple call to the service layer as and when required
3. Once a service has done its business logic it will then use the repository as necessary (if data needs to be persisted).
4. Models should be kept lean, ideally acting as nothing more than DTOs
5. Validation of data will be done within the models (using MonoRail validation attributes). I appreciate no one likes polluting their models with lots of attributes, but that is a different discussion. I like the benefit of MonoRail's validation attributes for the automatic jQuery validation in the UI.
I'm trying to turn all my code around to the single responsibility principle, hence trying to sort out my coding practices.
Thanks
|
First, there is no set of rules that's going to work in every situation. How you model your application depends a lot on the type and complexity of the project. Having said that, here are some ideas:
1. Nothing wrong with calling the repository from a controller. Just make sure the controller does not contain business logic.
2. The service takes care of (some) business logic and uses other services to do so. The repository is a type of service, there's nothing wrong with calling it from a service.
3. The model **should** contain business logic, actually you should always try to put it in the model first. If you need external data to perform that business logic (from another model or from the repository) then you should create a service.
4. Nothing wrong with validation in the models. Using attributes or not is a question of taste (if you like it then it's good). Move the validation outside of the model if it gets too complex (create an external set of rules).
Most important, do what feels right (that's usually the right answer).
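A minimal sketch of points 1-3 in Python (every class and method name here is hypothetical, chosen only to show the call flow): the controller stays thin, the service owns business logic, and the repository owns persistence.

```python
class OrderRepository:
    """Persistence only: no business rules here."""
    def __init__(self):
        self._orders = {}

    def save(self, order_id, order):
        self._orders[order_id] = order

    def get(self, order_id):
        return self._orders.get(order_id)


class OrderService:
    """Business logic lives here; persistence is delegated."""
    def __init__(self, repository):
        self._repository = repository

    def place_order(self, order_id, amount):
        if amount <= 0:  # a business rule
            raise ValueError("amount must be positive")
        order = {"id": order_id, "amount": amount, "status": "placed"}
        self._repository.save(order_id, order)
        return order


class OrderController:
    """Thin: translates requests into service/repository calls."""
    def __init__(self, service, repository):
        self._service = service
        self._repository = repository

    def create(self, order_id, amount):
        # Business logic involved, so go through the service (point 2).
        return self._service.place_order(order_id, amount)

    def show(self, order_id):
        # No manipulation needed, so hit the repository directly (point 1).
        return self._repository.get(order_id)


repo = OrderRepository()
controller = OrderController(OrderService(repo), repo)
controller.create(1, 100)
```

The point of the shape is that the validation rule lives in exactly one place, and the controller never touches storage details for anything that involves logic.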
|
[This](http://channel9.msdn.com/Events/aspConf/aspConf/ASP-NET-MVC-Solution-Best-Practices) video gives great insight into how to organize your asp.net MVC solution and addressing separation of concerns, and better testability. Hopefully it will help someone else also. I learned some good stuff from it.
|
Service Layers and Repositories
|
[
"",
"c#",
"model-view-controller",
"repository-pattern",
"castle-monorail",
""
] |
I've accidentally removed Win2K compatibility from an application by using [GetProcessID](http://msdn.microsoft.com/en-us/library/ms683215(VS.85).aspx).
I use it like this, to get the main HWND for the launched application.
```
ShellExecuteEx(&info); // Launch application
HANDLE han = info.hProcess; // Get process
cbinfo.han = han;
//Call EnumWindows to enumerate windows....
//with this as the callback
static BOOL CALLBACK enumproc(HWND hwnd, LPARAM lParam)
{
DWORD id;
GetWindowThreadProcessId(hwnd, &id);
if (id == GetProcessId(cbinfo.han))
setResult(hwnd)
...
}
```
Any ideas how the same functionality could be achieved on Win2K?
|
There is a 'sort-of-unsupported' function, ZwQueryInformationProcess(); see
<http://msdn.microsoft.com/en-us/library/ms687420.aspx>
This will give you the process id (amongst other things), given the handle. This isn't guaranteed to work with future Windows versions, so I'd suggest having a helper function that tests the OS version and then uses GetProcAddress() to call either GetProcessId() on XP and above, or ZwQueryInformationProcess() on Win2K.
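That probe-then-dispatch pattern, reduced to a language-neutral Python sketch (the two attribute names below merely stand in for GetProcessId and the Win2K fallback; nothing here is a real Win32 call):

```python
def make_get_process_id(api):
    """Prefer the documented call when the platform exports it,
    otherwise fall back to the semi-supported one."""
    preferred = getattr(api, "GetProcessId", None)               # XP and later
    fallback = getattr(api, "QueryInformationProcessPid", None)  # Win2K path
    if preferred is not None:
        return preferred
    if fallback is not None:
        return fallback
    raise RuntimeError("no way to map a handle to a process id")

# Two fake "OS API" objects to exercise both branches.
class ModernApi:
    @staticmethod
    def GetProcessId(handle):
        return ("modern", handle)

class LegacyApi:
    @staticmethod
    def QueryInformationProcessPid(handle):
        return ("legacy", handle)
```

In the real C++ code the probe would of course be GetProcAddress() against the system DLLs rather than getattr.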
|
DavidK's right. Please see the comment in the ZwQueryInformationProcess documentation:
> [ZwQueryInformationProcess may be
> altered or unavailable in future
> versions of Windows. Applications
> should use the alternate functions
> listed in this topic.]
That means that Microsoft can choose to remove this at any time in the future, thus breaking your application. I strongly suggest you follow DavidK's advice: use ZwQueryInformationProcess on OSes that don't support GetProcessId, and use GetProcessId on OSes that do support it (XP SP1 and above).
|
Alternative to GetProcessID for Windows 2000
|
[
"",
"c++",
"windows",
"winapi",
""
] |
I have been learning C++ for three months now and in that time created a number of applications for my company. I consider myself fairly comfortable with C++ / MFC and STL, however I don't just want to be an OK programmer, I want to be a good programmer. I have a few books on best practices but I was wondering if anyone could suggest reading materials that helped them and any disciplines which should be encouraged?
Thanks!
|
For C++, [Scott Meyers books](http://www.aristeia.com/books.html) are very good, and will help take you to the next level.
If you don't already have it [C++ by Bjarne Stroustrup, 3rd Edition](https://rads.stackoverflow.com/amzn/click/com/0201700735)
|
I would start with the [Pragmatic Programmer](http://www.pragprog.com/the-pragmatic-programmer), [Code Complete](http://cc2e.com/), [Refactoring](http://www.refactoring.com/) and [Design Patterns](http://www.dofactory.com/Patterns/Patterns.aspx).
|
Developing as a programmer
|
[
"",
"c++",
"mfc",
"stl",
""
] |
This is kind of a brainteaser question, since the code works perfectly fine as-is, it just irritates my aesthetic sense ever so slightly. I'm turning to Stack Overflow because my own brain is failing me right now.
Here's a snippet of code that looks up an address using the Google Maps JS API and places a marker on a map. However, sometimes the initial lookup fails, so I want to repeat the process with a different address.
```
geocoder.getLatLng(item.mapstring, function(point) {
if (!point) {
geocoder.getLatLng(item.backup_mapstring, function(point) {
if (!point) return;
map.setCenter(point, 13);
map.setZoom(7);
map.addOverlay(new GMarker(point));
})
return;
}
map.setCenter(point, 13);
map.setZoom(7);
map.addOverlay(new GMarker(point));
})
```
(The second parameter to `getLatLng` is a callback function.)
Of course you can see that the three lines that center and zoom the map and add the marker are duplicated, once in the primary callback and once in the "fallback callback" (ha ha). Can you find a way to express the whole thing without any redundancy? You earn bonus points, and my adulation, if your solution works for an arbitrary number of backup map strings.
|
The other answers are good, but here's one more option. This allows you to keep the same form you started with but uses the trick of naming your lambda function so that you can refer to it recursively:
```
mapstrings = ['mapstring1', 'mapstring2', 'mapstring3'];
geocoder.getLatLng(mapstrings.shift(), function lambda(point) {
if(point) {
// success
map.setCenter(point, 13);
map.setZoom(7);
map.addOverlay(new GMarker(point));
}
else if(mapstrings.length > 0) {
// Previous mapstring failed... try next mapstring
geocoder.getLatLng(mapstrings.shift(), lambda);
}
else {
// Take special action if no mapstring succeeds?
}
})
```
The first time the symbol "lambda" is used, it is to introduce it as a new function literal name. The second time it is used, it is a recursive reference.
Function literal naming works in Chrome, and I assume it works in most modern browsers, but I haven't tested it and I don't know about older browsers.
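The retry-over-fallbacks shape generalizes beyond JavaScript; here is the same idea in Python with a stand-in geocode function (all names invented):

```python
def lookup_first(mapstrings, geocode):
    """Try each mapstring in order; return the first non-None result."""
    strings = list(mapstrings)  # copy so the caller's list survives

    def attempt(point):
        if point is not None:
            return point                             # success
        if strings:
            return attempt(geocode(strings.pop(0)))  # try the next fallback
        return None                                  # every mapstring failed

    return attempt(geocode(strings.pop(0))) if strings else None

# A fake geocoder: only the backup string resolves.
fake_results = {"backup": (51.5, -0.1)}
point = lookup_first(["primary", "backup"], fake_results.get)
```

The inner named function plays the same role as the named lambda above: it can refer to itself to retry with the next candidate.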
|
There is an exceedingly nice method for performing recursion in language constructs that don't explicitly support recursion called a *fixed point combinator*. The most well known is the [Y-Combinator](http://en.wikipedia.org/wiki/Fixed_point_combinator).
[Here is the Y combinator for a function of one parameter in Javascript](http://www.cs.cityu.edu.hk/~hwchun/31337/blog/2005/09/y-combinator-in-javascript.php):
```
function Y(le, a) {
return function (f) {
return f(f);
}(function (f) {
return le(function (x) {
return f(f)(x);
}, a);
});
}
```
This looks a little scary but you only have to write that once. Using it is actually pretty simple. Basically, you take your original lambda of one parameter, and you turn it into a new function of two parameters - the first parameter is now the actual lambda expression that you can do the recursive call on, the second parameter is the original first parameter (`point`) that you want to use.
This is how you might use it in your example. Note that I am using `mapstrings` as a list of strings to look up and the pop function would destructively remove an element from the head.
```
geocoder.getLatLng(pop(mapstrings), Y(
function(getLatLongCallback, point)
{
if (!point)
{
if (length(mapstrings) > 0)
geocoder.getLatLng(pop(mapstrings), getLatLongCallback);
return;
}
map.setCenter(point, 13);
map.setZoom(7);
map.addOverlay(new GMarker(point));
}));
```
|
Javascript callback functions and recursion
|
[
"",
"javascript",
"recursion",
"callback",
""
] |
I have two tables A and B. I would like to delete all the records from table A that are returned in the following query:
```
SELECT A.*
FROM A , B
WHERE A.id = B.a_id AND
b.date < '2008-10-10'
```
I have tried:
```
DELETE A
WHERE id in (
SELECT a_id
FROM B
WHERE date < '2008-10-10')
```
but that only works if the inner select actually returns a value (not if the result set is empty)
**NB:** this has to work on **both SQLServer AND MySQL**
EDIT: More information
The above delete works 100% on SQLServer
When running it on MySQL I get an "error in your SQL syntax" message which points to the start of the SELECT as the problem. If I substitute the inner select with (1,2) then it works.
*@Kibbee You are right it actually makes no difference if the inner select returns rows or not.*
*@Fred I get a "not unique table.alias: a" message*
|
I think this should work (works on MySQL anyway):
```
DELETE a.* FROM A a JOIN B b ON b.a_id = a.id WHERE b.date < '2008-10-10';
```
Without aliases:
```
DELETE A.* FROM A JOIN B ON B.a_id = A.id WHERE B.date < '2008-10-10';
```
|
I'm not sure why your method is failing. If the inner query returns an empty set, then the first query should also return an empty set. I don't think @Fred's solution is right, as he seems to be joining on the wrong column.
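For what it's worth, the `IN (subquery)` form really does handle an empty inner result; on MySQL the "syntax error" is most likely just the missing `FROM` (MySQL requires `DELETE FROM A WHERE ...`). A quick check of both cases using SQLite, which follows the same semantics here:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE A (id INTEGER PRIMARY KEY);
    CREATE TABLE B (a_id INTEGER, date TEXT);
    INSERT INTO A (id) VALUES (1), (2), (3);
    INSERT INTO B (a_id, date) VALUES (1, '2008-09-01'), (2, '2008-11-01');
""")

# Subquery matches one row: only that row of A is deleted.
conn.execute("DELETE FROM A WHERE id IN "
             "(SELECT a_id FROM B WHERE date < '2008-10-10')")
remaining = [r[0] for r in conn.execute("SELECT id FROM A ORDER BY id")]

# Subquery matches nothing: the DELETE is a harmless no-op.
conn.execute("DELETE FROM A WHERE id IN "
             "(SELECT a_id FROM B WHERE date < '1900-01-01')")
still_remaining = [r[0] for r in conn.execute("SELECT id FROM A ORDER BY id")]
```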
|
How do I delete all the records in a table that have corresponding records in another table
|
[
"",
"sql",
"mysql",
"sql-server",
""
] |
I want to access a MySQL database directly from JavaScript code in an HTML page in Firefox.
Does such a library exist?
To be very clear, **CGI+Ajax will not work**
Some background: I want to create something like a GUI front end for a MySQL database (that's not what it is, but it's close enough). I'm thinking about doing this as a local HTML page using JavaScript but for that to work I would need MySQL bindings for JavaScript under Firefox. I already have a working prototype in under 100 LOC, but it requires a web server, and for reasons that are beyond this question, that won't work.
NOTE: both the database and the JavaScript code will be running locally and are not intended as a public page. In fact the HTML file will loaded as a file://// file. The only reason I'm using JavaScript is that it's the only available system for doing GUI stuff where I need it.
I'm willing to install plugins, DLL's, Windows dependent stuff or what not to make this work.
---
Edit: It looks like the answer is, "It can be done, but it's going to be painful". As one of my options is to just spew out all the data as files (ugly, and not too flexible, but it would work) I think I'm not going to pursue this.
|
JavaScript code lives inside the browser. It can make HTTP requests to the outside, but not really much more. So by design you won't be able to bind to a program running locally. If MySQL did expose an HTTP service, it might be possible, but that's not the case.
You might be able to find a plugin for Firefox that exposes a MySQL API to JavaScript, but I don't know any such plugin.
If you don't specifically need MySQL, but just a database accessible from JavaScript code, have a look at [Google Gears](http://gears.google.com/). It is a Firefox / Internet Explorer plugin that exposes an [SQLite](http://www.sqlite.org/) database and a few other goodies.
If you give more information on what you are trying to build, we might be able to give you better advice...
|
Unfortunately you need a server. Or if you know how to and are ready to be platform/browser locked, you could write a plug-in for your browser of choice (as far as I know there is no DLL for Internet Explorer so you'll need to write one yourself).
You might want to look into a small server that requires no setup. I modified Lua's [Xavante](http://keplerproject.github.com/xavante/) server, for a very similar reason to yours, so it runs with no external dependencies, thus I can install/uninstall the application with a single copy/paste.
|
Are there JavaScript bindings for MySQL?
|
[
"",
"javascript",
"mysql",
"firefox",
""
] |
Is it possible for a python script to open its own source file and overwrite it?
The idea was to have a very simple and very dirty way for a python script to download an update of itself so that the next time it is run it would be an updated version.
|
That's certainly possible. Once the script has been loaded/imported, the Python interpreter won't read the file again, except to print source lines in an exception stack trace. Any .pyc file will be regenerated on the next run, since the source file will be newer than the .pyc.
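A small end-to-end check of that claim (the file contents are made up): the snippet below writes a throwaway script that replaces its own source while running, executes it once, then runs the rewritten file.

```python
import os
import subprocess
import sys
import tempfile

# A script that overwrites its own source file as it runs.
script = (
    "import sys\n"
    "with open(sys.argv[0], 'w') as f:\n"
    "    f.write('print(\"updated\")\\n')\n"
)

fd, path = tempfile.mkstemp(suffix=".py")
with os.fdopen(fd, "w") as f:
    f.write(script)

subprocess.run([sys.executable, path], check=True)  # first run rewrites the file
second = subprocess.run([sys.executable, path],     # second run is the new version
                        capture_output=True, text=True)
os.remove(path)
```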
|
If you put most of the code into a module, you could have the main file (which is the one that is run) check the update location, and automatically download the most recent version and install that, before the module is imported.
That way you wouldn't have to have a restart of the application to run the most recent version, just reimport the module.
```
# Check version of module
import module
# Check update address
if update_version > module.version:
download(update_module)
import module
reload(module)
module.main()
```
You can use the reload() function to force a module to reload its code. Note there are some caveats to this: objects created using classes in this module will not be magically updated to the new version, and "from module import stuff" before the reimport may result in "stuff" referring to the old object "module.stuff".
[Clearly, I didn't read the previous post clearly enough - it does exactly what I suggest!]
|
Is it possible for a running python program to overwrite itself?
|
[
"",
"python",
""
] |
Is it advisable to use arrays in Javascript using tens of thousands of indexes?
I have a database which contains a table of static data, meaning that it will never change. In this case would it be advisable to convert this table to a javascript array and look the info up in there when needed? Is this bad practice?
The table in question contains roughly 40,000 key/value pair entries.
|
No.
I'd keep the table, as it's a single point of maintenance.
You'll find that doing an XMLHttpRequest to return a key/value pair from a server-side query would actually perform faster, and have a significantly smaller memory footprint, than a huge JavaScript array.
|
In my experience, moving beyond a few thousand lines in an array cached client-side in a browser led to hellish experiences. Bloated browser memory footprints, slow load times and general sluggishness were the order of the day. You should test this scenario yourself for the experience. It's worth the few minutes it'd take you.
|
Is it advisable to use arrays in Javascript using tens of thousands of indexes?
|
[
"",
"javascript",
"performance",
"arrays",
""
] |
I was wondering if there was some kind of "J" component in the Java Swing library that opens up a file browser window and allows a user to choose a file. The output would then be the absolute path of the chosen file.
Thanks in advance,
|
You can use the [JFileChooser](http://java.sun.com/j2se/1.4.2/docs/api/javax/swing/JFileChooser.html) class, check [this example](http://www.java2s.com/Code/Java/Swing-JFC/DemonstrationofFiledialogboxes.htm).
|
I ended up using this quick piece of code that did exactly what I needed:
```
final JFileChooser fc = new JFileChooser();
if (fc.showOpenDialog(this) == JFileChooser.APPROVE_OPTION) {
    try {
        // The absolute path of the chosen file:
        String path = fc.getSelectedFile().getAbsolutePath();
        // Open an input stream on it
        Scanner reader = new Scanner(fc.getSelectedFile());
    } catch (FileNotFoundException e) {
        e.printStackTrace();
    }
}
```
|
How to browse for a file in java swing library?
|
[
"",
"java",
"swing",
"file",
""
] |
I'm learning traditional Relational Databases (with [PostgreSQL](http://www.postgresql.org/)) and doing some research I've come across some new types of databases. [CouchDB](http://couchdb.apache.org/), [Drizzle](https://launchpad.net/drizzle), and [Scalaris](http://www.zib.de/CSR/Projects/scalaris/) to name a few, what is going to be the next database technologies to deal with?
|
I would say next-gen *database*, not next-gen SQL.
SQL is a language for querying and manipulating relational databases. SQL is dictated by an international standard. While the standard is revised, it seems to always work within the relational database paradigm.
Here are a few new data storage technologies that are getting attention (circa 2008 when I wrote this answer):
* [**CouchDB**](http://couchdb.apache.org/) is a non-relational database. They call it a document-oriented database.
* [**Amazon SimpleDB**](http://aws.amazon.com/simpledb/) is also a non-relational database accessed in a distributed manner through a web service. Amazon also has a distributed key-value store called **Dynamo**, which powers some of its S3 services.
* [**Dynomite**](http://github.com/cliffmoon/dynomite/tree/master) and [**Kai**](http://kai.wiki.sourceforge.net/) are open source solutions inspired by Amazon Dynamo.
* [**BigTable**](http://research.google.com/archive/bigtable.html) is a proprietary data storage solution used by Google, and implemented using their Google File System technology. Google's MapReduce framework uses BigTable.
* [**Hadoop**](http://hadoop.apache.org/core/) is an open-source technology inspired by Google's MapReduce, and serving a similar need, to distribute the work of very large scale data stores.
* [**Scalaris**](http://www.zib.de/CSR/Projects/scalaris/) is a distributed transactional key/value store. Also not relational, and does not use SQL. It's a research project from the Zuse Institute in Berlin, Germany.
* [**RDF**](http://www.w3.org/RDF/) is a standard for storing semantic data, in which data and metadata are interchangeable. It has its own query language SPARQL, which resembles SQL superficially, but is actually totally different.
* [**Vertica**](http://www.vertica.com/) is a highly scalable column-oriented analytic database designed for distributed (grid) architecture. It does claim to be relational and SQL-compliant. It can be used through Amazon's Elastic Compute Cloud.
* [**Greenplum**](https://greenplum.org/) is a high-scale data warehousing DBMS, which implements both MapReduce and SQL.
* [**XML**](http://www.w3.org/XML/) isn't a DBMS at all, it's an interchange format. But some DBMS products work with data in XML format.
* [**ODBMS**](http://www.odbms.org/), or Object Databases, are for managing complex data. There don't seem to be any dominant ODBMS products in the mainstream, perhaps because of lack of standardization. Standard SQL is gradually gaining some OO features (e.g. extensible data types and tables).
* [**Drizzle**](https://launchpad.net/drizzle) is a relational database, drawing a lot of its code from MySQL. It includes various architectural changes designed to manage data in a scalable "cloud computing" system architecture. Presumably it will continue to use standard SQL with some MySQL enhancements.
* [**Cassandra**](http://incubator.apache.org/cassandra/) is a highly scalable, eventually consistent, distributed, structured key-value store, developed at Facebook by one of the authors of Amazon Dynamo, and contributed to the Apache project.
* [**Project Voldemort**](http://project-voldemort.com/) is a non-relational, distributed, key-value storage system. It is used at LinkedIn.com
* [**Berkeley DB**](http://www.oracle.com/technology/products/berkeley-db/index.html) deserves some mention too. It's not "next-gen" because it dates back to the early 1990's. It's a popular key-value store that is easy to embed in a variety of applications. The technology is currently owned by Oracle Corp.
Also see this nice article by Richard Jones: "[Anti-RDBMS: A list of distributed key-value stores](http://www.metabrew.com/article/anti-rdbms-a-list-of-distributed-key-value-stores/)." He goes into more detail describing some of these technologies.
Relational databases have weaknesses, to be sure. People have been arguing that they don't handle all data modeling requirements since the day it was first introduced.
Year after year, researchers come up with new ways of managing data to satisfy special requirements: either requirements to handle data relationships that don't fit into the relational model, or else requirements of high-scale volume or speed that demand data processing be done on distributed collections of servers, instead of central database servers.
Even though these advanced technologies do great things to solve the specialized problem they were designed for, relational databases are still a good general-purpose solution for most business needs. SQL isn't going away.
---
I've written an article in php|Architect magazine about the innovation of non-relational databases, and data modeling in relational vs. non-relational databases. <http://www.phparch.com/magazine/2010-2/september/>
|
I'm missing **graph databases** in the answers so far. A graph or network of objects is common in programming and can be useful in databases as well. It can handle semi-structured and interconnected information in an efficient way. Among the areas where graph databases have gained a lot of interest are semantic web and bioinformatics. RDF was mentioned, and it is in fact a language that represents a graph. Here's some pointers to what's happening in the graph database area:
* [Graphs - a better database abstraction](http://whydoeseverythingsuck.com/2008/03/graphs-better-database-abstraction.html)
* [Graphd, the backend of Freebase](http://blog.freebase.com/2008/04/09/a-brief-tour-of-graphd/)
* [Neo4j open source graph database engine](http://neo4j.org/)
* [AllegroGraph RDFstore](http://agraph.franz.com/)
* [Graphdb abstraction layer for bioinformatics](http://code.google.com/p/pygr/)
* [Graphdb behind Directed Edge recommendation engine](http://blog.directededge.com/2009/02/27/on-building-a-stupidly-fast-graph-database/)
I'm part of the [Neo4j](http://neo4j.org/) project, which is written in Java but has bindings to Python, Ruby and Scala as well. Some people use it with Clojure or Groovy/Grails. There is also a [GUI tool](http://wiki.neo4j.org/content/Neoclipse) evolving.
|
The Next-gen Databases
|
[
"",
"sql",
"database",
"nosql",
"non-relational-database",
""
] |
I have got a table in MS Access 2007 with 4 fields.
* Labour Cost
* Labour Hours
* Vat
* Total
How do I multiply 'Labour Hours' by 'Labour Cost' add the 'VAT' and display the answer in 'Total'
Where would I put any formulas?, in a form or query or table ?
|
There is also the dummies (i.e. not SQL) way to do it:
First, delete your Total column from your table, and for this exercise pretend that the name of your table is "Labour".
Now create a new query and view it in design view, add all the fields from your Labour table (so you can check that everything is working), select an empty field, right click and select "Build" from the drop down list. You should now have an Expression Builder window.
Type in the name for your calculated field, e.g. labourTotal, followed by a colon ":", then select the field names you want from Tables at the bottom left of the Expression Builder window and double-click each one. Each field will appear in the expression builder after "labourTotal:". Now replace the first "«Expr»" with "*" and the second with "+". You should see this in the expression builder: "labourTotal: [Labour]![Labour Cost] * [Labour]![Labour Hours] + [Labour]![Vat]". Click OK and run the query; if all is well the labourTotal column will display the results.
|
You don't need the "Total" column in all probability.
Your queries or reports will probably resemble this:
```
SELECT [Labour Cost] * [Labour Hours] + [VAT] AS Total
FROM Labour
```
You can use the same sort of formula in controls on your forms or reports.
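As a quick sanity check of computing the total on the fly, here is the equivalent query run against SQLite from Python (column names shortened, values invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Labour (cost REAL, hours REAL, vat REAL)")
conn.execute("INSERT INTO Labour VALUES (20.0, 3.0, 9.0)")

# Total is derived in the query, never stored in the table.
row = conn.execute("SELECT cost * hours + vat AS Total FROM Labour").fetchone()
```

With cost 20, hours 3 and VAT 9 the derived Total is 69, and it stays correct automatically whenever the underlying fields change, which is why the stored column is unnecessary.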
|
MS Access multiply fields
|
[
"",
"sql",
"ms-access",
"calculated-columns",
""
] |
I have an input file that I want to sort based on a timestamp which is a substring of each record. I want to store multiple attributes of each record.
The list is currently about 1000 records. But, I want it to be able to scale up a bit just in case.
When I did it with a Linked List by searching the entire list for insertion it took about 20 seconds. Now, just filling up a vector and outputting to file is taking 4 seconds (does that sound too long)?
I would like to use merge sort or quick sort (merge sort appears to be a little easier to me). The trouble that I'm running into is that I don't see many examples of implementing these sorts using objects rather than primitive data types.
I could use either a vector or Linked list. The feedback that I've gotten from this site has been most helpful so far. I'm hoping that someone can sprinkle on the magic pixie dust to make this easier on me :)
Any links or examples on the easiest way to do this with pretty decent performance would be most appreciated. I'm getting stuck on how to implement these sorts with objects because I'm newbie at C++ :)
Here's what my new code looks like (no sorting yet):
```
#include <iostream>
#include <string>
#include <vector>

using namespace std;

class CFileInfo
{
public:
std::string m_PackLine;
std::string m_FileDateTime;
int m_NumDownloads;
};
int main()
{
CFileInfo packInfo;
vector<CFileInfo> unsortedFiles;
vector<CFileInfo>::iterator Iter;
packInfo.m_PackLine = "Sample Line 1";
packInfo.m_FileDateTime = "06/22/2008 04:34";
packInfo.m_NumDownloads = 0;
unsortedFiles.push_back(packInfo);
packInfo.m_PackLine = "Sample Line 2";
packInfo.m_FileDateTime = "12/05/2007 14:54";
packInfo.m_NumDownloads = 1;
unsortedFiles.push_back(packInfo);
for (Iter = unsortedFiles.begin(); Iter != unsortedFiles.end(); ++Iter )
{
cout << " " << (*Iter).m_PackLine;
}
}
```
|
Sorting a linked list by insertion, as you were doing, is inherently O(N^2). (A merge sort can handle a linked list in O(N log N), but it's more work than just using a random-access container.)
Vectors have random access storage. So do arrays. Sorting can be O(NlogN).
At 1000 elements you will begin to see a difference between O(N^2) and O(NlogN). At 1,000,000 elements you'll definitely notice the difference!
It is possible under very special situations to get O(N) sorting. (For example: Sorting a deck of playing cards. We can create a function(card) that maps each card to its sorted position.)
But in general, O(NlogN) is as good as it gets. So you might as well use STL's sort()!
Just add *#include <algorithm>*
---
All you'll need to add is an operator<(). Or a sort functor.
But one suggestion: For god's sake man, if you are going to sort on a date, either encode it as a long int representing seconds-since-epoch (mktime?), or at the very least use a ***"year/month/day-hour:minute:second.fraction"*** format. (And MAKE SURE everything is 2 (or 4) digits with leading zeros!) Comparing "6/22/2008-4:34" and "12/5/2007-14:54" will require parsing! Comparing "2008/06/22-04:34" with "2007/12/05-14:54" is much easier. (Though still much less efficient than comparing two integers!)
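To make that concrete, a small Python check (dates invented): a plain string sort misorders the unpadded month-first format, but agrees with chronology for the zero-padded year-first one.

```python
# month/day/year, unpadded: string order is not date order.
ambiguous = ["10/01/2008 04:34", "9/01/2007 14:54"]
# year/month/day, zero-padded: string order IS date order.
iso_like = ["2008/10/01 04:34", "2007/09/01 14:54"]

wrong = sorted(ambiguous)  # "1" < "9", so the 2008 entry sorts first
right = sorted(iso_like)   # the 2007 entry correctly sorts first
```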
---
Rich wrote:
***the other answers seem to get into syntax more which is what I'm really lacking.***
Ok. With basic a "int" type we have:
```
#define PRINT(DATA,N) for(int i=0; i<N; i++) { cout << (i>0?", ":"") << DATA[i]; } cout << endl;
int
main()
{
// Creating and Sorting a stack-based array.
int d [10] = { 1, 4, 0, 2, 8, 6, 3, 5, 9, 7 };
PRINT(d,10);
sort( d, d+10 );
PRINT(d,10);
cout << endl;
// Creating a vector.
int eData [10] = { 1, 4, 0, 2, 8, 6, 3, 5, 9, 7 };
vector<int> e;
for(int i=0; i<10; i++ )
e.push_back( eData[i] );
// Sorting a vector.
PRINT(e,10);
sort(e.begin(), e.end());
PRINT(e,10);
}
```
With your own type we have:
```
class Data
{
public:
string m_PackLine;
string m_FileDateTime;
int m_NumberDownloads;
/* Lets simplify creating Data elements down below. */
Data( const string & thePackLine = "",
const string & theDateTime = "",
int theDownloads = 0 )
: m_PackLine ( thePackLine ),
m_FileDateTime ( theDateTime ),
m_NumberDownloads ( theDownloads )
{ }
/* Can't use constructor with arrays */
void set( const string & thePackLine,
const string & theDateTime,
int theDownloads = 0 )
{
m_PackLine = thePackLine;
m_FileDateTime = theDateTime;
m_NumberDownloads = theDownloads;
}
/* Lets simplify printing out down below. */
ostream & operator<<( ostream & theOstream ) const
{
theOstream << "PackLine=\"" << m_PackLine
<< "\" fileDateTime=\"" << m_FileDateTime
<< "\" downloads=" << m_NumberDownloads;
return theOstream;
}
/*
* This is IT! All you need to add to use sort()!
* Note: Sort is just on m_FileDateTime. Everything else is superfluous.
* Note: Assumes "YEAR/MONTH/DAY HOUR:MINUTE" format.
*/
bool operator< ( const Data & theOtherData ) const
{ return m_FileDateTime < theOtherData.m_FileDateTime; }
};
/* Rest of simplifying printing out down below. */
ostream & operator<<( ostream & theOstream, const Data & theData )
{ return theData.operator<<( theOstream ); }
/* Printing out data set. */
#define PRINT(DATA,N) for(int i=0; i<N; i++) { cout << "[" << i << "] " << DATA[i] << endl; } cout << endl;
int
main()
{
// Creating a stack-based array.
Data d [10];
d[0].set( "Line 1", "2008/01/01 04:34", 1 );
d[1].set( "Line 4", "2008/01/04 04:34", 4 );
d[2].set( "Line 0", "2008/01/00 04:34", 0 );
d[3].set( "Line 2", "2008/01/02 04:34", 2 );
d[4].set( "Line 8", "2008/01/08 04:34", 8 );
d[5].set( "Line 6", "2008/01/06 04:34", 6 );
d[6].set( "Line 3", "2008/01/03 04:34", 3 );
d[7].set( "Line 5", "2008/01/05 04:34", 5 );
d[8].set( "Line 9", "2008/01/09 04:34", 9 );
d[9].set( "Line 7", "2008/01/07 04:34", 7 );
// Sorting a stack-based array.
PRINT(d,10);
sort( d, d+10 );
PRINT(d,10);
cout << endl;
// Creating a vector.
vector<Data> e;
e.push_back( Data( "Line 1", "2008/01/01 04:34", 1 ) );
e.push_back( Data( "Line 4", "2008/01/04 04:34", 4 ) );
e.push_back( Data( "Line 0", "2008/01/00 04:34", 0 ) );
e.push_back( Data( "Line 2", "2008/01/02 04:34", 2 ) );
e.push_back( Data( "Line 8", "2008/01/08 04:34", 8 ) );
e.push_back( Data( "Line 6", "2008/01/06 04:34", 6 ) );
e.push_back( Data( "Line 3", "2008/01/03 04:34", 3 ) );
e.push_back( Data( "Line 5", "2008/01/05 04:34", 5 ) );
e.push_back( Data( "Line 9", "2008/01/09 04:34", 9 ) );
e.push_back( Data( "Line 7", "2008/01/07 04:34", 7 ) );
// Sorting a vector.
PRINT(e,10);
sort(e.begin(), e.end());
PRINT(e,10);
}
```
|
I'm not sure I understood your question correctly; is your problem defining the sort functor? The STL `sort` is generally implemented as an introspective sort, which is very good for most cases.
```
struct sort_functor
{
bool operator()(const CFileInfo & a, const CFileInfo & b) const
{
// may be a little bit more subtle depending on what your strings look like
return a.m_FileDateTime < b.m_FileDateTime;
}
};
std::sort(unsortedFiles.begin(), unsortedFiles.end(), sort_functor());
```
or using boost::lambda
```
std::sort(unsortedFiles.begin(),
unsortedFiles.end(),
bind(&CFileInfo::m_FileDateTime, _1) < bind(&CFileInfo::m_FileDateTime, _2));
```
Was it the needed information?
|
C++ sorting a vector or linked list
|
[
"",
"c++",
"algorithm",
""
] |
We are deciding the naming convention for tables, columns, procedures, etc. at our development team at work. The singular-plural table naming *has already been decided*, we are using singular. We are discussing whether to use a prefix for each table name or not. I would like to read suggestions about using a prefix or not, and why.
Does it provide any security at all (at least one more obstacle for a possible intruder)? I think it's generally more comfortable to name them with a prefix, in case we are using a table's name in the code, so to not confuse them with variables, attributes, etc. But I would like to read opinions from more experienced developers.
|
I prefer prefixing tables and other database objects with a short name of the application or solution.
This helps in two potential situations which spring to mind:
1. You are less likely to get naming conflicts if you opt to use any third-party framework components which require tables in your application database (e.g. asp net membership provider).
2. If you are developing solutions for customers, they may be limited to a single database (especially if they are paying for external hosting), requiring them to store the database objects for multiple applications in a single database.
|
I find Hungarian-style DB object prefixes that indicate their types rather annoying.
I've worked in places where every table name had to start with "tbl". In every case, the naming convention ended up eventually causing much pain when someone needed to make an otherwise minor change.
For example, if your convention is that tables start with "tbl" and views start with "v", then what's the right thing to do when you decide to replace a table with something else on the backend and provide a view for compatibility, or even as the preferred interface? We ended up having views that started with "tbl".
|
Should we use prefixes in our database table naming conventions?
|
[
"",
"sql",
"database",
"naming-conventions",
""
] |
All,
this is my code
```
//declare string pointer
BSTR markup;
//initialize markup to some well formed XML <-
//declare and initialize XML Document
MSXML2::IXMLDOMDocument2Ptr pXMLDoc;
HRESULT hr;
hr = pXMLDoc.CreateInstance(__uuidof(MSXML2::DOMDocument40));
pXMLDoc->async = VARIANT_FALSE;
pXMLDoc->validateOnParse = VARIANT_TRUE;
pXMLDoc->preserveWhiteSpace = VARIANT_TRUE;
//load markup into XML document
VARIANT_BOOL vtBoolResult = pXMLDoc->loadXML(markup);
//do some changes to the XML file<-
//get back string from XML doc
markup = pXMLDoc->Getxml(); //<-- this retrieves RUBBISH
```
At this point my string is mangled (just a few Chinese characters at the start, then rubbish). It looks like an encoding issue.
I also tried the following:
```
_bstr_t superMarkup = _bstr_t(markup);
//did my stuff
superMarkup = pXMLDoc->Getxml();
markup = superMarkup;
```
but still I am getting the same result.
Even if I call GetXML() without changing anything in the xml document I still get rubbish.
At this point, if I try to assign the mangled pointer to another pointer, it will throw an error:
> Attempted to restore write protected
> memory. This is often an indication
> that other memory is corrupted.
Any suggestion?
EDIT1:
**I found out this is happening in relation to the size of the XML string.
If it happens on a given XML string and I reduce the size (keeping the same schema), it works fine. It looks like MSXML2::DOMDocument40 has a limitation on size?
In detail, it happens if I have more than 16407 characters: with one more character, GetXML retrieves rubbish; if it's <= 16407, everything works fine.**
EDIT2:
**Roddy was right - I was missing that `_bstr_t` is a class ...**
Rings any bell?
Cheers
|
Try replacing
```
BSTR Markup;
```
with
```
_bstr_t Markup;
```
BSTR is pretty much a dumb pointer, and I think that the return result of GetXML() is being converted to a temporary which is then destroyed by the time you get to see it. `_bstr_t` wraps that with some smart-pointer goodness...
Note: Your "SuperMarkup" thing did NOT do what I suggested. Again, **BSTR is just a pointer**, and doesn't "own" what it points to. bstr\_t, on the other hand does. I think your GetXML() function is returning a bstr\_t, which is then being deleted as it goes out of scope, leaving your BSTR pointing to memory that is no longer valid.
|
Ok, I think Patrick is right. I took your code and made a quick ATL EXE project named getxmltest.
I added this line after #include directives
```
#import "MSXML3.DLL"
```
I removed the post-build event which registers the component (because I don't want to expose any component from the exe, only to have all the ATL headers and libs referenced) and added the following code to `_tWinMain`:
```
extern "C" int WINAPI _tWinMain(HINSTANCE /*hInstance*/, HINSTANCE /*hPrevInstance*/,
LPTSTR /*lpCmdLine*/, int nShowCmd)
{
CoInitialize(NULL);
{
//declare string pointer
_bstr_t markup;
//initialize markup to some well formed XML <-
//declare and initialize XML Document
MSXML2::IXMLDOMDocument2Ptr pXMLDoc;
HRESULT hr = pXMLDoc.CreateInstance(__uuidof(MSXML2::DOMDocument));
pXMLDoc->async = VARIANT_FALSE;
pXMLDoc->validateOnParse = VARIANT_TRUE;
pXMLDoc->preserveWhiteSpace = VARIANT_TRUE;
//load markup into XML document
VARIANT_BOOL vtBoolResult = pXMLDoc->loadXML(L"<XML></XML>");
//do some changes to the XML file<-
//get back string from XML doc
markup = pXMLDoc->Getxml(); //<-- this retrieves RUBBISH (not anymore...)
ATLTRACE("%S", (BSTR)markup.GetBSTR());
}
CoUninitialize();
return _AtlModule.WinMain(nShowCmd);
}
```
The resulting trace lines were the following...
```
'getxmltest.exe': Loaded 'C:\Windows\winsxs\x86_microsoft.windows.common-controls_6595b64144ccf1df_6.0.6001.18000_none_5cdbaa5a083979cc\comctl32.dll'
<XML></XML>
'getxmltest.exe': Unloaded 'C:\Windows\SysWOW64\msxml3.dll'
The program '[6040] getxmltest.exe: Native' has exited with code 0 (0x0).
```
Where we can see the string we entered initially.
I didn't add any logic to the code because I thought this was enough to display the resulting XML after processing it with the MSXML engine. Obviously you may do some more testing using this code and see what happens next.
|
MSXML2::IXMLDOMDocument2Ptr->GetXML() messing up my string!
|
[
"",
"c++",
"msxml",
""
] |
I'm attempting to use Python to convert a multi-page PDF into a series of JPEGs. I can split the PDF up into individual pages easily enough with available tools, but I haven't been able to find anything that can convert PDFs to images.
PIL does not work, as it can't read PDFs. The two options I've found are using either GhostScript or ImageMagick through the shell. This is not a viable option for me, since this program needs to be cross-platform, and I can't be sure either of those programs will be available on the machines it will be installed and used on.
Are there any Python libraries out there that can do this?
|
[ImageMagick](https://www.imagemagick.org/script/index.php) has [Python bindings](http://www.imagemagick.org/download/python/).
|
Here's what's worked for me using the python ghostscript module (installed by `$ pip install ghostscript`):
```
import ghostscript
def pdf2jpeg(pdf_input_path, jpeg_output_path):
args = ["pdf2jpeg", # actual value doesn't matter
"-dNOPAUSE",
"-sDEVICE=jpeg",
"-r144",
"-sOutputFile=" + jpeg_output_path,
pdf_input_path]
ghostscript.Ghostscript(*args)
```
I also installed Ghostscript 9.18 on my computer and it probably wouldn't have worked otherwise.
|
Converting a PDF to a series of images with Python
|
[
"",
"python",
"pdf",
"imagemagick",
"jpeg",
"python-imaging-library",
""
] |
We are looking at various options in porting our persistence layer from Oracle to another database and one that we are looking at is MS SQL. However we use Oracle sequences throughout the code and because of this it seems moving will be a headache. I understand about @identity but that would be a massive overhaul of the persistence code.
Is it possible in SQL Server to create a function which could handle a sequence?
|
That depends on your current use of sequences in Oracle. Typically a sequence is read in the Insert trigger.
From your question I guess that it is the persistence layer that generates the sequence before inserting into the database (including the new pk).
In MSSQL, you can combine SQL statements with ';', so to retrieve the identity column of the newly created record, use INSERT INTO ... ; SELECT SCOPE\_IDENTITY()
Thus the command to insert a record returns a recordset with a single row and a single column containing the value of the identity column.
You can of course turn this approach around, and create Sequence tables (similar to the dual table in Oracle), something like this:
```
INSERT INTO SequenceTable (dummy) VALUES ('X');
SELECT @ID = SCOPE_IDENTITY();
INSERT INTO RealTable (ID, datacolumns) VALUES (@ID, @data1, @data2, ...)
```
|
I did this last year on a project. Basically, I just created a table with the name of the sequence, current value, & increment amount.
Then I created 4 procs:
* GetCurrentSequence( sequenceName)
* GetNextSequence( sequenceName)
* CreateSequence( sequenceName, startValue, incrementAmount)
* DeleteSequence( sequenceName)
But there is a limitation you may not appreciate: functions cannot have side effects. So you could create a function for GetCurrentSequence(...), but GetNextSequence(...) would need to be a proc, since you will probably want to increment the current sequence value. However, if it's a proc, you won't be able to use it directly in your insert statements.
So instead of
```
insert into mytable(id, ....) values( GetNextSequence('MySequence'), ....);
```
Instead you will need to break it up over 2 lines:
```
declare @newID int;
exec @newID = GetNextSequence 'MySequence';
insert into mytable(id, ....) values(@newID, ....);
```
Also, SQL Server doesn't have any mechanism that can do something like
```
MySequence.Current
```
or
```
MySequence.Next
```
Hopefully, somebody will tell me I am incorrect with the above limitations, but I'm pretty sure they are accurate.
Good luck.
|
Is it possible in SQL Server to create a function which could handle a sequence?
|
[
"",
"sql",
"sql-server",
"oracle",
"sequence",
""
] |
I'm building a GUI class for C++ and dealing a lot with pointers. An example call:
```
mainGui.activeWindow->activeWidget->init();
```
My problem here is that I want to cast the **activeWidget** pointer to another type. **activeWidget** is of type GUI\_BASE. Derived from BASE I have other classes, such as GUI\_BUTTON and GUI\_TEXTBOX. I want to cast the **activeWidget** pointer from GUI\_BASE to GUI\_TEXTBOX. I assume it would look something like this:
```
(GUI_TEXTBOX*)(mainGui.activeWindow->activeWidget)->function();
```
This isn't working, because the compiler still thinks the pointer is of type GUI\_BASE. The following bit of code does work, however:
```
GUI_TEXTBOX *textbox_pointer;
textbox_pointer = (GUI_TEXTBOX*)mainGui.activeWindow->activeWidget;
textbox_pointer->function();
```
I'm hoping my problem here is just a syntax issue. Thanks for the help :)
|
The problem is that casts have lower precedence than the . -> () [] operators. You'll have to use a C++ style cast or add extra parentheses:
```
((GUI_TEXTBOX*)mainGui.activeWindow->activeWidget)->function(); // Extra parentheses
dynamic_cast<GUI_TEXTBOX*>(mainGui.activeWindow->activeWidget)->function(); // C++ style cast
```
|
You should not be using the C style cast.
You need to use the C++ dynamic cast. This will then allow you to test that the object is actually a GUI\_TEXTBOX before you call the method on it.
```
GUI_TEXTBOX* textboxPointer = dynamic_cast<GUI_TEXTBOX*>(mainGui.activeWindow->activeWidget);
if (textboxPointer)
{
// If activeWidget is not a text box then dynamic_cast
// will return a NULL.
textboxPointer->textBoxMethod();
}
// or
dynamic_cast<GUI_TEXTBOX&>(*mainGui.activeWindow->activeWidget).textBoxMethod();
// This will throw bad_cast if the activeWidget is not a GUI_TEXTBOX
```
Note the C style cast and reinterpret\_cast<>() are not guaranteed to work in this situation (Though on most compilers they will [but this is just an aspect of the implementation and you are getting lucky]). All bets are off if the object assigned to activeWidget actually uses multiple inheritance, in this situation you will start to see strange errors with most compilers if you do not use dynamic\_cast<>().
|
What am I doing wrong with this pointer cast?
|
[
"",
"c++",
"pointers",
""
] |
I recently came across this in some code - basically someone trying to create a large object, coping when there's not enough heap to create it:
```
try {
// try to perform an operation using a huge in-memory array
byte[] massiveArray = new byte[BIG_NUMBER];
} catch (OutOfMemoryError oome) {
// perform the operation in some slower but less
// memory intensive way...
}
```
This doesn't seem right, since Sun themselves recommend that you shouldn't try to catch `Error` or its subclasses. We discussed it, and another idea that came up was explicitly checking for free heap:
```
if (Runtime.getRuntime().freeMemory() > SOME_MEMORY) {
// quick memory-intensive approach
} else {
// slower, less demanding approach
}
```
Again, this seems unsatisfactory - particularly in that picking a value for `SOME_MEMORY` is difficult to easily relate to the job in question: for some arbitrary large object, how can I estimate how much memory its instantiation might need?
Is there a better way of doing this? Is it even possible in Java, or is any idea of managing memory below the abstraction level of the language itself?
**Edit 1:** in the first example, it might actually be feasible to estimate the amount of memory a `byte[]` of a given length might occupy, but is there a more generic way that extends to arbitrary large objects?
**Edit 2:** as @erickson points out, there are ways to estimate the size of an object once it's created, but (ignoring a statistical approach based on previous object sizes) is there a way of doing so for yet-uncreated objects?
There also seems to be some debate as to whether it's reasonable to catch `OutOfMemoryError` - anyone know anything conclusive?
|
freeMemory isn't quite right. You'd also have to add maxMemory()-totalMemory(). e.g. assuming you start up the VM with max-memory=100M, the JVM may at the time of your method call only be using (from the OS) 50M. Of that, let's say 30M is actually in use by the JVM. That means you'll show 20M free (roughly, because we're only talking about the heap here), but if you try to make your larger object, it'll attempt to grab the other 50M its contract allows it to take from the OS before giving up and erroring. So you'd actually (theoretically) have 70M available.
To make this more complicated, the 30M it reports as in use in the above example includes stuff that may be eligible for garbage collection. So you may actually have more memory available, if it hits the ceiling it'll try to run a GC to free more memory.
You can try to get around this a bit by manually triggering a `System.gc()`, except that that's not such a terribly good thing to do because:
* it's not guaranteed to run immediately
* it will stop everything in its tracks while it runs
Your best bet (assuming you can't easily rewrite your algorithm to deal with smaller memory chunks, or write to a memory-mapped file, or something less memory intensive) might be to do a safe rough estimate of the memory needed and ensure that it's available before you run your function.
|
I don't believe that there's a reasonable, generic approach to this that could safely be assumed to be 100% reliable. Even the Runtime.freeMemory approach is vulnerable to the fact that you may actually have enough memory after a garbage collection, but you wouldn't know that unless you force a gc. But then there's no foolproof way to force a GC either. :)
Having said that, I suspect if you really did know approximately how much you needed, and did run a `System.gc()` beforehand, and you're running in a simple single-threaded app, you'd have a reasonably decent shot at getting it right with the `.freeMemory` call.
If any of those constraints fail, though, and you get the OOM error, you're back at square one, and therefore are probably no better off than just catching the Error subclass. While there are some risks associated with this (Sun's VM does not make a lot of guarantees about what happens after an OOM... there's some risk of internal state corruption), there are many apps for which just catching it and moving on with life will leave you with no serious harm.
A more interesting question in my mind, however, is why are there cases where you do have enough memory to do this and others where you don't? Perhaps some more analysis of the performance tradeoffs involved is the real answer?
|
Java: enough free heap to create an object?
|
[
"",
"java",
""
] |
I'm trying to "single source" a form page which can be in edit mode or view mode. For various reasons, this isn't using the ASP.Net FormView or DetailsView controls.
Since there is no way to disable a textbox without turning its contents gray (well, we could "eat" all of the keystrokes into it, but that isn't very elegant either) and disabling a dropdown list or listbox isn't what we want, our first try was to duplicate all of the form input controls with a label and use CSS to select which ones are visible depending on the mode of the form. That works, but it's ugly to edit and the code-behind has to populate both controls every time.
We could control the visibility in the code-behind to avoid filling both controls, but we still have to add them both to the form.
So I had the idea to use jQuery to swap out the input controls for `<label>`, `<div>`, or `<span>` elements. This works, to some extent, by creating the appropriate selectors and using the `replace()` jQuery method to swap out the elements dynamically.
The problem is that I not only need to copy the contents, but also the styles, attributes, and sizing of the original input controls (at this point we're only talking about textboxes - we have a different solution for dropdown lists and listboxes).
Brute force should work - "backup" all of the attributes of the input control, create the new "read only" element, then replace the input control with the new element. What I'm looking for is something simpler.
Succinctly, using jQuery, what is the best way to replace a textbox with a label and have the label have the same contents and appear in the same location and style as the textbox?
Here is what I have so far:
```
$(":text").each( function() {
var oldClass = $(this).attr("class");
var oldId = $(this).attr("id");
var oldHeight = $(this).outerHeight();
var oldWidth = $(this).outerWidth();
var oldStyle = $(this).attr("style");
$(this).replaceWith("<div id='" + oldId + "'>" + $(this).val() + "</div>");
$("div#" + oldId).attr("class", oldClass);
$("div#" + oldId).attr("style", oldStyle);
$("div#" + oldId).width(oldWidth);
$("div#" + oldId).height(oldHeight);
$("div#" + oldId).css("display", "inline-block");
});
```
|
This may not suit your needs, but it's a possibility.
`<input>` and `<textarea>` tags support the read-only property. The behavior of read-only fields is slightly different than disabled. Here's what the [HTML 4.01 Recommendation](http://www.w3.org/TR/html401/) says:
> When set, the readonly attribute has the following effects on an element:
>
> Read-only elements receive focus but cannot be modified by the user.
> Read-only elements are included in tabbing navigation.
> Read-only elements may be successful. ("Successful" means it will be submitted as a parameter.)
Another key difference is that elements with this attribute can be styled however you like. (You could remove or change the borders and background for instance.) So instead of having to create new elements and copy attributes, you could merely add or remove the read-only attribute.
You could then create a style for these fields `"input[readonly] {}"`. Noting of course that popular versions of IE ignore the attribute selector in CSS. (So maybe just define a class that you add and remove.)
|
Why not use an edit in place plugin like [Jeditable](http://www.appelsiini.net/projects/jeditable). This way you can generate your view mode and have each field editable at the click of a button.
|
How can I switch a text box for a <label>, <div>, or <span> using jQuery?
|
[
"",
"asp.net",
"javascript",
"jquery",
"html",
""
] |
I'm attempting to use the following code to serialize an anonymous type to JSON:
```
var serializer = new DataContractJsonSerializer(thing.GetType());
var ms = new MemoryStream();
serializer.WriteObject(ms, thing);
var json = Encoding.Default.GetString(ms.ToArray());
```
However, I get the following exception when this is executed:
> Type
> '<>f\_\_AnonymousType1`3[System.Int32,System.Int32,System.Object[]]'
> cannot be serialized. Consider marking
> it with the DataContractAttribute
> attribute, and marking all of its
> members you want serialized with the
> DataMemberAttribute attribute. See
> the Microsoft .NET Framework
> documentation for other supported
> types.
I can't apply attributes to an anonymous type (as far as I know). Is there another way to do this serialization or am I missing something?
|
Try the JavaScriptSerializer instead of the DataContractJsonSerializer
```
JavaScriptSerializer serializer = new JavaScriptSerializer();
var output = serializer.Serialize(your_anon_object);
```
|
As others have mentioned, [Newtonsoft JSON.NET](http://james.newtonking.com/projects/json-net.aspx) is a good option. Here is a specific example for simple JSON serialization:
```
return JsonConvert.SerializeObject(
new
{
DataElement1,
SomethingElse
});
```
I have found it to be a very flexible, versatile library.
|
How do I serialize a C# anonymous type to a JSON string?
|
[
"",
"c#",
"json",
"anonymous-types",
"datacontractjsonserializer",
"json-serialization",
""
] |
I am currently running the following code based on Chapter 12.5 of the Python Cookbook:
```
from xml.parsers import expat
class Element(object):
def __init__(self, name, attributes):
self.name = name
self.attributes = attributes
self.cdata = ''
self.children = []
def addChild(self, element):
self.children.append(element)
def getAttribute(self,key):
return self.attributes.get(key)
def getData(self):
return self.cdata
def getElements(self, name=''):
if name:
return [c for c in self.children if c.name == name]
else:
return list(self.children)
class Xml2Obj(object):
def __init__(self):
self.root = None
self.nodeStack = []
def StartElement(self, name, attributes):
element = Element(name.encode(), attributes)
if self.nodeStack:
parent = self.nodeStack[-1]
parent.addChild(element)
else:
self.root = element
self.nodeStack.append(element)
def EndElement(self, name):
self.nodeStack.pop()
def CharacterData(self,data):
if data.strip():
data = data.encode()
element = self.nodeStack[-1]
element.cdata += data
def Parse(self, filename):
Parser = expat.ParserCreate()
Parser.StartElementHandler = self.StartElement
Parser.EndElementHandler = self.EndElement
Parser.CharacterDataHandler = self.CharacterData
ParserStatus = Parser.Parse(open(filename).read(),1)
return self.root
```
I am working with XML documents of about 1 GB in size. Does anyone know a faster way to parse these?
|
It looks to me as if you do not need any DOM capabilities in your program. I would second the use of the (c)ElementTree library. If you use the iterparse function of the cElementTree module, you can work your way through the XML and deal with the events as they occur.
Note, however, Fredrik's advice on using the cElementTree [iterparse function](https://web.archive.org/web/20201111223627/https://effbot.org/zone/element-iterparse.htm):
> to parse large files, you can get rid of elements as soon as you’ve processed them:
```
for event, elem in iterparse(source):
if elem.tag == "record":
... process record elements ...
elem.clear()
```
> The above pattern has one drawback; it does not clear the root element, so you will end up with a single element with lots of empty child elements. If your files are huge, rather than just large, this might be a problem. To work around this, you need to get your hands on the root element. The easiest way to do this is to enable start events, and save a reference to the first element in a variable:
```
# get an iterable
context = iterparse(source, events=("start", "end"))
# turn it into an iterator
context = iter(context)
# get the root element
event, root = context.next()
for event, elem in context:
if event == "end" and elem.tag == "record":
... process record elements ...
root.clear()
```
The [lxml.iterparse()](https://lxml.de/FAQ.html#why-can-t-i-just-delete-parents-or-clear-the-root-node-in-iterparse) does not allow this.
The previous does not work on Python 3.7; consider the following way to get the first element.
```
import xml.etree.ElementTree as ET
# Get an iterable.
context = ET.iterparse(source, events=("start", "end"))
for index, (event, elem) in enumerate(context):
# Get the root element.
if index == 0:
root = elem
if event == "end" and elem.tag == "record":
# ... process record elements ...
root.clear()
```
|
Have you tried the `cElementTree` module?
`cElementTree` is included with Python 2.5 and later, as xml.etree.cElementTree. Refer to the [benchmarks](http://effbot.org/zone/celementtree.htm).
Note that since Python 3.3 `cElementTree` is used as the default implementation so this change is not needed with a Python version 3.3+.
*removed dead ImageShack link*
|
What is the fastest way to parse large XML docs in Python?
|
[
"",
"python",
"xml",
"performance",
"parsing",
""
] |
I am developing a Java desktop application and would like to have an external configuration.xml.
I am developing the application using Netbeans and tried to add the configuration.xml file in the dist directory so that it resides in the application work folder, but when Netbeans executes its clean operation it deletes the dist directory.
Where should I put this configuration.xml file so that it will not be deleted and will exist in the application start-up directory?
|
You can add this to your build.xml :
```
<target name="-post-jar">
<copy todir="${dist.jar.dir}">
<fileset dir="resources" includes="**"/>
</copy>
</target>
```
You can now put your configuration.xml file in the folder 'resources' (that you need to create) in your project and all files in it will be copied to the dist folder during the build process.
|
I was able to get this to work, but I couldn't get -post-jar to trigger without explicitly entering it as a dependency in the main build config. This is in Netbeans 7.0.1 for a Rich Client project.
Instead, in build.xml for the Netbeans module where I want to have external resource files (mainly .txt files that the user could potentially edit later), I entered the following:
```
<target name="netbeans-extra">
<echo>Copying resources files to build cluster directory...</echo>
<mkdir dir="${cluster}/resources"/>
<copy todir="${cluster}/resources">
<fileset dir="resources" includes="**"/>
</copy>
</target>
```
Then I create a new directory in my module's top directory (right alongside src, release, build) called 'resources' and place my .txt files in there.
When you do a build on this module, netbeans-extra will get called as a dependency and carry out the creation of a 'resources' folder in the main project build/cluster directory, followed by copying the contents of the project resources directory over there.
Ultimately, when you build a distribution for your project, you'll find the resource directory placed right next to your projects modules directory, making for a nice and neat side by side arrangement.
|
Netbeans and external configuration files
|
[
"",
"java",
"netbeans",
"external",
"resources",
""
] |
A while ago, I had a discussion with a colleague about how to insert values in STL [maps](http://www.sgi.com/tech/stl/Map.html). I preferred `map[key] = value;` because it feels natural and is clear to read whereas he preferred `map.insert(std::make_pair(key, value))`.
I just asked him and neither of us can remember the reason why insert is better, but I am sure it was not just a style preference; rather, there was a technical reason such as efficiency. The [SGI STL reference](http://www.sgi.com/tech/stl/Map.html) simply says: "Strictly speaking, this member function is unnecessary: it exists only for convenience."
Can anybody tell me that reason, or am I just dreaming that there is one?
|
When you write
```
map[key] = value;
```
there's no way to tell if you **replaced** the `value` for `key`, or if you **created** a new `key` with `value`.
[`map::insert()`](http://en.cppreference.com/w/cpp/container/map/insert) will only create:
```
using std::cout; using std::endl;
typedef std::map<int, std::string> MyMap;
MyMap map;
// ...
std::pair<MyMap::iterator, bool> res = map.insert(MyMap::value_type(key,value));
if ( ! res.second ) {
cout << "key " << key << " already exists "
<< " with value " << (res.first)->second << endl;
} else {
cout << "created key " << key << " with value " << value << endl;
}
```
For most of my apps, I usually don't care if I'm creating or replacing, so I use the easier to read `map[key] = value`.
|
The two have different semantics when it comes to the key already existing in the map. So they aren't really directly comparable.
But the operator[] version requires default constructing the value, and then assigning, so if this is more expensive than copy construction, then it will be more expensive. Sometimes default construction doesn't make sense, and then it would be impossible to use the operator[] version.
|
In STL maps, is it better to use map::insert than []?
|
[
"",
"c++",
"dictionary",
"stl",
"insert",
"stdmap",
""
] |
What are the proper uses of:
* [`static_cast`](https://en.cppreference.com/w/cpp/language/static_cast)
* [`dynamic_cast`](https://en.cppreference.com/w/cpp/language/dynamic_cast)
* [`const_cast`](https://en.cppreference.com/w/cpp/language/const_cast)
* [`reinterpret_cast`](https://en.cppreference.com/w/cpp/language/reinterpret_cast)
* [`(type)value`](https://en.cppreference.com/w/cpp/language/explicit_cast) (C-style cast)
* [`type(value)`](https://en.cppreference.com/w/cpp/language/explicit_cast) (function-style cast)
How does one decide which to use in which specific cases?
|
## `static_cast`
`static_cast` is the first cast you should attempt to use. It does things like implicit conversions between types (such as `int` to `float`, or pointer to `void*`), and it can also call explicit conversion functions (or implicit ones). In many cases, explicitly stating `static_cast` isn't necessary, but it's important to note that the `T(something)` syntax is equivalent to `(T)something` and should be avoided (more on that later). A `T(something, something_else)` is safe, however, and guaranteed to call the constructor.
`static_cast` can also cast through inheritance hierarchies. It is unnecessary when casting upwards (towards a base class), but when casting downwards it can be used as long as it doesn't cast through `virtual` inheritance. It does not do checking, however, and it is undefined behavior to `static_cast` down a hierarchy to a type that isn't actually the type of the object.
## `const_cast`
`const_cast` can be used to remove or add `const` to a variable; no other C++ cast is capable of removing it (not even `reinterpret_cast`). It is important to note that modifying a formerly `const` value is only undefined if the original variable is `const`; if you use it to take the `const` off a reference to something that wasn't declared with `const`, it is safe. This can be useful when overloading member functions based on `const`, for instance. It can also be used to add `const` to an object, such as to call a member function overload.
`const_cast` also works similarly on `volatile`, though that's less common.
## `dynamic_cast`
`dynamic_cast` is exclusively used for handling polymorphism. You can cast a pointer or reference to any polymorphic type to any other class type (a polymorphic type has at least one virtual function, declared or inherited). You can use it for more than just casting downwards – you can cast sideways or even up another chain. The `dynamic_cast` will seek out the desired object and return it if possible. If it can't, it will return `nullptr` in the case of a pointer, or throw `std::bad_cast` in the case of a reference.
`dynamic_cast` has some limitations, though. It doesn't work if there are multiple objects of the same type in the inheritance hierarchy (the so-called 'dreaded diamond') and you aren't using `virtual` inheritance. It also can only go through public inheritance - it will always fail to travel through `protected` or `private` inheritance. This is rarely an issue, however, as such forms of inheritance are rare.
## `reinterpret_cast`
`reinterpret_cast` is the most dangerous cast, and should be used very sparingly. It turns one type directly into another — such as casting the value from one pointer to another, or storing a pointer in an `int`, or all sorts of other nasty things. Largely, the only guarantee you get with `reinterpret_cast` is that normally if you cast the result back to the original type, you will get the exact same value (but ***not*** if the intermediate type is smaller than the original type). There are a number of conversions that **`reinterpret_cast`** cannot do, too. It's often abused for particularly weird conversions and bit manipulations, like turning a raw data stream into actual data, or storing data in the low bits of a pointer to aligned data. For those cases, see `std::bit_cast`.
## C-Style Cast and Function-Style Cast
C-style cast and function-style cast are casts using `(type)object` or `type(object)`, respectively, and are functionally equivalent. They are defined as the first of the following which succeeds:
* `const_cast`
* `static_cast` (though ignoring access restrictions)
* `static_cast` (see above), then `const_cast`
* `reinterpret_cast`
* `reinterpret_cast`, then `const_cast`
It can therefore be used as a replacement for other casts in some instances, but it can be extremely dangerous because it can silently devolve into a `reinterpret_cast`. The named casts should be preferred when explicit casting is needed, unless you are sure `static_cast` will succeed or `reinterpret_cast` will fail. Even then, consider the longer, more explicit option.
C-style casts also ignore access control when performing a `static_cast`, which means that they have the ability to perform an operation that no other cast can. This is mostly a kludge, though, and in my mind is just another reason to avoid C-style casts.
## `std::bit_cast` [C++20]
`std::bit_cast` copies the bits and bytes of the source object (its representation) directly into a new object of the target type. It's a standards-compliant way to do type punning. If you find yourself writing `*reinterpret_cast<SomeType*>(&x)`, you probably should use `std::bit_cast<SomeType>(x)` instead.
`std::bit_cast` is declared in `<bit>`. The objects must be the same size and be trivially copyable. If you can't yet use C++20, use `memcpy` to copy the source value into a variable of the desired type.
|
* Use `dynamic_cast` for converting pointers/references within an inheritance hierarchy.
* Use `static_cast` for ordinary type conversions.
* Use `reinterpret_cast` for low-level reinterpreting of bit patterns. Use with extreme caution.
* Use `const_cast` for casting away `const/volatile`. Avoid this unless you are stuck using a const-incorrect API.
|
When should static_cast, dynamic_cast, const_cast, and reinterpret_cast be used?
|
[
"",
"c++",
"pointers",
"casting",
"c++-faq",
""
] |
I have a List containing several keywords.
I foreach through them, building my LINQ query like so (boiled down to remove the code noise):
```
List<string> keys = FillKeys();
foreach (string key in keys){
q = q.Where(c => c.Company.Name.Contains(key));
}
```
When I now make my keys contain two keys that return results separately but can never occur together (every item in q is either "xyz" or "123", never "123" AND "xyz"), I still get results. The result set is then the same as for the last key alone.
I had a look at the LINQ query, and it appears to create the correct SQL, but it fills both @p1 and @p2 with the same (last iterated) value.
What am I doing wrong?
|
You're reusing the same variable (`key`) in your lambda expression.
See my article on [anonymous methods](http://pobox.com/~skeet/csharp/csharp2/delegates.html#anonymous.methods) for more details, and there are a number of related SO questions too:
* [LINQ to SQL bug (or very strange feature)...](https://stackoverflow.com/questions/293216/linq-to-sql-bug-or-very-strange-feature-when-using-iqueryable-foreach-and-multi)
* [Local variables with delegates](https://stackoverflow.com/questions/148669/local-variables-with-delegates)
* [C# captured variable in a loop](https://stackoverflow.com/questions/271440/c-captured-variable-in-loop)
* [C# gotcha answer](https://stackoverflow.com/questions/241134/what-is-the-worst-cnet-gotcha#241180)
* [Building a LINQ query programmatically without local variables tricking me](https://stackoverflow.com/questions/190227/building-a-linq-query-programatically-without-local-variables-tricking-me)
The simple fix is to copy the variable first:
```
List<string> keys = FillKeys();
foreach (string key in keys){
string copy = key;
q = q.Where(c => c.Company.Name.Contains(copy));
}
```
|
Possibly a captured variable issue; try adding:
```
List<string> keys = FillKeys();
foreach (string key in keys){
string tmp = key;
q = q.Where(c => c.Company.Name.Contains(tmp));
}
```
|
Linq query built in foreach loop always takes parameter value from last iteration
|
[
"",
"c#",
"linq",
""
] |
I am writing an iframe-based Facebook app. Now I want to use the same HTML page to render the normal website as well as the canvas page within Facebook. I want to know if I can determine whether the page has been loaded inside the iframe or directly in the browser?
|
Note: Browsers can block access to `window.top` due to [same origin policy](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Same_origin_policy_for_JavaScript). IE bugs also take place.
Here's the working code:
```
function inIframe () {
try {
return window.self !== window.top;
} catch (e) {
return true;
}
}
```
`top` and `self` are both `window` objects (along with `parent`), so you're seeing if your window is the top window.
|
When in an iframe on the same origin as the parent, the [`window.frameElement`](https://devdocs.io/dom/window/frameelement) method returns the element (e.g. `iframe` or `object`) in which the window is embedded. Otherwise, if browsing in a top-level context, or if the parent and the child frame have different origins, it will evaluate to `null`.
```
window.frameElement
? 'embedded in iframe or object'
: 'not embedded or cross-origin'
```
This is an [HTML Standard](https://html.spec.whatwg.org/multipage/browsers.html#dom-frameelement) with basic support in all modern browsers.
|
How to identify if a webpage is being loaded inside an iframe or directly into the browser window?
|
[
"",
"javascript",
"facebook",
"iframe",
""
] |
I'm working with some example Java code for making MD5 hashes. One part converts the results from bytes to a string of hex digits:
```
byte messageDigest[] = algorithm.digest();
StringBuffer hexString = new StringBuffer();
for (int i=0;i<messageDigest.length;i++) {
hexString.append(Integer.toHexString(0xFF & messageDigest[i]));
}
```
However, it doesn't quite work since toHexString apparently drops off leading zeros. So, what's the simplest way to go from byte array to hex string that maintains the leading zeros?
|
A simple approach would be to check how many digits are output by `Integer.toHexString()` and add a leading zero to each byte if needed. Something like this:
```
public static String toHexString(byte[] bytes) {
StringBuilder hexString = new StringBuilder();
for (int i = 0; i < bytes.length; i++) {
String hex = Integer.toHexString(0xFF & bytes[i]);
if (hex.length() == 1) {
hexString.append('0');
}
hexString.append(hex);
}
return hexString.toString();
}
```
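If you'd rather let the formatter handle the padding, `String.format("%02x", ...)` gives the same result; a minimal sketch:

```java
public class HexFormat {
    // Same output as the manual length check: %02x zero-pads each byte to two digits
    public static String toHexString(byte[] bytes) {
        StringBuilder sb = new StringBuilder(bytes.length * 2);
        for (byte b : bytes) {
            sb.append(String.format("%02x", b & 0xFF));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(toHexString(new byte[] { 0x0A, (byte) 0xFF, 0x00 }));
        // prints "0aff00"
    }
}
```

It is more concise, though the formatter call is slower than the length check if you hash a lot of data.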
|
Check out [Hex.encodeHexString](https://commons.apache.org/proper/commons-codec/apidocs/org/apache/commons/codec/binary/Hex.html#encodeHexString-byte:A-) from [Apache Commons Codec](https://commons.apache.org/proper/commons-codec/).
```
import org.apache.commons.codec.binary.Hex;
String hex = Hex.encodeHexString(bytes);
```
|
In Java, how do I convert a byte array to a string of hex digits while keeping leading zeros?
|
[
"",
"java",
"md5",
"hex",
""
] |
In relation to [another question](https://stackoverflow.com/questions/283431/why-would-an-command-not-recognized-error-occur-only-when-a-window-is-populated), how do you account for paths that may change? For example, if a program is calling a file in the same directory as the program, you can simply use the relative path "./foo.py" on \*nix. However, apparently Windows likes to have the path hard-coded, e.g. "C:\Python\_project\foo.py".
What happens if the path changes? For example, the file may not be on the C: drive but on a thumb drive or external drive that can change the drive letter. The file may still be in the same directory as the program but it won't match the drive letter in the code.
I want the program to be cross-platform, but I expect I may have to use **os.name** or something to determine which path code block to use.
|
Simple answer: You work out the absolute path based on the environment.
What you really need is a few pointers. There are various bits of runtime and environment information that you can glean from various places in the standard library (and they certainly help me when I want to deploy an application on windows).
So, first some general things:
1. `os.path` - standard library module with lots of cross-platform path manipulation. Your best friend. "Follow the os.path" I once read in a book.
2. `__file__` - The location of the current module.
3. `sys.executable` - The location of the running Python.
Now you can fairly much glean anything you want from these three sources. The functions from os.path will help you get around the tree:
* `os.path.join('path1', 'path2')` - join path segments in a cross-platform way
* `os.path.expanduser('a_path')` - find the path `a_path` in the user's home directory
* `os.path.abspath('a_path')` - convert a relative path to an absolute path
* `os.path.dirname('a_path')` - get the directory that a path is in
* many many more...
So combining this, for example:
```
# script1.py
# Get the path to the script2.py in the same directory
import os
this_script_path = os.path.abspath(__file__)
this_dir_path = os.path.dirname(this_script_path)
script2_path = os.path.join(this_dir_path, 'script2.py')
print(script2_path)
```
And running it:
```
ali@work:~/tmp$ python script1.py
/home/ali/tmp/script2.py
```
Now for your specific case, it seems you are slightly confused between the concept of a "working directory" and the "directory that a script is in". These can be the same, but they can also be different. For example the "working directory" can be changed, and so functions that use it might be able to find what they are looking for sometimes but not others. `subprocess.Popen` is an example of this.
If you always pass paths absolutely, you will never get into working directory issues.
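To tie the pieces together, a minimal sketch (the file name `config.ini` is hypothetical) that anchors a path to the script's own directory, so the working directory no longer matters:

```python
import os

# Resolve a data file relative to this script, not the working directory.
script_path = os.path.abspath(__file__)            # absolute path of this script
script_dir = os.path.dirname(script_path)          # directory containing it
config_path = os.path.join(script_dir, 'config.ini')
print(config_path)
```

However the program is launched, `config_path` always points next to the script itself.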
|
If your file is always in the same directory as your program then:
```
import imp
import os
import sys

def _isInProductionMode():
""" returns True when running the exe,
False when running from a script, ie development mode.
"""
return (hasattr(sys, "frozen") or # new py2exe
hasattr(sys, "importers") # old py2exe
or imp.is_frozen("__main__")) #tools/freeze
def _getAppDir():
""" returns the directory name of the script or the directory
name of the exe
"""
if _isInProductionMode():
return os.path.dirname(sys.executable)
return os.path.dirname(__file__)
```
should work. Also, I've used py2exe for my own application, and haven't tested it with other exe conversion apps.
|
Accounting for a changing path
|
[
"",
"python",
"file",
"path",
""
] |
I have the following styles:
```
a.button {
background-color: orange;
margin: .2cm;
padding: .2cm;
color: black;
font-family: sans-serif;
text-decoration: none;
font-weight: bold;
border: solid #000000;
}
a.buttonMouseover {
background-color: darkGoldenRod;
margin: .2cm;
padding: .2cm;
color: black;
font-family: sans-serif;
text-decoration: none;
font-weight: bold;
border: solid #000000;
}
```
And the following javascript code (my first ever btw):
```
function backgroundChangeIn(element){
  if (element.className === "button") { element.className = "buttonMouseover"; }
}
function backgroundChangeOut(element){
  if (element.className === "buttonMouseover") { element.className = "button"; }
}
```
And, the following element that should change the background on mouseover:
```
<a class="button" href="" onmouseover="backgroundChangeIn(this)" onmouseout="backgroundChangeOut(this)">A Button</a>
```
It is working for me so far. But I was wondering if there was a better way.
(Sorry about all the code)
|
Depending on your target browsers, you could use the `hover` pseudo tag.
```
a.button {
background-color: orange;
margin: .2cm;
padding: .2cm;
color: black;
font-family: sans-serif;
text-decoration: none;
font-weight: bold;
border: solid #000000;
}
a.button:hover {
background-color: darkGoldenRod;
}
```
Here's a bit of documentation on it at [w3schools](http://www.w3schools.com/css/pr_pseudo_hover.asp). It looks like it's well supported on all remotely modern browsers.
Note that both the normal and the hover styling rules are applied, hover taking precedence. So you just need to put what changes in the hover rule.
|
sblundy has the basics right. To add to that, all modern browsers will let you use the `:hover` pseudo-class on any element; IE6, however, only recognises it on `<a>`.
In IE6 you would need some JavaScript to add a class name on hover. I like using jQuery, and the way I would do it is as follows:
```
$(function(){
$('.hoverable').hover(function(){
$(this).addClass('hover');
},
function(){
$(this).removeClass('hover');
})
})
```
which would give all elements with the class 'hoverable' a class of hover when they are hovered over.
|
What is the simplest way to implement mouseover background color change for HTML elements?
|
[
"",
"javascript",
"html",
""
] |
I'm thinking about how to arrange a deployed Python application, which will have:
1. Executable script located in /usr/bin/ which will provide a CLI to functionality implemented in
2. A library installed to wherever the current site-packages directory is.
Now, currently, I have the following directory structure in my sources:
```
foo.py
foo/
__init__.py
...
```
which I guess is not the best way to do things. During development, everything works as expected; however, when deployed, the "from foo import FooObject" code in foo.py seemingly attempts to import foo.py itself, which is not the behaviour I'm looking for.
So the question is what is the standard practice of orchestrating situations like this? One of the things I could think of is, when installing, rename foo.py to just foo, which stops it from importing itself, but that seems rather awkward...
Another part of the problem, I suppose, is that it's a naming challenge. Perhaps call the executable script foo-bin.py?
|
[This article](http://jcalderone.livejournal.com/39794.html) is pretty good, and shows you a good way to do it. The second item from the *Do* list answers your question.
*shameless copy paste:*
> # Filesystem structure of a Python project
>
> *by [Jp Calderone](http://jcalderone.livejournal.com/profile)*
>
> **Do:**
>
> * name the directory something related to your project. For example, if your
> project is named "Twisted", name the
> top-level directory for its source
> files `Twisted`. When you do releases,
> you should include a version number
> suffix: `Twisted-2.5`.
> * create a directory `Twisted/bin` and put your executables there, if you
> have any. Don't give them a `.py`
> extension, even if they are Python
> source files. Don't put any code in
> them except an import of and call to a
> main function defined somewhere else
> in your projects.
> * If your project is expressable as a single Python source file, then put it
> into the directory and name it
> something related to your project. For
> example, `Twisted/twisted.py`. If you
> need multiple source files, create a
> package instead (`Twisted/twisted/`,
> with an empty
> `Twisted/twisted/__init__.py`) and place
> your source files in it. For example,
> `Twisted/twisted/internet.py`.
> * put your unit tests in a sub-package of your package (note - this means
> that the single Python source file
> option above was a trick - you always
> need at least one other file for your
> unit tests). For example,
> `Twisted/twisted/test/`. Of course, make
> it a package with
> `Twisted/twisted/test/__init__.py`.
> Place tests in files like
> `Twisted/twisted/test/test_internet.py`.
> * add `Twisted/README` and `Twisted/setup.py` to explain and
> install your software, respectively,
> if you're feeling nice.
>
> **Don't:**
>
> * put your source in a directory called `src` or `lib`. This makes it hard
> to run without installing.
> * put your tests outside of your Python package. This makes it hard to
> run the tests against an installed
> version.
> * create a package that only has a `__init__.py` and then put all your code into `__init__.py`. Just make a module
> instead of a package, it's simpler.
> * try to come up with magical hacks to make Python able to import your module
> or package without having the user add
> the directory containing it to their
> import path (either via `PYTHONPATH` or
> some other mechanism). You will not
> correctly handle all cases and users
> will get angry at you when your
> software doesn't work in their
> environment.
|
[Distutils](http://www.python.org/doc/2.5.2/dist/dist.html) supports installing modules, packages, and scripts. If you create a distutils `setup.py` which refers to `foo` as a package and `foo.py` as a script, then `foo.py` should get installed to `/usr/local/bin` (or whatever the appropriate script install path is on the target OS), and the `foo` package should get installed to the `site-packages` directory.
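A minimal sketch of such a `setup.py`, using the era-appropriate distutils (the names match the question's `foo`; a modern setuptools `setup()` takes the same arguments):

```python
from distutils.core import setup

setup(
    name='foo',
    version='0.1',
    packages=['foo'],    # the library: foo/__init__.py and friends -> site-packages
    scripts=['foo.py'],  # the CLI entry point -> the platform's script directory
)
```

Because the script and the package install to different locations, the script's `from foo import FooObject` resolves to the installed package rather than to the script itself.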
|
Deploying a python application with shared package
|
[
"",
"python",
"naming",
"package",
"conventions",
"deploying",
""
] |
* How do I utilize a ?: operator in the SELECT clause of a LINQ query? If this can't be done, how can I emulate one? The goal is to get a CASE block in my select clause. As you might suspect, I'm getting an error: *Invalid anonymous type member declarator. Anonymous type members must be declared with a member assignment, simple name or member access.*
* Is this the proper way, or a sufficient way, to say "from a inner join i on a.ipid=i.id inner join u on i.uid=u.id"? If not, please provide one. Thanks.
```
var query =
from a in db.tblActivities
from i in db.tblIPs
from u in db.tblUsers
select new {
u.UserName == null
? i.Address
: u.UserName,
a.Request,
a.DateTime };
```
|
When creating an anonymous type (what you're doing with the "new" without specifying a type) you have to specify the member name for each property. From your example, it would look something like this: (also fixed your joins)
```
var query = from a in db.tblActivities
join i in db.tblIPs on a.ipid equals i.id
join u in db.tblUsers on i.uid equals u.id
select new {
UserName = (u.UserName ?? i.Address),
Request = a.Request,
Date = a.DateTime
};
```
You could probably do the UserName your way, too:
```
UserName = (u.UserName == null) ? i.Address : u.UserName,
```
but the ?? operator is more concise. It's similar to "isnull" in SQL.
|
You have to use the join keyword, and define the relationship between the entities in order to make a proper inner join.
[Here](http://msdn.microsoft.com/en-us/library/bb397941.aspx) you can find some examples of that. I also highly recommend [LinqPad](http://www.linqpad.net/); it's a really valuable tool for testing your queries, and it's very good for learning, with 200+ examples.
|
?: Operator in LINQ Query
|
[
"",
"c#",
"linq",
"linq-to-sql",
""
] |
I am looking to do some tinkering with OpenGL and Python, and haven't been able to find good reasons for using PyOpenGL versus pyglet.
Which would you recommend and why?
|
As Tony said, this is really going to depend on your goals. If you're "tinkering" to try to learn about OpenGL or 3D rendering in general, then I would dispense with all pleasantries and start working with PyOpenGL, which is as close as you're going to get to "raw" 3D programming using Python.
On the other hand, if you're "tinkering" by way of mocking up a game or multimedia application, or trying to learn about programming practices in general, then Pyglet will save you lots of up-front development time by providing hooks for input events, sounds, text/billboarding, etc. Often, this up-front investment is what prevents people from completing their projects, so having it done for you is not something to be ignored. (It is also very Pythonic to avoid reinventing the wheel.)
If you are looking to do any sort of heavy-duty lifting (which normally falls outside my definition of "tinkering", but maybe not if you're tinkering with 3D engine design) then you might want to take a look at Python-Ogre, which wraps the *very* full-featured and robust [OGRE 3D](http://www.ogre3d.org/) graphics engine.
|
Start with pyglet. It contains the best high-level API, which contains all you need to get started, from opening a window to drawing sprites and OpenGL primitives using their friendly and powerful Sprite and Batch classes.
Later, you might also want to write your own lower-level code that makes calls directly to OpenGL functions such as glDrawArrays, etc. You can do this using pyglet's OpenGL bindings, or using PyOpenGL's. The good news is that whichever you use, you can insert such calls right into the middle of your existing pyglet application, and they will 'just work'. Transitioning your code from Pyglet to PyOpenGL is fairly easy, so this is not a decision you need to worry about too much upfront. The trade-offs between the two are:
PyOpenGL's bindings make the OpenGL interface more friendly and pythonic. For example, you can pass vertex arrays in many different forms, ctypes arrays, numpy arrays, plain lists, etc, and PyOpenGL will convert them into something OpenGL can use. Things like this make PyOpenGL really easy and convenient.
pyglet's OpenGL bindings are automatically generated, and are not as friendly to use as PyOpenGL. For example, sometimes you will have to manually create ctypes objects, in order to pass 'C pointer' kinds of args to OpenGL. This can be fiddly. The plus side though, is pyglet's bindings tends to be significantly faster.
This implies that there is an optimal middle ground: Use pyglet for windowing, mouse events, sound, etc. Then use PyOpenGL's friendly API when you want to make direct OpenGL function calls. Then when optimising, replace just the small percentage of performance-critical PyOpenGL calls that lie within your inner render loop with the pyglet equivalents. For me, this gives me between 2 and 4 times faster framerates, with PyOpenGL's convenience for 90% of my code.
|
PyOpenGl or pyglet?
|
[
"",
"python",
"pyglet",
"pyopengl",
""
] |
The following code does not compile:
```
public class GenericsTest {
public static void main(String[] args) {
MyList<?> list = new MyList<Object>();
Class<?> clazz = list.get(0);
// Does not compile with reason
// "Type mismatch: cannot convert from Object to Class"
MyList list2 = new MyList();
Class clazz2 = list2.get(0);
}
static class MyList<T> extends ArrayList<Class<T>> {
}
}
```
I wanted to do this to introduce generics to old code without breaking the build.
Is this a bug in both the compiler (both eclipse and javac) or am I missing something here? What other possibility exists to introduce generics to MyList?
**EDIT**
For clarification:
I have the generic class
```
public class MyList extends ArrayList<MyObject> {}
```
with
```
public class MyObject {}
```
and code using MyList
```
MyList list = new MyList();
...
MyObject o = list.get(0);
```
Now during development I see I want to introduce generics to MyObject
```
public class MyObject<T> {}
```
and now I want to have this new generic thingy in MyList as well
```
public class MyList<T> extends ArrayList<MyObject<T>> {}
```
But that does break my build. Interestingly
```
public class MyList<T> extends ArrayList<MyObject<T>> {
public MyObject<T> get(int i) {
return super.get(i);
}
}
```
will allow old code
```
MyList list = new MyList();
...
MyObject o = list.get(0);
```
to compile.
OK, seems that when I introduce this generic, I will have to live with having to change all calls to MyList to the generic form. I wanted the old code to just introduce a warning instead of an error.
|
I think you are not understanding quite how generics work.
```
MyList<?> list = new MyList<Object>();
Class<String> clazz= list.get(0);
```
This code snippet does not compile because you are telling the compiler that `list` is going to hold `Class<Object>` types - and then in the next line you are expecting it to return you a `Class<String>`. The generic system in Java is not capable of converting types used with generics based on inheritance like you might think it would.
If you expect `list` to hold `Class<String>`, then you need to declare it as so - or, if you want it to be able to hold any types, then you cannot do the second line without a cast.
```
MyList<String> list = new MyList<String>();
Class<String> clazz = list.get(0);
```
or
```
MyList<?> list = new MyList<Object>();
//generates a warning about an unchecked cast
Class<String> clazz = (Class<String>) list.get(0);
```
---
The second code snippet does not work because when you use raw types, you still need to cast the `Object` returned by `get()` to the declared type you are using (which has always been the case).
```
MyList list2 = new MyList();
Class clazz2 = (Class) list2.get(0);
```
|
I don't have a compiler on this machine, but this should work.
```
public class GenericsTest {
public static void main(String[] args) {
MyList<Object> list = new MyList<Object>();
Class<?> clazz= list.get(0);
}
static class MyList<T> extends ArrayList<Class<? extends T>> {
}
}
```
|
Introducing generics to Java code without breaking the build
|
[
"",
"java",
"generics",
""
] |
I have a few ASP.NET database front-end websites where MS Access is the back end. I am trying to use an ASP.NET Dynamic Data website. Should I change the database to SQL Server Express (or something else) to make this easier, or should it work with MS Access?
|
Pick SQL Express for these reasons:
1. **Scaling**: MS Access will never scale. Once you get over about 10 concurrent connections you will start to see trouble. SQL Express will scale, and you can always move SQL Express up to a full-blown SQL Server installation. While this is somewhat true of Access as well, some of your SQL statements and data types may not transfer cleanly.
2. **Security**: SQL Server has a much better security model than Access. You can lock down the schema in your db per user. You can also better administrate user access (think dev user vs test user vs production user).
3. **Performance**: This is similar to scaling. If you see a traffic spike to your web site, Access may not handle it while SQL Server Express probably will.
4. **Tools**: Tools and libraries like LINQ are always going to be targeted at SQL Server. You will get better support and better documentation using them this way.
|
A Jet back end will work fine for mostly read-only websites, even ones with decent traffic. Michael Kaplan's old website, <http://trigeminal.com>, uses a Jet database as the back end, and back when it was his primary site (he's since been hired by Microsoft), he said it was getting over 100K hits per day, and was holding up just fine.
For read/write operations, Jet will begin to bog down on a website at about the same point it would in a multi-user LAN application, somewhere between 20 and 50 simultaneous connections, depending on your app's design.
For what it's worth, even though I'm a professional Access developer and know Jet very, very well, I don't use Jet as a back end for any of my websites. That's not so much because I don't like Jet for websites, but because I don't like *Windows* for a web server -- I'm an Apache supporter (and PHP developer), and host all my websites (and my clients' as well) on non-Windows hosting services (presently all of them are Linux, but that wasn't always the case). All the website databases I have ever done use MySQL for the back end.
In short, find out what your host offers in terms of database back ends and choose the best one. I can't imagine a host that provides ASP.NET that would not offer SQL Server, and that's definitely what I'd go with in that situation. It would be easy enough to upsize your Jet back end to SQL Server on your local PC, and then do a backup that you could import to your website.
|
SQL Server Express / MS Access LINQ Features, Which One to Use?
|
[
"",
"c#",
"asp.net",
"sql-server",
"linq",
"ms-access",
""
] |
I'm using LINQ to Entities (not LINQ to SQL) and I'm having trouble creating an 'IN' style query. Here is my query at the moment:
```
var items = db.InventoryItem
.Include("Kind")
.Include("PropertyValues")
.Include("PropertyValues.KindProperty")
.Where(itm => valueIds.Contains(itm.ID)).ToList<InventoryItem>();
```
When I do this however, the following exception is thrown:
> LINQ to Entities does not recognize the method 'Boolean Contains(Int64)' method, and this method cannot be translated into a store expression.
Does anyone have a workaround or another solution for this?
|
You need to either use this one:
```
.Where(string.Format("it.ID in {{{0}}}", string.Join(",", valueIds.ToArray())));
```
or construct the WHERE part dynamically, as in [this](http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=3917617&SiteID=1) post.
## P.S. - The linked post has since been updated, and this answer has been updated accordingly to stay relevant:
The link referenced contains the following update:
> *...in EF4 we added support for the*
> *Contains method and at least in this*
> *specific case for collection-valued*
> *parameters. Therefore this kind of*
> *code now works right out of the box*
> *and it is not necessary to use any*
> *additional expression building method:*
```
var statusesToFind = new List<int> {1, 2, 3, 4};
var foos = from foo in myEntities.Foos
where statusesToFind.Contains(foo.Status)
select foo;
```
|
You can use Linq's `Any` extension method for this in some cases:
```
var userIds = new[] { 1, 2, 3 };
from u in Users
where userIds.Any(i => i==u.Id)
select u;
```
The generated SQL looks pretty strange in such a case; like much LINQ-to-Entities generated SQL it is overly verbose for a human reader, but it runs fast in practice.
```
SELECT
[Extent1].[Id] AS [Id],
[Extent1].[DisplayName] AS [DisplayName]
FROM [dbo].[Users] AS [Extent1]
WHERE EXISTS (SELECT
1 AS [C1]
FROM (SELECT
[UnionAll1].[C1] AS [C1]
FROM (SELECT
1 AS [C1]
FROM ( SELECT 1 AS X ) AS [SingleRowTable1]
UNION ALL
SELECT
2 AS [C1]
FROM ( SELECT 1 AS X ) AS [SingleRowTable2]) AS [UnionAll1]
UNION ALL
SELECT
3 AS [C1]
FROM ( SELECT 1 AS X ) AS [SingleRowTable3]) AS [UnionAll2]
WHERE [UnionAll2].[C1] = [Extent1].[Id]
)
```
|
How do you do a SQL style 'IN' statement in LINQ to Entities (Entity Framework) if Contains isn't supported?
|
[
"",
"c#",
"linq",
"entity-framework",
"linq-to-entities",
""
] |
Let's say I have **a `List` object** and **an iterator** for that list.
Now I sort the list with `java.util.Collections.sort()`
* What happens to the iterator?
* Is its behavior still defined and can it still be used?
* If not, can I prevent destroying the iterators for the list?
I know, this problem could be circumvented by changing the program design (cloning the list, for example), but I specifically want to know the "official" behavior of Java.
|
Most of the collections in `java.util` are "fail-fast" and **may** throw a [`ConcurrentModificationException`](http://java.sun.com/javase/6/docs/api/java/util/ConcurrentModificationException.html) if the underlying collection is changed. It should be pointed out that this is intended for debugging and so is not guaranteed. According to the javadocs, this is true of all descendants of [`AbstractList`](http://java.sun.com/javase/6/docs/api/java/util/AbstractList.html), but this is **not** true of [`CopyOnWriteArrayList`](http://java.sun.com/javase/6/docs/api/java/util/concurrent/CopyOnWriteArrayList.html), which is intended for multi-threaded use.
|
Iterators are generally invalid after *any* modification to their underlying collections, except via the iterator itself. (For instance, [`ListIterator`](http://java.sun.com/javase/6/docs/api/java/util/ListIterator.html) allows for insertion and removal.)
I'd certainly expect any iterators to become invalidated after a sort though - and if they weren't, I'd have no idea what order to expect.
|
When I sort a List what happens to its iterators?
|
[
"",
"java",
"collections",
"concurrency",
""
] |
Is DES encryption called "one-way encryption" or "two-way encryption"?
Is there a PHP class or method to decrypt DES encryption?
|
The php [crypt](https://www.php.net/crypt) function is a one-way hashing function; if you want to be able to decrypt, take a look at the [mcrypt](https://www.php.net/manual/en/book.mcrypt.php) extension, which supports [a range of algorithms](https://www.php.net/manual/en/mcrypt.ciphers.php).
|
It should be noted that there are (and have always been) questions surrounding the [DES](http://en.wikipedia.org/wiki/Data_Encryption_Standard) algorithm. It's been widely in use for a long time, but since it was originally specified with only a 56 bit key, it's questionable whether it's secure enough for any important uses at this point. Triple DES is generally better, but there are some known theoretical attacks. If you have a choice of cipher, you might want to look at [AES](http://en.wikipedia.org/wiki/Advanced_Encryption_Standard) instead.
|
PHP class or method to decrypt the DES Encryption
|
[
"",
"php",
"encryption",
"des",
""
] |
I'm building an XML document with PHP's SimpleXML extension, and I'm adding a token to the file:
```
$doc->addChild('myToken');
```
This generates (what I know as) a self-closing or single tag:
```
<myToken/>
```
However, the aging web-service I'm communicating with is tripping all over self-closing tags, so I need to have a separate opening and closing tag:
```
<myToken></myToken>
```
The question is, how do I do this, outside of running the generated XML through a **preg\_replace**?
|
From the documentation at [SimpleXMLElement->\_\_construct](http://www.php.net/manual/en/function.simplexml-element-construct.php) and [LibXML Predefined Constants](https://www.php.net/manual/en/libxml.constants.php), I think this should work:
```
<?php
$sxe = new SimpleXMLElement($someData, LIBXML_NOEMPTYTAG);
// some processing here
$out = $sxe->asXML();
?>
```
Try that and see if it works. Otherwise, I'm afraid, it's preg\_replace-land.
|
If you set the value to something empty (e.g. null or an empty string), it will use separate open and close tags.
```
$tag = '<SomeTagName/>';
echo "Tag: '$tag'\n\n";
$x = new SimpleXMLElement($tag);
echo "Autoclosed: {$x->asXML()}\n";
$x = new SimpleXMLElement($tag);
$x[0] = null;
echo "Null: {$x->asXML()}\n";
$x = new SimpleXMLElement($tag);
$x[0] = '';
echo "Empty: {$x->asXML()}\n";
```
See example: <http://sandbox.onlinephpfunctions.com/code/10642a84dca5a50eba882a347f152fc480bc47b5>
|
Turn OFF self-closing tags in SimpleXML for PHP?
|
[
"",
"php",
"xml",
"simplexml",
""
] |
I am using XmlSerializer to write and read an object to xml in C#. I currently use the attributes `XmlElement` and `XmlIgnore` to manipulate the serialization of the object.
If my xml file is missing an xml element that I require, my object still deserializes (xml -> object) just fine. How do I indicate (preferably via Attributes) that a certain field is "required"?
Here is a sample method of what I am using currently:
```
[XmlElement(ElementName="numberOfWidgets")]
public int NumberThatIsRequired {
set ...;
get ...;
}
```
My ideal solution would be to add something like an `XmlRequired` attribute.
Also, is there a good reference for what Attributes are available to manipulate the behavior of XmlSerializer?
|
I've got an answer for the second part: ["Attributes that control XML serialization"](http://msdn.microsoft.com/en-us/library/83y7df3e(VS.71).aspx).
Still investigating the first part...
EDIT: I strongly suspect you can't do this through XML deserialization itself. I've just run xsd.exe on a sample schema which includes a required attribute - and it's exactly the same if the attribute is marked as being optional. If there were a way of requiring properties to be set, I'd expect it to be implemented in that case.
I suspect you've basically got to just validate your tree of objects after deserializing it. Sorry about that...
|
The only way I've found to do this is via XSD. What you can do is validate while you deserialize:
```
static T Deserialize<T>(string xml, XmlSchemaSet schemas)
{
//List<XmlSchemaException> exceptions = new List<XmlSchemaException>();
ValidationEventHandler validationHandler = (s, e) =>
{
//you could alternatively catch all the exceptions
//exceptions.Add(e.Exception);
throw e.Exception;
};
XmlReaderSettings settings = new XmlReaderSettings();
settings.Schemas.Add(schemas);
settings.ValidationType = ValidationType.Schema;
settings.ValidationEventHandler += validationHandler;
XmlSerializer serializer = new XmlSerializer(typeof(T));
using (StringReader sr = new StringReader(xml))
using (XmlReader books = XmlReader.Create(sr, settings))
return (T)serializer.Deserialize(books);
}
```
|
Can I fail to deserialize with XmlSerializer in C# if an element is not found?
|
[
"",
"c#",
"xml",
"xml-serialization",
".net-attributes",
""
] |
I've got a popup div showing on rightclick (I know this breaks expected functionality but Google Docs does it so why not?) However the element I'm showing my popup on has a "title" attribute set which appears over the top of my div. I still want the tooltip to work but not when the popup is there.
What's the best way to stop the tooltip showing while the popup is open/openning?
Edit: I am using jQuery
|
With [jquery](http://jquery.com) you could bind the hover function to save the title and set the attribute to blank on mouseover, then restore it on mouseout.
```
var originalTitle;
$("element#id").hover(
    function() {
        originalTitle = $(this).attr("title");
        $(this).attr("title", "");
        $("div#popout").show();
    },
    function() {
        $("div#popout").hide();
        $(this).attr("title", originalTitle);
    }
);
```
```
|
Here is another example of how it can be done by using [`data`](http://api.jquery.com/data/) for value storage and [`prop`](http://api.jquery.com/prop/) for value assigning
```
$('[title]').on({
mouseenter : function()
{
$(this).data('title', this.title).prop('title', '');
},
mouseleave: function()
{
$(this).prop('title', $(this).data('title'));
}
});
```
|
How to stop title attribute from displaying tooltip temporarily?
|
[
"",
"javascript",
"jquery",
"html",
""
] |
I need to store a double as a string. I know I can use `printf` if I wanted to display it, but I just want to store it in a string variable so that I can store it in a map later (as the *value*, not the *key*).
|
The *boost (tm)* way:
```
std::string str = boost::lexical_cast<std::string>(dbl);
```
The *Standard C++* way:
```
std::ostringstream strs;
strs << dbl;
std::string str = strs.str();
```
**Note**: Don't forget `#include <sstream>`
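One caveat worth knowing with the stream approach: streams print only 6 significant digits by default, so round-tripping a `double` may lose precision. A small sketch (hypothetical helper name) showing how to request more digits with `std::setprecision`:

```cpp
#include <iomanip>
#include <sstream>
#include <string>

// Convert a double to a string with an explicit number of significant
// digits; the default stream precision is only 6.
std::string to_string_precise(double d, int digits) {
    std::ostringstream out;
    out << std::setprecision(digits) << d;
    return out.str();
}
```

For example, `to_string_precise(0.1, 17)` exposes the underlying binary approximation of 0.1, whereas the default precision would print just `0.1`.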
|
```
// The C way:
char buffer[32];
snprintf(buffer, sizeof(buffer), "%g", myDoubleVar);
// The C++03 way:
std::ostringstream sstream;
sstream << myDoubleVar;
std::string varAsString = sstream.str();
// The C++11 way:
std::string varAsString = std::to_string(myDoubleVar);
// The boost way:
std::string varAsString = boost::lexical_cast<std::string>(myDoubleVar);
```
|
How do I convert a double into a string in C++?
|
[
"",
"c++",
"string",
"double",
""
] |
I understand that floating point calculations have accuracy issues and there are plenty of questions explaining why. My question is if I run the same calculation twice, can I always rely on it to produce the same result? What factors might affect this?
* Time between calculations?
* Current state of the CPU?
* Different hardware?
* Language / platform / OS?
* Solar flares?
I have a simple physics simulation and would like to record sessions so that they can be replayed. If the calculations can be relied on then I should only need to record the initial state plus any user input and I should always be able to reproduce the final state exactly. If the calculations are not accurate errors at the start may have huge implications by the end of the simulation.
I am currently working in Silverlight though would be interested to know if this question can be answered in general.
**Update:** The initial answers indicate yes, but apparently this isn't entirely clear cut as discussed in the comments for the selected answer. It looks like I will have to do some tests and see what happens.
|
From what I understand you're only guaranteed identical results provided that you're dealing with the same instruction set and compiler, and that any processors you run on adhere strictly to the relevant standards (i.e. IEEE 754). That said, unless you're dealing with a particularly chaotic system, any drift in calculation between runs isn't likely to result in buggy behavior.
Specific gotchas that I'm aware of:
1. some operating systems allow you to set the mode of the floating point processor in ways that break compatibility.
2. floating point intermediate results often use 80 bit precision in register, but only 64 bit in memory. If a program is recompiled in a way that changes register spilling within a function, it may return different results compared to other versions. Most platforms will give you a way to force all results to be truncated to the in memory precision.
3. standard library functions may change between versions. I gather that there are some not uncommonly encountered examples of this in gcc 3 vs 4.
4. The IEEE itself allows some binary representations to differ... specifically NaN values, but I can't recall the details.
|
The short answer is that FP calculations are entirely deterministic, as per the [IEEE Floating Point Standard](http://en.wikipedia.org/wiki/IEEE_754), but that doesn't mean they're entirely reproducible across machines, compilers, OS's, etc.
The long answer to these questions and more can be found in what is probably the best reference on floating point, David Goldberg's [What Every Computer Scientist Should Know About Floating Point Arithmetic](http://docs.sun.com/source/806-3568/ncg_goldberg.html). Skip to the section on the IEEE standard for the key details.
To answer your bullet points briefly:
* Time between calculations and state of the CPU have little to do with this.
* Hardware can affect things (e.g. some GPUs are not IEEE floating point compliant).
* Language, platform, and OS can also affect things. For a better description of this than I can offer, see Jason Watkins's answer. If you are using Java, take a look at Kahan's [rant on Java's floating point inadequacies](http://www.cs.berkeley.edu/~wkahan/JAVAhurt.pdf).
* Solar flares might matter, hopefully infrequently. I wouldn't worry too much, because if they do matter, then everything else is screwed up too. I would put this in the same category as worrying about [EMP](http://en.wikipedia.org/wiki/Effects_of_nuclear_explosions#Electromagnetic_pulse).
Finally, if you are doing the same *sequence* of floating point calculations on the same initial inputs, then things should be exactly replayable. The exact sequence can change depending on your compiler/OS/standard library, so you might get some small errors this way.
Where you usually run into problems in floating point is if you have a numerically unstable method and you start with FP inputs that are *approximately* the same but not quite. If your method's stable, you should be able to guarantee reproducibility within some tolerance. If you want more detail than this, then take a look at Goldberg's FP article linked above or pick up an intro text on numerical analysis.
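Both points above can be demonstrated concretely. A minimal C++ sketch (the question asks about behavior in general, so a different language than the asker's C# is used here; the constants are illustrative): repeating the identical sequence of operations reproduces the result bit-for-bit within a run, while merely re-associating the additions produces different bits.

```cpp
// Same inputs, same sequence of operations: bit-identical results.
// Changing only the *order* of operations changes the rounding, and
// therefore the result.
double sum_left(double a, double b, double c)  { return (a + b) + c; }
double sum_right(double a, double b, double c) { return a + (b + c); }
```

With `a = 0.1, b = 0.2, c = 0.3`, the two groupings give `0.6000000000000001` and `0.6` respectively, even though they are mathematically equal, which is exactly why a replay must reproduce the *sequence* of operations, not just the inputs.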
|
How deterministic is floating point inaccuracy?
|
[
"",
"c#",
"math",
"silverlight",
"floating-point",
"deterministic",
""
] |
Does anyone know how to modify the content of the Excel ribbon at runtime with VSTO 2005SE? Not only update labels or dynamic menus, but also add or remove buttons, tabs, groups, drop downs etc. At runtime means not only at my add-in startup, but also during work with Excel.
|
I agree with Mike, working with the visibility callback on controls or groups is probably your best bet (that's what we are using).
The entire ribbon layout is loaded from an XML string. I don't know if it is possible to trigger a reload of the XML, which you could then customize to load different XML content.
|
Irrespective of VS version, I don't think all that you want is actually possible with the current version of the RibbonX control\*. Specifically, there's no functionality for adding and removing. You *can* control visibility, though, so you can put everything in by default and make it visible or otherwise as needed.
\* which was 2007 at time of writing - Excel/Office 2010 may have changed things
|
Add Excel ribbon controls at runtime (VSTO 2005SE)
|
[
"",
"c#",
"excel",
"vsto",
""
] |
I'm starting a web project that likely should be fine with SQLite. I have SQLObject on top of it, but thinking long term here -- if this project should require a more robust (e.g. able to handle high traffic), I will need to have a transition plan ready. My questions:
1. How easy is it to transition from one DB (SQLite) to another (MySQL, Firebird, or PostgreSQL) under SQLObject?
2. Does SQLObject provide any tools to make such a transition easier? Is it simply take the objects I've defined and call createTable?
3. What about having multiple SQLite databases instead? E.g. one per visitor group? Does SQLObject provide a mechanism for handling this scenario and if so, what is the mechanism to use?
Thanks,
Sean
|
3) Is quite an interesting question. In general, SQLite is pretty useless for web-based stuff. It scales fairly well for size, but scales terribly for concurrency, and so if you are planning to hit it with a few requests at the same time, you will be in trouble.
Now your idea in part 3) of the question is to use multiple SQLite databases (eg one per user group, or even one per user). Unfortunately, SQLite will give you no help in this department. But it is possible. The one project I know that has done this before is [**Divmod's Axiom**](http://www.divmod.org/trac/wiki/DivmodAxiom). So I would certainly check that out.
Of course, it would probably be much easier to just use a good concurrent DB like the ones you mention (Firebird, PG, etc).
For completeness:
1 and 2) It should be straightforward without you actually writing **much** code. I find SQLObject a bit restrictive in this department, and would strongly recommend [**SQLAlchemy**](http://www.sqlalchemy.org/) instead. This is far more flexible, and if I was starting a new project today, I would certainly use it over SQLObject. It won't be moving "Objects" anywhere. There is no magic involved here, it will be transferring rows in tables in a database. Which as mentioned you could do by hand, but this might save you some time.
|
Your success with createTable() will depend on your existing underlying table schema / data types. In other words, how well SQLite maps to the database you choose and how SQLObject decides to use your data types.
The safest option may be to create the new database by hand. Then you'll have to deal with data migration, which may be as easy as instantiating two SQLObject database connections over the same table definitions.
Why not just start with the more full-featured database?
|
Database change underneath SQLObject
|
[
"",
"python",
"mysql",
"database",
"sqlite",
"sqlobject",
""
] |
I don't even know where to go with this; Google wasn't very helpful, as with my previous question. I'm using TextMate's Command+R to compile the project.
> game.h:16:error: declaration of ‘Player\* HalfSet::Player() const’
>
> players.h:11:error: changes meaning of ‘Player’ from ‘class Player’
>
> game.h:21:error: ‘Player’ is not a type
players.h file (partial)
```
#ifndef PLAYERS_H
#define PLAYERS_H
using namespace std;
#include <string>
#include <vector>
#include <istream>
#include <iomanip>
#include "generics.h"
class Player{ //Line 11
public:
//getters
long Id() const;
string FirstName() const;
string LastName() const;
string Country() const;
//setters
void setId(long id);
void setFirstName(string s);
void setLastName(string s);
void setCountry(string s);
//serializing functions
void display(ostream &out);
void read(istream &in);
void write(ostream &out);
//Initalizers
Player();
Player(istream &in);
Player(string firstName, string lastName);
Player(string firstName, string lastName, string country);
Player(long id, string firstName, string lastName, string country);
~Player();
private:
long _id;
string _firstName;
string _lastName;
string _country;
};
```
game.h file (partial)
```
#ifndef GAME_H
#define GAME_H
#include "generics.h"
#include "players.h"
#include <string>
#include <vector>
#include <istream>
#include <iomanip>
using namespace std;
class HalfSet{
public:
//getters
Player* Player() const; //Line 16
int GamesWon() const;
int TotalPoints() const;
int Errors() const;
//setters
void setPlayer(Player* p);
void setGamesWon(int games);
void setTotalPoints(int points);
void setErrors(int errors);
//Serialization
void display(ostream &out) const;
void read(istream &in) const;
void write(ostream &out) const;
//Initalizers
HalfSet();
~HalfSet();
private:
Player* _player;
int _gamesWon;
int _points;
int _errors;
};
```
What is going on here?
|
In C++ you cannot declare a member function whose name clashes with a type name already in use in that class scope. You have a class named "Player", and the HalfSet class declares a function named "Player" (`Player* Player()`). You need to rename one of them (probably changing HalfSet's `Player()` to `getPlayer()` or some such).
|
Your problem is that names are looked up in scopes. Within the declaration of `HalfSet::setPlayer(Player*)`, the unqualified name `Player` needs to be looked up. The first scope tried is `class HalfSet`. In that scope, the lookup of `Player` finds function `HalfSet::Player`, not `global class ::Player`.
The solution is to use a qualified name, `::Player`. This tells the compiler which scope to use for lookup (global) which in turn means `HalfSet::Player` is not even considered.
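A minimal sketch of that fix, using the names from the question (hypothetical trimmed-down classes; the real headers have many more members):

```cpp
// Trimmed-down reproduction: qualifying the global class as ::Player
// lets HalfSet keep a member function that is also named Player.
class Player {
public:
    long Id() const { return _id; }
    void setId(long id) { _id = id; }
private:
    long _id = 0;
};

class HalfSet {
public:
    ::Player* Player() const { return _player; }   // qualified return type
    void setPlayer(::Player* p) { _player = p; }   // qualified parameter
private:
    ::Player* _player = nullptr;                   // qualified member type
};
```

Every use of the class name inside `HalfSet` is now the qualified `::Player`, so the member function `HalfSet::Player` never changes the meaning of an unqualified name, and the "changes meaning" error goes away.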
|
C++ odd compile error: error: changes meaning of "Object" from class "Object"
|
[
"",
"c++",
"xcode",
""
] |
Below is my (simplified) schema (in MySQL ver. 5.0.51b) and my strategy for updating it. There has got to be a better way. Inserting a new item requires 4 trips to the database and editing/updating an item takes up to **7**!
**items**: itemId, itemName
**categories**: catId, catName
**map**: mapId\*, itemId, catId
\* mapId (varchar) is concat of itemId + | + catId
1) If inserting: insert item. Get itemId via MySQL API.
Else updating: just update the item table. We already have the itemId.
2) Conditionally batch insert into `categories`.
```
INSERT IGNORE INTO categories (catName)
VALUES ('each'), ('category'), ('name');
```
3) Select IDs from `categories`.
```
SELECT catId FROM categories
WHERE catName = 'each' OR catName = 'category' OR catName = 'name';
```
4) Conditionally batch insert into `map`.
```
INSERT IGNORE INTO map (mapId, itemId, catId)
VALUES ('1|1', 1, 1), ('1|2', 1, 2), ('1|3', 1, 3);
```
If inserting: we're done. Else updating: continue.
5) It's possible that we no longer associate a category with this item that we did prior to the update. Delete old categories for this itemId.
```
DELETE FROM map WHERE itemId = 2
AND catId <> 2 AND catId <> 3 AND catId <> 5;
```
6) If we have disassociated ourselves from a category, it's possible that we left it orphaned. We do not want categories with no items. Therefore, if `affected rows > 0`, kill orphaned categories. I haven't found a way to combine these in MySQL, so this is #6 & #7.
```
SELECT categories.catId
FROM categories
LEFT JOIN map USING (catId)
GROUP BY categories.catId
HAVING COUNT(map.catId) < 1;
```
7) Delete IDs found in step 6.
```
DELETE FROM categories
WHERE catId = 9
   OR catId = 10;
```
Please tell me there's a better way that I'm not seeing.
|
Steps 6 & 7 can be combined easily enough:
```
DELETE categories.*
FROM categories
LEFT JOIN map USING (catId)
WHERE map.catID IS NULL;
```
Steps 3 & 4 can also be combined:
```
INSERT IGNORE INTO map (mapId, itemId, catId)
SELECT CONCAT('1|', c.catId), 1, c.catId
FROM categories AS c
WHERE c.catName IN('each','category','name');
```
Otherwise, your solution is pretty standard, unless you want to use triggers to maintain the map table.
|
There are a number of things you can do to make this a bit easier:

* Read about [`INSERT ... ON DUPLICATE KEY UPDATE`](http://dev.mysql.com/doc/refman/5.0/en/insert-on-duplicate.html).
* Delete old categories before you insert new categories. This may make better use of an index:

  `DELETE FROM map WHERE itemId = 2;`
* You probably don't need `map.mapId`. Instead, declare a compound primary key over `(itemId, catId)`.
* As Peter says in his answer, use MySQL's multi-table delete:

```
DELETE categories.* FROM categories LEFT JOIN map USING (catId)
WHERE map.catId IS NULL
```
|
Updating an associative table in MySQL
|
[
"",
"sql",
"mysql",
"database",
""
] |
I've inherited a rather large application that really could use some cleanup. There is data access code littered throughout the application: some in code-behinds, some in business logic classes, some inline in classic ASP pages.
What I'd like to do is refactor this code, removing all the data access code into a few DAL classes.
All the data access is done through stored procedures (Microsoft SQL 2000). There are 300-400 of them.
It seems like there should be an automated way to analyze the stored procedures and automatically generate C# methods for each of them, mapping the method parameters to the stored procedure parameters, with each method returning a DataTable.
I don't have any experience with ORM products, and I'm not sure if what I'm looking for is a full blown ORM, or just a 3rd party utility that will help generate simple wrappers around the sp calls.
|
If you have access to .NET Framework 3.5 and Linq to SQL, you can do it very easily, check this video:
[LINQ to SQL: Using Stored Procedures](http://weblogs.asp.net/scottgu/archive/2007/08/16/linq-to-sql-part-6-retrieving-data-using-stored-procedures.aspx)
> Using existing stored procedures and
> functions is easy with LINQ. We simply
> drag the stored procedures onto the
> O/R mapping surface and call them from
> the generated Data Context object.
|
I recommend you get a hold of Code Smith. The product includes a template for ORM and fully supports generating classes from DB Schemas (and I think Procs). You can then Code Gen all the objects you need.
Another option would be to use LINQ to SQL.
|
Automatically create C# wrapper classes around stored procedures
|
[
"",
"c#",
"sql-server",
"stored-procedures",
"orm",
""
] |
How do I create a flat text file in C++ of around 50-100 MB, where the content 'Added first line' should be inserted into the file 4 million times?
|
Using old-style file I/O:

1. **fopen** the file for writing.
2. **fseek** to the desired file size minus 1.
3. **fwrite** a single byte.
4. **fclose** the file.
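Those steps might look like the following sketch (hypothetical helper name; the size shown is illustrative rather than the 50-100 MB from the question):

```cpp
#include <cstdio>

// Grow a file to exactly `size` bytes by seeking past the end and writing
// a single byte. Error handling kept minimal for clarity.
bool create_file_of_size(const char* path, long size) {
    FILE* f = std::fopen(path, "wb");                  // create/truncate, binary mode
    if (!f) return false;
    bool ok = std::fseek(f, size - 1, SEEK_SET) == 0   // jump to the last byte
           && std::fputc('\0', f) != EOF;              // write it, forcing the growth
    std::fclose(f);
    return ok;
}
```

Note that this only reserves the size; apart from the final byte, the contents are whatever the platform fills in (typically zeros). It does not actually write 'Added first line' 4 million times.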
|
The fastest way to create a file of a certain size is to simply create a zero-length file using `creat()` or `open()` and then change the size using `chsize()`. This will simply allocate blocks on the disk for the file, the contents will be whatever happened to be in those blocks. It's very fast since no buffer writing needs to take place.
|
Fastest way to create large file in c++?
|
[
"",
"c++",
"file-io",
"iostream",
"bulkinsert",
""
] |
I want to create a simple bit of JS code that creates an image element in the background and doesn't display anything. The image element will call a tracking URL (such as Omniture) and needs to be simple and robust and work in IE 6 =< only. Here is the code I have:
```
var oImg = document.createElement("img");
oImg.setAttribute('src', 'http://www.testtrackinglink.com');
oImg.setAttribute('alt', 'na');
oImg.setAttribute('height', '1px');
oImg.setAttribute('width', '1px');
document.body.appendChild(oImg);
```
Is this the simplest but most robust (error free) way to do it?
I cannot use a framework like jQuery. It needs to be in plain JavaScript.
|
```
oImg.setAttribute('width', '1px');
```
`px` is for CSS only. Use either:
```
oImg.width = '1';
```
to set a width through HTML, or:
```
oImg.style.width = '1px';
```
to set it through CSS.
Note that old versions of IE don't create a proper image with `document.createElement()`, and old versions of KHTML don't create a proper DOM Node with `new Image()`, so if you want to be fully backwards compatible use something like:
```
// IEWIN boolean previously sniffed through eg. conditional comments
function img_create(src, alt, title) {
var img = IEWIN ? new Image() : document.createElement('img');
img.src = src;
if ( alt != null ) img.alt = alt;
if ( title != null ) img.title = title;
return img;
}
```
Also be slightly wary of `document.body.appendChild` if the script may execute as the page is in the middle of loading. You can end up with the image in an unexpected place, or a weird JavaScript error on IE. If you need to be able to add it at load-time (but after the `<body>` element has started), you could try inserting it at the start of the body using `body.insertBefore(body.firstChild)`.
To do this invisibly but still have the image actually load in all browsers, you could insert an absolutely-positioned-off-the-page `<div>` as the body's first child and put any tracking/preload images you don't want to be visible in there.
|
```
var img = new Image(1,1); // width, height values are optional params
img.src = 'http://www.testtrackinglink.com';
```
|
What is the best JavaScript code to create an img element
|
[
"",
"javascript",
"dhtml",
""
] |
In visual studio 2008, when we press Ctrl+F5, the ASP.NET Development Server is launched to simulate a Web Server. I'm wondering what server does it simulate? IIS6 or IIS7? Is it possible to customize what server it simulate?
|
It doesn't simulate a server as such; it is a web server based on Cassini, written in .NET, with source code available. I suppose it's more similar to IIS6 in that it does not support IIS7's integrated pipeline feature. You can always have VS use IIS6 or IIS7 if installed (depending on whether your dev PC runs XP or Vista).
|
I don't think it "simulates" either, to be honest; it's just a web server for local development where IIS is not an option, based off [Cassini](http://www.asp.net/Downloads/archived/cassini/). More information [here](http://msdn.microsoft.com/en-us/library/58wxa9w5(VS.80).aspx).
|
ASP.NET Development Server simulates IIS6 or IIS7?
|
[
"",
"c#",
"asp.net",
"visual-studio-2008",
""
] |
IDE: Microsoft Visual Studio Professional 2008
Language: C#
Background:
I have one form and two groupboxes that are going to be used as preference category groupboxes. My problem is that every time I drag one groupbox on top of the other (hide/show groupboxes upon listview category select), the base groupbox auto-parents the second groupbox I just dragged on top.
Question:
How do I stop the two groupboxes from auto-parenting themselves when dragged on top of one another?
// Edit;
Q) I assume you are trying to achieve a layout where you swap out the visible box? --Mitchel Sellers
A) Yes I am. I know I could do it at runtime but I'd like to make that my last option in case there is a more appropriate solution.
|
You may consider resizing your second GroupBox into position over the first. Drag one side or one corner of the second group box until it covers the first GroupBox and continue to resize the second GroupBox until it is positioned as you desire.
Also available, in the Layout Toolbar, are the alignment and sizing buttons. Select both of your GroupBoxes to enable these options. The GroupBox displaying white grab handles is the one that will remain stationary when you use the alignment and sizing buttons, which include options to "Make Same Size" and "Align Middle".
Furthermore, don't forget the keyboard right in front of you. The arrow keys will move the selected controls. Try holding various combinations of `Ctrl`+`Alt`+`Shift` while using the arrow keys.
|
The behavior of the group box is that you can drag controls into it, including other group box controls. If you let go of the mouse while dragging one over, it will become a child of that groupbox, and AFAIK there is no way to change this in the designer. One possible workaround is to send the first groupbox to back, then place the other on top.
I assume you are trying to achieve a layout where you swap out the visible box?
|
MSV08-C#: How do I stop two groupboxes from auto-parenting themselves when dragged on top of one another?
|
[
"",
"c#",
".net",
""
] |
I am developing a website using CodeIgniter and PHP. Some of my friends suggest I start using Subversion to take care of my organization. Could someone give me a basic explanation of what SVN is without the jargon and also if it is practical for a framework where 5 files are needed to run a single page?
|
Subversion allows you to store your source in a central location, make changes from multiple places (ie. laptop, desktop), track changes and revert changes to your source tree if needed.
Have you ever made a big 'improvement' and then decided you were going in the wrong direction? With Subversion you can revert to the version of your source before you started heading off in the wrong direction.
Or how about this, you make some subtle changes to the codebase, but don't notice that you introduced a new bug until much later. You can use subversion to revert to previous code versions, helping you track down the change that introduced the bug.
It is more valuable when sharing the source with multiple developers, but even for my single developer projects I find it very handy to have all my source and changes in a Subversion repository.
If you combine it with Trac, you can then keep track of bugs/features, milestones, and use the Trac Wiki to document your project.
|
Every single project (even with only one developer) should be under source control, no matter what framework or language you are using.
Using simple words, Subversion will keep a history of your code files, should you like to revert them to a previous state or restore a copy in case of a disk crash (SVN must be on another computer in this case). It will also help you see easily the differences between two versions of a file.
For more information, just read TortoiseSVN user's guide, it has a pretty good overview of Subversion.
Here's some good info : [Chapter 2. Basic Concepts](http://tortoisesvn.net/docs/release/TortoiseSVN_en/tsvn-basics.html)
|
Subversion... practical with a PHP framework app or not?
|
[
"",
"php",
"svn",
"frameworks",
""
] |
I am using jQuery to make an AJAX request to a remote endpoint. That endpoint will return a JSON object if there is a failure and that object will describe the failure. If the request is successful it will return HTML or XML.
I see how to define the expected request type in jQuery as part of the `$.ajax()` call. Is there a way to detect the request type in the `success` handler?
```
$.ajax(
{
type: "DELETE",
url: "/SomeEndpoint",
//dataType: "html",
data:
{
"Param2": param0val,
"Param1": param1val
},
success: function(data) {
//data could be JSON or XML/HTML
},
error: function(res, textStatus, errorThrown) {
alert('failed... :(');
}
}
);
```
|
Have your application generate correct Content-Type headers (application/json, text/xml, etc.) and handle those in your success callback. Maybe something like this will work?
```
var xhr = $.ajax(
{
//SNIP
success: function(data) {
var ct = xhr.getResponseHeader('Content-Type');
if (ct == 'application/json') {
//deserialize as JSON and continue
} else if (ct == 'text/xml') {
//deserialize as XML and continue
}
},
    //SNIP
    }
);
```
Untested, but it's worth a shot.
|
how about using the `complete` option?
```
$.ajax({
...
complete : function(xhr, status) {
// status is either "success" or "error"
// complete is fired after success or error functions
// xhr is the xhr object itself
var header = xhr.getResponseHeader('Content-Type');
},
...
});
```
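In either callback, the branching itself doesn't need jQuery. A minimal sketch of the dispatch logic (the function name and the `kind` labels are illustrative, not part of any library):

```javascript
// Dispatch on the response's Content-Type header value.
function dispatchByContentType(contentType, body) {
  // Headers often carry a charset suffix, e.g. "application/json; charset=utf-8",
  // so compare only the media type itself.
  const mediaType = (contentType || '').split(';')[0].trim().toLowerCase();
  if (mediaType === 'application/json') {
    return { kind: 'json', value: JSON.parse(body) };
  }
  if (mediaType === 'text/xml' || mediaType === 'application/xml') {
    return { kind: 'xml', value: body }; // hand the raw body to an XML parser here
  }
  return { kind: 'html', value: body };
}

console.log(dispatchByContentType('application/json; charset=utf-8', '{"error":"disk full"}'));
// → { kind: 'json', value: { error: 'disk full' } }
```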
|
Can I evaluate the response type of an $.ajax() call in success callback?
|
[
"",
"javascript",
"jquery",
"ajax",
""
] |
Is there some way to detect file handle leaks at program termination?
In particular I would like to make sure that all of my handles that get created are being freed in code.
For example, I may have a CreateFile() somewhere, and at program termination I want to detect and ensure that all of them are closed.
|
I have used the !htrace command of WinDbg.
```
!htrace -enable
!htrace -snapshot
!htrace -diff
```
It allows you to compare the handle situation at two execution points and helps you locate the point where the leaked handles were allocated.
It worked well for me.
|
If you can (i.e. if it's not a huge legacy code-base you are bugfixing) you should consider using the [RAII](http://en.wikipedia.org/wiki/Resource_acquisition_is_initialization) idiom to wrap around your file handles.
By "taking" the file handle in the constructor and releasing it in the destructor you can be sure that by the time your RAII goes out of scope your file handle is nicely cleaned up too.
It's the same principle as smart pointers, and it's a very useful concept to have in your toolbox for avoiding issues like this in C++.
|
Detect file handle leaks with Win32 C++
|
[
"",
"c++",
"winapi",
"visual-c++",
""
] |
In Python I could've converted it to Unicode and do '(?u)^[\w ]+$' regex search, but PHP doesn't seem to understand international \w, or does it?
|
Although I haven't tested it myself, looking at <https://www.php.net/manual/en/reference.pcre.pattern.syntax.php> suggests the following: '/^[\p{L} ]+$/u' would work - the \p{L} will match any Unicode letter. Additionally, you can apparently write this without the curly brackets - '/^[\pL ]+$/u'.
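As a quick sanity check of the `\p{L}` property class, the same Unicode property escape exists in modern JavaScript regexes (with the `u` flag), so the pattern's behavior is easy to try out:

```javascript
// Matches strings consisting only of Unicode letters and spaces.
const lettersAndSpaces = /^[\p{L} ]+$/u;

console.log(lettersAndSpaces.test('Hello world')); // true
console.log(lettersAndSpaces.test('Привет мир'));  // true  (Cyrillic letters)
console.log(lettersAndSpaces.test('abc123'));      // false (digits are not \p{L})
```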
|
AFAIK PHP isn't natively aware of UTF-8, meaning that PHP itself can only process it bytewise.
PHP assumes everything is Latin-1, but there are extensions that might be useful for you, like mbstring.
<http://se.php.net/mbstring>
|
How do I check that string has only international letters and spaces in UTF8 in PHP?
|
[
"",
"php",
"utf-8",
"utf",
""
] |
I'm writing an application that was prototyped on MySQL and is now connecting to an Oracle database.
All I had to do to connect to the Oracle database (having built up the table structure) was change the connection string.
What is the format to connect to a SQL Server DB on another machine?
I've read some tutorials which tell you to use the SQL Server JDBC adaptor but I'd rather configure the application so that it's database agnostic, and just have the connection string specify the protocol etc.
Any references I've seen which tell you how to use the bridge with SQL Server require the ODBC Data Source to be installed; this is less than ideal as my app may run on Linux or Windows.
I'm not doing anything complicated just inserts.
|
You should not use the JDBC-ODBC bridge in a production environment. It is much slower than other JDBC drivers and only necessary when a JDBC driver is not available.
SQL Server has a [JDBC driver](http://www.microsoft.com/downloads/details.aspx?familyid=C47053EB-3B64-4794-950D-81E1EC91C1BA&displaylang=en) available from Microsoft. If you use it then you will get the required result.
With the ODBC bridge you have no choice but to install the ODBC driver.
This [article](http://msdn.microsoft.com/en-us/library/ms378428(SQL.90).aspx) describes the connection string you will need to use to connect to the SQL Server.
|
Do NOT use the JDBC-ODBC bridge driver. That was meant purely for testing, not for production. You can still make your application database-agnostic using drivers that are optimized for the database you want to connect to. Just externalize the username, password, database driver name and connect string, don't use any DB-specific SQL, and you should be fine.
For connecting to SQL Server, use the jTDS driver <http://jtds.sourceforge.net/> The connect string format looks like this:
jdbc:jtds:sqlserver://localhost/my_database
There are a few other parameters you can include, separated by semicolons, but I think this is all that's required. Obviously when you connect, you'll need to supply a username and password.
|
Connecting to SQLServer using JDBC-ODBC Bridge
|
[
"",
"java",
"jdbc",
"odbc-bridge",
""
] |
One of my columns is called `from`. I can't change the name because I didn't make it.
Am I allowed to do something like `SELECT from FROM TableName` or is there a special syntax to avoid the SQL Server being confused?
|
Wrap the column name in brackets like so, `from` becomes [from].
```
select [from] from table;
```
It is also possible to use the following (useful when querying multiple tables):
```
select table.[from] from table;
```
|
If this had been PostgreSQL, you would use double quotes around the name, like:
```
select "from" from "table";
```
Note: Internally PostgreSQL automatically converts all unquoted commands and parameters to lower case. That has the effect that commands and identifiers aren't case sensitive. **sEleCt \* from tAblE;** is interpreted as **select \* from table;**. However, parameters inside double quotes are used as is, and therefore ARE case sensitive: **select \* from "table";** and **select \* from "Table";** get their results from two different tables.
|
How to deal with SQL column names that look like SQL keywords?
|
[
"",
"sql",
"sql-server",
""
] |
I know that lots of web hosting providers are offering FreeBSD, but how good is FreeBSD as a development platform?
Specifically, is Java 1.6 available in it?
Is there something specific that it offers with regard to tools that is not available under Linux?
|
I've always found FreeBSD a wonderful secure hosting environment, but perhaps not the easiest development platform. You will have to dig a bit to get Java 1.6 up and running, though I think it will be doable. I hope you are familiar with emacs or vi. The ports system will afford you access to many pieces of software, but they will have to be compiled from source code. If you are familiar with standard Unix command line tools and the command line itself, you should have no problems with FreeBSD.
Alan
|
You can get binary distributions of Java from the [FreeBSD Foundation](http://freebsdfoundation.org/); they signed an agreement with Sun for that. Apart from Java, FreeBSD is a wonderful development platform with every language and environment you may need/want. Disclaimer: I've been a FreeBSD developer for more than 13 years.
|
How good is FreeBSD as a development platform?
|
[
"",
"java",
"operating-system",
"freebsd",
"platform",
""
] |
Suppose that I have a Java program within an IDE (Eclipse in this case).
Suppose now that I execute the program and at some point terminate it or it ends naturally.
Is there a **convenient** way to determine which lines executed at least once and which ones did not (e.g., exception handling or conditions that weren't reached?)
A manual way to collect this information would be to constantly step with the debugging and maintain a set of lines where we have passed at least once. However, is there some tool or profiler that already does that?
Edit: Just for clarification: I need to be able to access this information programmatically and not necessarily from a JUnit test.
|
[EclEmma](http://www.eclemma.org/) would be a good start: it is a code coverage tool, and a coverage session will record exactly the information you are looking for.
[](https://i.stack.imgur.com/bWhQj.gif)
(source: [eclemma.org](http://www.eclemma.org/images/smallscreen.gif))
|
What you're asking about is called "coverage". There are several tools that measure that, some of which integrate into Eclipse. I've used [jcoverage](http://java-source.net/open-source/code-coverage/jcoverage-gpl) and it works (I believe it has a free trial period, after which you'd have to buy it). I've not used it, but you might also try [Coverlipse](http://coverlipse.sourceforge.net/).
|
How to identify which lines of code participated in a specific execution of a Java program?
|
[
"",
"java",
"eclipse",
"profiling",
"code-coverage",
"trace",
""
] |
I want to output a timestamp with a PST offset (e.g., 2008-11-13T13:23:30-08:00). `java.util.SimpleDateFormat` does not seem to output timezone offsets in the *hour:minute* format, it excludes the colon. Is there a simple way to get that timestamp in Java?
```
// I want 2008-11-13T12:23:30-08:00
String timestamp = new SimpleDateFormat("yyyy-MM-dd'T'h:m:ssZ").format(new Date());
System.out.println(timestamp);
// prints "2008-11-13T12:23:30-0800" See the difference?
```
Also, `SimpleDateFormat` cannot properly parse the example above. It throws a `ParseException`.
```
// Throws a ParseException
new SimpleDateFormat("yyyy-MM-dd'T'h:m:ssZ").parse("2008-11-13T13:23:30-08:00")
```
|
Starting in Java 7, there's the `X` pattern string for ISO8601 time zone. For strings in the format you describe, use `XXX`. [See the documentation](http://docs.oracle.com/javase/7/docs/api/java/text/SimpleDateFormat.html#iso8601timezone).
Sample:
```
System.out.println(new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ssXXX")
.format(new Date()));
```
Result:
```
2014-03-31T14:11:29+02:00
```
|
Check out the [Joda Time](http://www.joda.org/joda-time/) package. They make RFC 3339 date formatting a lot easier.
**Joda Example:**
```
DateTime dt = new DateTime(2011,1,2,12,45,0,0, DateTimeZone.UTC);
DateTimeFormatter fmt = ISODateTimeFormat.dateTime();
String outRfc = fmt.print(dt);
```
|
Output RFC 3339 Timestamp in Java
|
[
"",
"java",
"timestamp",
"rfc3339",
""
] |
I have html code that looks roughly like this:
```
<div id="id1">
<div id="id2">
<p>some html</p>
<span>maybe some more</span>
</div>
<div id="id3">
<p>different text here</p>
<input type="text">
<span>maybe even a form item</span>
</div>
</div>
```
Obviously there's more to it than that, but that's the basic idea. What I need to do is switch the location of #id2 and #id3, so the result is:
```
<div id="id1">
<div id="id3">...</div>
<div id="id2">...</div>
</div>
```
Does anyone know of a function (I'm sure I'm not the first person to require this functionality) that can read and write the two nodes (and all their children) so as to swap their location in the DOM?
|
In this case, `document.getElementById('id1').appendChild(document.getElementById('id2'));` should do the trick.
More generally you can use [`insertBefore()`](https://developer.mozilla.org/en-US/docs/Web/API/Node/insertBefore).
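A generic swap helper built on `insertBefore()` might look like the sketch below. Since this can't assume a browser, a minimal stand-in for the three DOM members it uses (`parentNode`, `insertBefore`, `removeChild`) is included so the example runs anywhere; the stub and all names here are illustrative, not a real DOM:

```javascript
// Sketch of a generic swap built on insertBefore(); a throwaway marker node
// remembers the first element's slot while the two are exchanged.
function swapNodes(a, b, makeMarker) {
  const parent = a.parentNode;
  const marker = makeMarker();       // in a browser: () => document.createTextNode('')
  parent.insertBefore(marker, a);    // remember a's position
  b.parentNode.insertBefore(a, b);   // move a in front of b
  parent.insertBefore(b, marker);    // move b into a's old slot
  parent.removeChild(marker);
}

// Minimal stand-in for the three DOM members used above, so the sketch
// runs outside a browser. Real DOM nodes behave the same way here.
function makeParent() {
  return {
    children: [],
    insertBefore(node, ref) {
      if (node.parentNode) node.parentNode.removeChild(node); // moving re-parents
      node.parentNode = this;
      const i = ref ? this.children.indexOf(ref) : this.children.length;
      this.children.splice(i, 0, node);
    },
    removeChild(node) {
      this.children.splice(this.children.indexOf(node), 1);
      node.parentNode = null;
    },
  };
}

const id1 = makeParent();
const id2 = { id: 'id2' }, id3 = { id: 'id3' };
id1.insertBefore(id2, null);
id1.insertBefore(id3, null);
swapNodes(id2, id3, () => ({ id: 'marker' }));
console.log(id1.children.map(n => n.id)); // → [ 'id3', 'id2' ]
```

In a real page the call would be `swapNodes(document.getElementById('id2'), document.getElementById('id3'), () => document.createTextNode(''))`, and it also works when the two nodes are not adjacent.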
|
This function takes any node that is passed into it and wraps it with the tag given. In the example code snippet I wrapped a span tag with a section tag.
```
function wrap(node, tag) {
node.parentNode.insertBefore(document.createElement(tag), node);
node.previousElementSibling.appendChild(node);
}
```
```
function wrap(node, tag) {
node.parentNode.insertBefore(document.createElement(tag), node);
node.previousElementSibling.appendChild(node);
}
let toWrap = document.querySelector("#hi");
wrap(toWrap, "section");
console.log(document.querySelector("section > #hi"), " section wrapped element");
```
```
<span id="hi">hello there!</span>
```
|
"Cut and Paste" - moving nodes in the DOM with Javascript
|
[
"",
"javascript",
"dom",
""
] |
What is the best method of hiding php errors from being displayed on the browser?
Would it be to use the following:
```
ini_set("display_errors", 0);
```
Any best practice tips would be appreciated as well!
I am logging the errors; I just want to make sure that setting the display_errors value to off (or 0) will not prevent errors from being logged.
|
The best way is to log your errors instead of displaying or ignoring them.
This example will log the errors to syslog instead of displaying them in the browser.
```
ini_set("display_errors", 0);
ini_set("log_errors", 1);
//Define where do you want the log to go, syslog or a file of your liking with
ini_set("error_log", "syslog"); // or ini_set("error_log", "/path/to/syslog/file");
```
|
Assuming you are in control of the php.ini file, you can make these changes globally inside that file instead of having the ini_set code lying around in all your php files (which you might forget to put in one of your files one day, which could be bad in production).
|
Best way to suppress php errors on production servers
|
[
"",
"php",
"error-handling",
""
] |