| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I've started to use constructs like these:
```
class DictObj(object):
    def __init__(self):
        self.d = {}
    def __getattr__(self, m):
        return self.d.get(m, None)
    def __setattr__(self, m, v):
        object.__setattr__(self, m, v)
```
Update: based on this thread, I've revised the DictObj implementation to:
```
class dotdict(dict):
    def __getattr__(self, attr):
        return self.get(attr, None)
    __setattr__ = dict.__setitem__
    __delattr__ = dict.__delitem__

class AutoEnum(object):
    def __init__(self):
        self.counter = 0
        self.d = {}
    def __getattr__(self, c):
        if c not in self.d:
            self.d[c] = self.counter
            self.counter += 1
        return self.d[c]
```
where DictObj is a dictionary that can be accessed via dot notation:
```
d = DictObj()
d.something = 'one'
```
I find it more aesthetically pleasing than `d['something']`. Note that accessing an undefined key returns None instead of raising an exception, which is also nice.
Update: Smashery makes a good point, which mhawke expands on for an easier solution. I'm wondering if there are any undesirable side effects of using `__dict__` instead of defining a new dictionary; if not, I like mhawke's solution a lot.
AutoEnum is an auto-incrementing Enum, used like this:
```
CMD = AutoEnum()
cmds = {
"peek": CMD.PEEK,
"look": CMD.PEEK,
"help": CMD.HELP,
"poke": CMD.POKE,
"modify": CMD.POKE,
}
```
Both are working well for me, but I'm feeling unpythonic about them.
Are these in fact bad constructs?
|
This is a simpler version of your DictObj class:
```
class DictObj(object):
    def __getattr__(self, attr):
        return self.__dict__.get(attr)

>>> d = DictObj()
>>> d.something = 'one'
>>> print d.something
one
>>> print d.somethingelse
None
>>>
```
|
Your DictObj example is actually quite common. Object-style dot-notation access can be a win if you are dealing with 'things that resemble objects', i.e. things with fixed property names containing only characters valid in Python identifiers. Data like database rows or form submissions can be usefully stored in this kind of object, making code a little more readable without the clutter of `['item access']`.
The implementation is a bit limited - you don't get the nice constructor syntax of dict, `len()`, comparisons, `in`, iteration, or nice reprs. You can of course implement those things yourself, but in the new-style-classes world you can get them for free by simply subclassing `dict`:
```
class AttrDict(dict):
    __getattr__ = dict.__getitem__
    __setattr__ = dict.__setitem__
    __delattr__ = dict.__delitem__
```
To get the default-to-None behaviour, simply subclass Python 2.5's collections.defaultdict class instead of dict.
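A sketch of that suggestion (the class name is mine; note that, because of `defaultdict`'s `__missing__` hook, a missing lookup also inserts the key):

```python
from collections import defaultdict

class AttrDefaultDict(defaultdict):
    """dict with attribute access; missing attributes come back as None."""
    def __init__(self, *args, **kwargs):
        # default_factory returns None, so lookups never raise KeyError
        super(AttrDefaultDict, self).__init__(lambda: None, *args, **kwargs)
    __getattr__ = defaultdict.__getitem__
    __setattr__ = defaultdict.__setitem__
    __delattr__ = defaultdict.__delitem__

d = AttrDefaultDict()
d.something = 'one'
print(d.something)   # -> one
print(d.missing)     # -> None
```

Whether the insert-on-miss side effect is acceptable depends on your use; if not, the `get`-based `dotdict` above avoids it.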
|
Javascript style dot notation for dictionary keys unpythonic?
|
[
"python",
"coding-style"
] |
Does anyone know where to get the **source** code for LambdaProbe?
Alternatively, does anyone know if the project could be moved to a community repository?
Besides the tool not being updated for over a year, the LambdaProbe website has been down since late September 2008.
Background: Lambda Probe is a useful tool for viewing stats on a running tomcat server. It used to be found at <http://www.lambdaprobe.org>.
|
The site still seems to be pretty dead. However, I've done some digging through the available source code and release notes, and here's everything I've found regarding Lambda Probe. I hope it helps!
Google cache search:
site:www.lambdaprobe.org
Release notes:
<http://209.85.173.104/search?q=cache:HB0527hDa5AJ:www.lambdaprobe.org/d/latest.shtml+release+site:www.lambdaprobe.org&hl=en&gl=us&strip=1>
Latest release downloads:
<http://209.85.173.104/search?q=cache:t5jXPCYgiCsJ:www.lambdaprobe.org/d/download.htm+download+1.7b+site:www.lambdaprobe.org&hl=en&gl=us&strip=1>
<http://www.lambdaprobe.org/downloads/1.7/probe.1.7b.zip>
<http://www.lambdaprobe.org/downloads/1.7/probe.1.7b.src.zip>
<http://www.lambdaprobe.org/downloads/1.7/probe.1.7b-jb.zip>
<http://www.lambdaprobe.org/downloads/1.6/probe.1.6.zip>
<http://www.lambdaprobe.org/downloads/1.6/probe.1.6.src.zip>
<http://www.lambdaprobe.org/downloads/1.6/probe.1.6-jb.zip>
|
There seems to be a fork of the project over at <http://code.google.com/p/psi-probe/>
|
Is Lambda Probe dead?
|
[
"java",
"memory",
"tomcat",
"garbage-collection"
] |
I have the following C# which simply replaces parts of the input string that look like EQUIP:19d005 into URLs, like this:
```
input = Regex.Replace(input, @"(EQUIP:)(\S+)", @"<a title=""View equipment item $2"" href=""/EquipmentDisplay.asp?eqnum=$2"">$1$2</a>", RegexOptions.IgnoreCase);
```
The HTML ends up looking like this.
```
<a title="View equipment item 19d005" href="/EquipmentDisplay.asp?eqnum=19d005">EQUIP:19d005</a>
```
The only trouble is that the destination page expects the eqnum querystring to be all UPPERCASE so it returns the correct equipment when eqnum=19D005 but fails if it receives eqnum=19d005.
I guess I can modify and correct EquipmentDisplay.asp's errant requirement of uppercase values however, if possible I'd like to make the C# code comply with the existing classic ASP page by uppercasing the $2 in the Regex.Replace statement above.
Ideally, I'd like the HTML returned to look like this:
```
<a title="View equipment item 19d005" href="/EquipmentDisplay.asp?eqnum=19D005">EQUIP:19d005</a>
```
*Notice although the original string was EQUIP:19d005 (lowercase), only the eqnum= value is uppercased.*
Can it be done and if so, what's the tidiest way to do it?
|
OK, two solutions. One inline:
```
input = Regex.Replace(input, @"(EQUIP:)(\S+)", m => string.Format(@"<a title=""View equipment item {1}"" href=""/EquipmentDisplay.asp?eqnum={2}"">{0}{1}</a>", m.Groups[1].Value, m.Groups[2].Value, m.Groups[2].Value.ToUpper()), RegexOptions.IgnoreCase);
```
The other using a separate function:
```
input = Regex.Replace(input, @"(EQUIP:)(\S+)", Evaluator, RegexOptions.IgnoreCase);

private static string Evaluator(Match match)
{
    return string.Format(@"<a title=""View equipment item {1}"" href=""/EquipmentDisplay.asp?eqnum={2}"">{0}{1}</a>",
        match.Groups[1].Value,
        match.Groups[2].Value,
        match.Groups[2].Value.ToUpper());
}
```
|
I don't think there is a way to do this with Regex.Replace directly, but you could make it a two-step process and get the result you are looking for.
```
var match = Regex.Match(input, @"(EQUIP:)(\S+)", RegexOptions.IgnoreCase);
var result = String.Format(@"<a title=""View equipment item {1}"" href=""/EquipmentDisplay.asp?eqnum={2}"">{0}{1}</a>",
    match.Groups[1].Value,
    match.Groups[2].Value,
    match.Groups[2].Value.ToUpper());
```
|
Regular Expression Uppercase Replacement in C#
|
[
"c#",
".net",
"regex"
] |
I have a database that stores events, and a page with a calendar object on it. When rendering the days, it looks through the month's events; if any match the day currently being rendered, it creates a LinkButton to represent the event in that day's cell. I add some JavaScript to the LinkButton to change `window.location` to an event-details page, passing the EventID in the query string (I tried setting the PostBackUrl of the newly created LinkButton, but it wasn't causing a postback - no luck). I need to set a Session variable, e.g. `Session("EditMode") = "Edit"`, so the new page knows it should load an existing event's info rather than prepare to create a new event. Any suggestions?
|
Your session variables are controlled by the server; JavaScript runs client side and as such cannot modify them directly. You need to make server requests (using POST or GET and hidden iframes, or `XMLHttpRequest()` calls) to send data from the JavaScript to the server, and then have your server-side code handle the variables.
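Following the query-string suggestion, a small sketch (the page and parameter names are illustrative, not from the question):

```javascript
// Build the details-page URL with an extra "mode" parameter instead of
// trying to set a server-side Session variable from client script.
function buildEventUrl(eventId, mode) {
  return "ViewEvent.aspx?EventID=" + encodeURIComponent(eventId) +
         "&mode=" + encodeURIComponent(mode);
}

// The LinkButton's client script can then navigate with:
//   window.location = buildEventUrl(42, "Edit");
// and the receiving page sets Session("EditMode") server-side from the
// "mode" query-string value.
console.log(buildEventUrl(42, "Edit"));  // ViewEvent.aspx?EventID=42&mode=Edit
```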
|
Add another query string variable that the page can use to trigger existing vs new.
|
Javascript and session variables
|
[
"javascript",
"session"
] |
I can't seem to find the `_findfirst`/`findfirst` and `_findnext`/`findnext` APIs in gcc on Linux, and would actually rather use the Standard Template Library (STL) for this if it is available there.
Does anyone know what API there is available for listing files in a directory under Linux for C++ (gcc)?
|
Check out the [Boost.Filesystem](http://www.boost.org/doc/libs/1_36_0/libs/filesystem/doc/index.htm) library.
In particular, the [basic\_directory\_iterator](http://www.boost.org/doc/libs/1_36_0/libs/filesystem/doc/reference.html#Class-template-basic_directory_iterator).
|
It's not a C++-style API, but the API you aren't finding (the Linux/Unix equivalent of the DOS/Windows-style findfirst/findnext) is opendir/readdir/closedir.
The main advantage of using opendir/readdir/closedir is that you do not need any extra library (it's part of the C library, which you are already using). In fact, the Boost filesystem library uses opendir/readdir/closedir to get the list of files in a directory.
References:
* <http://www.opengroup.org/onlinepubs/009695399/functions/opendir.html>
* <http://www.opengroup.org/onlinepubs/009695399/functions/readdir.html>
* <http://www.opengroup.org/onlinepubs/009695399/functions/closedir.html>
|
Is there a standard way to do findfirst, findnext with gcc on linux using stl?
|
[
"c++",
"linux",
"gcc",
"stl",
"file"
] |
I'm looking for an example algorithm of smart pagination. By smart, what I mean is that I only want to show, for example, 2 adjacent pages to the current page, so instead of ending up with a ridiculously long page list, I truncate it.
Here's a quick example to make it clearer... this is what I have now:
```
Pages: 1 2 3 4 [5] 6 7 8 9 10 11
```
This is what I want to end up with:
```
Pages: ... 3 4 [5] 6 7 ...
```
(In this example, I'm only showing 2 adjacent pages to the current page)
I'm implementing it in PHP/MySQL, and the "basic" pagination (no truncating) is already coded; I'm just looking for an example to optimize it. It can be an example in any language, as long as it gives me an idea of how to implement it.
|
Here is some code based on original code from [this very old link](https://www.strangerstudios.com/sandbox/pagination/diggstyle.php). It uses markup compatible with Bootstrap's pagination component, and outputs page links like this:
```
[1] 2 3 4 5 6 ... 100
1 [2] 3 4 5 6 ... 100
...
1 2 ... 14 15 [16] 17 18 ... 100
...
1 2 ... 97 [98] 99 100
```
```
<?php
// How many adjacent pages should be shown on each side?
$adjacents = 3;

// How many items to show per page
$limit = 5;

// If no page var is given, default to 1.
$page = (int)($_GET["page"] ?? 1);

// First item to display on this page
$start = ($page - 1) * $limit;

/* Get the total row count (counting only the fetched page's rows would be wrong). */
$total_rows = $db->query("SELECT COUNT(*) FROM mytable")->fetchColumn();

/* Get this page's data. */
$data = $db
    ->query("SELECT * FROM mytable LIMIT $start, $limit")
    ->fetchAll();

/* Setup page vars for display. */
$prev = $page - 1;
$next = $page + 1;
$lastpage = (int)ceil($total_rows / $limit);
// Last page minus 1
$lpm1 = $lastpage - 1;

$first_pages = "<li class='page-item'><a class='page-link' href='?page=1'>1</a></li>" .
    "<li class='page-item'><a class='page-link' href='?page=2'>2</a></li>";
$ellipsis = "<li class='page-item disabled'><span class='page-link'>...</span></li>";
$last_pages = "<li class='page-item'><a class='page-link' href='?page=$lpm1'>$lpm1</a></li>" .
    "<li class='page-item'><a class='page-link' href='?page=$lastpage'>$lastpage</a></li>";

$pagination = "<nav aria-label='page navigation'>";
$pagination .= "<ul class='pagination'>";

// Previous button
$disabled = ($page === 1) ? "disabled" : "";
$pagination .= "<li class='page-item $disabled'><a class='page-link' href='?page=$prev'>« previous</a></li>";

// Pages
if ($lastpage < 7 + ($adjacents * 2)) {
    // Not enough pages to bother breaking it up
    for ($i = 1; $i <= $lastpage; $i++) {
        $active = $i === $page ? "active" : "";
        $pagination .= "<li class='page-item $active'><a class='page-link' href='?page=$i'>$i</a></li>";
    }
} elseif ($lastpage > 5 + ($adjacents * 2)) {
    // Enough pages to hide some
    if ($page < 1 + ($adjacents * 2)) {
        // Close to beginning; only hide later pages
        for ($i = 1; $i < 4 + ($adjacents * 2); $i++) {
            $active = $i === $page ? "active" : "";
            $pagination .= "<li class='page-item $active'><a class='page-link' href='?page=$i'>$i</a></li>";
        }
        $pagination .= $ellipsis;
        $pagination .= $last_pages;
    } elseif ($lastpage - ($adjacents * 2) > $page && $page > ($adjacents * 2)) {
        // In middle; hide some front and some back
        $pagination .= $first_pages;
        $pagination .= $ellipsis;
        for ($i = $page - $adjacents; $i <= $page + $adjacents; $i++) {
            $active = $i === $page ? "active" : "";
            $pagination .= "<li class='page-item $active'><a class='page-link' href='?page=$i'>$i</a></li>";
        }
        $pagination .= $ellipsis;
        $pagination .= $last_pages;
    } else {
        // Close to end; only hide early pages
        $pagination .= $first_pages;
        $pagination .= $ellipsis;
        for ($i = $lastpage - (2 + ($adjacents * 2)); $i <= $lastpage; $i++) {
            $active = $i === $page ? "active" : "";
            $pagination .= "<li class='page-item $active'><a class='page-link' href='?page=$i'>$i</a></li>";
        }
    }
}

// Next button
$disabled = ($page === $lastpage) ? "disabled" : "";
$pagination .= "<li class='page-item $disabled'><a class='page-link' href='?page=$next'>next »</a></li>";
$pagination .= "</ul></nav>";

if ($lastpage <= 1) {
    $pagination = "";
}

echo $pagination;
foreach ($data as $row) {
    // display your data
}
echo $pagination;
```
|
Kinda late =), but here is my go at it:
```
function Pagination($data, $limit = null, $current = null, $adjacents = null)
{
    $result = array();

    if (isset($data, $limit) === true)
    {
        $result = range(1, ceil($data / $limit));

        if (isset($current, $adjacents) === true)
        {
            if (($adjacents = floor($adjacents / 2) * 2 + 1) >= 1)
            {
                $result = array_slice($result, max(0, min(count($result) - $adjacents, intval($current) - ceil($adjacents / 2))), $adjacents);
            }
        }
    }

    return $result;
}
```
---
**Example:**
```
$total = 1024;
$per_page = 10;
$current_page = 2;
$adjacent_links = 4;
print_r(Pagination($total, $per_page, $current_page, $adjacent_links));
```
**Output ([@ Codepad](http://codepad.org/vcULT1DA)):**
```
Array
(
[0] => 1
[1] => 2
[2] => 3
[3] => 4
[4] => 5
)
```
---
**Another example:**
```
$total = 1024;
$per_page = 10;
$current_page = 42;
$adjacent_links = 4;
print_r(Pagination($total, $per_page, $current_page, $adjacent_links));
```
**Output ([@ Codepad](http://codepad.org/HAKkgrb2)):**
```
Array
(
[0] => 40
[1] => 41
[2] => 42
[3] => 43
[4] => 44
)
```
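For reference, the centered-window idea that both answers implement can be reduced to a few lines. A language-agnostic sketch in Python (function and marker names are mine, not from either answer; `None` stands for an ellipsis):

```python
def page_window(current, last, adjacents=2):
    """Pages to display: first/last, an ellipsis marker (None), and a
    window of `adjacents` pages on each side of the current page."""
    if last <= 2 * adjacents + 5:
        return list(range(1, last + 1))     # few pages: show them all
    lo = max(1, current - adjacents)
    hi = min(last, current + adjacents)
    pages = list(range(lo, hi + 1))
    if lo > 1:
        pages = [1, None] + pages           # leading "1 ..."
    if hi < last:
        pages = pages + [None, last]        # trailing "... last"
    return pages

print(page_window(16, 100))  # [1, None, 14, 15, 16, 17, 18, None, 100]
```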
|
Smart pagination algorithm
|
[
"php",
"pagination"
] |
Python's access to environment variables does not accurately reflect the operating system's view of the process's environment.
os.getenv and os.environ do not function as expected in particular cases.
Is there a way to properly get the running process' environment?
---
To demonstrate what I mean, take the two roughly equivalent programs (the first in C, the other in python):
```
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char *argv[]){
    char *env;
    for(;;){
        env = getenv("SOME_VARIABLE");
        if(env)
            puts(env);
        sleep(5);
    }
}
```
---
```
import os
import time

while True:
    env = os.getenv("SOME_VARIABLE")
    if env is not None:
        print env
    time.sleep(5)
```
---
Now, if we run the C program and attach to the running process with gdb and forcibly change the environment under the hood by doing something like this:
```
(gdb) print setenv("SOME_VARIABLE", "my value", 1)
[Switching to Thread -1208600896 (LWP 16163)]
$1 = 0
(gdb) print (char *)getenv("SOME_VARIABLE")
$2 = 0x8293126 "my value"
```
then the aforementioned C program will start spewing out "my value" once every 5 seconds. The aforementioned python program, however, will not.
Is there a way to get the python program to function like the C program in this case?
(Yes, I realize this is a very obscure and potentially damaging action to perform on a running process)
Also, I'm currently using python 2.4, this may have been fixed in a later version of python.
|
That's a very good question.
It turns out that the `os` module initializes `os.environ` to the value of [`posix`](http://docs.python.org/library/posix.html)`.environ`, which is set on interpreter start up. In other words, the standard library does not appear to provide access to the [getenv](http://www.opengroup.org/onlinepubs/000095399/functions/getenv.html) function.
That is a case where it would probably be safe to use [ctypes](http://docs.python.org/library/ctypes.html) on Unix, since you would be calling an ultra-standard libc function.
|
You can use `ctypes` to do this pretty simply:
```
>>> from ctypes import CDLL, c_char_p
>>> getenv = CDLL("libc.so.6").getenv
>>> getenv.restype = c_char_p
>>> getenv("HOME")
'/home/glyph'
```
|
Environment Variables in Python on Linux
|
[
"python",
"gdb",
"environment-variables"
] |
I have noticed that, regardless of a script's execution time, every `date()` call returns the same timestamp no matter where in the script it is called. It looks like it just returns the time at which the script started executing.
For logging purposes, it would be extremely useful to be able to get incremental timestamps from within a script. Is this possible? Is there a way to do this that is relatively lightweight?
**Edit:** The example for the [microtime()](http://us.php.net/microtime) function suggests it might do this. Can anyone confirm?
**Update:** microtime() does work, but I cannot format it with the date() function because date() only accepts timestamps as integers (so no microseconds). How can I get a properly formatted date from the value returned by microtime() ?
|
<http://us.php.net/microtime> gives me different times within the same script.
|
I ran this code on my machine:
```
<?php
$time_start = time();
sleep(2);
$time_end = time();

print 'Start: ' . date("m/d/Y @ g:i:sA", $time_start) . '<br>';
print 'End: ' . date("m/d/Y @ g:i:sA", $time_end);
?>
```
And it output:
```
Start: 10/23/2008 @ 3:12:23PM
End: 10/23/2008 @ 3:12:25PM
```
Leading me to believe `time()` does not just return the time when execution started.
|
Getting Precise Times in PHP
|
[
"php",
"logging",
"date",
"timestamp"
] |
I'm creating a table that looks something like this.
```
CREATE TABLE packages
(
    productCode char(2)
    , name nvarchar(100)
    , ...
)
```
How do I make sure the productCode is always one of two values `XJ` or `XD`?
|
```
ALTER TABLE packages
ADD CONSTRAINT constraintname CHECK (productCode in ('XJ', 'XD'))
```
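For anyone wanting to try this out quickly, the same CHECK clause can be exercised from Python's built-in sqlite3 module (the question targets SQL Server, but the CHECK syntax is identical; SQLite's types differ, so TEXT stands in for char(2)):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE packages
    (
        productCode TEXT CHECK (productCode IN ('XJ', 'XD')),
        name        TEXT
    )
""")

con.execute("INSERT INTO packages VALUES ('XJ', 'widget')")   # accepted

try:
    con.execute("INSERT INTO packages VALUES ('ZZ', 'bogus')")
except sqlite3.IntegrityError as err:
    print("rejected:", err)   # the CHECK constraint blocks any other code
```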
|
Either make it a [foreign key](http://en.wikipedia.org/wiki/Foreign_key) to a lookup table, or add a [check constraint](http://en.wikipedia.org/wiki/Check_constraint) to enforce it.
|
How do I ensure a column contains one of a set of values?
|
[
"sql",
"sql-server"
] |
I have this CheckBoxList on a page:
```
<asp:checkboxlist runat="server" id="Locations" datasourceid="LocationsDatasource"
datatextfield="CountryName" datavaluefield="CountryCode" />
```
I'd like to loop through the checkbox elements on the client using Javascript and grab the value of each checked checkbox, but the values don't appear to be available on the client side. The HTML output looks like this:
```
<table id="ctl00_Content_Locations" class="SearchFilterCheckboxlist" cellspacing="0" cellpadding="0" border="0" style="width:235px;border-collapse:collapse;">
<tr>
<td><input id="ctl00_Content_Locations_0" type="checkbox" name="ctl00$Content$Locations$0" /><label for="ctl00_Content_Locations_0">Democratic Republic of the Congo</label></td>
</tr><tr>
<td><input id="ctl00_Content_Locations_1" type="checkbox" name="ctl00$Content$Locations$1" /><label for="ctl00_Content_Locations_1">Central African Republic</label></td>
</tr><tr>
<td><input id="ctl00_Content_Locations_2" type="checkbox" name="ctl00$Content$Locations$2" /><label for="ctl00_Content_Locations_2">Congo</label></td>
</tr><tr>
<td><input id="ctl00_Content_Locations_3" type="checkbox" name="ctl00$Content$Locations$3" /><label for="ctl00_Content_Locations_3">Cameroon</label></td>
</tr><tr>
<td><input id="ctl00_Content_Locations_4" type="checkbox" name="ctl00$Content$Locations$4" /><label for="ctl00_Content_Locations_4">Gabon</label></td>
</tr><tr>
<td><input id="ctl00_Content_Locations_5" type="checkbox" name="ctl00$Content$Locations$5" /><label for="ctl00_Content_Locations_5">Equatorial Guinea</label></td>
</tr>
```
The values ("cd", "cg", "ga", etc.) are nowhere to be found. Where are they? Is it even possible to access them on the client, or do I need to build this checkboxlist myself using a repeater or something?
|
Stored in ViewState, you cannot access them on the client without [**some hacking**](http://www.daveparslow.com/2007/08/assigning-value-to-aspnet-checkbox.html).
|
I finally have the answer I've been looking for!
The asp.net CheckboxList control *does* in fact render the value attribute to HTML - it has been working for me in a Production site for over a year now! (We ALWAYS have EnableViewState turned off for all our sites and it still works, without any tweaks or hacks)
However, all of a sudden it stopped working one day, and our CheckboxList checkboxes no longer rendered their value attribute in HTML! WTF, you say? So did we! It took a while to figure out, but since it had worked before, we knew there had to be a reason. The reason was an accidental change to our web.config!
```
<pages controlRenderingCompatibilityVersion="3.5">
```
We removed this attribute from the pages configuration section and that did the trick!
Why is this so? We reflected the code for the CheckboxList control and found this in the RenderItem method:
```
if (this.RenderingCompatibility >= VersionUtil.Framework40)
{
    this._controlToRepeat.InputAttributes.Add("value", item.Value);
}
```
Dearest dev brothers and sisters, do not despair! ALL the answers I found here on Stack Exchange and the rest of the web gave erroneous information! As of .NET 4.0, ASP.NET renders the value attribute of the checkboxes of a CheckboxList:
```
<input type="checkbox" value="myvalue1" id="someid" />
```
Perhaps not practically useful, but Microsoft gave you the ability to add a "controlRenderingCompatibilityVersion" attribute to your web.config to turn this off by setting it to a version lower than 4, which for our purposes is completely unnecessary and in fact harmful, since our JavaScript code relied on the value attribute.
We were getting `chk.val()` equal to "on" for all our checkboxes, which is what originally alerted us to this problem in the first place (we were using jQuery's `val()`, which gets the value of the checkbox; it turns out "on" is the default value of a checkbox with no value attribute. Learn something every day).
|
Where are the DataValueField values for a CheckBoxList stored?
|
[
"asp.net",
"javascript"
] |
I've noticed that when generating a new C++ project using MS Visual Studio 2008, the **Release** build contains debugging symbols - specifically the following settings are enabled:
* The C++/General/Debug Information Format is set to **Program Database**.
* The Linker/Debugging/Generate Debug Info setting is set to **Yes**.
I have never noticed this on earlier releases of Visual Studio.
So, other than generating a larger EXE file, is there any downside to leaving these settings enabled?
|
We have had those settings turned on in our commercial releases for years now with no apparent downside. The upsides, though, are enormous.
We have integrated a crash dump packager that packages the dump along with some other information and emails it (with the user's consent) to a company inbox. This has helped us find problems that would have taken us *forever* to reproduce and find otherwise.
Although it's slightly off topic, here's a link to an excellent contribution someone made that gives you an easy way to include a crash reporter to a C++/Windows app:
<http://www.codeproject.com/KB/debug/crash_report.aspx>
Note: It would be wise, though, not to ship the PDB file with your release. That said, you must keep the PDB file that matches your released version so that you can correctly debug problems in the future. If a PDB file that wasn't built with the same code that built the exe is used, the stack you see when you try to debug the dmp will be wrong.
|
They're turned on by default because:
1. If you don't create them now, you can't create them later.
2. You need them.
Enabling debug info in Visual C++ causes a small entry to be added to the binary header, identifying the PDB for this binary. It's too small to be of any size concern, and doesn't contain any useful secrets that you might be concerned about sharing.
(The header entry is labeled RSDS: who can guess why?)
Of course, those PDBs will use more disk space on your build machine / in your backups. Deal with it. You need those PDBs when it comes time to debug something.
|
Visual C++ 2008 'Release' build contains debug information
|
[
"c++",
"visual-studio",
"debugging"
] |
Is there an easy way to do this in PHP? I want to make sure that only web requests from certain countries are able to access my website.
Any ideas?
|
Use an IP geolocation database (some are free) and `$_SERVER["REMOTE_ADDR"]` to get the visitor's IP address.
<http://www.maxmind.com/app/geolitecity> is a free (less accurate) version of a commercial one.
|
Like Gilles, I've used MaxMind's GeoIP stuff for this in the past - configured with the PECL extension for speed. It works reasonably well, but...
The requirement for this kind of thing tends to come from somebody who doesn't understand that it is impossible to reliably determine a visitor's location this way. It's very important that the person asking for it be encouraged to understand that it is *almost* useless. A typical thing that happens with geolocation is this:
> > > Client: I want to be able to restrict content by IP
> > > Dev: You do know that that is impossible to do reliably?
> > > Client: Ah yes, but this company say they will sell me something that will do it
> > > Dev: Yes but it isn't accurate and is easy to circumvent and usually indicates a poor business model for internet based content
> > > Client: Can you do it?
> > > Dev: Whatever...
...Six months later...
> > > Client: Some of my visitors have been complaining they can't see my content and some bad people who shouldn't see it have been able to!
> > > Dev: /me slaps head
It's only one step on from there to *"can I have it so that when a user right clicks in their browser a little sign pops up saying **'these images are copyright Idiot Inc.'**?"*
Sorry, obviously in a cynical mood today!
|
How do I limit which countries can view my website ( PHP )
|
[
"php",
"geolocation"
] |
Does anybody have a slicker way to do this? It seems like it should be easier than this, but I'm having a mental block. Basically I need to remove items from a dictionary, recursing into the values of items that are themselves dictionaries.
```
private void RemoveNotPermittedItems(ActionDictionary menu)
{
    var keysToRemove = new List<string>();
    foreach (var item in menu)
    {
        if (!GetIsPermitted(item.Value.Call))
        {
            keysToRemove.Add(item.Key);
        }
        else if (item.Value is ActionDictionary)
        {
            RemoveNotPermittedItems((ActionDictionary)item.Value);
            if (((ActionDictionary)item.Value).Count == 0)
            {
                keysToRemove.Add(item.Key);
            }
        }
    }
    foreach (var key in (from item in menu where keysToRemove.Contains(item.Key) select item.Key).ToArray())
    {
        menu.Remove(key);
    }
}
```
Action dictionary is like this:
```
public class ActionDictionary : Dictionary<string, IActionItem>, IActionItem
```
|
You don't really need to collect the keys and iterate them again if you iterate the dictionary in reverse (from 'menu.Count - 1' to zero). Iterating in forward order will, of course, yield mutated collection exceptions if you start removing things.
I don't know what an ActionDictionary is, so I couldn't test your exact scenario, but here's an example using just `Dictionary<string,object>`.
```
static int counter = 0;

private static void RemoveNotPermittedItems(Dictionary<string, object> menu)
{
    for (int c = menu.Count - 1; c >= 0; c--)
    {
        var key = menu.Keys.ElementAt(c);
        var value = menu[key];
        if (value is Dictionary<string, object>)
        {
            RemoveNotPermittedItems((Dictionary<string, object>)value);
            if (((Dictionary<string, object>)value).Count == 0)
            {
                menu.Remove(key);
            }
        }
        else if (!GetIsPermitted(value))
        {
            menu.Remove(key);
        }
    }
}

// This was just added to actually cause some elements to be removed...
private static bool GetIsPermitted(object value)
{
    if (counter++ % 2 == 0)
        return false;
    return true;
}
```
I also reversed the 'if' statement, but that was just an assumption that you'd want to do type checking before calling a method that acts on the item's value... it will work either way, assuming 'GetIsPermitted' always returns true for an ActionDictionary.
Hope this helps.
|
While foreach and GetEnumerator fail, a for loop works:
```
var table = new Dictionary<string, int>() { { "first", 1 }, { "second", 2 } };
for (int i = 0; i < table.Keys.Count; i++) // instead of: foreach (string key in table.Keys)
{
    string key = table.Keys.ElementAt(i);
    if (key.StartsWith("f"))
    {
        table.Remove(key);
    }
}
```
But ElementAt() is a .NET 3.5 feature.
|
Removing Items From IDictionary With Recursion
|
[
"c#",
"recursion",
"idictionary"
] |
```
string[] files = new string[2];
files[0] = "ThinkFarAhead.Example.Settings.Configuration_Local.xml";
files[1] = "ThinkFarAhead.Example.Settings.Configuration_Global.xml";

// Resharper complains this is an "access to modified closure"
for (int i = 0; i < files.Length; i++)
{
    // ReSharper disable AccessToModifiedClosure
    if (Array.Exists(Assembly.GetExecutingAssembly().GetManifestResourceNames(),
        delegate(string name) { return name.Equals(files[i]); }))
        return Assembly.GetExecutingAssembly().GetManifestResourceStream(files[i]);
    // ReSharper restore AccessToModifiedClosure
}
```
The above seems to work fine though ReSharper complains that this is "access to modified closure". Can any one shed light on this?
(this topic continued [here](https://stackoverflow.com/questions/304258))
|
In this case, it's okay, since you are actually executing the delegate *within* the loop.
If you were saving the delegate and using it later, however, you'd find that all of the delegates would throw exceptions when trying to access files[i] - they're capturing the *variable* `i` rather than its value at the time of the delegate's creation.
In short, it's something to be aware of as a *potential* trap, but in this case it doesn't hurt you.
See the [bottom of this page](http://jonskeet.uk/csharp/csharp2/delegates.html#captured.variables) for a more complex example where the results are counterintuitive.
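The capture-the-variable-not-the-value behavior is easy to demonstrate with deferred execution; a quick Python sketch of the same trap:

```python
# Each lambda closes over the variable i, not its value at creation time,
# so all three see i's final value once the loop has finished.
funcs = [lambda: i for i in range(3)]
print([f() for f in funcs])          # -> [2, 2, 2], not [0, 1, 2]

# Binding i as a default argument captures the current value instead.
funcs = [lambda i=i: i for i in range(3)]
print([f() for f in funcs])          # -> [0, 1, 2]
```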
|
I know this is an old question, but I've recently been studying closures and thought a code sample might be useful. Behind the scenes, the compiler is generating a class that represents a lexical closure for your function call. It probably looks something like:
```
private sealed class Closure
{
    public string[] files;
    public int i;

    public bool YourAnonymousMethod(string name)
    {
        return name.Equals(this.files[this.i]);
    }
}
```
As mentioned above, your function works because the predicates are invoked immediately after creation. The compiler will generate something like:
```
private string Works()
{
    var closure = new Closure();
    closure.files = new string[3];
    closure.files[0] = "notfoo";
    closure.files[1] = "bar";
    closure.files[2] = "notbaz";

    var arrayToSearch = new string[] { "foo", "bar", "baz" };

    // this works, because the predicates are being executed during the loop
    for (closure.i = 0; closure.i < closure.files.Length; closure.i++)
    {
        if (Array.Exists(arrayToSearch, closure.YourAnonymousMethod))
            return closure.files[closure.i];
    }
    return null;
}
```
On the other hand, if you were to store and then later invoke the predicates, you would see that every single call to the predicates would really be calling the same method on the same instance of the closure class and therefore would use the same value for i.
|
Access to Modified Closure
|
[
"",
"c#",
"resharper",
"closures",
""
] |
This is really only easy to explain with an example, so to remove the intersection of a list from within a dict I usually do something like this:
```
a = {1:'', 2:'', 3:'', 4:''}
exclusion = [3, 4, 5]
# have to build up a new list or the iteration breaks
toRemove = []
for var in a.iterkeys():
if var in exclusion:
toRemove.append(var)
for var in toRemove:
del a[var]
```
This might seem like an unusual example, but it's surprising the number of times I've had to do something like this. Doing this with sets would be much nicer, but I clearly want to retain the 'values' for the dict.
This method is annoying because it requires two loops and an extra list. Is there a cleaner and more efficient way of doing this?
|
Consider [dict.pop](http://docs.python.org/library/stdtypes.html#dict.pop):
```
for key in exclusion:
a.pop(key, None)
```
The `None` keeps pop from raising an exception when `key` isn't a key.
|
```
a = dict((key,value) for (key,value) in a.iteritems() if key not in exclusion)
```
|
Removing a subset of a dict from within a list
|
[
"",
"python",
"list",
"containers",
""
] |
I am working on a web page that is using jQuery. I have an Ajax call that gets data from the server and updates a div. Inside that data there is a jQuery function, but the function is not being called after the data is loaded into the page. I have the proper js files included in the page already.
This is what is returned from the Ajax call and placed into a div:
```
<script type="text/javascript">
$(function() {
$('input').myFunction('param');
});
</script>
<p> other html </p>
```
How do I get the returned javascript to run after the html is inserted into the page?
(I am using Rails with the jRails plugin )
|
If you want JavaScript tag evaluation with HTML content, you should set the dataType option of the ajax call to "html":
```
$.ajax({
type: "GET",
url: "yourPage.htm",
dataType: "html"
});
```
Or dataType "script", if you want to load and execute a .js file:
```
$.ajax({
type: "GET",
url: "test.js",
dataType: "script"
});
```
more info here: [Ajax/jQuery.ajax](http://docs.jquery.com/Ajax/jQuery.ajax#options)
|
I had a similar problem where I wanted to add little jQuery date pickers to a couple of fields I was retrieving via ajax. Here is what I did to get around it... just quick and dirty. Instead of returning this whole block from my ajax call:
```
<script type="text/javascript">
$(function() {
$('input').myFunction('param');
});
</script>
<p> other html </p>
```
I would return this (note the made up |x| separator)
```
$(function() {
$('input').myFunction('param');
});
|x|
<p> other html </p>
```
Then when I received the data back via ajax, I split the return value into 2 parts: the javascript to be executed, and the html to display:
```
r = returnvalfromajax.split("|x|");
document.getElementById('whatever').innerHTML = r[1];
eval(r[0]);
```
|
Calling a jQuery function inside html return from an AJAX call
|
[
"",
"javascript",
"jquery",
"ajax",
""
] |
I have a Windows application running in the background, with functions mapped to hot keys. For example, if I put a message box into one of these functions and assign the hot key `Alt`+`Ctrl`+`D`, then pressing `Alt`, `Ctrl` and `D` together brings up the message box. My application is working fine up to this point.
Now I want to write code inside this function so that when I am using another application like Notepad, selecting a particular line of text and pressing the hot key `Alt` + `Ctrl` + `D` copies the selected text, appends "\_copied" to it, and pastes it back into Notepad.
If anyone has tried a similar application, please help me with your valuable input.
|
Your question has two answers
## How can my app set a global hotkey
You have to call an API function called RegisterHotKey
```
BOOL RegisterHotKey(
HWND hWnd, // window to receive hot-key notification
int id, // identifier of hot key
UINT fsModifiers, // key-modifier flags
UINT vk // virtual-key code
);
```
More info here: <http://www.codeproject.com/KB/system/nishhotkeys01.aspx>
## How to get the selected text from the foreground window
Easiest way is to send Ctrl-C to the window and then capture the clipboard content.
```
[DllImport("User32.dll")]
private static extern bool SetForegroundWindow(IntPtr hWnd);
[DllImport("user32.dll", CharSet=CharSet.Auto)]
static public extern IntPtr GetForegroundWindow();
[DllImport("user32.dll")]
static extern void keybd_event(byte bVk, byte bScan, uint dwFlags, uint dwExtraInfo);
.....
private void SendCtrlC(IntPtr hWnd)
{
uint KEYEVENTF_KEYUP = 2;
byte VK_CONTROL = 0x11;
SetForegroundWindow(hWnd);
keybd_event(VK_CONTROL,0,0,0);
keybd_event (0x43, 0, 0, 0); // send the C key (0x43 is 'C')
keybd_event (0x43, 0, KEYEVENTF_KEYUP, 0);
keybd_event (VK_CONTROL, 0, KEYEVENTF_KEYUP, 0); // left Control up
}
```
Disclaimer: Code by Marcus Peters from here: <http://bytes.com/forum/post1029553-5.html>
Posted here for your convenience.
|
## UPDATE 2020
### How to get the selected text from the foreground window
No idea for how long this has been possible, but instead of fighting with Win32 programming (mostly `user32.dll` and various Windows messages like `WM_GETTEXT, WM_COPY` and various `SendMessage(handle, WM_GETTEXT, maxLength, sb)` calls), which is advised in most of the SO threads on this topic, **I easily managed to access the selected text in any window from my C# code as follows**:
```
// programmatically copy selected text into clipboard
await System.Threading.Tasks.Task.Factory.StartNew(fetchSelectionToClipboard);
// access clipboard which now contains selected text in foreground window (active application)
await System.Threading.Tasks.Task.Factory.StartNew(useClipBoardValue);
```
Here the methods being called:
```
static void fetchSelectionToClipboard()
{
Thread.Sleep(400);
SendKeys.SendWait("^c"); // magic line which copies selected text to clipboard
Thread.Sleep(400);
}
// depends on the type of your app; you sometimes need to access the clipboard from a Single Thread Apartment model, therefore I'm creating a new thread here
static void useClipBoardValue()
{
Exception threadEx = null;
// Single Thread Apartment model
Thread staThread = new Thread(
delegate ()
{
try
{
Console.WriteLine(Clipboard.GetText());
}
catch (Exception ex)
{
threadEx = ex;
}
});
staThread.SetApartmentState(ApartmentState.STA);
staThread.Start();
staThread.Join();
}
```
|
Copy and Modify selected text in different application
|
[
"",
"c#",
".net",
"clipboard",
"hotkeys",
""
] |
I want to use an AOP framework, but I have two constraints.
Any framework that I choose
1. Must be fairly independent. I plan to use this in a legacy code base, and hence
cannot upgrade prospective dependencies like commons-logging-XXX.jar to commons-logging-newest.jar.
2. Must be fairly well documented, should not be too complex to understand and integrate.
|
[AspectJ](http://www.eclipse.org/aspectj/) as far as I can tell is just a compiler and imposes no dependencies on compiled programs other than including the AspectJ runtime jar.
It is also actively maintained, part of the Eclipse project and has a nice development environment ([AJDT](http://www.eclipse.org/ajdt/), a plugin to Eclipse).
|
I like Spring + AspectJ. You can do all of your pointcut definitions in xml **if you like**, and you don't need to change a line of code on your legacy stuff. If you already know spring and AOP concepts, there's not much to learn.
|
Which is the simplest and least dependent AOP framework in Java?
|
[
"",
"java",
"jakarta-ee",
"aop",
""
] |
What is the best way to validate a crontab entry with PHP? Should I be using a regex, or an external library? I've got a PHP script that adds/removes entries from a crontab file, but want to have some way to verify that the time interval portion is in a valid format.
|
Hmmm, interesting problem.
If you're going to really validate it, regex isn't going to be enough, you'll have to actually parse the entry and validate each of the scheduling bits. That's because each bit can be a number, a month/day of the week string, a range (2-7), a set (3, 4, Saturday), a Vixie cron-style shortcut (60/5) or any combination of the above -- any single regex approach is going to get very hairy, fast.
Just using the `crontab` program of Vixie cron to validate isn't sufficient, because it actually doesn't validate completely! I can get `crontab` to accept all sorts of illegal things.
Dave Taylor's Wicked Cool Shell Scripts ([Google books link](http://books.google.com/books?id=Df7P1WyG87sC&pg=PA147&lpg=PA147&source=web&ots=duLswqHFGM&sig=CIGfV56qkKVAGnz17zF76JmFhgQ&hl=en&sa=X&oi=book_result&resnum=1&ct=result#PPA147)) has a sh script that does partial validation, I found the discussion interesting. You might also use or adapt the code.
I also turned up links to two PHP classes that do what you say (whose quality I haven't evaluated):
* <http://www.phpclasses.org/browse/package/1189.html>
* <http://www.phpclasses.org/browse/package/1985.html>
Another approach (depending on what your app needs to do) might be to have PHP construct the crontab entry programmatically and insert it, so you know it's always valid, rather than try to validate an untrusted string. Then you would just need to make a "build a crontab entry" UI, which could be simple if you don't need really complicated scheduling combinations.
|
Who said regular expressions can't do that?
Courtesy of my employer, [Salir.com](http://www.salir.com), here's a PHPUnit test which does such validation. Feel free to modify & distribute. I'll appreciate if you keep the @author notice & link to web site.
```
<?php
/**
* @author Jordi Salvat i Alabart - with thanks to <a href="www.salir.com">Salir.com</a>.
*/
abstract class CrontabChecker extends PHPUnit_Framework_TestCase {
protected function assertFileIsValidUserCrontab($file) {
$f= @fopen($file, 'r', 1);
$this->assertTrue($f !== false, 'Crontab file must exist');
while (($line= fgets($f)) !== false) {
$this->assertLineIsValid($line);
}
}
protected function assertLineIsValid($line) {
$regexp= $this->buildRegexp();
$this->assertTrue(preg_match("/$regexp/", $line) !== 0);
}
private function buildRegexp() {
$numbers= array(
'min'=>'[0-5]?\d',
'hour'=>'[01]?\d|2[0-3]',
'day'=>'0?[1-9]|[12]\d|3[01]',
'month'=>'[1-9]|1[012]',
'dow'=>'[0-7]'
);
foreach($numbers as $field=>$number) {
$range= "($number)(-($number)(\/\d+)?)?";
$field_re[$field]= "\*(\/\d+)?|$range(,$range)*";
}
$field_re['month'].='|jan|feb|mar|apr|may|jun|jul|aug|sep|oct|nov|dec';
$field_re['dow'].='|mon|tue|wed|thu|fri|sat|sun';
$fields_re= '('.join(')\s+(', $field_re).')';
$replacements= '@reboot|@yearly|@annually|@monthly|@weekly|@daily|@midnight|@hourly';
return '^\s*('.
'$'.
'|#'.
'|\w+\s*='.
"|$fields_re\s+\S".
"|($replacements)\s+\S".
')';
}
}
```
|
Validating Crontab Entries with PHP
|
[
"",
"php",
"regex",
"validation",
"cron",
""
] |
What is the functional programming approach to convert an `IEnumerable<string>` to a delimited string? I know I can use a loop, but I'm trying to wrap my head around functional programming.
Here's my example:
```
var selectedValues =
from ListItem item in checkboxList.Items
where item.Selected
select item.Value;
var delimitedString = ??
```
.. or could I do this in just the first var assignment (append each result to the previous)?
|
```
var delimitedString = selectedValues.Aggregate((x,y) => x + ", " + y);
```
|
```
string.Join(", ", selectedValues.ToArray())
```
|
C# - IEnumerable to delimited string
|
[
"",
"c#",
"functional-programming",
""
] |
I've inherited a large Java app that uses Struts, Spring, and Hibernate. The classes and interfaces I deal with daily are: Struts Actions, Struts ActionForms, Value Objects, Service Interfaces and Implementations, DAO Interfaces and Implementations, and Entities. I'm pretty clear on the how and why of most of these, except I'm unsure about the correct separation of responsibilities between the ActionForms, Value Objects, and Entities. I should also mention that the Domain Model (i.e. all of the entities) doesn't contain much (if any) real business logic. This is essentially a CRUD app and most of the real logic is in the database (yuck!). Anyway, there are several distinct Java related issues that I'm wondering about:
1) It seems there is not much difference between the Entities and the Value Objects (VOs), and a lot of code must be written to transform one into the other when they pass through the service layer in either direction (Struts Actions deal only with VOs, DAOs deal only with Entities). So, VOs and Entities seem somewhat redundant. Why have them both?
2) Where should the VO<->Entity translation code go? The service layer, the Entity, the VO?
3) VOs are placed directly into ActionForms and directly bound to tags in the JSP (e.g. ). Is this a good practice? If not, what's the appropriate design?
4) It is unclear how to properly handle foreign key dependencies in the Value Objects. For example, certain VOs have a type field that, in database terms, represent a foreign key relationship into a type table. In the UI, this translates into a dropdown field that lets the user pick the type, OR a label that simply displays the textual representation of the type (depending on which screen it is). Now, should the VO have a property for the type ID, the textual representation of the type, or both? Who is responsible for translating between the two, and when?
5) The VOs have a field for their database ID. I thought VOs don't have identities? What's up with this?
I hope these questions are generic enough to be of general interest. It seems this would come up all the time in this type of architecture.
Also, I have the suspicion that this architecture is way too heavy for this app, and if you have suggestions about a better one, go ahead. But I'm mainly interested in the answer to the above questions since a different architecture is a long-term refactoring that I can't do right now.
|
1.
Considering the DAO - VO transformation; whether this is useful depends on how Hibernate is used. If the entire Web request handling is in a single Hibernate session you should not really need separate VO's.
If, however, your DAO layer opens a session to retrieve an object and closes the session before you are finished using the DAO you may get trouble with collections and references to other objects. There is a fair chance that those are lazily loaded, meaning that the Session must still be opened when requesting those properties.
In short, before you start ditching those VO's have a good, hard look at you database transaction and session boundaries.
3.
As for using a VO in a Form; if the VO maps nicely to the JSP I would say why not? I'm both impressed that the data model so closely matches the process it supports, and a bit suspicious that the database has not been normalized (which may or may not pose problems in the future).
Going back to 1. If you use DAO's with lazy loading and collections, remember that the database session must also include the JSP phase as the DAO will be read in that phase.
5. The service layer must have a facility to know which database objects to alter, and the id is designed to do just that. The service layer will have to retrieve the DAO from the database and write the fields from the VO in the DAO, though it obviously does not need to update the id of the DAO with the id of the VO :)
6. What you need from the request is the id of the foreign key field. As it comes from the client you should probably check in the business logic whether an object with such an id exists.
Depending on whether the VO accepts the id of the foreign object or requires an object you should then either:
* set the id, or
* get the foreign object as a VO by id using the service layer and
put it in your VO, and store it using the service layer
Your business layer is responsible for translations, as the service layer only deals with object retrieval and storage. The text and the id are not objects themselves but identifiers of objects. The service layer may offer search facilities, but it should not need context information.
And if I read your question right your VOs refer to other objects in the database by id. In that case you enter the id. If you get a String from the client you should look it up in the business layer (using the service layer) and put the id of the found object in the VO. Or, if no ID is found, return a decent error message.
As a closing note; don't touch the DAO-VO thing unless you know what you're doing REALLY WELL. Hibernate is a powerful and complex tool which is deceptively easy to use. You can very easily make mistakes and they can be very hard to find. And customers and bosses alike don't seem to appreciate the introduction of bugs in stuff that used to work.
By the way; my conservatism in the DAO-VO thing comes from fixing problems due to similar problems in EJB2 to Hibernate transitions. The devil is in the details, and changing how you deal with the data layer is a major refactoring even if it looks like a piece of cake.
|
1) No need for separate VO and Entities : some companies mandate such a structure for their project. It might have made sense in a different project and hence it was mandated (I can only guess)
2) Service layer : it is the natural separation from the DAO and Action layers, right?
3) It does NOT hurt, however the Value Objects are bound, as long as they are properly validated before being sent to the DAOs
4) The service layer should be responsible for translating between the two, during load and save time
5) if they don't have identities, then how would you prevent duplication?
I hope these terse answers helped. I'll try and get back and give a longer answer later.
|
How to properly use Struts ActionForms, Value Objects, and Entities?
|
[
"",
"java",
"design-patterns",
"architecture",
"oop",
""
] |
I want completely automated integration testing for a Maven project. The integration tests require that an external (platform-dependent) program is started before running. Ideally, the external program would be killed after the unit tests are finished, but is not necessary.
Is there a Maven plugin to accomplish this? Other ideas?
|
You could use the [antrun](http://maven.apache.org/plugins/maven-antrun-plugin/) plugin. Inside it you would use Ant's [exec](http://ant.apache.org/manual/Tasks/exec.html) or [apply](http://ant.apache.org/manual/Tasks/apply.html) task.
Something like this.
```
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-antrun-plugin</artifactId>
<version>1.2</version>
<executions>
<execution>
<phase> <!-- a lifecycle phase --> </phase>
<configuration>
<tasks>
<apply os="unix" executable="ant">
<arg value="-p"/>
</apply>
<apply os="windows" executable="cmd.exe">
<arg value="/c"/>
<arg value="ant.bat"/>
<arg value="-p"/>
</apply>
</tasks>
</configuration>
<goals>
<goal>run</goal>
</goals>
</execution>
</executions>
</plugin>
```
Ant supports OS-specific commands, of course, through the [condition task](http://ant.apache.org/manual/Tasks/condition.html).
|
The cargo maven plugin is a good way to go if you're doing servlet development and want to deploy the resulting WAR for integration testing.
When I do this myself, I often set up a multi-module project (although that's not strictly necessary) and encapsulate all the integration testing into that one module. I then enable the module with profiles (or not) so that it's not blocking the immediate "yeah, I know I broke it" builds.
Here's the pom from that functional test module - make of it what you will:
```
<?xml version="1.0"?><project>
<parent>
<artifactId>maven-example</artifactId>
<groupId>com.jheck</groupId>
<version>1.5.0.4-SNAPSHOT</version>
</parent>
<modelVersion>4.0.0</modelVersion>
<groupId>com.jheck.example</groupId>
<artifactId>functional-test</artifactId>
<name>Example Functional Test</name>
<packaging>pom</packaging>
<dependencies>
<dependency>
<groupId>com.jheck.example</groupId>
<artifactId>example-war</artifactId>
<type>war</type>
<scope>provided</scope>
<version>LATEST</version>
</dependency>
<dependency>
<groupId>httpunit</groupId>
<artifactId>httpunit</artifactId>
<version>1.6.1</version>
<scope>test</scope>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<executions>
<execution>
<goals>
<goal>testCompile</goal>
</goals>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<executions>
<execution>
<phase>integration-test</phase>
<goals>
<goal>test</goal>
</goals>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.codehaus.cargo</groupId>
<artifactId>cargo-maven2-plugin</artifactId>
<version>0.3</version>
<configuration>
<wait>false</wait> <!-- don't pause on launching tomcat... -->
<container>
<containerId>tomcat5x</containerId>
<log>${project.build.directory}/cargo.log</log>
<zipUrlInstaller>
<!--
<url>http://www.apache.org/dist/tomcat/tomcat-5/v5.0.30/bin/jakarta-tomcat-5.0.30.zip</url>
-->
<!-- better be using Java 1.5... -->
<url>http://www.apache.org/dist/tomcat/tomcat-5/v5.5.26/bin/apache-tomcat-5.5.26.zip</url>
<installDir>${installDir}</installDir>
</zipUrlInstaller>
</container>
<configuration>
<!-- where the running instance will be deployed for testing -->
<home>${project.build.directory}/tomcat5x/container</home>
</configuration>
</configuration>
<executions>
<execution>
<id>start-container</id>
<phase>pre-integration-test</phase>
<goals>
<goal>start</goal>
<goal>deploy</goal>
</goals>
<configuration>
<deployer>
<deployables>
<deployable>
<groupId>com.jheck.example</groupId>
<artifactId>example-war</artifactId>
<type>war</type>
<!-- <properties>
<plan>${basedir}/src/deployment/geronima.plan.xml</plan>
</properties> -->
<pingURL>http://localhost:8080/example-war</pingURL>
</deployable>
</deployables>
</deployer>
</configuration>
</execution>
<execution>
<id>stop-container</id>
<phase>post-integration-test</phase>
<goals>
<goal>stop</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
</project>
```
|
Starting external process during integration testing in maven
|
[
"",
"java",
"maven-2",
"build-process",
"automated-tests",
"integration-testing",
""
] |
If I have a property:
```
public List<string> Names { get; set; }
```
How can I generate and handle a custom event, for argument's sake called 'onNamesChanged', whenever a name gets added to the list?
|
You should check out the [System.ComponentModel.BindingList](http://msdn.microsoft.com/en-us/library/ms132680.aspx), specifically the [ListChanged event](http://msdn.microsoft.com/en-us/library/ms132742.aspx).
|
A BindingList is likely your best option as it has built-in change tracking and a variety of existing events you can use. Below is an example of exposing a custom event for Add which forwards to the BindingList event.
```
class Example
{
private BindingList<string> m_names = new BindingList<string>();
public IEnumerable<string> Names { get { return m_names; } }
public event AddingNewEventHandler NamesAdded
{
add { m_names.AddingNew += value; }
remove { m_names.AddingNew -= value; }
}
public void Add(string name)
{
m_names.Add(name);
}
}
```
|
C# custom event handlers
|
[
"",
"c#",
"properties",
"event-handling",
""
] |
We're working on a hospital information system that is being written in C# and uses NHibernate to map objects to the database. The MVC pattern is being used to separate business logic from the UI. Here is the problem:
How do you get variable-sized, differing sets of strings to the UI?
For example a `Contact` object has a property named `City` that holds which city the contact lives in. The country that the app is written for has more than 80 cities. How would you write those cities to a combo box? (or a data grid, tables, ...) In this example the number of cities is fixed. There is no need to add another city for a long time. (If the city list changes, recompiling is not a problem)
For example a `Contact` object has another property named `FooBar` which is going to hold 1000 different string values, and these values are going to be selected from a combo box for that property. And this set can grow if users want. How do you load the combo box with these values? (If the string list is statically written to the combo box object, recompiling is a problem)
I have different solutions as below
1. All string values statically written to combo box in code or designer
2. Get the values from a resource file
3. Write those values to an XML file (Actually same as above, but no need to recompile)
4. Make a `City` object and get the values into a list from `CITY` table with NHibernate
5. Make a class named `StringHolder` which has a `Type` and `Value` property. All string values(including `City` and `FooBar`) would be written in just one table named `STRINGHOLDER`. And get those values with a key like "CITY" or "FOOBAR" with NHibernate.
Which one would you choose? Or could you suggest me another one?
Thanks all
|
If the locations are actually going to be used for anything, get them into the database. If the data "isn't really used", but cities lookup is provided to make the user interface better, then the XML file option is not a bad way to go either.
By used, I mean stuff like *list all employees in New York* and stuff like that. If it's "dead data", just to be displayed, go for the solutions that will require the least amount of work and least risk - which might be the file option.
|
I would vote for solution #4. That's the way I have always done it in similar situations. It just seems like a cleaner solution.
|
Getting String Sets into Presentation Layer
|
[
"",
"c#",
"nhibernate",
"oop",
"mapping",
"domain-driven-design",
""
] |
Can someone please point me to the easiest way to have a timer in a Win32 service?
I suppose I could create a dummy window for this purpose or have a second thread do tick counts, but what's best? Is there a more elegant way?
Thanks in advance.
|
You can use Timer Queues (<http://msdn.microsoft.com/en-us/library/ms686796.aspx>). They don't require an HWND.
|
Instead of using UI timers (even though you can use the NULL window handle as shown by Mr. 1800-INFO) you can use kernel waitable timer objects. See [CreateWaitableTimer](http://msdn.microsoft.com/en-us/library/ms682492(VS.85).aspx) in the API docs. These can then be waited-on using WaitForSingleObject or WaitForMultipleObjects, etc, which is especially useful if that's already how your service waits for external events.
If it's not clear from that first link, the SetWaitableTimer function can associate a completion routine (user callback) with the timer. Remember to use the ...Ex versions of WaitForMultipleObjects (etc.) so that the thread is in an "alertable" state.
|
Timer in a win32 service
|
[
"",
"c++",
"windows",
"winapi",
"windows-services",
""
] |
Does Java Connection.close roll back inside a finally block?
I know .Net SqlConnection.close does it.
With this I could make try/finally blocks without catch...
Example:
```
try {
conn.setAutoCommit(false);
ResultSet rs = executeQuery(conn, ...);
....
executeNonQuery(conn, ...);
....
conn.commit();
} finally {
conn.close();
}
```
|
According to [the javadoc](http://java.sun.com/javase/6/docs/api/java/sql/Connection.html#close()), you should try to either commit or roll back before calling the close method. The results otherwise are implementation-defined.
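Since the javadoc leaves the behavior of `close()` without a prior commit/rollback implementation-defined, the defensive pattern is to commit at the end of the try, roll back explicitly on failure, and always close. A minimal sketch of that control flow, using a hypothetical stub in place of a real `java.sql.Connection` so it runs standalone:

```java
public class TxDemo {
    // Hypothetical stub standing in for java.sql.Connection, just to make
    // the control flow runnable without a database. Not a JDBC class.
    static class StubConnection {
        boolean committed, rolledBack, closed;
        void commit()   { committed = true; }
        void rollback() { rolledBack = true; }
        void close()    { closed = true; }
    }

    // Commit on success, roll back explicitly on failure, always close --
    // never rely on close() to decide the transaction outcome for you.
    static void runInTx(StubConnection conn, Runnable work) {
        try {
            work.run();
            conn.commit();
        } catch (RuntimeException e) {
            conn.rollback();
            throw e;
        } finally {
            conn.close();
        }
    }

    public static void main(String[] args) {
        StubConnection ok = new StubConnection();
        runInTx(ok, () -> {});
        System.out.println(ok.committed + " " + ok.closed); // true true

        StubConnection bad = new StubConnection();
        try {
            runInTx(bad, () -> { throw new RuntimeException("boom"); });
        } catch (RuntimeException expected) { }
        System.out.println(bad.rolledBack + " " + bad.closed); // true true
    }
}
```

With a real connection the same shape applies; the catch (or a success flag checked in finally) is what makes the rollback explicit rather than implementation-defined.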
|
In any database system I've worked with, there is no harm in doing a rollback right after the commit, so if you commit in the try block, and rollback in the finally, things get committed, whereas if an exception or early return causes the commit to be missed, the rollback will rollback the transaction. So the safe thing to do is
```
try {
conn.setAutoCommit(false);
ResultSet rs = executeQuery(conn, ...);
....
executeNonQuery(conn, ...);
....
conn.commit();
} finally {
conn.rollback();
conn.close();
}
```
|
Does Java Connection.close rollback?
|
[
"",
"java",
"transactions",
"database-connection",
""
] |
I have a custom class that implements IComparable. This class is stored in a generic List. I now need to compare two lists to see which objects are in list A but not in list B.
I thought the simplest way of doing this would be to iterate through list B and do A.Contains().
I do not know how to get it to use my CompareTo() (or another method that I can override so that I can say if it contains a certain object or not). I could be wrong, but as I understand it, Contains checks whether the objects are actually the same (i.e. point to the same place in memory).
Could anyone help me please?
|
Why don't you just override the `Equals` method of your class to be consistent in meaning with `CompareTo(other) == 0`? This is the simplest way and also the most idiomatic since, as you've noticed, `Contains` compares equality rather than using `CompareTo`. However, this check is done via `Equals`. It does *not* check whether the objects point to the same memory location.
/EDIT: Additionally, if you're using .NET 3.5 you can use the [`Contains`](http://msdn.microsoft.com/en-us/library/bb339118.aspx) overload that accepts an `IEqualityComparer` argument. You can use this to provide a class that implements a custom equality relation for your class type. However, I think the first method is more appropriate in your case.
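The same contract exists in Java (`Comparable.compareTo` vs. `equals`, with `List.contains` using equality), so for comparison here is the idea sketched in Java terms, using a hypothetical `Person` class:

```java
import java.util.List;

public class Person implements Comparable<Person> {
    final String name;
    Person(String name) { this.name = name; }

    @Override public int compareTo(Person other) {
        return name.compareTo(other.name);
    }

    // Keep equals consistent with compareTo(other) == 0 so that
    // List.contains (which uses equals) agrees with the ordering.
    @Override public boolean equals(Object o) {
        return o instanceof Person && compareTo((Person) o) == 0;
    }

    // Equal objects must share a hash code.
    @Override public int hashCode() { return name.hashCode(); }

    public static void main(String[] args) {
        List<Person> a = List.of(new Person("ann"), new Person("bob"));
        List<Person> b = List.of(new Person("bob"));
        // items in a but not in b
        a.stream().filter(p -> !b.contains(p))
                  .forEach(p -> System.out.println(p.name)); // prints: ann
    }
}
```

The C# version is the same shape: override `Equals` (and `GetHashCode`) so that `Contains` sees two distinct instances with equal keys as equal.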
|
Also if you are using framework 3.5 you can make a query like:
`var notInB = listA.Where(item => listB.Find(x => x.Property == item.Property) == null);`
This will return all the items of the listA that are not in listB.
The `Find` method returns the item that matches the condition that you write in the lambda function. There you can add more conditions using and (`&&`) or or (`||`).
If you use this, there is no need to implement IComparable.
|
How to get a Compare items in a Generic List
|
[
"",
"c#",
""
] |
When you attempt to declare an unsigned variable in C#.NET with a value outside its value range it is flagged as a compiler error, but if you produce a negative value at runtime and assign it to that variable at runtime the value wraps.
```
uint z = -1; // Will not compile
uint a = 5;
uint b = 6;
uint c = a - b; // Will result in uint.MaxValue
```
Is there a good reason why unsigned variables wrap in such a situation instead of throwing an exception?
Thanks.
|
Declaring an unassigned variable in C# isn't flagged with an error - trying to assign an invalid value to a variable is. For instance, here's a variable which isn't definitely assigned (assuming it's local) after declaration:
```
uint z;
```
-1 isn't a valid value for a uint any more than 0.5 is, which is why your example wouldn't compile.
Now, as for the rest: integers types just wrap on overflow - just as adding 1 to `int.MaxValue` returns `int.MinValue`. This is a significant performance improvement over having the program check each operation for overflow - at the cost of potentially not spotting an error.
That's only if you're in an unchecked context, mind you - if you perform any of these operations in a checked context, you'll get an exception instead. For instance;
```
class Test
{
static void Main()
{
checked
{
uint a = 5;
uint b = 6;
uint c = a - b;
}
}
}
```
Run that and you'll see an `OverflowException` get thrown. If that's what you want for your whole project, you can set it in the project properties (or compile with the `/checked+` command line option to `csc`.)
EDIT: It's worth noting that the other answers have shown that you could put smaller amounts of code in the checked context - just the declaration and assignment of `c` or even just the calculation. It's all pretty flexible.
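As a cross-language aside, Java's primitive integers silently wrap in exactly the same way by default, and the closest opt-in equivalent of C#'s `checked` context is the `Math.*Exact` family; a quick sketch:

```java
public class OverflowDemo {
    public static void main(String[] args) {
        int a = 5, b = 6;
        System.out.println(a - b);                 // -1; with uints in C# this wraps
        System.out.println(Integer.MAX_VALUE + 1); // silently wraps to Integer.MIN_VALUE

        // Opt-in overflow detection, roughly analogous to checked { ... } in C#:
        try {
            Math.subtractExact(Integer.MIN_VALUE, 1);
        } catch (ArithmeticException e) {
            System.out.println("overflow detected");
        }
    }
}
```

The trade-off is the same one described above: silent wrapping is fast, explicit checking catches the error.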
|
The wrapping is because by default C# is unchecked. If you add a "checked" block, the overflow will be detected:
```
uint a = 3, b = 4;
checked
{
uint c = a - b; // throws an overflow
}
```
As for the compiler: it simply demands valid data.
|
Difference of two 'uint'
|
[
"",
"c#",
".net",
""
] |
What are Null Pointer Exceptions (`java.lang.NullPointerException`) and what causes them?
What methods/tools can be used to determine the cause so that you stop the exception from causing the program to terminate prematurely?
|
There are two overarching types of variables in Java:
1. *Primitives*: variables that contain data. If you want to manipulate the data in a primitive variable you can manipulate that variable directly. By convention primitive types start with a lowercase letter. For example variables of type `int` or `char` are primitives.
2. *References*: variables that contain the memory address of an `Object` i.e. variables that *refer* to an `Object`. If you want to manipulate the `Object` that a reference variable refers to you must *dereference* it. Dereferencing usually entails using `.` to access a method or field, or using `[` to index an array. By convention reference types are usually denoted with a type that starts in uppercase. For example variables of type `Object` are references.
Consider the following code where you declare a variable of *primitive* type `int` and don't initialize it:
```
int x;
int y = x + x;
```
These two lines will not even compile, because no value is specified for `x` and we are trying to use `x`'s value to specify `y`. All primitives have to be initialized to a usable value before they are manipulated.
Now here is where things get interesting. *Reference* variables can be set to `null` which means "**I am referencing *nothing***". You can get a `null` value in a reference variable if you explicitly set it that way, or a reference variable is uninitialized and the compiler does not catch it (Java will automatically set the variable to `null`).
If a reference variable is set to null either explicitly by you or through Java automatically, and you attempt to *dereference* it you get a `NullPointerException`.
The `NullPointerException` (NPE) typically occurs when you declare a variable but did not create an object and assign it to the variable before trying to use the contents of the variable. So you have a reference to something that does not actually exist.
Take the following code:
```
Integer num;
num = new Integer(10);
```
The first line declares a variable named `num`, but it does not actually contain a reference value yet. Since you have not yet said what to point to, Java sets it to `null`.
In the second line, the `new` keyword is used to instantiate (or create) an object of type `Integer`, and the reference variable `num` is assigned to that `Integer` object.
If you attempt to dereference `num` *before* creating the object you get a `NullPointerException`. In the most trivial cases, the compiler will catch the problem and let you know that "`num may not have been initialized`," but sometimes you may write code that does not directly create the object.
For instance, you may have a method as follows:
```
public void doSomething(SomeObject obj) {
// Do something to obj, assumes obj is not null
obj.myMethod();
}
```
In which case, you are not creating the object `obj`, but rather assuming that it was created before the `doSomething()` method was called. Note, it is possible to call the method like this:
```
doSomething(null);
```
In which case, `obj` is `null`, and the statement `obj.myMethod()` will throw a `NullPointerException`.
If the method is intended to do something to the passed-in object as the above method does, it is appropriate to throw the `NullPointerException` because it's a programmer error and the programmer will need that information for debugging purposes.
In addition to `NullPointerException`s thrown as a result of the method's logic, you can also check the method arguments for `null` values and throw NPEs explicitly by adding something like the following near the beginning of a method:
```
// Throws an NPE with a custom error message if obj is null
Objects.requireNonNull(obj, "obj must not be null");
```
Note that it's helpful to say in your error message clearly *which* object cannot be `null`. The advantage of validating this is that 1) you can return your own clearer error messages and 2) for the rest of the method you know that unless `obj` is reassigned, it is not null and can be dereferenced safely.
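The fail-fast guard pattern is not Java-specific. As a rough analogue, here is a hedged sketch in Python (the names are illustrative, not from the question):

```python
def do_something(obj):
    """Fail fast, with a clear message, if a required argument is missing.

    Mirrors the Objects.requireNonNull idea: validate once at the top,
    then the rest of the function can safely assume obj is usable.
    """
    if obj is None:
        raise ValueError("obj must not be None")
    return obj.upper()  # stands in for obj.myMethod()

print(do_something("hello"))  # HELLO
```

The error surfaces at the call that passed the bad value, not later at some distant dereference, which is exactly what makes the stack trace useful for debugging.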
Alternatively, there may be cases where the purpose of the method is not solely to operate on the passed in object, and therefore a null parameter may be acceptable. In this case, you would need to check for a **null parameter** and behave differently. You should also explain this in the documentation. For example, `doSomething()` could be written as:
```
/**
* @param obj An optional foo for ____. May be null, in which case
* the result will be ____.
*/
public void doSomething(SomeObject obj) {
if(obj == null) {
// Do something
} else {
// Do something else
}
}
```
Finally, [How to pinpoint the exception & cause using Stack Trace](https://stackoverflow.com/q/3988788/2775450)
> What methods/tools can be used to determine the cause so that you stop
> the exception from causing the program to terminate prematurely?
Sonar with FindBugs can detect NPEs.
[Can sonar catch null pointer exceptions caused by JVM Dynamically](https://stackoverflow.com/questions/20899931/can-sonar-catch-null-pointer-exceptions-caused-by-jvm-dynamically)
Now Java 14 has added a new language feature to show the root cause of NullPointerException. This language feature has been part of SAP commercial JVM since 2006.
In Java 14, the following is a sample NullPointerException Exception message:
> Exception in thread "main" java.lang.NullPointerException: Cannot invoke "java.util.List.size()" because "list" is null
### List of situations that cause a `NullPointerException` to occur
Here are all the situations in which a `NullPointerException` occurs, that are directly\* mentioned by the Java Language Specification:
* Accessing (i.e. getting or setting) an *instance* field of a null reference. (static fields don't count!)
* Calling an *instance* method of a null reference. (static methods don't count!)
* `throw null;`
* Accessing elements of a null array.
* Synchronising on null - `synchronized (someNullReference) { ... }`
* Any integer/floating point operator can throw a `NullPointerException` if one of its operands is a boxed null reference
* An unboxing conversion throws a `NullPointerException` if the boxed value is null.
* Calling `super` on a null reference throws a `NullPointerException`. If you are confused, this is talking about qualified superclass constructor invocations:
```
class Outer {
class Inner {}
}
class ChildOfInner extends Outer.Inner {
ChildOfInner(Outer o) {
o.super(); // if o is null, NPE gets thrown
}
}
```
* Using a `for (element : iterable)` loop to loop through a null collection/array.
* `switch (foo) { ... }` (whether it's an expression or statement) can throw a `NullPointerException` when `foo` is null.
* `foo.new SomeInnerClass()` throws a `NullPointerException` when `foo` is null.
* Method references of the form `name1::name2` or `primaryExpression::name` throws a `NullPointerException` when evaluated when `name1` or `primaryExpression` evaluates to null.
A note in the JLS says that `someInstance.someStaticMethod()` doesn't throw an NPE, because `someStaticMethod` is static, but `someInstance::someStaticMethod` still throws one!
\* Note that the JLS probably also says a lot about NPEs *indirectly*.
|
`NullPointerException`s are exceptions that occur when you try to use a reference that points to no location in memory (null) as though it were referencing an object. Calling a method on a null reference or trying to access a field of a null reference will trigger a `NullPointerException`. These are the most common, but other ways are listed on the [`NullPointerException`](http://docs.oracle.com/javase/7/docs/api/java/lang/NullPointerException.html) javadoc page.
Probably the quickest example code I could come up with to illustrate a `NullPointerException` would be:
```
public class Example {
public static void main(String[] args) {
Object obj = null;
obj.hashCode();
}
}
```
On the first line inside `main`, I'm explicitly setting the `Object` reference `obj` equal to `null`. This means I have a reference, but it isn't pointing to any object. After that, I try to treat the reference as though it points to an object by calling a method on it. This results in a `NullPointerException` because there is no code to execute in the location that the reference is pointing.
(This is a technicality, but I think it bears mentioning: A reference that points to null isn't the same as a C pointer that points to an invalid memory location. A null pointer is literally not pointing *anywhere*, which is subtly different than pointing to a location that happens to be invalid.)
|
What is a NullPointerException, and how do I fix it?
|
[
"",
"java",
"nullpointerexception",
""
] |
What's the advantage of passing data as parameters vs part of the URL in an Ajax GET request?
Using parameters:
```
var ajax = new Ajax.Request('server.php',{
parameters: 'store=11200&product=Meat',
onSuccess: function(myData){whatever}
});
```
Using URL:
```
var ajax = new Ajax.Request('server.php?store=11200&product=Meat',{
onSuccess: function(myData){whatever}
});
```
|
One advantage to using the `parameters` argument is that you can pass it a `Hash`-like object instead of a string. (If you do this, though, make sure to set the `method` parameter to `"GET"`, as the default method for Prototype Ajax requests is POST; see [the Prototype Introduction to Ajax](http://www.prototypejs.org/learn/introduction-to-ajax) for more details.)
Another advantage, which is more in line with the example that you gave, is that you can separate the request URL from the options that are sent to it. This might be useful if, for example, you need to send a bunch of similar requests to several different URLs. (In that case, having a common parameters `Hash` that you modify for each request might be more useful than a parameter string.)
For more information, see [the Prototype documentation of Ajax options](http://www.prototypejs.org/api/ajax/options).
|
One of my favorite uses of **parameters** is to pass in all fields of a form without explicitly listing them:
```
new Ajax.Request('/myurl.php', {
method: 'get',
parameters: $('myForm').serialize(),
onSuccess: successFunc,
onFailure: failFunc
});
```
|
Ajax GET requests: use parameters or put data in URL?
|
[
"",
"javascript",
"ajax",
"prototypejs",
""
] |
Using jQuery, how would you `show()` every `div.foo` on a page in a random order, with a new one appearing every X milliseconds?
**Clarification**: I want to start with all these elements hidden and end with all of them showing, so it wouldn't make sense to `show()` the same element twice.
I originally thought I'd make an array listing all the elements, randomly pick one, show that one, remove it from the array using `splice()`, and then randomly pick the next one from the remaining list - etc. But since my array is part of a jQuery object, `splice()` is not available.
|
An interesting way to do this would be to extend JavaScript's `Array` prototype with a shuffle function. This is Prototype syntax (it should be much the same in jQuery, using `jQuery.extend`). It is a quick-and-dirty shuffle; there are plenty of other ways to do it.
```
Object.extend(Array.prototype, {
shuffle : function() {
this.sort( function() { return 0.5 - Math.random(); } );
return this;
}
});
```
So assuming you have your array of divs ready to go, call the `shuffle()` method and simply go through them one by one, in order (they're now shuffled), showing each according to your interval. You might want to make that non-destructive, though, by cloning the array returned by the shuffle method instead of sorting it in place.
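For reference, the sort-with-a-random-comparator trick is known to produce a biased shuffle; one of the "plenty of other ways" is the Fisher-Yates shuffle, which is uniform. A language-neutral sketch (in Python rather than Prototype, for brevity):

```python
import random

def fisher_yates(items):
    """In-place Fisher-Yates shuffle: every permutation is equally likely,
    unlike sorting with a random comparator."""
    for i in range(len(items) - 1, 0, -1):
        j = random.randint(0, i)            # pick from the not-yet-fixed prefix
        items[i], items[j] = items[j], items[i]
    return items

print(fisher_yates(list("abcd")))
```

The same loop translates directly to a JavaScript `Array` method.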
|
I don't use jQuery myself, but what about this:
```
var intervalMilliseconds = X; // set to your value for X
var divFoos = $("div.foo").get();
var intervalId = setInterval(function() {
$(divFoos.splice(Math.floor(Math.random() * divFoos.length), 1)).show();
if(divFoos.length == 0) clearInterval(intervalId);
}, intervalMilliseconds);
```
That should do the trick.
---
UPDATE: Since your description isn't explicit about it, I assumed you meant that you ultimately want to show all of them, and that once they are visible, we are done. If not, please explain further so I can update this (if you can't already determine what you need from the code I provided).
|
How can I randomly show a set of elements using jQuery?
|
[
"",
"javascript",
"jquery",
""
] |
When a user clicks a link to download a file on my website, they go to [this PHP file](http://viewsourcecode.org/viewsource/homebrew.php) which increments a download counter for that file and then header()-redirects them to the actual file. I suspect that bots are following the download link, however, so the number of downloads is inaccurate.
* How do I let bots know that they shouldn't follow the link?
* Is there a way to detect most bots?
* Is there a better way to count the number of downloads a file gets?
|
robots.txt: <http://www.robotstxt.org/robotstxt.html>
Not all bots respect it, but most do. If you really want to prevent access via bots, make the link to it a POST instead of a GET. Bots will not follow POST urls. (I.E., use a small form that posts back to the site that takes you to the URL in question.)
|
I would think Godeke's robots.txt answer would be sufficient. If you absolutely cannot have the bots inflating your counter, then I would recommend using the robots file in conjunction with not incrementing the count for some [common robot user agents](http://support.free-conversant.com/2701).
Neither way is perfect, but the mixture of the two is probably a little more strict. If it were me, I would probably just stick to the robots file though, since it is easy and probably the most effective solution.
|
How do I stop bots from incrementing my file download counter in PHP?
|
[
"",
"php",
"bots",
""
] |
After much searching, I found the download for the [eclipse version of jalopy](http://sourceforge.net/project/showfiles.php?group_id=45216). Is this compatible with Eclipse 3.4? It's dated 2006.
I've copied the extracted folder to my plugins directory and run `eclipse -clean`, but I can't find anything matching 'jalopy' in preferences.
If it's not compatible, are there any (free) alternatives?
|
Just an update:
Note: the latest eclipse3.5 Galileo [code formatter](http://help.eclipse.org/galileo/topic/org.eclipse.jdt.doc.user/tasks/tasks-230.htm) now:
* supports most Jalopy formatting options.
* includes quite a few formatter improvements (look for the term 'formatter' in the [build notes of jdt-core](http://giano.com.dist.unige.it/eclipseMirror/eclipse/downloads/drops/S-3.5.1RC4-200909111628/buildnotes/buildnotes_jdt-core.html)).
It also can be run from the [command-line if needed](http://help.eclipse.org/galileo/index.jsp?topic=/org.eclipse.jdt.doc.user/tasks/tasks-230.htm).
For a specific formatting option which would not be correctly managed by eclipse, you can define a [`codeFormatter` extension point](http://help.eclipse.org/galileo/topic/org.eclipse.jdt.doc.isv/reference/extension-points/org_eclipse_jdt_core_codeFormatter.html?resultof=%22%66%6f%72%6d%61%74%74%65%72%22%20%22%66%6f%72%6d%61%74%74%22%20). More practical details in this [SO question](https://stackoverflow.com/questions/975980/how-can-i-use-the-eclipse-indenter-from-my-code).
|
I'm told that the open source version of Jalopy is not for commercial use.
Eclipse itself has good source code formatting capabilities. Go to Java | Preferences/Settings, type 'format', and you should see the various options.
|
Is the Open Source Jalopy Eclipse plugin compatible with Ganymede?
|
[
"",
"java",
"eclipse-plugin",
"code-formatting",
"eclipse-3.4",
"jalopy",
""
] |
My favorite feature of StackOverflow is that it automatically detects code in a post and applies appropriate coloring.
I'm wondering how the color is set. When I do a `Ctrl`+`F5` on a page, the code seems to first appear as black text, then change to be colorful. Is this done by jQuery?
|
In reply to..
> Spolsky: How do they know, how do they even know what language you're writing in?
It doesn't. The highlighter is very dumb, but manages to get away with it because most programming languages are so similar. Nearly everything uses syntax close enough to..
```
AFunction("a string")
1 + 4 # <- numbers
# /\ a comment
// also a comment..
```
..that most stuff highlights properly. The above isn't an actual programming language, but it highlights perfectly.
There are exceptions; for example, it can sometimes treat a `/` as the start of a regex (as in Perl/Ruby) when it is not:
```
this [^\s>/] # is highlighted as a regex, not a comment
```
..but these are fairly rare, and it does a good job of working out most stuff, like..
```
/*
this is a multi-line comment
"with a string" =~ /and a regex/
*/
but =~ /this is a regex with a [/*] multiline comment
markers in it! */
```
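To make the "dumb but effective" idea concrete, here is a toy heuristic tokenizer in the same spirit (a Python sketch, far cruder than the real prettify.js): it tags strings, numbers, and `#`/`//` comments without knowing what language it is looking at.

```python
import re

# Order matters: strings first, then comments, then numbers,
# so a '#' inside a string is not misread as a comment.
TOKEN = re.compile(r'''
    (?P<string>"[^"]*")            # double-quoted string
  | (?P<comment>(?:\#|//).*)       # to-end-of-line comment
  | (?P<number>\b\d+\b)            # integer literal
''', re.VERBOSE)

def classify(line):
    """Return (kind, text) pairs for the highlightable pieces of one line."""
    return [(m.lastgroup, m.group()) for m in TOKEN.finditer(line)]

print(classify('AFunction("a string")'))  # [('string', '"a string"')]
print(classify('1 + 4 # <- numbers'))
```

Like the real thing, it fails on language-specific constructs (regex literals, multi-line comments), but handles the common ground shared by most languages.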
|
From [Stack Overflow Podcast #11](https://stackoverflow.fogbugz.com/default.asp?W12621):
> **Atwood:** It is. Okay, so that comes from, that's a project some Google engineer, I think, wrote it--it's called "Prettify." And it's a little interesting in that it actually infers all the syntax highlighting, which sounds like it couldn't possibly work--it sounds actually insane, if you think about it. But it actually kind of works. Now, he only supports it for, there's certain dialects that just don't really work well with it, but for all the dialects that sort of, you'd find on Google. I think it comes from Google's Google Code. It's the actual code, it's the actual JavaScript which is on Google Code that highlights that the code that comes back when you're hosting projects on Google Code. And you, and you, um, 'cause I think they use Subversion so you can actually click through...
>
> **Spolsky:** How do they know, how do they even know what language you're writing in? And therefore, what a comment is and...
>
> **Atwood:** I don't know. It's crazy. It's prettify.js, so if anyone's interested in looking at this, just do a web search for "prettify.js," and you'll find it.
And here's where you can find prettify.js: <http://code.google.com/p/google-code-prettify/>
|
How is code color set on Stack Overflow?
|
[
"",
"javascript",
"syntax-highlighting",
""
] |
I have a `Person` model that has a foreign key relationship to `Book`, which has a number of fields, but I'm most concerned about `author` (a standard CharField).
With that being said, in my `PersonAdmin` model, I'd like to display `book.author` using `list_display`:
```
class PersonAdmin(admin.ModelAdmin):
list_display = ['book.author',]
```
I've tried all of the obvious methods for doing so, but nothing seems to work.
Any suggestions?
|
As another option, you can do lookups like:
```
#models.py
class UserAdmin(admin.ModelAdmin):
list_display = (..., 'get_author')
def get_author(self, obj):
return obj.book.author
get_author.short_description = 'Author'
get_author.admin_order_field = 'book__author'
```
Since Django 3.2 you can use [`display()`](https://docs.djangoproject.com/en/3.2/ref/contrib/admin/#the-display-decorator) decorator:
```
#models.py
class UserAdmin(admin.ModelAdmin):
list_display = (..., 'get_author')
@admin.display(ordering='book__author', description='Author')
def get_author(self, obj):
return obj.book.author
```
|
Despite all the great answers above and due to me being new to Django, I was still stuck. Here's my explanation from a very newbie perspective.
**models.py**
```
class Author(models.Model):
name = models.CharField(max_length=255)
class Book(models.Model):
author = models.ForeignKey(Author)
title = models.CharField(max_length=255)
```
**admin.py (Incorrect Way)** - you think it would work by using 'model\_\_field' to reference, but it doesn't
```
class BookAdmin(admin.ModelAdmin):
model = Book
list_display = ['title', 'author__name', ]
admin.site.register(Book, BookAdmin)
```
**admin.py (Correct Way)** - this is how you reference a foreign key name the Django way
```
class BookAdmin(admin.ModelAdmin):
model = Book
list_display = ['title', 'get_name', ]
def get_name(self, obj):
return obj.author.name
get_name.admin_order_field = 'author' #Allows column order sorting
get_name.short_description = 'Author Name' #Renames column head
#Filtering on side - for some reason, this works
#list_filter = ['title', 'author__name']
admin.site.register(Book, BookAdmin)
```
For additional reference, see the Django model link [here](https://docs.djangoproject.com/en/dev/ref/contrib/admin/#list-display)
|
Can "list_display" in a Django ModelAdmin display attributes of ForeignKey fields?
|
[
"",
"python",
"django",
"django-models",
"django-admin",
"modeladmin",
""
] |
How do I list and export a private key from a keystore?
|
A portion of code originally from Example Depot for listing all of the aliases in a key store:
```
// Load input stream into keystore
keystore.load(is, password.toCharArray());
// List the aliases
Enumeration aliases = keystore.aliases();
for (; aliases.hasMoreElements(); ) {
String alias = (String)aliases.nextElement();
// Does alias refer to a private key?
boolean b = keystore.isKeyEntry(alias);
// Does alias refer to a trusted certificate?
b = keystore.isCertificateEntry(alias);
}
```
The exporting of private keys came up on the [Sun forums](http://forums.sun.com/thread.jspa?threadID=154587&start=15&tstart=0) a couple of months ago, and [u:turingcompleter](http://forums.sun.com/profile.jspa?userID=505275) came up with a DumpPrivateKey class to stitch into your app.
```
import java.io.FileInputStream;
import java.security.Key;
import java.security.KeyStore;
import sun.misc.BASE64Encoder;
public class DumpPrivateKey {
/**
* Provides the missing functionality of keytool
* that Apache needs for SSLCertificateKeyFile.
*
* @param args <ul>
* <li> [0] Keystore filename.
* <li> [1] Keystore password.
* <li> [2] alias
* </ul>
*/
static public void main(String[] args)
throws Exception {
if(args.length < 3) {
throw new IllegalArgumentException("expected args: Keystore filename, Keystore password, alias, <key password: default same than keystore>");
}
final String keystoreName = args[0];
final String keystorePassword = args[1];
final String alias = args[2];
final String keyPassword = getKeyPassword(args,keystorePassword);
KeyStore ks = KeyStore.getInstance("jks");
ks.load(new FileInputStream(keystoreName), keystorePassword.toCharArray());
Key key = ks.getKey(alias, keyPassword.toCharArray());
String b64 = new BASE64Encoder().encode(key.getEncoded());
System.out.println("-----BEGIN PRIVATE KEY-----");
System.out.println(b64);
System.out.println("-----END PRIVATE KEY-----");
}
private static String getKeyPassword(final String[] args, final String keystorePassword)
{
String keyPassword = keystorePassword; // default case
if(args.length == 4) {
keyPassword = args[3];
}
return keyPassword;
}
}
```
Note: this uses a Sun package, [which is a "bad thing"](http://java.sun.com/products/jdk/faq/faq-sun-packages.html).
If you can download [Apache Commons Codec](http://commons.apache.org/codec/index.html), here is a version which will compile without warning:
```
javac -classpath .:commons-codec-1.4/commons-codec-1.4.jar DumpPrivateKey.java
```
and will give the same result:
```
import java.io.FileInputStream;
import java.security.Key;
import java.security.KeyStore;
//import sun.misc.BASE64Encoder;
import org.apache.commons.codec.binary.Base64;
public class DumpPrivateKey {
/**
* Provides the missing functionality of keytool
* that Apache needs for SSLCertificateKeyFile.
*
* @param args <ul>
* <li> [0] Keystore filename.
* <li> [1] Keystore password.
* <li> [2] alias
* </ul>
*/
static public void main(String[] args)
throws Exception {
if(args.length < 3) {
throw new IllegalArgumentException("expected args: Keystore filename, Keystore password, alias, <key password: default same than keystore>");
}
final String keystoreName = args[0];
final String keystorePassword = args[1];
final String alias = args[2];
final String keyPassword = getKeyPassword(args,keystorePassword);
KeyStore ks = KeyStore.getInstance("jks");
ks.load(new FileInputStream(keystoreName), keystorePassword.toCharArray());
Key key = ks.getKey(alias, keyPassword.toCharArray());
//String b64 = new BASE64Encoder().encode(key.getEncoded());
String b64 = new String(Base64.encodeBase64(key.getEncoded(),true));
System.out.println("-----BEGIN PRIVATE KEY-----");
System.out.println(b64);
System.out.println("-----END PRIVATE KEY-----");
}
private static String getKeyPassword(final String[] args, final String keystorePassword)
{
String keyPassword = keystorePassword; // default case
if(args.length == 4) {
keyPassword = args[3];
}
return keyPassword;
}
}
```
You can use it like so:
```
java -classpath .:commons-codec-1.4/commons-codec-1.4.jar DumpPrivateKey $HOME/.keystore changeit tomcat
```
|
You can extract a private key from a keystore with Java6 and OpenSSL. This all depends on the fact that both Java and OpenSSL support PKCS#12-formatted keystores. To do the extraction, you first use `keytool` to convert to the standard format. Make sure you ***use the same password for both files (private key password, not the keystore password)*** or you will get odd failures later on in the second step.
```
keytool -importkeystore -srckeystore keystore.jks \
-destkeystore intermediate.p12 -deststoretype PKCS12
```
Next, use OpenSSL to do the extraction to PEM:
```
openssl pkcs12 -in intermediate.p12 -out extracted.pem -nodes
```
You should be able to handle that PEM file easily enough; it's plain text with an encoded unencrypted private key and certificate(s) inside it (in a pretty obvious format).
*When you do this, take care to keep the files created secure. They contain secret credentials. Nothing will warn you if you fail to secure them correctly.* The easiest method for securing them is to do all of this in a directory which doesn't have any access rights for anyone other than the user. And never put your password on the command line or in environment variables; it's too easy for other users to grab.
|
How do I list / export private keys from a keystore?
|
[
"",
"java",
"ssl",
"keystore",
""
] |
I'm trying to select a column from a single table (no joins) and I need the count of the number of rows, ideally before I begin retrieving the rows. I have come to two approaches that provide the information I need.
**Approach 1:**
```
SELECT COUNT( my_table.my_col ) AS row_count
FROM my_table
WHERE my_table.foo = 'bar'
```
Then
```
SELECT my_table.my_col
FROM my_table
WHERE my_table.foo = 'bar'
```
Or **Approach 2**
```
SELECT my_table.my_col, ( SELECT COUNT ( my_table.my_col )
FROM my_table
WHERE my_table.foo = 'bar' ) AS row_count
FROM my_table
WHERE my_table.foo = 'bar'
```
I am doing this because my SQL driver (SQL Native Client 9.0) does not allow me to use SQLRowCount on a SELECT statement but I need to know the number of rows in my result in order to allocate an array before assigning information to it. The use of a dynamically allocated container is, unfortunately, not an option in this area of my program.
I am concerned that the following scenario might occur:
* SELECT for count occurs
* Another instruction occurs, adding or removing a row
* SELECT for data occurs and suddenly the array is the wrong size.
In the worst case, this will attempt to write data beyond the array's limits and crash my program.
Does Approach 2 prohibit this issue?
Also, Will one of the two approaches be faster? If so, which?
Finally, is there a better approach that I should consider (perhaps a way to instruct the driver to return the number of rows in a SELECT result using SQLRowCount?)
For those that asked, I am using Native C++ with the aforementioned SQL driver (provided by Microsoft.)
|
There are only two ways to be 100% certain that the `COUNT(*)` and the actual query will give consistent results:
* Combined the `COUNT(*)` with the query, as in your Approach 2. I recommend the form you show in your example, not the correlated subquery form shown in the comment from kogus.
* Use two queries, as in your Approach 1, after starting a transaction in `SNAPSHOT` or `SERIALIZABLE` isolation level.
Using one of those isolation levels is important because any other isolation level allows new rows created by other clients to become visible in your current transaction. Read the MSDN documentation on [`SET TRANSACTION ISOLATION`](http://msdn.microsoft.com/en-us/library/ms173763.aspx) for more details.
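For the two-query approach, that might look like the following T-SQL sketch (illustrative only; note that `SNAPSHOT` isolation must first be enabled on the database via `ALLOW_SNAPSHOT_ISOLATION` before it can be used):

```sql
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;

-- Both statements see the same snapshot of the data,
-- so the count cannot drift from the rows fetched next.
SELECT COUNT(my_col) AS row_count
FROM my_table
WHERE foo = 'bar';

SELECT my_col
FROM my_table
WHERE foo = 'bar';

COMMIT TRANSACTION;
```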
|
If you're using SQL Server, after your query you can select the [@@RowCount](https://learn.microsoft.com/en-us/sql/t-sql/functions/rowcount-transact-sql) function (or if your result set might have more than 2 billion rows use the [RowCount\_Big()](https://learn.microsoft.com/en-us/sql/t-sql/functions/rowcount-big-transact-sql) function). This will return the number of rows selected by the previous statement or number of rows affected by an insert/update/delete statement.
```
SELECT my_table.my_col
FROM my_table
WHERE my_table.foo = 'bar'
SELECT @@Rowcount
```
Or, if you want the row count included in the result set, similar to Approach #2, you can use the [OVER clause](https://learn.microsoft.com/en-us/sql/t-sql/queries/select-over-clause-transact-sql).
```
SELECT my_table.my_col,
count(*) OVER(PARTITION BY my_table.foo) AS 'Count'
FROM my_table
WHERE my_table.foo = 'bar'
```
Using the OVER clause will have much better performance than using a subquery to get the row count. Using @@RowCount will have the best performance, because there won't be any query cost for the `SELECT @@RowCount` statement.
Update in response to comment: The example I gave would give the # of rows in the partition - defined in this case by "PARTITION BY my\_table.foo". The value of the column in each row is the # of rows with the same value of my\_table.foo. Since your example query had the clause "WHERE my\_table.foo = 'bar'", all rows in the result set will have the same value of my\_table.foo, and therefore the value in the column will be the same for all rows and equal (in this case) to the # of rows returned by the query.
Here is a better/simpler example of how to include a column in each row that is the total # of rows in the resultset. Simply remove the optional Partition By clause.
```
SELECT my_table.my_col, count(*) OVER() AS 'Count'
FROM my_table
WHERE my_table.foo = 'bar'
```
|
Need a row count after SELECT statement: what's the optimal SQL approach?
|
[
"",
"sql",
"odbc",
"sqlncli",
""
] |
I am currently running gdb version 6.7.1 on Ubuntu Linux, and working in a C++ project.
Surprisingly, while trying to debug a constructor I found that local variables declared within the scope of the constructor are not followed or noticed by gdb. Is this a bug?
Thanks for any information ..
|
It is a [bug](http://gcc.gnu.org/bugzilla/show_bug.cgi?id=27574) in GCC, not in GDB.
It was recently [fixed](http://gcc.gnu.org/bugzilla/show_bug.cgi?id=27574#c21).
|
If you are using optimization (-O), disable it (remove the -O or use -O0). The compiler is sometimes too smart: it may guess a variable is not needed, do calculations at compile time, change a variable's scope, or apply several other tricks.
Note that even with "no optimization", some trivial optimizations are still done, but they shouldn't interfere much with debugging. Also, with inline-heavy use of C++ (including the STL), your program can become much slower without optimization.
|
Local variables in constructors are not followed. Is this a bug in gdb?
|
[
"",
"c++",
"debugging",
"gdb",
""
] |
I'm a C/C++ developer, and here are a couple of questions that always baffled me.
* Is there a big difference between "regular" code and inline code?
* What is the main difference?
* Is inline code simply a "form" of macros?
* What kind of tradeoff must be done when choosing to inline your code?
Thanks
|
> * Is there a big difference between "regular" code and inline code?
Yes and no. No, because an inline function or method has exactly the same characteristics as a regular one, most important one being that they are both type safe. And yes, because the assembly code generated by the compiler will be different; with a regular function, each call will be translated into several steps: pushing parameters on the stack, making the jump to the function, popping the parameters, etc, whereas a call to an inline function will be replaced by its actual code, like a macro.
> * Is inline code simply a "form" of macros?
**No**! A macro is simple text replacement, which can lead to severe errors. Consider the following code:
```
#define unsafe(i) ( (i) >= 0 ? (i) : -(i) )
[...]
unsafe(x++); // x is incremented twice!
unsafe(f()); // f() is called twice!
[...]
```
Using an inline function, you're sure that parameters will be evaluated before the function is actually performed. They will also be type checked, and eventually converted to match the formal parameters types.
> * What kind of tradeoff must be done when choosing to inline your code?
Normally, program execution should be faster when using inline functions, but with a bigger binary code. For more information, you should read [GoTW#33](http://www.gotw.ca/gotw/033.htm "GoTW#33").
|
## Performance
As has been suggested in previous answers, use of the `inline` keyword can make code faster by inlining function calls, often at the expense of larger executables. “Inlining function calls” just means substituting the call to the target function with the actual code of the function, after filling in the arguments accordingly.
However, modern compilers are very good at inlining function calls automatically *without any prompt from the user* when set to high optimisation. Actually, compilers are usually *better* at determining what calls to inline for speed gain than humans are.
**Declaring functions `inline` explicitly for the sake of performance gain is (almost?) always unnecessary!**
Additionally, compilers can *and will* **ignore** the `inline` request if it suits them. Compilers will do this if a call to the function is impossible to inline (i.e. using nontrivial recursion or function pointers) but also if the function is simply too large for a meaningful performance gain.
## One Definition Rule
However, declaring an inline function using the `inline` keyword [has other effects](http://en.cppreference.com/w/cpp/language/inline), and may actually be *necessary* to satisfy the One Definition Rule (ODR): This rule in the C++ standard states that a given symbol may be declared multiple times but may only be defined once. If the link editor (= linker) encounters several identical symbol definitions, it will generate an error.
One solution to this problem is to make sure that a compilation unit doesn't export a given symbol by giving it internal linkage by declaring it `static`.
However, it's often better to mark a function `inline` instead. This tells the linker to merge all definitions of this function across compilation units into one definition, with one address, and shared function-static variables.
As an example, consider the following program:
```
// header.hpp
#ifndef HEADER_HPP
#define HEADER_HPP
#include <cmath>
#include <numeric>
#include <vector>
using vec = std::vector<double>;
/*inline*/ double mean(vec const& sample) {
return std::accumulate(begin(sample), end(sample), 0.0) / sample.size();
}
#endif // !defined(HEADER_HPP)
```
```
// test.cpp
#include "header.hpp"
#include <iostream>
#include <iomanip>
void print_mean(vec const& sample) {
std::cout << "Sample with x̂ = " << mean(sample) << '\n';
}
```
```
// main.cpp
#include "header.hpp"
void print_mean(vec const&); // Forward declaration.
int main() {
vec x{4, 3, 5, 4, 5, 5, 6, 3, 8, 6, 8, 3, 1, 7};
print_mean(x);
}
```
Note that both `.cpp` files include the header file and thus the function definition of `mean`. The include guards only prevent double inclusion within a single compilation unit, so this will still result in two definitions of the same function, albeit in different compilation units.
Now, if you try to link those two compilation units — for example using the following command:
```
⟩⟩⟩ g++ -std=c++11 -pedantic main.cpp test.cpp
```
you'll get an error saying “duplicate symbol `__Z4meanRKNSt3__16vectorIdNS_9allocatorIdEEEE`” (which is the [mangled name](https://en.wikipedia.org/wiki/Name_mangling) of our function `mean`).
If, however, you uncomment the `inline` modifier in front of the function definition, the code compiles and links correctly.
*Function templates* are a special case: they are *always* inline, regardless of whether they were declared that way. This doesn’t mean that the compiler will inline *calls* to them, but they won’t violate ODR. The same is true for member functions that are defined inside a class or struct.
|
Why should I ever use inline code?
|
[
"",
"c++",
"optimization",
"inline-functions",
"tradeoff",
""
] |
I have numerous Web Services in my project that share types.
For simplicity I will demonstrate with two Web Services.
WebService1 at <http://MyServer/WebService1.asmx>
WebService2 at <http://MyServer/WebService2.asmx>
When I generate the proxy for these two services I use:
```
wsdl /sharetypes http://MyServer/WebService1.asmx http://MyServer/WebService2.asmx
     /appsettingurlkey:WebServiceUrl /namespace:MyNamespace /out:MyProxy.cs
```
The problem is that the appsettingurlkey is the same for each Web Service
in the proxy file. I want to be able to specify multiple
appsettingurlkey parameters. How is this accomplished? I figure
since the /sharetypes parameter became available, there should be a
solution for specifying the appsettingurlkey specifically for each
Web Service identified.
If this is not possible with the wsdl.exe, what would you propose I do? I would rather not update the generated code that wsdl.exe outputs and I don't want to go through my whole application passing in the Url to each instance of the Web Services.
|
The proxy classes generated are partial classes, so my solution would be to add your own constructor in a different (non-generated) code file, which explicitly reads a different setting for each proxy.
|
To suplement Elijah's own answer, here's the email answer I gave him.
I had to blog it because the XML didn't paste well into this text box: <http://www.rickdoes.net/blog/archive/2008/09/29/wsdl-shared-types-and-configuration.aspx>
|
Defaulting to different URLs when Generating Web Service Proxy with WSDL.exe
|
[
"",
"c#",
"asp.net",
"web-services",
"wsdl",
""
] |
The first one is definitely something that works, but which one below is the efficient way?
```
switch($type) {
case 1:
print 'success';
break;
case 2:
print 'success';
break;
case 3:
print 'success';
break;
case 4:
print 'success for type 4';
break;
}
```
Since cases 1, 2 and 3 print the same thing, can I do this?
```
switch($type) {
case 1, 2, 3:
print 'success';
break;
case 4:
print 'success for type 4';
break;
}
```
or
```
switch($type) {
case 1:
case 2:
case 3:
print 'success';
break;
case 4:
print 'success for type 4';
break;
}
```
|
```
switch($type)
{
case 1:
case 2:
case 3:
print 'success';
break;
case 4:
print 'success for type 4';
break;
}
```
Is the way to go!
|
PHP manual lists an example like your 3rd for [switch](https://www.php.net/manual/en/control-structures.switch.php):
```
<?php
switch ($i) {
case 0:
case 1:
case 2:
echo "i is less than 3 but not negative";
break;
case 3:
echo "i is 3";
}
?>
```
|
Can I have multiple cases that do the same thing?
|
[
"",
"php",
"switch-statement",
""
] |
I'm writing a python script that executes a csh script in Solaris 10. The csh script prompts the user for the root password (which I know) but I'm not sure how to make the python script answer the prompt with the password. Is this possible? Here is what I'm using to execute the csh script:
```
import commands
commands.getoutput('server stop')
```
|
Have a look at the [pexpect](http://www.noah.org/wiki/Pexpect) module. It is designed to deal with interactive programs, which seems to be your case.
Oh, and remember that hard-encoding root's password in a shell or python script is potentially a security hole :D
|
Use [subprocess](http://www.python.org/doc/2.5.2/lib/module-subprocess.html). Call Popen() to create your process and use communicate() to send it text. Sorry, forgot to include the PIPE..
```
from subprocess import Popen, PIPE
proc = Popen(['server', 'stop'], stdin=PIPE)
proc.communicate('password')
```
You would do better to avoid the password entirely and use a scheme like sudo and sudoers. Pexpect, mentioned elsewhere, is not part of the standard library.
|
Make python enter password when running a csh script
|
[
"",
"python",
"scripting",
"passwords",
"root",
"csh",
""
] |
What MySQL query will do a text search and replace in one particular field in a table?
I.e. search for `foo` and replace with `bar` so a record with a field with the value `hello foo` becomes `hello bar`.
|
Change `table_name` and `field` to match your table name and field in question:
```
UPDATE table_name SET field = REPLACE(field, 'foo', 'bar') WHERE INSTR(field, 'foo') > 0;
```
* [REPLACE](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_replace) (string functions)
* [INSTR](https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_instr) (string functions)
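If you want to try the pattern without touching production data, SQLite ships the same `REPLACE()` string function, so a quick local sketch (the table and values here are made up) looks like:

```python
import sqlite3

# In-memory SQLite database to demonstrate UPDATE ... SET field = REPLACE(...)
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (field TEXT)")
con.execute("INSERT INTO t VALUES ('hello foo'), ('no match')")
con.execute("UPDATE t SET field = REPLACE(field, 'foo', 'bar')")
rows = [r[0] for r in con.execute("SELECT field FROM t ORDER BY field")]
print(rows)  # → ['hello bar', 'no match']
```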
|
```
UPDATE table_name
SET field = replace(field, 'string-to-find', 'string-that-will-replace-it');
```
|
MySQL search and replace some text in a field
|
[
"",
"mysql",
"sql",
"search",
"replace",
""
] |
Lets just say you have a table in Oracle:
```
CREATE TABLE person (
id NUMBER PRIMARY KEY,
given_names VARCHAR2(50),
surname VARCHAR2(50)
);
```
with these function-based indices:
```
CREATE INDEX idx_person_upper_given_names ON person (UPPER(given_names));
CREATE INDEX idx_person_upper_last_name ON person (UPPER(last_name));
```
Now, given_names has no NULL values but for argument's sake last_name does. If I do this:
```
SELECT * FROM person WHERE UPPER(given_names) LIKE 'P%'
```
the explain plan tells me its using the index but change it to:
```
SELECT * FROM person WHERE UPPER(last_name) LIKE 'P%'
```
it doesn't. The Oracle docs say that a function-based index will only be used when several conditions are met, one of which is ensuring there are no NULL values, since they aren't indexed.
I've tried these queries:
```
SELECT * FROM person WHERE UPPER(last_name) LIKE 'P%' AND UPPER(last_name) IS NOT NULL
```
and
```
SELECT * FROM person WHERE UPPER(last_name) LIKE 'P%' AND last_name IS NOT NULL
```
In the latter case I even added an index on last_name but no matter what I try it uses a full table scan. Assuming I can't get rid of the NULL values, how do I get this query to use the index on UPPER(last_name)?
|
The index can be used, though the optimiser may have chosen not to use it for your particular example:
```
SQL> create table my_objects
2 as select object_id, object_name
3 from all_objects;
Table created.
SQL> select count(*) from my_objects;
2 /
COUNT(*)
----------
83783
SQL> alter table my_objects modify object_name null;
Table altered.
SQL> update my_objects
2 set object_name=null
3 where object_name like 'T%';
1305 rows updated.
SQL> create index my_objects_name on my_objects (lower(object_name));
Index created.
SQL> set autotrace traceonly
SQL> select * from my_objects
2 where lower(object_name) like 'emp%';
29 rows selected.
Execution Plan
----------------------------------------------------------
------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 17 | 510 | 355 (1)|
| 1 | TABLE ACCESS BY INDEX ROWID| MY_OBJECTS | 17 | 510 | 355 (1)|
|* 2 | INDEX RANGE SCAN | MY_OBJECTS_NAME | 671 | | 6 (0)|
------------------------------------------------------------------------------------
```
The documentation you read was presumably pointing out that, just like any other index, all-null keys are not stored in the index.
|
In your example you've created the same index twice - this would give an error so I'm assuming that was a mistake in pasting, not the actual code you tried.
I tried it with
```
CREATE INDEX idx_person_upper_surname ON person (UPPER(surname));
SELECT * FROM person WHERE UPPER(surname) LIKE 'P%';
```
and it produced the expected query plan:
```
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=1 Card=1 Bytes=67)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'PERSON' (TABLE) (Cost=1
Card=1 Bytes=67)
2 1 INDEX (RANGE SCAN) OF 'IDX_PERSON_UPPER_SURNAME' (INDEX)
(Cost=1 Card=1)
```
To answer your question, yes it should work. Try double checking that you do have the second index created correctly.
Also try an explicit hint:
```
SELECT /*+INDEX(PERSON IDX_PERSON_UPPER_SURNAME)*/ *
FROM person
WHERE UPPER(surname) LIKE 'P%';
```
If that works, but only with the hint, then it is likely related to CBO statistics gone wrong, or CBO related init parameters.
|
How to use a function-based index on a column that contains NULLs in Oracle 10+?
|
[
"",
"sql",
"oracle",
""
] |
Whenever I start our Apache Felix (OSGi) based application under SUN Java ( build 1.6.0\_10-rc2-b32 and other 1.6.x builds) I see the following message output on the console (usually under Ubuntu 8.4):
> Warning: The encoding 'UTF-8' is not supported by the Java runtime.
I've seen this message display occasionally when running both Tomcat and Resin as well. If java supports unicode and UTF-8, what causes this message? I've yet to find any reference, or answer to this anywhere else.
|
According [the documentation](http://java.sun.com/javase/6/docs/api/java/nio/charset/Charset.html) "Every implementation of the Java platform is required to support the following standard charsets... US-ASCII, ISO-8859-1, UTF-8, UTF-16BE, UTF-16LE, UTF-16." So I doubt that Sun have released a build without UTF-8 support.
The actual error message appears to be from [here](http://www.jdocs.com/page/AjaxSourceCode?oid=10701), which is part of the Xerces XML parser. I imagine it is the XML parser where the problem is occurring.
|
Try the following program:
```
import java.nio.charset.Charset;
public class TestCharset {
public static void main(String[] args) {
System.out.println(Charset.forName("UTF-8"));
}
}
```
If this throws an exception, then there is something wrong with your JDK.
If it prints "UTF-8" then your JDK is OK, and your application is doing something odd.
If that's the case, run your app under the debugger, and put a breakpoint in <http://www.java2s.com/Open-Source/Java-Document/XML/xalan/org/apache/xml/serializer/ToStream.java.htm> (that's the place this warning is produced) and step to see why Xalan can't find the encoding.
|
The encoding 'UTF-8' is not supported by the Java runtime
|
[
"",
"java",
"linux",
"unicode",
""
] |
I'd like to know which method is recommended on Windows C programming: using malloc or the Win32 HeapAlloc (maybe VirtualAlloc?) function.
I've read the MSDN [Memory Management Functions](http://msdn.microsoft.com/en-us/library/aa366781(VS.85).aspx) article and the MSDN articles regarding malloc and HeapAlloc, but they do not say which one should be used and in what situations.
|
Stick with malloc unless you have a compelling reason to use something different. It will be implemented underneath in terms of the OS memory allocation primitives, but there is no real advantage in diving down to that layer yourself.
A few API calls I believe need a memory block allocated from a Windows heap, but you will know when you come across them.
Or if you want to do something more advanced, like use shared memory, or need to control the permissions on the memory pages directly then you will need to look at the Windows API calls like VirtualAlloc.
|
VirtualAlloc and friends can give you a bit of an edge if you do have heaps of data to process or if you need to go to the trouble of creating your own memory manager anyway.
Otherwise it's easier and of course more portable to just use malloc().
VirtualAlloc has this nifty feature called MEM\_RESET, which invalidates the data in a block of memory, but keeps it allocated. This means if it's paged to disk, Windows won't bother to page it back in next time you access it. It's nice if you have many megs of data that can suddenly become unnecessary, but you'll soon have something else to fill the buffer up.
It also differentiates between reserving address space and actually requesting memory. There's some nice stuff there if you have a good reason to go to all that trouble.
|
Memory allocation on Windows C code
|
[
"",
"c++",
"c",
"malloc",
"virtualalloc",
"heapalloc",
""
] |
Looking through some code I came across the following code
```
trTuDocPackTypdBd.update(TrTuDocPackTypeDto.class.cast(packDto));
```
and I'd like to know if casting this way has any advantages over
```
trTuDocPackTypdBd.update((TrTuDocPackTypeDto)packDto);
```
I've asked the developer responsible and he said he used it because it was new (which doesn't seem like a particularly good reason to me), but I'm intrigued when I would want to use the method.
|
These statements are not identical. The cast method is a normal method invocation (`invokevirtual` JVM instruction) while the other is a language construct (`checkcast` instruction). In the case you show above, you should use the second form: `(TrTuDocPackTypeDto) packDto`
The `cast` method is used in reflective programming with generics, when you have a Class instance for some variable type. You could use it like this:
```
public <T> Set<T> find(Class<T> clz, Filter criteria) {
List<?> raw = session.find(clz, criteria); /* A legacy, un-generic API. */
Set<T> safe = new HashSet<T>();
for (Object o : raw)
safe.add(clz.cast(o));
return safe;
}
```
This gives you a safe way to avoid the incorrect alternative of simply casting a raw type to a generic type:
```
/* DO NOT DO THIS! */
List raw = new ArrayList();
...
return (List<Widget>) raw;
```
The compiler will warn you, `Unchecked cast from List to List<Widget>`, meaning that in the ellipsis, someone could have added a `Gadget` to the raw list, which will eventually cause a `ClassCastException` when the caller iterates over the returned list of (supposed) `Widget` instances.
|
The main case for doing it (IME) is when you need to safely cast in a generic class/method. Due to type erasure, you can't cast to `T` but if you've been provided a `Class<? extends T>` parameter then you can use that to cast and the result will be assignable to a variable of type `T`.
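As a rough sketch of that idea (the helper `firstAs` is hypothetical), a `Class<T>` token lets a generic method cast and return something assignable to `T` despite type erasure:

```java
import java.util.ArrayList;
import java.util.List;

public class CastDemo {
    // Cast the first element using the Class<T> token, so the result
    // is assignable to T; throws ClassCastException if it isn't a T.
    static <T> T firstAs(Class<T> clz, List<?> items) {
        return clz.cast(items.get(0));
    }

    public static void main(String[] args) {
        List<Object> raw = new ArrayList<>();
        raw.add("hello");
        String s = firstAs(String.class, raw);
        System.out.println(s);
    }
}
```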
|
When should I use the java 5 method cast of Class?
|
[
"",
"java",
"java-5",
""
] |
I'm trying to work through Project Euler and I'm hitting a barrier on problem 03. I have an algorithm that works for smaller numbers, but problem 3 uses a very, very large number.
**Problem 03:**
The prime factors of 13195 are 5, 7, 13 and 29.
What is the largest prime factor of the number 600851475143?
Here is my solution in C# and it's been running for I think close to an hour. I'm not looking for an answer because I do actually want to solve this myself. Mainly just looking for some help.
```
static void Main(string[] args) {
const long n = 600851475143;
//const long n = 13195;
long count, half, largestPrime = 0;
bool IsAPrime;
half = n / 2;
for (long i = half; i > 1 && largestPrime == 0; i--) {
if (n % i == 0) { // these are factors of n
count = 1;
IsAPrime = true;
while (++count < i && IsAPrime) {
if (i % count == 0) { // does a factor of n have a factor? (not prime)
IsAPrime = false;
}
}
if (IsAPrime) {
largestPrime = i;
}
}
}
Console.WriteLine("The largest prime factor is " + largestPrime.ToString() + ".");
Console.ReadLine();
}
```
|
For starters, instead of beginning your search at n / 2, start it at the square root of n. You'll get half of the factors, the other half being their complement.
eg:
```
n = 27
start at floor(sqrt(27)) = 5
is 5 a factor? no
is 4 a factor? no
is 3 a factor? yes. 27 / 3 = 9. 9 is also a factor.
is 2 a factor? no.
factors are 3 and 9.
```
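That search can be sketched in a few lines (a hint-level sketch only: it returns the factor pair closest to the square root, not the largest prime factor, so the puzzle stays unspoiled):

```python
import math

def largest_factor_pair(n):
    # Search downward from floor(sqrt(n)); each factor i pairs with n // i.
    for i in range(math.isqrt(n), 1, -1):
        if n % i == 0:
            return i, n // i
    return 1, n  # n is prime

print(largest_factor_pair(27))  # → (3, 9)
```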
|
Although the question asks for the *largest* prime factor, it doesn't necessarily mean you have to find that one first...
|
Project Euler Question 3 Help
|
[
"",
"c#",
"algorithm",
"language-agnostic",
"primes",
""
] |
Is there a way to get the Unicode Block of a character in python? The [unicodedata](http://www.python.org/doc/2.5.2/lib/module-unicodedata.html) module doesn't seem to have what I need, and I couldn't find an external library for it.
Basically, I need the same functionality as [`Character.UnicodeBlock.of()`](http://java.sun.com/javase/6/docs/api/java/lang/Character.UnicodeBlock.html#of(char)) in java.
|
I couldn't find one either. Strange!
Luckily, the number of Unicode blocks is quite manageably small.
This implementation accepts a one-character Unicode string, just like the functions in `unicodedata`. If your inputs are mostly ASCII, this linear search might even be faster than binary search using `bisect` or whatever. If I were submitting this for inclusion in the Python standard library, I'd probably write it as a binary search through an array of statically-initialized `struct`s in C.
```
def block(ch):
'''
Return the Unicode block name for ch, or None if ch has no block.
>>> block(u'a')
'Basic Latin'
>>> block(unichr(0x0b80))
'Tamil'
>>> block(unichr(0xe0080))
'''
assert isinstance(ch, unicode) and len(ch) == 1, repr(ch)
cp = ord(ch)
for start, end, name in _blocks:
if start <= cp <= end:
return name
def _initBlocks(text):
global _blocks
_blocks = []
import re
pattern = re.compile(r'([0-9A-F]+)\.\.([0-9A-F]+);\ (\S.*\S)')
for line in text.splitlines():
m = pattern.match(line)
if m:
start, end, name = m.groups()
_blocks.append((int(start, 16), int(end, 16), name))
# retrieved from http://unicode.org/Public/UNIDATA/Blocks.txt
_initBlocks('''
# Blocks-12.0.0.txt
# Date: 2018-07-30, 19:40:00 GMT [KW]
# © 2018 Unicode®, Inc.
# For terms of use, see http://www.unicode.org/terms_of_use.html
#
# Unicode Character Database
# For documentation, see http://www.unicode.org/reports/tr44/
#
# Format:
# Start Code..End Code; Block Name
# ================================================
# Note: When comparing block names, casing, whitespace, hyphens,
# and underbars are ignored.
# For example, "Latin Extended-A" and "latin extended a" are equivalent.
# For more information on the comparison of property values,
# see UAX #44: http://www.unicode.org/reports/tr44/
#
# All block ranges start with a value where (cp MOD 16) = 0,
# and end with a value where (cp MOD 16) = 15. In other words,
# the last hexadecimal digit of the start of range is ...0
# and the last hexadecimal digit of the end of range is ...F.
# This constraint on block ranges guarantees that allocations
# are done in terms of whole columns, and that code chart display
# never involves splitting columns in the charts.
#
# All code points not explicitly listed for Block
# have the value No_Block.
# Property: Block
#
# @missing: 0000..10FFFF; No_Block
0000..007F; Basic Latin
0080..00FF; Latin-1 Supplement
0100..017F; Latin Extended-A
0180..024F; Latin Extended-B
0250..02AF; IPA Extensions
02B0..02FF; Spacing Modifier Letters
0300..036F; Combining Diacritical Marks
0370..03FF; Greek and Coptic
0400..04FF; Cyrillic
0500..052F; Cyrillic Supplement
0530..058F; Armenian
0590..05FF; Hebrew
0600..06FF; Arabic
0700..074F; Syriac
0750..077F; Arabic Supplement
0780..07BF; Thaana
07C0..07FF; NKo
0800..083F; Samaritan
0840..085F; Mandaic
0860..086F; Syriac Supplement
08A0..08FF; Arabic Extended-A
0900..097F; Devanagari
0980..09FF; Bengali
0A00..0A7F; Gurmukhi
0A80..0AFF; Gujarati
0B00..0B7F; Oriya
0B80..0BFF; Tamil
0C00..0C7F; Telugu
0C80..0CFF; Kannada
0D00..0D7F; Malayalam
0D80..0DFF; Sinhala
0E00..0E7F; Thai
0E80..0EFF; Lao
0F00..0FFF; Tibetan
1000..109F; Myanmar
10A0..10FF; Georgian
1100..11FF; Hangul Jamo
1200..137F; Ethiopic
1380..139F; Ethiopic Supplement
13A0..13FF; Cherokee
1400..167F; Unified Canadian Aboriginal Syllabics
1680..169F; Ogham
16A0..16FF; Runic
1700..171F; Tagalog
1720..173F; Hanunoo
1740..175F; Buhid
1760..177F; Tagbanwa
1780..17FF; Khmer
1800..18AF; Mongolian
18B0..18FF; Unified Canadian Aboriginal Syllabics Extended
1900..194F; Limbu
1950..197F; Tai Le
1980..19DF; New Tai Lue
19E0..19FF; Khmer Symbols
1A00..1A1F; Buginese
1A20..1AAF; Tai Tham
1AB0..1AFF; Combining Diacritical Marks Extended
1B00..1B7F; Balinese
1B80..1BBF; Sundanese
1BC0..1BFF; Batak
1C00..1C4F; Lepcha
1C50..1C7F; Ol Chiki
1C80..1C8F; Cyrillic Extended-C
1C90..1CBF; Georgian Extended
1CC0..1CCF; Sundanese Supplement
1CD0..1CFF; Vedic Extensions
1D00..1D7F; Phonetic Extensions
1D80..1DBF; Phonetic Extensions Supplement
1DC0..1DFF; Combining Diacritical Marks Supplement
1E00..1EFF; Latin Extended Additional
1F00..1FFF; Greek Extended
2000..206F; General Punctuation
2070..209F; Superscripts and Subscripts
20A0..20CF; Currency Symbols
20D0..20FF; Combining Diacritical Marks for Symbols
2100..214F; Letterlike Symbols
2150..218F; Number Forms
2190..21FF; Arrows
2200..22FF; Mathematical Operators
2300..23FF; Miscellaneous Technical
2400..243F; Control Pictures
2440..245F; Optical Character Recognition
2460..24FF; Enclosed Alphanumerics
2500..257F; Box Drawing
2580..259F; Block Elements
25A0..25FF; Geometric Shapes
2600..26FF; Miscellaneous Symbols
2700..27BF; Dingbats
27C0..27EF; Miscellaneous Mathematical Symbols-A
27F0..27FF; Supplemental Arrows-A
2800..28FF; Braille Patterns
2900..297F; Supplemental Arrows-B
2980..29FF; Miscellaneous Mathematical Symbols-B
2A00..2AFF; Supplemental Mathematical Operators
2B00..2BFF; Miscellaneous Symbols and Arrows
2C00..2C5F; Glagolitic
2C60..2C7F; Latin Extended-C
2C80..2CFF; Coptic
2D00..2D2F; Georgian Supplement
2D30..2D7F; Tifinagh
2D80..2DDF; Ethiopic Extended
2DE0..2DFF; Cyrillic Extended-A
2E00..2E7F; Supplemental Punctuation
2E80..2EFF; CJK Radicals Supplement
2F00..2FDF; Kangxi Radicals
2FF0..2FFF; Ideographic Description Characters
3000..303F; CJK Symbols and Punctuation
3040..309F; Hiragana
30A0..30FF; Katakana
3100..312F; Bopomofo
3130..318F; Hangul Compatibility Jamo
3190..319F; Kanbun
31A0..31BF; Bopomofo Extended
31C0..31EF; CJK Strokes
31F0..31FF; Katakana Phonetic Extensions
3200..32FF; Enclosed CJK Letters and Months
3300..33FF; CJK Compatibility
3400..4DBF; CJK Unified Ideographs Extension A
4DC0..4DFF; Yijing Hexagram Symbols
4E00..9FFF; CJK Unified Ideographs
A000..A48F; Yi Syllables
A490..A4CF; Yi Radicals
A4D0..A4FF; Lisu
A500..A63F; Vai
A640..A69F; Cyrillic Extended-B
A6A0..A6FF; Bamum
A700..A71F; Modifier Tone Letters
A720..A7FF; Latin Extended-D
A800..A82F; Syloti Nagri
A830..A83F; Common Indic Number Forms
A840..A87F; Phags-pa
A880..A8DF; Saurashtra
A8E0..A8FF; Devanagari Extended
A900..A92F; Kayah Li
A930..A95F; Rejang
A960..A97F; Hangul Jamo Extended-A
A980..A9DF; Javanese
A9E0..A9FF; Myanmar Extended-B
AA00..AA5F; Cham
AA60..AA7F; Myanmar Extended-A
AA80..AADF; Tai Viet
AAE0..AAFF; Meetei Mayek Extensions
AB00..AB2F; Ethiopic Extended-A
AB30..AB6F; Latin Extended-E
AB70..ABBF; Cherokee Supplement
ABC0..ABFF; Meetei Mayek
AC00..D7AF; Hangul Syllables
D7B0..D7FF; Hangul Jamo Extended-B
D800..DB7F; High Surrogates
DB80..DBFF; High Private Use Surrogates
DC00..DFFF; Low Surrogates
E000..F8FF; Private Use Area
F900..FAFF; CJK Compatibility Ideographs
FB00..FB4F; Alphabetic Presentation Forms
FB50..FDFF; Arabic Presentation Forms-A
FE00..FE0F; Variation Selectors
FE10..FE1F; Vertical Forms
FE20..FE2F; Combining Half Marks
FE30..FE4F; CJK Compatibility Forms
FE50..FE6F; Small Form Variants
FE70..FEFF; Arabic Presentation Forms-B
FF00..FFEF; Halfwidth and Fullwidth Forms
FFF0..FFFF; Specials
10000..1007F; Linear B Syllabary
10080..100FF; Linear B Ideograms
10100..1013F; Aegean Numbers
10140..1018F; Ancient Greek Numbers
10190..101CF; Ancient Symbols
101D0..101FF; Phaistos Disc
10280..1029F; Lycian
102A0..102DF; Carian
102E0..102FF; Coptic Epact Numbers
10300..1032F; Old Italic
10330..1034F; Gothic
10350..1037F; Old Permic
10380..1039F; Ugaritic
103A0..103DF; Old Persian
10400..1044F; Deseret
10450..1047F; Shavian
10480..104AF; Osmanya
104B0..104FF; Osage
10500..1052F; Elbasan
10530..1056F; Caucasian Albanian
10600..1077F; Linear A
10800..1083F; Cypriot Syllabary
10840..1085F; Imperial Aramaic
10860..1087F; Palmyrene
10880..108AF; Nabataean
108E0..108FF; Hatran
10900..1091F; Phoenician
10920..1093F; Lydian
10980..1099F; Meroitic Hieroglyphs
109A0..109FF; Meroitic Cursive
10A00..10A5F; Kharoshthi
10A60..10A7F; Old South Arabian
10A80..10A9F; Old North Arabian
10AC0..10AFF; Manichaean
10B00..10B3F; Avestan
10B40..10B5F; Inscriptional Parthian
10B60..10B7F; Inscriptional Pahlavi
10B80..10BAF; Psalter Pahlavi
10C00..10C4F; Old Turkic
10C80..10CFF; Old Hungarian
10D00..10D3F; Hanifi Rohingya
10E60..10E7F; Rumi Numeral Symbols
10F00..10F2F; Old Sogdian
10F30..10F6F; Sogdian
10FE0..10FFF; Elymaic
11000..1107F; Brahmi
11080..110CF; Kaithi
110D0..110FF; Sora Sompeng
11100..1114F; Chakma
11150..1117F; Mahajani
11180..111DF; Sharada
111E0..111FF; Sinhala Archaic Numbers
11200..1124F; Khojki
11280..112AF; Multani
112B0..112FF; Khudawadi
11300..1137F; Grantha
11400..1147F; Newa
11480..114DF; Tirhuta
11580..115FF; Siddham
11600..1165F; Modi
11660..1167F; Mongolian Supplement
11680..116CF; Takri
11700..1173F; Ahom
11800..1184F; Dogra
118A0..118FF; Warang Citi
119A0..119FF; Nandinagari
11A00..11A4F; Zanabazar Square
11A50..11AAF; Soyombo
11AC0..11AFF; Pau Cin Hau
11C00..11C6F; Bhaiksuki
11C70..11CBF; Marchen
11D00..11D5F; Masaram Gondi
11D60..11DAF; Gunjala Gondi
11EE0..11EFF; Makasar
11FC0..11FFF; Tamil Supplement
12000..123FF; Cuneiform
12400..1247F; Cuneiform Numbers and Punctuation
12480..1254F; Early Dynastic Cuneiform
13000..1342F; Egyptian Hieroglyphs
13430..1343F; Egyptian Hieroglyph Format Controls
14400..1467F; Anatolian Hieroglyphs
16800..16A3F; Bamum Supplement
16A40..16A6F; Mro
16AD0..16AFF; Bassa Vah
16B00..16B8F; Pahawh Hmong
16E40..16E9F; Medefaidrin
16F00..16F9F; Miao
16FE0..16FFF; Ideographic Symbols and Punctuation
17000..187FF; Tangut
18800..18AFF; Tangut Components
1B000..1B0FF; Kana Supplement
1B100..1B12F; Kana Extended-A
1B130..1B16F; Small Kana Extension
1B170..1B2FF; Nushu
1BC00..1BC9F; Duployan
1BCA0..1BCAF; Shorthand Format Controls
1D000..1D0FF; Byzantine Musical Symbols
1D100..1D1FF; Musical Symbols
1D200..1D24F; Ancient Greek Musical Notation
1D2E0..1D2FF; Mayan Numerals
1D300..1D35F; Tai Xuan Jing Symbols
1D360..1D37F; Counting Rod Numerals
1D400..1D7FF; Mathematical Alphanumeric Symbols
1D800..1DAAF; Sutton SignWriting
1E000..1E02F; Glagolitic Supplement
1E100..1E14F; Nyiakeng Puachue Hmong
1E2C0..1E2FF; Wancho
1E800..1E8DF; Mende Kikakui
1E900..1E95F; Adlam
1EC70..1ECBF; Indic Siyaq Numbers
1ED00..1ED4F; Ottoman Siyaq Numbers
1EE00..1EEFF; Arabic Mathematical Alphabetic Symbols
1F000..1F02F; Mahjong Tiles
1F030..1F09F; Domino Tiles
1F0A0..1F0FF; Playing Cards
1F100..1F1FF; Enclosed Alphanumeric Supplement
1F200..1F2FF; Enclosed Ideographic Supplement
1F300..1F5FF; Miscellaneous Symbols and Pictographs
1F600..1F64F; Emoticons
1F650..1F67F; Ornamental Dingbats
1F680..1F6FF; Transport and Map Symbols
1F700..1F77F; Alchemical Symbols
1F780..1F7FF; Geometric Shapes Extended
1F800..1F8FF; Supplemental Arrows-C
1F900..1F9FF; Supplemental Symbols and Pictographs
1FA00..1FA6F; Chess Symbols
1FA70..1FAFF; Symbols and Pictographs Extended-A
20000..2A6DF; CJK Unified Ideographs Extension B
2A700..2B73F; CJK Unified Ideographs Extension C
2B740..2B81F; CJK Unified Ideographs Extension D
2B820..2CEAF; CJK Unified Ideographs Extension E
2CEB0..2EBEF; CJK Unified Ideographs Extension F
2F800..2FA1F; CJK Compatibility Ideographs Supplement
E0000..E007F; Tags
E0100..E01EF; Variation Selectors Supplement
F0000..FFFFF; Supplementary Private Use Area-A
100000..10FFFF; Supplementary Private Use Area-B
# EOF
''')
```
|
```
pip install unicodeblock
```
A third-party library [UnicodeBlock](https://github.com/neuront/pyunicodeblock) does this. Here is an example from their page:
```
>>> import unicodeblock.blocks
>>> print(unicodeblock.blocks.of('0'))
DIGIT
>>> print(unicodeblock.blocks.of('汉'))
CJK_UNIFIED_IDEOGRAPHS
>>> print(unicodeblock.blocks.of('あ'))
HIRAGANA
```
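If a third-party dependency isn't an option, the same lookup can be sketched with a binary search over `(start, end, name)` ranges parsed from `Blocks.txt` (only a few sample ranges are shown here for illustration; the full file above defines them all):

```python
import bisect

# A handful of (start, end, name) ranges from Blocks.txt; the real file
# defines roughly 300 of them.
BLOCKS = [
    (0x0000, 0x007F, "Basic Latin"),
    (0x3040, 0x309F, "Hiragana"),
    (0x4E00, 0x9FFF, "CJK Unified Ideographs"),
]
STARTS = [start for start, end, name in BLOCKS]

def block_of(ch):
    """Return the name of the Unicode block containing ch, or None."""
    cp = ord(ch)
    # Find the last block whose start is at or before the code point.
    i = bisect.bisect_right(STARTS, cp) - 1
    if i >= 0 and cp <= BLOCKS[i][1]:
        return BLOCKS[i][2]
    return None

print(block_of('あ'))  # Hiragana
print(block_of('汉'))  # CJK Unified Ideographs
```

The upper-bound check catches code points that fall in a gap between blocks, for which no block name is defined.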
|
Unicode block of a character in python
|
[
"",
"python",
"unicode",
"character-properties",
""
] |
Is there a library method to copy all the properties between two (already present) instances of the same class, in Python?
I mean, something like Apache Commons' `PropertyUtilsBean.copyProperties()`
|
If your class does not override `__getattr__` or `__setattr__` for special attribute access, all your attributes are stored in `__dict__`, so you can do:
```
nobj.__dict__ = oobj.__dict__.copy() # just a shallow copy
```
If you use python properties you should look at `inspect.getmembers()` and filter out the ones you want to copy.
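A minimal sketch of the `__dict__` approach, using a hypothetical `Point` class:

```python
class Point(object):
    def __init__(self, x=0, y=0):
        self.x = x
        self.y = y

a = Point(1, 2)
b = Point()

# Shallow copy of a's instance attributes onto b
b.__dict__ = a.__dict__.copy()
print(b.x, b.y)  # 1 2
```

Because the copy is shallow, any mutable attribute values are shared between the two instances afterwards.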
|
Try `destination.__dict__.update(source.__dict__)`.
|
How to copy all properties of an object to another object, in Python?
|
[
"",
"python",
""
] |
Is there a case insensitive version of the [:contains](http://docs.jquery.com/Selectors/contains) jQuery selector or should I do the work manually by looping over all elements and comparing their .text() to my string?
|
What I ended up doing for jQuery 1.2 is :
```
jQuery.extend(
jQuery.expr[':'], {
Contains : "jQuery(a).text().toUpperCase().indexOf(m[3].toUpperCase())>=0"
});
```
This will extend jquery to have a :Contains selector that is case insensitive, the :contains selector remains unchanged.
Edit: For jQuery 1.3 (thanks @user95227) and later you need
```
jQuery.expr[':'].Contains = function(a,i,m){
return jQuery(a).text().toUpperCase().indexOf(m[3].toUpperCase())>=0;
};
```
Edit:
Apparently accessing the DOM directly by using
```
(a.textContent || a.innerText || "")
```
instead of
```
jQuery(a).text()
```
In the previous expression speeds it up considerably so try at your own risk if speed is an issue. (see [@John](https://stackoverflow.com/users/171662/john) 's [question](https://stackoverflow.com/questions/1407434/is-there-any-way-to-speed-up-this-solution-for-a-case-insensitive-jquery-contai))
Latest edit: For jQuery 1.8 it should be:
```
jQuery.expr[":"].Contains = jQuery.expr.createPseudo(function(arg) {
return function( elem ) {
return jQuery(elem).text().toUpperCase().indexOf(arg.toUpperCase()) >= 0;
};
});
```
|
To make it optionally case insensitive:
<http://bugs.jquery.com/ticket/278>
```
$.extend($.expr[':'], {
'containsi': function(elem, i, match, array)
{
return (elem.textContent || elem.innerText || '').toLowerCase()
.indexOf((match[3] || "").toLowerCase()) >= 0;
}
});
```
then use `:containsi` instead of `:contains`
|
Is there a case insensitive jQuery :contains selector?
|
[
"",
"javascript",
"jquery",
"jquery-selectors",
""
] |
How do I extend my parent's options array for child classes in PHP?
I have something like this:
```
class ParentClass {
public $options = array(
'option1'=>'setting1'
);
//The rest of the functions would follow
}
```
I would like to append to that options array in a child class without erasing any of the parent options. I've tried doing something like this, but haven't quite got it to work yet:
```
class ChildClass extends ParentClass {
public $options = parent::options + array(
'option2'=>'setting2'
);
//The rest of the functions would follow
}
```
What would be the best way to do something like this?
|
I think it is better practice to initialize this property in the constructor and then you can extend the value in any descendant class:
```
<?php
class ParentClass {
public $options;
public function __construct() {
$this->options = array(
'option1'=>'setting1'
);
}
//The rest of the functions would follow
}
class ChildClass extends ParentClass {
public function __construct() {
parent::__construct();
$this->options['option2'] = 'setting2';
}
//The rest of the functions would follow
}
?>
```
|
PHP or no, you should have an accessor to do it, so you can call `$self->append_elements( 'foo' => 'bar' );` and not worry about the internal implementation.
|
Appending to an array variable in a parent class in PHP
|
[
"",
"php",
"oop",
""
] |
I have an img tag in my webapp that uses the onload handler to resize the image:
```
<img onLoad="SizeImage(this);" src="foo" >
```
This works fine in Firefox 3, but fails in IE7 because the image object being passed to the `SizeImage()` function has a width and height of 0 for some reason -- maybe IE calls the function before it finishes loading? In researching this, I have discovered that other people have had this same problem with IE. I have also discovered that this isn't valid HTML 4. This is our doctype, so I don't know if it's valid or not:
```
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
```
Is there a reasonable solution for resizing an image as it is loaded, preferably one that is standards-compliant? The image is being used for the user to upload a photo of themselves, which can be nearly any size, and we want to display it at a maximum of 150x150. If your solution is to resize the image server-side on upload, I know that is the correct solution, but I am forbidden from implementing it :( It must be done client side, and it must be done on display.
Thanks.
**Edit**: Due to the structure of our app, it is impractical (bordering on impossible) to run this script in the document's onload. I can only reasonably edit the image tag and the code near it (for instance I could add a `<script>` right below it). Also, we already have Prototype and EXT JS libraries... management would prefer to not have to add another (some answers have suggested jQuery). If this can be solved using those frameworks, that would be great.
**Edit 2**: Unfortunately, we must support Firefox 3, IE 6 and IE 7. It is desirable to support all Webkit-based browsers as well, but as our site doesn't currently support them, we can tolerate solutions that only work in the Big 3.
|
IE7 is trying to resize the image before the DOM tree is fully rendered. You need to run it on document.onload... you'll just need to make sure your function can handle being passed a reference to the element that isn't "this."
Alternatively... and I hope this isn't a flameable offense... jQuery makes stuff like this really, really easy.
EDIT in response to EDIT 1:
You can put `window.onload = runFunction;` in any script tag, anywhere in the body. It will still wait until the document is loaded to run the function.
|
If you don't have to support IE 6, you can just use this CSS.
```
yourImageSelector {
max-width: 150px;
max-height: 150px;
}
```
|
img onload doesn't work well in IE7
|
[
"",
"javascript",
"html",
"internet-explorer",
"image",
""
] |
I am currently writing a little bootstrap code for a service that can be run in the console. It essentially boils down to calling the OnStart() method instead of using the ServiceBase to start and stop the service (because it doesn't run the application if it isn't installed as a service and makes debugging a nightmare).
Right now I am using Debugger.IsAttached to determine if I should use ServiceBase.Run or [service].OnStart, but I know that isn't the best idea because sometimes end users want to run the service in a console (to see the output etc. in real time).
Any ideas on how I could determine if the Windows service controller started 'me', or if the user started 'me' in the console? Apparently [Environment.IsUserInteractive](http://social.msdn.microsoft.com/Forums/en-US/csharpgeneral/thread/99c4594b-e6c9-424f-bfe1-c7261bba24d3/ "forum thread") is not the answer. I thought about using commandline args, but that seems 'dirty'.
I could always see about a try-catch statement around ServiceBase.Run, but that seems dirty. Edit: Try catch doesn't work.
I have a solution: putting it up here for all the other interested stackers:
```
public void Run()
{
if (Debugger.IsAttached || Environment.GetCommandLineArgs().Contains<string>("-console"))
{
RunAllServices();
}
else
{
try
{
string temp = Console.Title;
ServiceBase.Run((ServiceBase[])ComponentsToRun);
}
catch
{
RunAllServices();
}
}
} // void Run
private void RunAllServices()
{
foreach (ConsoleService component in ComponentsToRun)
{
component.Start();
}
WaitForCTRLC();
foreach (ConsoleService component in ComponentsToRun)
{
component.Stop();
}
}
```
EDIT: There was another question on StackOverflow where the guy had problems with Environment.CurrentDirectory being "C:\Windows\System32"; looks like that may be the answer. I will test today.
|
Like Ash, I write all actual processing code in a separate class library assembly, which was then referenced by the windows service executable, as well as a console app.
However, there are occasions when it is useful to know if the class library is running in the context of the service executable or the console app. The way I do this is to reflect on the base class of the hosting app. (Sorry for the VB, but I imagine that the following could be c#-ified fairly easily):
```
Public Class ExecutionContext
''' <summary>
''' Gets a value indicating whether the application is a windows service.
''' </summary>
''' <value>
''' <c>true</c> if this instance is service; otherwise, <c>false</c>.
''' </value>
Public Shared ReadOnly Property IsService() As Boolean
Get
' Determining whether or not the host application is a service is
' an expensive operation (it uses reflection), so we cache the
' result of the first call to this method so that we don't have to
' recalculate it every call.
' If we have not already determined whether or not the application
' is running as a service...
If IsNothing(_isService) Then
' Get details of the host assembly.
Dim entryAssembly As Reflection.Assembly = Reflection.Assembly.GetEntryAssembly
' Get the method that was called to enter the host assembly.
Dim entryPoint As System.Reflection.MethodInfo = entryAssembly.EntryPoint
' If the base type of the host assembly inherits from the
' "ServiceBase" class, it must be a windows service. We store
' the result ready for the next caller of this method.
_isService = (entryPoint.ReflectedType.BaseType.FullName = "System.ServiceProcess.ServiceBase")
End If
' Return the cached result.
Return CBool(_isService)
End Get
End Property
Private Shared _isService As Nullable(Of Boolean) = Nothing
#End Region
End Class
```
|
Another workaround, so it can run as a WinForms app or as a Windows service:
```
var backend = new Backend();
if (Environment.UserInteractive)
{
backend.OnStart();
Application.EnableVisualStyles();
Application.SetCompatibleTextRenderingDefault(false);
Application.Run(new Fronend(backend));
backend.OnStop();
}
else
{
var ServicesToRun = new ServiceBase[] {backend};
ServiceBase.Run(ServicesToRun);
}
```
|
Am I Running as a Service
|
[
"",
"c#",
".net",
"windows-services",
""
] |
I'm working on a project where I have 2 web services that need the same entity. The 2 web services are on the same server so on the back-end, they share the same classes.
On the front-end side, my code consumes *both* web services and sees the entities from both services as separate (in different namespaces) so I can't use the entity across both services.
Does anyone know of a way to allow this to work in .NET 2.0?
I've done this with my entity:
```
[XmlType(TypeName = "Class1", Namespace = "myNamespace")]
public class Class1
{
public int field;
}
```
Hoping that my IDE would somehow "know" that the class is the same on both web services so that it wouldn't create separate entities for both classes, but no luck.
Is this possible to do with .NET 2.0 web services?
|
I think that you can not do that from inside VS but you can manually use the `wsdl.exe` utility like this:
```
wsdl.exe /sharetypes http://localhost/MyService1.asmx?wsdl http://localhost/MyService2.asmx?wsdl
```
Notice the `/sharetypes` option which turns on the type sharing feature. This feature creates one code file with a single type definition for identical types shared between different services (the namespace, name, and wire signature must be identical).
More info:
* [Web Services Description Language tool](http://msdn.microsoft.com/en-us/library/7h3ystb6(VS.80).aspx)
* [Framework 2.0 - WebServices Shared Types](http://weblogs.asp.net/israelio/archive/2005/01/04/346337.aspx)
|
I'm not sure about the implementation details with .NET 2.0, but I believe what you want to do is put the common classes in a separate XSD file and refer to it from within your two WSDL's. This way, the common types have the same namespace between the two services.
Now, how you do this in .NET 2.0 I couldn't give you the specifics on...
|
One entity shared across two web services?
|
[
"",
"c#",
".net",
"web-services",
"wsdl",
""
] |
I have two iterators, a `list` and an `itertools.count` object (i.e. an infinite value generator). I would like to merge these two into a resulting iterator that will alternate yield values between the two:
```
>>> import itertools
>>> c = itertools.count(1)
>>> items = ['foo', 'bar']
>>> merged = imerge(items, c) # the mythical "imerge"
>>> merged.next()
'foo'
>>> merged.next()
1
>>> merged.next()
'bar'
>>> merged.next()
2
>>> merged.next()
Traceback (most recent call last):
...
StopIteration
```
What is the simplest, most concise way to do this?
|
A generator will solve your problem nicely.
```
def imerge(a, b):
for i, j in itertools.izip(a,b):
yield i
yield j
```
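In Python 3, `itertools.izip` is gone, but the built-in `zip` is lazy, so the same generator still handles an infinite second iterator safely:

```python
import itertools

def imerge(a, b):
    # zip stops at the shorter iterator and never materializes b,
    # so b may be infinite (e.g. itertools.count)
    for i, j in zip(a, b):
        yield i
        yield j

c = itertools.count(1)
print(list(imerge(['foo', 'bar'], c)))  # ['foo', 1, 'bar', 2]
```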
|
You can do something that is almost exaclty what @Pramod first suggested.
```
def izipmerge(a, b):
for i, j in itertools.izip(a,b):
yield i
yield j
```
The advantage of this approach is that you won't run out of memory if both a and b are infinite.
|
How do I merge two python iterators?
|
[
"",
"python",
"merge",
"iterator",
""
] |
A co-worker of mine asked me to review some of my code and he sent me a diff file. I'm not new to diffs or version control in general, but the diff file was very difficult to read because of the changes he made. Specifically, he used the "extract method" feature and reordered some methods. Conceptually, very easy to understand but looking at the diff, it was very hard to tell what he had done. It was much easier for me to check out the previous revision and use Eclipse's "compare" feature, but it was still quite clunky.
Is there any version control system that stores metadata related to refactoring. Of course, it would be IDE and Programming Language specific, but we all use Eclipse and Java! Perhaps there might be some standard on which IDEs and version control implementations can play nicely?
|
Eclipse can [export refactoring history](http://help.eclipse.org/help32/index.jsp?topic=/org.eclipse.jdt.doc.user/tasks/tasks-240.htm) (see [3.2 release notes](http://mirror.calvin.edu/eclipse/downloads/drops/S-3.2M5-200602171115/eclipse-news-M5.html) as well). You could then view the refactoring changes via preview in Eclipse.
|
I don't know of compare tools that do a good job when the file has been rearranged. In general, this is a bad idea because of this type of problem. All too often people do it to simply meet their own style, which is a bad, bad reason to change code. It can effectively destroy the history, just like reformatting the entire file, and should never be done unless necessary (i.e. it is already a mess and unreadable).
The other problem is that working code will likely get broken because of someones style preferences. If it ain't broken, don't fix it!
|
Refactoring Nicely with Version Control
|
[
"",
"java",
"eclipse",
"svn",
"version-control",
""
] |
Converting a couple stored procedures from MySQL to Microsoft SQL server. Everything is going well, except one procedure used the MySQL `SHA1()` function. I cannot seem to find an equivalent to this in MS-SQL.
Does anyone know a valid equivalent for `SHA1()` on MS-SQL?
|
SQL Server 2005 and later has the [HashBytes()](http://msdn.microsoft.com/en-us/library/ms174415.aspx) function.
|
If you want to get a SHA1 hash exactly as MySQL would generate it (i.e. as a varchar), you can combine HashBytes with sys.fn\_varbintohexsubstring. E.g.
```
SELECT sys.fn_varbintohexsubstring(0, HashBytes('SHA1', 'password'), 1, 0)
```
See <http://accessrichard.blogspot.co.nz/2010/12/sql-server-and-net-equivalent-to-php.html> for more details.
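As a sanity check when comparing the two databases, note that the varchar MySQL's `SHA1()` returns is just the lowercase hex digest of the input bytes; for example, in Python:

```python
import hashlib

# MySQL's SHA1('password') returns this same 40-character lowercase hex string
digest = hashlib.sha1(b'password').hexdigest()
print(digest)  # 5baa61e4c9b93f3f0682250b6cf8331b7ee68fd8
```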
|
Is there an equivalent to SHA1() in MS-SQL?
|
[
"",
"sql",
"mysql",
"sha1",
"t-sql",
"sql-server",
""
] |
Besides the CODEX, what resources do you recommend to help a person new to plugin development create a WordPress plugin? I have an idea, but need a bit better explanation than what is in the CODEX to get started.
UPDATE: Is there a book that I could buy to help me out with this?
|
Having written the MyBlogLog plugin (the original one, that is) I found that the Wordpress Hooks list (can't remember the link offhand) was incredibly useful, as was the sample code from the Codex and WP Install files. Reading through other developer's plugins is also a good way to learn, as you can see how they implemented things and use those techniques to save yourself some R&D time.
What are you looking to create, anyways?
**Edit:**
I posted a comment with this, but just in case it gets lost...
For your specific needs, you're going to want to store data and be able to manage and retrieve it so creating a custom database table in your plugin is something you will want to do. See this codex link:
<http://codex.wordpress.org/Creating_Tables_with_Plugins>
Then you can just add your management code into the admin screens using the techniques found on this Codex page:
<http://codex.wordpress.org/Adding_Administration_Menus>
If you want to display the items on a page, you can either write yourself a custom PHP WP Page template to query the DB directly:
<http://codex.wordpress.org/Pages#Page_Templates>
Or just add a hook filter on your plugin to write the results to the page based on a keyword you specify:
<http://codex.wordpress.org/Plugin_API#Filters>
|
Here is a [useful set of links](http://www.devlounge.net/extras/how-to-write-a-wordpress-plugin) on how to do Wordpress plugins. Be aware that it is relatively "advanced" (in that it introduces a number of object oriented methods to the process). You should really read the Wordpress Codex stuff first.
|
WordPress Plugin Development
|
[
"",
"php",
"wordpress",
"plugins",
"resources",
""
] |
I was helping out some colleagues of mine with an SQL problem. Mainly they wanted to move all the rows from table A to table B (both tables having the same columns (names and types)). Although this was done in Oracle 11g I don't think it really matters.
Their initial naive implementation was something like
```
BEGIN
  INSERT INTO B SELECT * FROM A;
  DELETE FROM A;
  COMMIT;
END;
```
Their concern was that if there were INSERTs made to table A during the copy from A to B, the "DELETE FROM A" (or TRUNCATE, for what it's worth) would cause data loss (deleting the newly inserted rows in A).
Of course I quickly recommended storing the IDs of the copied rows in a temporary table and then deleting just the rows in A that matched the IDs in the temporary table.
However, for curiosity's sake, we put up a little test by adding a wait command (don't remember the PL/SQL syntax) between the INSERT and the DELETE. Then from a different connection we would insert rows *DURING THE WAIT*.
We observed that there was data loss by doing so. I reproduced the whole context in SQL Server and wrapped it all in a transaction, but still the fresh new data was lost in SQL Server too. This made me think there is a systematic error/flaw in the initial approach.
However I can't tell if it was the fact that the TRANSACTION was not (somehow?) isolated from the fresh new INSERTs or the fact that the INSERTs came during the WAIT command.
In the end it was implemented using the temporary table suggested by me but we couldn't get the answer to "Why the data loss". Do you know why?
|
Depending on your isolation level, selecting all the rows from a table does not prevent new inserts; it will just lock the rows you read. In SQL Server, if you use the Serializable isolation level then it will prevent new rows if they would have been included in your select query.
<http://msdn.microsoft.com/en-us/library/ms173763.aspx> -
SERIALIZABLE
Specifies the following:
* Statements cannot read data that has been modified but not yet committed by other transactions.
* No other transactions can modify data that has been read by the current transaction until the current transaction completes.
* **Other transactions cannot insert new rows with key values that would fall in the range of keys read by any statements in the current transaction until the current transaction completes.**
|
I can't speak to the transaction stability, but an alternate approach would be to have the second step delete from the source table where exists (select ids from target table).
Forgive the syntax, I have not tested this code, but you should be able to get the idea:
```
INSERT INTO B SELECT * FROM A;
DELETE FROM A WHERE EXISTS (SELECT B.<primarykey> FROM B WHERE B.<primarykey> = A.<primarykey>);
```
That way you are using the relational engine to enforce that no newer data will be deleted, and you don't need to do the two steps in a transaction.
Update: corrected syntax in subquery
|
transaction isolation problem or wrong approach?
|
[
"",
"sql",
"sql-server",
"oracle",
"transactions",
"database",
""
] |
I'm trying to build a new .NET C++ project from scratch. I am planning to mix managed and unmanaged code in this project.
this forum thread
[IDataObject : ambiguous symbol error](http://www.daniweb.com/forums/thread29742.html) answers a problem I've seen multiple times.
Post #4 states
"Move all 'using namespace XXXX' from .h to .cpp"
this looks like a good idea but now in my header files I need to reference parameters from the .NET Framework like
```
void loadConfigurations(String^ pPathname);
```
How am I supposed to move using statements in the .cpp file and use the according namespaces in the .h file?
|
It's a good idea to always use fully qualified names in header files. Because the `using` statement affects all following code regardless of `#include`, putting a `using` statement in a header file affects everybody that might include that header.
So you would change your function declaration in your header file to:
```
void loadConfigurations(SomeNamespace::String^ pPathname);
```
where SomeNamespace is the name of the namespace you were `using` previously.
|
In my experience the only namespace you have to be careful about is System itself. It's somewhat annoying that the most common, most important one is where you find problems, but that's life.
At least you can take comfort that if you're creating managed classes, your issues with includes will get relief at the assembly barrier, unlike issues with header files for unmanaged classes.
|
How to order headers in .NET C++ projects
|
[
"",
".net",
"c++",
"reference",
"header",
""
] |
Does the placement of a function have an effect on the performance of closures within scope? If so, where is the optimal place to put these functions? If not, is the implied association by closure enough reason to place a function in another place logically?
For instance, if **foo** does not rely on the value of **localState**, does the fact that **localState** is accessible from **foo** have implications as to **foo**'s execution time, memory use, etc.?
```
(function(){
var localState;
function foo(){
// code
}
function bar(){
// code
return localState;
}
})();
```
In other words, would this be a better choice, and if so why?
```
(function(){
function foo(){
// code
}
var localState;
function bar(){
// code
return localState;
}
})();
```
[Darius Bacon](https://stackoverflow.com/users/27024/darius-bacon) has suggested [below](https://stackoverflow.com/questions/243894/javascript-closures-and-function-placement#243942) that the two samples above are identical since **localState** can be accessed anywhere from within the block. However, the example below where **foo** is defined outside the block may be a different case. What do you think?
```
function foo(){
// code
}
(function(){
var localState;
function bar(){
// code
foo();
return localState;
}
})();
```
|
Every function in Javascript is a closure. The runtime to resolve a variable's value is only incurred if the variable is referenced by the function. For instance, in this example function y captures the value of x even though x is not referenced directly by y:
```
var x = 3;
function y() { return eval("x"); }
y(); // returns 3
```
|
Both those snippets are equivalent, because they're both defined in the (same) environment of the anonymous function you're creating. I think you'd be able to access `localState` from `foo` either way.
That being said... if you have absurd amounts of variables in the environment you're creating, then `foo`'s execution time might be affected, as variable lookups will likely take longer. If there are tons of variables that you no longer use in the function you define `foo` in, and `foo` doesn't need them either, then `foo` will cause them to not be garbage-collected, so that could also be an issue.
|
javascript closures and function placement
|
[
"",
"javascript",
"performance",
"coding-style",
"structure",
"closures",
""
] |
I have a variable that is built in loop. Something like:
```
$str = "";
for($i = 0; $i < 10; $i++) $str .= "something";
```
If $str = "" is ommitted, I get undefined variable notice, but I thought php auto-declare a variable the first time it sees undeclared one?
How do I do this right?
|
You get the undefined variable notice because you're concatenating the variable's own value with another value.
It's the equivalent of
> `$str = $str . "something";`
So, it can't tell what the initial value is. It's the equivalent of this:
> `$str = [undefined value] . "something";`
What's the result of concatenating `[undefined value]` and `"something"`? The interpreter can't say...
So, you have to assign `""` to the variable first to initialize its value, as you did.
HTH
|
If you really need to make it a bit cleaner you could do:
```
for($i = 0, $str = ''; $i < 10; $i++) $str .= "something";
```
But what you have is what I normally do. vlceBerg explains it well.
|
PHP: Right way to declare variable before use in loop
|
[
"",
"php",
"performance",
"loops",
""
] |
### The situation
When you study social sciences, you are frequently confronted with the need for online surveys (scientific data collection online). That's one of the main reasons why I started with **PHP** in the first place.
Recently these survey applications have grown and grown. A **lot of complexity** has been added: reporting (flash charts, PDF generation), data aggregation, different levels of aggregation (e.g. company units), questionnaire module selection for the companies involved, etc.
Now I see myself confronted with a complex data gathering and reporting application which is **getting slow and unmaintainable**.
One of my platforms has up to 100 new users per day and aggregates / reports data of several **thousand users** *times* **hundreds of data items** *times* **dozens of hierarchy levels**.
### My questions
* is PHP/mysql still a way to go?
* could a framework like CodeIgniter be a basis for such an application or should I develop everything from scratch?
* do you know any kind of survey specific framework I could use as a basis?
|
> is PHP/mysql still a way to go?
Yes, PHP/MYSQL (LAMP) has been successfully used in hundreds of sites with exponentially larger user bases than yours.
> could a framework like CodeIgniter be a basis for such an application or should I develop everything from scratch?
Nontrivial PHP sites built without a framework (or a custom built framework) tend to get sloppy fast. PHP frameworks are now the norm for getting started; I would personally recommend the [Zend Framework](http://framework.zend.com). It's a very robust framework, providing tools for many common PHP tasks (replacement/enhancements for Database, Date, JSON, RPC, REST) and an organized methodology for web application development: [MVC](http://en.wikipedia.org/wiki/Model-view-controller) using [Zend\_Controller](http://framework.zend.com/manual/en/zend.controller.html).
> do you know any kind of survey specific framework I could use as a basis?
None that I know of, but you may want to try using [Zend\_Form](http://framework.zend.com/manual/en/zend.form.quickstart.html#zend.form.quickstart.config) to automatically generate form elements (type, filters, sanitizers, validators) from configuration files.
|
PHP/mysql should be fine for this scale, but you must tune it and give it sufficient resources.
For example, if your schema is not thought out, you'll hit all sorts of performance walls. How your data is stored and indexed is probably the most important factor for the performance of reports. I've had 60-100gb of data in mysql with sub-second response times for mildly complex queries. The important factors were:
* my data was indexed for the ways I used it
* the queries I used were thought out, tested and optimized
Next up, you must give your MySQL server sufficient resources. If you're running your app on a shared hosted server and you're using the out of the box settings, mysql probably won't run well with more than a few hundred mb's of data. Make sure your caches are tuned, your table types make sense for your application and you have enough memory and fast enough disks to meet your performance needs.
And finally, there's a trick we all use when our data sets get big: generate your reports on a cron instead of on demand. If it takes 2 minutes to generate a flash graph, have a cron run every 5 minutes to generate the data. Stick it in a file somewhere and spit that out to the graphing software instead of querying the database in real time.
|
Large custom survey / reporting applications - best practice
|
[
"",
"php",
"reporting",
"scaling",
"survey",
""
] |
I'm a jQuery novice, so the answer to this may be quite simple:
I have an image, and I would like to do several things with it.
When a user clicks on a 'Zoom' icon, I'm running the 'imagetool' plugin (<http://code.google.com/p/jquery-imagetool/>) to load a larger version of the image. The plugin creates a new div around the image and allows the user to pan around.
When a user clicks on an alternative image, I'm removing the old one and loading in the new one.
The problem comes when a user clicks an alternative image, and then clicks on the zoom button - the imagetool plugin creates the new div, but the image appears after it...
The code is as follows:
```
// Product Zoom (jQuery)
$(document).ready(function(){
$("#productZoom").click(function() {
// Set new image src
var imageSrc = $("#productZoom").attr("href");
$("#productImage").attr('src', imageSrc);
// Run the imagetool plugin on the image
$(function() {
$("#productImage").imagetool({
viewportWidth: 300,
viewportHeight: 300,
topX: 150,
topY: 150,
bottomX: 450,
bottomY: 450
});
});
return false;
});
});
// Alternative product photos (jQuery)
$(document).ready(function(){
$(".altPhoto").click(function() {
$('#productImageDiv div.viewport').remove();
$('#productImage').remove();
// Set new image src
var altImageSrc = $(this).attr("href");
$("#productZoom").attr('href', altImageSrc);
var img = new Image();
$(img).load(function () {
$(this).hide();
$('#productImageDiv').append(this);
$(this).fadeIn();
}).error(function () {
// notify the user that the image could not be loaded
}).attr({
src: altImageSrc,
id: "productImage"
});
return false;
});
});
```
It seems to me that the imagetool plugin can no longer see the #productImage image once it has been replaced with a new image... So I think this has something to do with binding? As in, because the new image is added to the DOM after the page has loaded, the imagetool plugin can no longer use it correctly... is this right?
If so, any ideas how to deal with it?
|
Wehey! I've sorted it out myself...
Turns out if I remove the containing div completely, and then rewrite it with .html, the imagetool plugin recognises it again.
Amended code for anyone who's interested:
```
$(document).ready(function(){
// Product Zoom (jQuery)
$("#productZoom").click(function() {
$('#productImage').remove();
$('#productImageDiv').html('<img src="" id="productImage">');
// Set new image src
var imageSrc = $("#productZoom").attr("href");
$("#productImage").attr('src', imageSrc);
// Run the imagetool plugin on the image
$(function() {
$("#productImage").imagetool({
viewportWidth: 300,
viewportHeight: 300,
topX: 150,
topY: 150,
bottomX: 450,
bottomY: 450
});
});
return false;
});
// Alternative product photos (jQuery)
$(".altPhoto").click(function() {
$('#productImageDiv div.viewport').remove();
$('#productImage').remove();
// Set new image src
var altImageSrc = $(this).attr("href");
// Set new image Zoom link (from the ID... is that messy?)
var altZoomLink = $(this).attr("id");
$("#productZoom").attr('href', altZoomLink);
var img = new Image();
$(img).load(function () {
$(this).hide();
$('#productImageDiv').append(this);
$(this).fadeIn();
}).error(function () {
// notify the user that the image could not be loaded
}).attr({
src: altImageSrc,
id: "productImage"
});
return false;
});
});
```
|
You could try abstracting the **productZoom.click()** function to a named function, and then re-binding it after changing to an alternate image. Something like:
```
// Product Zoom (jQuery)
$(document).ready(function(){
$("#productZoom").click(bindZoom);
// Alternative product photos (jQuery)
$(".altPhoto").click(function() {
$('#productImageDiv div.viewport').remove();
$('#productImage').remove();
// Set new image src
var altImageSrc = $(this).attr("href");
$("#productZoom").attr('href', altImageSrc);
var img = new Image();
$(img).load(function () {
$(this).hide();
$('#productImageDiv').append(this);
$(this).fadeIn();
}).error(function () {
// notify the user that the image could not be loaded
}).attr({
src: altImageSrc,
id: "productImage"
});
$("#productZoom").click(bindZoom);
return false;
});
});
function bindZoom() {
// Set new image src
var imageSrc = $("#productZoom").attr("href");
$("#productImage").attr('src', imageSrc);
// Run the imagetool plugin on the image
$(function() {
$("#productImage").imagetool({
viewportWidth: 300,
viewportHeight: 300,
topX: 150,
topY: 150,
bottomX: 450,
bottomY: 450
});
});
return false;
}
```
Also, rolled both your ready() blocks into the same block.
|
jQuery - running a function on a new image
|
[
"",
"javascript",
"jquery",
"zooming",
""
] |
Even nowadays I often see underscores in Java variables and methods. An example is member variables (like "m\_count" or "\_count"). As far as I remember, using underscores in these cases is considered bad style by [Sun](https://en.wikipedia.org/wiki/Sun_Microsystems).
The only place they should be used is in constants (like in "public final static int IS\_OKAY = 1;"), because constants should be all upper case and not [camel case](https://en.wikipedia.org/wiki/Camel_case). Here, the underscore should make the code more readable.
Do you think using underscores in Java is bad style? If so (or not), why?
|
If you have no code using it now, I'd suggest continuing that. If your codebase uses it, continue that.
The biggest thing about coding style is *consistency*. If you have nothing to be consistent with, then the language vendor's recommendations are likely a good place to start.
|
```
sunDoesNotRecommendUnderscoresBecauseJavaVariableAndFunctionNamesTendToBeLongEnoughAsItIs();
as_others_have_said_consistency_is_the_important_thing_here_so_chose_whatever_you_think_is_more_readable();
```
|
Using underscores in Java variables and method names
|
[
"",
"java",
"naming-conventions",
""
] |
I'm using a third-party library that returns "XML" that is not valid, because it contains invalid characters, as well as non-declared entities. I need to use a Java XML parser to parse this XML, but it's choking.
Is there a generic way to sanitize this XML so that it becomes valid?
|
I think your options are something like:
* [Tag Soup](http://home.ccil.org/~cowan/XML/tagsoup/)
* [JTidy](http://jtidy.sourceforge.net/)
* Roll your own.
The first two are more heavyweight, given that they're designed to parse ill-formed HTML. If you know that the problems are due to encoding and entities, but the XML is otherwise well-formed, I'd suggest you roll your own:
* standardize an encoding to UTF-8
* use a standard encoder for the text between the > and < characters (text entities).
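As a rough illustration of the roll-your-own route, a sketch like the following might work (the `XmlSanitizer` class, the `sanitize` name, and the entity whitelist are my own assumptions, not a standard API): it escapes any bare `&` that doesn't start a predefined or numeric entity, and drops characters that are illegal in XML 1.0.

```java
import java.util.regex.Pattern;

public class XmlSanitizer {
    // Match '&' unless it begins a predefined or numeric XML entity.
    private static final Pattern BARE_AMP =
        Pattern.compile("&(?!(amp|lt|gt|apos|quot|#\\d+|#x[0-9a-fA-F]+);)");

    public static String sanitize(String s) {
        // Escape bare ampersands (this also neutralizes undeclared entities
        // like &nbsp; by turning them into literal text).
        String fixed = BARE_AMP.matcher(s).replaceAll("&amp;");
        // Strip characters that are not legal in XML 1.0 documents.
        StringBuilder out = new StringBuilder(fixed.length());
        for (int i = 0; i < fixed.length(); i++) {
            char c = fixed.charAt(i);
            boolean legal = c == 0x9 || c == 0xA || c == 0xD
                    || (c >= 0x20 && c <= 0xD7FF)
                    || (c >= 0xE000 && c <= 0xFFFD);
            if (legal) out.append(c);
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(sanitize("<a>Tom &nbsp; & Jerry\u0000</a>"));
    }
}
```

Note this treats undeclared entities as plain text rather than resolving them; if you need their actual replacement text, you'd have to supply a lookup table.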
|
Sounds like you need to figure out if there's a way to automatically clean the data yourself before handing it off to a parser. In what way are the characters invalid: not valid in the declared character set, or unescaped XML meta-characters such as '<'?
For non-declared entities, I once solved this by configuring a SAX parser with an error handler which basically ignored these errors. That might help you too. See [ErrorHandler](http://java.sun.com/javase/6/docs/api/org/xml/sax/ErrorHandler.html) API.
|
Sanitizing bad XML in Java
|
[
"",
"java",
"xml",
""
] |
I have a lot of Java source code that requires custom pre-processing. I'd like rid of it but that's not feasible right now so I'm stuck with it. Given that I have an unfortunate problem that shouldn't have existed in the first place, how do I solve it using maven?
(For the full story, I'm replacing a python-based build system with a maven one, so one improvement at a time please. Fixing the non-standard source code is harder, and will come later.)
Is it possible using any existing Maven plugins to actually alter the source files during compile time? (Obviously leaving the original, unprocessed code alone)
To be clear, by preprocessing I mean preprocessing in the same sense as antenna or a C compiler would preprocess the code, and by custom I mean that it's completely proprietary and looks nothing at all like C or antenna preprocessing.
|
This is something that is very doable and I've done something very similar in the past.
An example from a project of mine, where I used the antrun plug-in to execute an external program to process sources:
```
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-antrun-plugin</artifactId>
<executions>
<execution>
<id>process-sources</id>
<phase>process-sources</phase>
<configuration>
<tasks>
<!-- Put the code to run the program here -->
</tasks>
</configuration>
<goals>
<goal>run</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
```
Note the tag where I indicate the phase where this is run. Documentation for the lifecycles in Maven is [here](http://maven.apache.org/guides/introduction/introduction-to-the-lifecycle.html). Another option is to actually write your own Maven plug-in that does this. It's a little more complex, but is also doable. You will still configure it similarly to what I have documented here.
|
There is a Java preprocessor with support of MAVEN: [java-comment-preprocessor](https://github.com/raydac/java-comment-preprocessor)
|
Preprocessing source code as a part of a maven build
|
[
"",
"java",
"maven-2",
"build-process",
""
] |
I'm trying to use ASP.net health monitoring to log unhandled exceptions from an asmx web service. I've enabled health monitoring in the web.config but it's not logging anything. Does health monitoring work with asmx web services? I've googled around and seems other people have asked the same question but never got a definite answer.
|
Please do not post a response unless it answers the question being asked. Half answers to something other than the question confuses the reader. Health monitoring has nothing to do with either "really unhandled" ASP.NET exceptions or the Application\_Error event.
The answer is no. I have dug into this a bit, and for some reason exceptions raised within web methods never make it to the health monitoring providers. It looks like you have to catch and handle them yourself.
|
There's a difference between the normal interpretation of unhandled exceptions and those Tom is referring to. What Tom is referring to are truly unhandled exceptions that will kill your w3wp.exe process (before it has a chance to log anything), as I've blogged about here:
<http://www.improve.dk/blog/2008/04/07/spawning-threads-in-aspnet-can-be-dangerous>
Normal unhandled exceptions will result in an exception occurring and an error page being shown to the user/caller. They will not, however, crash your w3wp.exe process. These exceptions should be picked up by health monitoring.
|
Does ASP.net health monitoring work with asmx web services
|
[
"",
"c#",
"asp.net",
"asmx",
"health-monitoring",
""
] |
What's the best way of capturing an mp3 stream coming off of http and saving it to disk with python?
Thus far I've tried
```
target = open(target_path, "w")
conn = urllib.urlopen(stream_url)
while True:
target.write(conn.read(buf_size))
```
This gives me data but it's garbled or won't play in mp3 players.
|
If you're on Windows, you might accidentally be doing CRLF conversions, corrupting the binary data. Try opening `target` in binary mode:
```
target = open(target_path, "wb")
```
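A fuller sketch along those lines (the function name, the `max_bytes` cutoff, and the Python 3 `urllib.request` module are my assumptions; the question uses the Python 2 API) that opens the target in binary mode and bounds the download so the loop terminates:

```python
import urllib.request

def capture_stream(stream_url, target_path, max_bytes, buf_size=4096):
    """Copy up to max_bytes from stream_url into target_path, in binary mode."""
    written = 0
    # "wb" avoids CRLF translation on Windows, which corrupts mp3 data.
    with urllib.request.urlopen(stream_url) as conn, open(target_path, "wb") as target:
        while written < max_bytes:
            chunk = conn.read(min(buf_size, max_bytes - written))
            if not chunk:  # stream ended early
                break
            target.write(chunk)
            written += len(chunk)
    return written
```

Without a cutoff like `max_bytes`, a live radio stream would keep the original `while True` loop writing forever.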
|
The best way for this is:
```
urllib.urlretrieve(stream_url, target_path);
```
|
How do I capture an mp3 stream with python
|
[
"",
"python",
"streaming",
""
] |
Replaces Question: [Update multiple rows into SQL table](https://stackoverflow.com/questions/184096/update-multiple-rows-into-sql-table)
Here's a Code Snippet to update an exam results set.
DB structure is as given, but I can submit Stored Procedures for inclusion (Which are a pain to modify, so I save that until the end.)
The question: Is there a better way using SQL Server 2005 and .NET 2.0?
```
string update = @"UPDATE dbo.STUDENTAnswers
SET ANSWER=@answer
WHERE StudentID =@ID and QuestionNum =@qnum";
SqlCommand updateCommand = new SqlCommand( update, conn );
conn.Open();
string uid = Session["uid"].ToString();
for (int i= tempStart; i <= tempEnd; i++)
{
updateCommand.Parameters.Clear();
updateCommand.Parameters.AddWithValue("@ID",uid);
updateCommand.Parameters.AddWithValue("@qnum",i);
updateCommand.Parameters.AddWithValue("@answer", Request.Form[i.ToString()]);
try
{
updateCommand.ExecuteNonQuery();
}
catch { }
}
```
|
A few things stand out:
* You don't show where the SqlConnection is instantiated, so it's not clear that you're disposing it properly.
* You shouldn't be swallowing exceptions in the loop - better to handle them in a top level exception handler.
* You're instantiating new parameters on each iteration through the loop - you could just reuse the parameters.
Putting this together it could look something like the following (if you don't want to use a transaction, i.e. don't care if some but not all updates succeed):
```
using (SqlConnection conn = new SqlConnection(connectionString))
{
conn.Open();
using (SqlCommand updateCommand = new SqlCommand(update, conn))
{
string uid = Session["uid"].ToString();
updateCommand.Parameters.AddWithValue("@ID", uid);
updateCommand.Parameters.Add("@qnum", System.Data.SqlDbType.Int);
updateCommand.Parameters.Add("@answer", System.Data.SqlDbType.VarChar);
for (int i = tempStart; i <= tempEnd; i++)
{
    updateCommand.Parameters["@qnum"].Value = i;
    updateCommand.Parameters["@answer"].Value = Request.Form[i.ToString()];
    updateCommand.ExecuteNonQuery();
}
}
}
```
Or to use a transaction to ensure all or nothing:
```
using (SqlConnection conn = new SqlConnection(connectionString))
{
conn.Open();
using (SqlTransaction transaction = conn.BeginTransaction())
{
using (SqlCommand updateCommand = new SqlCommand(update, conn, transaction))
{
string uid = Session["uid"].ToString();
updateCommand.Parameters.AddWithValue("@ID", uid);
updateCommand.Parameters.Add("@qnum", System.Data.SqlDbType.Int);
updateCommand.Parameters.Add("@answer", System.Data.SqlDbType.VarChar);
for (int i = tempStart; i <= tempEnd; i++)
{
    updateCommand.Parameters["@qnum"].Value = i;
    updateCommand.Parameters["@answer"].Value = Request.Form[i.ToString()];
    updateCommand.ExecuteNonQuery();
}
transaction.Commit();
}
} // Transaction will be disposed and rolled back here if an exception is thrown
}
```
Finally, another problem is that you are mixing UI code (e.g. Request.Form) with data access code. It would be more modular and testable to separate these - e.g. by splitting your application into UI, Business Logic and Data Access layers.
|
For 30 updates I think you're on the right track, although the comment about the need for a using around `updateCommand` is correct.
We've found the best performing way to do bulk updates (>100 rows) is via the `SqlBulkCopy` class to a temporary table followed by a stored procedure call to populate the live table.
|
Multiple DB Updates:
|
[
"",
"c#",
".net",
"sql-server",
""
] |
Which Eclipse package should I choose for Python development with [PyDev](http://www.pydev.org/)?
Nothing on the Eclipse homepage tells me what to choose, and the PyDev documentation assumes I already have Eclipse installed. Does it matter which Eclipse package I choose?
|
If you just plan on doing Python dev, I'd just go with [Platform Runtime Binary](http://download.eclipse.org/eclipse/downloads/).
After that, I'd follow the instructions <http://pydev.org/download.html> and <http://pydev.org/manual_101_root.html> to install PyDev.
I use the same setup for Python development. I also have the RadRails plugin for Ruby on Rails development.
|
PyDev was acquired by [Aptana](http://www.aptana.com), so you might want to check that one out as well.
|
Which Eclipse package should I download for PyDev?
|
[
"",
"python",
"eclipse",
"package",
"pydev",
""
] |
Sometimes you need to skip execution of part of a method under certain non-critical error conditions. You can use *exceptions* for that, but exceptions generally are not recommended in normal application logic, only for abnormal situations.
So I do a trick like this:
```
do
{
bool isGood = true;
.... some code
if(!isGood)
break;
.... some more code
if(!isGood)
break;
.... some more code
} while(false);
..... some other code, which has to be executed.
```
I use a "fake" loop which will run once, and I can abort it by *break* or *continue*.
Some of my colleagues did not like that, and they called it "bad practice". I personally find that approach pretty slick. But what do you think?
|
Bad practice, it depends.
What I see in this code is a very creative way to write "goto" with less sulphur-smelling keywords.
There are multiple alternatives to this code, which can or can not be better, depending on the situation.
## Your do/while solution
Your solution is interesting if you have a lot of code but only evaluate the "exit" condition at a limited number of points:
```
do
{
bool isError = false ;
/* some code, perhaps setting isError to true */
if(isError) break ;
/* some code, perhaps setting isError to true */
if(isError) break ;
/* some code, perhaps setting isError to true */
}
while(false) ;
// some other code
```
The problem is that you can't easily use your "if(isError) break ;" inside a nested loop, because it will only exit the inner loop, not your do/while block.
And of course, if the failure is inside another function, the function must return some kind of error code, and your code must not forget to interpret the error code correctly.
I won't discuss alternatives using ifs or even nested ifs because, after some thought, I find them inferior to your own solution for this problem.
## Calling a goto a... goto
Perhaps you should put clearly on the table the fact you're using a goto, and document the reasons you choose this solution over another.
At least, it will show something could be wrong with the code, and prompt reviewers to validate or invalidate your solution.
You must still open a block, and instead of breaking, use a goto.
```
{
// etc.
if(/*some failure condition*/) goto MY_EXIT ;
// etc.
while(/* etc.*/)
{
// etc.
for(/* etc.*/)
{
// etc.
if(/*some failure condition*/) goto MY_EXIT ;
// etc.
}
// etc.
if(/*some failure condition*/) goto MY_EXIT ;
// etc.
}
// etc.
}
MY_EXIT:
// some other code
```
This way, as you exit the block through the goto, there is no way for you to bypass some object constructor with the goto (which is forbidden by C++).
This approach solves the problem of exiting from nested loops (using goto to exit nested loops is an example given by B. Stroustrup as a valid use of goto), but it won't solve the fact that some function calls could fail and be ignored (because someone failed to test their return code correctly, if any).
Of course, now, you can exit your process from multiple points, from multiple loop nesting depth, so if it is a problem...
## try/catch
If the code is not supposed to fail (so, failure is exceptional), or even if the code structure can fail, but is overly complex to exit, then the following approach could be clearer:
```
try
{
// All your code
// You can throw the moment something fails
// Note that you can call functions, use recursion,
// have multiple loops, etc. It won't change
// anything: If you want to exit the process,
// then throw a MyExitProcessException exception.
if(/* etc. */)
{
// etc.
while(/* etc.*/)
{
// etc.
for(/* etc.*/)
{
// etc.
if(/*some failure condition*/) throw MyExitProcessException() ;
// etc.
}
// etc.
callSomeFunction() ;
// the function will throw if the condition is met
// so no need to test a return code
// etc.
}
// etc.
}
// etc.
}
catch(const MyExitProcessException & e)
{
// To avoid catching other exceptions, you should
// define a "MyExitProcessException" exception
}
// some other code
```
If some condition in the code above, or inside some functions called by the code above, is not met, then throw an exception.
This is somewhat weightier than your do/while solution, but has the same advantages, and can even abort the processing from inside loops or from inside called functions.
## Discussion
Your need seems to come from the fact you can have a complex process to execute (code, function calls, loops, etc.), but you want to interrupt it on some condition (probably either failure, or because it succeeded sooner than expected). If you can rewrite it in a different way, you should do it. But perhaps there is no other way.
Let's assume that.
**If you can code it with a try/catch, do it**: To interrupt a complex piece of code, throwing an exception is the right solution (the fact you can add failure/success info inside your exception object should not be underestimated). You will have a clearer code after that.
Now, if you're in a speed bottleneck, resolving your problem with thrown exceptions as an exit is not the fastest way to do it.
No one can deny your solution is a glorified goto. There won't be goto-spaghetti code, because the do/while won't let you do that, but it is still a semantic goto. This can be the reason some find this code "bad": they smell the goto without finding its keyword clearly.
In this case (and in this performance-critical, profile-verified case only), your solution seems OK, and better than the alternative using ifs, but of lesser quality (IMHO) than the goto solution which, at least, doesn't hide itself behind a false loop.
## Conclusion
As far as I am concerned, I find your solution creative, but I would stick to the thrown exception solution.
So, in order of preference:
1. Use try/catch
2. Use goto
3. Use your do/while loop
4. Use ifs/nested ifs
|
You're pretty much just disguising a "goto" as a fake loop. Whether you like gotos or not, you'd be just as far ahead using a real undisguised goto.
Personally, I'd just write it as
```
bool isGood = true;
.... some code
if(isGood)
{
.... some more code
}
if(isGood)
{
.... some more code
}
```
|
Do you consider this technique "BAD"?
|
[
"",
"c++",
"coding-style",
""
] |
I know you can use C++ keyword 'explicit' for constructors of classes to prevent an automatic conversion of type. Can you use this same command to prevent the conversion of parameters for a class method?
I have two class members, one which takes a bool as a param, the other an unsigned int. When I called the function with an int, the compiler converted the param to a bool and called the wrong method. I know eventually I'll replace the bool, but for now don't want to break the other routines as this new routine is developed.
|
No, you can't use explicit, but you can use a templated function to catch the incorrect parameter types.
With **C++11**, you can declare the templated function as `delete`d. Here is a simple example:
```
#include <iostream>
struct Thing {
void Foo(int value) {
std::cout << "Foo: " << value << std::endl;
}
template <typename T>
void Foo(T value) = delete;
};
```
This gives the following error message if you try to call `Thing::Foo` with a `size_t` parameter:
```
error: use of deleted function
‘void Thing::Foo(T) [with T = long unsigned int]’
```
---
In **pre-C++11** code, it can be accomplished using an undefined private function instead.
```
class ClassThatOnlyTakesBoolsAndUIntsAsArguments
{
public:
// Assume definitions for these exist elsewhere
void Method(bool arg1);
void Method(unsigned int arg1);
// Below just an example showing how to do the same thing with more arguments
void MethodWithMoreParms(bool arg1, SomeType& arg2);
void MethodWithMoreParms(unsigned int arg1, SomeType& arg2);
private:
// You can leave these undefined
template<typename T>
void Method(T arg1);
// Below just an example showing how to do the same thing with more arguments
template<typename T>
void MethodWithMoreParms(T arg1, SomeType& arg2);
};
```
The disadvantage is that the code and the error message are less clear in this case, so the C++11 option should be selected whenever available.
Repeat this pattern for every method that takes the `bool` or `unsigned int`. Do not provide an implementation for the templatized version of the method.
This will force the user to always explicitly call the bool or unsigned int version.
Any attempt to call `Method` with a type other than `bool` or `unsigned int` will fail to compile because the member is private, subject to the standard exceptions to visibility rules, of course (friend, internal calls, etc.). If something that does have access calls the private method, you will get a linker error.
|
No. `explicit` prevents automatic conversion when constructing specific classes, irrespective of context. And of course you can't do it for built-in types.
|
Can you use keyword explicit to prevent automatic conversion of method parameters?
|
[
"",
"c++",
"implicit-conversion",
"explicit",
""
] |
When I create a zip Archive via `java.util.zip.*`, is there a way to split the resulting archive in multiple volumes?
Let's say my overall archive has a `filesize` of `24 MB` and I want to split it into 3 files on a limit of 10 MB per file.
Is there a zip API which has this feature? Or any other nice ways to achieve this?
Thanks
Thollsten
|
Check: <http://saloon.javaranch.com/cgi-bin/ubb/ultimatebb.cgi?ubb=get_topic&f=38&t=004618>
*I am not aware of any public API that will help you do that.
(Although if you do not want to do it programatically, there are utilities like WinSplitter that will do it)*
*I have not tried it but, every ZipEntry while using ZippedInput/OutputStream has a compressed size. You may get a rough estimate of the size of the zipped file while creating it. If you need 2MB of zipped files, then you can stop writing to a file after the cumulative size of entries become 1.9MB, taking .1MB for Manifest file and other zip file specific elements.*
*So, in a nutshell, you can write a wrapper over the ZippedInputStream as follows:*
```
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;
public class ChunkedZippedOutputStream {
private ZipOutputStream zipOutputStream;
private final String path;
private final String name;
private long currentSize;
private int currentChunkIndex;
private final long MAX_FILE_SIZE = 16000000; // Whatever size you want
private final String PART_POSTFIX = ".part.";
private final String FILE_EXTENSION = ".zip";
public ChunkedZippedOutputStream(String path, String name) throws FileNotFoundException {
this.path = path;
this.name = name;
constructNewStream();
}
public void addEntry(ZipEntry entry) throws IOException {
    long entrySize = entry.getCompressedSize();
    if ((currentSize + entrySize) > MAX_FILE_SIZE) {
        closeStream();
        constructNewStream();
    }
    // Add the entry to whichever stream is now current, so entries that
    // trigger a rollover are not silently dropped.
    currentSize += entrySize;
    zipOutputStream.putNextEntry(entry);
}
private void closeStream() throws IOException {
zipOutputStream.close();
}
private void constructNewStream() throws FileNotFoundException {
zipOutputStream = new ZipOutputStream(new FileOutputStream(new File(path, constructCurrentPartName())));
currentChunkIndex++;
currentSize = 0;
}
private String constructCurrentPartName() {
// This will give names in the form of <file_name>.part.0.zip, <file_name>.part.1.zip, etc.
return name + PART_POSTFIX + currentChunkIndex + FILE_EXTENSION;
}
}
```
*The above program is just a hint of the approach and not a final solution by any means*.
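In the same hinted spirit, here is a self-contained variant (all names invented for illustration) that actually writes entry data and rolls over to a new part file when the running size estimate passes the limit:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class ZipSplitDemo {
    // Write entries into successive <name>.part.N.zip files, starting a new
    // part whenever the current part's raw size estimate passes maxPart.
    // Assumes at least one entry (an empty ZipOutputStream throws on close).
    public static int split(File dir, String name, byte[][] entries, long maxPart)
            throws IOException {
        int part = 0;
        long size = 0;
        ZipOutputStream out = open(dir, name, part);
        for (int i = 0; i < entries.length; i++) {
            if (size > 0 && size + entries[i].length > maxPart) {
                out.close();       // finish this part and start the next one
                part++;
                size = 0;
                out = open(dir, name, part);
            }
            out.putNextEntry(new ZipEntry("entry-" + i));
            out.write(entries[i]);
            out.closeEntry();
            size += entries[i].length; // uncompressed estimate, as in the answer above
        }
        out.close();
        return part + 1;           // number of part files written
    }

    private static ZipOutputStream open(File dir, String name, int part)
            throws IOException {
        return new ZipOutputStream(
            new FileOutputStream(new File(dir, name + ".part." + part + ".zip")));
    }

    public static void main(String[] args) throws IOException {
        File dir = java.nio.file.Files.createTempDirectory("zipsplit-demo").toFile();
        byte[][] entries = { new byte[6000], new byte[6000], new byte[6000] };
        System.out.println(split(dir, "archive", entries, 10000) + " parts written");
    }
}
```

Note the parts produced this way are independent zip files, not a spanned pkzip-style multi-volume archive; each part can be opened on its own.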
|
If the goal is to have the output be compatible with pkzip and winzip, I'm not aware of any open source libraries that do this. We had a similar requirement for one of our apps, and I wound up writing our own implementation (compatible with the zip standard). If I recall, the hardest thing for us was that we had to generate the individual files on the fly (the way most zip utilities work is to create the big zip file, then go back and split it later - that's a lot easier to implement). It took about a day to write and two days to debug.
The zip standard explains what the file format has to look like. If you aren't afraid of rolling up your sleeves a bit, this is definitely doable. You do have to implement a zip file generator yourself, but you can use Java's Deflator class to generate the segment streams for the compressed data. You'll have to generate the file and section headers yourself, but they are just bytes - nothing too hard once you dive in.
Here's the [zip specification](http://www.pkware.com/documents/casestudies/APPNOTE.TXT) - section K has the info you are looking for specifically, but you'll need to read A, B, C and F as well. If you are dealing with really big files (We were), you'll have to get into the Zip64 stuff as well - but for 24 MB, you are fine.
If you want to dive in and try it - if you run into questions, post back and I'll see if I can provide some pointers.
|
How to split a huge zip file into multiple volumes?
|
[
"",
"java",
"zip",
""
] |
Part of my latest webapp needs to write to file a fair amount as part of its logging. One problem I've noticed is that if there are a few concurrent users, the writes *can* overwrite each other (instead of appending to file). I assume this is because of the destination file can be open in a number of places at the same time.
`flock(...)` is usually superb but it doesn't appear to work on NFS... Which is a huge problem for me as the production server uses a NFS array.
The closest thing I've seen to an actual solution involves trying to create a lock directory and waiting until it can be created. To say this lacks elegance is the understatement of the year, possibly the decade.
Any better ideas?
Edit: I should add that I don't have root on the server and doing the storage in another way isn't really feasible any time soon, not least within my deadline.
|
Another dirty hack would be to `flock()` a "local" file, and only open / write to the NFS file if you hold the lock on the local file.
Edit: from the `flock()` page:
> flock() will not work on NFS and many
> other networked file systems. Check
> your operating system documentation
> for more details.
Edit 2:
Of course there's always using the database to synchronise access (I'm assuming your app uses a db). This would be quite a performance hit if you're doing a lot of logging, though.
If it's just for logging, do you actually need a centralised log file? Could you log locally (and even combine the logs when they rotate at the end of the day if needed)?
|
Directory operations are *NOT* atomic under NFSv2 and NFSv3
(please refer to the book 'NFS Illustrated' by Brent Callaghan,
ISBN 0-201-32570-5; Brent is a NFS-veteran at Sun).
NFSv2 has two atomic operations:
* symlink
* rename
With NFSv3 the create call is also atomic.
Knowing this, you can implement spin-locks for files and
directories (in shell, not PHP):
lock current dir:
```
while ! ln -s . lock; do :; done
```
lock a file:
```
while ! ln -s ${f} ${f}.lock; do :; done
```
unlock (assumption, the running process really acquired the lock):
unlock current dir:
```
mv lock deleteme && rm deleteme
```
unlock a file:
```
mv ${f}.lock ${f}.deleteme && rm ${f}.deleteme
```
Remove is also not atomic, therefore first the rename (which
is atomic) and then the remove.
For the symlink and rename calls, both filenames have to reside on the
same filesystem. My proposal: use only simple filenames and put
file and lock into the same directory.
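Wrapped as functions, the idiom above might look like this in shell (the helper names and the sleep interval are my own choices):

```shell
#!/bin/sh
# Spin until we can create the symlink; symlink creation is atomic over NFS.
lockfile() {
    while ! ln -s "$1" "$1.lock" 2>/dev/null; do
        sleep 0.1
    done
}

# Rename first (atomic), then remove, since remove alone is not atomic.
unlockfile() {
    mv "$1.lock" "$1.deleteme" && rm "$1.deleteme"
}

f=/tmp/locking-demo.log
: > "$f"
lockfile "$f"
echo "safely appended" >> "$f"
unlockfile "$f"
```

A crashed process leaves the `.lock` symlink behind, so in practice you would also want a staleness check (e.g. comparing the link's mtime against a timeout) before spinning forever.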
|
Locking NFS files in PHP
|
[
"",
"php",
"concurrency",
""
] |
I display data in an HTML table, with a drop-down box listing venues. Each volunteer will be assigned a venue. I envision going down through the HTML table and assigning each volunteer a venue. The drop-down box contains all the possible venues they can be assigned to.
```
<select>
<option value="1">Setup</option>
<option value="2">Check in</option>
etc...
</select>
```
Then once I am done assigning each volunteer, I want to hit submit and it will assign the appropriate value for each volunteer.
How would I go about doing that? I know how to do it one at a time, but not for all of them at once.
|
Change the name of each select, in a way it includes a volunteer id:
```
<select name="venues[1]">
<option value="1">Setup</option>
etc...
</select>
<select name="venues[2]">
<option value="1">Setup</option>
etc...
</select>
<select name="venues[3]">
<option value="1">Setup</option>
etc...
</select>
```
After submit there will be an array in $\_POST named venues, with indices 1, 2, 3 (being volunteer IDs) and values being the selected venue for each volunteer.
Now you can iterate on `$_POST['venues']` array and save each value:
```
foreach ($_POST['venues'] as $volunteer_id => $venue) {
save_venue_for_volunteer($volunteer_id, $venue);
}
```
|
Here's a very rough example of how you might handle this. Note: your MySQL tables (assuming MySQL) must be of a type that supports transactions (InnoDB does, MyISAM does not).
```
<?php
if ( isset( $_POST['venueChoice'] ) )
{
// Create a transaction
mysql_query( 'BEGIN' );
$failure = false;
// Loop over the selections
foreach ( $_POST['venueChoice'] as $employeeId => $venueId )
{
$sql = sprintf(
'UPDATE table SET columns=%d WHERE id=%d'
, intval( mysql_real_escape_string( $venueId ) )
, intval( mysql_real_escape_string( $employeeId ) )
);
if ( ! @mysql_query( $sql ) )
{
$failure = true;
break;
}
}
// Close out the transaction
if ( $failure )
{
mysql_query( 'ROLLBACK' );
// Display and error or something
} else {
mysql_query( 'COMMIT' );
// Success!
}
}
?>
<form>
<select name="venueChoice[1]">
<option value="1">Setup</option>
<option value="2">Check in</option>
</select>
<select name="venueChoice[2]">
<option value="1">Setup</option>
<option value="2">Check in</option>
</select>
<select name="venueChoice[3]">
<option value="1">Setup</option>
<option value="2">Check in</option>
</select>
</form>
```
You could also modify this to keep track of each employee's current venue choice, compare it to the POST data, and then only execute UPDATE queries for those that were actually changed.
|
enter multiple variables into table, assigning to each appropriate record
|
[
"",
"php",
"mysql",
""
] |
I would like to know what is the difference between initializing a static member inline as in:
```
class Foo
{
private static Bar bar_ = new Bar();
}
```
or initializing it inside the static constructor as in:
```
class Foo
{
static Foo()
{
bar_ = new Bar();
}
private static Bar bar_;
}
```
|
If you have a static constructor in your type, it alters type initialization due to the [beforefieldinit](http://pobox.com/~skeet/csharp/beforefieldinit.html) flag no longer being applied.
It also affects initialization order - variable initializers are all executed before the static constructor.
That's about it as far as I know though.
|
In this case I don't believe there is any practical difference. If you need some logic when initializing the static variables - for example, choosing different concrete types of an interface under different conditions - you would use the static constructor. Otherwise, inline initialization is fine in my book.
```
class Foo
{
private static IBar _bar;
static Foo()
{
if(something)
{
_bar = new BarA();
}
else
{
_bar = new BarB();
}
}
}
```
|
Difference initializing static variable inline or in static constructor in C#
|
[
"",
"c#",
".net",
"static",
"constructor",
"initialization",
""
] |
I currently capture MiniDumps of unhandled exceptions using `SetUnhandledExceptionFilter`; however, at times I am getting "R6025: pure virtual function call".
I understand how a pure virtual function call happens; I am just wondering if it is possible to capture them so I can create a MiniDump at that point.
|
If you want to catch all crashes you have to do more than just: SetUnhandledExceptionFilter
I would also set the abort handler, the purecall handler, unexpected, terminate, and invalid parameter handler.
```
#include <signal.h>
inline void terminator()
{
    // deliberately crash so the unhandled exception filter produces a dump
    int* z = 0; *z = 13;
}
inline void signal_handler(int)
{
    terminator();
}
inline void __cdecl invalid_parameter_handler(const wchar_t *, const wchar_t *, const wchar_t *, unsigned int, uintptr_t)
{
    terminator();
}
```
And in your main put this:
```
signal(SIGABRT, signal_handler);
_set_abort_behavior(0, _WRITE_ABORT_MSG|_CALL_REPORTFAULT);
set_terminate( &terminator );
set_unexpected( &terminator );
_set_purecall_handler( &terminator );
_set_invalid_parameter_handler( &invalid_parameter_handler );
```
The above will send all crashes to your unhandled exception handler.
|
See [this answer here](https://stackoverflow.com/questions/99552/where-do-pure-virtual-function-call-crashes-come-from#100555) to the question [where do “pure virtual function call” crashes come from?](https://stackoverflow.com/questions/99552/where-do-pure-virtual-function-call-crashes-come-from).
> To help with debugging these kinds of
> problems you can, in various versions
> of MSVC, replace the runtime library's
> purecall handler. You do this by
> providing your own function with this
> signature:
```
int __cdecl _purecall(void)
```
> and
> linking it before you link the runtime
> library. This gives YOU control of
> what happens when a purecall is
> detected. Once you have control you
> can do something more useful than the
> standard handler. I have a handler
> that can provide a stack trace of
> where the purecall happened; see here:
> <http://www.lenholgate.com/archives/000623.html>
> for more details.
>
> (Note you can also call
> `_set_purecall_handler()` to install your handler in some versions of
> MSVC).
So, in your purecall handler, make your minidump.
|
Capturing R6025 pure virtual call
|
[
"",
"c++",
"windows",
"minidump",
"dbghelp",
""
] |
Is it possible to use a variable from one page in a piece of code on another? e.g. submit a form on one page and on the second page use a PHP script to add the data from the form to a MySQL table
Thanks for all the help
|
This is what the GET and POST superglobals (`$_GET` and `$_POST`) are for.
<http://www.tizag.com/phpT/postget.php>
|
Your best bet would be to use PHP's session functions. That way if the user navigates away from your page and then comes back, the session variables will still be there (provided the session hasn't expired). You can get more information here -- [PHP Sessions](http://www.php.net/manual/en/ref.session.php). Basically all you have to do is call
```
session_start();
```
at the top of each php page (before anything is outputted to the browser) where you want to have access to the session variables. You can then set/retrieve a variable using
```
// set
$_SESSION['varname'] = "something";
// retrieve
$somevar = $_SESSION['varname'];
```
|
Using PHP variables on multiple pages
|
[
"",
"php",
"variables",
""
] |
At a new job I started, we have both a Java application that handles most of the heavy lifting in the core business logic, and we also have a Rails application that of course handles the web interface to this server. Both of these access the same database.
Up until now, most of the focus has been on the Java application, and as such, there are no migrations in the Rails project. The sql to update the shared database is managed in a file like changes.sql.
As you can imagine, this makes it somewhat difficult to develop.
My initial thought was to combine the codebases for the Java project and the Rails application, because there is a dependency there, and to manage that SQL file in the source. However, I thought I'd ask here to see if anyone else had tackled this issue with some degree of success.
|
One approach is to use the rails migration tools, generate the DDL files for the database and use Hibernate to update the Java objects that relate to specific database entities. You don't really say how you manage database changes on the Java side or whether you use an ORM, but you can certainly synchronize the two with a little work.
Or you can go the other way around and let the Java definitions control changes on the Rails side.
I think the key to doing this successfully is to select one of the two platforms as your "primary database modeller" and develop the process to migrate that model to the other platform. Attempting to allow changes from both will only cause headaches.
|
We have a similar project structure: shared database with both java and rails applications as clients. I advocated and got buy-in to use the rails migration mechanism for handling database changes. It takes a bit of rails advocacy, and some willingness to help but the java team is also writing their own migrations.
We have some cases where we use stored procedures and database specific column types, so we changed the rails environment.rb to use sql for creating the test database.
```
# Use SQL instead of Active Record's schema dumper when creating the test database.
# This is necessary if your schema can't be completely dumped by the schema dumper,
# like if you have constraints or database-specific column types
config.active_record.schema_format = :sql
```
On the plus side, managing the SQL with migrations makes Rails testing and setup clean for you. The downside is that some of the migration files are just not that pretty (e.g., you can't use the migration DSL to generate stored procedures, so you end up with `execute %{blah}` blocks in your migrations).
Just remember to keep communication lines open between the teams. I like the fact that "cap production deploy:migrations" makes updating the production database dead simple.
|
Managing database changes between a Rails and Java project
|
[
"",
"java",
"ruby-on-rails",
"ruby",
"database",
"migration",
""
] |
## Problem
I have timestamped data, which I need to search based on the timestamp in order to get the one existing timestamp which matches my input timestamp the closest.
Preferably this should be solved with the STL. boost::\* or stl::tr1::\* (from VS9 with Featurepack) are also possible.
Example of timestamped data:
```
struct STimestampedData
{
time_t m_timestamp; // Sorting criterion
CData m_data; // Payload
}
```
## Approach with `stl::vector`, `sort()` and `equal_range()`
Since a `map` or `set` only allows me to find exact matches, I don't get any further using one of these.
So now I have a `vector` to which I append data as it is coming in. Before searching I use `<algorithm>`'s `sort()` and supply it with a custom comparison function.
After that I use `<algorithm>`'s `equal_range()` to find the two neighbors of a specified value `x`.
From these two values I check which one is closest to `x` and then I have my best match.
While this is not too complex, I wonder if there are more elegant solutions to this.
Maybe the STL already has an algorithm which does exactly that so I'm not re-inventing something here?
## Update: Linear vs. binary search
I forgot to mention that I have quite a lot of data to handle so I don't want to have to search linearly.
The reason I am sorting a vector with `sort()` is because it has random access iterators which is not the case with a `map`. Using a `map` would not allow `equal_range()` to do a search with twice logarithmic complexity.
Am I correct?
|
I would use set::lower\_bound to find the matching or greater value, then decrement the iterator to check the next lower value. You should use std::set rather than std::map since your key is embedded in the object - you'll need to provide a functor that compares the timestamp members.
```
struct TimestampCompare
{
bool operator()(const STimestampedData & left, const STimestampedData & right) const
{
return left.m_timestamp < right.m_timestamp;
}
};
typedef std::set<STimestampedData,TimestampCompare> TimestampedDataSet;
TimestampedDataSet::iterator FindClosest(TimestampedDataSet & data, STimestampedData & searchkey)
{
if (data.empty())
return data.end();
TimestampedDataSet::iterator upper = data.lower_bound(searchkey);
if (upper == data.end())
return --upper;
if (upper == data.begin() || upper->m_timestamp == searchkey.m_timestamp)
return upper;
TimestampedDataSet::iterator lower = upper;
--lower;
if ((searchkey.m_timestamp - lower->m_timestamp) < (upper->m_timestamp - searchkey.m_timestamp))
return lower;
return upper;
}
```
|
I would use equal\_range too for such a thing.
If you are using sort() every time on your vector it might be better to use a map (or set), as that's always sorted automatically, and use the member equal\_range
But that depends on the number of inserts / queries / amount of data. (Although for something that always needs to be sorted when I query, a map would be my first choice, and I'd only use a vector if there was a very good reason.)
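For comparison, the lower-bound-then-check-neighbour logic can be sketched in Python, where `bisect_left` plays the role of `lower_bound` (an illustration of the algorithm, not part of either answer):

```python
import bisect

def find_closest(timestamps, key):
    """Return the value in the sorted list that is closest to key."""
    if not timestamps:
        raise ValueError("empty list")
    # bisect_left, like lower_bound, finds the first element >= key
    i = bisect.bisect_left(timestamps, key)
    if i == 0:
        return timestamps[0]
    if i == len(timestamps):
        return timestamps[-1]
    lower, upper = timestamps[i - 1], timestamps[i]
    # pick whichever neighbour is nearer to key
    return lower if key - lower < upper - key else upper
```

The two edge checks mirror the `begin()`/`end()` tests in the C++ version.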
|
Finding "best matching key" for a given key in a sorted STL container
|
[
"",
"c++",
"sorting",
"stl",
"containers",
""
] |
I have quickly read (and will read with more care soon) Scott Allen's article on using a provider other than the default SQL Express or SQL Server database for the "[Membership and Role Providers](http://weblogs.asp.net/scottgu/archive/2006/02/24/ASP.NET-2.0-Membership_2C00_-Roles_2C00_-Forms-Authentication_2C00_-and-Security-Resources-.aspx)" in ASP.NET.
We will soon need to open a part of our project to some clients via the web, and I thought about using the "[Membership and Role Providers](http://weblogs.asp.net/scottgu/archive/2006/02/24/ASP.NET-2.0-Membership_2C00_-Roles_2C00_-Forms-Authentication_2C00_-and-Security-Resources-.aspx)", but our database is PostgreSQL.
Does anyone have experience with the "[Membership and Role Providers](http://weblogs.asp.net/scottgu/archive/2006/02/24/ASP.NET-2.0-Membership_2C00_-Roles_2C00_-Forms-Authentication_2C00_-and-Security-Resources-.aspx)" and another database type (not SQL Server)? Is it worth it, or is it a pain?
|
I gave it a go on MySQL and found it to be impossible. That could be attributed to my relatively low skill, and the fact that I did it about 5 months after the 2.0 release. Is it too much to write your own simple security system?
There is an alternate membership provider at: <http://www.codeplex.com/SnitzMember>
|
It's actually fairly simple: basically just an interface with a handful of methods to implement, and most are fairly straightforward.
I implemented one, in MSSQL, but using an existing membership database from another application. You can find it at <http://www.codeplex.com/SnitzMember>
|
Membership and Role Providers in ASP.NET 2.0 with non SQL Server provider?
|
[
"",
"c#",
".net",
"asp.net",
".net-2.0",
""
] |
I have a list of variable names, like this:
```
['foo', 'bar', 'baz']
```
(I originally asked how I convert a list of variables. See Greg Hewgill's answer below.)
How do I convert this to a dictionary where the keys are the variable names (as strings) and the values are the values of the variables?
```
{'foo': foo, 'bar': bar, 'baz': baz}
```
Now that I'm re-asking the question, I came up with:
```
d = {}
for name in list_of_variable_names:
d[name] = eval(name)
```
Can that be improved upon?
**Update**, responding to the question (in a comment) of why I'd want to do this:
I often find myself using the % operator on strings with a dictionary of names and values to interpolate. Often the names in the string are just the names of local variables. So (with the answer below) I can do something like this:
```
message = '''Name: %(name)s
ZIP: %(zip)s
Dear %(name)s,
...''' % dict((x, locals()[x]) for x in ['name', 'zip'])
```
|
Forget filtering `locals()`! The dictionary you give to the formatting string is allowed to contain unused keys:
```
>>> name = 'foo'
>>> zip = 123
>>> unused = 'whoops!'
>>> locals()
{'name': 'foo', 'zip': 123, ... 'unused': 'whoops!', ...}
>>> '%(name)s %(zip)i' % locals()
'foo 123'
```
With the new [f-string feature](https://www.python.org/dev/peps/pep-0498/) in Python 3.6, using `locals()` is no longer necessary:
```
>>> name = 'foo'
>>> zip = 123
>>> unused = 'whoops!'
>>> f'{zip: >5} {name.upper()}'
' 123 FOO'
```
|
You can use list or generator comprehensions to build a list of key, value tuples used to directly instantiate a dict. The best way is below:
```
dict((name, eval(name)) for name in list_of_variable_names)
```
In addition, if you know, for example, that the variables exist in the local symbol table you can save yourself from the dangerous eval by looking the variable directly from locals:
```
dict((name, locals()[name]) for name in list_of_variable_names)
```
After your final update, I think the answer below is really what you want. If you're just using this for string expansion with strings that you control, just pass locals() directly to the string expansion and it will cherry-pick out the desired values
If, however, these strings could ever come from an outside source (e.g. translation files), than it's a good idea to filter locals()
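If the template might come from an outside source, the filtering can stay tiny: build the dictionary from an explicit whitelist of names. A minimal sketch (`safe_format` is a made-up helper name):

```python
def safe_format(template, namespace, allowed):
    """Expand %(name)s placeholders using only whitelisted names."""
    safe = dict((k, namespace[k]) for k in allowed if k in namespace)
    return template % safe

name, zip_code = 'foo', 123
secret = 'do not leak'  # never reaches the template, even if referenced
message = safe_format('%(name)s / %(zip_code)s', locals(), ['name', 'zip_code'])
```

A template that names a non-whitelisted variable simply raises `KeyError` instead of leaking it.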
|
Given a list of variable names in Python, how do I a create a dictionary with the variable names as keys (to the variables' values)?
|
[
"",
"python",
"list",
"dictionary",
""
] |
I need to validate the email address of my users. Unfortunately, making a validator that [conforms to standards](https://www.rfc-editor.org/rfc/rfc2822#section-3.4.1) is hard.
[Here](http://www.ex-parrot.com/%7Epdw/Mail-RFC822-Address.html) is an example of a regex expression that tries to conform to the standard.
Is there a PHP library (preferably, open-source) that validates an email address?
|
I found a library in google code: <http://code.google.com/p/php-email-address-validation/>
Are there any others?
|
Have you looked at [PHP's filter\_ functions](https://www.php.net/manual/en/function.filter-var.php)? They're not perfect, but they do a fairly decent job in my experience.
Example usage (returns boolean):
`filter_var($someEmail, FILTER_VALIDATE_EMAIL);`
|
Is there a PHP library for email address validation?
|
[
"",
"php",
"email",
"open-source",
"email-validation",
""
] |
I want to use SharePoint with Python (CPython).
Has anyone tried this before?
|
I suspect that since this question was answered the SUDS library has been updated to take care of the required authentication itself. After jumping through various hoops, I found this to do the trick:
```
from suds import WebFault
from suds.client import *
from suds.transport.https import WindowsHttpAuthenticated
user = r'SERVER\user'
password = "yourpassword"
url = "http://sharepointserver/_vti_bin/SiteData.asmx?WSDL"
ntlm = WindowsHttpAuthenticated(username = user, password = password)
client = Client(url, transport=ntlm)
```
|
To get the WSDL:
```
import sys
# we use suds -> https://fedorahosted.org/suds
from suds import WebFault
from suds.client import *
import urllib2
# my 2 url conf
# url_sharepoint,url_NTLM_authproxy
import myconfig as my
# build url
wsdl = '_vti_bin/SiteData.asmx?WSDL'
url = '/'.join([my.url_sharepoint,wsdl])
# we need a NTLM_auth_Proxy -> http://ntlmaps.sourceforge.net/
# follow instruction and get proxy running
proxy_handler = urllib2.ProxyHandler({'http': my.url_NTLM_authproxy })
opener = urllib2.build_opener(proxy_handler)
client = SoapClient(url, {'opener' : opener})
print client.wsdl
```
The main (and mean) problem: the SharePoint server uses NTLM auth [ :-( ], so I had to use the NTLM auth proxy.
To Rob and Enzondio: thanks for your hints!
|
how can i use sharepoint (via soap?) from python?
|
[
"",
"python",
"sharepoint",
"soap",
"ntlm",
"suds",
""
] |
I have a `<button>` with an accesskey assgined to it. The accesskey works fine as long as the button is visible, but when I set `display: none` or `visibility: hidden`, the accesskey no longer works.
Also tried without success:
* Use a different element type: a, input (various types, even typeless).
* Assign the accesskey to a label that wraps the invisible control.
Note, I'm not sure if this is the standard behavior, but prior to Firefox 3 the accesskey seemed to work regardless of visibility.
|
The behavior you are seeing is correct: you cannot "access" an element that is not displayed. Sal's suggestion will almost certainly work, but may I ask what your purpose is in doing this? There is probably a better way to accomplish what you are trying to achieve. Have you considered using a keypress handler?
I think you probably want to go with the other suggestions if you don't want a keypress handler. Try `position: absolute; left: -9999px;` to pull your content out of the page. Or use absolute positioning, change opacity to zero and z-index to -1. With absolute positioning the element won't affect the layout of other content on the page, and zero opacity makes it invisible. Even with opacity set to zero you can still click on the element; though you cannot see it, it may prevent you from clicking on other elements of the page, so use a negative z-index to pull it behind other content.
|
You can apply a negative margin to push the element outside of the visible page. I think many browsers and text readers ignore elements with `display:none` and possibly also `visibility:hidden`.
|
How can I define an accesskey for an invisible HTML element
|
[
"",
"javascript",
"html",
"css",
"firefox",
""
] |
What are some things I can do to improve query performance of an oracle query without creating indexes?
Here is the query I'm trying to run faster:
```
SELECT c.ClaimNumber, a.ItemDate, c.DTN, b.FilePath
FROM items a,
itempages b,
keygroupdata c
WHERE a.ItemType IN (112,115,189,241)
AND a.ItemNum = b.ItemNum
AND b.ItemNum = c.ItemNum
ORDER BY a.DateStored DESC
```
None of these columns are indexed and each of the tables contains millions of records. Needless to say, it takes over 3 and half minutes for the query to execute. This is a third party database in a production environment and I'm not allowed to create any indexes so any performance improvements would have to be made to the query itself.
Thanks!
|
First I'd rewrite the query to be ANSI standard:
```
SELECT c.ClaimNumber, a.ItemDate, c.DTN, b.FilePath
FROM items a
INNER JOIN itempages b ON b.ItemNum = a.ItemNum
INNER JOIN keygroupdata c ON c.ItemNum = b.ItemNum
WHERE a.ItemType IN (112,115,189,241)
ORDER BY a.DateStored DESC
```
This makes it easier to read and understand what is going on. It also helps you not make mistakes (i.e. Cross Joining)that might cause real big problems. Then I'd get the Explain plan to see what the DBMS is doing with that query. Is it trying to use some indexes? Is it joining the tables correctly?
Then I'd review the tables that I'm working with to see if there are any indexes that already exist that I could be using to make my query faster. Finally as everyone else has suggested I'd remove the Order By clause and just do that in code.
|
Ask the third party to index its join columns, as they should have done in the first place! Without indexes, Oracle has nothing to go on other than brute force.
|
Improve oracle query performance without indexing
|
[
"",
"sql",
"performance",
"oracle",
""
] |
I have been trying to learn multi-threaded programming in C# and I am confused about when it is best to use a thread pool vs. create my own threads. One book recommends using a thread pool for small tasks only (whatever that means), but I can't seem to find any real guidelines.
What are some pros and cons of thread pools vs creating my own threads? And what are some example use cases for each?
|
If you have lots of logical tasks that require constant processing and you want them done in parallel, use the pool + scheduler.
If you need to run IO-related tasks concurrently, such as downloading from remote servers or disk access, but only every few minutes or so, then make your own threads and kill them once you're finished.
Edit: About some considerations: I use thread pools for database access, physics/simulation, AI (games), and for scripted tasks run on virtual machines that process lots of user-defined tasks.
Normally a pool consists of 2 threads per processor (so likely 4 nowadays), however you can set up the amount of threads you want, if you know how many you need.
Edit: The reason to make your own threads is context switches (that's when threads need to swap in and out of the process, along with their memory). Useless context switches - say, leaving threads sitting around when you aren't using them - can easily halve the performance of your program (say you have 3 sleeping threads and 2 active threads). Thus, if those downloading threads are just waiting, they're eating up tons of CPU and cooling down the cache for your real application.
|
I would suggest you use a thread pool in C# for the same reasons as any other language.
When you want to limit the number of threads running or don't want the overhead of creating and destroying them, use a thread pool.
By small tasks, the book you read means tasks with a short lifetime. If it takes ten seconds to create a thread which only runs for one second, that's one place where you should be using pools (ignore my actual figures, it's the ratio that counts).
Otherwise you spend the bulk of your time creating and destroying threads rather than simply doing the work they're intended to do.
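The short-lifetime rule of thumb is easy to see outside C# as well; here is a rough Python sketch contrasting the two approaches (the `Event`-based worker is just a stand-in for a long-running download):

```python
import threading
from concurrent.futures import ThreadPoolExecutor

def short_task(n):
    # short-lived work: the pool amortizes thread creation across many calls
    return n * n

# pooled: 100 small tasks share 4 reusable worker threads
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(short_task, range(100)))

# dedicated thread: one long-lived job that justifies its own thread
done = threading.Event()
worker = threading.Thread(target=done.set)  # stand-in for, e.g., a download loop
worker.start()
worker.join()  # create, run, and throw away - fine when it happens rarely
```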
|
When to use thread pool in C#?
|
[
"",
"c#",
"multithreading",
"threadpool",
""
] |
I'm trying to chart the number of registrations per day in our registration system. I have an Attendee table in sql server that has a smalldatetime field A\_DT, which is the date and time the person registered.
I started with this:
```
var dailyCountList =
(from a in showDC.Attendee
let justDate = new DateTime(a.A_DT.Year, a.A_DT.Month, a.A_DT.Day)
group a by justDate into DateGroup
orderby DateGroup.Key
select new RegistrationCount
{
EventDateTime = DateGroup.Key,
Count = DateGroup.Count()
}).ToList();
```
That works great, but it won't include the dates where there were no registrations, because there are no attendee records for those dates. I want every date to be included, and when there is no data for a given date, the count should just be zero.
So this is my current working solution, but I KNOW THAT IT IS TERRIBLE.
I added the following to the code above:
```
// Create a new list of data ranging from the beginning to the end of the first list, specifying 0 counts for missing data points (days with no registrations)
var allDates = new List<RegistrationCount>();
for (DateTime date = (from dcl in dailyCountList select dcl).First().EventDateTime; date <= (from dcl in dailyCountList select dcl).Last().EventDateTime; date = date.AddDays(1))
{
DateTime thisDate = date; // lexical closure issue - see: http://www.managed-world.com/2008/06/13/LambdasKnowYourClosures.aspx
allDates.Add(new RegistrationCount
{
EventDateTime = date,
Count = (from dclInner in dailyCountList
where dclInner.EventDateTime == thisDate
select dclInner).DefaultIfEmpty(new RegistrationCount
{
EventDateTime = date,
Count = 0
}).Single().Count
});
}
```
So I created ANOTHER list, and loop through a sequence of dates I generate based on the first and last registrations in the query, and for each item in the sequence of dates, I QUERY the results of my first QUERY for the information regarding the given date, and supply a default if nothing comes back. So I end up doing a subquery here and I want to avoid this.
Can anyone thing of an elegant solution? Or at least one that is less embarrassing?
|
O(n) with 2 enumerations. It's best to pull the items into memory before trying this; the database has enough to do without thinking about this stuff.
```
if (!dailyCountList.Any())
return;
//make a dictionary to provide O(1) lookups for later
Dictionary<DateTime, RegistrationCount> lookup = dailyCountList.ToDictionary(r => r.EventDateTime);
DateTime minDate = dailyCountList[0].EventDateTime;
DateTime maxDate = dailyCountList[dailyCountList.Count - 1].EventDateTime;
int DayCount = 1 + (int) (maxDate - minDate).TotalDays;
// I have the days now.
IEnumerable<DateTime> allDates = Enumerable
.Range(0, DayCount)
.Select(x => minDate.AddDays(x));
//project the days into RegistrationCounts, making up the missing ones.
List<RegistrationCount> result = allDates
.Select(d => lookup.ContainsKey(d) ? lookup[d] :
new RegistrationCount(){EventDateTime = d, Count = 0})
.ToList();
```
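The shape of that answer - build an O(1) lookup, enumerate the full day range, fill gaps with zero - is language-neutral; a rough Python equivalent (names are illustrative):

```python
from datetime import date, timedelta

def fill_missing_days(counts):
    """counts: date-sorted list of (day, count) pairs.
    Returns one (day, count) per calendar day, 0 where there was no data."""
    if not counts:
        return []
    lookup = dict(counts)  # O(1) per-day lookup, as with the Dictionary above
    first, last = counts[0][0], counts[-1][0]
    span = (last - first).days + 1
    return [(first + timedelta(days=d), lookup.get(first + timedelta(days=d), 0))
            for d in range(span)]
```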
|
The problem is that you have no range of dates without executing your query. So, you can either pick a date range, run a SELECT MAX and SELECT MIN against your DB, or execute your query and then add the missing dates.
```
var allDailyCountList =
from d in Range(dailyCountList[0].EventDateTime, dailyCountList[dailyCountList.Count - 1].EventDateTime)
// since you already ordered by DateTime, we don't have to search the entire List
join dc in dailyCountList on
d equals dc.EventDateTime
into rcGroup
from rc in rcGroup.DefaultIfEmpty(
new RegistrationCount()
{
EventDateTime = d,
Count = 0
}
) // gives us a left join
select rc;
public static IEnumerable<DateTime> Range(DateTime start, DateTime end) {
    for (DateTime date = start; date <= end; date = date.AddDays(1)) {
yield return date;
}
}
```
|
How do I use LINQ to query for items, but also include missing items?
|
[
"",
"c#",
"asp.net",
"linq",
""
] |
I'm experimenting with a personal finance application, and I'm thinking about what approach to take to update running balances when entering a transaction in an account.
Currently the way I'm using involves retrieving all records more recent than the inserted/modified one, and go one by one incrementing their running balance.
For example, given the following transactions:
t1 date = 2008-10-21, amount = 500, running balance = 1000
t2 date = 2008-10-22, amount = 300, running balance = 1300
t3 date = 2008-10-23, amount = 100, running balance = 1400
...
Now suppose I insert a transaction between t1 and t2, then t2 and all subsequent transactions would need their running balances adjusted.
Hehe, now that I wrote this question, I think I know the answer... so I'll leave it here in case it helps someone else (or maybe there's even a better approach?)
First, I get the running balance from the previous transaction, in this case, t1. Then I update all following transactions (which would include the new one):
```
UPDATE transactions
SET running_balance = running_balance + <AMOUNT>
WHERE date > <t1.date>
```
The only issue I see is that now instead of storing only a date, I'll have to store a time too. Although, what would happen if two transactions had the exact same date/time?
PS: I'd prefer solutions not involving proprietary features, as I'm using both PostgreSQL and SQLite... although a Postgres-only solution would be helpful too.
|
I think this might work:
I was using both the date and the id to order the transactions, but now I'm going to store the date and the id in one column and use that for ordering. That way, comparisons (like >) should always work as expected - as opposed to the situation I described earlier, where two rows could have the exact same datetime (however unlikely that'd be).
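The reason a combined date+id key gives a sound ordering is the lexicographic rule: the id only breaks ties when the dates are equal. Python's tuple comparison follows exactly that rule, which makes the idea easy to check:

```python
# ties on the timestamp are broken by the id, so two transactions
# with the exact same datetime still have a well-defined total order
t1 = ('2008-10-21 10:00', 17)
t2 = ('2008-10-21 10:00', 18)  # same time, higher id
t3 = ('2008-10-22 09:00', 2)
assert t1 < t2 < t3
assert sorted([t3, t2, t1]) == [t1, t2, t3]
```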
|
Some sort of identity / auto-increment column in there would be wise as well, purely for the transaction order if anything.
Also in addition to just the date of the transaction, a date that the transaction is inserted into the database (not always the same) would be wise / helpful as well.
These sorts of things simply help you arrange things in the system and make it easier to change things, e.g. for transactions, at a later time.
|
How to update a column value incrementally
|
[
"",
"sql",
""
] |
Here's the code. Not much to it.
```
<?php
include("Spreadsheet/Excel/Writer.php");
$xls = new Spreadsheet_Excel_Writer();
$sheet = $xls->addWorksheet('At a Glance');
$colNames = array('Foo', 'Bar');
$sheet->writeRow(0, 0, $colNames, $colHeadingFormat);
for($i=1; $i<=10; $i++)
{
$row = array( "foo $i", "bar $i");
$sheet->writeRow($rowNumber++, 0, $row);
}
header ("Expires: " . gmdate("D,d M Y H:i:s") . " GMT");
header ("Last-Modified: " . gmdate("D,d M Y H:i:s") . " GMT");
header ("Cache-Control: no-cache, must-revalidate");
header ("Pragma: no-cache");
$xls->send("test.xls");
$xls->close();
?>
```
The issue is that I get the following error when I actually open the file with Excel:
```
File error: data may have been lost.
```
Even stranger is the fact that, despite the error, the file seems fine. Any data I happen to be writing is there.
Any ideas on how to get rid of this error?
---
### Edit
I've modified the code sample to better illustrate the problem. I don't think the first sample was a legit test.
|
The code in the question has a bug which causes the error.
This line writes a bunch of column names to row 0
```
$sheet->writeRow(0, 0, $colNames, $colHeadingFormat);
```
Then we have the loop which is supposed to write out the value rows.
```
for($i=1; $i<=10; $i++)
{
$row = array( "foo $i", "bar $i");
$sheet->writeRow($rowNumber++, 0, $row);
}
```
The problem is that **$rowNumber** isn't declared anywhere so it overwrites row 0 on the first pass through the loop.
This overwriting seems to cause an issue with Excel Writer.
The strange thing is that, on the Excel file that gives the error, you still see the row with the column names even though it's technically been overwritten.
I found the solution [here on Google Groups](http://groups.google.com/group/spreadsheet-writeexcel/browse_thread/thread/6656856820b6f799/). Scroll down to the bottom. It's the last post by **Micah** that mentions the issue.
---
And here's the fix
```
<?php
include("Spreadsheet/Excel/Writer.php");
$xls = new Spreadsheet_Excel_Writer();
$rowNumber = 0;
$sheet = $xls->addWorksheet('At a Glance');
$colNames = array('Foo', 'Bar');
$sheet->writeRow($rowNumber, 0, $colNames, $colHeadingFormat);
for($i=1; $i<=10; $i++)
{
$rowNumber++;
$row = array( "foo $i", "bar $i");
$sheet->writeRow($rowNumber, 0, $row);
}
header ("Expires: " . gmdate("D,d M Y H:i:s") . " GMT");
header ("Last-Modified: " . gmdate("D,d M Y H:i:s") . " GMT");
header ("Cache-Control: no-cache, must-revalidate");
header ("Pragma: no-cache");
$xls->send("test.xls");
$xls->close();
?>
```
|
As Mark Biek points out the main problem is that `$rowNumber` is uninitialised and as such overwrites row 0.
This means that the generated Excel file will contain 2 data entries for cells A1 and B1, (0, 0 and 0, 1).
This wasn't a problem prior to Office Service Pack 3. However, once SP3 is installed Excel will raise a "data may have been lost" warning if it encounters duplicate entries for a cell.
The general solution is to not write more than one data to a cell. :-)
Here is a more [detailed explanation](http://groups.google.com/group/spreadsheet-writeexcel/browse_thread/thread/3dcea40e6620af3a) of the issue. It is in relation to the Perl [Spreadsheet::WriteExcel](http://search.cpan.org/~jmcnamara/Spreadsheet-WriteExcel/lib/Spreadsheet/WriteExcel.pm) module (from which the PHP module is derived) but the thrust is the same.
|
Strange error when creating Excel files with Spreadsheet_Excel_Writer
|
[
"",
"php",
"export-to-excel",
"pear",
""
] |
I'm working on a stored procedure in SQL Server 2000 with a temp table defined like this:
```
CREATE TABLE #MapTable (Category varchar(40), Code char(5))
```
After creating the table I want to insert some standard records (which will then be supplemented dynamically in the procedure). Each category (about 10) will have several codes (typically 3-5), and I'd like to express the insert operation for each category in one statement.
Any idea how to do that?
The best idea I've had so far is to keep a real table in the db as a template, but I'd really like to avoid that if possible. The database where this will live is a snapshot of a mainframe system, such that the entire database is blown away every night and re-created in a batch process- stored procedures are re-loaded from source control at the end of the process.
The issue I'm trying to solve isn't so much keeping it to one statement as it is trying to avoid re-typing the category name over and over.
|
```
insert into #maptable (category, code)
select 'foo1', b.bar
from
( select 'bar11' as bar
union select 'bar12'
union select 'bar13'
) b
union
select 'foo2', b.bar
from
( select 'bar21' as bar
union select 'bar22'
union select 'bar23'
) b
```
|
DJ's is a fine solution but could be simplified (see below).
Why does it need to be a single statement?
What's wrong with:
```
insert into #MapTable (category,code) values ('Foo','AAAAA')
insert into #MapTable (category,code) values ('Foo','BBBBB')
insert into #MapTable (category,code) values ('Foo','CCCCC')
insert into #MapTable (category,code) values ('Bar','AAAAA')
```
For me this is much easier to read and maintain.
---
Simplified DJ solution:
```
CREATE TABLE #MapTable (Category varchar(40), Code char(5))
INSERT INTO #MapTable (Category, Code)
SELECT 'Foo', 'AAAAA'
UNION
SELECT 'Foo', 'BBBBB'
UNION
SELECT 'Foo', 'CCCCC'
SELECT * FROM #MapTable
```
There's nothing really wrong with DJ's, it just felt overly complex to me.
---
From the OP:
> The issue I'm trying to solve isn't so much keeping it to one statement as it
> is trying to avoid re-typing the category name over and over.
I feel your pain -- I try to find shortcuts like this too and realize that by the time I solve the problem, I could have typed it long hand.
If I have a lot of repetitive data to input, I'll sometimes use Excel to generate the insert codes for me. Put the Category in one column and the Code in another; use all of the helpful copying techniques to do the hard work
then
```
="insert into #MapTable (category,code) values ('"&A1&"','"&B1&"')"
```
in a third column and I've generated my inserts
Of course, all of this is assuming that the Categories and Codes can't be pulled from a system table.
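If the goal is simply to avoid retyping the category name, another option is to generate the INSERT statements with a small script instead of a spreadsheet. Here's a sketch in Python; the category/code pairs below are made up for illustration:

```python
# Hypothetical category/code pairs; replace with your real data.
categories = {
    "Foo": ["AAAAA", "BBBBB", "CCCCC"],
    "Bar": ["AAAAA"],
}

def build_inserts(categories):
    """Generate one INSERT statement per (category, code) pair,
    typing each category name only once in the source data."""
    lines = []
    for category, codes in categories.items():
        for code in codes:
            lines.append(
                "insert into #MapTable (category,code) "
                f"values ('{category}','{code}')"
            )
    return lines

for stmt in build_inserts(categories):
    print(stmt)
```

Paste the output into the stored procedure source; the category name lives in exactly one place.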
|
Building a temp table /map
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I'm developing a poker game in C#. At the moment I'm trying to score the player's hand using `RegEx`. I search the string (composed of the cards' suits and numbers) and look for suits or numbers that match the `RegEx`. If I get 2 matches then the player has a pair; 3 matches means three of a kind.
I have 3 classes at the moment, a Card class (with number and suit), a Deck class (that contains 52 Cards) and a Hand class that gets five cards from the shuffled deck.
Deck class has a `shuffleDeck()`;
Hand class has the functions to calculate the score (is in these functions that I am using RegEx).
I generate the string on which I use `RegEx` by adding the 5 suits and numbers that the hand has.
Is this a good idea or should I do it another way, if so, how?
Thank you for your help.
PS. I am one of those inexperienced programmers who want to use a newly learned tool for everything
|
I do not think that a regex is the appropriate way to deal with this. You probably should be using a more sophisticated representation of a hand than a string.
|
You have not provided much detail, but from what I have read, I assume you're not pushing the OOP very far...
I would have a Card class that holds Rank and Suit instances. I would then have a Deck class that handles shuffling / dealing...
I would then have a Hand class that would contain your poker hand of n Card objects...
In this way you can build up rules to evaluate each hand object, thus being more flexible and more extensible in the future...say if you want to make another card game / add support for another variant of poker...
Using Regular expressions to do all of this seems to be a pretty poor choice.
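To make this concrete, here is a quick sketch (in Python for brevity; the same idea maps directly to C# classes) of scoring a hand by rank multiplicities alone. Flushes and straights are left out, and the `(rank, suit)` tuple representation is just an assumption for the example:

```python
from collections import Counter

def classify(hand):
    """Classify a 5-card hand by rank multiplicities alone
    (flushes/straights omitted for brevity). Each card is a
    (rank, suit) tuple, e.g. ('K', 'hearts')."""
    counts = sorted(Counter(rank for rank, suit in hand).values(), reverse=True)
    if counts == [4, 1]:
        return "four of a kind"
    if counts == [3, 2]:
        return "full house"
    if counts == [3, 1, 1]:
        return "three of a kind"
    if counts == [2, 2, 1]:
        return "two pair"
    if counts == [2, 1, 1, 1]:
        return "pair"
    return "high card"

hand = [('K', 'hearts'), ('K', 'spades'), ('7', 'clubs'),
        ('7', 'diamonds'), ('2', 'hearts')]
print(classify(hand))  # two pair
```

Counting ranks directly is both simpler and less fragile than matching patterns against a concatenated string.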
|
Using RegEx to get Poker Hand score
|
[
"",
"c#",
"regex",
"poker",
""
] |
Just for my own purposes, I'm trying to build a tokenizer in Java where I can define a regular grammar and have it tokenize input based on that. The StringTokenizer class is a legacy class whose use is discouraged, and I've found a couple of functions in Scanner that hint towards what I want to do, but no luck yet. Anyone know a good way of going about this?
|
The name "Scanner" is a bit misleading, because the word is often used to mean a lexical analyzer, and that's not what Scanner is for. All it is is a substitute for the `scanf()` function you find in C, Perl, *et al*. Like StringTokenizer and `split()`, it's designed to scan ahead until it finds a match for a given pattern, and whatever it skipped over on the way is returned as a token.
A lexical analyzer, on the other hand, has to examine and classify every character, even if it's only to decide whether it can safely ignore them. That means, after each match, it may apply several patterns until it finds one that matches *starting at that point*. Otherwise, it may find the sequence "//" and think it's found the beginning of a comment, when it's really inside a string literal and it just failed to notice the opening quotation mark.
It's actually much more complicated than that, of course, but I'm just illustrating why the built-in tools like StringTokenizer, `split()` and Scanner aren't suitable for this kind of task. It is, however, possible to use Java's regex classes for a limited form of lexical analysis. In fact, the addition of the Scanner class made it much easier, because of the new Matcher API that was added to support it, i.e., regions and the `usePattern()` method. Here's an example of a rudimentary scanner built on top of Java's regex classes.
```
import java.util.*;
import java.util.regex.*;
public class RETokenizer
{
static List<Token> tokenize(String source, List<Rule> rules)
{
List<Token> tokens = new ArrayList<Token>();
int pos = 0;
final int end = source.length();
Matcher m = Pattern.compile("dummy").matcher(source);
m.useTransparentBounds(true).useAnchoringBounds(false);
while (pos < end)
{
m.region(pos, end);
boolean matched = false;
for (Rule r : rules)
{
if (m.usePattern(r.pattern).lookingAt())
{
tokens.add(new Token(r.name, m.start(), m.end()));
pos = m.end();
matched = true;
break;
}
}
if (!matched)
{
pos++; // bump along one character only when no rule matched
}
}
return tokens;
}
static class Rule
{
final String name;
final Pattern pattern;
Rule(String name, String regex)
{
this.name = name;
pattern = Pattern.compile(regex);
}
}
static class Token
{
final String name;
final int startPos;
final int endPos;
Token(String name, int startPos, int endPos)
{
this.name = name;
this.startPos = startPos;
this.endPos = endPos;
}
@Override
public String toString()
{
return String.format("Token [%2d, %2d, %s]", startPos, endPos, name);
}
}
public static void main(String[] args) throws Exception
{
List<Rule> rules = new ArrayList<Rule>();
rules.add(new Rule("WORD", "[A-Za-z]+"));
rules.add(new Rule("QUOTED", "\"[^\"]*+\""));
rules.add(new Rule("COMMENT", "//.*"));
rules.add(new Rule("WHITESPACE", "\\s+"));
String str = "foo //in \"comment\"\nbar \"no //comment\" end";
List<Token> result = RETokenizer.tokenize(str, rules);
for (Token t : result)
{
System.out.println(t);
}
}
}
```
This, by the way, is the only good use I've ever found for the `lookingAt()` method. :D
|
If I understand your question correctly, here are two example methods to tokenize a string. You do not even need the Scanner class; it only helps if you want to pre-process the tokens or iterate through them more flexibly than with an array. If an array is enough, just use String.split() as given below.
Please give more requirements to enable more precise answers.
```
import java.util.Scanner;
public class Main {
public static void main(String[] args) {
String textToTokenize = "This is a text that will be tokenized. I will use 1-2 methods.";
Scanner scanner = new Scanner(textToTokenize);
scanner.useDelimiter("i.");
while (scanner.hasNext()){
System.out.println(scanner.next());
}
System.out.println(" **************** ");
String[] sSplit = textToTokenize.split("i.");
for (String token: sSplit){
System.out.println(token);
}
}
}
```
|
How do I tokenize input using Java's Scanner class and regular expressions?
|
[
"",
"java",
"regex",
"compiler-construction",
"tokenize",
""
] |
Can anyone recommend a good library that will let me easily read/write private member fields of a class? I was looking through Apache Commons, but couldn't see it. Am I going blind?
Edit: Asking questions on the border of legality always raises the question of "why". I am writing several JavaRebel plugins for hot-swapping classes. Accessing private variables is only step 1; I might even have to replace implementations of some methods.
|
In most cases, Java [reflection](http://java.sun.com/j2se/1.5.0/docs/api/java/lang/reflect/package-summary.html) solves the problem:
Example:
```
public class Foo {
/**
* Gets the name Field.
*
* @return the name
*/
public final String getName() {
return name;
}
/**
* Sets the name Field with the name input value.
*
* @param name the name to set
*/
public final void setName(String name) {
this.name = name;
}
private String name;
}
```
Now the Reflection Code:
```
import java.lang.reflect.Field;
....
Foo foo = new Foo();
foo.setName("old Name");
String fieldName = "name";
Class class1 = Foo.class;
try {
System.out.println(foo.getName());
Field field = class1.getDeclaredField(fieldName);
field.setAccessible(true);
field.set(foo, "My New Name");
System.out.println(foo.getName());
} catch (NoSuchFieldException e) {
System.out.println("FieldNotFound: " + e);
} catch (IllegalAccessException e) {
System.out.println("Ilegal Access: " + e);
}
```
**UPDATE:**
It's worth mentioning that this approach can be thwarted by a SecurityManager. – Dan Dyer
|
Without knowing why you want that level of access, I can only wonder what you could need that level of access for. Private members are private for a reason. They are not intended to be accessed from outside the class and could result in undocumented behavior.
That said, you can get access to most things through the java.lang.reflect package.
In my opinion, you should examine why you think you need that level of access and see if you could be doing something differently to not need it.
|
Library that subverts java access control and lets me access private member variables?
|
[
"",
"java",
""
] |
I'm looking for best practices for establishing connections between Oracle 8 and Visual Studio 2005 applications. The target would be a Windows Forms application written in C# that hits the database once a second to monitor tables looking for their last inserted record. I'm considering using "Application settings" to store the connection string there, but I'd love to hear from you guys. Thanks in advance!
This is a very rudimentary draft:
```
using System.Data;
using System.Data.OracleClient;
try
{
StringBuilder str = new StringBuilder();
string ora = Properties.Settings.Default.OracleConnectionString;
OracleConnection con = new OracleConnection(ora);
OracleCommand cmd = new OracleCommand();
cmd.Connection = con;
cmd.CommandText = "SELECT timestamp FROM jde_out WHERE rownum = 1";
cmd.CommandType = CommandType.Text;
con.Open();
OracleDataReader rdr = cmd.ExecuteReader();
if (rdr.Read())
{
str.AppendLine(rdr.GetValue(0).ToString());
}
this.lblJDEtime.Text = str.ToString();
rdr.Close();
con.Close();
}
catch (OracleException err)
{
MessageBox.Show("Exception caught:\n\n" + err.ToString());
}
```
I've just updated the code needed to perform the connection. Changed the Exception type to the more specific OracleException. Added the connection string via Properties.Settings.
|
Based on my experience with Oracle 10g, I recommend using the Oracle data provider (ODP.NET) rather than the Microsoft data provider for Oracle. Microsoft's has not been updated in years and does not support everything in Oracle 10g, so I would definitely check into that for Oracle 8.
Following [Microsoft guidance](http://msdn.microsoft.com/en-us/library/ms254494.aspx) on connection string in the app.config file, you should store it like:
```
<?xml version='1.0' encoding='utf-8'?>
<configuration>
<connectionStrings>
<clear />
<add name="Name"
providerName="System.Data.ProviderName"
connectionString="Valid Connection String;" />
</connectionStrings>
</configuration>
```
I've also worked on apps with the connection information stored in application settings, which worked fine for our application.
|
[ConnectionStrings.com](http://www.connectionstrings.com/) has pretty much every type of connection string you'll ever want including [Oracle](http://www.connectionstrings.com/?carrier=oracle).
|
What would be the most convenient way to connect Visual Studio 2005 (C#) to Oracle8?
|
[
"",
"c#",
".net",
"visual-studio",
"winforms",
"oracle",
""
] |
I've read somewhere that it is advised to use !== and === instead.
|
"Use the strict equality operator (===) when you want to check that the two operands are of the same type and value. Use the regular equality operator (==) if you care only about the value and the type does not matter. If, for example, one operand is the number 5 and the other operand is the string "5", standard equality operator will return true, but, since they are not of the same type, a strict equality operator will return false."
<http://www.webreference.com/js/column26/stricteq.html>
|
It depends on what you're trying to do.
!== and === compare both value *and* type.
```
5 == "5"
```
The above will return true. If that's not what you intended, you should do this instead:
```
5 === "5"
```
That would return false.
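A few more cases worth knowing, since `==` applies type coercion in ways that are easy to trip over (these are standard JavaScript semantics):

```javascript
// Cases where == coerces types but === does not.
console.assert((5 == "5") === true);           // string "5" coerced to number
console.assert((5 === "5") === false);         // different types, no coercion
console.assert((0 == "") === true);            // "" coerces to 0
console.assert((0 === "") === false);
console.assert((null == undefined) === true);  // special case in the spec
console.assert((null === undefined) === false);
```

Because of surprises like `0 == ""`, many style guides recommend defaulting to `===` and only reaching for `==` when you explicitly want coercion.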
|
Need an example explaining why use of comparison operands != and == is considered bad practice in JavaScript
|
[
"",
"javascript",
""
] |
I started off programming in Basic on the [ZX81](http://en.wikipedia.org/wiki/ZX81), then [BASICA](http://en.wikipedia.org/wiki/IBM_BASICA), [GW-BASIC](http://en.wikipedia.org/wiki/GW-BASIC), and [QBasic](http://en.wikipedia.org/wiki/QBasic). I moved on to C (Ah, Turbo C 3.1, I hardly knew ye...)
When I got started in microcontrollers I regressed with the [BASIC Stamp](http://en.wikipedia.org/wiki/BASIC_Stamp) from Parallax. However, BASIC is/was awesome because it was so easy to understand and so hard to make a mistake. I moved on to assembly and C eventually because I needed the additional power (speed, capacity, resources, etc.), but I know that if the bar was much higher many people would never get into programming microcontrollers.
I keep getting an itch to make my own on-chip BASIC interpreter, but I wonder if there's a need for BASIC now that Lua and Python are easily embeddable, and just as approachable as BASIC.
* What, if any, are the advantages BASIC has over other languages?
* Why is it still around?
* If I decide to make a general purpose microcontroller board with an interpreter, is there any reason to make a version of BASIC?
Plenty of other languages are considered dead, but BASIC just keeps hanging on.
|
As an architecture, the main claim to fame of BASIC is that you could make BASIC interpreters very small - just a few KB. In the days of a DG Nova this was a win as you could use systems like Business BASIC to build a multiuser application on a machine with 64K of RAM (or even less).
BASIC (VB in particular) is a legacy system and has a large existing code-base. Arguably VB is really a language (some would say a thin wrapper over COM) that has a BASIC-like syntax. These days, I see little reason to keep the language around apart from people's familiarity with it and to maintain the existing code base. I certainly would not advocate new development in it (note that VB.Net is not really BASIC but just has a VB-like syntax. The type system is not broken in the way that VB's was.)
What *is* missing from the computing world is a *relevant* language that is easy to learn and tinker with and has mind-share in mainstream application development. I grew up in the days of 8-bit machines, and the entry barrier to programming on those systems was very low. The architecture of the machines was very simple, and you could learn to program and write more-or-less relevant applications on these machines very easily.
Modern architectures are much more complex and have a bigger hump to learn. You can see people pontificating on how kids can't learn to program as easily as they could back in the days of BASIC and 8-bit computers and I think that argument has some merit. There is something of a hole left that makes programming just that bit harder to get into. Toy languages are not much use here - for programming to be attractive it has to be possible to aspire to build something relevant with the language you are learning.
This leads to the problem of a language that is easy for kids to learn but still allows them to write relevant programmes (or even games) that they might actually want. It also has to be widely perceived as relevant.
The closest thing I can think of to this is Python. It's not the only example of a language of that type, but it is the one with the most mind-share - and (IMO) a perception of relevance is necessary to play in this niche. It's also one of the easiest languages to learn that I've experienced (of the 30 or so that I've used over the years).
|
[This may come off sounding more negative than it really is. I'm not saying Basic is the root of all evil, [others have said that](http://en.wikiquote.org/wiki/Edsger_Dijkstra). I'm saying it's a legacy we can afford to leave behind.]
**"because it was so easy to understand and so hard to make a mistake"** That's certainly debatable. I've had some bad experiences with utterly opaque basic. Professional stuff -- commercial products -- perfectly awful code. Had to give up and decline the work.
**"What, if any, are the advantages Basic has over other languages?"** None, really.
**"Why is it still around?"** Two reasons: (1) Microsoft, (2) all the IT departments that started doing VB and now have millions of lines of VB legacy code.
**"Plenty of other languages are considered dead..."** Yep. Basic is there along side COBOL, PL/I and RPG as legacies that sometimes have more cost than value. But because of the "if it ain't broke don't fix it" policy of big IT, there they sit, sucking up resources who could easily replace it with something smaller, simpler and cheaper to maintain. Except it hasn't "failed" -- it's just disproportionately expensive.
30-year old COBOL is a horrible situation to rework. Starting in 2016 we'll be looking at 30-year old MS Basic that we just can't figure out, don't want to live without, and can't decide how to replace.
**"but basic just keeps hanging on"** It appears that some folks love Basic. Others see it as yet another poorly-designed language; it's advantages are being early to market and being backed by huge vendors (IBM, initially). Poorly-design, early-to-market only leaves us with a legacy that we'll be suffering with for decades.
I still have my 1965-edition Dartmouth Basic manual. I don't long for the good old days.
|
With Lua and Python embeddable, is there a place for Basic?
|
[
"",
"python",
"scripting",
"lua",
"interpreter",
"basic",
""
] |
I've been asked to display the 'correct' time on our website, which I frankly feel is rather pointless as 'correct' can be interpreted in such a variety of ways.
Our current method definitely results in an inaccurate time: it uses a server control rendering JavaScript that runs onload, using the datetime from the server as a parameter to create a clock object in JavaScript that finally renders on the page and then starts incrementing the clock. Between the server processing, network latency and client-side performance (there's plenty of other stuff running onload), the clock ends up way off the actual server time, and who knows how far off the client PC.
So to get the 'correct' time shown I could;
* Use local PC time and pass new Date() to the JavaScript clock object. Pros: Should be as close to the PC clock as possible. Cons: Not sure how accurate the PC clock is, let alone which timezone it's in.
* Use a web service or TCP request to an NTP server to update the clock on the web page. Pros: If the local PC is also sync'd to NTP, this will be accurate and the best possible match. Cons: Will have to handle all the timezone adjustments relative to our servers. If the PC clock is out, there will still be a mismatch.
Do I implement my own web service or use something like; [Earth Tools](http://www.earthtools.org/webservices.htm) or World Time Web Service (**EDIT:** link removed - now 404)
Here's a blog post from [Jon Galloway on Atomic Clock Web Service](http://weblogs.asp.net/jgalloway/archive/2004/04/08/110029.aspx) which is pretty old and yet ranks high when I google and he doesn't reach a conclusion.
Hopefully I can win the argument with management about why syncing to our server clock (GMT) doesn't make sense if you're not in that timezone, and why we even need to match a local PC.
Any angles on this I'm missing?
|
I needed to show the accurate time to clients in an auction web app. You can send the current server time with the page and have the JavaScript initialize right away without waiting for the rest of the page to load. Then you are only dealing with network latency, which in the worst case is not likely to be more than a couple of seconds.
After that, you're pretty darn close to accurate time. As long as your JavaScript timer code is written properly, you're not going too far out of sync before the next page load. But I have seen a lot of bad JS clock code. (Hint: Date() good, setTimeout() bad.)
If you have an application that users are going to be sitting on for a long time, just refresh your time sync either by reloading the page or Ajax.
I wouldn't worry about time zones, just use UTC time so there is no confusion about what time things will happen.
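One way to sketch this approach: compute a fixed offset between the server's timestamp and the client clock once at load, then always derive the displayed time from Date.now() plus that offset, so timer drift cannot accumulate the way repeated setTimeout increments would. How `serverNowMs` reaches the page (inlined at render time, Ajax, etc.) is an assumption here:

```javascript
// Keep a client-side clock synced to a server timestamp. The offset
// is computed once; every reading is derived fresh from Date.now(),
// so drift cannot accumulate between page loads.
function makeServerClock(serverNowMs) {
  const offset = serverNowMs - Date.now();
  return function now() {
    return new Date(Date.now() + offset);
  };
}

// Example: pretend the server is exactly one hour ahead of the client.
const serverClock = makeServerClock(Date.now() + 3600 * 1000);
// In a real page you would render serverClock() every second, e.g.:
// setInterval(() => { el.textContent = serverClock().toUTCString(); }, 1000);
console.log(serverClock().toISOString());
```

Refreshing `serverNowMs` periodically (page reload or Ajax) keeps long-lived pages honest.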
|
First, be certain that your client is aware that Windows, Linux, and OSX all have built-in clocks that are almost always visible to the users (or made visible very easily). Also, be certain that your client is aware of physical clocks that are often located near any kiosks that might be setup to hide the built in clock from the operating system.
If you know that, and your client still wants a clock on your website, have your client define "correct" time, then implement the solution that matches their definition (both of your solutions seem like they would take care of the two most-likely definitions).
|
Display accurate local time on web site?
|
[
"",
"javascript",
"web-services",
"timezone",
""
] |