| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I've set up a static website on GAE using hints found elsewhere, but I can't figure out how to return a 404 error. My app.yaml file looks like
```
- url: (.*)/
  static_files: static\1/index.html
  upload: static/index.html
- url: /
  static_dir: static
```
with all the static html/jpg files stored under the static directory. The above works for files that exist, but returns a null length file if they don't. The answer is probably to write a python script to return a 404 error, but how do you set things up to serve the static files that exist but run the script for files that don't?
Here is the log from fetching a non-existent file (nosuch.html) on the development application server:
```
ERROR 2008-11-25 20:08:34,084 dev_appserver.py] Error encountered reading file "/usr/home/ctuffli/www/tufflinet/static/nosuch.html":
[Errno 2] No such file or directory: '/usr/home/ctuffli/www/tufflinet/static/nosuch.html'
INFO 2008-11-25 20:08:34,088 dev_appserver.py] "GET /nosuch.html HTTP/1.1" 404 -
```
|
You need to register a catch-all script handler. Append this at the end of your app.yaml:
```
- url: /.*
  script: main.py
```
In main.py you will need to put this code:
```
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app

class NotFoundPageHandler(webapp.RequestHandler):
    def get(self):
        self.error(404)
        self.response.out.write('<Your 404 error html page>')

application = webapp.WSGIApplication([('/.*', NotFoundPageHandler)],
                                     debug=True)

def main():
    run_wsgi_app(application)

if __name__ == "__main__":
    main()
```
Replace `<Your 404 error html page>` with something meaningful. Or better, use a template; you can read how to do that [here](http://code.google.com/appengine/docs/gettingstarted/templates.html).
Please let me know if you have problems setting this up.
|
Google App Engine now has [Custom Error Responses](https://cloud.google.com/appengine/docs/python/config/appconfig?csw=1#Python_app_yaml_Custom_error_responses),
so you can add an error\_handlers section to your app.yaml, as in this example:
```
error_handlers:
- file: default_error.html
- error_code: over_quota
  file: over_quota.html
```
|
Google App Engine and 404 error
|
[
"",
"python",
"google-app-engine",
"http-status-code-404",
""
] |
I've been looking for a generic way to deal with bidirectional associations and a way to handle the inverse updates in manually written Java code.
For those who don't know what I'm talking about, here is an example. Below it are my current results of (unsatisfying) solutions.
```
public class A {
    public B getB();
    public void setB(B b);
}

public class B {
    public List<A> getAs();
}
```
Now, when updating any end of the association, in order to maintain consistency, the other end must be updated as well. Either manually each time
```
a.setB(b);
b.getAs().add(a);
```
or by putting matching code in the setters/getters and using a custom List implementation.
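For illustration, the setter variant might look like this (a rough sketch; the field names are assumed):

```java
import java.util.ArrayList;
import java.util.List;

// Rough sketch of the "matching code in the setter" approach; field names are assumed.
class B {
    private final List<A> as = new ArrayList<A>();

    public List<A> getAs() {
        return as;
    }
}

class A {
    private B b;

    public B getB() {
        return b;
    }

    // Keeps the inverse side in sync so callers never have to update both ends.
    public void setB(B newB) {
        if (this.b == newB) {
            return; // nothing to do
        }
        if (this.b != null) {
            this.b.getAs().remove(this); // detach from the old B
        }
        this.b = newB;
        if (newB != null && !newB.getAs().contains(this)) {
            newB.getAs().add(this); // attach to the new B
        }
    }
}
```

Of course this still leaves `getAs()` exposing a mutable list, which is exactly the kind of leak a custom List implementation would have to close.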
I've found an outdated, unmaintained project whose dependencies are no longer available (<https://e-nspire-gemini.dev.java.net/>). It deals with the problem by using annotations that are used to inject the necessary code automatically.
Does anyone know of another framework that deals with this in a generic, unobtrusive way, à la Gemini?
ciao,
Elmar
|
Google Collections (from Google's internal code) -- <http://code.google.com/p/google-collections/> -- is Java generics compatible (not only compatible, it uses generics very well).
Its `BiMap` class -- <http://google-collections.googlecode.com/svn/trunk/javadoc/index.html?http://google-collections.googlecode.com/svn/trunk/javadoc/com/google/common/collect/package-summary.html> --
allows for bidirectional associations.
Some of these classes are expected to make their way into JDK 7.
|
Thanks for all the suggestions, but none came close to what I was looking for; I probably formulated the question the wrong way.
I was looking for a replacement for Gemini, so for a way to handle this in an unobtrusive manner, without polluting the code with endless checks and special List implementations. This, of course, calls for an AOP-based approach, as suggested by Kevin.
When I looked around a little more, I found a package of Gemini on CNET that contains all sources and the dependencies with their sources. The missing sources for the dependencies were the only concern that stopped me from using it. Since all sources are now available, bugs can be fixed.
In case anyone looks for this: <http://www.download.com/Gemini/3000-2413_4-10440077.html>
|
Generic Java Framework to Manage Bidirectional Associations and Inverse Updates
|
[
"",
"java",
"associations",
"bidirectional",
"inverse",
""
] |
I'm working in .Net 3.5sp1 in C# for an ASP.Net solution and I'm wondering if there's any way to turn on the Class Name and Method Name drop-downs in the text editor that VB.Net has at the top. It's one of the few things from VB that I actually miss.
Edit: Also, is there any way to get the drop downs to be populated with the possible events?
e.g. (Page Events) | (Declarations)
|
Go To:
```
Tools -> Options -> Text Editor -> C# -> General -> Navigation Bar
```
Make sure it is checked, and that should show something at the top of your code that has all the classes and methods listed in your file.
|
In Visual Studio 2008 (and probably earlier versions):
Tools -> Options -> Text Editor -> C#(\*) -> General -> Navigation bar
(\*) or your preferred editor language
|
Visual Studio 2008: is there any way to turn on the Class Name/Method Name drop downs in C#? (like VB.Net has)
|
[
"",
"c#",
"vb.net",
"visual-studio-2008",
""
] |
This code always returns 0 in PHP 5.2.5 for microseconds:
```
<?php
$dt = new DateTime();
echo $dt->format("Y-m-d\TH:i:s.u") . "\n";
?>
```
Output:
```
[root@www1 ~]$ php date_test.php
2008-10-03T20:31:26.000000
[root@www1 ~]$ php date_test.php
2008-10-03T20:31:27.000000
[root@www1 ~]$ php date_test.php
2008-10-03T20:31:27.000000
[root@www1 ~]$ php date_test.php
2008-10-03T20:31:28.000000
```
Any ideas?
|
This seems to work, although it seems illogical that <http://us.php.net/date> documents the microsecond specifier yet doesn't really support it:
```
function getTimestamp()
{
    return date("Y-m-d\TH:i:s") . substr((string)microtime(), 1, 8);
}
```
|
You can specify that your input contains microseconds when constructing a `DateTime` object, and use `microtime(true)` directly as the input.
Unfortunately, this will fail if you hit an exact second, because there will be no `.` in the microtime output; so use `sprintf` to force the value to contain a decimal part in that case:
```
date_create_from_format(
    'U.u', sprintf('%.6f', microtime(true))
)->format('Y-m-d\TH:i:s.uO');
```
Or equivalently (more OO-style)
```
DateTime::createFromFormat(
    'U.u', sprintf('%.6f', microtime(true))
)->format('Y-m-d\TH:i:s.uO');
```
|
PHP DateTime microseconds always returns 0
|
[
"",
"php",
"datetime",
""
] |
I have an odd problem...I'm using a documentation generator which generates a lot of output like docs/foo.php.html. It's XHTML, and thus contains `<?xml...>` tags at the beginning of file. The problem is, Apache has somehow decided to run it through the PHP interpreter, even though ".php" appears in the middle of the filename, and not at the end. This, in turn, triggers a PHP error, because it sees "`<?`" as the command to start executing PHP code, and immediately gets confused by the "`xml...`" which follows it.
How do I configure Apache to ONLY execute .php files and not .php.html files? The string "php.html" does not appear explicitly anywhere in my Apache config files. There is a line "`AddHandler php5-script .php`", but I don't see how that would also include ".php.html" files.
|
The problem seems to be in [mod\_mime](http://httpd.apache.org/docs/2.2/mod/mod_mime.html).
Quote from the Apache mod\_mime documentation page:
> If you would prefer only the last dot-separated part of the filename to be mapped to a particular piece of meta-data, then do not use the Add\* directives. For example, if you wish to have the file foo.html.cgi processed as a CGI script, but not the file bar.cgi.html, then instead of using AddHandler cgi-script .cgi, use
>
> `<FilesMatch \.cgi$>`
>
> `SetHandler cgi-script`
>
> `</FilesMatch>`
Also, you can google for `apache mod_mime "multiple extensions"`
|
You could disable PHP's short tags -- this is the recommended way to mix PHP and XML.
<http://us.php.net/ini.core>
```
short_open_tag = 0
```
|
Why is Apache executing .php.html files as PHP?
|
[
"",
"php",
"html",
"apache",
""
] |
I have never seen a way to do this nicely; I would be interested in seeing how others do it. Currently I format it like this:
```
public Booking createVehicleBooking(Long officeId,
                                    Long start,
                                    Long end,
                                    String origin,
                                    String destination,
                                    String purpose,
                                    String requirements,
                                    Integer numberOfPassengers) throws ServiceException {
    /*..Code..*/
}
```
|
A large set of parameters like this is often (but not always) an indicator that you could be using an object to represent the parameter set. This is especially true if either:
* There are several methods with similar large parameter sets, that can be replaced with a single method taking a parameter object.
* The method is called `create...`
So your above code could become (pardon my C++, I'm a Java developer):
```
class BuildVehicleBooking {
    Long officeId;
    Long start;
    Long end;
    String origin;
    String destination;
    String purpose;
    String requirements;
    Integer numberOfPassengers;

    Booking createVehicleBooking() throws ServiceException { ... }
}
```
This is the **Builder Pattern**. The advantage of this pattern is that you can build up a complex set of parameters in pieces, including multiple variations on how the parameters relate to each other, and even overwriting parameters as new information becomes available, before finally calling the `create` method at the end.
Another potential advantage is that you could add a `verifyParameters` method that checks the parameters' consistency before you go as far as `creating` the final object. This is applicable in cases where creating the object involves non-reversible steps, such as writing to a file or database.
Note that, as with all patterns, this doesn't apply in every case and may not apply in yours. If your code is simple enough then this pattern may be over-engineering it. If the code is getting messy, refactoring into this pattern can be a good way to simplify it.
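For comparison, a fluent version of the same idea might look like this (all names are hypothetical, and the create method returns a plain string stand-in for the Booking so the sketch stays self-contained):

```java
// Hypothetical sketch of a fluent builder; not the poster's actual API.
class VehicleBookingBuilder {
    private Long officeId;
    private String origin;
    private String destination;
    private int numberOfPassengers = 1;

    VehicleBookingBuilder officeId(Long officeId) {
        this.officeId = officeId;
        return this;
    }

    VehicleBookingBuilder route(String origin, String destination) {
        this.origin = origin;
        this.destination = destination;
        return this;
    }

    VehicleBookingBuilder passengers(int numberOfPassengers) {
        this.numberOfPassengers = numberOfPassengers;
        return this;
    }

    // A verifyParameters-style check runs before the non-reversible create step.
    String createVehicleBooking() {
        if (officeId == null || origin == null || destination == null) {
            throw new IllegalStateException("missing required booking parameters");
        }
        return "Booking[office=" + officeId + ", " + origin + "->" + destination
                + ", passengers=" + numberOfPassengers + "]";
    }
}
```

Callers can then accumulate parameters in any order, overwriting them as new information arrives, before the final `createVehicleBooking()` call.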
|
```
public Booking createVehicleBooking(
        Long officeId,
        Long start,
        Long end,
        String origin,
        String destination,
        String purpose,
        String requirements,
        Integer numberOfPassengers)
        throws ServiceException {
    /*..Code..*/
}
```
|
How to format methods with large parameter lists
|
[
"",
"java",
"formatting",
"methods",
""
] |
I'm writing a [Windows Forms](http://en.wikipedia.org/wiki/Windows_Forms) application which is supposed to play three sound files and, at the end of each sound file, change the source of an image.
I can get it to play the sounds using `System.Media.SoundPlayer`. However, it seems to play the sound in a different thread, continuing on.
The net effect of this is that only the last sound is played and all the images are changed.
I've tried `Thread.Sleep`, but it sleeps the whole GUI; after the sleep period everything happens at once and the last sound is played.
**UPDATE**
I thought PlaySync was working, but it seems to freeze my GUI, which is less than ideal. What else can I do?
|
Did you try [SoundPlayer.PlaySync Method](http://msdn.microsoft.com/en-us/library/system.media.soundplayer.playsync.aspx)? From the help:
> The PlaySync method uses the current
> thread to play a .wav file, preventing
> the thread from handling other
> messages until the load is complete.
|
Instead of using the Play method, use the [PlaySync](http://msdn.microsoft.com/en-us/library/system.media.soundplayer.playsync.aspx) method.
|
Play a sound, wait for it to finish and then do something?
|
[
"",
"c#",
"multithreading",
"audio",
"synchronization",
""
] |
I have a factory class that populates objects with data. I want to implement saving from the object, but do not want to populate the object with db stuff -- is it stupid to have my factory that creates the class also save the data?
i.e., in my `.Save()` method on the object I would call `Factory.Save(myObject);`
|
If you are concerned about database stuff in classes -- have you considered using an O/R mapper?
This would keep the database stuff entirely out of your code and your domain objects clean.
Maybe take a look at [NHibernate](http://www.hibernate.org/343.html) or [Active Record](http://www.castleproject.org/activerecord/index.html).
|
The factory class is a creational pattern that helps with creating new objects.
There are various patterns which deal with persisting objects, one of which is data mapper
<http://martinfowler.com/eaaCatalog/dataMapper.html>
This is often used in conjunction with Repository:
<http://martinfowler.com/eaaCatalog/repository.html>
You can use these patterns to abstract the database away from your domain/business objects, and access them from within your application to query and save objects.
So the data mapper/repository is responsible for both aspects of persistence (populating from database, and saving back to database).
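A minimal sketch of that split, with an in-memory stand-in for the database side (sketched in Java here, but the shape is the same in C#; all names are hypothetical, and a real mapper would issue SQL instead of using a map):

```java
import java.util.HashMap;
import java.util.Map;

// The domain object stays free of any persistence code.
class Customer {
    private final long id;
    private final String name;

    Customer(long id, String name) {
        this.id = id;
        this.name = name;
    }

    long getId() { return id; }
    String getName() { return name; }
}

// The repository interface is what the rest of the application depends on.
interface CustomerRepository {
    void save(Customer customer);
    Customer findById(long id);
}

// In-memory stand-in; a real data mapper would read and write database rows here.
class InMemoryCustomerRepository implements CustomerRepository {
    private final Map<Long, Customer> store = new HashMap<Long, Customer>();

    public void save(Customer customer) {
        store.put(customer.getId(), customer);
    }

    public Customer findById(long id) {
        return store.get(id);
    }
}
```

Because callers only see the interface, the in-memory version can be swapped for a database-backed one without touching the domain objects.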
|
Factory Class - Save Objects
|
[
"",
"c#",
"factory",
""
] |
The solution we developed uses a database (SQL Server 2005) for persistence purposes, and thus all updated data is saved to the database instead of being sent to the program.
I have a front-end (desktop) that currently keeps polling the database for updates that may happen at any time to some critical data, and I am not really a fan of database polling and wasting CPU cycles on work that is being redone uselessly.
Our manager doesn't seem to mind us polling the database. The amount of data is small (less than 100 records) and the interval is long (1 min), but I am a coder, and I do mind. Is there a better way to accomplish the task of keeping the data in memory as synced as possible with the data in the database? The system is developed using C# 3.5.
|
Since you're on SQL 2005, you can use a [SqlDependency](http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqldependency.aspx) to be notified of changes. Note that you can use it pretty effortlessly with [System.Web.Caching.Cache](http://msdn.microsoft.com/en-us/library/system.web.caching.cache.aspx), which, despite its namespace, runs [just fine in a WinForms app](http://weblogs.asp.net/rosherove/archive/2003/08/12/23813.aspx).
|
First thought off the top of my head is a trigger combined with a message queue.
|
Keeping in sync with database
|
[
"",
"c#",
"sql-server",
".net-3.5",
""
] |
I have a Serializable object which is supposed to hold a java.awt.Image as its member. How should I go about serializing it?
*(Edited from a not so clear first version, sorry.)*
|
ImageIcon implements Serializable and can be used to wrap an Image:
<http://java.sun.com/j2se/1.5.0/docs/api/javax/swing/ImageIcon.html>
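A sketch of the round trip (the helper class and method names are made up; checked exceptions are wrapped to keep the sketch simple):

```java
import java.awt.Image;
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import javax.swing.ImageIcon;

// Made-up helper names; the point is wrapping the Image in a Serializable ImageIcon.
class ImageSerializer {
    static byte[] toBytes(Image image) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            ObjectOutputStream out = new ObjectOutputStream(bytes);
            out.writeObject(new ImageIcon(image)); // ImageIcon implements Serializable
            out.close();
            return bytes.toByteArray();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    static Image fromBytes(byte[] data) {
        try {
            ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(data));
            ImageIcon icon = (ImageIcon) in.readObject();
            in.close();
            return icon.getImage(); // unwrap the Image again
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }
}
```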
|
javax.swing.ImageIcon, as a part of Swing, does not guarantee to have compatible serialised form between versions. However, you can cheat and look at its readObject and writeObject code - find width and height, grab the pixels with PixelGrabber. I'm not entirely sure that covers the colour model correctly. The obvious alternative is to write a real image format with javax.imageio.
|
How to best serialize a java.awt.Image?
|
[
"",
"java",
"image",
"serialization",
""
] |
We're running a web app on Tomcat 6 and Apache mod\_proxy 2.2.3, and we're seeing a lot of 502 errors like this:
> Bad Gateway!
> The proxy server received an invalid response from an upstream server.
>
> The proxy server could not handle the request GET /the/page.do.
>
> Reason: Error reading from remote server
>
> If you think this is a server error, please contact the webmaster.
>
> Error 502
Tomcat has plenty of threads, so it's not thread-constrained. We're pushing 2400 users via JMeter against the app. All the boxes are sitting inside our firewall on a fast unloaded network, so there shouldn't be any network problems.
Anyone have any suggestions for things to look at or try? We're heading to tcpdump next.
UPDATE 10/21/08: Still haven't figured this out. Seeing only a very small number of these under load. The answers below haven't provided any magical answers...yet. :)
|
So, answering my own question here. We ultimately determined that we were seeing 502 and 503 errors in the load balancer due to Tomcat threads timing out. In the short term we increased the timeout. In the longer term, we fixed the app problems that were causing the timeouts in the first place. Why Tomcat timeouts were being perceived as 502 and 503 errors at the load balancer is still a bit of a mystery.
|
Just to add some specific settings, I had a similar setup (with Apache 2.0.63 reverse proxying onto Tomcat 5.0.27).
For certain URLs the Tomcat server could take perhaps 20 minutes to return a page.
I ended up modifying the following settings in the Apache configuration file to prevent it from timing out with its proxy operation (with a large over-spill factor in case Tomcat took longer to return a page):
```
Timeout 5400
ProxyTimeout 5400
```
---
## Some background
[ProxyTimeout](http://httpd.apache.org/docs/2.0/mod/mod_proxy.html#proxytimeout) alone wasn't enough. Looking at the documentation for [Timeout](http://httpd.apache.org/docs/2.0/mod/core.html#timeout) I'm *guessing* (I'm not sure) that this is because while Apache is waiting for a response from Tomcat, there is no traffic flowing between Apache and the Browser (or whatever http client) - and so Apache closes down the connection to the browser.
I found that if I left the Timeout setting at its default (300 seconds), then if the proxied request to Tomcat took longer than 300 seconds to get a response the browser would display a "502 Proxy Error" page. I believe this message is generated by Apache, in the knowledge that it's acting as a reverse proxy, before it closes down the connection to the browser (this is my current understanding - it may be flawed).
The proxy error page says:
> Proxy Error
>
> The proxy server received an invalid
> response from an upstream server. The
> proxy server could not handle the
> request GET.
>
> Reason: Error reading from remote server
...which suggests that it's the ProxyTimeout setting that's too short, while investigation shows that Apache's Timeout setting (the timeout between Apache and the client) also influences this.
|
Bad Gateway 502 error with Apache mod_proxy and Tomcat
|
[
"",
"java",
"apache",
"tomcat",
"mod-proxy",
""
] |
In JPA the entities are nicely annotated Plain Old Java Objects, but I have not found a good way to interact with them and the database.
In my current app, my basic design is always to have a sequence-based id as the primary key, so I usually have to look up entities by properties other than the PK.
And for each entity I have a stateless EJB of
```
@Stateless
public class MyEntApiBean implements MyEntApi {
@PersistenceContext(unitName = "xxx") @Inject EntityManager entityManager;
```
with query methods that all are some variation
```
/**
 * @return A List of all MyEnts that have some property
 * @param someProp some property
 */
public List<MyEnt> getAllMyEntsFromProp(final String someProp) {
    try {
        final Query query = entityManager.createQuery(
                "select me from MyEnt me where me.someProp = :someProp");
        query.setParameter("someProp", someProp);
        return query.getResultList();
    } catch (final NoResultException nre) {
        log.warn("No MyEnts found");
    }
    return new ArrayList<MyEnt>();
}
```
So:
1. I really hate having these methods in an EJB because they seem to belong with the entities themselves, and the EJB local interfaces annoy the crap out of me.
2. I hate the duplication that I have in each method with "try, createQuery, getResultList, catch, log, return" (mostly a consequence of there being no closures or "with statement" or somesuch in Java).
Does anyone have a suggestion for a better way to interact with the Entities and Database that addresses one or both of my issues?
I am currently thinking of doing some base methods with generics and reflection to get some generic query methods to reduce the duplication (issue 2) (I will put a prototype up for review later).
Thanks,
Anders
|
Try Seam. The [Query Objects](http://docs.jboss.com/seam/2.1.0.BETA1/reference/en-US/html_single/#d0e7527) do most of the work for you, and they're easily extendable. Or, you could always implement a similar pattern.
In general, Seam does a lot of useful stuff to bridge the gap between JPA and your view and business layers. You don't have to use JSF for Seam to be useful.
|
You're being unnecessarily verbose. For one thing, getResultList() doesn't throw an exception when no rows are returned (at least not in EclipseLink or TopLink -- I can't imagine another provider being any different). getSingleResult() does; getResultList() doesn't. Also, you can use the builder pattern, so:
```
@SuppressWarnings("unchecked")
public List<MyEnt> getAllMyEntsFromProp(final String someProp) {
    return entityManager.createQuery("select me from MyEnt me where me.someProp = :someProp")
                        .setParameter("someProp", someProp)
                        .getResultList();
}
```
should be sufficient to return a List of results if there are any or an empty List if there are none. Two things to note:
1. @SuppressWarnings("unchecked") is unnecessary but it gets rid of an otherwise unavoidable warning when casting the non-generic List result from getResultList() to a generic List; and
2. It's probably worth replacing the createQuery() call with a @NamedQuery on MyEnt (typically). For one thing, this will enable deploy-time validation and other useful things.
It's reasonably concise and complete.
|
What is a good design for a query "layer" for Java JPA
|
[
"",
"java",
"hibernate",
"jpa",
"ejb",
""
] |
I have a `datetime` column in MySQL.
How can I convert it to the display as **mm/dd/yy H:M (AM/PM)** using PHP?
|
If you're looking for a way to normalize a date into MySQL format, use the following
```
$phpdate = strtotime( $mysqldate );
$mysqldate = date( 'Y-m-d H:i:s', $phpdate );
```
The line `$phpdate = strtotime( $mysqldate )` accepts a string and performs a series of heuristics to turn that string into a unix timestamp.
The line `$mysqldate = date( 'Y-m-d H:i:s', $phpdate )` uses that timestamp and PHP's [`date`](http://php.net) function to turn that timestamp back into MySQL's standard date format.
(**Editor's note**: This answer is here because of the original question's confusing wording, and because of the general Google usefulness this answer provided even if it didn't directly answer the question that now exists.)
|
To convert a date retrieved from MySQL into the format requested (`mm/dd/yy H:M (AM/PM)`):
```
// $datetimeFromMysql is something like: 2014-01-31 13:05:59
$time = strtotime($datetimeFromMysql);
$myFormatForView = date("m/d/y g:i A", $time);
// $myFormatForView is something like: 01/31/14 1:05 PM
```
Refer to the [PHP date formatting options](http://php.net/manual/en/function.date.php#refsect1-function.date-parameters) to adjust the format.
|
Convert from MySQL datetime to another format with PHP
|
[
"",
"php",
"mysql",
"datetime",
""
] |
Right now I have a log parser reading through 515mb of plain-text files (a file for each day over the past 4 years). My code currently stands as this: <http://gist.github.com/12978>. I've used psyco (as seen in the code) and I'm also compiling it and using the compiled version. It's doing about 100 lines every 0.3 seconds. The machine is a standard 15" MacBook Pro (2.4ghz C2D, 2GB RAM)
Is it possible for this to go faster or is that a limitation on the language/database?
|
Don't waste time profiling. The time is always in the database operations. Do as few as possible. Just the minimum number of inserts.
Three Things.
One. Don't SELECT over and over again to conform the Date, Hostname and Person dimensions. Fetch all the data ONCE into a Python dictionary and use it in memory. Don't do repeated singleton selects. Use Python.
Two. Don't Update.
Specifically, do not do this. It's bad code for two reasons.
```
cursor.execute("UPDATE people SET chats_count = chats_count + 1 WHERE id = '%s'" % person_id)
```
It can be replaced with a simple SELECT COUNT(\*) FROM ... . Never update to increment a count. Just count the rows that are there with a SELECT statement. [If you can't do this with a simple SELECT COUNT or SELECT COUNT(DISTINCT), you're missing some data -- your data model should always provide correct, complete counts. Never update.]
And. Never build SQL using string substitution. Completely dumb.
If, for some reason the `SELECT COUNT(*)` isn't fast enough (benchmark first, before doing anything lame) you can cache the result of the count in another table. AFTER all of the loads. Do a `SELECT COUNT(*) FROM whatever GROUP BY whatever` and insert this into a table of counts. Don't Update. Ever.
Three. Use Bind Variables. Always.
```
cursor.execute( "INSERT INTO ... VALUES( %(x)s, %(y)s, %(z)s )", {'x':person_id, 'y':time_to_string(time), 'z':channel,} )
```
The SQL never changes. The values bound in change, but the SQL never changes. This is MUCH faster. Never build SQL statements dynamically. Never.
|
In the for loop, you're inserting into the 'chats' table repeatedly, so you only need a single sql statement with bind variables, to be executed with different values. So you could put this before the for loop:
```
insert_statement="""
INSERT INTO chats(person_id, message_type, created_at, channel)
VALUES(:person_id,:message_type,:created_at,:channel)
"""
```
Then in place of each sql statement you execute put this in place:
```
cursor.execute(insert_statement, person_id='person',message_type='msg',created_at=some_date, channel=3)
```
This will make things run faster because:
1. The cursor object won't have to reparse the statement each time.
2. The db server won't have to generate a new execution plan, as it can use the one it created previously.
3. You won't have to call sanitize(), as special characters in the bind variables won't be part of the sql statement that gets executed.
Note: The bind variable syntax I used is Oracle specific. You'll have to check the psycopg2 library's documentation for the exact syntax.
Other optimizations:
1. You're incrementing with the "UPDATE people SET chats\_count" after each loop iteration. Keep a dictionary mapping user to chat\_count and then execute a single statement with the total number you've seen. This will be faster than hitting the db after every record.
2. Use bind variables on ALL your queries. Not just the insert statement, I choose that as an example.
3. Change all the find\_\*() functions that do db lookups to cache their results so they don't have to hit the db every time.
4. Psyco optimizes Python programs that perform a large number of numeric operations. The script is IO-expensive and not CPU-expensive, so I wouldn't expect it to give you much, if any, optimization.
|
How do you make Python / PostgreSQL faster?
|
[
"",
"python",
"postgresql",
""
] |
I have a custom class loader so that a desktop application can dynamically start loading classes from an AppServer I need to talk to. We did this since the number of jars required to do this is ridiculous (if we wanted to ship them). We also have version problems if we don't load the classes dynamically at run time from the AppServer library.
Now, I just hit a problem where I need to talk to two different AppServers and found that depending on whose classes I load first I might break badly... Is there any way to force the unloading of the class without actually killing the JVM?
Hope this makes sense
|
The only way that a Class can be unloaded is if the Classloader used is garbage collected. This means, references to every single class and to the classloader itself need to go the way of the dodo.
One possible solution to your problem is to have a Classloader for every jar file, and a Classloader for each of the AppServers that delegates the actual loading of classes to specific Jar classloaders. That way, you can point to different versions of the jar file for every App server.
This is not trivial, though. The OSGi platform strives to do just this, as each bundle has a different classloader and dependencies are resolved by the platform. Maybe a good solution would be to take a look at it.
If you don't want to use OSGi, one possible implementation could be to use one instance of the [JarClassloader](https://docs.oracle.com/javase/tutorial/deployment/jar/jarclassloader.html) class for every JAR file.
And create a new MultiClassloader class that extends Classloader. This class would internally hold an array (or List) of JarClassloaders, and in the findClass() method would iterate through all the internal classloaders until a definition can be found, or a ClassNotFoundException is thrown. A couple of accessor methods can be provided to add new JarClassloaders to the class. There are several possible implementations of a MultiClassLoader on the net, so you might not even need to write your own.
If you instantiate a MultiClassloader for every connection to the server, it is in principle possible for every server to use a different version of the same class.
I've used the MultiClassloader idea in a project, where classes that contained user-defined scripts had to be loaded and unloaded from memory and it worked quite well.
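A bare-bones sketch of the MultiClassloader idea (simplified: it delegates to a list of arbitrary parent loaders rather than real JarClassloaders, and the jar-file handling is omitted):

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch: real code would hold JarClassloader instances, one per jar.
class MultiClassLoader extends ClassLoader {
    private final List<ClassLoader> delegates = new ArrayList<ClassLoader>();

    MultiClassLoader() {
        super(null); // only the bootstrap loader above us, so delegates get a chance
    }

    void addDelegate(ClassLoader loader) {
        delegates.add(loader);
    }

    @Override
    protected Class<?> findClass(String name) throws ClassNotFoundException {
        for (ClassLoader loader : delegates) {
            try {
                return loader.loadClass(name); // first delegate that knows the class wins
            } catch (ClassNotFoundException ignored) {
                // fall through and try the next delegate
            }
        }
        throw new ClassNotFoundException(name);
    }

    // Convenience for callers that treat an unknown class as null.
    Class<?> tryLoad(String name) {
        try {
            return loadClass(name);
        } catch (ClassNotFoundException e) {
            return null;
        }
    }
}
```

Instantiating one such loader per AppServer connection, each pointing at that server's jars, is what lets two versions of the same class coexist; dropping all references to the loader later is what allows it to be garbage collected, unloading its classes.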
|
Yes, there are ways to load classes and to "unload" them later on. The trick is to implement your own classloader which sits between the high-level class loader (the system class loader) and the class loaders of the app server(s), and to hope that the app server's class loaders delegate classloading to the upper loaders.
A class is defined by its package, its name, and the class loader that originally loaded it. Program a "proxy" classloader which is the first that is loaded when starting the JVM. Workflow:
* The program starts and the real "main"-class is loaded by this proxy classloader.
* Every class that then is normally loaded (i.e. not through another classloader implementation which could break the hierarchy) will be delegated to this class loader.
* The proxy classloader delegates `java.x` and `sun.x` to the system classloader (these *must not* be loaded through any other classloader than the system classloader).
* For every class that is replaceable, instantiate a classloader (which really loads the class and does not delegate it to the parent classloader) and load it through this.
* Store the package/name of the classes as keys and the classloader as values in a data structure (i.e. Hashmap).
* Every time the proxy classloader gets a request for a class that was loaded before, it returns the class from the class loader stored before.
* It should be enough to locate the byte array of a class by your class loader (or to "delete" the key/value pair from your data structure) and reload the class in case you want to change it.
Done right there should not come a [ClassCastException](https://docs.oracle.com/javase/7/docs/api/java/lang/ClassCastException.html) or [LinkageError](https://docs.oracle.com/javase/7/docs/api/java/lang/LinkageError.html) etc.
For more informations about class loader hierarchies (yes, that's exactly what you are implementing here ;- ) look at ["Server-Based Java Programming" by Ted Neward](http://www.manning.com/neward3/) - that book helped me implementing something very similar to what you want.
|
Unloading classes in java?
|
[
"",
"java",
"classloader",
""
] |
I have a form with many input fields.
When I catch the submit form event with jQuery, is it possible to get all the input fields of that form in an associative array?
|
```
$('#myForm').submit(function() {
    // get all the inputs into an array.
    var $inputs = $('#myForm :input');

    // not sure if you wanted this, but I thought I'd add it.
    // get an associative array of just the values.
    var values = {};
    $inputs.each(function() {
        values[this.name] = $(this).val();
    });
});
```
---
Thanks to the tip from Simon\_Weaver, here is another way you could do it, using [`serializeArray`](http://api.jquery.com/serializeArray/):
```
var values = {};
$.each($('#myForm').serializeArray(), function(i, field) {
values[field.name] = field.value;
});
```
Note that this snippet will fail on `<select multiple>` elements.
It appears that the [new HTML 5 form inputs](http://diveintohtml5.ep.io/forms.html) don't work with `serializeArray` in jQuery version 1.3. This works in version 1.4+
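As a follow-up to the `<select multiple>` caveat: repeated names can be folded into arrays before they overwrite each other. A small hypothetical helper (the name `pairsToObject` is made up) that takes the array `serializeArray()` returns:

```javascript
// Fold serializeArray()-style {name, value} pairs into an object,
// collecting repeated names (e.g. <select multiple>) into arrays.
function pairsToObject(pairs) {
    var values = {};
    pairs.forEach(function (field) {
        if (Object.prototype.hasOwnProperty.call(values, field.name)) {
            if (!Array.isArray(values[field.name])) {
                values[field.name] = [values[field.name]];
            }
            values[field.name].push(field.value);
        } else {
            values[field.name] = field.value;
        }
    });
    return values;
}

// e.g. var values = pairsToObject($('#myForm').serializeArray());
```

Single-valued fields stay plain strings; only names that actually repeat become arrays.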
|
Late to the party on this question, but this is even easier:
```
$('#myForm').submit(function() {
// Get all the forms elements and their values in one step
var values = $(this).serialize();
});
```
|
Obtain form input fields using jQuery?
|
[
"",
"javascript",
"jquery",
""
] |
I love the way I can profile a Java/.Net app to find performance bottlenecks or memory problems. For example, it's very easy to find a performance bottleneck looking at [the call tree with execution times and invocation counts per method](http://www.yourkit.com/dotnet/features/cpu-tree-threads-together.gif). In SQL Server, I have stored procedures that call other stored procedures that depend on views, which is similar to Java/.Net methods calling other methods. So it seems the same kind of profiler would be very helpful here. However, I looked far and wide and could not find one. Is anyone aware of such tools, either for SQL Server or any other DBMS?
Update: Thanks for your replies about SQL Server Profiler, but this tool is very limited. Take a look at [the screenshot](http://www.yourkit.com/dotnet/features/cpu-tree-threads-together.gif).
|
Check out [SQL Nexus Tool](http://www.codeplex.com/sqlnexus/Wiki/View.aspx?title=SqlNexusReports&referringTitle=Home). This has some good reports on identifying bottlenecks.
SQL Nexus is a tool that helps you identify the root cause of SQL Server performance issues. It loads and analyzes performance data collected by SQLDiag and PSSDiag. It can dramatically reduce the amount of time you spend manually analyzing data.
In one of the Inside SQL 2005 books (maybe T-SQL Querying), there was a cool technique in which the author dumps the SQL profiler output to a table or excel file and applies a pivot to get the output in a similar format as your screenshot.
I have not seen any built-in SQL tools which gives you that kind of analysis.
Another useful [post](http://blogs.msdn.com/khen1234/archive/2007/12/12/a-sql-profiler-trace-swiss-army-knife.aspx).
|
In addition to SQL Server Profiler, as mentioned in a comment from @Galwegian, also check out your execution plan when you run a query.
<http://www.sql-server-performance.com/tips/query_execution_plan_analysis_p1.aspx>
<http://en.wikipedia.org/wiki/Query_plan>
|
Is there a SQL Server profiler similar to Java/.Net profilers?
|
[
"",
"java",
"sql-server",
"stored-procedures",
"profiling",
""
] |
What is the [MIME](http://en.wikipedia.org/wiki/MIME) type of javascript?
More specifically, what is the right thing to put in the "type" attribute of a script tag? `application/x-javascript` and `text/javascript` seem to be the main contenders.
|
This is a common mistake. The MIME type for javascript wasn't standardized for years. It's now [officially](http://www.rfc-editor.org/rfc/rfc4329.txt): "**application/javascript**".
The real kicker here is that most browsers won't use that attribute anyway, at least not in the case of the script tag. They actually peek inside the packet and determine the type for themselves.
So the bottom line is that the type="text/javascript" doesn't do anything as far as the javascript is concerned, but it's part of the spec for both **HTML 4** and **XHTML 1.0**.
|
Far out, this is the first page I've found on the topic that makes any sense of it.
My collective research suggests:
1. `text/javascript`, as Keparo stated, must be used in HTML 4 and XHTML 1 if you want it to validate. Though browsers ignore it.
2. `application/javascript` is expected to be the new official MIME type, if everyone agrees and when everything catches up.
3. `application/x-javascript` (x meaning unofficial) is the current server-side MIME reference for javascript.
4. Everyone expects that, as per usual, Microsoft will decide to do something completely different to further confuse and stuff up the matter.
Summary: For now, if you want your HTML/XHTML to work in MSIE and validate with W3C, declare `type="text/javascript"`. If you want your web server to know that you mean javascript, use `application/x-javascript`.
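On the server side that usually amounts to a one-line MIME mapping; for example, a hypothetical Apache configuration line using mod_mime's `AddType` directive:

```apache
# Serve .js files with the unofficial-but-widely-used MIME type
AddType application/x-javascript .js
```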
|
What is the javascript MIME type for the type attribute of a script tag?
|
[
"",
"javascript",
"mime-types",
""
] |
Instead of having to remember to initialize a simple 'C' structure, I might derive from it and zero it in the constructor like this:
```
struct MY_STRUCT
{
int n1;
int n2;
};
class CMyStruct : public MY_STRUCT
{
public:
CMyStruct()
{
memset(this, 0, sizeof(MY_STRUCT));
}
};
```
This trick is often used to initialize Win32 structures and can sometimes set the ubiquitous **cbSize** member.
Now, as long as there isn't a virtual function table for the memset call to destroy, is this a safe practice?
|
## PREAMBLE:
While my answer is still Ok, I find [litb's answer](https://stackoverflow.com/questions/112085/is-this-c-structure-initialization-trick-safe/1315024#1315024) quite superior to mine because:
1. It teaches me a trick that I did not know (litb's answers usually have this effect, but this is the first time I write it down)
2. It answers exactly the question (that is, initializing the original struct's part to zero)
So please, consider litb's answer before mine. In fact, I suggest the question's author to consider litb's answer as the right one.
## Original answer
Putting a true object (e.g. a std::string) inside will break, because the true object will be initialized before the memset and then overwritten by zeroes.
Using the initialization list doesn't work for g++ (I'm surprised...). Initialize it instead in the CMyStruct constructor body. It will be C++ friendly:
```
class CMyStruct : public MY_STRUCT
{
public:
CMyStruct() { n1 = 0 ; n2 = 0 ; }
};
```
P.S.: I assumed you did have **no** control over MY\_STRUCT, of course. With control, you would have added the constructor directly inside MY\_STRUCT and forgotten about inheritance. Note that you can add non-virtual methods to a C-like struct, and still have it behave as a struct.
EDIT: Added missing parenthesis, after Lou Franco's comment. Thanks!
EDIT 2 : I tried the code on g++, and for some reason, using the initialization list does not work. I corrected the code using the body constructor. The solution is still valid, though.
Please reevaluate my post, as the original code was changed (see changelog for more info).
EDIT 3 : After reading Rob's comment, I guess he has a point worthy of discussion: "Agreed, but this could be an enormous Win32 structure which may change with a new SDK, so a memset is future proof."
I disagree: Knowing Microsoft, it won't change, because of their need for perfect backward compatibility. They will instead create an extended MY\_STRUCT**Ex** struct with the same initial layout as MY\_STRUCT, with additional members at the end, recognizable through a "size" member variable, like the struct used for RegisterWindow, IIRC.
So the only valid point remaining from Rob's comment is the "enormous" struct. In this case, perhaps a memset is more convenient, but you will have to make MY\_STRUCT a variable member of CMyStruct instead of inheriting from it.
I see another hack, but I guess this would break because of possible struct alignment problem.
EDIT 4: Please take a look at Frank Krueger's solution. I can't promise it's portable (I guess it is), but it is still interesting from a technical viewpoint because it shows one case where, in C++, the "this" pointer "address" moves from its base class to its inherited class.
|
You can simply value-initialize the base, and all its members will be zero'ed out. This is guaranteed
```
struct MY_STRUCT
{
int n1;
int n2;
};
class CMyStruct : public MY_STRUCT
{
public:
CMyStruct():MY_STRUCT() { }
};
```
For this to work, there should be no user declared constructor in the base class, like in your example.
No nasty `memset` for that. It's not guaranteed that `memset` works in your code, even though it should work in practice.
|
Is this C++ structure initialization trick safe?
|
[
"",
"c++",
"data-structures",
""
] |
I have a Tapestry application that is serving its page as UTF-8. That is, server responses have header:
```
Content-type: text/html;charset=UTF-8
```
Now within this application there is a single page that should be served with ISO-8859-1 encoding. That is, server response should have this header:
```
Content-type: text/html;charset=ISO-8859-1
```
How to do this? I don't want to change default encoding for whole application.
Based on google searching I have tried following:
```
@Meta({ "org.apache.tapestry.output-encoding=ISO-8859-1",
"org.apache.tapestry.response-encoding=ISO-8859-1",
"org.apache.tapestry.template-encoding=ISO-8859-1",
"tapestry.response-encoding=ISO-8859-1"})
abstract class MyPage extends BasePage {
@Override
protected String getOutputEncoding() {
return "ISO-8859-1";
}
}
```
But neither setting those values with the @Meta annotation nor overriding the getOutputEncoding method works.
I am using Tapestry 4.0.2.
EDIT: I ended up doing this with a servlet Filter and a subclassed HttpServletResponseWrapper. The wrapper overrides setContentType() to force the required encoding for the response.
|
Have you considered a Filter? Maybe not as elegant as something within Tapestry, but using a plain Filter, that registers the url mapping(s) of interest. One of its init parameters would be the encoding your after. Example:
```
import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

public class EncodingFilter implements Filter {
    private String encoding;
    private FilterConfig filterConfig;

    /**
     * @see javax.servlet.Filter#init(javax.servlet.FilterConfig)
     */
    public void init(FilterConfig fc) throws ServletException {
        this.filterConfig = fc;
        this.encoding = filterConfig.getInitParameter("encoding");
    }

    /**
     * @see javax.servlet.Filter#doFilter(javax.servlet.ServletRequest, javax.servlet.ServletResponse, javax.servlet.FilterChain)
     */
    public void doFilter(ServletRequest req, ServletResponse resp,
            FilterChain chain) throws IOException, ServletException {
        req.setCharacterEncoding(encoding);
        chain.doFilter(req, resp);
    }

    /**
     * @see javax.servlet.Filter#destroy()
     */
    public void destroy() {
    }
}
```
|
You could have done:
```
@Override
public ContentType getResponseContentType() {
return new ContentType("text/html;charset=" + someCharEncoding);
}
```
|
Setting ISO-8859-1 encoding for a single Tapestry 4 page in application that is otherwise totally UTF-8
|
[
"",
"java",
"encoding",
"utf-8",
"tapestry",
"iso-8859-1",
""
] |
I'm having a problem using the java.text.MessageFormat object.
I'm trying to create SQL insert statements. The problem is, when I do something like this:
```
MessageFormat messageFormat = new MessageFormat("insert into {0} values ( '{1}', '{2}', '{3}', {4} )");
Object[] args = { str0, str1, str2, str3, str4 };
String result = messageFormat.format(args);
```
I get this for the value of `result`:
```
"insert into <str0> values ( {1}, {2}, {3}, <str4> )"
```
As you can see, the problem is that any of the target locations that are enclosed by single quotes do not get replaced by arguments. I have tried using double single quotes like this: `''{1}''` and escaped characters like this: `\'{1}\'` but it still gives the same result.
edit: I forgot to mention that I also tried `'''{1}'''`. The result is: `"insert into <str0> values ( '{1}', '{2}', '{3}', <str4> )"`. It is keeping the original quotes around but still not inserting the values.
How can I resolve this issue? For the record, I am using JDK 6u7.
|
I just tried double quotes and it worked fine for me:
```
MessageFormat messageFormat = new MessageFormat("insert into {0} values ( ''{1}'', ''{2}'', ''{3}'', {4} )");
Object[] args = {"000", "111", "222","333","444","555"};
String result = messageFormat.format(args);
```
The result is:
```
insert into 000 values ( '111', '222', '333', 444 )
```
Is this what you need?
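To spell out the quoting rule the snippet relies on: a doubled single quote produces one literal quote in the output, while an unpaired single quote opens a quoted section whose contents (including any `{n}` placeholders) are copied verbatim. A self-contained check:

```java
import java.text.MessageFormat;

public class QuoteDemo {
    public static void main(String[] args) {
        // '' -> one literal quote; {1} and {2} are still substituted
        String result = MessageFormat.format(
                "insert into {0} values ( ''{1}'', {2} )", "t", "abc", 42);
        System.out.println(result);
        // prints: insert into t values ( 'abc', 42 )
    }
}
```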
|
Sorry if this is off to the side, but it looks like you're trying to replicate the PreparedStatement that already exists in JDBC.
If you are trying to create SQL to run against a database then I suggest that you look at PreparedStatement, it already does what you're trying to do (with a slightly different syntax).
Sorry if this isn't what you are doing, I just thought I would point it out.
|
Java MessageFormat - How can I insert values between single quotes?
|
[
"",
"java",
""
] |
We need to optimize the text rendering for a C# [Windows Forms](http://en.wikipedia.org/wiki/Windows_Forms) application displaying a large number of small strings in an irregular grid. At any time there can be well over 5000 cells visible that update 4 times per second. The font family and size is consistent across the cells, though the color may vary from cell to cell, as will bold/italic/plain.
I've seen conflicting information on the web about `TextRenderer.DrawText` vs. `Graphics.DrawString` being the fastest/best, which reduces to a [GDI](http://en.wikipedia.org/wiki/Graphics_Device_Interface) vs. [GDI+](http://en.wikipedia.org/wiki/Graphics_Device_Interface#GDI.2B) comparison at the [Win32](http://en.wikipedia.org/wiki/Windows_API) level.
I've also seen radically different results on Windows XP vs. Windows Vista, but my main target is Windows XP. Articles promising great advances
under [WinFX](http://en.wikipedia.org/wiki/.NET_Framework_3.0#.NET_Framework_3.0) and [DirectX 10](http://en.wikipedia.org/wiki/DirectX#DirectX_10) aren't helpful here :-)
What's the best approach here? I'm not afraid of introducing a small C++/CLI layer and optimizing device context handling to squeeze out more performance, but I'd like some definitive advice about which direction to take.
EDIT:
Thanks for the initial responses. I'll be trying a combination of background bitmap rendering and sticking with the GDI equivalent calls.
|
A Microsoft developer has posted a [GDI vs. GDI+ Text Rendering Performance](http://blogs.msdn.com/cjacks/archive/2006/05/19/602021.aspx) article on his blog which answers the raw speed question: on his system, GDI DrawText was about 6 times faster than GDI+ DrawString.
If you need to be a real speed demon, TextOut is faster than DrawText, but you'll have to take care of clipping and word-wrapping yourself. ExtTextOut supports clipping.
GDI rendering (TextRenderer) will be more consistent with other parts of Windows using GDI; GDI+ tries to be device-independent and so [some spacing and emboldening are inconsistent](http://windowsclient.net/articles/gdiptext.aspx). See the SQL Server 2005 Surface Area Configuration tool for an example of inconsistent rendering.
|
Rendering 5000+ strings is slow even with GDI, especially if you need scrolling. Create a separate rendering thread, notify the UI thread every 200 ms, and BitBlt the current results. This gives a smooth user experience.
|
Fastest API for rendering text in Windows Forms?
|
[
"",
"c#",
"windows",
"performance",
""
] |
If I have an array of a fixed size depending on how it is defined and used, I typically use one of two ways to reference it.
Array type 1: Since it is a fixed size based on a define, I just use that define in all my loops referencing it.
```
#define MAXPLAYERS 4
int playerscores[MAXPLAYERS];
for(i=0;i<MAXPLAYERS;++i)
{
.... do something with each player
}
```
Array type 2: Since this array can grow as items are added to it, I use the sizeof to count the number of entries in it. The size would be converted to a constant by the compiler so there shouldn't be any runtime penalty to doing it this way.
```
typedef struct
{
fields....
}MYSTRUCT_DEF;
MYSTRUCT_DEF mystruct[]={
{entry 1},
{entry 2},
{entry 3...n}
};
for(i=0;i<(sizeof(mystruct)/sizeof(MYSTRUCT_DEF));++i)
{
..... do something with each entry
}
```
Is there a more elegant way to process arrays without running past the end or stopping too early? Thoughts? Comments?
|
This will work for both of your cases, regardless of array element type:
```
#define ARRAY_COUNT(x) (sizeof(x)/sizeof((x)[0]))
...
struct foo arr[100];
...
for (i = 0; i < ARRAY_COUNT(arr); ++i) {
/* do stuff to arr[i] */
}
```
|
In C++ just use the vector class.
If you can't for some reason then there are macro implementations of what you want.
See this answer for a set of macros from winnt.h that work in C and even more safely in C++:
[Can this macro be converted to a function?](https://stackoverflow.com/questions/95500/can-this-macro-be-converted-to-a-function#95714)
|
Best practices for handling variable size arrays in c / c++?
|
[
"",
"c++",
"c",
"arrays",
""
] |
In C# how do you detect is a specific drive is a Hard Drive, Network Drive, CDRom, or floppy?
|
DriveInfo.GetDrives() returns an array of DriveInfo objects, each of which has a DriveType property corresponding to the System.IO.DriveType enumeration:
```
public enum DriveType
{
Unknown, // The type of drive is unknown.
NoRootDirectory, // The drive does not have a root directory.
Removable, // The drive is a removable storage device,
// such as a floppy disk drive or a USB flash drive.
Fixed, // The drive is a fixed disk.
Network, // The drive is a network drive.
CDRom, // The drive is an optical disc device, such as a CD
// or DVD-ROM.
Ram // The drive is a RAM disk.
}
```
Here is a slightly adjusted example from MSDN that displays information for all drives:
```
DriveInfo[] allDrives = DriveInfo.GetDrives();
foreach (DriveInfo d in allDrives)
{
Console.WriteLine("Drive {0}, Type {1}", d.Name, d.DriveType);
}
```
|
[DriveInfo.DriveType](http://msdn.microsoft.com/en-us/library/system.io.driveinfo.drivetype.aspx) should work for you.
```
DriveInfo[] allDrives = DriveInfo.GetDrives();
foreach (DriveInfo d in allDrives)
{
Console.WriteLine("Drive {0}", d.Name);
Console.WriteLine(" File type: {0}", d.DriveType);
}
```
|
How to detect if any specific drive is a hard drive?
|
[
"",
"c#",
"hardware",
"hard-drive",
""
] |
In C# we can define a generic type that imposes constraints on the types that can be used as the generic parameter. The following example illustrates the usage of generic constraints:
```
interface IFoo
{
}
class Foo<T> where T : IFoo
{
}
class Bar : IFoo
{
}
class Simpson
{
}
class Program
{
static void Main(string[] args)
{
Foo<Bar> a = new Foo<Bar>();
Foo<Simpson> b = new Foo<Simpson>(); // error CS0309
}
}
```
Is there a way we can impose constraints on template parameters in C++?
---
C++0x has native support for this, but I am talking about the current standard C++.
|
As someone else has mentioned, C++0x is getting this built into the language. Until then, I'd recommend [Bjarne Stroustrup](http://en.wikipedia.org/wiki/Bjarne_Stroustrup)'s [suggestions for template constraints](http://www.stroustrup.com/bs_faq2.html#constraints).
Edit: [Boost](http://boost.org) also has an [alternative of its own](http://www.boost.org/doc/libs/1_36_0/libs/concept_check/concept_check.htm).
Edit2: Looks like [concepts have been removed from C++0x](http://www.informit.com/guides/content.aspx?g=cplusplus&seqNum=441).
|
If you use C++11, you can use `static_assert` with `std::is_base_of` for this purpose.
For example,
```
#include <type_traits>
template<typename T>
class YourClass {
YourClass() {
// Compile-time check
static_assert(std::is_base_of<BaseClass, T>::value, "type parameter of this class must derive from BaseClass");
// ...
}
}
```
|
Template Constraints C++
|
[
"",
"c++",
"templates",
"constraints",
""
] |
When I have a regular textbox in an UpdatePanel (not an ASP.NET control) with some JavaScript events attached to it, my events are gone after the UpdatePanel updates. Is there a way to re-attach my events after the update? (Preferably without putting my events inline.)
|
You can use the [endRequest](http://www.asp.net/AJAX/documentation/live/ClientReference/Sys.WebForms/PageRequestManagerClass/PageRequestManagerEndRequestEvent.aspx) event of the [PageRequestManager](http://www.asp.net/AJAX/documentation/live/ClientReference/Sys.WebForms/PageRequestManagerClass/default.aspx) class.
|
You can have a `setInterval()` loop on document load that would search for the element in the update panel and if it didn't have the events, it can re-attach them.
|
How do I perform an action after an UpdatePanel updates?
|
[
"",
"javascript",
"asp.net-ajax",
"updatepanel",
""
] |
Does anyone have an example of a script that works reliably across IE/Firefox to detect whether the browser is capable of displaying embedded Flash content? I say reliably because I know it's not possible 100% of the time.
|
[SWFObject](https://github.com/swfobject/swfobject) is very reliable. I have used it without trouble for quite a while.
|
I agree with [Max Stewart](https://stackoverflow.com/questions/159261/cross-browser-flash-detection-in-javascript/159374#159374). [SWFObject](https://github.com/swfobject/swfobject) is the way to go. I'd like to supplement his answer with a code example. This ought to get you started:
Make sure you have included the `swfobject.js` file (get it [here](https://github.com/swfobject/swfobject/releases)):
```
<script type="text/javascript" src="swfobject.js"></script>
```
Then use it like so:
```
if(swfobject.hasFlashPlayerVersion("9.0.115"))
{
alert("You have the minimum required flash version (or newer)");
}
else
{
alert("You do not have the minimum required flash version");
}
```
Replace "9.0.115" with whatever minimum flash version you need. I chose 9.0.115 as an example because that's the version that added h.264 support.
If the visitor does not have flash, it will report a flash version of "0.0.0", so if you just want to know if they have flash at all, use:
```
if(swfobject.hasFlashPlayerVersion("1"))
{
alert("You have flash!");
}
else
{
    alert("You do not have flash :-(");
}
```
|
Cross Browser Flash Detection in Javascript
|
[
"",
"javascript",
"flash",
"cross-browser",
""
] |
So I have a ListView with an upper limit of about 1000 items. I need to be able to filter these items using a textbox's TextChanged event. I have some code that works well for a smaller number of items (~400), but when I need to re-display a full list of all 1000 items, it takes about 4 seconds.
I am not creating new ListViewItems every time. Instead, I keep a list of the full item collection and then add from that. It seems that the .Add method is taking a long time regardless. Here is a little sample:
```
MyListView.BeginUpdate();
foreach (ListViewItem item in m_cachedItems)
{
    MyListView.Items.Add(item);
}
MyListView.EndUpdate();
```
I have tried only adding the missing items (i.e., the difference between the items currently being displayed and the total list of items), but this doesn't work either. There can be a situation in which there is only one item currently displayed, the user clears the textbox, and I need to display the entire list.
I am not very experienced in eking performance out of .NET controls with a large collection like this, so I don't really know a better way to do it. Is there any way around using the .Add() method, or if not, is there a better general solution?
|
There are two things to address this:
1. Turn off sorting while manipulating the list contents.
2. Hide the list so it doesn't try to paint.
The 1st point is the biggest performance gain in list manipulation out of these two. To achieve this, just set the ListViewItemSorter to null for the duration of the modification and set it back at the end.
For the 2nd option, I often draw the list to a bitmap and then show that bitmap in a PictureBox so the user doesn't see the list disappear, then just reshow the list when I'm done.
|
There is a better way, you can use the [VirtualMode](http://msdn.microsoft.com/en-us/library/system.windows.forms.listview.virtualmode.aspx) of the list view.
That documentation should get you started. The idea is to provide information to the ListView only as it's needed. Such information is retrieved using events. All you have to do is implement those events and tell the list view how many items it contains.
|
How to efficiently filter a large LIstViewItemCollection?
|
[
"",
"c#",
".net",
"winforms",
"performance",
"listview",
""
] |
Out of order execution in CPUs means that a CPU can reorder instructions to gain better performance and it means the CPU is having to do some very nifty bookkeeping and such. There are other processor approaches too, such as hyper-threading.
Some fancy compilers understand the (un)interrelatedness of instructions to a limited extent, and will automatically interleave instruction flows (probably over a longer window than the CPU sees) to better utilise the processor. Deliberate compile-time interleaving of floating and integer instructions is another example of this.
Now I have a highly-parallel task. And I typically have an ageing single-core x86 processor without hyper-threading.
Is there a straight-forward way to get the body of my 'for' loop for this highly-parallel task interleaved so that two (or more) iterations are done together? (This is slightly different from 'loop unwinding' as I understand it.)
My task is a 'virtual machine' running through a set of instructions, which I'll really simplify for illustration as:
```
void run(int num) {
for(int n=0; n<num; n++) {
vm_t data(n);
for(int i=0; i<data.len(); i++) {
data.insn(i).parse();
data.insn(i).eval();
}
}
}
```
So the execution trail might look like this:
```
data(1) insn(0) parse
data(1) insn(0) eval
data(1) insn(1) parse
...
data(2) insn(1) eval
data(2) insn(2) parse
data(2) insn(2) eval
```
Now, what I'd like is to be able to do two (or more) iterations explicitly in parallel:
```
data(1) insn(0) parse
data(2) insn(0) parse \ processor can do OOO as these two flow in
data(1) insn(0) eval /
data(2) insn(0) eval \ OOO opportunity here too
data(1) insn(1) parse /
data(2) insn(1) parse
```
I know, from profiling (e.g. using Callgrind with --simulate-cache=yes), that parsing is mostly random memory accesses (cache misses) and eval is mostly ops in registers followed by writing results back. Each step is several thousand instructions long. So if I can intermingle the two steps for two iterations at once, the processor will hopefully have something to do whilst the cache misses of the parse step are occurring...
Is there some c++ template madness to get this kind of explicit parallelism generated?
Of course I can do the interleaving - and even staggering - myself in code, but it makes for much less readable code. And if I really want unreadable, I can go so far as assembler! But surely there is some pattern for this kind of thing?
|
It sounds like you ran into the same problem chip designers face: Executing a single instruction takes a lot of effort, but it involves a bunch of different steps that can be strung together in an [execution pipeline](http://en.wikipedia.org/wiki/Pipeline_(computing)). (It is easier to execute things in parallel when you can build them out of separate blocks of hardware.)
The most obvious way is to split each task into different threads. You might want to create a single thread to execute each instruction to completion, or create one thread for each of your two execution steps and pass data between them. In either case, you'll have to be very careful with how you share data between threads and make sure to handle the case where one instruction affects the result of the following instruction. Even though you only have one core and only one thread can be running at any given time, your operating system should be able to schedule compute-intense threads while other threads are waiting for their cache misses.
(A few hours of your time would probably pay for a single very fast computer, but if you're trying to deploy it widely on cheap hardware it might make sense to consider the problem the way you're looking at it. Regardless, it's an interesting problem to consider.)
|
Given optimizing compilers and pipelined processors, I would suggest you just write clear, readable code.
|
Explicit code parallelism in c++
|
[
"",
"c++",
"performance",
"design-patterns",
""
] |
I've always just FTPed files down from sites, edited them and put them back up when creating sites, but feel it's worth learning to do things properly.
I've just commited everything to a SVN repo, and have tried sshing into the server and checking out a tagged build, as well as updating that build using switch.
All good, but it's a lot slower than my current process.
What's the best way to set something like this up? Most of my time is just bug fixes or small changes rather than large rewrites, so I'm frequently updating things.
|
You don't necessarily need to use SVN to deploy the files to the server. Keep using FTP for that and just use SVN for revision history.
|
I'd recommend you keep using Subversion to track all changes, even bug fixes. When you wish to deploy to your production server, you should use SSH and call `svn update`. This process can be automated using [Capistrano](http://www.capify.org/), meaning that you can sit at your local box and call `cap deploy` -- Capistrano will SSH into your server and perform the Subversion update. Saves a lot of tedious manual labor.
|
What's the best way to use SVN to version control a PHP site?
|
[
"",
"php",
"svn",
""
] |
For example, the standard division symbol '/' rounds to zero:
```
>>> 4 / 100
0
```
However, I want it to return 0.04. What do I use?
|
There are three options:
```
>>> 4 / float(100)
0.04
>>> 4 / 100.0
0.04
```
which is the same behavior as C, C++, Java, etc., or
```
>>> from __future__ import division
>>> 4 / 100
0.04
```
You can also activate this behavior by passing the argument `-Qnew` to the Python interpreter:
```
$ python -Qnew
>>> 4 / 100
0.04
```
The second option will be the default in Python 3.0. If you want to have the old integer division, you have to use the `//` operator.
**Edit**: added section about `-Qnew`, thanks to [ΤΖΩΤΖΙΟΥ](https://stackoverflow.com/users/6899/)!
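Once true division is enabled, the old truncating behavior is still spelled `//` (this works unchanged in Python 3, where true division is the default):

```python
from __future__ import division  # a no-op on Python 3

# True division yields the mathematically expected value;
# the old truncating behavior is floor division:
print(4 / 100)    # 0.04
print(4 // 100)   # 0
```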
|
Other answers suggest how to get a floating-point value. While this will be close to what you want, it won't be exact:
```
>>> 0.4/100.
0.0040000000000000001
```
If you actually want a *decimal* value, do this:
```
>>> import decimal
>>> decimal.Decimal('4') / decimal.Decimal('100')
Decimal("0.04")
```
That will give you an object that properly knows that 4 / 100 in *base 10* is "0.04". Floating-point numbers are actually in base 2, i.e. binary, not decimal.
|
How do I get a decimal value when using the division operator in Python?
|
[
"",
"python",
"python-2.7",
"math",
"syntax",
"operators",
""
] |
Here's what I would like to do:
I'm taking pictures with a webcam at regular intervals. Sort of like a time lapse thing. However, if nothing has really changed, that is, the picture pretty much *looks* the same, I don't want to store the latest snapshot.
I imagine there's some way of quantifying the difference, and I would have to empirically determine a threshold.
I'm looking for simplicity rather than perfection.
I'm using python.
|
## General idea
Option 1: Load both images as arrays (`scipy.misc.imread`) and calculate an element-wise (pixel-by-pixel) difference. Calculate the norm of the difference.
Option 2: Load both images. Calculate some feature vector for each of them (like a histogram). Calculate distance between feature vectors rather than images.
However, there are some decisions to make first.
## Questions
You should answer these questions first:
* Are images of the same shape and dimension?
If not, you may need to resize or crop them. PIL library will help to do it in Python.
If they are taken with the same settings and the same device, they are probably the same.
* Are images well-aligned?
If not, you may want to run cross-correlation first, to find the best alignment. SciPy has functions to do it.
If the camera and the scene are still, the images are likely to be well-aligned.
* Is exposure of the images always the same? (Is lightness/contrast the same?)
If not, you may want [to normalize](http://en.wikipedia.org/wiki/Normalization_(image_processing)) images.
But be careful: in some situations this may do more harm than good. For example, a single bright pixel on a dark background will make the normalized image very different.
* Is color information important?
If you want to notice color changes, you will have a vector of color values per point, rather than a scalar value as in gray-scale image. You need more attention when writing such code.
* Are there distinct edges in the image? Are they likely to move?
If yes, you can apply edge detection algorithm first (e.g. calculate gradient with Sobel or Prewitt transform, apply some threshold), then compare edges on the first image to edges on the second.
* Is there noise in the image?
All sensors pollute the image with some amount of noise. Low-cost sensors have more noise. You may wish to apply some noise reduction before you compare images. Blur is the simplest (but not the best) approach here.
* What kind of changes do you want to notice?
This may affect the choice of norm to use for the difference between images.
Consider using Manhattan norm (the sum of the absolute values) or zero norm (the number of elements not equal to zero) to measure how much the image has changed. The former will tell you how much the image is off, the latter will tell only how many pixels differ.
## Example
I assume your images are well-aligned, the same size and shape, possibly with different exposure. For simplicity, I convert them to grayscale even if they are color (RGB) images.
You will need these imports:
```
import sys
from scipy.misc import imread
from scipy.linalg import norm
from scipy import sum, average
```
Main function, read two images, convert to grayscale, compare and print results:
```
def main():
    file1, file2 = sys.argv[1:1+2]
    # read images as 2D arrays (convert to grayscale for simplicity)
    img1 = to_grayscale(imread(file1).astype(float))
    img2 = to_grayscale(imread(file2).astype(float))
    # compare
    n_m, n_0 = compare_images(img1, img2)
    print "Manhattan norm:", n_m, "/ per pixel:", n_m/img1.size
    print "Zero norm:", n_0, "/ per pixel:", n_0*1.0/img1.size
```
How to compare. `img1` and `img2` are 2D SciPy arrays here:
```
def compare_images(img1, img2):
    # normalize to compensate for exposure difference, this may be unnecessary
    # consider disabling it
    img1 = normalize(img1)
    img2 = normalize(img2)
    # calculate the difference and its norms
    diff = img1 - img2  # elementwise for scipy arrays
    m_norm = sum(abs(diff))  # Manhattan norm
    z_norm = norm(diff.ravel(), 0)  # Zero norm
    return (m_norm, z_norm)
```
If the file is a color image, `imread` returns a 3D array, average RGB channels (the last array axis) to obtain intensity. No need to do it for grayscale images (e.g. `.pgm`):
```
def to_grayscale(arr):
    "If arr is a color image (3D array), convert it to grayscale (2D array)."
    if len(arr.shape) == 3:
        return average(arr, -1)  # average over the last axis (color channels)
    else:
        return arr
```
Normalization is trivial, you may choose to normalize to [0,1] instead of [0,255]. `arr` is a SciPy array here, so all operations are element-wise:
```
def normalize(arr):
    rng = arr.max() - arr.min()
    if rng == 0:
        return arr  # constant image; avoid division by zero
    amin = arr.min()
    return (arr - amin) * 255 / rng
```
Run the `main` function:
```
if __name__ == "__main__":
    main()
```
Now you can put this all in a script and run against two images. If we compare image to itself, there is no difference:
```
$ python compare.py one.jpg one.jpg
Manhattan norm: 0.0 / per pixel: 0.0
Zero norm: 0 / per pixel: 0.0
```
If we blur the image and compare to the original, there is some difference:
```
$ python compare.py one.jpg one-blurred.jpg
Manhattan norm: 92605183.67 / per pixel: 13.4210411116
Zero norm: 6900000 / per pixel: 1.0
```
P.S. Entire [compare.py](http://gist.github.com/626356) script.
## Update: relevant techniques
As the question is about a video sequence, where frames are likely to be almost the same, and you look for something unusual, I'd like to mention some alternative approaches which may be relevant:
* background subtraction and segmentation (to detect foreground objects)
* sparse optical flow (to detect motion)
* comparing histograms or some other statistics instead of images
I strongly recommend taking a look at the “Learning OpenCV” book, Chapters 9 (Image parts and segmentation) and 10 (Tracking and motion). The former teaches how to use the background subtraction method, the latter gives some info on optical flow methods. All methods are implemented in the OpenCV library. If you use Python, I suggest using OpenCV ≥ 2.3 and its `cv2` Python module.
The simplest version of background subtraction:
* learn the average value μ and standard deviation σ for every pixel of the background
* compare current pixel values to the range of (μ-2σ,μ+2σ) or (μ-σ,μ+σ)
More advanced versions may take into account the time series for every pixel and handle non-static scenes (like moving trees or grass).
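A minimal NumPy sketch of that simple per-pixel model (the function names here are illustrative, not OpenCV API):

```python
import numpy as np

def learn_background(frames):
    # Per-pixel mean and standard deviation over a stack of training frames.
    stack = np.stack([np.asarray(f, dtype=float) for f in frames])
    return stack.mean(axis=0), stack.std(axis=0)

def foreground_mask(frame, mu, sigma, k=2.0):
    # A pixel outside (mu - k*sigma, mu + k*sigma) is flagged as foreground.
    return np.abs(np.asarray(frame, dtype=float) - mu) > k * sigma
```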
The idea of optical flow is to take two or more frames, and assign velocity vector to every pixel (dense optical flow) or to some of them (sparse optical flow). To estimate sparse optical flow, you may use [Lucas-Kanade method](http://en.wikipedia.org/wiki/Lucas%E2%80%93Kanade_method) (it is also implemented in OpenCV). Obviously, if there is a lot of flow (high average over max values of the velocity field), then something is moving in the frame, and subsequent images are more different.
Comparing histograms may help to detect sudden changes between consecutive frames. This approach was used in [Courbon et al, 2010](http://www.sciencedirect.com/science/article/pii/S0967066110000808):
> *Similarity of consecutive frames.* The distance between two consecutive frames is measured. If it is too high, it means that the second frame is corrupted and thus the image is eliminated. The [Kullback–Leibler distance](https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence), or mutual entropy, on the histograms of the two frames:
>
> 
>
> where *p* and *q* are the histograms of the frames is used. The threshold is fixed on 0.2.
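A quick NumPy sketch of that histogram distance (the epsilon term is my addition to avoid log(0); the paper does not specify one):

```python
import numpy as np

def kl_distance(p, q, eps=1e-12):
    # Kullback-Leibler divergence between two histograms,
    # normalized to probability distributions first.
    p = np.asarray(p, dtype=float); p = p / p.sum()
    q = np.asarray(q, dtype=float); q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

print(kl_distance([1, 2, 3], [1, 2, 3]))  # identical histograms -> 0.0
```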
|
**A simple solution:**
Encode the image as a **jpeg** and look for a substantial change in **filesize**.
I've implemented something similar with video thumbnails, and had a lot of success and scalability.
|
How can I quantify difference between two images?
|
[
"",
"python",
"image-processing",
"background-subtraction",
"image-comparison",
"timelapse",
""
] |
What is the best way to convert an int or null to boolean value in an SQL query, such that:
* Any non-null value is **TRUE** in the results
* Any null value is **FALSE** in the results
|
To my knowledge (correct me if I'm wrong), there is no concept of literal boolean values in SQL. You can have expressions evaluating to boolean values, but you cannot output them.
This said, you can use CASE WHEN to produce a value you can use in a comparison:
```
SELECT
    CASE WHEN ValueColumn IS NULL THEN 'FALSE' ELSE 'TRUE' END BooleanOutput
FROM
    table
```
|
No need to use case... when:
```
select (column_name is not null) as result from table_name;
```
Returns 1 for all fields not NULL and 0 for all fields that are NULL, which is as close as you can get to booleans in SQL.
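This behaviour is easy to verify with SQLite via Python's `sqlite3` module (table and column names invented for the demo). Note that SQL Server does not accept a boolean expression in the select list, so there you need the `CASE` form from the other answer:

```python
import sqlite3

# Build a throwaway table with one non-null and one null value.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (v INTEGER)")
conn.executemany("INSERT INTO t (v) VALUES (?)", [(42,), (None,)])

rows = conn.execute(
    "SELECT (v IS NOT NULL) AS result FROM t ORDER BY result DESC"
).fetchall()
print(rows)  # [(1,), (0,)]
```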
|
What is the best way to convert an int or null to boolean value in an SQL query?
|
[
"",
"sql",
"null",
"boolean",
"type-conversion",
""
] |
I have an n-tier web application, and searches often time out after 30 seconds. How can I detect the root cause of the problem?
|
Log at the boundaries of each layer (separation of concerns).
When you say search, I'd assume this is a web app where the user is searching for some text.
Is a DB search involved?
Is it possible that your garbage collector is kicking in and causing the search to time out?
Try logging your garbage-collection pauses using JVM command-line switches. Later, you can analyse when your JVM was doing GCs, or you can use a tool like JConsole or VisualVM.
|
You can add trace code to log its activities and then look at each operation to see what part times out and why.
|
How to detect intermittent time out problem in web applications?
|
[
"",
"java",
"web-services",
""
] |
I need a select from a table which does not have a column that tells when a row was inserted, only a `timestamp` column (values like: 0x0000000000530278). Some data was imported into the table yesterday and now I need to find out what exactly was imported :(
Is there a way to do it using only `timestamp` info? [Here](http://www.sqlhacks.com/index.php/Dates/Timestamp) I found that:
* Timestamp is an 8-byte sequential hex number that has nothing to do with either the date or the time.
* To get the current value of timestamp, use: @@DBTS.
Perhaps there is a way to find what was `timestamp` value around specific time? That would help to form a select. Or maybe there is a well known solution?
|
The Transact-SQL timestamp data type is a binary data type with no time-related values.
So to answer your question: *Is there a way to get DateTime value from timestamp type column?*
The answer is: *No*
|
The timestamp datatype in SQL Server 2005 is a synonym of rowversion and is just a number that is automatically incremented with each row update.
You can cast it to bigint to see its value.
To get what you want for new or updated rows, you should probably add another datetime column (lastupdate) and a trigger to update that column with each update.
For rows that were inserted in the past, I don't think you can do anything to recover the exact time.
|
Is there a way to get DateTime value from timestamp type column?
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
This code works (C# 3)
```
double d;
if(d == (double)(int)d) ...;
```
1. Is there a better way to do this?
2. For extraneous reasons I want to avoid the double cast, so what nice ways exist other than this? (even if they aren't as good)
*Note:* Several people pointed out the (important) point that `==` is often problematic regarding floating point. In this case I expect values in the range of 0 to a few hundred, and they are supposed to be integers (non-ints are errors), so those points shouldn't be an issue for me.
|
```
d == Math.Floor(d)
```
does the same thing in other words.
NB: Hopefully you're aware that you have to be very careful when doing this kind of thing; floats/doubles will very easily accumulate minuscule errors that make exact comparisons (like this one) fail for no obvious reason.
|
This would work I think:
```
if (d % 1 == 0) {
    //...
}
```
|
Test if a floating point number is an integer
|
[
"",
"c#",
"floating-point",
"numerical",
""
] |
In c#, how can I check to see if a link button has been clicked in the page load method?
I need to know if it was clicked before the click event is fired.
|
```
if( IsPostBack )
{
    // get the target of the post-back, will be the name of the control
    // that issued the post-back
    string eTarget = Request.Params["__EVENTTARGET"].ToString();
}
```
|
If that doesn't work, try `UseSubmitBehavior="false"`.
|
ASP.NET : Check for click event in page_load
|
[
"",
"c#",
"asp.net",
".net",
"pageload",
""
] |
Is there a fix or a workaround for the memory leak in getpwnam?
|
`getpwnam()` **does not** suffer from a memory leak. Subsequent calls simply overwrite its static internal buffer.
Such functions are instead *non-reentrant* and therefore *non-thread-safe*. Paul suggested the use of `getpwnam_r()`, which is the reentrant version and is safe to use in a multithreaded context.
That said, memory leaks are caused by those system calls that allocate memory by means of `malloc()` and leave the application the responsibility to `free()` the memory once the returned data has been used.
In these cases the RAII idiom is advisable so that you do not forget to free the allocated memory -- see exception safety. `std::tr1::shared_ptr<>` is also a viable way: a custom deleter must be provided so it calls `free()` on the raw pointer when the shared_ptr goes out of scope.
From this perspective, some dangerous functions are `scandir()`, `asprintf()`, `vasprintf()`, etc.
|
Use `getpwnam_r`.
|
Is there a fix or a workaround for the memory leak in getpwnam?
|
[
"",
"c++",
"linux",
""
] |
Ok, my actual problem was this: I was implementing an `IList<T>`. When I got to `CopyTo(Array array, int index)`, this was my solution:
```
void ICollection.CopyTo(Array array, int index)
{
    // Bounds checking, etc here.
    if (!(array.GetValue(0) is T))
        throw new ArgumentException("Cannot cast to this type of Array.");
    // Handle copying here.
}
```
This worked in my original code, and still works. But it has a small flaw, which wasn't exposed till I started building tests for it, specifically this one:
```
public void CopyToObjectArray()
{
    ICollection coll = (ICollection)_list;
    string[] testArray = new string[6];
    coll.CopyTo(testArray, 2);
}
```
Now, this test should pass. It throws the `ArgumentException` about not being able to cast. Why? `array[0] == null`. The `is` keyword always returns false when checking a variable that is set to `null`. Now, this is handy for all sorts of reasons, including avoiding null dereferences, etc. What I finally came up with for my type checking was this:
```
try
{
    T test = (T)array.GetValue(0);
}
catch (InvalidCastException ex)
{
    throw new ArgumentException("Cannot cast to this type of Array.", ex);
}
```
This isn't exactly elegant, but it works... Is there a better way though?
|
The only way to be sure is with reflection, but 90% of the time you can avoid the cost of that by using `array is T[]`. Most people are going to pass a properly typed array in, so that will do. But, you should always provide the code to do the reflection check as well, just in case. Here's what my general boiler-plate looks like (note: I wrote this here, from memory, so this might not compile, but it should give the basic idea):
```
class MyCollection<T> : ICollection<T> {
    void ICollection<T>.CopyTo(T[] array, int index) {
        // Bounds checking, etc here.
        CopyToImpl(array, index);
    }
    void ICollection.CopyTo(Array array, int index) {
        // Bounds checking, etc here.
        if (array is T[]) { // quick, avoids reflection, but only works if array is typed as exactly T[]
            CopyToImpl((T[])array, index);
        } else {
            Type elementType = array.GetType().GetElementType();
            if (!elementType.IsAssignableFrom(typeof(T)) && !typeof(T).IsAssignableFrom(elementType)) {
                throw new Exception();
            }
            CopyToImpl((object[])array, index);
        }
    }
    private void CopyToImpl(object[] array, int index) {
        // array will always have a valid type by this point, and the bounds will be checked
        // Handle the copying here
    }
}
```
**EDIT**: Ok, forgot to point something out. A couple of answers naively used what, in this code, reads as `elementType.IsAssignableFrom(typeof(T))` only. You *should* also allow `typeof(T).IsAssignableFrom(elementType)`, as the BCL does, in case a developer knows that all of the values in this specific `ICollection` are actually of a type `S` derived from `T`, and passes an array of type `S[]`
|
There is a method on Type specifically for this, try:
```
if(!typeof(T).IsAssignableFrom(array.GetElementType()))
```
|
Proper nullable type checking in C#?
|
[
"",
"c#",
"casting",
"nullable",
""
] |
I am currently initializing a Hashtable in the following way:
```
Hashtable filter = new Hashtable();
filter.Add("building", "A-51");
filter.Add("apartment", "210");
```
I am looking for a nicer way to do this.
I tried something like
```
Hashtable filter2 = new Hashtable() {
    {"building", "A-51"},
    {"apartment", "210"}
};
```
However the above code does not compile.
|
The exact code you posted:
```
Hashtable filter2 = new Hashtable()
{
    {"building", "A-51"},
    {"apartment", "210"}
};
```
Compiles perfectly in C# 3. Given you reported compilation problems, I'm guessing you are using C# 2? In this case you can at least do this:
```
Hashtable filter2 = new Hashtable();
filter2["building"] = "A-51";
filter2["apartment"] = "210";
```
|
In C# 3 it should compile fine like this:
```
Hashtable table = new Hashtable {{1, 1}, {2, 2}};
```
|
Is there a better way to initialize a Hashtable in .NET without using the Add method?
|
[
"",
"c#",
""
] |
An idiom commonly used in OO languages like Python and Ruby is instantiating an object and chaining methods that return a reference to the object itself, such as:
```
s = User.new.login.get_db_data.get_session_data
```
In PHP, it is possible to replicate this behavior like so:
```
$u = new User();
$s = $u->login()->get_db_data()->get_session_data();
```
Attempting the following results in `syntax error, unexpected T_OBJECT_OPERATOR`:
```
$s = new User()->login()->get_db_data()->get_session_data();
```
It seems like this could be accomplished using static methods, which is probably what I'll end up doing, but I wanted to check the lazyweb: **Is there actually a clean, simple way to instantiate PHP classes "inline" (as shown in the above snippet)** for this purpose?
If I do decide to use static methods, **is it too sorcerous to have a class's static method return an instantiation of the class itself**? (Effectively writing my own constructor-that-isn't-a-constructor?) It feels kind of dirty, but if there aren't too many scary side effects, I might just do it.
I guess I could also pre-instantiate a UserFactory with a get\_user() method, but I'm curious about solutions to what I asked above.
|
All of these proposed solutions complicate your code in order to bend PHP to accomplish some syntactic nicety. Wanting PHP to be something it's not (like good) is the path to madness.
I would just use:
```
$u = new User();
$s = $u->login()->get_db_data()->get_session_data();
```
It is clear, relatively concise and involves no *black magic* that can introduce errors.
And of course, you could always move to Ruby or Python. It will change your life.
* And yeah, I am harsh on PHP. I use it every day. Been using it for years. The reality is that it has **accreted**, rather than been designed and it shows.
|
```
<?php
class User
{
    function __construct()
    {
    }

    function Login()
    {
        return $this;
    }

    function GetDbData()
    {
        return $this;
    }

    function GetSession()
    {
        return array("hello" => "world");
    }
}

function Create($name)
{
    return new $name();
}

$s = Create("User")->Login()->GetDbData()->GetSession();
var_dump($s);
?>
```
This is a possible solution :) Of course, you should choose a better name for the function...
Or if you don't mind a little overhead:
```
<?php
class User
{
    function __construct($test)
    {
        echo $test;
    }
    ...
}

function CreateArgs($name)
{
    $ref = new ReflectionClass($name);
    return $ref->newInstanceArgs(array_slice(func_get_args(), 1));
}

$s = CreateArgs("User", "hi")->Login()->GetDbData()->GetSession();
var_dump($s);
?>
```
|
"Inline" Class Instantiation in PHP? (For Ease of Method Chaining)
|
[
"",
"php",
"oop",
"static-methods",
""
] |
Please bear with me, I'm just learning C++.
I'm trying to write my header file (for class) and I'm running into an odd error.
```
cards.h:21: error: expected unqualified-id before ')' token
cards.h:22: error: expected `)' before "str"
cards.h:23: error: expected `)' before "r"
```
What does "expected unqualified-id before ')' token" mean? And what am I doing wrong?
Edit: Sorry, I didn't post the entire code.
```
/*
    Card header file
    [Author]
*/
// NOTE: Language Docs here http://www.cplusplus.com/doc/tutorial/
#define Card
#define Hand
#define AppError
#include <string>
using namespace std;
// TODO: Docs here
class Card { // line 17
public:
    enum Suit {Club, Diamond, Spade, Heart};
    enum Rank {Two, Three, Four, Five, Six, Seven, Eight, Nine,
               Ten, Jack, Queen, King, Ace};
    Card(); // line 22
    Card(string str);
    Card(Rank r, Suit s);
```
Edit: I'm just trying to compile the header file by itself using "g++ file.h".
Edit: Closed question. My code is working now. Thanks everyone!
Edit: Reopened question after reading [Etiquette: Closing your posts](https://stackoverflow.com/questions/34456/etiquette-closing-your-posts)
|
Your issue is your `#define`. You did `#define Card`, so now everywhere `Card` is seen as a token, it will be replaced.
A bare `#define Token` with no replacement text (unlike `#define Token Replacement`) expands to nothing, although the macro still counts as defined in `#ifdef` checks.
Remove the `#define Card`; it's making line 22 read as `();`, which is causing the complaint.
|
(edited for updated question)
Remove the `#define` statements, they're mangling the file. Were you trying to implement an include guard? That would be something like this:
```
#ifndef CARD_H
#define CARD_H
class Card ...
...
#endif
```
---
old answer:
It means that `string` is not defined in the current line. Try `std::string`.
|
GCC - "expected unqualified-id before ')' token"
|
[
"",
"c++",
""
] |
I'm writing a server that I expect to be run by many different people, not all of whom I will have direct contact with. The servers will communicate with each other in a cluster. Part of the server's functionality involves selecting a small subset of rows from a potentially very large table. The exact choice of what rows are selected will need some tuning, and it's important that it's possible for the person running the cluster (eg, myself) to update the selection criteria without getting each and every server administrator to deploy a new version of the server.
Simply writing the function in Python isn't really an option, since nobody is going to want to install a server that downloads and executes arbitrary Python code at runtime.
What I need are suggestions on the simplest way to implement a Domain Specific Language to achieve this goal. The language needs to be capable of simple expression evaluation, as well as querying table indexes and iterating through the returned rows. Ease of writing and reading the language is secondary to ease of implementing it. I'd also prefer not to have to write an entire query optimiser, so something that explicitly specifies what indexes to query would be ideal.
The interface that this will have to compile against will be similar in capabilities to what the App Engine datastore exports: You can query for sequential ranges on any index on the table (eg, less-than, greater-than, range and equality queries), then filter the returned row by any boolean expression. You can also concatenate multiple independent result sets together.
I realise this question sounds a lot like I'm asking for SQL. However, I don't want to require that the datastore backing this data be a relational database, and I don't want the overhead of trying to reimplement SQL myself. I'm also dealing with only a single table with a known schema. Finally, no joins will be required. Something much simpler would be far preferable.
Edit: Expanded description to clear up some misconceptions.
|
Building a DSL to be interpreted by Python.
Step 1. Build the run-time classes and objects. These classes will have all the cursor loops and SQL statements and all of that algorithmic processing tucked away in their methods. You'll make heavy use of the [Command](http://exciton.cs.rice.edu/javaresources/DesignPatterns/command.htm) and [Strategy](http://exciton.cs.rice.edu/javaresources/DesignPatterns/StrategyPattern.htm) design patterns to build these classes. Most things are a command, options and choices are plug-in strategies. Look at the design for Apache Ant's [Task](http://ant.apache.org/manual/develop.html) API -- it's a good example.
Step 2. Validate that this system of objects actually works. Be sure that the design is simple and complete. Your tests will construct the Command and Strategy objects, and then execute the top-level Command object. The Command objects will do the work.
At this point you're largely done. Your run-time is just a configuration of objects created from the above domain. [This isn't as easy as it sounds. It requires some care to define a set of classes that can be instantiated and then "talk among themselves" to do the work of your application.]
Note that what you'll have will require nothing more than declarations. What's wrong with procedural? Once you start to write a DSL with procedural elements, you find that you need more and more features until you've written Python with different syntax. Not good.
Further, procedural language interpreters are simply hard to write. State of execution and scope of references are hard to manage.
You can use native Python -- and stop worrying about "getting out of the sandbox". Indeed, that's how you'll unit test everything, using a short Python script to create your objects. Python will be the DSL.
["But wait", you say, "If I simply use Python as the DSL people can execute arbitrary things." Depends on what's on the PYTHONPATH, and sys.path. Look at the [site](http://docs.python.org/lib/module-site.html) module for ways to control what's available.]
A declarative DSL is simplest. It's entirely an exercise in representation. A block of Python that merely sets the values of some variables is nice. That's what Django uses.
You can use the [ConfigParser](http://docs.python.org/lib/module-ConfigParser.html) as a language for representing your run-time configuration of objects.
You can use [JSON](http://pypi.python.org/pypi/python-json/) or [YAML](http://pyyaml.org/) as a language for representing your run-time configuration of objects. Ready-made parsers are totally available.
You can use XML, too. It's harder to design and parse, but it works fine. People love it. That's how Ant and Maven (and lots of other tools) use declarative syntax to describe procedures. I don't recommend it, because it's a wordy pain in the neck. I recommend simply using Python.
Or, you can go off the deep-end and invent your own syntax and write your own parser.
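As a sketch of the "Python is the DSL" idea above, a configuration can be nothing more than assignments evaluated into a dictionary (the setting names are invented; stripping `__builtins__` deters casual misuse but is not a security sandbox):

```python
# The "DSL" file is plain Python assignments, nothing procedural.
CONFIG_TEXT = """
index    = 'created_at'
op       = 'range'
low      = 10
high     = 99
max_rows = 50
"""

def load_config(text):
    # Empty builtins table: the config cannot call open(), __import__(), etc.
    namespace = {"__builtins__": {}}
    exec(text, namespace)
    namespace.pop("__builtins__", None)
    return namespace

settings = load_config(CONFIG_TEXT)
print(settings["index"], settings["max_rows"])  # created_at 50
```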
|
I think we're going to need a bit more information here. Let me know if any of the following is based on incorrect assumptions.
First of all, as you pointed out yourself, there already exists a DSL for selecting rows from arbitrary tables-- it is called "SQL". Since you don't want to reinvent SQL, I'm assuming that you only need to query from a single table with a fixed format.
If this is the case, you probably don't need to implement a DSL (although that's certainly one way to go); it may be easier, if you are used to Object Orientation, to create a Filter object.
More specifically, a "Filter" collection that would hold one or more SelectionCriterion objects. You can implement these to inherit from one or more base classes representing types of selections (Range, LessThan, ExactMatch, Like, etc.) Once these base classes are in place, you can create column-specific inherited versions which are appropriate to that column. Finally, depending on the complexity of the queries you want to support, you'll want to implement some kind of connective glue to handle AND and OR and NOT linkages between the various criteria.
If you feel like it, you can create a simple GUI to load up the collection; I'd look at the filtering in Excel as a model, if you don't have anything else in mind.
Finally, it should be trivial to convert the contents of this Collection to the corresponding SQL, and pass that to the database.
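A hypothetical sketch of that Filter/criterion design (class, column, and method names invented), showing how the collection renders itself as a parameterized WHERE clause:

```python
# Each criterion knows how to render a WHERE fragment plus bind parameters;
# the Filter glues fragments together with AND.
class Range:
    def __init__(self, column, low, high):
        self.column, self.low, self.high = column, low, high
    def sql(self):
        return "%s BETWEEN ? AND ?" % self.column, [self.low, self.high]

class ExactMatch:
    def __init__(self, column, value):
        self.column, self.value = column, value
    def sql(self):
        return "%s = ?" % self.column, [self.value]

class Filter:
    def __init__(self, *criteria):
        self.criteria = criteria
    def where(self):
        parts, params = [], []
        for criterion in self.criteria:
            fragment, values = criterion.sql()
            parts.append(fragment)
            params.extend(values)
        return " AND ".join(parts), params

f = Filter(Range("age", 18, 65), ExactMatch("country", "NZ"))
print(f.where())  # ('age BETWEEN ? AND ? AND country = ?', [18, 65, 'NZ'])
```

Using bind parameters rather than interpolating values keeps the generated SQL safe to pass straight to the database driver.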
However: if what you are after is simplicity, and your users understand SQL, you could simply ask them to type in the contents of a WHERE clause, and programmatically build up the rest of the query. From a security perspective, if your code has control over the columns selected and the FROM clause, and your database permissions are set properly, and you do some sanity checking on the string coming in from the users, this would be a relatively safe option.
|
Writing a Domain Specific Language for selecting rows from a table
|
[
"",
"python",
"database",
"algorithm",
"dsl",
""
] |
I have a form in which people will be entering dollar values.
Possible inputs:
$999,999,999.99
999,999,999.99
999999999
99,999
$99,999
The user can enter a dollar value however they wish. I want to read the inputs as doubles so I can total them.
I tried just typecasting the strings to doubles but that didn't work. Total just equals 50 when it is output:
```
$string1 = "$50,000";
$string2 = "$50000";
$string3 = "50,000";
$total = (double)$string1 + (double)$string2 + (double)$string3;
echo $total;
```
|
A regex won't convert your string into a number. I would suggest that you use a regex to validate the field (confirm that it fits one of your allowed formats), and then just loop over the string, discarding all non-digit and non-period characters. If you don't care about validation, you could skip the first step. The second step will still strip it down to digits and periods only.
By the way, you cannot safely use floats when calculating currency values. You will lose precision, and very possibly end up with totals that do not exactly match the inputs.
Update: Here are two functions you could use to verify your input and to convert it into a decimal-point representation.
```
function validateCurrency($string)
{
    return preg_match('/^\$?(\d{1,3})(,\d{3})*(\.\d{2})?$/', $string) ||
           preg_match('/^\$?\d+(\.\d{2})?$/', $string);
}

function makeCurrency($string)
{
    $newstring = "";
    $array = str_split($string);
    foreach ($array as $char)
    {
        if (($char >= '0' && $char <= '9') || $char == '.')
        {
            $newstring .= $char;
        }
    }
    return $newstring;
}
```
The first function will match the bulk of currency formats you can expect "$99", "99,999.00", etc. It will not match ".00" or "99.", nor will it match most European-style numbers (99.999,00). Use this on your original string to verify that it is a valid currency string.
The second function will just strip out everything except digits and decimal points. Note that by itself it may still return invalid strings (e.g. "", "....", and "abc" come out as "", "....", and ""). Use this to eliminate extraneous commas once the string is validated, or possibly use this by itself if you want to skip validation.
|
You don't ever want to represent monetary values as floats!
For example, take the following (seemingly straight forward) code:
```
$x = 1.0;
for ($ii = 0; $ii < 10; $ii++) {
    $x = $x - .1;
}
var_dump($x);
```
You might assume that it would produce the value zero, but that is not the case. Since `$x` is a floating point, it actually ends up being a tiny bit more than zero (1.38777878078E-16), which isn't a big deal in itself, but it means that comparing the value with another value isn't guaranteed to be correct. For example `$x == 0` would produce false.
|
How to add currency strings (non-standardized input) together in PHP?
|
[
"",
"php",
""
] |
I have an object of class F. I want to output the contents of the object using Console.WriteLine for quick and dirty status updates like this:
Console.WriteLine(objectF);
This prints out only the name of the class to the console:
```
F
```
I want to overload this somehow so that I can instead print out some useful information about the object and its properties.
I have a workaround already: To overload the ToString method in my class and then call:
Console.WriteLine(objectF.ToString());
But I would rather have the simpler syntax. Any ideas?
|
Console.WriteLine(objectF)
Should work, if you overloaded `ToString`. When the framework needs to convert an object to a string representation, it invokes `ToString`.
```
public override string ToString()
{
    // replace the line below with your code
    return base.ToString();
}
```
|
You should override ToString(), though in some situations you may find the following code useful:
```
public static class ObjectUtility
{
    public static string ToDebug(this object obj)
    {
        if (obj == null)
            return "<null>";

        var type = obj.GetType();
        var props = type.GetProperties();
        var sb = new StringBuilder(props.Length * 20 + type.Name.Length);
        sb.Append(type.Name);
        sb.Append("\r\n");

        foreach (var property in props)
        {
            if (!property.CanRead)
                continue;

            // AppendFormat defeats the point
            sb.Append(property.Name);
            sb.Append(": ");
            sb.Append(property.GetValue(obj, null));
            sb.Append("\r\n");
        }
        return sb.ToString();
    }
}
```
Usage is to simply include the namespace containing ObjectUtility and then...
```
var f = new F();
Console.WriteLine(f.ToDebug());
```
The reflection usage above isn't very good for high performance code so don't use it in a production scenario where high performance is desired.
|
What interface or method should my class implement to print what I want in Console.WriteLine?
|
[
"",
"c#",
".net",
"console-application",
"console.writeline",
""
] |
How do I prevent my users from accessing directly pages meant for ajax calls only?
Passing a key during ajax call seems like a solution, whereas access without the key will not be processed. But it is also easy to fabricate the key, no? Curse of View Source...
p/s: Using Apache as webserver.
EDIT: To answer why, I have jQuery ui-tabs in my index.php, and inside those tabs are forms with scripts, which won't work if they're accessed directly. Why a user would want to do that, I don't know, I just figure I'd be more user friendly by preventing direct access to forms without validation scripts.
|
As others have said, an Ajax request can be emulated by sending the proper headers.
If you want to have a basic check to see if the request is an Ajax request you can use:
```
if (isset($_SERVER['HTTP_X_REQUESTED_WITH']) && $_SERVER['HTTP_X_REQUESTED_WITH'] == 'XMLHttpRequest') {
    // Request identified as an Ajax request
    // (the isset() guard avoids an undefined-index notice when the header is absent)
}
```
However, you should never base your security on this check alone. It will eliminate casual direct access to the page, if that is all you need.
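The same check is easy to mirror in other stacks; for instance, a Python/WSGI-style sketch (the `is_ajax` helper name is made up for illustration) — in a CGI/WSGI environ the header appears as `HTTP_X_REQUESTED_WITH`:

```python
def is_ajax(environ):
    # jQuery and most JS libraries send X-Requested-With: XMLHttpRequest,
    # but any client can forge it -- treat this as a hint, never as security.
    return environ.get("HTTP_X_REQUESTED_WITH") == "XMLHttpRequest"

print(is_ajax({"HTTP_X_REQUESTED_WITH": "XMLHttpRequest"}))  # True
print(is_ajax({}))                                           # False
```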
|
There is no way of guaranteeing that they're accessing it through AJAX. Both direct access and AJAX access come from the client, so it can easily be faked.
Why do you want to do this anyways?
If it's because the PHP code isn't very secure, make the PHP code more secure. (For example, if your AJAX passes the user id to the PHP file, write code in the PHP file to make sure that is the correct user id.)
|
Prevent direct access to a PHP page
|
[
"",
"php",
"ajax",
"apache",
""
] |
Assuming I have three tables :
TableA (key, value)
TableB (key, value)
TableC (key, value)
and I want to return a value for all keys. If the key exists in TableC return that value else if the key exists in B return that value else return the value from table A
The best I have come up with so far is
```
SELECT key,Value
FROM TableA
WHERE key NOT IN (SELECT key FROM TableB)
AND key NOT IN (SELECT key FROM TableC)
UNION
SELECT key,Value
FROM TableB
WHERE key NOT IN (SELECT key FROM TableC)
UNION
SELECT key,Value
FROM TableC
```
But this seems pretty brute force. Anyone know a better way?
Edit: Here is a more concrete example. Consider TableA as a standard work schedule where the key is a date and the value is the assigned shift. Table B is a statutory holiday calendar that overrides the standard work week. Table C is an exception schedule that is used to override the other two schedules when someone is asked to come in and work either an extra shift or a different shift.
|
OK, using your concrete example as a basis, I came up with a solution different from the others posted (although I think I like your solution better). This was tested on MS SQL Server 2005 - changes may be needed for your SQL dialect.
First, some DDL to set the stage:
```
CREATE TABLE [dbo].[StandardSchedule](
[scheduledate] [datetime] NOT NULL,
[shift] [varchar](25) NOT NULL,
CONSTRAINT [PK_StandardSchedule] PRIMARY KEY CLUSTERED
( [scheduledate] ASC ));
CREATE TABLE [dbo].[HolidaySchedule](
[holidaydate] [datetime] NOT NULL,
[shift] [varchar](25) NOT NULL,
CONSTRAINT [PK_HolidaySchedule] PRIMARY KEY CLUSTERED
( [holidaydate] ASC ));
CREATE TABLE [dbo].[ExceptionSchedule](
[exceptiondate] [datetime] NOT NULL,
[shift] [varchar](25) NOT NULL,
CONSTRAINT [PK_ExceptionDate] PRIMARY KEY CLUSTERED
( [exceptiondate] ASC ));
INSERT INTO ExceptionSchedule VALUES ('2008.01.06', 'ExceptionShift1');
INSERT INTO ExceptionSchedule VALUES ('2008.01.08', 'ExceptionShift2');
INSERT INTO ExceptionSchedule VALUES ('2008.01.10', 'ExceptionShift3');
INSERT INTO HolidaySchedule VALUES ('2008.01.01', 'HolidayShift1');
INSERT INTO HolidaySchedule VALUES ('2008.01.06', 'HolidayShift2');
INSERT INTO HolidaySchedule VALUES ('2008.01.09', 'HolidayShift3');
INSERT INTO StandardSchedule VALUES ('2008.01.01', 'RegularShift1');
INSERT INTO StandardSchedule VALUES ('2008.01.02', 'RegularShift2');
INSERT INTO StandardSchedule VALUES ('2008.01.03', 'RegularShift3');
INSERT INTO StandardSchedule VALUES ('2008.01.04', 'RegularShift4');
INSERT INTO StandardSchedule VALUES ('2008.01.05', 'RegularShift5');
INSERT INTO StandardSchedule VALUES ('2008.01.07', 'RegularShift6');
INSERT INTO StandardSchedule VALUES ('2008.01.09', 'RegularShift7');
INSERT INTO StandardSchedule VALUES ('2008.01.10', 'RegularShift8');
```
Using these tables/rows as a basis, this SELECT statement retrieves the desired data:
```
SELECT DISTINCT
COALESCE(e2.exceptiondate, e.exceptiondate, holidaydate, scheduledate) AS ShiftDate,
COALESCE(e2.shift, e.shift, h.shift, s.shift) AS Shift
FROM standardschedule s
FULL OUTER JOIN holidayschedule h ON s.scheduledate = h.holidaydate
FULL OUTER JOIN exceptionschedule e ON h.holidaydate = e.exceptiondate
FULL OUTER JOIN exceptionschedule e2 ON s.scheduledate = e2.exceptiondate
ORDER BY shiftdate
```
|
Here is an alternate SQL statement:-
```
SELECT
ALL_KEYS.KEY,
NVL( TABLEC.VALUE, NVL( TABLEB.VALUE, TABLEA.VALUE)) AS VALUE
FROM
(SELECT KEY AS KEY FROM TABLEA
UNION
SELECT KEY FROM TABLEB
UNION
SELECT KEY FROM TABLEC) ALL_KEYS,
TABLEA,
TABLEB,
TABLEC
WHERE
ALL_KEYS.KEY = TABLEA.KEY(+) AND
ALL_KEYS.KEY = TABLEB.KEY(+) AND
ALL_KEYS.KEY = TABLEC.KEY(+);
```
NB. NVL() is an Oracle function: if the first parameter is NULL, the second parameter is returned, otherwise the first parameter is returned. You didn't say which database you were using, but the ANSI-standard COALESCE() does the same job in virtually every DBMS, including MySQL and SQL Server.
|
Return values from different tables based on a priority
|
[
"",
"sql",
""
] |
I'm curious as to if there are any best practices relating to JQuery when constructing encapsulated code blocks.
Generally, when I construct a page I like to encapsulate the functions used within that page inside an object. This allows me some encapsulation when building applications. There's nothing I hate more than seeing a JavaScript file with a bunch of this
```
function doSomethingOnlyRelevantOnThisPage() {
// do some stuff
}
```
I think this makes for messy design, and it doesn't really encapsulate functionality nicely.
Commonly in many frameworks, there is a standard that is used to perform this encapsulation.
In Mootools they favor the Object Literal Notation:
```
var Site = {
// properties and methods
}
```
In YUI they favor the Self Executing Function notation:
```
(function() { /* properties and methods */ })();
```
The nice thing about the second example is that a closure is created, thus allowing you to define private properties and methods.
My question is this: Do any JQuery aficionados have any best practices for creating these cleanly encapsulated structures? What is the rationale behind their use?
|
Since I've been working with jQuery for a while now, I've decided on a standard pattern that works well for me.
It's a combination of the YUI module pattern with a bit of the jQuery plugin pattern mixed in. We ended up using the self-executing closure pattern. This is beneficial in a few ways:
1. It keeps code to a minimum
2. It enforces separation of behavior from presentation
3. It provides a closure which prevents naming conflicts
This is what it looks like:
```
;(function($) {
var myPrivateFunction = function() {
};
var init = function() {
myPrivateFunction();
};
$(init);
})(jQuery);
```
We realized that assigning the result of the function execution, similar to the YUI module pattern, exposes code that could potentially be called from within presentation code. We want to prevent this, so this pattern fits.
Of course we could have written the init method inline, without defining a variable for the function. We agreed that explicitly defining the init function made the code clearer to readers.
What happens if we want to share functions between pages/external js files? We simply hook into the existing mechanism that jQuery provides for extending the jQuery object itself - the extend function.
If the functions are static, we use $.extend, and if they operate over a wrapped set, we use the $.fn.extend function.
Hope this helps someone.
|
I use YUI and jQuery when developing (YUI widgets and a few convenience functions, jQuery selectors and javascript "extensions"), and the general javascript template I use is this:
```
/*global YAHOO, $ */
// Create the public-scope accessable namespace for this page
YAHOO.namespace('Project');
YAHOO.Project.page = function() {
// Private members
var self = {};
// Public members
var pub = {};
pub.init = function() {
};
return pub;
} ();
// When the DOM is loaded, initialize this module
$(document).ready(YAHOO.Project.page.init);
```
Now clearly, you can remove the YAHOO.namespace() call if you don't want to use YUI, but that's the basic outline. It uses object literal notation, but also allows you to define private properties and methods.
Just remember that when calling private members or referencing private variables to reference them as `self.funcName()`. You can define them outside of the `self` object, but then you get a mismash everywhere in your object, where you're trying to figure out if `size_of()` is a private method or defined globally somewhere else.
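Framework details aside, the closure-based privacy that both patterns rely on can be demonstrated in a few lines of plain JavaScript (the names here are purely illustrative):

```javascript
var Page = (function () {
    var hits = 0;                  // private: visible only inside the closure
    function bump() { hits += 1; } // private helper

    // Only what is returned becomes public API.
    return {
        init: function () { bump(); return hits; }
    };
})();

console.log(Page.init());  // 1
console.log(Page.hits);    // undefined - private state is not exposed
```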
|
JQuery JavaScript Design: Self Executing Function or Object Literal?
|
[
"",
"javascript",
"jquery",
""
] |
I have a page being loaded with jQuery. The initial load includes 100 records with 6 icons per record. Needless to say, it takes a few seconds to load and I want to give the user a "loading" prompt/animation.
Any ideas?
|
<http://www.ajaxload.info/> is a great resource for generating animated GIFs for this sort of thing.
Once you've got your animation, render it as part of the page; fire the jQuery load mechanism on the animation's onload() event (so you're certain the animation has loaded first), and use a callback at the end of your loading sequence to disable/hide the animated GIF.
|
Matt Berseth has [**a nice jQuery-based way of doing this**](http://mattberseth.com/blog/2007/10/yui_style_ajax_progress_indica.html).
|
Do you have any Ajax loading image ideas for use with jQuery?
|
[
"",
"javascript",
"jquery",
"ajax",
""
] |
Would having a nice little feature that makes it quicker to write code like Automatic Properties fit very nicely with the mantra of VB.NET?
Something like this would work perfect:
```
Public Property FirstName() As String
Get
Set
End Property
```
**UPDATE:** VB.NET 10 (coming with Visual Studio 2010 and .NET 4.0) will have Automatic Properties. Here's a link that shows a little info about the feature: [http://geekswithblogs.net/DarrenFieldhouse/archive/2008/12/01/new-features-in-vb.net-10-.net-4.0.aspx](https://web.archive.org/web/20200119055303/http://geekswithblogs.net:80/DarrenFieldhouse/archive/2008/12/01/new-features-in-vb.net-10-.net-4.0.aspx)
In VB.NET 10 Automatic Properties will be defines like this:
```
Public Property CustomerID As Integer
```
|
One reason many features get delayed in VB is that the development structure is much different than in C# and additionally, that often more thought goes into details. The same seems to be true in this case, as suggested by [Paul Vick's post](http://www.panopticoncentral.net/archive/2008/03/27/23050.aspx) on the matter. This is unfortunate because it means a delay in many cases (automatic properties, iterator methods, multiline lambdas, to name but a few) but on the other hand, the VB developers usually get a much more mature feature in the long run (looking at the discussion, this will be especially true for iterator methods).
So, long story short: VB 10 will (hopefully!) see automatic properties.
|
It also wasn't as big of a pain point in VB.NET, since Visual Studio will automatically create 90% of the skeleton code of a property for you, whereas with C# you used to have to type it all out.
|
Why doesn't VB.NET 9 have Automatic Properties like C# 3?
|
[
"",
"c#",
"vb.net",
"properties",
"language-features",
""
] |
How do I find the location of my `site-packages` directory?
|
There are two types of site-packages directories, *global* and *per user*.
1. **Global** site-packages ("[dist-packages](https://stackoverflow.com/questions/9387928/whats-the-difference-between-dist-packages-and-site-packages)") directories are listed in `sys.path` when you run:
```
python -m site
```
For a more concise list run `getsitepackages` from the [site module](https://docs.python.org/3/library/site.html#site.getsitepackages) in Python code:
```
python -c 'import site; print(site.getsitepackages())'
```
*Caution:* In virtual environments [getsitepackages is not available](https://github.com/pypa/virtualenv/issues/228) with [older versions of `virtualenv`](https://github.com/pypa/virtualenv/pull/2379/files), `sys.path` from above will list the virtualenv's site-packages directory correctly, though. In Python 3, you may use the [sysconfig module](https://docs.python.org/3/library/sysconfig.html#using-sysconfig-as-a-script) instead:
```
python3 -c 'import sysconfig; print(sysconfig.get_paths()["purelib"])'
```
2. The **per user** site-packages directory ([PEP 370](https://www.python.org/dev/peps/pep-0370/)) is where Python installs your local packages:
```
python -m site --user-site
```
If this points to a non-existing directory check the exit status of Python and see `python -m site --help` for explanations.
*Hint:* Running `pip list --user` or `pip freeze --user` gives you a list of all installed *per user* site-packages.
---
## Practical Tips
* `<package>.__path__` lets you identify the location(s) of a specific package: ([details](https://stackoverflow.com/questions/2699287/what-is-path-useful-for))
```
$ python -c "import setuptools as _; print(_.__path__)"
['/usr/lib/python2.7/dist-packages/setuptools']
```
* `<module>.__file__` lets you identify the location of a specific module: ([difference](https://softwareengineering.stackexchange.com/questions/111871/module-vs-package))
```
$ python3 -c "import os as _; print(_.__file__)"
/usr/lib/python3.6/os.py
```
* Run `pip show <package>` to show Debian-style package information:
```
$ pip show pytest
Name: pytest
Version: 3.8.2
Summary: pytest: simple powerful testing with Python
Home-page: https://docs.pytest.org/en/latest/
Author: Holger Krekel, Bruno Oliveira, Ronny Pfannschmidt, Floris Bruynooghe, Brianna Laugher, Florian Bruhin and others
Author-email: None
License: MIT license
Location: /home/peter/.local/lib/python3.4/site-packages
Requires: more-itertools, atomicwrites, setuptools, attrs, pathlib2, six, py, pluggy
```
|
```
>>> import site; site.getsitepackages()
['/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages']
```
(or just first item with `site.getsitepackages()[0]`)
|
How do I find the location of my Python site-packages directory?
|
[
"",
"python",
""
] |
I am looking at some code and it has this statement:
```
~ConnectionManager()
{
Dispose(false);
}
```
The class implements the `IDisposable` interface, but I do not know if that is part of what the tilde (~) is used for.
|
**~ is the destructor**
1. Destructors are invoked automatically, and cannot be invoked explicitly.
2. Destructors cannot be overloaded. Thus, a class can have, at most, one destructor.
3. Destructors are not inherited. Thus, a class has no destructors other than the one, which may be declared in it.
4. Destructors cannot be used with structs. They are only used with classes.
5. An instance becomes eligible for destruction when it is no longer possible for any code to use the instance.
6. Execution of the destructor for the instance may occur at any time after the instance becomes eligible for destruction.
7. When an instance is destructed, the destructors in its inheritance chain are called, in order, from most derived to least derived.
**Finalize**
In C#, the Finalize method performs the operations that a standard C++ destructor would do. In C#, you don't name it Finalize -- you use the C++ destructor syntax of placing a tilde ( ~ ) symbol before the name of the class.
**Dispose**
It is preferable to dispose of objects in a `Close()` or `Dispose()` method that can be called explicitly by the user of the class. Finalize (destructor) are called by the GC.
The *IDisposable* interface tells the world that your class holds onto resources that need to be disposed and provides users a way to release them. If you do need to implement a finalizer in your class, your Dispose method *should* use the `GC.SuppressFinalize()` method to ensure that finalization of your instance is suppressed.
**What to use?**
It is not legal to call a destructor explicitly. Your destructor will be called by the garbage collector. If you do handle precious unmanaged resources (such as file handles) that you want to close and dispose of as quickly as possible, you ought to implement the IDisposable interface.
|
This is a [finalizer](http://msdn.microsoft.com/en-us/library/system.object.finalize.aspx). To be honest, you should very rarely need to write a finalizer. You really only need to write one if:
* You have direct access to an unmanaged resource (e.g. through an `IntPtr`) and you can't use `SafeHandle` which makes it easier
* You are implementing `IDisposable` in a class which isn't sealed. (My preference is to seal classes unless they're designed for inheritance.) A finalizer is part of the canonical Dispose pattern in such cases.
|
What does the tilde before a function name mean in C#?
|
[
"",
"c#",
"syntax",
"tilde",
""
] |
I want to write something that acts just like confirm() in JavaScript, but I want to write it myself so I can skin the dialog box. I'm having trouble thinking through how I would force the JavaScript thread to wait until the user responds and then return true or false.
|
If I were you, I would look at one of the popular javascript libraries. Most contain some sort of modal dialog.
A couple I found for JQuery are [jqModal](http://dev.iceburg.net/jquery/jqModal/) and [SimpleModal](http://www.ericmmartin.com/projects/simplemodal/).
When you build the modal dialog, you will have to tie events to the buttons, so you would do something like:
```
function askUserYesOrNo() {
var myDialog = $('<div class="mydialog"><p>Yes or No?</p><input type="button" id="yes" value="Yes"/><input type="button" id="no" value="No"/></div>');
    // Bind inside the fragment: $("#yes") can't find elements
    // that haven't been added to the document yet.
    myDialog.find("#yes").click(handleYes);
    myDialog.find("#no").click(handleNo);
myDialog.modal(); //This would have to be replaced by whatever syntax the modal framework uses
}
function handleYes() {
//handle yes...
}
function handleNo() {
//handle no...
}
```
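One point worth spelling out: JavaScript cannot block waiting for a click the way the native confirm() does, so the user's answer has to come back through callbacks like the handlers above. A framework-free sketch of that flow, with the click simulated so the sketch is self-contained:

```javascript
function customConfirm(question, callback) {
    // In a real page you would render a styled dialog here and wire its
    // Yes/No buttons to invoke callback(true) or callback(false).
    // The choice is simulated below so the sketch runs on its own.
    var simulatedClick = true;
    callback(simulatedClick);
}

customConfirm("Delete this record?", function (answer) {
    console.log(answer ? "confirmed" : "cancelled");  // "confirmed"
});
```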
|
You really want to use a framework for this, because of the number of weird cross-browser issues that you'll encounter trying to build it yourself.
I've had good results using [jqModal](http://dev.iceburg.net/jquery/jqModal/), a plugin for the jQuery framework that lets you define modal dialogs, but it's by no means the only option; try Googling [jquery modal](http://www.google.com/search?q=jquery+modal) or [yui modal](http://www.google.com/search?q=yui+modal) for some alternatives.
|
How do I write my own confirm dialog in javascript?
|
[
"",
"javascript",
"modal-dialog",
""
] |
I recall reading, on multiple occasions and in multiple locations, that when firing the typical event:
```
protected virtual void OnSomethingHappened()
{
this.SomethingHappened(this, EventArgs.Empty);
}
```
e should be EventArgs.Empty if there are no interesting event args, not null.
I've followed the guidance in my code, but I realized that I'm not clear on why that's the preferred technique. Why does the stated contract prefer EventArgs.Empty over null?
|
I believe the reasoning behind requiring a non-null argument is that the receiving method shouldn't have to guard against a NullReferenceException.
If you pass null and the handler tries to do something with e, it will throw a NullReferenceException; with EventArgs.Empty it will not.
|
`EventArgs.Empty` is an instance of the [Null object pattern](http://en.wikipedia.org/wiki/Null_Object_pattern).
Basically, having an object representing "no value" to avoid checking for null when using it.
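The pattern isn't specific to C#. A minimal sketch in Python (the `NullLogger` name is made up for illustration) — callers use the do-nothing stand-in freely instead of checking for None/null:

```python
class Logger:
    def log(self, msg):
        print(msg)

class NullLogger(Logger):
    """Null object: a safe do-nothing stand-in for 'no logger'."""
    def log(self, msg):
        pass

def process(item, logger=NullLogger()):
    # No `if logger is not None:` guard is needed anywhere.
    logger.log("processing %s" % item)
    return item.upper()

print(process("widget"))            # WIDGET (nothing logged)
print(process("widget", Logger()))  # processing widget, then WIDGET
```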
|
Why use EventArgs.Empty instead of null?
|
[
"",
"c#",
"events",
"eventargs",
""
] |
In the [Python documentation](https://docs.python.org/2/library/threading.html#thread-objects)
it says:
> A thread can be flagged as a "daemon thread". The significance of this
> flag is that the entire Python program exits when only daemon threads
> are left. The initial value is inherited from the creating thread.
Does anyone have a clearer explanation of what that means or a practical example showing where you would set threads as `daemonic`?
Clarify it for me: so the only situation you wouldn't set threads as `daemonic`, is when you want them to continue running after the main thread exits?
|
Some threads do background tasks, like sending keepalive packets, or performing periodic garbage collection, or whatever. These are only useful when the main program is running, and it's okay to kill them off once the other, non-daemon, threads have exited.
Without daemon threads, you'd have to keep track of them, and tell them to exit, before your program can completely quit. By setting them as daemon threads, you can let them run and forget about them, and when your program quits, any daemon threads are killed automatically.
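In code, the flag is just an attribute set before start(); a small runnable sketch of the keepalive idea:

```python
import threading
import time

def keepalive():
    # Background chore: only useful while the main program is running.
    while True:
        time.sleep(0.1)  # stand-in for sending a keepalive packet

t = threading.Thread(target=keepalive)
t.daemon = True  # must be set before start()
t.start()

print(t.daemon, t.is_alive())  # True True
# When the main thread finishes here, the interpreter exits immediately
# instead of waiting forever for keepalive() to return.
```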
|
Let's say you're making some kind of dashboard widget. As part of this, you want it to display the unread message count in your email box. So you make a little thread that will:
1. Connect to the mail server and ask how many unread messages you have.
2. Signal the GUI with the updated count.
3. Sleep for a little while.
When your widget starts up, it would create this thread, designate it a daemon, and start it. Because it's a daemon, you don't have to think about it; when your widget exits, the thread will stop automatically.
|
Daemon Threads Explanation
|
[
"",
"python",
"multithreading",
"daemon",
"python-multithreading",
""
] |
I am searching for an open source Java library to generate thumbnails for a given URL. I need to bundle this capability, rather than call out to external services, such as [Amazon](http://aws.amazon.com/ast/) or [websnapr](http://www.websnapr.com/).
<http://www.webrenderer.com/> was mentioned in this post: [Server generated web screenshots](https://stackoverflow.com/questions/119116/server-generated-web-screenshots#119264), but it is a commercial solution.
I'm hoping for a Java based solution, but may need to look into executing an external process such as [khtml2png](http://khtml2png.sourceforge.net/index.php?page=faq), or integrating something like [html2ps](http://user.it.uu.se/~jan/html2ps.html).
Any suggestions?
|
The first thing that comes to mind is using AWT to capture a screen grab (see code below). You could look at capturing the [JEditorPane](http://java.sun.com/javase/6/docs/api/javax/swing/JEditorPane.html), the [JDIC](https://jdic.dev.java.net/) [WebBrowser](https://jdic.dev.java.net/nonav/documentation/javadoc/jdic/org/jdesktop/jdic/browser/package-summary.html) control or the [SWT](http://www.eclipse.org/swt/) [Browser](http://help.eclipse.org/stable/nftopic/org.eclipse.platform.doc.isv/reference/api/org/eclipse/swt/browser/package-summary.html) (via the [AWT embedding support](http://help.eclipse.org/stable/nftopic/org.eclipse.platform.doc.isv/reference/api/org/eclipse/swt/awt/package-summary.html)). The latter two embed native browsers (IE, Firefox), so introduce dependencies; the JEditorPane HTML support stopped at HTML 3.2. It may be that none of these will work on a headless system.
```
import java.awt.Component;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;
import javax.swing.JLabel;
public class Capture {
private static final int WIDTH = 128;
private static final int HEIGHT = 128;
private BufferedImage image = new BufferedImage(WIDTH, HEIGHT,
BufferedImage.TYPE_INT_RGB);
public void capture(Component component) {
component.setSize(image.getWidth(), image.getHeight());
Graphics2D g = image.createGraphics();
try {
component.paint(g);
} finally {
g.dispose();
}
}
private BufferedImage getScaledImage(int width, int height) {
BufferedImage buffer = new BufferedImage(width, height,
BufferedImage.TYPE_INT_RGB);
Graphics2D g = buffer.createGraphics();
try {
g.drawImage(image, 0, 0, width, height, null);
} finally {
g.dispose();
}
return buffer;
}
public void save(File png, int width, int height) throws IOException {
ImageIO.write(getScaledImage(width, height), "png", png);
}
public static void main(String[] args) throws IOException {
JLabel label = new JLabel();
label.setText("Hello, World!");
label.setOpaque(true);
Capture cap = new Capture();
cap.capture(label);
cap.save(new File("foo.png"), 64, 64);
}
}
```
|
You're essentially asking for a complete rendering engine accessible by Java. Personally, I would save myself the hassle and call out to a child process.
Otherwise, I ran into this pure Java browser: [Lobo](http://lobobrowser.org/java-browser.jsp)
|
Open source Java library to produce webpage thumbnails server-side
|
[
"",
"java",
"open-source",
"thumbnails",
""
] |
In VB.NET (or C#) how can I determine programmatically if a public variable in class helper.vb is used anywhere within a project?
|
From [MSDN](http://msdn.microsoft.com/en-us/library/aa300858(VS.71).aspx)
The Find object allows you to search for and replace text in places of the environment that support such operations, such as the Code editor.
It is intended primarily for macro recording purposes. The editor's macro recording mechanism uses Find rather than TextSelection.FindPattern so that you can discover the global find functionality, and because it generally is more useful than using the TextSelection Object for such operations as Find-in-files.
If the search operation is asynchronous, such as Find All, then the **FindDone** Event occurs when the operation completes.
```
Sub ActionExample()
Dim objFind As Find = objTextDoc.DTE.Find
' Set the find options.
objFind.Action = vsFindAction.vsFindActionFindAll
objFind.Backwards = False
objFind.FilesOfType = "*.vb"
objFind.FindWhat = "<Variable>"
objFind.KeepModifiedDocumentsOpen = False
objFind.MatchCase = True
objFind.MatchInHiddenText = True
objFind.MatchWholeWord = True
objFind.PatternSyntax = vsFindPatternSyntax.vsFindPatternSyntaxLiteral
objFind.ResultsLocation = vsFindResultsLocation.vsFindResultsNone
objFind.SearchPath = "c:\<Your>\<Project>\<Path>"
objFind.SearchSubfolders = False
objFind.Target = vsFindTarget.vsFindTargetCurrentDocument
' Perform the Find operation.
objFind.Execute()
End Sub
<System.ContextStaticAttribute()> _
Public WithEvents FindEvents As EnvDTE.FindEvents
Public Sub FindEvents_FindDone(ByVal Result As EnvDTE.vsFindResult, _
ByVal Cancelled As Boolean) _
Handles FindEvents.FindDone
    Select Case Result
        Case vsFindResult.vsFindResultFound
            'Found!
        Case Else
            'Not found
    End Select
End Sub
```
|
Find all References is your friend.
|
How do you check if a variable is used in a project programmatically?
|
[
"",
"c#",
".net",
"vb.net",
"add-in",
"envdte",
""
] |
After reading this answer:
[best way to pick a random subset from a collection?](https://stackoverflow.com/questions/136474/best-way-to-pick-a-random-subset-from-a-collection#136513)
It got me wondering, how does one pick a random seed in Java?
And don't say use System.currentTimeMillis() or System.nanoTime(). Read the article to see why not.
That's a hard question, but let me make it harder. Let's say you need to generate a random seed without connecting to the internet, without using user input (IE, there's no gui), and it has to be cross platform (therefore no JNI to access hardware).
Is there some JVM variables we can monitor as a source of our randomness?
Can this be done? Or is it impossible?
|
Take a look at [Uncommons Maths](https://uncommons-maths.dev.java.net) (full disclosure: I wrote it). It should solve most of the problems you'll ever have with random numbers in Java.
Even if you don't use it, you should be able to get some ideas from the various [SeedGenerator](https://uncommons-maths.dev.java.net/nonav/api/org/uncommons/maths/random/SeedGenerator.html) implementations it provides. Basically, it defaults to using [/dev/random](http://en.wikipedia.org/wiki/Urandom). If that doesn't exist (e.g. on Windows), it either tries to download data from [random.org](http://www.random.org) or it uses [SecureRandom.generateSeed](http://java.sun.com/javase/6/docs/api/java/security/SecureRandom.html#generateSeed(int)).
I think SecureRandom.generateSeed is the best that you can do without relying on anything platform specific or on the Internet.
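A sketch of that last option — packing the bytes from SecureRandom.generateSeed into a 64-bit long suitable for seeding java.util.Random (note that generateSeed may block briefly while the OS gathers entropy):

```java
import java.security.SecureRandom;
import java.util.Random;

public class SeedDemo {
    // Derive a 64-bit seed from the platform's strongest entropy source.
    static long strongSeed() {
        byte[] bytes = new SecureRandom().generateSeed(8);
        long seed = 0L;
        for (byte b : bytes) {
            seed = (seed << 8) | (b & 0xFFL);
        }
        return seed;
    }

    public static void main(String[] args) {
        Random rng = new Random(strongSeed());
        System.out.println(rng.nextInt(100)); // some value in [0, 100)
    }
}
```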
|
Um, that article says that 32-bit seeds are bad, but 64-bit seeds are good. System.currentTimeMillis() is a 64-bit seed.
|
What is a cross platform way to select a random seed in Java?
|
[
"",
"java",
"random",
"cross-platform",
"random-seed",
""
] |
What is the slickest way to initialize an array of dynamic size in C# that you know of?
This is the best I could come up with
```
private bool[] GetPageNumbersToLink(IPagedResult result)
{
if (result.TotalPages <= 9)
return new bool[result.TotalPages + 1].Select(b => true).ToArray();
...
```
|
use [Enumerable.Repeat](http://msdn.microsoft.com/en-us/library/bb348899%28v=vs.110%29.aspx)
```
Enumerable.Repeat(true, result.TotalPages + 1).ToArray()
```
|
If by 'slickest' you mean fastest, I'm afraid that **Enumerable.Repeat** may be 20x slower than a **for** loop.
See <http://dotnetperls.com/initialize-array>:
```
Initialize with for loop: 85 ms [much faster]
Initialize with Enumerable.Repeat: 1645 ms
```
So use Dotnetguy's SetAllValues() method.
|
C# Array initialization - with non-default value
|
[
"",
"c#",
".net",
""
] |
Is there any way in the SQL language or in MySQL (or another DBMS) to transfer a value from one cell to another? For example, say there is a table called user\_cars with the following structure:
```
|id| |user_name| |num_cars|
```
Bob has 5 cars, and John has 3 cars. Is there any way to in one query subtract 2 cars from Bob and add 2 to John? I know this can be done with two update queries, but I'd just like to know if there was a more efficient way.
|
For Oracle you could do this. I don't know if there is an exact equivalent in MySQL. Obviously this particular statement is very specific to the example you stated.
```
UPDATE user_cars
SET num_cars = num_cars +
CASE WHEN user_name='Bob' THEN -2
WHEN user_name='John' THEN +2
END
WHERE user_name IN ( 'Bob', 'John' )
```
|
This will work, but it is not pleasant:
```
UPDATE USER_CARS UC
SET
NUM_CARS = NUM_CARS + CASE WHEN UC.USER_NAME = 'Bob'
THEN -2 --take from bob
WHEN UC.USER_NAME = 'John'
THEN 2 --give to John
ELSE 0 --no change for anybody else
END
```
|
Any way to transfer value from one cell to another?
|
[
"",
"sql",
"mysql",
"database",
""
] |
What is the best way to determine available bandwidth in .NET?
We have users that access business applications from various remote access points, wired and wireless and at times the bandwidth can be very low based on where the user is. When the applications appear to be running slow, the issue could be due to low bandwidth and not some other issue.
I would like to be able to run some kind of service that would warn users whenever the available bandwidth dips below a specific threshold.
Any thoughts?
|
Not beyond the obvious approach of downloading a file of a known size and timing how long it takes. The disadvantage of that is that you'd need to waste a lot of bandwidth to do it. Also, if you want to alert when throughput drops below a threshold, you'll have to run the test more or less continuously.
IMHO, I'd live with poor performance in some locations, given that you can't do anything about it if it does occur.
Sorry.
|
There's no easy way to measure bandwidth without actually using it - which of course will starve the applications. A couple of points to bear in mind though:
1) Is it actually bandwidth that's the problem, or latency? You can measure latency in a less intrusive manner than bandwidth.
2) Are the applications all run from the same server (or at least the same network)? You may find that users will have a good connection to some areas of the net but not others. (It's likely that the last mile will be the limiting factor, but it's not *always* the case.)
|
Determining available bandwidth
|
[
"",
"c#",
".net",
""
] |
I have a large int[] array and a much smaller int[] array. I want to fill up the large array with values from the small array, by repeat copying the small array into the large array until it is full (so that large[0] = large[13] = large[26] ... = small[0] etc.). I already have a simple method:
```
int iSource = 0;
for (int i = 0; i < destArray.Length; i++)
{
if (iSource >= sourceArray.Length)
{
iSource = 0; // reset if at end of source
}
destArray[i] = sourceArray[iSource++];
}
```
But I need something more elegant, and hopefully faster.
|
Interestingly the winning answer is the slowest with the provided source array!
The solution I was going to propose was
```
for (int i = 0; i < destArray.Length; i++)
{
destArray[i] = sourceArray[i%sourceArray.Length];
}
```
but when I tested the performance over 100,000 iterations using the inputs in the question it performed worse than the questioner's loop.
Here is the output from my little test app:
```
array copy 164ms (Nelson LaQuet's code)
assign copy 77ms (MusiGenesis code)
assign mod copy 161ms (headsling's code)
```
|
Have your loop work using the `Array.Copy()` overload that lets you copy from one array into a particular index in the destination array.
```
if (sourceArray.Length == 0) return; // don't get caught in infinite loop
int idx = 0;
while ((idx + sourceArray.Length) < destArray.Length) {
Array.Copy( sourceArray, 0, destArray, idx, sourceArray.Length);
idx += sourceArray.Length;
}
Array.Copy( sourceArray, 0, destArray, idx, destArray.Length - idx);
```
|
What's the best way to copy/fill a large array with a smaller array in C#?
|
[
"",
"c#",
"arrays",
".net-2.0",
""
] |
Given a table structure like this:
```
CREATE TABLE `user` (
`id` int(10) unsigned NOT NULL auto_increment,
`name` varchar(32) NOT NULL,
`username` varchar(16) NOT NULL,
`password` char(32) NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `username` (`username`)
);
```
Is there any use in using the LIMIT keyword when searching by username, or is the DB smart enough to know that there can only possibly be one result, and therefore stop searching once it's found one?
```
SELECT * FROM `user` WHERE `username` = 'nick';
-- vs --
SELECT * FROM `user` WHERE `username` = 'nick' LIMIT 1;
```
---
*Update:* Thanks for the answers, they've been enlightening. It seems like, even though it's unnecessary, putting `LIMIT 1` on the query doesn't hurt, and probably increases readability (you don't have to go looking into the DB schema to know that only one is going to be returned). Special shoutout for JR's answer - I didn't even know you could do that with indices.
Also, there's a similar question I've found [here](https://stackoverflow.com/questions/34488/does-limiting-a-query-to-one-record-improve-performance), which might also help.
|
I've always been told and read that you should include the `LIMIT` every time you only want one result. This just tells the DB that it should stop, no matter what. In your case you're probably right that it doesn't make a difference, but I think it's better to always include it than to decide each time and leave it out the one time you need it.
|
There is no need to use LIMIT.
However... If you have a HUGE HUGE HUGE cardinality on the table (and you're worried about speed), you might consider not using the special UNIQUE constraint in the DB and manage it from your application. Then, you can specify an INDEX on only the first few characters of your username field. This will drastically reduce the size of your index, being sure the whole thing fits into RAM, and possibly speed up your queries.
So, you might try:
```
CREATE TABLE `user` (
`id` int(10) unsigned NOT NULL auto_increment,
`name` varchar(32) NOT NULL,
`username` varchar(16) NOT NULL,
`password` char(32) NOT NULL,
PRIMARY KEY (`id`),
KEY `username` (`username`(4))
);
```
as an alternative if your first attempt turns out to be too slow. This means, of course, you have to check for duplicate user names before inserting, but you'll probably have to do that anyway to tell the user they have to pick another user name.
Depending on what DB server and engine you're using, it may also be faster to specify fixed width fields for all your strings also. Instead of *varchar*, use *char*.
|
Using LIMIT when searching by a unique field
|
[
"",
"sql",
"indexing",
""
] |
I have an SP that takes 10 seconds to run about 10 times (about a second every time it is run). The platform is ASP.NET, and the server is SQL Server 2005. I have indexed the table (not only on the PK), and that is not the issue. Some caveats:
* usp\_SaveKeyword is not the issue. I commented out that entire SP and it made no difference.
* I set @SearchID to 1 and the time was significantly reduced, only taking about 15ms on average for the transaction.
* I commented out the entire stored procedure except the insert into tblSearches and strangely it took more time to execute.
Any ideas of what could be going on?
```
set ANSI_NULLS ON
go
ALTER PROCEDURE [dbo].[usp_NewSearch]
@Keyword VARCHAR(50),
@SessionID UNIQUEIDENTIFIER,
@time SMALLDATETIME = NULL,
@CityID INT = NULL
AS
BEGIN
SET NOCOUNT ON;
IF @time IS NULL SET @time = GETDATE();
DECLARE @KeywordID INT;
EXEC @KeywordID = usp_SaveKeyword @Keyword;
PRINT 'KeywordID : '
PRINT @KeywordID
DECLARE @SearchID BIGINT;
SELECT TOP 1 @SearchID = SearchID
FROM tblSearches
WHERE SessionID = @SessionID
AND KeywordID = @KeywordID;
IF @SearchID IS NULL BEGIN
INSERT INTO tblSearches
(KeywordID, [time], SessionID, CityID)
VALUES
(@KeywordID, @time, @SessionID, @CityID)
SELECT Scope_Identity();
END
ELSE BEGIN
SELECT @SearchID
END
END
```
|
Why are you using `top 1 @SearchID` instead of `max (SearchID)` or `where exists` in this query? `top` requires you to run the query and retrieve the first row from the result set. If the result set is large this could consume quite a lot of resources before you get out the final result set.
```
SELECT TOP 1 @SearchID = SearchID
FROM tblSearches
WHERE SessionID = @SessionID
AND KeywordID = @KeywordID;
```
I don't see any obvious reason for this - either of the aforementioned constructs should get you something semantically equivalent to this with a very cheap index lookup. Unless I'm missing something you should be able to do something like
```
select @SearchID = isnull (max (SearchID), -1)
from tblSearches
where SessionID = @SessionID
and KeywordID = @KeywordID
```
This ought to be fairly efficient and (unless I'm missing something) semantically equivalent.
|
Enable "Display Estimated Execution Plan" in SQL Management Studio - where does the execution plan show you spending the time? It'll guide you on the heuristics being used to optimize the query (or not in this case). Generally the "fatter" lines are the ones to focus on - they're ones generating large amounts of I/O.
Unfortunately even if you tell us the table schema, only you will be able to see actually how SQL chose to optimize the query. One last thing - have you got a clustered index on tblSearches?
|
Stored Procedure; Insert Slowness
|
[
"",
"sql",
"performance",
"stored-procedures",
""
] |
When a Java VM crashes with an EXCEPTION\_ACCESS\_VIOLATION and produces an hs\_err\_pidXXX.log file, what does that indicate? The error itself is basically a null pointer exception. Is it always caused by a bug in the JVM, or are there other causes like malfunctioning hardware or software conflicts?
Edit: there is a native component, this is an SWT application on win32.
|
Most of the times this is a bug in the VM.
But it can be caused by any native code (e.g. JNI calls).
The hs\_err\_pidXXX.log file should contain some information about where the problem happened.
You can also check the "Heap" section inside the file. Many of the VM bugs are caused by garbage collection (especially in older VMs). This section should show you if the garbage collector was running at the time of the crash. It also shows whether some sections of the heap are filled (the percentage numbers).
The VM is also much more likely to crash in a low memory situation than otherwise.
|
Answer found!
I had the same error and noticed that others who provided the contents of the pid log file were running 64-bit Windows, just like me. At the end of the log file, it included the PATH statement. There I could see C:\Windows\SysWOW64 was incorrectly listed ahead of %SystemRoot%\system32. Once I corrected it, the exception disappeared.
|
Possible causes of Java VM EXCEPTION_ACCESS_VIOLATION?
|
[
"",
"java",
"null",
"jvm",
"crash",
"swt",
""
] |
Assume I have a class foo, and wish to use a std::map to store some boost::shared\_ptrs, e.g.:
```
class foo;
typedef boost::shared_ptr<foo> foo_sp;
typedef std::map<int, foo_sp> foo_sp_map;
foo_sp_map m;
```
If I add a new foo\_sp to the map but the key used already exists, will the existing entry be deleted? For example:
```
foo_sp_map m;
void func1()
{
foo_sp p(new foo);
m[0] = p;
}
void func2()
{
foo_sp p2(new foo);
m[0] = p2;
}
```
Will the original pointer (p) be freed when it is replaced by p2? I'm pretty sure it will be, but I thought it was worth asking/sharing.
|
First off, your question title says boost::auto\_ptr, but you actually mean boost::shared\_ptr
And yes, the original pointer will be freed (if there are no further shared references to it).
|
It depends on what happens in your ... section
Your container class contains *copies* of instances of foo\_sp. When you execute `m[0] = p2;`, the copy of `p` that was originally in that place goes out of scope. At that time the object it points to will be deleted *if no other foo\_sp refers to it*.
If the copy that was declared in the line `foo_sp p(new foo);` is still around, then the memory will not be deallocated. The entry will be deleted once all references to it have been removed.
|
boost::shared_ptr standard container
|
[
"",
"c++",
"smart-pointers",
"std",
"stdmap",
""
] |
I'm trying to create a c++ library for use on windows/MSVC.
My problem is that it seems that in order to link properly, I need to distribute a bunch of different versions, linked against different versions of MSVC's c++ runtimes - single and multi-threaded, debug and release, different compiler versions, various other security and other options.
I'd love to just distribute maybe two, 32 bit and 64 bit.
My idea is to maybe use a different new operator (say, mynew) and custom allocators for all my STL types. When creating the lib, /nodefaultlib. Then, when linking in from a parent project, require them to thunk mynew to new, and my stl allocator to the standard one (or one of their choosing). I guess I'd need to do delete, and a few other functions. Naturally I'd provide an example thunking implementation with the library, but this would hopefully save everyone a lot of headache.
Is this possible? Has anyone ever tried this? Is there a best practices for library creation/distribution on windows/MSVC?
|
You want static linking, as a general answer.
Quick note on Chris' answer (don't want to de-boost cause it's mostly good, but...):
DO NOT link to msvcrt.dll (the unversioned one); this is the OS-specific version DLL, and if you link to it, your app probably will not work on other versions of Windows. You should always be linking to msvcrt##.dll, as far as I know. The DDK may contain a lib for it, but don't link to it unless you really know what you're doing.
|
Link statically to C++ runtime library:
1. Open project properties.
2. Go to Configuration Properties | C/C++ | Code Generation section.
3. Set Runtime Library to Multi-threaded (/MT).
|
Building windows c++ libraries without a runtime?
|
[
"",
"c++",
"visual-c++",
"runtime",
""
] |
I need to fire an event when the mouse is above a PictureBox with the mouse button already clicked and held down.
Problems:
The MouseDown and MouseEnter event handlers do not work together very well.
For instance, once a mouse button is clicked and held down, C# will fire the MouseDown event handler, but when the cursor moves over the PictureBox the MouseEnter event does not fire until the mouse button is released.
|
When the mouse is pressed down most controls will then *Control.Capture* the mouse input. This means that all *MouseMove* events are sent to the original control that captured rather than the control the mouse happens to be over. This continues until the mouse loses capture which typically happens on the mouse up.
If you really need to know when the mouse is over your control even when another control has captured mouse input then you only really have one way. You need to snoop the windows messages destined for other controls inside your application. To do that you need add a message filter ...
```
Application.AddMessageFilter(myFilterClassInstance);
```
Then you need to implement the IMessageFilter on a suitable class...
```
public class MyFilterClass : IMessageFilter
{
    private const int WM_MOUSEMOVE = 0x0200;

    public bool PreFilterMessage(ref Message m)
    {
        if (m.Msg == WM_MOUSEMOVE)
        {
            // Check if the mouse is over my picture box!
        }
        return false; // let the message through to its target control
    }
}
```
Then you watch for mouse move events and check if they are over your picture box and do whatever it is you want to do.
|
**Mouse events**
Use the MouseDown event to just detect a down press of a mouse button and set this.Capture to true so that you then get other mouse events, even when the mouse leaves the control (i.e. you won't get a MouseLeave event because you captured the mouse). Release capture by setting this.Capture to false when MouseUp occurs.
**Just checking the state of the mouse**
This may not be relevant, but you can check `System.Windows.Control.MousePosition` and see if it is in the `PictureBox.ClientRectangle`, then check the `Control.MouseButtons` static property for which buttons might be down at any time.
As in:
```
if (pictureBox.ClientRectangle.Contains(pictureBox.PointToClient(Control.MousePosition)))
{
if ((Control.MouseButtons & MouseButtons.Left) != 0)
{
// Left button is down.
}
}
```
|
How can I detect a held down mouse button over a PictureBox?
|
[
"",
"c#",
"winforms",
"mouse",
"picturebox",
""
] |
I have an array of items that are time sensitive. After an amount of time, the last item needs to fall off and a new item is put at the beginning.
What is the best way to do this?
|
I would suggest using a queue, just a special instance of an array or list. When your timed event occurs, pop the last item from the queue, and then push your new item on.
|
Probably the easiest way to do this with an array is to use a circular index. Rather than always looking at `array[n]`, you reference `array[cIndex]`, where cIndex refers to the current item and is advanced modulo the array size (`cIndex % arraySize`).
When you choose to drop the oldest item in the array, you would simply reference the element located at ((cIndex + (arraySize - 1)) % arraySize).
Alternatively, you could use a linkedList approach.
|
How to shift items in an array?
|
[
"",
"c#",
"data-structures",
""
] |
Is there a way to identify unused attributes/methods in Visual C++ 2008 Professional? If it's not possible by default, recommendations of 3rd-party tools are also much appreciated.
Thanks,
Florian
**Edit:** nDepend only works for .NET assemblies. I'm looking for something that can be used with native C++ applications.
|
Try [PC-Lint](http://www.gimpel.com/). It's pretty good at finding redundant code.
I haven't tried version 9 yet. Version 8 does take some time to configure.
Try the online interactive demo.
|
I have not personally used their productivity tools (I use their windows control suit), but it looks like DevExpress has a C++ refactor'er called [Refactor! for C++](http://www.devexpress.com/Products/Visual_Studio_Add-in/RefactorCPP/). I didn't immediately spot the features that you are looking for, but maybe they have it?
|
How to find unused attributes/methods in Visual C++ 2008
|
[
"",
"c++",
"visual-studio",
"visual-studio-2008",
"visual-c++",
"static-analysis",
""
] |
What features of C++ should be avoided in embedded systems?
Please classify the answer by reason such as:
* memory usage
* code size
* speed
* portability
EDIT: Lets' use an ARM7TDMI with 64k ram as a target to control the scope of the answers.
|
**RTTI and Exception Handling:**
* Increases code-size
* Decreases performance
* Can often be replaced by cheaper mechanisms or a better software-design.
**Templates:**
* Be careful with them if code size is an issue. If your target CPU has no instruction cache, or only a very tiny one, templates may reduce performance as well (they tend to bloat code if used without care). On the other hand, clever meta-programming can decrease code size too. There is no clear-cut answer on this.
**Virtual functions and inheritance:**
* These are fine for me. I write almost all of my embedded code in C. That does not stop me from using function-pointer tables to mimic virtual functions. They never became a performance problem.
|
Choosing to avoid certain features should always be driven by quantitative analysis of the behavior of *your* software, on *your* hardware, with *your* chosen toolchain, under the constraints *your* domain entails. There are a lot of conventional wisdom "don'ts" in C++ development which are based on superstition and ancient history rather than hard data. Unfortunately, this often results in a lot of extra workaround code being written to avoid using features that someone, somewhere, had a problem with once upon a time.
|
C++ usage in embedded systems
|
[
"",
"c++",
"embedded",
""
] |
This is a part algorithm-logic question (how to do it), part implementation question (how to do it best!). I'm working with Django, so I thought I'd share with that.
In Python, it's worth mentioning that the problem is somewhat related to [how-do-i-use-pythons-itertoolsgroupby](https://stackoverflow.com/questions/773/how-do-i-use-pythons-itertoolsgroupby).
Suppose you're given two Django Model-derived classes:
```
from django.db import models
class Car(models.Model):
    mods = models.ManyToManyField('Mods')
```
and
```
from django.db import models
class Mods(models.Model):
...
```
How does one get a list of Cars, grouped by Cars with a common set of Mods?
I.e. I want to get a list like so:
```
Cars_by_common_mods = [
{ mods: { 'a' }, cars: { 'W1', 'W2' } },
{ mods: { 'a', 'b' }, cars: { 'X1', 'X2', 'X3' }, },
{ mods: { 'b' }, cars: { 'Y1', 'Y2' } },
{ mods: { 'a', 'b', 'c' }, cars: { 'Z1' } },
]
```
I've been thinking of something like:
```
def cars_by_common_mods():
cars = Cars.objects.all()
mod_list = []
for car in cars:
        mod_list.append({'car': car, 'mods': list(car.mods.all())})
ret = []
    for key, mods_group in groupby(mod_list, lambda x: set(x['mods'])):
ret.append(mods_group)
return ret
```
However, that doesn't work because (perhaps among other reasons) the groupby doesn't seem to group by the mods sets. I guess the mod\_list has to be sorted to work with groupby. All to say, I'm confident there's something simple and elegant out there that will be both enlightening and illuminating.
*Cheers & thanks!*
|
Have you tried sorting the list first? The algorithm you proposed should work, albeit with lots of database hits.
```
import itertools
cars = [
{'car': 'X2', 'mods': [1,2]},
{'car': 'Y2', 'mods': [2]},
{'car': 'W2', 'mods': [1]},
{'car': 'X1', 'mods': [1,2]},
{'car': 'W1', 'mods': [1]},
{'car': 'Y1', 'mods': [2]},
{'car': 'Z1', 'mods': [1,2,3]},
{'car': 'X3', 'mods': [1,2]},
]
cars.sort(key=lambda car: car['mods'])
cars_by_common_mods = {}
for k, g in itertools.groupby(cars, lambda car: car['mods']):
cars_by_common_mods[frozenset(k)] = [car['car'] for car in g]
print cars_by_common_mods
```
Now, about those queries:
```
import collections
import itertools
from operator import itemgetter
from django.db import connection
cursor = connection.cursor()
cursor.execute('SELECT car_id, mod_id FROM someapp_car_mod ORDER BY 1, 2')
cars = collections.defaultdict(list)
for row in cursor.fetchall():
cars[row[0]].append(row[1])
# Here's one I prepared earlier, which emulates the sample data we've been working
# with so far, but using the car id instead of the previous string.
cars = {
1: [1,2],
2: [2],
3: [1],
4: [1,2],
5: [1],
6: [2],
7: [1,2,3],
8: [1,2],
}
sorted_cars = sorted(cars.iteritems(), key=itemgetter(1))
cars_by_common_mods = []
for k, g in itertools.groupby(sorted_cars, key=itemgetter(1)):
cars_by_common_mods.append({'mods': k, 'cars': map(itemgetter(0), g)})
print cars_by_common_mods
# Which, for the sample data gives me (reformatted by hand for clarity)
[{'cars': [3, 5], 'mods': [1]},
{'cars': [1, 4, 8], 'mods': [1, 2]},
{'cars': [7], 'mods': [1, 2, 3]},
{'cars': [2, 6], 'mods': [2]}]
```
Now that you've got your lists of car ids and mod ids, if you need the complete objects to work with, you could do a single query for each to get a complete list for each model and create a lookup `dict` for those, keyed by their ids - then, I believe, Bob is your proverbial father's brother.
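An alternative that avoids the sort-before-groupby requirement entirely is to key a dict by a frozenset of each car's mods (same sample data as above; this is a sketch, not tied to the Django ORM):

```python
import collections

# car id -> list of mod ids, as fetched earlier
cars = {
    1: [1, 2], 2: [2], 3: [1], 4: [1, 2],
    5: [1], 6: [2], 7: [1, 2, 3], 8: [1, 2],
}

# frozenset is hashable, so it can serve as a dict key for grouping
groups = collections.defaultdict(list)
for car_id, mods in cars.items():
    groups[frozenset(mods)].append(car_id)

cars_by_common_mods = [
    {'mods': sorted(mods), 'cars': sorted(ids)}
    for mods, ids in groups.items()
]
```

Since no sorting of the input is needed, this is a single O(n) pass over the cars.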
|
check [regroup](http://docs.djangoproject.com/en/dev//ref/templates/builtins/#regroup). it's only for templates, but i guess this kind of classification belongs to the presentation layer anyway.
|
Django/Python - Grouping objects by common set from a many-to-many relationships
|
[
"",
"python",
"django",
"algorithm",
"puzzle",
""
] |
Been using **PHP/MySQL** for a little while now, and I'm wondering if there are any specific advantages (performance or otherwise) to using `mysql_fetch_object()` vs `mysql_fetch_assoc()` / `mysql_fetch_array()`.
|
Performance-wise it doesn't matter what you use. The difference is that mysql\_fetch\_object returns an object:
```
while ($row = mysql_fetch_object($result)) {
echo $row->user_id;
echo $row->fullname;
}
```
mysql\_fetch\_assoc() returns an associative array:
```
while ($row = mysql_fetch_assoc($result)) {
echo $row["userid"];
echo $row["fullname"];
}
```
and mysql\_fetch\_array() returns an array:
```
while ($row = mysql_fetch_array($result)) {
echo $row[0];
echo $row[1] ;
}
```
|
`mysql_fetch_array` makes your code difficult to read = a maintenance nightmare. You can't see at a glance what data your object is dealing with. It's slightly faster, but if that matters to you, you are processing so much data that PHP is probably not the right way to go.
`mysql_fetch_object` has some drawbacks, especially if you base a db layer on it.
* Column names may not be valid PHP identifiers, e.g. `tax-allowance` or `user.id` if your database driver gives you the column name as specified in the query. Then you have to start using `{}` all over the place.
* If you want to get a column based on its name, stored in some variable, you also have to use variable properties `$row->{$column_name}`, whereas array syntax is just `$row[$column_name]`.
* Constructors don't get invoked when you might expect if you specify the classname.
* If you don't specify the class name you get a `stdClass`, which is hardly better than an array anyway.
`mysql_fetch_assoc` is the easiest of the three to work with, and I like the distinction this gives in the code between objects and database result rows...
```
$object->property=$row['column1'];
$object->property=$row[$column_name];
foreach($row as $column_name=>$column_value){...}
```
While many OOP fans (and I am an OOP fan) like the idea of turning *everything* into an object, I feel that the associative array is a better model of a row from a database than an object, as in my mind an object is a set of properties with methods to act upon them, whereas the row is just data and should be treated as such without further complication.
|
Mysql results in PHP - arrays or objects?
|
[
"",
"php",
"mysql",
"arrays",
"object",
""
] |
Is it possible to do something like this?
```
var pattern = /some regex segment/ + /* comment here */
/another segment/;
```
Or do I have to use new `RegExp()` syntax and concatenate a string? I'd prefer to use the literal as the code is both more self-evident and concise.
|
Here is how to create a regular expression without using the regular expression literal syntax. This lets you do arbitrary string manipulation before it becomes a regular expression object:
```
var segment_part = "some bit of the regexp";
var pattern = new RegExp("some regex segment" + /*comment here */
segment_part + /* that was defined just now */
"another segment");
```
If you have two regular expression literals, you can in fact concatenate them using this technique:
```
var regex1 = /foo/g;
var regex2 = /bar/y;
var flags = (regex1.flags + regex2.flags).split("").sort().join("").replace(/(.)(?=.*\1)/g, "");
var regex3 = new RegExp(regex1.source + regex2.source, flags);
// regex3 is now /foobar/gy
```
It's just wordier than having expressions one and two as literal strings instead of literal regular expressions.
|
Just randomly concatenating regular expressions **objects** can have some adverse side effects. Use the [RegExp.source](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/RegExp/source) instead:
```
var r1 = /abc/g;
var r2 = /def/;
var r3 = new RegExp(r1.source + r2.source,
(r1.global ? 'g' : '')
+ (r1.ignoreCase ? 'i' : '') +
(r1.multiline ? 'm' : ''));
console.log(r3);
var m = 'test that abcdef and abcdef has a match?'.match(r3);
console.log(m);
// m should contain 2 matches
```
This will also give you the ability to retain the regular expression flags from a previous RegExp using the standard RegExp flags.
[jsFiddle](http://jsfiddle.net/LZf7H/514/)
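The same idea can be wrapped in a small helper. This sketch (the name `concatRegex` is made up here) concatenates any number of regex literals and deduplicates the combined flags:

```javascript
function concatRegex() {
  var source = "";
  var flags = "";
  for (var i = 0; i < arguments.length; i++) {
    var re = arguments[i];
    source += re.source;
    // collect each flag at most once across all inputs
    var f = (re.global ? "g" : "") +
            (re.ignoreCase ? "i" : "") +
            (re.multiline ? "m" : "");
    for (var j = 0; j < f.length; j++) {
      if (flags.indexOf(f.charAt(j)) === -1) flags += f.charAt(j);
    }
  }
  return new RegExp(source, flags);
}

var combined = concatRegex(/ab+/g, /c?d/i); // -> /ab+c?d/gi
```

Using `.source` rather than string concatenation of the regex objects avoids the stray `/` delimiters that `r1 + r2` would produce.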
|
How can I concatenate regex literals in JavaScript?
|
[
"",
"javascript",
"regex",
"syntax",
""
] |
I have 7-8 XML files. Each one is approximately 50 MB in size. What is the best way to merge the files programmatically in C# without getting a System.OutOfMemoryException? So far I have tried reading each file into a StringBuilder and then putting it in an array of StringBuilders, but I still get System.OutOfMemoryException. Any help?
Thank you,
-Nimesh
|
Personally, when I *have* to deal with XML files (forced by threat of physical violence usually), I do this:
1. Load each file into a .NET DataSet via DataSet.ReadXML()
2. Combine the information (via DataSet queries).
3. Write out the combined DataSet to XML via DataSet.WriteXML()
Then I aggressively delete the orginal XML file and wipe the sectors where it existed on the disk to remove the taint. :-)
|
The thing about StringBuilder is that you're still trying to keep the entire contents in memory. You want to keep only a small portion in memory at a time, and that means using file streams. Don't read an entire file into memory; open a stream on it and keep reading from the stream.
The problem with XML is that you can't just append the files to each other: you'll break the tag nesting. So you need to know something about the structure of your XML files so that you have an idea of what to do at each file boundary.
If you have something that works in theory with StringBuilder, but only fails in practice because of memory constraints, you should be able to translate the StringBuilder's .Append() and .AppendLine() method calls into .Write() and .WriteLine() calls for a filestream.
|
Merging big files in C#
|
[
"",
"c#",
""
] |
There are situations, where it is practical to have a type-cast return a null value instead of throwing a ClassCastException. C# has the `as` operator to do this. Is there something equivalent available in Java so you don't have to explicitly check for the ClassCastException?
|
Here's an implementation of as, as suggested by @Omar Kooheji:
```
public static <T> T as(Class<T> clazz, Object o){
if(clazz.isInstance(o)){
return clazz.cast(o);
}
return null;
}
// as(A.class, new Object()) --> null
// as(B.class, new B())      --> B
```
|
I'd think you'd have to roll your own:
```
return (x instanceof Foo) ? (Foo) x : null;
```
EDIT: If you don't want your client code to deal with nulls, then you can introduce a [Null Object](http://en.wikipedia.org/wiki/Null_Object_pattern)
```
interface Foo {
public void doBar();
}
class NullFoo implements Foo {
public void doBar() {} // do nothing
}
class FooUtils {
public static Foo asFoo(Object o) {
return (o instanceof Foo) ? (Foo) o : new NullFoo();
}
}
class Client {
public void process() {
Object o = ...;
Foo foo = FooUtils.asFoo(o);
foo.doBar(); // don't need to check for null in client
}
}
```
|
How to emulate C# as-operator in Java
|
[
"",
"java",
"casting",
""
] |
In terms of quick dynamically typed languages, I'm really starting to like Javascript, as I use it a lot for web projects, especially because it uses the same syntax as Actionscript (flash).
It would be an ideal language for shell scripting, making it easier to move code between the front and back end of a site, with less of the strange syntax of Python.
Is there a good JavaScript interpreter that is easy to install? (I know there's one based on Java, but that would mean installing all the Java stuff just to use it.)
|
Of course, in Windows, the JavaScript interpreter is shipped with the OS.
Just run `cscript` or `wscript` against any .js file.
|
I personally use SpiderMonkey, but here's an extensive list of [ECMAScript shells](http://www.discerning.com/burstproject.org/build/doc/shells.html)
Example spidermonkey install and use on Ubuntu:
```
$ sudo apt-get install spidermonkey
$ js myfile.js
output
$ js
js> var f = function(){};
js> f();
```
|
Javascript interpreter to replace Python
|
[
"",
"javascript",
"shell",
"scripting",
""
] |
For example, imagine a simple mutator that takes a single boolean parameter:
```
void SetValue(const bool b) { my_val_ = b; }
```
Does that `const` actually have any impact? Personally I opt to use it extensively, including parameters, but in this case I wonder if it makes any difference.
I was also surprised to learn that you can omit `const` from parameters in a function declaration but can include it in the function definition, e.g.:
**.h file**
```
void func(int n, long l);
```
**.cpp file**
```
void func(const int n, const long l) { /* ... */ }
```
Is there a reason for this? It seems a little unusual to me.
|
The reason is that `const` for the parameter only applies locally within the function, since it is working on a copy of the data. This means the function signature is really the same anyway.
I personally tend to not use `const` except for reference and pointer parameters. For copied objects it doesn't really matter, although it can be safer as it signals intent within the function. It's really a judgement call. I do tend to use `const_iterator` though when looping on something and I don't intend on modifying it, so I guess to each his own, as long as `const` correctness for reference types is rigorously maintained.
|
> *`const` is pointless when the argument is passed by value since you will
> not be modifying the caller's object.*
Wrong.
It's about self-documenting your code and your assumptions.
If your code has many people working on it and your functions are non-trivial then you should mark `const` any and everything that you can. When writing industrial-strength code, you should always assume that your coworkers are psychopaths trying to get you any way they can (especially since it's often yourself in the future).
Besides, as somebody mentioned earlier, it *might* help the compiler optimize things a bit (though it's a long shot).
|
Does using const on function parameters have any effect? Why does it not affect the function signature?
|
[
"",
"c++",
"parameters",
"constants",
"function-declaration",
"const-correctness",
""
] |
Is there an easy way to cache things when using urllib2 that I am over-looking, or do I have to roll my own?
|
You could use a decorator function such as:
```
class cache(object):
def __init__(self, fun):
self.fun = fun
self.cache = {}
def __call__(self, *args, **kwargs):
key = str(args) + str(kwargs)
try:
return self.cache[key]
except KeyError:
self.cache[key] = rval = self.fun(*args, **kwargs)
return rval
        except TypeError: # in case key isn't a valid key - don't cache
return self.fun(*args, **kwargs)
```
and define a function along the lines of:
```
@cache
def get_url_src(url):
    return urllib2.urlopen(url).read()
```
This is assuming you're not paying attention to HTTP Cache Controls, but just want to cache the page for the duration of the application
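For what it's worth, on modern Python the same idea is one decorator away. A sketch using `functools.lru_cache` (in Python 3, `urllib2` became `urllib.request`):

```python
import functools
import urllib.request

@functools.lru_cache(maxsize=128)
def get_url_src(url):
    # Each distinct URL is fetched at most once per process; repeat calls
    # return the cached body. This still ignores HTTP Cache-Control headers.
    return urllib.request.urlopen(url).read()
```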
|
If you don't mind working at a slightly lower level, httplib2 (<https://github.com/httplib2/httplib2>) is an excellent HTTP library that includes caching functionality.
|
Caching in urllib2?
|
[
"",
"python",
"caching",
"urllib2",
""
] |
Let's say I have to implement a piece of T-SQL code that must return a table as result. I can implement a table-valued function or else a stored procedure that returns a set of rows. What should I use?
In short, what I want to know is:
**Which are the main differences between functions and stored procedures?** What considerations do I have to take into account for using one or the other?
|
If you're likely to want to combine the result of this piece of code with other tables, then obviously a table-valued function will allow you to compose the results in a single SELECT statement.
Generally, there's a hierarchy (View < TV Function < Stored Proc). You can do more in each one, but the ability to compose the outputs, and for the optimizer to get really involved decreases as the functionality increases.
So use whichever one minimally allows you to express your desired result.
|
Functions must be deterministic, and cannot be used to make changes to the database, whereas stored procedures allow you to do inserts and updates, etc.
You should limit your use of functions, since they pose a huge scalability problem for big, complex queries. They become sort of a "black box" for the query optimizer, and you'll see enormous differences in performance between using functions and simply inserting the code into a query.
But they are definitely useful for table-valued returns in very specific cases.
If you need to parse a comma-delimited list, to simulate passing an array to a procedure, a function can turn the list into a table for you. This is common practice with Sql Server 2005, since we can't pass in tables to stored procedures yet (we can with 2008).
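A sketch of such a list-splitting function in T-SQL (the names are illustrative, not from any particular library):

```sql
CREATE FUNCTION dbo.SplitList (@list varchar(8000))
RETURNS @items TABLE (item varchar(100))
AS
BEGIN
    DECLARE @pos int
    WHILE LEN(@list) > 0
    BEGIN
        SET @pos = CHARINDEX(',', @list)
        IF @pos = 0
        BEGIN
            INSERT @items VALUES (@list)
            SET @list = ''
        END
        ELSE
        BEGIN
            INSERT @items VALUES (LEFT(@list, @pos - 1))
            SET @list = SUBSTRING(@list, @pos + 1, LEN(@list))
        END
    END
    RETURN
END
```

Because it is table-valued, the result composes into a query: `SELECT t.* FROM SomeTable t JOIN dbo.SplitList('1,2,3') s ON t.id = s.item`.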
|
Functions vs Stored Procedures
|
[
"",
"sql",
"sql-server",
"database",
"function",
"stored-procedures",
""
] |
This
```
SELECT * FROM SOME_TABLE WHERE SOME_FIELD LIKE '%some_value%';
```
is slower than this
```
SELECT * FROM SOME_TABLE WHERE SOME_FIELD = 'some_value';
```
but what about this?
```
SELECT * FROM SOME_TABLE WHERE SOME_FIELD LIKE 'some_value';
```
My testing indicates the second and third examples are exactly the same. If that's true, my question is, why ever use "=" ?
|
There is a clear difference when you use bind variables, which you should be using in Oracle for anything other than data warehousing or other bulk data operations.
Take the case of:
```
SELECT * FROM SOME_TABLE WHERE SOME_FIELD LIKE :b1
```
Oracle cannot know that the value of :b1 is '%some\_value%', or 'some\_value', etc. until execution time, so it will make an estimation of the cardinality of the result based on heuristics and come up with an appropriate plan that may or may not be suitable for various values of :b1, such as '%A', '%', 'A', etc.
Similar issues can apply with an equality predicate but the range of cardinalities that might result is much more easily estimated based on column statistics or the presence of a unique constraint, for example.
So, personally I wouldn't start using LIKE as a replacement for =. The optimizer is pretty easy to fool sometimes.
|
Check out the [EXPLAIN PLAN](http://www.adp-gmbh.ch/ora/explainplan.html) for both. They generate the same execution plan, so to the database, they're the same thing.
You would use = to test for equality, not similarity. If you're controlling the comparison value as well, then it doesn't make much of a difference. If that's being submitted by a user, then 'apple' and 'apple%' would give you much different results.
|
Any performance impact in Oracle for using LIKE 'string' vs = 'string'?
|
[
"",
"sql",
"performance",
"oracle",
""
] |
This is a pretty straightforward attempt. I haven't been using Python for too long. It seems to work, but I am sure I have much to learn. Someone let me know if I am way off here. It needs to find patterns, write the first line which matches, then add a summary message for the remaining consecutive lines which match the pattern, and return the modified string.
Just to be clear...regex `.*Dog.*` would take
```
Cat
Dog
My Dog
Her Dog
Mouse
```
and return
```
Cat
Dog
::::: Pattern .*Dog.* repeats 2 more times.
Mouse
```

```
#!/usr/bin/env python
#
import re
import types
def remove_repeats (l_string, l_regex):
"""Take a string, remove similar lines and replace with a summary message.
l_regex accepts strings and tuples.
"""
# Convert string to tuple.
if type(l_regex) == types.StringType:
l_regex = l_regex,
    for t in l_regex:
        r = ''
        p = ''
        m = 0
for l in l_string.splitlines(True):
if l.startswith('::::: Pattern'):
r = r + l
else:
if re.search(t, l): # If line matches regex.
m += 1
if m == 1: # If this is first match in a set of lines add line to file.
r = r + l
elif m > 1: # Else update the message string.
p = "::::: Pattern '" + t + "' repeats " + str(m-1) + ' more times.\n'
else:
if p: # Write the message string if it has value.
r = r + p
p = ''
m = 0
r = r + l
if p: # Write the message if loop ended in a pattern.
r = r + p
p = ''
l_string = r # Reset string to modified string.
return l_string
```
|
The rematcher function seems to do what you want:
```
def rematcher(re_str, iterable):
matcher= re.compile(re_str)
in_match= 0
for item in iterable:
if matcher.match(item):
if in_match == 0:
yield item
in_match+= 1
else:
if in_match > 1:
yield "%s repeats %d more times\n" % (re_str, in_match-1)
in_match= 0
yield item
if in_match > 1:
yield "%s repeats %d more times\n" % (re_str, in_match-1)
import sys, re
for line in rematcher(".*Dog.*", sys.stdin):
sys.stdout.write(line)
```
# EDIT
In your case, the final string should be:
```
final_string= '\n'.join(rematcher(".*Dog.*", your_initial_string.split("\n")))
```
|
I updated your code to be a bit more efficient:
```
#!/usr/bin/env python
#
import re
import types
def remove_repeats (l_string, l_regex):
"""Take a string, remove similar lines and replace with a summary message.
l_regex accepts strings/patterns or tuples of strings/patterns.
"""
# Convert string/pattern to tuple.
if not hasattr(l_regex, '__iter__'):
l_regex = l_regex,
ret = []
last_regex = None
count = 0
for line in l_string.splitlines(True):
if last_regex:
            # Previous line matched one of the regexes
if re.match(last_regex, line):
# This one does too
count += 1
continue # skip to next line
elif count > 1:
ret.append("::::: Pattern %r repeats %d more times.\n" % (last_regex, count-1))
count = 0
last_regex = None
ret.append(line)
# Look for other patterns that could match
for regex in l_regex:
if re.match(regex, line):
# Found one
last_regex = regex
count = 1
break # exit inner loop
return ''.join(ret)
```
|
What is best way to remove duplicate lines matching regex from string using Python?
|
[
"",
"python",
"regex",
""
] |
I have an application - more like a utility - that sits in a corner and updates two different databases periodically.
It is a little standalone app that has been built with a Spring Application Context. The context has two Hibernate Session Factories configured in it, in turn using Commons DBCP data sources configured in Spring.
Currently there is no transaction management, but I would like to add some. The update to one database depends on a successful update to the other.
The app does not sit in a Java EE container - it is bootstrapped by a static launcher class called from a shell script. The launcher class instantiates the Application Context and then invokes a method on one of its beans.
What is the 'best' way to put transactionality around the database updates?
I will leave the definition of 'best' to you, but I think it should be some function of 'easy to set up', 'easy to configure', 'inexpensive', and 'easy to package and redistribute'. Naturally FOSS would be good.
|
The best way to distribute transactions over more than one database is: Don't.
Some people will point you to XA but XA (or Two Phase Commit) is a lie (or marketese).
Imagine: After the first phase has told the XA manager that it can send the final commit, the network connection to one of the databases fails. Now what? Timeout? That would leave the other database corrupt. Rollback? Two problems: you can't roll back a commit, and how do you know what happened to the second database? Maybe the network connection failed after it successfully committed the data and only the "success" message was lost?
The best way is to copy the data in a single place. Use a scheme which allows you to abort the copy and continue it at any time (for example, ignore data which you already have or order the select by ID and request only records > MAX(ID) of your copy). Protect this with a transaction. This is not a problem since you're only reading data from the source, so when the transaction fails for any reason, you can ignore the source database. Therefore, this is a plain old single source transaction.
After you have copied the data, process it locally.
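The resumable copy can be as simple as this sketch (table and column names are illustrative):

```sql
-- Pull only rows the local copy does not yet have. Safe to abort and
-- re-run: the MAX(id) watermark makes the copy pick up where it left off.
INSERT INTO local_copy (id, payload)
SELECT s.id, s.payload
FROM   remote_source s
WHERE  s.id > (SELECT COALESCE(MAX(id), 0) FROM local_copy);
```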
|
Setup a transaction manager in your context. Spring docs have examples, and it is very simple. Then when you want to execute a transaction:
```
try {
TransactionTemplate tt = new TransactionTemplate(txManager);
tt.execute(new TransactionCallbackWithoutResult(){
protected void doInTransactionWithoutResult(
TransactionStatus status) {
updateDb1();
updateDb2();
        }
    });
} catch (TransactionException ex) {
// handle
}
```
For more examples, and information perhaps look at this:
[XA transactions using Spring](http://www.javaworld.com/javaworld/jw-04-2007/jw-04-xa.html?page=1)
|
What is the 'best' way to do distributed transactions across multiple databases using Spring and Hibernate
|
[
"",
"java",
"hibernate",
"spring",
"transactions",
"xa",
""
] |
Java supplies standard User Interface guidelines for applications built using Java Swing. The basic guidelines are good, but I really feel the look and feel is really boring and outdated.
Is anyone aware of a publicly available Java User Interface Guide that has better look & feel guidelines than the Sun provided guidelines?
|
Along the line of Chii's answer, I would recommend taking a look at the [Windows Vista User Experience Guidelines](http://msdn.microsoft.com/en-us/library/aa511258.aspx "Windows Vista User Experience Guidelines") for general tips on making user interfaces.
Although the name ("Windows Vista User Experience Guidelines") and source (Microsoft) may suggest that it only contains Windows-centric tips and advice, it does offer good general tips and directions that can be used when designing interfaces for non-Windows applications as well.
The [Design Principles](http://msdn.microsoft.com/en-us/library/aa511328.aspx) sections address some points to keep in mind when designing an effective user interface. For example, bullet three of [How to Design a Great User Experience](http://msdn.microsoft.com/en-us/library/aa511335.aspx) says:
> **Don't be all things to all people** Your
> program is going to be more successful
> by delighting its target users than
> attempting to satisfy everyone.
These are the kinds of tips that apply to designing user interfaces on any platform. Of course, there are also Windows-specific guidelines as well.
I believe one of the biggest reasons why look and feel of Swing applications seems "boring" and "outdated" is due to the platform-independent nature of Java. In order for the graphical user interfaces to work on several different platforms, Java needs to have facilities to adapt the user interface to the different host operating systems.
For example, various platforms have various sizes for windows, buttons, and other visual components, so absolute positioning does not work too well. To combat that problem, Swing uses [Layout Managers](http://java.sun.com/docs/books/tutorial/uiswing/layout/using.html "Using Layout Managers") which (generally) use relative positioning to place the visual components on the screen.
Despite these "limitations" of building graphical user interfaces for Java, I think that using tips from guidelines that are provided by non-Sun sources and non-Java-specific sources can still be a good source of information in designing and implementing an user interface that is effective. After all, designing an user interface is less about programming languages and more about human-machine interaction.
|
The Apple developer guide has a human-computer interface guide: <http://developer.apple.com/documentation/UserExperience/Conceptual/AppleHIGuidelines/XHIGIntro/chapter_1_section_1.html#//apple_ref/doc/uid/TP30000894-TP6>.
Even though it's targeted at the Mac platform, you could learn something from it; it's the reason why so many Mac apps are pleasant to use, as well as aesthetically pleasing!
|
Java User Interface Specification
|
[
"",
"java",
"swing",
"user-interface",
""
] |
Is it possible to sign only part of an applet? Ie, have an applet that pops up no security warnings about being signed, but if some particular function is used (that requires privileges) then use the signed jar?
From what I can tell, some (perhaps most) browsers will pop up the warning for a signed applet even if you don't request privileges at all at execution time. I'd rather avoid that if possible.
|
Try splitting your code into an unsigned jar and a signed jar.
|
In theory you can (signed + unsigned jar), but in practice it will result that your code will be handled as unsigned. The access decision should be made from the thread, not the immediate caller. If the thread contains in the stack a call made from an object from unsigned code, the whole call should be treated as unsigned. If you work around this you've found a bug.
In other words... **No**.
If I'm not being too curious, may I inquire why you want to partially sign your code?
|
Java Applet - Partially Signed?
|
[
"",
"java",
"security",
"applet",
"signed-applet",
""
] |
I learned to use "exists" instead of "in".
BAD
```
select * from table where nameid in (
select nameid from othertable where otherdesc = 'SomeDesc' )
```
GOOD
```
select * from table t where exists (
select nameid from othertable o where t.nameid = o.nameid and otherdesc = 'SomeDesc' )
```
1. The explanation was: "The reason why this is better is because only the matching values will be returned instead of building a massive list of possible results". Does that mean that while the first subquery might return 900 results the second will return only 1?
2. I have had the RDBMS complain: "only the first 1000 rows might be retrieved". Would this second approach solve that problem?
3. What is the scope of the alias in the second subquery? Does the alias only live in the parenthesis?
For example
```
select * from table t where exists (
select nameid from othertable o where t.nameid = o.nameid and otherdesc = 'SomeDesc' )
```
and
```
select nameid from othertable o where t.nameid = o.nameid and otherdesc = 'SomeOtherDesc' )
```
That is, if I use the same alias (`o` for `table` and `othertable`) in the second `exist` will it present any problem with the first `exist`? Or are they independent?
Is this only Oracle-related or it is valid for most RDBMSs?
|
It's specific to each DBMS and depends on the query optimizer. Some optimizers detect the IN clause and translate it.
In all the DBMSes I tested, the alias is only valid inside the parentheses.
BTW, you can rewrite the query as:
```
select t.*
from table t
join othertable o on t.nameid = o.nameid
and o.otherdesc in ('SomeDesc','SomeOtherDesc');
```
And, to answer your questions:
1. Yes
2. Yes
3. Yes
|
You are treading into complicated territory, known as 'correlated sub-queries'. Since we don't have detailed information about your tables and the key structures, some of the answers can only be 'maybe'.
In your initial IN query, the notation would be valid whether or not OtherTable contains a column NameID (and, indeed, whether OtherDesc exists as a column in Table or OtherTable - which is not clear in any of your examples, but presumably is a column of OtherTable). This behaviour is what makes a correlated sub-query into a correlated sub-query. It is also a routine source of angst for people when they first run into it - invariably by accident. Since the SQL standard mandates the behaviour of interpreting a name in the sub-query as referring to a column in the outer query if there is no column with the relevant name in the tables mentioned in the sub-query but there is a column with the relevant name in the tables mentioned in the outer (main) query, no product that wants to claim conformance to (this bit of) the SQL standard will do anything different.
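A sketch of that trap, using the question's table names:

```sql
-- If othertable has NO nameid column, this still parses: nameid in the
-- subquery silently resolves to the OUTER table's column, so the subquery
-- yields a row per row of othertable and the IN test matches every row.
select * from table t where t.nameid in (
    select nameid from othertable );
```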
The answer to your Q1 is "it depends", but given plausible assumptions (NameID exists as a column in both tables; OtherDesc only exists in OtherTable), the results should be the same in terms of the data set returned, but may not be equivalent in terms of performance.
The answer to your Q2 is that in the past, you were using an inferior if not defective DBMS. If it supported EXISTS, then the DBMS might still complain about the cardinality of the result.
The answer to your Q3 as applied to the first EXISTS query is "t is available as an alias throughout the statement, but o is only available as an alias inside the parentheses". As applied to your second example box - with AND connecting two sub-selects (the second of which is missing the open parenthesis when I'm looking at it), then "t is available as an alias throughout the statement and refers to the same table, but there are two different aliases both labelled 'o', one for each sub-query". Note that the query might return no data if OtherDesc is unique for a given NameID value in OtherTable; otherwise, it requires two rows in OtherTable with the same NameID and the two OtherDesc values for each row in Table with that NameID value.
|
SQL - table alias scope
|
[
"",
"sql",
"scope",
"table-alias",
""
] |
Why am I getting a textbox that returns undefined list of variables?
When I run this code:
```
var query = (from tisa in db.TA_Info_Step_Archives
where tisa.ta_Serial.ToString().StartsWith(prefixText)
select tisa.TA_Serial.ToString()).Distinct().Take(Convert.ToInt32(count));
return query.ToList<string>().ToArray();
```
I get this XML file:
```
<string>200700160</string>
<string>200700161</string>
<string>200700162</string>
<string>200700163</string>
<string>200700164</string>
<string>200700170</string>
<string>200700171</string>
<string>200700172</string>
<string>200700173</string>
<string>200700174</string>
<string>200700175</string>
<string>200700176</string>
<string>200700177</string>
<string>200700178</string>
<string>200700179</string>
<string>200700180</string>
<string>200700181</string>
<string>200700182</string>
<string>200700183</string>
<string>200700184</string>
```
BUT, the textbox returns a list of `undefined`....
Help please?
|
I updated my AJAX toolkit to version 1.0.10920, then changed my code to the following:
```
foreach (DataRow dr in dt.Rows)
{
items.SetValue("\"" + dr["somenumber"].ToString() + "\"", i);
i++;
}
```
Late Friday nights with .NET are not fun. I have no life. :-P
|
I tried the code below and it worked for me:
```
items.SetValue("'"+dr["somenumber"]+"'", i);
```
|
Why is my ASP.NET AutoCompleteExtender returning undefined?
|
[
"",
"c#",
"linq",
"web-services",
"autocomplete",
""
] |
I'd like my .exe to have access to a resource string with my svn version. I can type this in by hand, but I'd prefer an automated way to embed this at compile time. Is there any such capability in Visual Studio 2008?
|
I wanted a similar availability and found $Rev$ to be insufficient because it was only updated for a file if that file's revision was changed (which meant it would have to be edited and committed every time: not something I wanted to do.) Instead, I wanted something that was based on the repository's revision number.
For the project I'm working on now, I wrote a Perl script that runs `svnversion -n` from the top-most directory of my working copy and outputs the most recent revision information to a .h file (I actually compare it to a saved revision in a non-versioned file in my working copy so that I'm not overwriting current revision information at every compile, but whether you choose to do so is up to you.) This .h file (or a number of files if necessary, depending on your approach) is referenced both in my application code and in the resource files to get the information where I'd like it.
This script is run as a pre-build step so that everything is up-to-date before the build kicks off and the appropriate files are automatically rebuilt by your build tool.
|
How about using [SubWCRev](http://tortoisesvn.net/docs/nightly/TortoiseSVN_en/tsvn-subwcrev.html), the command-line tool that ships with [TortoiseSVN](http://tortoisesvn.net/)? You create a template file with tokens in it like $WCREV$, $WCDATE$, etc., then have a pre-build step that runs SubWCRev on your template file to create the actual source file that is fed to the compiler.
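A minimal template along those lines might look like this (the `$WCREV$`/`$WCDATE$` token names come from the SubWCRev documentation; the file names are illustrative):

```
// version.h.tmpl -- run as a pre-build step:
//   SubWCRev.exe path\to\working\copy version.h.tmpl version.h
#define SVN_REVISION $WCREV$
#define SVN_DATE     "$WCDATE$"
```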
|
Embedding SVN Revision number at compile time in a Windows app
|
[
"",
"c++",
"windows",
"visual-studio",
"svn",
""
] |
I was writing a (seemingly) straight-forward SQL snippet that drops a column after it makes sure the column exists.
The problem: if the column does NOT exist, the code *inside* the IF clause complains that it can't find the column! Well, *doh*, that's why it's inside the IF clause!
So my question is, why does a piece of code that shouldn't be executed give errors?
Here's the snippet:
```
IF exists (select * from syscolumns
WHERE id=object_id('Table_MD') and name='timeout')
BEGIN
ALTER TABLE [dbo].[Table_MD]
DROP COLUMN timeout
END
GO
```
...and here's the error:
`Error executing SQL script [...]. Invalid column name 'timeout'`
I'm using Microsoft SQL Server 2005 Express Edition.
|
```
IF exists (select * from syscolumns
WHERE id=object_id('Table_MD') and name='timeout')
BEGIN
DECLARE @SQL nvarchar(1000)
SET @SQL = N'ALTER TABLE [dbo].[Table_MD] DROP COLUMN timeout'
EXEC sp_executesql @SQL
END
GO
```
Reason:
When SQL Server compiles the code, it checks that the objects used exist. This check ignores any "IF", "WHILE", etc. constructs and simply checks all objects used in the code.
|
By the way, there is a similar issue in Oracle, and a similar workaround using the "execute immediate" clause.
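A sketch of that Oracle variant (assuming the existence check goes against `user_tab_columns`):

```sql
DECLARE
  n NUMBER;
BEGIN
  SELECT COUNT(*) INTO n
  FROM   user_tab_columns
  WHERE  table_name = 'TABLE_MD' AND column_name = 'TIMEOUT';
  IF n > 0 THEN
    -- Deferred to run time, so compilation no longer needs the column.
    EXECUTE IMMEDIATE 'ALTER TABLE Table_MD DROP COLUMN timeout';
  END IF;
END;
```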
|
Why does a T-SQL block give an error even if it shouldn't even be executed?
|
[
"",
"sql",
"t-sql",
""
] |
IIS is literally sending `<?php ... ?>` code to the browser rather then executing it.
But, only for the root `http://domain.com/index.php` file.
All other .php files in that folder and index.php files in subfolders execute as expected.
How can I get my root index.php code to execute?
---
Update: "index.php" is a Default Document of my Web Site...
[alt text http://img412.imageshack.us/img412/4130/defaultdocumentmt9.gif](http://img412.imageshack.us/img412/4130/defaultdocumentmt9.gif)
|
* [IIS 5.1 does not run PHP properly under root directory, but fine in all other folders](http://forums.devshed.com/iis-97/iis-5-1-does-not-run-php-properly-under-root-163070.html)
* [Running a WordPress blog in site root using IIS](http://www.simmonsconsulting.com/2008/04/21/running-a-wordpress-blog-in-site-root-using-iis/)
UPDATED: I have found a few possible workarounds for PHP 5 and IIS 7. If those solutions are not working, please provide more details about your `index.php`, IIS setup, or try to use IIS 6 compatibility.
* [Problem with PHP Includes on IIS7](http://codingforums.com/showthread.php?t=148637)
* [PHP5 set-up - Relative paths for includes and other file references](http://www.webmasterworld.com/php/3685216.htm)
|
It seems you have properly configured your handlers.
If you're using `<? ... ?>` make sure you have
`short_open_tag = On`
in your php.ini.
|
Why isn't IIS executing the PHP code of my site root index.php file?
|
[
"",
"php",
"iis",
""
] |
I'm trying to write a resolution selection dialog that pops up when a program first starts up. To prevent boring the user, I want to implement the fairly standard feature that you can turn off that dialog with a checkbox, but get it back by holding down the alt key at startup.
Unfortunately, there is no obvious way to ask Java whether a given key is **currently being pressed**. You can only register to be informed of new key presses via a KeyListener, but that doesn't help if the keypress starts before the app launches.
|
```
public class LockingKeyDemo {
static Toolkit kit = Toolkit.getDefaultToolkit();
public static void main(String[] args) {
System.out.println("caps lock2 = "
+ kit.getLockingKeyState(KeyEvent.VK_CAPS_LOCK));
}
}
```
|
```
import java.awt.*;
import java.awt.event.*;
import javax.swing.JFrame;
public class LockingKeyDemo {
static Toolkit kit = Toolkit.getDefaultToolkit();
public static void main(String[] args) {
JFrame frame = new JFrame();
frame.addWindowListener(new WindowAdapter() {
public void windowActivated(WindowEvent e) {
System.out.println("caps lock1 = "
+ kit.getLockingKeyState(KeyEvent.VK_CAPS_LOCK));
try {
Robot robot = new Robot();
robot.keyPress(KeyEvent.VK_CONTROL);
robot.keyRelease(KeyEvent.VK_CONTROL);
} catch (Exception e2) {
System.out.println(e2);
}
System.out.println("caps lock2 = "
+ kit.getLockingKeyState(KeyEvent.VK_CAPS_LOCK));
}
});
frame.addKeyListener(new KeyAdapter() {
public void keyReleased(KeyEvent e) {
System.out.println("caps lock3 = "
+ kit.getLockingKeyState(KeyEvent.VK_CAPS_LOCK));
}
});
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
frame.setSize(200, 200);
frame.setLocationRelativeTo(null);
frame.setVisible(true);
}
}
```
|
How to check for key being held down on startup in Java
|
[
"",
"java",
"swing",
"keyboard",
"key",
"awt",
""
] |
So C# now allows you to use `default(Foo)` to get a recognized "not filled in yet"/empty instance of a class -- I'm not sure if it is exactly the same as `new Foo()` or not. Many library classes also implement a `Foo.Empty` property, which returns a similar instance. And of course any reference type can point to `null`. So really, what's the difference? When is one right or wrong? What's more consistent, or performs better? What tests should I use when checking if an object is conceptually "not ready for prime time"? Not everybody has `Foo.IsNullOrEmpty()`.
|
`default(Foo)` will return null when `Foo` is a class type, zero where `Foo` is a value type (such as int), and an instance of `Foo` with all fields initialized to *their* respective `default()` values where `Foo` is a struct. It was added to the language so that generics could support both value and reference types - more [info at MSDN](http://msdn.microsoft.com/en-us/library/xwth0h0d.aspx)
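A small sketch of that behavior inside a generic method (the helper name is illustrative):

```
// default(T) adapts to whatever T turns out to be.
static T GetDefault<T>() { return default(T); }

// GetDefault<string>()   => null              (reference type)
// GetDefault<int>()      => 0                 (value type)
// GetDefault<DateTime>() => DateTime.MinValue (struct, all fields zeroed)
```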
Use `default(Foo)` when you're testing a T in the context of `SomeClass<T>` or `MyMethod<T>` and you don't know whether T will be value type, a class type or a struct.
Otherwise, null should mean "unknown", and empty should mean "I know this is empty". Use the Foo.Empty pattern if you genuinely need an empty - but non-null - instance of your class; e.g. `String.Empty` as an alternative to `""` if you need to initialize some variable to the empty string.
Use null if you know you're working with reference types (classes), there's no generics involved, and you're explicitly testing for uninitialized references.
|
`default(Foo)` works for both value types and reference types; `new Foo()`, `null`, and `Foo.Empty` do not. This makes it a good choice for use with generic types, for example, when you may not know which you're dealing with. But in most reference-type cases, `null` is probably good enough.
|
Should I use default(Foo), Foo.Empty, or null?
|
[
"",
"c#",
"constructor",
"conventions",
""
] |
Is there a library or acceptable method for sanitizing the input to an html page?
In this case I have a form with just a name, phone number, and email address.
Code must be C#.
For example:
`"<script src='bobs.js'>John Doe</script>"` should become `"John Doe"`
|
We are using the [HtmlSanitizer](https://github.com/mganss/HtmlSanitizer) .Net library, which:
* Is open-source (MIT) - [GitHub link](https://github.com/mganss/HtmlSanitizer)
* Is fully customizable, e.g. configure which elements should be removed. [see wiki](https://github.com/mganss/HtmlSanitizer/wiki)
* [Is actively maintained](https://github.com/mganss/HtmlSanitizer/graphs/contributors)
* Doesn't have the [problems like Microsoft Anti-XSS library](https://eksith.wordpress.com/2012/02/13/antixss-4-2-breaks-everything/)
* Is unit tested with the
[OWASP XSS Filter Evasion Cheat Sheet](https://cheatsheetseries.owasp.org/cheatsheets/Cross_Site_Scripting_Prevention_Cheat_Sheet.html)
* Is special built for this (in contrast to *HTML Agility Pack*, which is a parser - not a sanitizer)
* Doesn't use regular expressions ([HTML isn't a regular language!](https://stackoverflow.com/questions/6751105/why-its-not-possible-to-use-regex-to-parse-html-xml-a-formal-explanation-in-la))
Also on [NuGet](https://www.nuget.org/packages/HtmlSanitizer/)
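Usage is a one-liner (a sketch; `html` stands in for the untrusted input string):

```
var sanitizer = new HtmlSanitizer();
string clean = sanitizer.Sanitize(html);
```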
|
Based on the comment you made to this answer, you might find some useful info in this question:
<https://stackoverflow.com/questions/72394/what-should-a-developer-know-before-building-a-public-web-site>
Here's a parameterized query example. Instead of this:
```
string sql = "UPDATE UserRecord SET FirstName='" + txtFirstName.Text + "' WHERE UserID=" + UserID;
```
Do this:
```
SqlCommand cmd = new SqlCommand("UPDATE UserRecord SET FirstName= @FirstName WHERE UserID= @UserID");
cmd.Parameters.Add("@FirstName", SqlDbType.VarChar, 50).Value = txtFirstName.Text;
cmd.Parameters.Add("@UserID", SqlDbType.Integer).Value = UserID;
```
---
Edit: Since there was no injection, I removed the portion of the answer dealing with that. I left the basic parameterized query example, since that may still be useful to anyone else reading the question.
--Joel
|
How to use C# to sanitize input on an html page?
|
[
"",
"c#",
"html-sanitizing",
"antixsslibrary",
""
] |
Alright. So I have a very large amount of binary data (let's say, 10GB) distributed over a bunch of files (let's say, 5000) of varying lengths.
I am writing a Java application to process this data, and I wish to institute a good design for the data access. Typically what will happen is such:
* One way or another, all the data will be read during the course of processing.
* Each file is (typically) read sequentially, requiring only a few kilobytes at a time. However, it is often necessary to have, say, the first few kilobytes of *each file simultaneously*, or the middle few kilobytes of each file simultaneously, etc.
* There are times when the application will want random access to a byte or two here and there.
Currently I am using the RandomAccessFile class to read into byte buffers (and ByteBuffers). My ultimate goal is to encapsulate the data access into some class such that it is fast and I never have to worry about it again. The basic functionality is that I will be asking it to read frames of data from specified files, and I wish to minimize the I/O operations given the considerations above.
Examples for typical access:
* Give me the first 10 kilobytes of all my files!
* Give me byte 0 through 999 of file F, then give me byte 1 through 1000, then give me 2 through 1001, etc, etc, ...
* Give me a megabyte of data from file F starting at such and such byte!
Any suggestions for a good design?
|
Use Java NIO and MappedByteBuffers, and treat your files as a list of byte arrays. Then, let the OS worry about the details of caching, read, flushing etc.
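A minimal sketch of this approach (class and method names are hypothetical): map a region of a file with `FileChannel.map` and let the OS handle paging and caching, instead of issuing explicit reads.

```java
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;

public class MappedRead {

    // Maps a region of the file into memory and copies out `length` bytes
    // starting at `offset`. The OS takes care of paging and caching, so
    // repeated nearby reads stay cheap.
    static byte[] readFrame(Path file, long offset, int length) throws Exception {
        try (FileChannel ch = FileChannel.open(file)) {
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, offset, length);
            byte[] frame = new byte[length];
            buf.get(frame);
            return frame;
        }
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("frames", ".bin");
        Files.write(tmp, "hello mapped world".getBytes("US-ASCII"));
        System.out.println(new String(readFrame(tmp, 6, 6), "US-ASCII")); // prints "mapped"
        Files.delete(tmp);
    }
}
```

For the "first 10 KB of every file" access pattern, one long-lived mapping per file could be held in a list and reused across frames rather than remapping per read.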
|
@Will
Pretty good results. A quick comparison of reading a large binary file:
* Test 1 - Basic sequential read with RandomAccessFile.
**2656 ms**
* Test 2 - Basic sequential read with buffering.
**47 ms**
* Test 3 - Basic sequential read with MappedByteBuffers and further frame buffering optimization.
**16 ms**
|
Java: Advice on handling large data volumes. (Part Deux)
|
[
"",
"java",
"performance",
"data-access",
""
] |
When creating a class library in C++, you can choose between dynamic (`.dll`, `.so`) and static (`.lib`, `.a`) libraries. What is the difference between them and when is it appropriate to use which?
|
Static libraries increase the size of the code in your binary. They're always loaded and whatever version of the code you compiled with is the version of the code that will run.
Dynamic libraries are stored and versioned separately. It's possible for a version of the dynamic library to be loaded that wasn't the original one that shipped with your code **if** the update is considered binary compatible with the original version.
Additionally dynamic libraries aren't necessarily loaded -- they're usually loaded when first called -- and can be shared among components that use the same library (multiple data loads, one code load).
Dynamic libraries were considered to be the better approach most of the time, but originally they had a major flaw (google DLL hell), which has all but been eliminated by more recent Windows OSes (Windows XP in particular).
|
Others have adequately explained what a static library is, but I'd like to point out some of the caveats of using static libraries, at least on Windows:
* **Singletons:** If something needs to be global/static and unique, be very careful about putting it in a static library. If multiple DLLs are linked against that static library they will each get their own copy of the singleton. However, if your application is a single EXE with no custom DLLs, this may not be a problem.
* **Unreferenced code removal:** When you link against a static library, only the parts of the static library that are referenced by your DLL/EXE will get linked into your DLL/EXE.
For example, if `mylib.lib` contains `a.obj` and `b.obj` and your DLL/EXE only references functions or variables from `a.obj`, the entirety of `b.obj` will get discarded by the linker. If `b.obj` contains global/static objects, their constructors and destructors will not get executed. If those constructors/destructors have side effects, you may be disappointed by their absence.
Likewise, if the static library contains special entrypoints you may need to take care that they are actually included. An example of this in embedded programming (okay, not Windows) would be an interrupt handler that is marked as being at a specific address. You also need to mark the interrupt handler as an entrypoint to make sure it doesn't get discarded.
Another consequence of this is that a static library may contain object files that are completely unusable due to unresolved references, but it won't cause a linker error until you reference a function or variable from those object files. This may happen long after the library is written.
* **Debug symbols:** You may want a separate PDB for each static library, or you may want the debug symbols to be placed in the object files so that they get rolled into the PDB for the DLL/EXE. The Visual C++ documentation explains [the necessary options](http://msdn.microsoft.com/en-us/library/958x11bc.aspx).
* **RTTI:** You may end up with multiple `type_info` objects for the same class if you link a single static library into multiple DLLs. If your program assumes that `type_info` is "singleton" data and uses `&typeid()` or `type_info::before()`, you may get undesirable and surprising results.
|
When to use dynamic vs. static libraries
|
[
"",
"c++",
"dll",
"shared-libraries",
"static-linking",
"dynamic-linking",
""
] |
Is there a built-in mechanism in .NET to match patterns other than [regular expressions](https://en.wikipedia.org/wiki/Regular_expression) (regex)? I'd like to match using Unix-style ([glob](https://en.wikipedia.org/wiki/Glob_(programming))) wildcards (\* = any number of any character).
I'd like to use this for an end-user-facing control. I fear that permitting full regex capabilities would be very confusing.
|
I found the actual code for you:
```
Regex.Escape( wildcardExpression ).Replace( @"\*", ".*" ).Replace( @"\?", "." );
```
|
I like my code a little more semantic, so I wrote this extension method:
```
using System.Text.RegularExpressions;
namespace Whatever
{
public static class StringExtensions
{
/// <summary>
/// Compares the string against a given pattern.
/// </summary>
/// <param name="str">The string.</param>
/// <param name="pattern">The pattern to match, where "*" means any sequence of characters, and "?" means any single character.</param>
/// <returns><c>true</c> if the string matches the given pattern; otherwise <c>false</c>.</returns>
public static bool Like(this string str, string pattern)
{
return new Regex(
"^" + Regex.Escape(pattern).Replace(@"\*", ".*").Replace(@"\?", ".") + "$",
RegexOptions.IgnoreCase | RegexOptions.Singleline
).IsMatch(str);
}
}
}
```
(change the namespace and/or copy the extension method to your own string extensions class)
Using this extension, you can write statements like this:
```
if (File.Name.Like("*.jpg"))
{
....
}
```
Just sugar to make your code a little more legible :-)
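For what it's worth, the same glob-to-regex translation can be sketched in Java as well (names here are hypothetical): escape each literal character, rewrite `*` and `?`, and anchor the result.

```java
import java.util.regex.Pattern;

public class Glob {

    // Translates a glob pattern ("*" = any sequence of characters,
    // "?" = any single character) into an anchored, case-insensitive regex
    // and matches the whole input string against it.
    static boolean like(String str, String glob) {
        StringBuilder regex = new StringBuilder("^");
        for (char c : glob.toCharArray()) {
            switch (c) {
                case '*': regex.append(".*"); break;
                case '?': regex.append('.'); break;
                default:  regex.append(Pattern.quote(String.valueOf(c)));
            }
        }
        regex.append('$');
        return Pattern.compile(regex.toString(), Pattern.CASE_INSENSITIVE)
                      .matcher(str).matches();
    }

    public static void main(String[] args) {
        System.out.println(like("photo.jpg", "*.jpg")); // prints "true"
    }
}
```

Escaping character-by-character (rather than escaping the whole pattern and un-escaping `\*`) keeps the wildcard handling explicit.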
|
Glob pattern matching in .NET
|
[
"",
"c#",
".net",
"glob",
""
] |
My .NET application fails when run from a network drive, even though the very same executable runs perfectly fine from a local hard drive. Why?
I tried checking for "Full trust" like so:
```
try
{
// Demand full trust permissions
PermissionSet fullTrust = new PermissionSet( PermissionState.Unrestricted );
fullTrust.Demand();
// Perform normal application logic
}
catch( SecurityException )
{
// Report that permissions were not full trust
MessageBox.Show( "This application requires full-trust security permissions to execute." );
}
```
However, this isn't helping, by which I mean the application starts up and the catch block is never entered. However, a debug build shows that the exception thrown is a SecurityException caused by an InheritanceDemand. Any ideas?
|
It does indeed have to do with the fact that apps on a network location are less trusted than apps on your local hard drive (due to the default security policy of the .NET Framework).
If I'm not mistaken Microsoft finally corrected this annoyance in .NET 3.5 SP1 (after a lot of developers complaining).
I google'd it: [.NET Framework 3.5 SP1 Allows managed code to be launched from a network share!](http://blogs.msdn.com/vancem/archive/2008/08/13/net-framework-3-5-sp1-allows-managed-code-to-be-launched-from-a-network-share.aspx)
|
Did you try Using [CasPol to Fully Trust a Share](http://blogs.msdn.com/shawnfa/archive/2004/12/30/344554.aspx)?
|
Why does my .NET application crash when run from a network drive?
|
[
"",
"c#",
".net",
"networking",
"securityexception",
"permissionset",
""
] |
I often use the `execv()` function in C++, but if some of the arguments are in C++ strings, it annoys me that I cannot do this:
```
const char *args[4];
args[0] = "/usr/bin/whatever";
args[1] = filename.c_str();
args[2] = someparameter.c_str();
args[3] = 0;
execv(args[0], args);
```
This doesn't compile because `execv()` takes `char *const argv[]` which is not compatible with `const char *`, so I have to copy my `std::string`s to character arrays using `strdup()`, which is a pain.
Does anyone know the reason for this?
|
The Open Group Base Specifications explain why this is so: compatibility with existing C code. Neither the pointers nor the string contents themselves are intended to be changed, though. Thus, in this case, you can get away with `const_cast`-ing the result of `c_str()`.
[Quote:](http://pubs.opengroup.org/onlinepubs/9699919799/functions/exec.html)
> The statement about `argv[]` and `envp[]` being constants is included to make explicit to future writers of language bindings that these objects are completely constant. Due to a limitation of the ISO C standard, it is not possible to state that idea in standard C. Specifying two levels of `const`- qualification for the `argv[]` and `envp[]` parameters for the exec functions may seem to be the natural choice, given that these functions do not modify either the array of pointers or the characters to which the function points, but this would disallow existing correct code. Instead, only the array of pointers is noted as constant.
The table and text after that is even more insightful. However, Stack Overflow doesn't allow tables to be inserted, so the quote above should be enough context for you to search for the right place in the linked document.
|
const is a C++ thing - execv has taken char \* arguments since before C++ existed.
You can use const\_cast instead of copying, because execv doesn't actually modify its arguments. You might consider writing a wrapper to save yourself the typing.
Actually, a bigger problem with your code is that you declared an array of characters instead of an array of strings.
Try:
const char\* args[4];
|
execv() and const-ness
|
[
"",
"c++",
"unix",
""
] |
What I have now (which successfully loads the plug-in) is this:
```
Assembly myDLL = Assembly.LoadFrom("my.dll");
IMyClass myPluginObject = myDLL.CreateInstance("MyCorp.IMyClass") as IMyClass;
```
This only works for a class that has a constructor with no arguments. How do I pass in an argument to a constructor?
|
You cannot. Instead use [Activator.CreateInstance](http://msdn.microsoft.com/en-us/library/wcxyzt4d.aspx) as shown in the example below (note that the Client namespace is in one DLL and the Host namespace in another; both must be in the same directory for the code to work).
However, if you want to create a truly pluggable interface, I suggest you use an Initialize method that takes the given parameters in your interface, instead of relying on constructors. That way you can simply demand that the plugin class implement your interface, instead of "hoping" that it accepts the expected parameters in the constructor.
```
using System;
using Host;
namespace Client
{
public class MyClass : IMyInterface
{
public int _id;
public string _name;
public MyClass(int id,
string name)
{
_id = id;
_name = name;
}
public string GetOutput()
{
return String.Format("{0} - {1}", _id, _name);
}
}
}
namespace Host
{
public interface IMyInterface
{
string GetOutput();
}
}
using System;
using System.Reflection;
namespace Host
{
internal class Program
{
private static void Main()
{
//These two would be read in some configuration
const string dllName = "Client.dll";
const string className = "Client.MyClass";
try
{
Assembly pluginAssembly = Assembly.LoadFrom(dllName);
Type classType = pluginAssembly.GetType(className);
var plugin = (IMyInterface) Activator.CreateInstance(classType,
42, "Adams");
if (plugin == null)
throw new ApplicationException("Plugin not correctly configured");
Console.WriteLine(plugin.GetOutput());
}
catch (Exception e)
{
Console.Error.WriteLine(e.ToString());
}
}
}
}
```
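For comparison, and as a sketch only (class and names are hypothetical), the Java reflection analogue of `Activator.CreateInstance(type, args)` loads a class by name, looks up a constructor by its parameter types, and invokes it with arguments:

```java
public class ReflectCtor {

    public static class MyClass {
        private final int id;
        private final String name;

        public MyClass(int id, String name) {
            this.id = id;
            this.name = name;
        }

        public String getOutput() {
            return id + " - " + name;
        }
    }

    // Loads a class by name and invokes its (int, String) constructor:
    // the reflection analogue of Activator.CreateInstance(type, args).
    static Object create(String className, int id, String name) throws Exception {
        Class<?> type = Class.forName(className);
        return type.getConstructor(int.class, String.class).newInstance(id, name);
    }

    public static void main(String[] args) throws Exception {
        MyClass plugin = (MyClass) create("ReflectCtor$MyClass", 42, "Adams");
        System.out.println(plugin.getOutput()); // prints "42 - Adams"
    }
}
```

As in the C# answer, an Initialize-style method on the plugin interface avoids depending on a particular constructor signature being present.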
|
call
```
public object CreateInstance(string typeName, bool ignoreCase, BindingFlags bindingAttr, Binder binder, object[] args, CultureInfo culture, object[] activationAttributes)
```
instead.
[MSDN Docs](http://msdn.microsoft.com/en-us/library/ck6xe688.aspx)
EDIT: If you are going to vote this down, please give insight into why this approach is wrong or not the best way.
|
How can I pass an argument to a C# plug-in being loaded through Assembly.CreateInstance?
|
[
"",
"c#",
"assemblies",
"arguments",
"createinstance",
""
] |
I need to do a lot of searches of certain patterns in source files while the user is changing them, so I need to do regexp matching that is efficient in time and memory. The pattern repeats itself so should be compiled once, but I need to be able to retrieve subparts (rather than just confirm a match)
I'm considering using java.util.regex or the Jakarta perl5util (if it still exists; it's been a few years since I used it), or perhaps the Eclipse search engine, though I doubt that it's smarter.
Is there any significant performance difference between the two?
|
I am not sure there is a huge performance gap between the different Java regex engines.
But there certainly is a performance issue when a regex is constructed carelessly (at least when the data is large enough, as [noted by Jeff Atwood](http://blog.codinghorror.com/regex-performance/)).
The only thing you should avoid is [catastrophic backtracking](http://www.regular-expressions.info/catastrophic.html), better avoided when using [atomic grouping](http://www.regular-expressions.info/atomic.html).
So, by default I would use the java.util.regex engine, unless you have specific Perl-compatible regex sources you need to reuse in your program.
Then I would carefully construct the regexp I intend to use.
But in terms of choosing one engine or another... as it [has been](https://stackoverflow.com/questions/48732/what-are-your-top-design-consideration#48737) [said](https://stackoverflow.com/questions/42383/java-object-allocation-overhead#42410) in [many](https://stackoverflow.com/questions/30754/performance-vs-readability#30789) [other questions](https://stackoverflow.com/questions/2750/data-verifications-in-gettersetter-or-elsewhere#2761)...:
* "make it work, make it fast - in that order"
* beware of "premature optimization".
|
As VonC says, you need to know your regexes. Compile your patterns beforehand; otherwise, the cost of recompiling the regex on each use can hurt performance badly.
For some use cases there are alternative libraries, e.g. <http://jint.sourceforge.net/jint.html>, which might have better performance. Then again, it depends on which version of Java you're using.
JDK 1.6 shows the maturity of the regex engine with good features and performance combined.
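To illustrate the compile-once advice (the pattern and names here are a hypothetical sketch): hold the compiled `Pattern` in a constant and create a fresh `Matcher` per input, pulling out subparts via capture groups.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PrecompiledRegex {

    // Compiled once, reused for every line scanned. Group 1 captures the
    // imported class name so callers can retrieve subparts of the match.
    private static final Pattern IMPORT = Pattern.compile("^import\\s+([\\w.]+);");

    static String importedClass(String line) {
        Matcher m = IMPORT.matcher(line);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        System.out.println(importedClass("import java.util.List;")); // prints "java.util.List"
    }
}
```

`Pattern` instances are immutable and safe to share; the per-use `Matcher` holds the mutable match state.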
|
Is java.util.regex efficient enough?
|
[
"",
"java",
"regex",
""
] |