Prompt
stringlengths
10
31k
Chosen
stringlengths
3
29.4k
Rejected
stringlengths
3
51.1k
Title
stringlengths
9
150
Tags
listlengths
3
7
The function below takes a python file handle, reads in packed binary data from the file, creates a Python dictionary and returns it. If I loop it endlessly, it'll continually consume RAM. What's wrong with my RefCounting? ``` static PyObject* __binParse_getDBHeader(PyObject *self, PyObject *args){ PyObject *o; //generic object PyObject* pyDB = NULL; //this has to be a py file object if (!PyArg_ParseTuple(args, "O", &pyDB)){ return NULL; } else { Py_INCREF(pyDB); if (!PyFile_Check(pyDB)){ Py_DECREF(pyDB); PyErr_SetString(PyExc_IOError, "argument 1 must be open file handle"); return NULL; } } FILE *fhDB = PyFile_AsFile(pyDB); long offset = 0; DB_HEADER *pdbHeader = malloc(sizeof(DB_HEADER)); fseek(fhDB,offset,SEEK_SET); //at the beginning fread(pdbHeader, 1, sizeof(DB_HEADER), fhDB ); if (ferror(fhDB)){ fclose(fhDB); Py_DECREF(pyDB); PyErr_SetString(PyExc_IOError, "failed reading database header"); return NULL; } Py_DECREF(pyDB); PyObject *pyDBHeader = PyDict_New(); Py_INCREF(pyDBHeader); o=PyInt_FromLong(pdbHeader->version_number); PyDict_SetItemString(pyDBHeader, "version", o); Py_DECREF(o); PyObject *pyTimeList = PyList_New(0); Py_INCREF(pyTimeList); int i; for (i=0; i<NUM_DRAWERS; i++){ //epochs o=PyInt_FromLong(pdbHeader->last_good_test[i]); PyList_Append(pyTimeList, o); Py_DECREF(o); } PyDict_SetItemString(pyDBHeader, "lastTest", pyTimeList); Py_DECREF(pyTimeList); o=PyInt_FromLong(pdbHeader->temp); PyDict_SetItemString(pyDBHeader, "temp", o); Py_DECREF(o); free(pdbHeader); return (pyDBHeader); } ```
`PyDict_New()` returns a new reference, check the [docs](http://docs.python.org/c-api/dict.html) for `PyDict`. So if you increase the refcount immediately after creating it, you have two references to it. One is transferred to the caller when you return it as a result value, but the other one never goes aways. You also don't need to incref `pyTimeList`. It's yours when you create it. However, you need to decref it, but you only decref it once, so it's leaked as well. You also don't need to call `Py_INCREF` on `pyDB`. It's a borrowed reference and it won't go away as long as your function does not return, because it's still referenced in a lower stack frame. Only if you want to keep the reference in another structure somewhere, you need to increse the refcount. Cf. the [API docs](http://docs.python.org/c-api/arg.html)
OT: Using successive calls to `PyList_Append` is a performance issue. Since you know how many results you'll get in advance, you can use: ``` PyObject *pyTimeList = PyList_New(NUM_DRAWERS); int i; for (i=0; i<NUM_DRAWERS; i++){ o = PyInt_FromLong(pdbHeader->last_good_test[i]); PyList_SET_ITEM(pyTimeList, i, o); } ``` Observe that you may not decrease the refcount of `o` after calling `PyList_SET_ITEM`, because it "steals" a reference. Check the [docs](http://docs.python.org/c-api/list.html).
Why is my Python C Extension leaking memory?
[ "", "python", "c", "refcounting", "" ]
Take the following code for example; ``` if (Convert.ToString(frm.WindowState) == "Minimized") Layout.WindowState = "Maximized"; else Layout.WindowState = Convert.ToString(frm.WindowState); ``` We are analysing the string definition of the window state, i.e. "Minimized". Would this string description change between cultures? Lastly, whilst on this code, is there an Enum which we could use in order to check against the window state? Can we refactor this code segment?
The [`WindowState`](http://msdn.microsoft.com/en-us/library/system.windows.forms.form.windowstate.aspx) value *is* an enumeration - [`System.Windows.Forms.FormWindowState`](http://msdn.microsoft.com/en-us/library/system.windows.forms.formwindowstate.aspx). Just compare to the enumeration constants, skip the `ToString()` madness.
It shouldn't change across culture, since it's merely turning the Enum name into string. Enum name doesn't change when you use different culture of .Net/Windows/IDE, thus it will remain as what it was originally written.
C# Will "WindowState.ToString" change between cultures
[ "", "c#", "windowstate", "" ]
I have a Winform with a BackgroundWorker. The BackgroundWorker, among other things, has to make an HTTP call to a page, fill out some data, submit the form, and retrieve the HTML that comes back after "clicking" the submit button. I've run into a number of roadblocks while doing this: 1. Can't POST the data because the target webserver doesn't 405 support that method. 2. Can't use a WebClient.UploadValues, again, because the webserver doesn't support POST. 3. Can't use a WebBrowser control because BackgroundWorkers suck at COM Interop and an exception is thrown that says it must be in a STA thread (Single-Threaded Apartment) 4. Can't run another seperate thread because the BW has to sit and wait for the result before it can continue (Can't, or at least I don't know a way to do this that won't crash) 5. Can't change the ApartmentState of the thread because it's a BackgroundWorker and it throws if told to go to STA mode What should I do to resolve this? [Edit]: The app entrypoint is already tagged with the [STAThread] attribute.
What web-server is it that doesn't support POST in this scenario? What does the comparable HTML form do? A POST or a GET? Just do the same.. I suspect that WebClient or HttpWebRequest will do the job fine.
Have you tried using [WebClient.UploadValues](http://msdn.microsoft.com/en-us/library/system.net.webclient.uploadvalues(VS.80).aspx) with the Method argument set to "GET" ?
C#/Winform: Enter data into HTML page, submit form
[ "", "c#", "multithreading", "http", "interop", "backgroundworker", "" ]
This is perhaps similar to previous posts, but I want to be specific about the use of locking on a network, rather than locally. I want to write a file to a shared location, so it may well go on a network (certainly a Windows network, maybe Mac). I want to prevent other people from reading any part of this file whilst it it being written. This will not be a highly concurrent process, and the files will be typically less than 10MB. I've read the `FileLock` documentation and `File` documentation and am left somewhat confused, as to what is safe and what is not. I want to lock the entire file, rather than portions of it. Can I use `FileChannel.tryLock()`, and it is safe on a network, or does it depend on the type of network? Will it work on a standard Windows network (if there is such a thing). If this does not work, is the best thing to create a zero byte file or directory as a lock file, and then write out the main file. Why does that `File.createNewFile()` documentation say don't use this for file locking? I appreciate this is subject to race conditions, and is not ideal.
This can't be reliably done on a network file system. As long as your application is the only application that accesses the file, it's best to implement some kind of cooperative locking process (perhaps writing a lock file to the network filesystem when you open the file). The reason that is not recommended, however, is that if your process crashes or the network goes down or any other number of issues happen, your application gets into a nasty, dirty state.
You can have a empty file which is lying on the server you want to write to. When you want to write to the server you can catch the token. Only when you have the token you should write to any file which is lying on the server. When you are ready with you file operations or an exception was thrown you have to release the token. The helper class can look like ``` private FileLock lock; private File tokenFile; public SLTokenLock(String serverDirectory) { String tokenFilePath = serverDirectory + File.separator + TOKEN_FILE; tokenFile = new File(tokenFilePath); } public void catchCommitToken() throws TokenException { RandomAccessFile raf; try { raf = new RandomAccessFile(tokenFile, "rw"); //$NON-NLS-1$ FileChannel channel = raf.getChannel(); lock = channel.tryLock(); if (lock == null) { throw new TokenException(CANT_CATCH_TOKEN); } } catch (Exception e) { throw new TokenException(CANT_CATCH_TOKEN, e); } } public void releaseCommitToken() throws TokenException { try { if (lock != null && lock.isValid()) { lock.release(); } } catch (Exception e) { throw new TokenException(CANT_RELEASE_TOKEN, e); } } ``` Your operations then should look like ``` try { token.catchCommitToken(); // WRITE or READ to files inside the directory } finally { token.releaseCommitToken(); } ```
Java file locking on a network
[ "", "java", "file", "networking", "" ]
What is the best way to generate a string of `\t`'s in C#? I am learning C# and experimenting with different ways of saying the same thing. `Tabs(uint t)` is a function that returns a `string` with `t` amount of `\t`'s For example, `Tabs(3)` returns `"\t\t\t"` Which of these three ways of implementing `Tabs(uint numTabs)` is best? Of course that depends on what "best" means. 1. The LINQ version is only two lines, which is nice. But are the calls to Repeat and Aggregate unnecessarily time/resource consuming? 2. The `StringBuilder` version is very clear, but is the `StringBuilder` class somehow slower? 3. The `string` version is basic, which means it is easy to understand. 4. Does it not matter at all? Are they all equal? These are all questions to help me get a better feel for C#. ``` private string Tabs(uint numTabs) { IEnumerable<string> tabs = Enumerable.Repeat("\t", (int) numTabs); return (numTabs > 0) ? tabs.Aggregate((sum, next) => sum + next) : ""; } private string Tabs(uint numTabs) { StringBuilder sb = new StringBuilder(); for (uint i = 0; i < numTabs; i++) sb.Append("\t"); return sb.ToString(); } private string Tabs(uint numTabs) { string output = ""; for (uint i = 0; i < numTabs; i++) { output += '\t'; } return output; } ```
What about this: ``` string tabs = new string('\t', n); ``` Where `n` is the number of times you want to repeat the string. Or better: ``` static string Tabs(int n) { return new string('\t', n); } ```
``` string.Concat(Enumerable.Repeat("ab", 2)); ``` Returns > "abab" And ``` string.Concat(Enumerable.Repeat("a", 2)); ``` Returns > "aa" from... [Is there a built-in function to repeat string or char in .net?](https://stackoverflow.com/questions/4115064/is-there-a-built-in-function-to-repeat-string-or-char-in-net)
Best way to repeat a character in C#
[ "", "c#", ".net", "string", "" ]
I am using LINQ to EF and have the following LINQ query: ``` var results = (from x in ctx.Items group x by x.Year into decades orderby decades.Count() descending select new { Decade = decades.Key, DecadeCount = decades.Count() }); ``` So this kind of gets me to where I want to be, in that I get the items broken down by year and a count of items in that year. (i.e. 2001 - 10, 1975 - 15, 2005 - 5, 1976 - 1) The thing I really want to do though is to break them down by decade (i.e. 2000s - 15, 1970s - 16). How does one have a "calculated field" in the "by" part of the group clause for a Linq statement. I think what I want is basically something like: ``` var results = (from x in ctx.Items group x by (x => x.Year.Value.ToString().Substring(0, 3) + "0s") into decades orderby decades.Count() descending select new { Decade = decades.Key, DecadeCount = decades.Count() }); ``` Or more generally the syntax so that I can do some more complicated evaluation/calculation to do the group by on. Any ideas? EDIT (update): (x => x.Year.Value.ToString().Substring(0, 3) + "0s") - Doesn't Work - "LINQ to Entities does not recognize the method 'System.String ToString()' method, and this method cannot be translated into a store expression." (x.Year / 10 \* 10) - Functionally works (thank you) - the only "problem" is that the 's' is not on the end (i.e. 1970 vs. 1970s) Is there anyway to put a function in the by clause? i.e. group x by this.ManipulateYear(x.Year) into decades ... or ... x => x.Year.Value.ToString().Substring(0,3) + "0s" ?? It would be nice to have some technique (such as calling a function or using a lambda expression) so that I can cover any case that I can think of. Thanks again for everyone's help on this.
It looks like we cannot do a grouping or select or similar on calculated fields that are definied in the partial classes on the entity framework. The calculated fields can be used on LINQ to objects (so you could return all the data as objects and then do a grouping)
You can use the `let` clause to avoid counting the decades multiple times: ``` from x in ctx.Items group x by (x.Year / 10 * 10) into decades let decadeCount = decades.Count() orderby decadeCount descending select new { Decade = decades.Key, DecadeCount = decadeCount } ```
Using LINQ how do I have a grouping by a "calculated field"
[ "", "c#", ".net", "linq", ".net-3.5", "linq-to-entities", "" ]
Hey I am developing an desktop application using Spring and Hibernate, and I have a problem with lazy initiation. I looked in the web and every solution is related to the open session in view pattern, but I can't use this pattern. I've also tried to get the `sessionfactory` from the `HibernateTemplate`, but it returns to me a disconnected session. Does anyone know other solution?
I would suggest that you basically have two solutions: 1. Make arrangements to keep a Hibernate session open when you access a lazy-initialized object or collection. That means you're going to have to carefully mark your transaction boundaries in your code, a la the "open session in view" pattern. Spring makes this possible, but in a desktop application it won't be as straightforward as a web application where the transaction boundaries are a little more obvious. 2. Turn off all the lazy-initialization for your persisted objects in Hibernate. Option 2 could lead to a lot of unnecessary database access, and option 1 means you have to seriously study your workflow and use cases. Hope that helps!
One option is to call Hibernate.initialize() on the entities or collections to force initialize them. You'd want to do this before you return the data back to your view. I would consider this carefully, since it's going to generate a lot of SQL statements back to the database. You may want to look into using "fetch" in your HQL queries or configuration the fetch mode to "eager" in your mappings (I believe it's FetchMode.EAGER in JPA or lazy="false" in hbm.xml). @Jose: Don't manage the Session in your own ThreadLocal. Use SessionFactory.getCurrentSession() and configure Hibernate to use the "thread" SessionContext.
Spring and Hibernate, Lazy initiation problem
[ "", "java", "hibernate", "spring", "lazy-loading", "" ]
I'm exploring the possibility of running a Java app on a machine with very large amounts of RAM (anywhere from 300GB to 15TB, probably on an SGI Altix 4700 machine), and I'm curious as to how Java's GC is likely to perform in this scenario. I've heard that IBM's or JRockit's JVMs may be better suited to this than Sun's. Does anyone know of any research or data on JVM performance in this situation?
On the Sun JVM, you can use the option -XX:UseConcMarkSweepGC to turn on the Concurrent mark and sweep Collector, which will avoid the "stop the world" phases of the default GC algorithm almost completely, at the cost of a little bit more overhead. The advise to use more than on VM on such a machine is IMHO outdated. In real world applications you often have enough shared data so that the performance with the CMS and one JVM is better.
The question is: do you want to run within a single process (JVM) or not? If you do, then you're going to have a problem. Refer to [Tuning Java Virtual Machines](http://edocs.beasys.com/wls/docs81/perform/JVMTuning.html), [Oracle Coherence User Guide](http://coherence.oracle.com/display/COH34UG/Coherence+User+Guide+(Full)) and similar documentation. The rule of thumb I've operated by is try and avoid heaps larger than 1GB. Whereas a 512MB-1GB full GC might take less than a second. A 2-4GB full GC could potentially take 5 seconds or longer. Obvioiusly this depends on many factors but the moral of the story is that GC overhead does not scale linearly and once you get into the one second range performance then degrades rapidly.
Java performance with very large amounts of RAM
[ "", "java", "garbage-collection", "" ]
Is it possible to have a variable number of fields using django forms? The specific application is this: A user can upload as many pictures as they want on the image upload form. Once the pictures are uploaded they are taken to a page where they can give the pictures a name and description. The number of pictures will depend on how many the user has chosen to upload. So **how do I get django to generate a form using a variable number of input fields** (which could be passed as an argument if necessary)? **edit:** a few things have changed since the [article mentioned in jeff bauer's answer](http://www.b-list.org/weblog/2008/nov/09/dynamic-forms/) was written. Namely this line of code which doesn't seem to work: ``` # BAD CODE DO NOT USE!!! return type('ContactForm', [forms.BaseForm], { 'base_fields': fields }) ``` So here is what I came up with... # The Answer I used: ``` from tagging.forms import TagField from django import forms def make_tagPhotos_form(photoIdList): "Expects a LIST of photo objects (ie. photo_sharing.models.photo)" fields = {} for id in photoIdList: id = str(id) fields[id+'_name'] = forms.CharField() fields[id+'_tags'] = TagField() fields[id+'_description'] = forms.CharField(widget=forms.Textarea) return type('tagPhotos', (forms.BaseForm,), { 'base_fields': fields }) ``` note tagging is not part of django, but it is free and very useful. check it out: [django-tagging](http://code.google.com/p/django-tagging/)
Yes, it's possible to create forms dynamically in Django. You can even mix and match dynamic fields with normal fields. ``` class EligibilityForm(forms.Form): def __init__(self, *args, **kwargs): super(EligibilityForm, self).__init__(*args, **kwargs) # dynamic fields here ... self.fields['plan_id'] = CharField() # normal fields here ... date_requested = DateField() ``` For a better elaboration of this technique, see James Bennett's article: **So you want a dynamic form?** <http://www.b-list.org/weblog/2008/nov/09/dynamic-forms/>
If you run ``` python manage.py shell ``` and type: ``` from app.forms import PictureForm p = PictureForm() p.fields type(p.fields) ``` you'll see that p.fields is a SortedDict. you just have to insert a new field. Something like ``` p.fields.insert(len(p.fields)-2, 'fieldname', Field()) ``` In this case it would insert before the last field, a new field. You should now adapt to your code. Other alternative is to make a for/while loop in your template and do the form in HTML, but django forms rock for some reason, right?
Variable number of inputs with Django forms possible?
[ "", "python", "django", "django-forms", "" ]
I am trying to add a link into the pop-up text bubble of a marker in Google Maps through the API. I have successfully run the below code: ``` echo '<marker lat="43.91892" lng="-78.89231" html="Albertus Magnus College&lt;br&gt;Link to Admissions" label="Albertus Magnus College" />'; ``` But once I actually try to add the link it fails. Like this: ``` echo '<marker lat="43.91892" lng="-78.89231" html="Albertus Magnus College&lt;br&gt;&lt;a href='http://www.albertus.edu/admission/index.shtml'&gt;Admissions&lt;\/a&gt;" label="Albertus Magnus College" />'; ``` Does anyone know how to successfully write this code? I am writing it into PHP because I have some other functionality that won't let me just write it in XML. Update: I got it to work like this for some reason... ``` $window2a_url = '&lt;a href=&apos;http://www.albertus.edu/admission/index.shtml&apos;&gt;Admissions'; echo '<marker lat="41.331304" lng="-72.921438" html=" Albertus Magnus College&lt;br&gt;'; echo $window2a_url; echo '" label="Albertus Magnus College" />'; ``` I had to escape the apostrophes... If anyone has a more elegant solution, I am all ears!
This is the answer: ``` $window2a_url = '&lt;a href=&apos;http://www.albertus.edu/admission/index.shtml&apos;&gt;Admissions'; echo '<marker lat="41.331304" lng="-72.921438" html=" Albertus Magnus College&lt;br&gt;'; echo $window2a_url; echo '" label="Albertus Magnus College" />'; ``` I had to escape the apostrophes...
Seems you are putting an apostrophe (') inside the string. You should use an escape character (may be "\", I don't know PHP's syntax) near the apostrophe.
Google Map API and Links
[ "", "php", "api", "google-maps", "" ]
Let's say I have a table like this: ``` name | score_a | score_b -----+---------+-------- Joe | 100 | 24 Sam | 96 | 438 Bob | 76 | 101 ... | ... | ... ``` I'd like to select the minimum of score\_a and score\_b. In other words, something like: ``` SELECT name, MIN(score_a, score_b) FROM table ``` The results, of course, would be: ``` name | min -----+----- Joe | 24 Sam | 96 Bob | 76 ... | ... ``` However, when I try this in Postgres, I get, "No function matches the given name and argument types. You may need to add explicit type casts." MAX() and MIN() appear to work across *rows* rather than *columns.* Is it possible to do what I'm attempting?
[LEAST](https://www.postgresql.org/docs/current/functions-conditional.html#FUNCTIONS-GREATEST-LEAST)(a, b): > The `GREATEST` and `LEAST` functions select the largest or smallest value from a list of any number of expressions. The expressions must all be convertible to a common data type, which will be the type of the result (see [Section 10.5](https://www.postgresql.org/docs/current/static/typeconv-union-case.html) for details). NULL values in the list are ignored. The result will be NULL only if all the expressions evaluate to NULL. > > Note that `GREATEST` and `LEAST` are not in the SQL standard, but are a common extension. Some other databases make them return NULL if any argument is NULL, rather than only when all are NULL...
Here's the link to docs for the `LEAST()` function in PostgreSQL: <http://www.postgresql.org/docs/current/static/functions-conditional.html#AEN15582>
How do I get the MIN() of two fields in Postgres?
[ "", "sql", "postgresql", "min", "" ]
What communication is going on between Eclipse and my application server (JBoss) when I run the server from within Eclipse in debugging mode? How does this work?
When you start the server in debug mode, it listens on a specified TCP port. Eclipse connects to that port, and they talk using the Java Debug Wire Protocol (JDWP). Read the details here: <http://java.sun.com/j2se/1.5.0/docs/guide/jpda/>
I think it is called JDWP (Java Debugging Wire Protocol) - read more [here](http://java.sun.com/j2se/1.5.0/docs/guide/jpda/architecture.html)
How does Eclipse debug code in an application server?
[ "", "java", "eclipse", "jakarta-ee", "jdb", "" ]
I wish to store URLs in a database (MySQL in this case) and process it in Python. Though the database and programming language are probably not this relevant to my question. In my setup I receive unicode strings when querying a text field in the database. But is a URL actually text? Is encoding from and decoding to unicode an operation that should be done to a URL? Or is it better to make the column in the database a binary blob? So, how do you handle this problem? Clarification: This question is not about urlencoding non-ASCII characters with the percent notation. It's about the distiction that unicode represents text and byte strings represent a way to encode this text into a sequence of bytes. In Python (prior to 3.0) this distinction is between the unicode and the str types. In MySQL it is TEXT to BLOBS. So the concepts seem to correspond between programming language and database. But what is the best way to handle URLs in this scheme?
The relevant answer is found in [RFC 2396](http://www.ietf.org/rfc/rfc2396.txt), section 2.1 *URI and non-ASCII characters* --- The relationship between URI and characters has been a source of confusion for characters that are not part of US-ASCII. To describe the relationship, it is useful to distinguish between a "character" (as a distinguishable semantic entity) and an "octet" (an 8-bit byte). There are two mappings, one from URI characters to octets, and a second from octets to original characters: URI character sequence->octet sequence->original character sequence A URI is represented as a sequence of characters, not as a sequence of octets. That is because URI might be "transported" by means that are not through a computer network, e.g., printed on paper, read over the radio, etc. ---
On the question: "But is a URL actually text?" It depends on the context, in some languages or libraries (for example java, I'm not sure about python), a URL may be represented internally as an object. However, a URL always has a well defined text representation. So storing the text-representation is much more portable than storing the internal representation used by whatever is the current language of choice. URL syntax and semantics are covered by quite a few standards, recommendations and implementations, but I think the most authoritative source for parsing and constructing correct URL-s would be [RFC 2396](http://www.ietf.org/rfc/rfc2396.txt). On the question about unicode, section 2.1 deals with non-ascii characters. (Edit: changed rfc-reference to the newest edition, thank you S.Lott)
URLs: Binary Blob, Unicode or Encoded Unicode String?
[ "", "python", "mysql", "database", "url", "" ]
Something I do often if I'm storing a bunch of string values and I want to be able to find them in O(1) time later is: ``` foreach (String value in someStringCollection) { someDictionary.Add(value, String.Empty); } ``` This way, I can comfortably perform **constant-time** lookups on these string values later on, such as: ``` if (someDictionary.containsKey(someKey)) { // etc } ``` However, I feel like I'm cheating by making the value **String.Empty**. Is there a more appropriate .NET Collection I should be using?
If you're using .Net 3.5, try [HashSet](http://msdn.microsoft.com/en-us/library/bb359438.aspx). If you're not using .Net 3.5, try [C5](http://www.itu.dk/research/c5/). Otherwise your current method is ok (bool as @leppie suggests is better, or not as @JonSkeet suggests, dun dun dun!). ``` HashSet<string> stringSet = new HashSet<string>(someStringCollection); if (stringSet.Contains(someString)) { ... } ```
You can use `HashSet<T>` in .NET 3.5, else I would just stick to you current method (actually I would prefer `Dictionary<string,bool>` but one does not always have that luxury).
'Proper' collection to use to obtain items in O(1) time in C# .NET?
[ "", "c#", ".net", "optimization", "collections", "string", "" ]
I don't think this can be done "cleanly", but I'll ask anyway. I have a system which needs to get a JSON resource via a REST GET call in order to initialize. At the moment the system waits until the onLoad event and fires an ajax request to retrieve the resource, which I don't think is the best way to do it, as the resource is needed a run time. What I would love to do is somehow load the resource at runtime inside an HTML tag then eval the contents. But what I'm working on is an API to be used by others, so I would like to achieve this in a logical and standards based way. So is there any tag which fits the bill? A tag which can be placed in the doc head, that I will be able to read and eval the contents of at runtime? Regards, Chris
I was thinking of putting it in an iframe but then I realized that you have a problem with that the content-type is application/json. When I tested FF, IE and Chrome was trying to download the file and asked the user where to store it (Opera displayed the file) Putting it in a LINK will not help you since the browser will not try to fetch the document (it only fetches for known resources like style-sheet) To me it looks like you have to use AJAX. Can you elaborate on why that's a problem?
Maybe I'm not understanding but couldn't you just: ``` <?php $json_data = json_encode($your_data); ?> <script> var data = <?= $json_data ?>; </script> ```
Load JSON at runtime rather than dynamically via AJAX
[ "", "javascript", "html", "dom", "" ]
I have a C# public API that is used by many third-party developers that have written custom applications on top of it. In addition, the API is used widely by internal developers. This API wasn't written with testability in mind: most class methods aren't virtual and things weren't factored out into interfaces. In addition, there are some helper static methods. For many reasons I can't change the design significantly without causing breaking changes for applications developed by programmers using my API. However, I'd still like to give internal and external developers using this API the chance to write unit tests and be able to mock the objects in the API. There are several approaches that come to mind, but none of them seem great: 1. The traditional approach would be to force developers to create a proxy class that they controlled that would talk to my API. This won't work in practice because there are now hundreds of classes, many of which are effectively strongly typed data transfer objects that would be a pain to reproduce and maintain. 2. Force all developers using the API that want to unit test it to buy [TypeMock](http://www.typemock.com/). This seems harsh to force people to pay $300+ per developer and potentially require them to learn a different mock object tool than what their used to. 3. Go through the entire project and make all the methods virtual. This would allow mock-ing of objects using free tools like [Moq](http://code.google.com/p/moq/) or [Rhino Mocks](http://ayende.com/projects/rhino-mocks.aspx), but it could potentially open up security risks for classes that were never meant to be derived from. Additionally this could cause breaking changes. 4. I could create a tool that given an input assembly would output an assembly with the same namespaces, classes, and members, but would make all of the methods virtual and it would make the method body just return the default value for the return type. Then, I could ship this dummy test assembly each time I released an update to the API. Developers could then write tests for the API against the dummy assembly since it had virtual members that are very mock-able. This might work, but it seems a bit tedious to write a custom tool for this and I can't seem to find an existing one that does it well (especially that works well with generics). Furthermore, it has the complication that it requires developers to use two different assemblies that could possibly go out of date. 5. Similar to #4, I could go through every file and add something like "#ifdef UNITTEST" to every method and body to do the equivalent of what a tool would do. This doesn't require an external tool, but it would pollute the codebase with a lot of ugly "#ifdef"'s. Is there something else that I haven't considered that would be a good fit? Does a tool like what I mentioned in #4 already exist? Again, the complicating factor is that this is a rather large API (hundreds of classes and ~10 files) and has existing applications using it which makes it hard to do drastic design changes. There [have](https://stackoverflow.com/questions/131806/mocking-classes-that-arent-interfaces) [been](https://stackoverflow.com/questions/42785/how-do-you-retrofit-unit-tests-into-a-code-base) [several](https://stackoverflow.com/questions/346345/how-do-you-unit-test-a-class-that-is-dependent-on-many-other-classes) [questions](https://stackoverflow.com/questions/107919/is-it-feasible-to-introduce-test-driven-development-tdd-in-a-mature-project) on Stack Overflow that were generic about retrofitting an existing application to make it testable, but none seem to address the concerns I have (specifically in the context of a widely used API with many third-party developers). I'm also aware of "[Working Effectively With Legacy Code](http://www.objectmentor.com/resources/articles/WorkingEffectivelyWithLegacyCode.pdf)" and think it has good advice, but I am looking for a specific .net approach given the constraints mentioned above. **UPDATE:** I appreciate the answers so far. One that [Patrik Hägne](https://stackoverflow.com/users/46187/patrik-h228gne) brought up is "why not extract interfaces?" This indeed works to a point, but has some problems such as the existing design has many cases where we take expose a concrete class. For example: ``` public class UserRepository { public UserData GetData(string userName) { ... } } ``` Existing customers that are expecting the concrete class (e.g. "UserData") would break if they were given an "IUserData." Additionally, as mentioned in the comments there are cases where we take in a class and then expose it for convenience. This could cause problems if we took in an interface and then had to expose it as a concrete class. The biggest challenge to a significant rewrite or redesign is that there is a huge investment in the current API (thousands of hours of development and probably just as much third party training). So, while I agree that a better [SOLID](http://butunclebob.com/ArticleS.UncleBob.PrinciplesOfOod) design rewrite or abstraction layer (that eventually could become the new API) that focused on items like the [Interface Separation Principle](http://www.objectmentor.com/resources/articles/isp.pdf) would be a plus from a testability perspective, it'd be a large undertaking that probably can't be cost justified at the present time. We do have testing for the current API, but it is more complicated integration testing rather than unit-testing. Additionally, as mentioned by [Chad Myers](https://stackoverflow.com/users/10862/chadmyers), this is question addresses a similar problem that the .NET framework itself faces in some areas. I realize that I'm probably looking for a "silver bullet" here that doesn't exist, but all help is appreciated. The important part is protecting the huge time investments by many third party developers as well as the huge existing development to create the current API. All answers, especially those that consider the business side of the problem, will be carefully reviewed. Thanks!
What you're really asking is, "How do I design my API with SOLID and similar principles in mind so my API plays well with others?" It's not just about testability. If your customers are having problems testing their code with yours, then they're also having problems WRITING/USING their code with yours, so this is a bigger problem than just testability. Simply extracting interfaces will not solve the problem because it's likely your existing class interfaces (what the concrete classes expose as their methods/properties) aren't design with Interface Segregation Principle in mind, so the extracted interface would have all sorts of problems (some of which you mentioned in comment to a previous answer). I like to call this the IHttpContext problem. ASP.NET, as you know, is very difficult to test around or with due to the "Magic Singleton Dependency" problem of HttpContext.Current. HttpContext is not mockable without fancy tricks like what TypeMock uses. Simply extracting an interface of HttpContext is not going to help that much because it's SO huge. Eventually, even IHttpContext would become a burden to test with so much so that it's almost not worth doing any more than trying to mock HttpContext itself. Identifying object responsibilities, slicing up interfaces and interactions appropriately, and designing with Open/Closed Principle in mind is not something you and try to force/cram into an existing API designed without these principles in mind. I hate to leave you with such a grim answer, so I'll give you one positive suggest: How's about YOU take all the grief on behalf of your customers and make some sort of service/facade layer over top of your old API. This service layer will have to deal with the minutiae and pain of your API, but will present a nice, clean, SOLID-friendly public API that your customers can use with much less friction. This also has the added benefit of allowing you to slowly replace parts of your API and eventually make it so your new API isn't just a facade, it IS the API (and the old API is phased out).
Another approach would be to create a seperate branch of the API and do option 3 there. Then you just maintain these two versions and deprecate the former. Merging changes from one branch into the other should work automatically most of the time.
How to make an existing public API testable for external programmers using it?
[ "", "c#", ".net", "unit-testing", "" ]
I have a Gridview in which i have two templatefields of dropdownlist. I bound them on runtime with same list item. ``` li = new listitem ("1","1"); dl1.items.add(li); dl2.items.add(li); li = new listitem ("2","2"); dl1.items.add(li); dl2.items.add(li); li = new listitem ("3","3"); dl1.items.add(li); dl2.items.add(li); dl1.selectedvalue = "2"; dl2.selectedvalue = "3"; ``` After executing above, dl1 & dl2 both show me "3" as selected value. Why? I know the work around of using 2 different listitems while binding but i wanna know why the above happens?
The ListItem class has a property "Selected" which marks if the item is selected. I haven't checked the DDL SelectedValue property to see what it does, but my guess is that the ListItem.Selected property is being set to true, and since you are using the same object in both drop-down lists, it is being marked as 'selected' in both. I'm sure if this was a multi-select list, both "2" and "3" would be marked as 'selected'.
Looking at just the last part of the code: you've got a single list item, and it's appearing in two different lists. But it's still one object. How would you expect one object to have two different values for a single property (SelectedValue)?
Dropdowns filled with same list item
[ "", "c#", "gridview", ".net-2.0", "" ]
I really need help with interfaces in general... Any resources that you guys would recommend me? ### Related: * [How are Java interfaces actually used?](https://stackoverflow.com/questions/504904/how-are-java-interfaces-actually-used) * [Java interface and inheritance](https://stackoverflow.com/questions/1215437/java-interface-and-inheritance) * [Java Interface Usage Guidelines — Are getters and setters in an interface bad?](https://stackoverflow.com/questions/1130294/java-interface-usage-guidelines-are-getters-and-setters-in-an-interface-bad) * [Why can’t I define a static method in a Java interface?](https://stackoverflow.com/questions/512877/why-cant-i-define-a-static-method-in-a-java-interface)
[What is an Interface?](http://java.sun.com/docs/books/tutorial/java/concepts/interface.html) from Java's official tutorial Edit: A second resource from the same tutorial, is the [Interfaces and Inheritence](http://java.sun.com/docs/books/tutorial/java/IandI/createinterface.html) section.
In general: You can see an interface as a contract or an agreement between two parties. So they can develop independently as long as the interface does not change. The calling party, knows which behaviour is available and the implementing party knows what to implement. There are a lot of advantages by using interfaces. One of them is that it is easy to switch between implementations. The other one is that classes can have different (inter)faces without using multiple inheritance.
Java Interfaces?
[ "", "java", "oop", "" ]
I want to set up a continuous integration and test framework for my open source C++ project. The desired features are: ``` 1. check out the source code 2. run all the unit and other tests 3. run performance tests (these measure the software quality - for example how long does it take the system to complete the test) 4. produce a report based on 3. and 4. daily 5. archive the reports for future reference ``` To implement this, which test framework and what continuous integration process would you recommend? Right now I am leaning towards Google Test Framework (I am aware of some of the comparisons of unit test frameworks discussed [in other questions](https://stackoverflow.com/questions/13699/choosing-a-c-unit-testing-toolframework)) for tests and [Cruisecontrol](http://cruisecontrol.sourceforge.net/) for continuous integration. But I don't know if Cruisecontrol allows easy integration of performance metrics. *Edit*: To answer Wilhelmtell, code should work with both Windows and Linux.
I am using CruiseControl and UnitTest++ today for exactly this task. UnitTest++ is really nice although I feel sometimes limited by it around the corner. At least it is 10 times better than cppunit. Still haven't tried the google testing framework, it will be for my next project. I have been extremely disappointed by CruiseControl. There are a few bugs and questions asked on the mailing list were never answered. Many of the default "features" to manage program execution and logging were not satisfying. We had to write our own report generation, our own stdout and stderr capturing, our own status mailer. There is not much left for CruiseControl, apart from running test at regular intervals and gathering them in a central web server.
I have written an article that might help you. It describes continuous integration of C++ code using googletest and hudson, using gcov for code coverage metrics. <http://meekrosoft.wordpress.com/2010/06/02/continuous-code-coverage-with-gcc-googletest-and-hudson/>
c++ continuous integration with performance metrics
[ "", "c++", "unit-testing", "continuous-integration", "automated-tests", "cruisecontrol", "" ]
Are there any Java VMs which can save their state to a file and then reload that state? If so, which ones?
Another option, which may or may not be relevant in your case, is to run the JVM (any JVM) inside a virtual machine. Most virtual machines offer the option to store and resume state, so you *should* be able to restart your PC, fire up the VM when it comes back up and have the Java process pick up from where it was. I use VMWare Player for testing on IE at work, and this works as noted above when I close and later reopen it. I don't generally do this when apps are doing anything of note in the VM, but as long as they aren't accessing any external resources (e.g. network sockets), I would expect it to work as if the VM was never shut down.
[Continuations](http://en.wikipedia.org/wiki/Continuation) are probably be what you are looking for: > [...] first class continuations, which are constructs > that give a programming language the > ability to save the execution state at > any point and return to that point at > a later point in the program. There are at least two continuation libraries for Java: [RIFE continuations](https://svn.rifers.org/rife-continuations/trunk/) and [javaflow](http://jakarta.apache.org/commons/sandbox/javaflow/). I know that javaflow at least allows serializing state to disk: > A `Continuation` can be serialized if > all objects it captured is also > serializable. In other words, all the > local variables (including all `this` > objects) need to be marked as > `Serializable`. In this example, you > need to mark the MyRunnable class as > Serializable . A serialized > continuation can be sent over to > another machine or used later. - [Javaflow Tutorial](http://commons.apache.org/sandbox/javaflow/tutorial.html)
Are there any Java VMs which can save their state to a file and then reload that state?
[ "", "java", "persistence", "virtual-machine", "" ]
I've got a bunch of strings like: ``` "Hello, here's a test colon&#58;. Here's a test semi-colon&#59;" ``` I would like to replace that with ``` "Hello, here's a test colon:. Here's a test semi-colon;" ``` And so on for all [printable ASCII values](http://www.w3schools.com/tags/ref_ascii.asp). At present I'm using [`boost::regex_search`](http://www.boost.org/doc/libs/1_37_0/libs/regex/doc/html/boost_regex/ref/regex_search.html) to match `&#(\d+);`, building up a string as I process each match in turn (including appending the substring containing no matches since the last match I found). Can anyone think of a better way of doing it? I'm open to non-regex methods, but regex seemed a reasonably sensible approach in this case. Thanks, Dom
The big advantage of using a regex is to deal with the tricky cases like `&#38;#38;` Entity replacement isn't iterative, it's a single step. The regex is also going to be fairly efficient: the two lead characters are fixed, so it will quickly skip anything not starting with `&#`. Finally, the regex solution is one without a lot of surprises for future maintainers. I'd say a regex was the right choice. Is it the best regex, though? You know you need two digits and if you have 3 digits, the first one will be a 1. Printable ASCII is after all `&#32;-&#126;`. For that reason, you could consider `&#1?\d\d;`. As for replacing the content, I'd use the [basic algorithm described for boost::regex::replace](http://www.boost.org/doc/libs/1_37_0/libs/regex/doc/html/boost_regex/ref/regex_replace.html) : ``` For each match // Using regex_iterator<> Print the prefix of the match Remove the first 2 and last character of the match (&#;) lexical_cast the result to int, then truncate to char and append. Print the suffix of the last match. ```
``` * Repaired SNOBOL4 Solution * &#38;#38; -> &#38; digit = '0123456789' main line = input :f(end) result = swap line arb . l + '&#' span(digit) . n ';' rem . line :f(out) result = result l char(n) :(swap) out output = result line :(main) end ```
Regex Replacing &#58; to ":" etc
[ "", "c++", "regex", "boost", "ascii", "ncr", "" ]
I'm currently refactoring/tidying up some old C code used in a C++ project, and regularly see functions such as: ``` int f(void) ``` which I would tend to write as: ``` int f() ``` Is there any reason not to replace (void) with () throughout the codebase in order to improve consistency, or is there a subtle difference between the two that I am unaware of? More specifically, if a virtual member function in C++ is described as: ``` virtual int f(void) ``` and a derived class includes a member function: ``` int f() ``` is this a valid override? Additionally, am I likely to encounter any linker problems based on almost identical signatures?
In C, the declaration `int f(void)` means a function returning int that takes no parameters. The declaration `int f()` means a function returning int that takes any number of parameters. Thus, if you have a function that takes no parameters in C, the former is the correct prototype. In C++, I believe `int f(void)` is deprecated, and `int f()` is preferred, as it specifically means a function that takes no parameters.
To add to [Chris's answer](https://stackoverflow.com/questions/416345/is-fvoid-deprecated-in-modern-c-and-c/416354#416354), using `int f()` is bad practice in C, in my experience, since you lose the compiler's ability to compare the function's declaration to its definition, to ensure that it will be called correctly. For example, the following code is standards-compliant C: ``` #include <stdio.h> void foo(); void bar(void) { foo(); } void foo(int a) { printf("%d\n", a); } ``` But it results in undefined behavior, since `a` was not passed to `foo`. In C++, there are two versions of `foo`: one that takes no arguments and one that takes an `int`. So `bar` winds up calling the undefined version, which would result in a linker error (assuming there are no other definitions of `foo` anywhere).
Is f(void) deprecated in modern C and C++?
[ "", "c++", "c", "refactoring", "void", "" ]
I need to minify some C# code in a handful of Silverlight .cs and .xmal files. What are your tips for maintaining one code base and running a "tool" to generate minified code for a project? Are there any tools (like Resharper) that will do this? If not fully, partially or assist in some way... EDIT: I realize that there is no need for C# minification. This is an exercise that I'm doing that does not make sense on the face of it. (This is not homework.)
How about a source-code obfuscator? They generally abbreviate names, etc - and certainly remove white space. For example, [here](http://www.semdesigns.com/Products/Obfuscators/CSharpObfuscator.html), with demo [here](http://www.semdesigns.com/Products/Obfuscators/CSharpObfuscationExample.html) (although you'd probably want to disable the string encoding if possible). Note that this isn't a direct recommendation: this is just the first hit I got for [C# code obfuscator](http://www.google.com/search?q=c%23+code+obfuscator).
Is that necessary? It was my understanding that the compiled .Net assembly would be sent across the wire, not the C# (or whatever language) source code.
Do you have any tips for C# Minification?
[ "", "c#", "xaml", "minify", "" ]
I have a TextBox control on my Form. I use the Leave event on the control to process user input. It works fine if the user clicks on some other control on the form, but the even doesn't get fired when the user goes straight to the main menu. Any ideas which event should I use to get it fired everytime?
I found a reasonable workaround, I set the focus on the main menu manually: EDIT: As suggested by @TcKs, I changed the event from ItemClicked to MenuActivate. Thanks very much for help! ``` private void menuStrip1_MenuActivate( object sender, EventArgs e ) { menuStrip1.Focus(); } ```
You should use "Validating" and "Validated" events for checking user's input. Then if user go to another control "A", and the control "A" has property "CausesValidation" set to "true" ( its default value ) the "Validating" and "Validated" event will be fired. The menu has "CausesValidation" property too. **Edit:** Sorry, I forgot the "CausesValidation" in menu strip is our functionality and not built-in. But the check for validating is pretty simple: ``` private void menuStrip1_MenuActivate( object sender, EventArgs e ) { bool ret = this.Validate( false ); if ( false == ret ) { // user's input is wrong } } ``` Use any ContainerControl instead of "this", if you want check the validation in another control than the "this" form. For example in MDI Child window.
WinForms: Textbox Leave event doesn't get fired after going to main menu
[ "", "c#", ".net", "winforms", "" ]
Is there anyway I can make the process of adding references to C# projects less painful? Every time I create a new C# class library project. I have to use the Add Reference dialog for 5 times at least.
Install the [PowerCommands for Visual Studio](http://code.msdn.microsoft.com/PowerCommands). You can then simply copy and paste a bunch of references between projects (plus lots of other useful commands). Some of the other useful commands are: * Collapse Projects (my favourite) * Copy References and Paste References * Remove and Sort Usings * Open Containing Folder * Open Command Prompt
You can select more than one reference at a time to add using CTRL-Click. You can also use the Recent tab to find references that you've added recently to other projects easily.
Add references manually
[ "", "c#", ".net", "visual-studio", "" ]
I cannot seem to access the context object using a loop context is set: `var context = [id1, id2, id3];` This callback function works: ``` function OnChangeSucceeded(result, context, methodName) { document.getElementById(context[0]).disabled = result; document.getElementById(context[1]).disabled = result; document.getElementById(context[2]).disabled = result; } ``` This callback function fails: ``` function OnChangeSucceeded(result, context, methodName) { for(var indx = 0; indx < context.length; indx++) { document.getElementById(context[indx]).disabled = result; } } ```
Thats for the pointer to firebug tvanfosson. I have redone the function and it now works as: ``` function OnChangeSucceeded(result, context, methodName) { for (controlId in context) { document.getElementById(context[controlId]).disabled = result; } } ``` I am not sure if it was because the context was original created as: ``` context = [id1, id2, id3]; ``` which I have now replaced with: ``` context = new Array(id1, id2, id3); ```
It would be handy to see the calling code so that we could see how your context is established. I'm going to guess that you've set it up as an association and not an array so that when you go to use it in the callback, there is no length property (or it's 0). When you set it up it should look like: ``` var context = new Array(); context[0] = 'elem0'; context[1] = 'elem1'; context[2] = 'elem2'; ``` not var context = { 0: 'elem0', 1: 'elem1', 2: 'elem2' }; If that isn't the problem, then try checking it out in FireFox/FireBug by setting a breakpoint in the onChangeSucceeded function and examining the actual context object to see what properties it has.
How to access the context object as an array in PageMethods callback
[ "", "asp.net", "javascript", "callback", "" ]
I am looking at an asp.net 2 web application that I am maintaining (but did not write). Some things that should happen on the page loading do not, but only sometimes, and it seems to be if you are using Firefox 3 inside a VM. JQuery and asp.net Ajax are used. The important function that should run every time (but does not) is attached by the following Javascript: ``` <script type="text/javascript"> //<![CDATA[ $(document).ready(function(){ Sys.Application.add_load(ImportantFunction); }); $(document).ready(function(){ Otherstuff(); }); $(document).ready(function(){ MoreStuff(); }); //]]> </script> ``` But if I use firebug to set a breakpoint inside ImportantFunction(), it is not hit on page load in firefox 3, but it is hit on an ajax update. In the page there are multiple calls to $(document).ready since they come from different parts of the asp.net code behind it. Yes, they do all execute.
Try this: ``` <script type="text/javascript"> //<![CDATA[ $(document).ready(ImportantFunction); $(document).ready(Otherstuff); $(document).ready(MoreStuff); //]]> </script> ``` Put the call to `Sys.Application.add_load` in the body of `ImportantFunction`, i.e in your .js file: ``` function importantFunction() { Sys.Application.add_load(ImportantFunction); } ``` **Edit:** I'm not sure if its possible to add multiple functions to be run on `$(document).ready` event. It might help if you did this instead: ``` <script type="text/javascript"> //<![CDATA[ $(document).ready(init); //]]> </script> ``` And in `init` you can include calls to all the other functions, i.e: ``` function init() { importantFunction(); otherStuff(); moreStuff(); //Any other functions to be called upon page load go here } ``` It will also make the code easier to read :)
Is there any reason why you can't use the ASP.NET AJAX pageLoad function instead of $(document).ready()? ``` function pageLoad(sender, args) { ImportantFunction(); OtherStuff(); MoreStuff(); } ``` This is part of the ASP.NET AJAX client page lifecycle and all JavaScript code inside will be executed on every page load, including asynchronous postbacks.
Javascript function should be running on every page load
[ "", "javascript", "jquery", "ajax", "asp.net-ajax", "" ]
Without having the full module path of a Django model, is it possible to do something like: ``` model = 'User' [in Django namespace] model.objects.all() ``` ...as opposed to: ``` User.objects.all(). ``` EDIT: I am trying to make this call based on command-line input. Is it possible to avoid the import statement, e.g., ``` model = django.authx.models.User ``` Without Django returning the error: ``` "global name django is not defined." ```
I think you're looking for this: ``` from django.db.models.loading import get_model model = get_model('app_name', 'model_name') ``` There are other methods, of course, but this is the way I'd handle it if you don't know what models file you need to import into your namespace. (Note there's really no way to safely get a model without first knowing what app it belongs to. Look at the source code to loading.py if you want to test your luck at iterating over all the apps' models.) **Update for Django 1.7+:** According to Django's [deprecation timeline](https://docs.djangoproject.com/en/dev/internals/deprecation/#deprecation-removed-in-1-9), `django.db.models.loading` has been deprecated in Django 1.7 and will be removed in Django 1.9. As pointed out in [Alasdair's answer](https://stackoverflow.com/a/28380435/996114), In Django 1.7+, there is an [applications registry](https://docs.djangoproject.com/en/1.8/ref/applications/#module-django.apps). You can use the [`apps.get_model`](https://docs.djangoproject.com/en/1.8/ref/applications/#django.apps.apps.get_model) method to dynamically get a model: ``` from django.apps import apps MyModel = apps.get_model('app_label', 'MyModel') ```
For Django 1.7+, there is an [applications registry](https://docs.djangoproject.com/en/1.8/ref/applications/#module-django.apps). You can use the [`apps.get_model`](https://docs.djangoproject.com/en/1.8/ref/applications/#django.apps.apps.get_model) method to dynamically get a model. ``` from django.apps import apps MyModel = apps.get_model('app_label', 'MyModel') ```
How do I retrieve a Django model class dynamically?
[ "", "python", "django", "django-models", "django-queryset", "" ]
On my blog, I want to display the all the posts from the last month. But if that is less than 10 posts, I want to show the ten most recent posts (in other words, there should never be less than 10 posts on the front page). I am wondering if there is a way to do this in a single query? Currently, I first run this query: ``` select count(*) from posts where timestamp > ($thirty_days_ago) order by timestamp desc ``` If that count is greater than or equal to 10: ``` select * from posts where timestamp > ($thirty_days_ago) order by timestamp desc ``` Otherwise: ``` select * from posts order by timestamp desc limit 10 ``` But this requires me to run two queries. Is there a more efficient way to do this with a single query? (I'm using MySQL.)
Just do this: ``` select * from posts order by timestamp desc limit 100 ``` And filter the results further in memory. (assumes 100 is a practical upper limit for "posts in a month" that people would want to see in a single page) This is a "more efficient single query".
``` (SELECT * FROM posts WHERE `timestamp` >= NOW() - INTERVAL 30 DAY) UNION (SELECT * FROM posts ORDER BY `timestamp` DESC LIMIT 10); ``` **edit:** Re @doofledorfer's comment: I ran this on my test database, and it worked fine. I tried comparing `timestamp` to a date literal as well as the constant expression as shown in the above query, but it made no difference to the optimization plan. Of course I was using a trivial amount of data, and the optimization plan may be different if there are thousands of rows. In any case, the OP was asking how to get the correct result in a single query, *not* how to make the execution plan optimal. This is a UNION query after all, and is bound to incur a filesort. ``` +------+--------------+------------+------+---------------+------+---------+------+------+----------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +------+--------------+------------+------+---------------+------+---------+------+------+----------------+ | 1 | PRIMARY | posts | ALL | timestamp | NULL | NULL | NULL | 20 | Using where | | 2 | UNION | posts | ALL | NULL | NULL | NULL | NULL | 20 | Using filesort | | NULL | UNION RESULT | <union1,2> | ALL | NULL | NULL | NULL | NULL | NULL | | +------+--------------+------------+------+---------------+------+---------+------+------+----------------+ ```
How to query for 10 most recent items or items from last month, whichever is more?
[ "", "sql", "mysql", "" ]
I'm rewriting a PHP web site in ASP.NET MVC. I'd like to maintain the same user base but the passwords are hashed using the PHP crypt() function. I need the same function in .Net so that I can hash a password on login and check it against the hashed password in the user database. crypt in this case is using the CRYPT\_MD5 implementation - the hashes all start with $1$ I've tried Phalanger but it doesn't have an MD5 implementation of the crypt function. Does anyone know of one in .Net? The C# example of crypt() on CodeProject uses DES, not MD5. I've tried the following code in C#, with different permutations of salt+password, password+salt and salt with and without $1$ prefix and $ suffix. None gives same result as PHP: ``` static void Main(string[] args) { const string salt = "somesalt"; const string password = "fubar"; const string plaintextString = password + salt; byte[] plaintext = GetBytes(plaintextString); var md5 = MD5.Create("MD5"); byte[] hash = md5.ComputeHash(plaintext); string s = System.Convert.ToBase64String(hash); Console.WriteLine("Hash of " + password + " is " + s); Console.ReadKey(); } private static byte[] GetBytes(string s) { var result = new byte[s.Length]; for (int i = 0; i < s.Length; i++) result[i] = (byte)s[i]; return result; } ```
The only solution I found was to call a trivial PHP script that simply performs a hash of the input string and returns it :-(
There are a few .NET methods for md5 hashing, `System.Web.Security.FormsAuthentication.HashPasswordForStoringInConfigFile(password, format)` is the easiest to use, even though it's a mouthful. Just pass "md5" through as the format. Depending on how PHP is doing this, it may be as simple as chopping the `$1$` off the beginning of the hash when you import it. It may be more complex. If you can post an example password/hash, I'll see if I can come up with some C# that generates the same hash from that password for you.
PHP crypt() function in .Net?
[ "", "php", ".net", "asp.net", "security", "encryption", "" ]
i know that the JDK consists of all java packages .But what does the JRE consist of apart from java.exe ? I could understand the necessities of things in 'bin' folder in the JRE but what about the 'lib' folder ?
Have a look at this [JDK and JRE File Structure](http://java.sun.com/javase/6/docs/technotes/tools/windows/jdkfiles.html) document from Sun's JDK documentation. It specifically says the following about the `lib` directory: > Code libraries, property settings, and resource files used by the Java runtime environment. For example: > > * rt.jar -- the bootstrap classes (the RunTime classes that comprise the Java platform's core API). > * charsets.jar -- character conversion classes. > > Aside from the ext subdirectory (described below) there are several additional resource subdirectories not described here.
JRE is composed of the JVM which is the runtime interpreter for the Java language, the Class Loader, Secure Execution Implementation classes, Java APIs(core classes, SE classes) and the Java Web (Deployment) foundation which includes Java Web Start. The lib part of JRE is Java's Library containing classes that lay the foundation for features like JavaBeans Components(JBCL) and Generic Collections(GCL).
What does the JRE consist of?
[ "", "java", "" ]
Currently our Java application uses the values held within a tab delimited \*.cfg file. We need to change this application so that it now uses an XML file. What is the best/simplest library to use in order to read in values from this file?
There are of course a lot of good solutions based on what you need. If it is just configuration, you should have a look at Jakarta [commons-configuration](http://commons.apache.org/configuration/) and [commons-digester](http://commons.apache.org/digester/). You could always use the standard JDK method of getting a document : ``` import java.io.File; import javax.xml.parsers.DocumentBuilder; import javax.xml.parsers.DocumentBuilderFactory; import org.w3c.dom.Document; [...] File file = new File("some/path"); DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); DocumentBuilder db = dbf.newDocumentBuilder(); Document document = db.parse(file); ```
XML Code: ``` <?xml version="1.0"?> <company> <staff id="1001"> <firstname>yong</firstname> <lastname>mook kim</lastname> <nickname>mkyong</nickname> <salary>100000</salary> </staff> <staff id="2001"> <firstname>low</firstname> <lastname>yin fong</lastname> <nickname>fong fong</nickname> <salary>200000</salary> </staff> </company> ``` Java Code: ``` import javax.xml.parsers.DocumentBuilderFactory; import javax.xml.parsers.DocumentBuilder; import org.w3c.dom.Document; import org.w3c.dom.NodeList; import org.w3c.dom.Node; import org.w3c.dom.Element; import java.io.File; public class ReadXMLFile { public static void main(String argv[]) { try { File fXmlFile = new File("/Users/mkyong/staff.xml"); DocumentBuilderFactory dbFactory = DocumentBuilderFactory.newInstance(); DocumentBuilder dBuilder = dbFactory.newDocumentBuilder(); Document doc = dBuilder.parse(fXmlFile); doc.getDocumentElement().normalize(); System.out.println("Root element :" + doc.getDocumentElement().getNodeName()); NodeList nList = doc.getElementsByTagName("staff"); System.out.println("----------------------------"); for (int temp = 0; temp < nList.getLength(); temp++) { Node nNode = nList.item(temp); System.out.println("\nCurrent Element :" + nNode.getNodeName()); if (nNode.getNodeType() == Node.ELEMENT_NODE) { Element eElement = (Element) nNode; System.out.println("Staff id : " + eElement.getAttribute("id")); System.out.println("First Name : " + eElement.getElementsByTagName("firstname") .item(0).getTextContent()); System.out.println("Last Name : " + eElement.getElementsByTagName("lastname") .item(0).getTextContent()); System.out.println("Nick Name : " + eElement.getElementsByTagName("nickname") .item(0).getTextContent()); System.out.println("Salary : " + eElement.getElementsByTagName("salary") .item(0).getTextContent()); } } } catch (Exception e) { e.printStackTrace(); } } } ``` Output: ``` ---------------- Root element :company ---------------------------- Current Element :staff Staff id : 1001 First Name : yong Last Name : mook kim Nick Name : mkyong Salary : 100000 Current Element :staff Staff id : 2001 First Name : low Last Name : yin fong Nick Name : fong fong Salary : 200000 ``` I recommended you reading this: [Normalization in DOM parsing with java - how does it work?](https://stackoverflow.com/questions/13786607/normalization-in-dom-parsing-with-java-how-does-it-work) [Example source.](http://www.mkyong.com/java/how-to-read-xml-file-in-java-dom-parser/)
What is the best/simplest way to read in an XML file in Java application?
[ "", "java", "xml", "file", "" ]
This is probably a simple question, and I'm slightly embarrassed to ask it, but I've been working with this chunk of JavaScript ad code for a while and it's bothered me that it's never really made sense to me and is probably out dated now with modern browsers. My question is, do we need to check for browser types still, and what is that second bit of script doing? ``` <script type="text/javascript"> document.write('<scr' + 'ipt src="" type="text/javascript"></scr' + 'ipt>'); </script> <script type="text/javascript"> if ((!document.images && navigator.userAgent.indexOf('Mozilla/2.') >= 0) || navigator.userAgent.indexOf("WebTV")>= 0) { document.write('<a href="">'); document.write('<img src="" border="0" alt="" /></a>'); } </script> ``` I'd like to clarify that I'm actually calling someone some ad code, so while I could check for browser types, that would really be the responsibility of the keeper of the code. I'd love it if I could get this into jQuery - but I'm having trouble with the call (see my other post below). What I was wondering is, do I still need to check for these browser types? Cheers, Steve
On the second snippet of code: it's checking for two things: * That the browser opening the document supports the document.images portion of the DOM, that the document contains any images, and the browser's UserAgent string (an identifier) contains "Mozilla/2.", * OR that the UserAgent string contains "WebTV" in those cases, it outputs an empty link and image tag.
Mostly we use javascript libraries like [jQuery](http://jquery.com/) which handle this kind of thing for us. Strangely I find myself hacking per-browser CSS much more often these days.
Do we still need to check for different browser types in JavaScript?
[ "", "javascript", "jquery", "" ]
I'm just learning Qt with C++. I have successfully implemented signals and slots to trap standard events like `ButtonPushed()`, etc. However, I want to have a function called when I mouse over and mouse out of a `QLabel`. It looks like [QHoverEvent](http://doc.qt.io/qt-4.8/qhoverevent.html) will do what I need, but I can't seem to find any tutorials or examples on how to implement this. Is it done the same way as signals and slots?. I tried: ``` connect(ui.lbl_test, SIGNAL(QHoverEvent), this, SLOT(TestFunc(QEvent::Type type, const QPoint & pos, const QPoint & oldPos))); ``` .. but the function didn't get called when I hovered over the label. Here is the function, listed in the header file as a public slot: ``` void MyDialog::TestFunc(QEvent::Type type, const QPoint & pos, const QPoint & oldPos) { QMessageBox::information(this, tr("Hey"), tr("Listen!")); } ``` Can anyone help me figure this out or point me to a good example? EDIT: After reading a post below, I found no `setFlag()` member to call for my label widget, but I did try: ``` ui.lbl_test->setMouseTracking(true); connect(ui.lbl_test, SIGNAL(ui.lbl_test->mouseMoveEvent()), this, SLOT(TestFunc(QMouseEvent *event))); ``` And updated `TestFunc()` accordingly. But still nothing happens when I mouse over. After looking I am not sure `QLabel` even inherits the mouseMoveEvent() even from `QWidget`. If this is true, is there a widget that does, or a list of objects that inherit it somewhere?. All I can tell from the documentation on their site is how many inherited functions an object has..
Using signals and slots for this purpose isn't going to work. `mouseMoveEvent()` is not a signal or meta-method and cannot be connected to a slot. Subclassing the widget class and overriding `mouseMoveEvent()` will allow you to get mouse-move-events, but that is a very heavyweight way to accomplish this (and adds one more class to your source base). Instead, consider implementing an `eventFilter()` method on your `MyDialog` class and installing it on the `QLabel`. With this event filter method, you can intercept all the events for a given `QObject` instance. Here is the documentation on Event Filters. <http://doc.qt.io/qt-4.8/eventsandfilters.html#event-filters> Additionally, through looking at the code sample, I'd recommend you take a moment to investigate what the `SIGNAL()` and `SLOT()` macros do. You can see how they are defined in `$QTDIR/src/corelib/kernel/qobjectdefs.h`
<https://doc.qt.io/qt-5/qwidget.html#enterEvent> <https://doc.qt.io/qt-5/qwidget.html#leaveEvent> <https://doc.qt.io/qt-5/qwidget.html#widget-attributes> ## `Qt::WA_Hover` > Forces Qt to generate paint events when the mouse enters or leaves the > widget. This feature is typically used when implementing custom > styles; see the Styles example for details. <http://qt-project.org/doc/qt-5/qtwidgets-widgets-styles-example.html#norwegianwoodstyle-class-implementation> > This `QStyle::polish()` overload is called once on every widget drawn > using the style. We reimplement it to set the `Qt::WA_Hover` attribute > on `QPushButtons` and `QComboBoxes`. When this attribute is set, Qt > generates paint events when the mouse pointer enters or leaves the > widget. This makes it possible to render push buttons and comboboxes > differently when the mouse pointer is over them. ## How to receive Enter and Leave Events on a `QWidget` 1. Set the Widget Attribute for `WA_Hover` ``` // in your widget's constructor (probably) this->setAttribute(Qt::WA_HOVER, true); ``` 2. Implement `QWidget::enterEvent()` and `QWidget::leaveEvent()`. ``` void Widget::enterEvent(QEvent * event) { qDebug() << Q_FUNC_INFO << this->objectName(); QWidget::enterEvent(event); } void Widget::leaveEvent(QEvent * event) { qDebug() << Q_FUNC_INFO << this->objectName(); QWidget::leaveEvent(event); } ``` 3. Done ## `QHoverEvent` in `QWidget` <https://doc.qt.io/qt-5/qhoverevent.html#details> <https://doc.qt.io/qt-5/qobject.html#event> <https://doc.qt.io/qt-5/qwidget.html#event> ``` // in your widget's constructor (probably) this->setAttribute(Qt::WA_HOVER, true); // ... void Widget::hoverEnter(QHoverEvent * event) {qDebug() << Q_FUNC_INFO << this->objectName();} void Widget::hoverLeave(QHoverEvent * event) {qDebug() << Q_FUNC_INFO << this->objectName();} void Widget::hoverMove(QHoverEvent * event) {qDebug() << Q_FUNC_INFO << this->objectName();} bool Widget::event(QEvent * e) { switch(e->type()) { case QEvent::HoverEnter: hoverEnter(static_cast<QHoverEvent*>(e)); return true; break; case QEvent::HoverLeave: hoverLeave(static_cast<QHoverEvent*>(e)); return true; break; case QEvent::HoverMove: hoverMove(static_cast<QHoverEvent*>(e)); return true; break; default: break; } return QWidget::event(e); } ``` **UPDATE:** ## Simple Example Hover the button and see the count change. Look at the application output for more information. <https://gist.github.com/peteristhegreat/d6564cd0992351f98aa94f869be36f77> Hope that helps.
How do I implement QHoverEvent in Qt?
[ "", "c++", "qt", "events", "" ]
**UPDATE:** Obviously, you'd want to do this using templates or a base class rather than macros. Unfortunately for various reasons I can't use templates, or a base class. --- At the moment I am using a macro to define a bunch of fields and methods on various classes, like this: ``` class Example { // Use FIELDS_AND_METHODS macro to define some methods and fields FIELDS_AND_METHODS(Example) }; ``` `FIELDS_AND_METHODS` is a multi-line macro that uses stringizing and token-pasting operators. I would like to replace this with the following kind of thing ``` class Example { // Include FieldsNMethods.h, with TYPE_NAME preprocessor symbol // defined, to achieve the same result as the macro. #define TYPE_NAME Example #include "FieldsNMethods.h" }; ``` Here I #define the name of the class (previously the parameter to the macro), and the `FieldsNMethods.h` file contains the content of the original macro. However, because I'm #including I can step into the code at runtime, when debugging. However I am having trouble 'stringizing' and 'token pasting' the `TYPE_NAME` preprocessor symbol in the `FieldsNMethods.h` file. For example, I want to define the destructor of the class in `FieldsNMethods.h`, so this would need to use the value of `TYPE_NAME` as below: ``` ~TYPE_NAME() { //... } ``` But with `TYPE_NAME` replaced by its value. Is what I'm attempting possible? I can't use the stringizing and token-pasting operators directly, because I'm not in a macro definition.
This cries out for a template. ``` class Example<class T> { ...class definition... }; ``` The direct answer to the last part of your question - "given that I'm not in a macro definition any more, how do I get pasting and stringizing operators to work" - is "You can't". Those operators only work in macros, so you'd have to write macro invocations in order to get them to work. *Added*: @mackenir said "templates are not an option". Why are templates not an option? The code is simulating templates the old-fashioned pre-standard, pre-template way, and does so causing much pain and grief. Using templates would avoid that pain -- though there'd be a conversion operation. @mackenir asked "is there a way to make things work with macros?" Yes, you can, but you should use templates - they are more reliable and maintainable. To make it work with macros, then you'd have to have the function names in the code in the included header be macro invocations. You need to go through a level of indirection to get this to work correctly: ``` #define PASTE_NAME(x, y) PASTE_TOKENS(x, y) #define PASTE_TOKENS(x, y) x ## y #define TYPE_NAME Example int PASTE_NAME(TYPE_NAME, _function_suffix)(void) { ... } ``` This level of indirection is an often necessary idiom for both tokenizing and stringizing operators. --- Additional comments from @mackenir indicate continued problems. Let's make it concrete. > At the moment I am using a macro to define a bunch of fields and methods on various classes, like this: ``` class Example { // Use FIELDS_AND_METHODS macro to define some methods and fields FIELDS_AND_METHODS(Example) }; ``` > FIELDS\_AND\_METHODS is a multi-line macro that uses stringizing and token-pasting operators. > > I would like to replace this with the following kind of thing ``` class Example { // Include FieldsNMethods.h, with TYPE_NAME preprocessor symbol // defined, to achieve the same result as the macro. #define TYPE_NAME Example #include "FieldsNMethods.h" }; ``` OK. To make this concrete, we need a `FIELDS_AND_METHODS(type)` macro that is multi-line and uses token-pasting (I'm not going to deal with stringizing - the same basic mechanisms will apply, though). ``` #define FIELDS_AND_METHODS(type) \ type *next; \ type() : next(0) { } \ type * type ## _next() { return next; } ``` With luck, this declares a member of the type 'pointer to argument type', a constructor for that type, and a method (Example\_next in this case) that returns that pointer. So, this might be the macro - and we need to replace it such that the '#include' does the equivalent job. The content of fieldsNmethods.h becomes: ``` #ifndef TYPE_NAME #error TYPE_NAME not defined #endif #define FNM_PASTE_NAME(x, y) FNM_PASTE_TOKENS(x, y) #define FNM_PASTE_TOKENS(x, y) x ## y TYPE_NAME *next; TYPE_NAME() : next(0) { } TYPE_NAME * FNM_PASTE_NAME(TYPE_NAME, _next)() { return next; } #undef FNM_PASTE_NAME #undef FNM_PASTE_TOKENS ``` Note that the header would not contain multiple-inclusion guards; its raison d'etre is to allow it to be included multiple times. It also undefines its helper macros to permit multiple inclusion (well, since the redefinitions would be identical, they're 'benign' and wouldn't cause an error), and I prefixed them with `FNM_` as a primitive namespace control on the macros. This generates the code I'd expect from the C pre-processor. and G++ doesn't witter but produces an empty object file (because the types declared are not used in my example code). Note that this does not require any changes to the calling code except the one outlined in the question. I think the question should be improved using the SPOT "Single Point of Truth" principle (or DRY "Don't Repeat Yourself"): ``` #define TYPE_NAME Example class TYPE_NAME { // Include FieldsNMethods.h, with TYPE_NAME preprocessor symbol // defined, to achieve the same result as the macro. #include "FieldsNMethods.h" }; ```
You have to add an extra layer of macros: ``` #define STRINGIZE(x) STRINGIZE2(x) #define STRINGIZE2(x) #x #define TOKENPASTE(x, y) TOKENPASTE2(x, y) #define TOKENPASTE2(x, y) x ## y ``` The reason is that when you have a macro, the preprocessor normally recursively expands the arguments before performing the macro substitution. However, if any argument is used with the stringizing operator # or the token-pasting operator ##, it is *not* expanded. Therefore, you need an extra layer of macros, where the first layer expands the arguments, and the second layer performs the stringizing or token pasting. If the arguments need to be expanded multiple times (such as `#define A B, #define B C, #define C D, STRINGIZE(A)`), then you need to add that many more layers before you apply the # or ## operators.
How do I replace this preprocessor macro with a #include?
[ "", "c++", "macros", "include", "c-preprocessor", "" ]
My application needs to execute a fairly complicated series of queries against a database. Normally I would just dump everything into a stored procedure and execute it that way. But I do not have access to the database I'm trying to access so I can't create a stored procedure. Is there a better way of doing this instead of hitting the database 5-6 times to get the results that I need? I could join everything into a single query, but I would like to avoid that if possible since I would need to join about 10 tables.
There's nothing wrong with joining 10 tables if that's ultimately what you need to do. Generally SQL is good at this kind of thing. However, if there isn't that tight of a coupling between your 5-6 queries, then run them separately. If you choose to break up the query, hitting the DB 5-6 times is fine--absolutely nothing wrong with that. Your access method (e.g. ADO.NET) probably gives you connection pooling for free anyway so the overhead of multiple queries is very small.
You can always execute the same series of queries in a single shot by separating them by ";" .
Application execute complex SQL query
[ "", "sql", "sql-server", "t-sql", "" ]
In the process of fixing a poorly imported database with issues caused by using the wrong database encoding, or something like that. Anyways, coming back to my question, in order to fix this issues I'm using a query of this form: > UPDATE `table_name` SET field\_name = > replace(field\_name,’search\_text’,'replace\_text’); And thus, if the table I'm working on has multiple columns I have to call this query for each of the columns. And also, as there is not only one pair of things to run the find and replace on I have to call the query for each of this pairs as well. So as you can imagine, I end up running tens of queries just to fix one table. What I was wondering is if there is a way of either combine multiple find and replaces in one query, like, lets say, look for this set of things, and if found, replace with the corresponding pair from this other set of things. Or if there would be a way to make a query of the form I've shown above, to run somehow recursively, for each column of a table, regardless of their name or number. Thank you in advance for your support, titel
Let's try and tackle each of these separately: If the set of replacements is the same for every column in every table that you need to do this on (or there are only a couple patterns), consider creating a user-defined function that takes a varchar and returns a varchar that just calls `replace(replace(@input,'search1','replace1'),'search2','replace2')` nested as appropriate. To update multiple columns at the same time you should be able to do `UPDATE table_name SET field_name1 = replace(field_name1,...), field_name2 = replace(field_name2,...)` or something similar. As for running something like that for every column in every table, I'd think it would be easiest to write some code which fetches a list of columns and generates the queries to execute from that.
I don't know of a way to automatically run a search-and-replace on each column, however the problem of multiple pairs of search and replace terms in a single `UPDATE` query is easily solved by nesting calls to `replace()`: ``` UPDATE table_name SET field_name = replace( replace( replace( field_name, 'foo', 'bar' ), 'see', 'what', ), 'I', 'mean?' ) ```
SQL to search and replace in mySQL
[ "", "sql", "mysql", "" ]
I [half-answered a question about finding clusters of mass in a bitmap](https://stackoverflow.com/questions/411837/finding-clusters-of-mass-in-a-matrix-bitmap#411855). I say half-answered because I left it in a condition where I had all the points in the bitmap sorted by mass and left it to the reader to filter the list removing points from the same cluster. Then when thinking about that step I found that the solution didn't jump out at me like I thought it would. So now I'm asking you guys for help. We have a list of points with masses like so (a Python list of tuples, but you can represent it as you see fit in any language): ``` [ (6, 2, 6.1580555555555554), (2, 1, 5.4861111111111107), (1, 1, 4.6736111111111107), (1, 4, 4.5938888888888885), (2, 0, 4.54), (1, 5, 4.4480555555555554), (4, 7, 4.4480555555555554), (5, 7, 4.4059637188208614), (4, 8, 4.3659637188208613), (1, 0, 4.3611111111111107), (5, 8, 4.3342191043083904), (5, 2, 4.119574829931973), ... (8, 8, 0.27611111111111108), (0, 8, 0.24138888888888888) ] ``` Each tuple is of the form: ``` (x, y, mass) ``` Note that the list is sorted here. If your solution prefers to not have them sorted it's perfectly OK. The challenge, [if you recall](https://stackoverflow.com/questions/411837/finding-clusters-of-mass-in-a-matrix-bitmap), is to find the main clusters of mass. The number of clusters is not known. But you know the dimensions of the bitmap. Sometimes several points within a cluster has more mass than the center of the next (in size) cluster. So what I want to do is go from the higher-mass points and remove points in the same cluster (points nearby). When I tried this I ended up having to walk through parts of the list over and over again. I have a feeling I'm just stupid about it. How would you do it? Pseudo code or real code. Of course, if you can just take off where I left in that answer with Python code it's easier for me to experiment with it. Next step is to figure out how many clusters there really are in the bitmap. I'm still struggling with defining that problem so I might return with a question about it. **EDIT:** I should clarify that I know that there's no "correct" answer to this question. And the name of the question is key. Phase one of the my clustering is done. **Im in search of a fast, accurate-"enough" method of filtering away nearby points.** Let me know if you see how I can make the question clearer.
Just so you know, you are asking for a solution to an [ill-posed](http://en.wikipedia.org/wiki/Ill-posed) problem: no definitive solution exists. That's fine...it just makes it more fun. Your problem is ill-posed mostly because you don't know how many clusters you want. Clustering is one of the key areas of machine learning and there a quite a few approaches that have been developed over the years. As Arachnid pointed out, the [k-means](http://en.wikipedia.org/wiki/K_means) algorithm tends to be a good one and it's pretty easy to implement. The results depend critically on the initial guess made and on the number of desired clusters. To overcome the initial guess problem, it's common to run the algorithm many times with random initializations and pick the best result. You'll need to define what "best" means. One measure would be the mean squared distance of each point to its cluster center. If you want to automatically guess how many clusters there are, you should run the algorithm with a whole range of numbers of clusters. For any good "best" measure, more clusters will always look better than fewer, so you'll need a way to penalize having too many clusters. The [MDL](http://en.wikipedia.org/wiki/Minimum_description_length) discussion on wikipedia is a good starting point. K-means clustering is basically the simplest [mixture model](http://en.wikipedia.org/wiki/Mixture_model). Sometimes it's helpful to upgrade to a mixture of Gaussians learned by expectation maximization (described in the link just given). This can be more robust than k-means. It takes a little more effort to understand it, but when you do, it's not much harder than k-means to implement. There are plenty of other [clustering techniques](http://en.wikipedia.org/wiki/Data_clustering) such as agglomerative clustering and spectral clustering. Agglomerative clustering is pretty easy to implement, but choosing when to stop building the clusters can be tricky. If you do agglomerative clustering, you'll probably want to look at [kd trees](http://en.wikipedia.org/wiki/Kd-tree) for faster nearest neighbor searches. smacl's answer describes one slightly different way of doing agglomerative clustering using a Voronoi diagram. There are models that can automatically choose the number of clusters for you such as ones based on [Latent Dirichlet Allocation](http://en.wikipedia.org/wiki/Latent_Dirichlet_allocation), but they are a lot harder to understand an implement correctly. You might also want to look at the [mean-shift](http://www.wisdom.weizmann.ac.il/~deniss/vision_spring04/files/mean_shift/mean_shift.ppt) algorithm to see if it's closer to what you really want.
It sounds to me like you're looking for the [K-means](http://en.wikipedia.org/wiki/K_means) algorithm.
Filtering away nearby points from a list
[ "", "python", "algorithm", "language-agnostic", "bitmap", "filtering", "" ]
I have the following enumeration: ``` public enum AuthenticationMethod { FORMS = 1, WINDOWSAUTHENTICATION = 2, SINGLESIGNON = 3 } ``` The problem however is that I need the word "FORMS" when I ask for AuthenticationMethod.FORMS and not the id 1. I have found the following solution for this problem ([link](http://www.codeproject.com/Articles/11130/String-Enumerations-in-C)): First I need to create a custom attribute called "StringValue": ``` public class StringValue : System.Attribute { private readonly string _value; public StringValue(string value) { _value = value; } public string Value { get { return _value; } } } ``` Then I can add this attribute to my enumeration: ``` public enum AuthenticationMethod { [StringValue("FORMS")] FORMS = 1, [StringValue("WINDOWS")] WINDOWSAUTHENTICATION = 2, [StringValue("SSO")] SINGLESIGNON = 3 } ``` And of course I need something to retrieve that StringValue: ``` public static class StringEnum { public static string GetStringValue(Enum value) { string output = null; Type type = value.GetType(); //Check first in our cached results... //Look for our 'StringValueAttribute' //in the field's custom attributes FieldInfo fi = type.GetField(value.ToString()); StringValue[] attrs = fi.GetCustomAttributes(typeof(StringValue), false) as StringValue[]; if (attrs.Length > 0) { output = attrs[0].Value; } return output; } } ``` Good now I've got the tools to get a string value for an enumeration. I can then use it like this: ``` string valueOfAuthenticationMethod = StringEnum.GetStringValue(AuthenticationMethod.FORMS); ``` Okay now all of these work like a charm but I find it a whole lot of work. I was wondering if there is a better solution for this. I also tried something with a dictionary and static properties but that wasn't better either.
Try [type-safe-enum](http://www.javacamp.org/designPattern/enum.html) pattern. ``` public sealed class AuthenticationMethod { private readonly String name; private readonly int value; public static readonly AuthenticationMethod FORMS = new AuthenticationMethod (1, "FORMS"); public static readonly AuthenticationMethod WINDOWSAUTHENTICATION = new AuthenticationMethod (2, "WINDOWS"); public static readonly AuthenticationMethod SINGLESIGNON = new AuthenticationMethod (3, "SSN"); private AuthenticationMethod(int value, String name){ this.name = name; this.value = value; } public override String ToString(){ return name; } } ``` --- **Update** Explicit (or implicit) type conversion can be done by * adding static field with mapping ``` private static readonly Dictionary<string, AuthenticationMethod> instance = new Dictionary<string,AuthenticationMethod>(); ``` + n.b. In order that the initialisation of the the "enum member" fields doesn't throw a NullReferenceException when calling the instance constructor, be sure to put the Dictionary field before the "enum member" fields in your class. This is because static field initialisers are called in declaration order, and before the static constructor, creating the weird and necessary but confusing situation that the instance constructor can be called before all static fields have been initialised, and before the static constructor is called. * filling this mapping in instance constructor ``` instance[name] = this; ``` * and adding [user-defined type conversion operator](http://msdn.microsoft.com/en-us/library/09479473.aspx) ``` public static explicit operator AuthenticationMethod(string str) { AuthenticationMethod result; if (instance.TryGetValue(str, out result)) return result; else throw new InvalidCastException(); } ```
Use method ``` Enum.GetName(Type MyEnumType, object enumvariable) ``` as in (Assume `Shipper` is a defined Enum) ``` Shipper x = Shipper.FederalExpress; string s = Enum.GetName(typeof(Shipper), x); ``` There are a bunch of other static methods on the Enum class worth investigating too...
String representation of an Enum
[ "", "c#", "enums", "" ]
How would you open a file (that has a known file/app association in the registry) into a "running instance" of the application it's supposed to open in? An example would be, I have Excel open and I click on an XLS file.....the file opens up in the current Excel instance. I want to do this for a custom application...how does the eventing/messaging work that "tells" the current instance that it needs to open a file? Is there a "file watcher" that looks for a request to do so etc? Thanks..
What you want to do is inherit a class from [WindowsFormsApplicationBase](http://msdn.microsoft.com/en-us/library/microsoft.visualbasic.applicationservices.windowsformsapplicationbase.aspx), setting the protected [IsSingleInstance](http://msdn.microsoft.com/en-us/library/microsoft.visualbasic.applicationservices.windowsformsapplicationbase.issingleinstance.aspx) property to true: ``` // This should all be refactored to make it less tightly-coupled, obviously. class MyWindowsApplicationBase : WindowsFormsApplicationBase { internal MyWindowsApplicationBase() : base() { // This is a single instance application. this.IsSingleInstance = true; // Set to the instance of your form to run. this.MainForm = new MyForm(); } } ``` The Main method of your app then looks like this: ``` // This should all be refactored to make it less tightly-coupled, obviously. public static void Main(string args[]) { // Process the args. <process args here> // Create the application base. MyWindowsApplicationBase appBase = new MyWindowsApplicationBase(); // <1> Set the StartupNextInstance event handler. appBase.StartupNextInstance = <event handler code>; // Show the main form of the app. appBase.Run(args); } ``` Note the section marked <1>. You set this up with an event handler for the [StartupNextInstanceEvent](http://msdn.microsoft.com/en-us/library/microsoft.visualbasic.applicationservices.windowsformsapplicationbase.startupnextinstance.aspx). This event is fired when the *next* instance of your app is fired when you have a single instance application (which you specified in the constructor of MyWindowsApplicationBase). The event handler will pass an EventArgs-derived class which will have the command line arguments which you can then process in the running instance of your app. Then, all you have to do is set the file associations normally for the file types you want your app to process, and you are set.
Looks like what you are looking for is creating a single instance application. This can be done in C# by using WindowsFormsApplicationBase located in Microsoft.VisualBasic.dll For details, take a look at: <http://www.hanselman.com/blog/TheWeeklySourceCode31SingleInstanceWinFormsAndMicrosoftVisualBasicdll.aspx> or search for `WindowsFormsApplicationBase`
Opening a "known file type" into running instance of custom app - .NET
[ "", "c#", ".net", "winforms", "" ]
I am using JavaScript and jQuery. My main file has `My.js` and Ajax. ### My.js ``` function build_one(){ alert("inside build_one"); } ``` ### My main file ``` <script type="text/javascript"> .. // Here I want to make call function defined in My.js build_one() .. // Here is the Ajax call $.ajax({ type:'POST', url: 'ajax.php', data:'id='+id , success: function(data){ $("#response").html(data); } }); ... </script> ``` How do I make the build\_one() function call before the Ajax function?
This should work: ``` <script type="text/javascript" src="My.js"></script> <script type="text/javascript"> build_one(); $.ajax({ type:'POST', url: 'ajax.php', data:'id='+id , success: function(data){ $("#response").html(data); } }); </script> ```
First you have to import your file before calling the function using the following ``` <script type="text/javascript" src="My.js"></script> ``` Now you can call your function where ever you want.
Calling a JavaScript function inside jQuery
[ "", "javascript", "jquery", "" ]
I need to query a table for values given a string. The table is case sensitive but I want to do a ToLower() in the comparison. Suppose I have a classes table with the following data. ``` class teacher ----------------- Mat101 Smith MAT101 Jones mat101 Abram ENG102 Smith ``` My query should be something like ``` Select teacher From classes where lower(class) = 'math101' ``` Is this the best way to do the comparison? ## Update I have no control over the database or the data. I am a read only consumer.
No; it would be better to improve the data: create a numeric ID that represents these seemingly meaningless variations of class (and probably an associated lookup table to get the ID). Use the ID column in the where clause and you should be hitting an indexed numeric column. If that's no an option, consider a function-based index on lower(class). If that's not an option, and the question of "best" is strictly relative to performance, consider denormalizing and adding a column that contains lower(class), probably populated with a trigger. If that's not an option, update the data so that it's all lowercase (and take measures to insert/update only lowercase class data). If you can't update the data like that, then the answer is "maybe". In any case, you can't call it best if you haven't tested indexing of the column.
Here is more information about Function-based Indexes (what Dave was referring to above): * [Using Function-based Indexes for Performance](http://download.oracle.com/docs/cd/B28359_01/server.111/b28274/data_acc.htm#i9946) * [Function-Based Indexes](http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/schema.htm#CNCPT1161)
Compare Strings in Oracle
[ "", "sql", "oracle", "comparison", "" ]
How can you prematurely exit from a function without returning a value if it is a void function? I have a void method that needs to not execute its code if a certain condition is true. I really don't want to have to change the method to actually return a value.
Use a return statement! ``` return; ``` or ``` if (condition) return; ``` You don't need to (and can't) specify any values, if your method returns `void`.
You mean like this? ``` void foo ( int i ) { if ( i < 0 ) return; // do nothing // do something } ```
How do you exit from a void function in C++?
[ "", "c++", "" ]
Is there a simple way to detect, within Python code, that this code is being executed through the Python debugger? I have a small Python application that uses Java code (thanks to JPype). When I'm debugging the Python part, I'd like the embedded JVM to be passed debug options too.
A solution working with Python 2.4 (it should work with any version superior to 2.1) and Pydev: ``` import inspect def isdebugging(): for frame in inspect.stack(): if frame[1].endswith("pydevd.py"): return True return False ``` The same should work with pdb by simply replacing `pydevd.py` with `pdb.py`. As do3cc suggested, it tries to find the debugger within the stack of the caller. Useful links: * [The Python Debugger](https://docs.python.org/library/pdb.html) * [The interpreter stack](https://docs.python.org/library/inspect.html#the-interpreter-stack)
Python debuggers (as well as profilers and coverage tools) use the `sys.settrace` function (in the `sys` module) to register a callback that gets called when interesting events happen. If you're using Python 2.6, you can call `sys.gettrace()` to get the current trace callback function. If it's not `None` then you can assume you should be passing debug parameters to the JVM. It's not clear how you could do this pre 2.6.
How to detect that Python code is being executed through the debugger?
[ "", "python", "debugging", "" ]
Is there a way to use the formatter that comes with eclipse, outside of eclipse? I would like to format some java files using my formatter.xml file that I have configured using eclipse. Does anyone have any code examples that would allow me to do this? I would also like to use this standalone, so the specific jars that are used would be nice.
Apparently you can [directly invoke Eclipse's code formatter from the command line](https://peterfriese.wordpress.com/2007/05/28/formatting-your-code-using-the-eclipse-code-formatter/).
Here's the [offical eclipse docs](https://help.eclipse.org/luna/index.jsp?topic=%2Forg.eclipse.jdt.doc.user%2Ftasks%2Ftasks-231.htm) on how to do this ### Dump of those docs: Running the formatter application is as simple as running the org.eclipse.jdt.core.JavaCodeFormatter application from the commandline: ``` eclipse -vm <path to virtual machine> -application org.eclipse.jdt.core.JavaCodeFormatter [ OPTIONS ] <files> ``` When invoked on MacOS, the paths to point to the configuration file or the source files can be relative, but they will be computed from the location of the eclipse.ini file. This is a limitation of the Eclipse launcher on MacOS. On all other platforms, the relative paths are computed relative to the current user directory. Java source files and/or directories to format. Only files ending with .java will be formatted in the given directory. -config Use the formatting style from the specified properties file. Refer to Generating a config file for the formatter application for details. -help Display the help message. -quiet Only print error messages. -verbose Be verbose about the formatting job.
Can the Eclipse Java formatter be used stand-alone
[ "", "java", "eclipse", "formatter", "" ]
This is an example of RegisterClientScriptBlock ``` Page.ClientScript.RegisterClientScriptBlock(Me.GetType, "key","scriptblock", True) ``` Why do the method needs the type as the first parameter ? Thanks.
From the MSDN docs: "A client script is uniquely identified by its key and its type. Scripts with the same key and type are considered duplicates." Basically it gives you an additional way to uniquely identify your scripts. You could have the same key value across different types of controls.
I've wondered about this myself. As far as I can see in Reflector, it's not used by RegisterClientScriptBlock() directly, it is only passed through to be used by the GetHashCode() method of the ScriptKey class. There it probably serves to uniquely identify the script block further beyond just the user-supplied key, since it is linked to the specified type.
What is the significance of the Type parameter in the RegisterClientScriptBlock method call?
[ "", "asp.net", "javascript", "" ]
In a Java web application I have a recurring message load of type A (e.g., 20,000 every hour). Then I have a second type of messages (type B) that show up occasionally but have a higher priority than type A (say, 3,000). I want to be able to process these messages on one or more machines using open source software. It seems to me that I could do that with JMS if I had a JMS server that would send messages from its queue based on priorities (e.g., send three message of type B and then one of type A even though all messages of type A are at the top of the message queue). Do you know a JMS server that can do that - or do you know another way to implement this?
Set the message priority when you call "send(..)" on the MessageProducer (QueueSender, etc). Or, you can set the default priority on the MessageProducer from 0-9 (9 is highest). Setting the priority on the message itself won't work. It's overridden by the Producer. Uri is correct--whether or not the priority is respected is implementation specific. I believe OpenJMS generally respects the priority out-of-the-box, but does not guarantee it. JMS Spec states: `"JMS does not require that a provider strictly implement priority ordering of messages; however, it should do its best to deliver expedited messages ahead of normal messages."`
The JMS standard supports message priorities default is 4, you can specify others). I think you need to set that in the message producer AND in the message itself (there are methods on both). I think that ActiveMQ does support it. However, many JMS broker have priority handling disabled by default. There is a flag that you may need to change, something like "supportJMSPriority", somewhere in the broker configuration. Also, Apache Camel lets you write your own message resequencers so you could implement any form of priority that you would like.
Message processing with priorities
[ "", "java", "message-queue", "" ]
I'm curious about conventions for type-punning pointers/arrays in C++. Here's the use case I have at the moment: > Compute a simple 32-bit checksum over a binary blob of data by treating it as an array of 32-bit integers (we know its total length is a multiple of 4), and then summing up all values and ignoring overflow. I would expect such an function to look like this: ``` uint32_t compute_checksum(const char *data, size_t size) { const uint32_t *udata = /* ??? */; uint32_t checksum = 0; for (size_t i = 0; i != size / 4; ++i) checksum += udata[i]; return udata; } ``` Now the question I have is, what do you consider the "best" way to convert `data` to `udata`? C-style cast? ``` udata = (const uint32_t *)data ``` C++ cast that assumes all pointers are convertible? ``` udata = reinterpret_cast<const uint32_t *>(data) ``` C++ cast that between arbitrary pointer types using intermediate `void*`? ``` udata = static_cast<const uint32_t *>(static_cast<const void *>(data)) ``` Cast through a union? ``` union { const uint32_t *udata; const char *cdata; }; cdata = data; // now use udata ``` I fully realize that this will not be a 100% portable solution, but I am only expecting to use it on a small set of platforms where I know it works (namely unaligned memory accesses and compiler assumptions on pointer aliasing). What would you recommend?
As far as the C++ standard is concerned, [litb](https://stackoverflow.com/questions/346622/opinions-on-type-punning-in-c#346675)'s answer is completely correct and the most portable. Casting `const char *data` to a `const uint3_t *`, whether it be via a C-style cast, `static_cast`, or `reinterpret_cast`, breaks the strict aliasing rules (see [Understanding Strict Aliasing](http://cellperformance.beyond3d.com/articles/2006/06/understanding-strict-aliasing.html)). If you compile with full optimization, there's a good chance that the code will not do the right thing. Casting through a union (such as litb's `my_reint`) is probably the best solution, although it does technically violate the rule that if you write to a union through one member and read it through another, it results in undefined behavior. However, practically all compilers support this, and it results in the the expected result. If you absolutely desire to conform to the standard 100%, go with the bit-shifting method. Otherwise, I'd recommend going with casting through a union, which is likely to give you better performance.
Ignoring efficiency, for simplicity of code I'd do: ``` #include <numeric> #include <vector> #include <cstring> uint32_t compute_checksum(const char *data, size_t size) { std::vector<uint32_t> intdata(size/sizeof(uint32_t)); std::memcpy(&intdata[0], data, size); return std::accumulate(intdata.begin(), intdata.end(), 0); } ``` I also like litb's last answer, the one that shifts each char in turn, except that since char might be signed, I think it needs an extra mask: ``` checksum += ((data[i] && 0xFF) << shift[i % 4]); ``` When type punning is a potential issue, I prefer not to type pun rather than to try to do so safely. If you don't create any aliased pointers of distinct types in the first place, then you don't have to worry what the compiler might do with aliases, and neither does the maintenance programmer who sees your multiple static\_casts through a union. If you don't want to allocate so much extra memory, then: ``` uint32_t compute_checksum(const char *data, size_t size) { uint32_t total = 0; for (size_t i = 0; i < size; i += sizeof(uint32_t)) { uint32_t thisone; std::memcpy(&thisone, &data[i], sizeof(uint32_t)); total += thisone; } return total; } ``` Enough optimisation will get rid of the memcpy and the extra uint32\_t variable entirely on gcc, and just read an integer value unaligned, in whatever the most efficient way to do that is on your platform, straight out of the source array. I'd hope the same is true of other "serious" compilers. But this code is now bigger than litb's, so there's not much to be said for it other than mine is easier to turn into a function template that will work just as well with uint64\_t, and mine works as native endian-ness rather than picking little-endian. This is of course not completely portable. It assumes that the storage representation of sizeof(uint32\_t) chars corresponds to the storage representation of a uin32\_t in the way we want. This is implied by the question, since it states that one can be "treated as" the other. Endian-ness, whether a char is 8 bits, and whether uint32\_t uses all bits in its storage representation can obviously intrude, but the question implies that they won't.
Opinions on type-punning in C++?
[ "", "c++", "casting", "type-punning", "" ]
This question is related to ["How to make consistent dll binaries across VS versions ?"](https://stackoverflow.com/questions/232926/how-to-make-consistent-dll-binaries-across-vs-versions) * We have applications and DLLs built with VC6 and a new application built with VC9. The VC9-app has to use DLLs compiled with VC6, most of which are written in C and one in C++. * The C++ lib is problematic due to name decoration/mangling issues. * Compiling everything with VC9 is currently not an option as there appear to be some side effects. Resolving these would be quite time consuming. * I can modify the C++ library, however it must be compiled with VC6. * The C++ lib is essentially an OO-wrapper for another C library. The VC9-app uses some static functions as well as some non-static. While the static functions can be handled with something like ``` // Header file class DLL_API Foo { int init(); } extern "C" { int DLL_API Foo_init(); } // Implementation file int Foo_init() { return Foo::init(); } ``` it's not that easy with the non-static methods. As I understand it, [Chris Becke's](https://stackoverflow.com/questions/232926/how-to-make-consistent-dll-binaries-across-vs-versions#232959) suggestion of using a COM-like interface won't help me because the interface member names will still be decorated and thus inaccessible from a binary created with a different compiler. *Am I right there?* Would the only solution be to write a C-style DLL interface using handlers to the objects or am I missing something? In that case, I guess, I would probably have less effort with directly using the wrapped C-library.
Interface member names will *not* be decorated -- they're just offsets in a vtable. You can define an interface (using a C struct, rather than a COM "interface") in a header file, thusly: ``` struct IFoo { int Init() = 0; }; ``` Then, you can export a function from the DLL, with no mangling: ``` class CFoo : public IFoo { /* ... */ }; extern "C" IFoo * __stdcall GetFoo() { return new CFoo(); } ``` This will work fine, provided that you're using a compiler that generates compatible vtables. Microsoft C++ has generated the same format vtable since (at least, I think) MSVC6.1 for DOS, where the vtable is a simple list of pointers to functions (with thunking in the multiple-inheritance case). GNU C++ (if I recall correctly) generates vtables with function pointers and relative offsets. These are not compatible with each other.
The biggest problem to consider when using a DLL compiled with a different C++ compiler than the calling EXE is memory allocation and object lifetime. I'm assuming that you can get past the name mangling (and calling convention), which isn't difficult if you use a compiler with compatible mangling (I think VC6 is broadly compatible with VS2008), or if you use extern "C". Where you'll run into problems is when you allocate something using `new` (or `malloc`) from the DLL, and then you return this to the caller. The caller's `delete` (or `free`) will attempt to free the object from a different heap. This will go horribly wrong. You can either do a COM-style `IFoo::Release` thing, or a `MyDllFree()` thing. Both of these, because they call back into the DLL, will use the correct implementation of `delete` (or `free()`), so they'll delete the correct object. Or, you can make sure that you use `LocalAlloc` (for example), so that the EXE and the DLL are using the same heap.
Using C++ DLLs with different compiler versions
[ "", "c++", "windows", "visual-c++-6", "visual-c++-2008", "name-decoration", "" ]
I have a condition in which I need to close the application and so I call this.Dispose () when I set a certian flag. At first I thought it was a problem of calling functions after I call this.Dispose () and so I moved the code to be the last thing called, but I still get an "ArgumentException was unhandled" "Parameter is not valid." On the Application.Run (new myApp (); line. What am I doing wrong? Did I miss something along the way? Or maybe there is a better way to close the application?
Try using `Application.Exit()` to exit the application. When you use `Application.Run(new MyForm());`, a message loop is created on the thread using the form object as the main form. It tries to deliver Win32 messages that are coming to the application to their respective objects. However, when you call `Dispose()` on the form object, you haven't exited the message loop yet. When it tries to deliver the next message to your form object, it fails since it's already disposed and throws the exception. You should either request the form to be closed (by calling `Close` on the form), which will then ask the form to process the event and if completed, exit the message loop afterwards. The other way (more direct way) is to shut down the message loop on the thread altogether by calling `Application.Exit()` which will cause all related forms to be closed.
You should use this.Close() rather than this.Dispose() to close your main form.
Application.Run throws ArgumentException was unhandled
[ "", "c#", "winforms", "dispose", "argumentexception", "" ]
I am evaluating options for efficient data storage in Java. The data set is time stamped data values with a named primary key. e.g. ``` Name: A|B|C:D Value: 124 TimeStamp: 01/06/2009 08:24:39,223 ``` Could be a stock price at a given point in time, so it is, I suppose, a classic time series data pattern. However, I really need a generic RDBMS solution which will work with any reasonable JDBC compatible database as I would like to use Hibernate. Consequently, time series extensions to databases like Oracle are not really an option as I would like the implementor to be able to use their own JDBC/Hibernate capable database. The challenge here is simply the massive volume of data that can accumulate in a short period of time. So far, my implementations are focused around defining periodical rollup and purge schedules where raw data is aggregated into DAY, WEEK, MONTH etc. tables, but the downside is the early loss of granularity and the slight inconvenience of period mismatches between periods stored in different aggregates. The challenge has limited options since there is an absolute limit to how much data can be physically compressed while retaining the original granularity of the data, and this limit is exacerbated by the directive of using a relational database, and a generic JDBC capable one at that. Borrowing a notional concept from classic data compression algorithms, and leveraging the fact that many consecutive values for the same named key can expected to be identical, I am wondering if there is way I can seamlessly reduce the number of stored records by conflating repeating values into one logical row while also storing a counter that indicates, effectively, "the next **n** records have the same value". The implementation of just that seems simple enough, but the trade off is that the data model is now hideously complicated to query against using standard SQL, especially when using any sort of aggregate SQL functions. This significantly reduces the usefulness of the data store since only complex custom code can restore the data back to a "decompressed" state resulting in an impedance mismatch with hundreds of tools that will not be able to render this data properly. I considered the possibility of defining custom Hibernate types that would basically "understand" the compressed data set and blow it back up and return query results with the dynamically created synthetic rows. (The database will be read only to all clients except the tightly controlled input stream). Several of the tools I had in mind will integrate with Hibernate/POJOS in addition to raw JDBC (eg. JasperReports) But this does not really address the aggregate functions issue and probably has a bunch of other issues as well. So I am part way to resigning myself to possibly having to use a more proprietary [possibly non-SQL] data store (any suggestions appreciated) and then focus on the possibly less complex task of writing a pseudo JDBC driver to at least ease integration with external tools. I heard reference to something called a "*bit packed file*" as a mechanism to achieve this data compression, but I do not know of any databases that supply this and the last thing I want to do (or can do, really....) is write my own database. Any suggestions or insight ?
I would look at a [column oriented database](http://en.wikipedia.org/wiki/Column-oriented_DBMS "Column Oriented Database"). It would be great for this sort of application
Hibernate (or any JPA solution) is the wrong tool for this job. JPA/Hibernate isn't a lightweight solution. In high-volume applications, the overhead is not only significant but prohibitive. You really need to look into [grid and cluster solutions](https://stackoverflow.com/questions/383920/what-is-the-best-library-for-java-to-grid-cluster-enable-your-application#383929). I won't repeat the overview of the various technologies here. I've got a lot of experience in financial market information systems. A few of the things you said stuck out to me: * You have a lot of raw data; * You want to apply various aggregations to that data (eg open/high/low/close daily summaries); * High availability is probably an issue (it always is in these kinds of systems); and * Low latency is probably an issue (ditto). Now for grid/cluster type solutions I divide them loosely into two categories: 1. Map-based solutions like Coherence or Terracotta; and 2. Javaspaces-based solutions like GigaSpaces. I've used Coherence a lot and the Map solution can be nice but it can be problematic too. Coherence maps can have listeners on them and you can use this sort of thing to do things like: * Market price alerts (users may want a notification when a price reaches a certain level); * Derivative pricing (eg an exchange-traded option pricing system will want to reprice when an underlying security changes last traded price); * A trade-matching/booking system may want to match received trade notifications for reconciliation purposes; * etc. All of these can be done with listeners but in Coherence for example listeners have to be cheap, which leads to things like a Map having a listener than writes something to another Map and this can chain on for awhile. Also, modifying the cache entry can be problematic (although there are mechanisms for dealing with that kind of problem too; I'm talking about situations like turning off a market price alert so it doesn't trigger a second time). I found GigaSpaces type grid solutions to be far more compelling for this kind of application. The read (or destructive read) operation is a highly elegant and scalable solution and you can get transactional grid updates with sub-millisecond performance. Consider the two classic queueing architectures: * Request/Response: a bad message can block the queue and while you can many senders and receivers (for scalability) scaling up the number of pipes isn't always straightforward; and * Publish/Subscribe: this decouples the sender and receiver but lacks scalability in that if you have multiple subscribers they'll each receive the message (not necessarily what you want with say a booking system). In GigaSpaces, a destructive read is like a scalable publish-subscribe system and a read operation is like the traditional publish-subscribe model. There is a Map and JMS implementation built on top of the grid and it can do FIFO ordering. Now whaqt about persistence I hear you ask? Persistence is a consequence of deciding all the other stuff. For this kind of application, I like the [Persistence as a Service](http://natishalom.typepad.com/nati_shaloms_blog/2007/09/paas-persistenc.html) model (ironically written about Hibernate but it applies to anything). Basically this means your date store hits are asynchronous and it works nicely with doing summary data. Like you can have a service listening for trade notifications and persist just the ones it's interested in (aggregating in memory if required). You can do open/high/low/close prices this way. For high volume data you don't really want to write it all to the database. Not synchronously anyway. A persistent store plus a data warehouse is probably more the route you want to go but again this depends on requirements, volumes, etc. It's a complicated topic and I've only really touche don it. Hope that helps you.
What are my options to store and query huge amounts of data where a lot of it is repeating?
[ "", "java", "database", "hibernate", "jdbc", "compression", "" ]
I have a simulation that reads large binary data files that we create (10s to 100s of GB). We use binary for speed reasons. These files are system dependent, converted from text files on each system that we run, so I'm not concerned about portability. The files currently are many instances of a POD struct, written with fwrite. I need to change the struct, so I want to add a header that has a file version number in it, which will be incremented anytime the struct changes. Since I'm doing this, I want to add some other information as well. I'm thinking of the size of the struct, byte order, and maybe the svn version number of the code that created the binary file. Is there anything else that would be useful to add?
In my experience, second-guessing the data you'll need is invariably wasted time. What's important is to structure your *metadata* in a way that is extensible. For XML files, that's straightforward, but binary files require a bit more thought. I tend to store metadata in a structure at the END of the file, not the beginning. This has two advantages: * Truncated/unterminated files are easily detected. * Metadata footers can often be appended to existing files without impacting their reading code. The simplest metadata footer I use looks something like this: ``` struct MetadataFooter{ char[40] creatorVersion; char[40] creatorApplication; .. or whatever } struct FileFooter { int64 metadataFooterSize; // = sizeof(MetadataFooter) char[10] magicString; // a unique identifier for the format: maybe "MYFILEFMT" }; ``` After the raw data, the metadata footer and THEN the file footer are written. When reading the file, seek to the end - sizeof(FileFooter). Read the footer, and verify the magicString. Then, seek back according to metadataFooterSize and read the metadata. Depending on the footer size contained in the file, you can use default values for missing fields. As [KeithB](https://stackoverflow.com/users/2298/keithb) points out, you could even use this technique to store the metadata as an XML string, giving the advantages of both totally extensible metadata, with the compactness and speed of binary data.
For large binaries I'd look seriously at HDF5 (Google for it). Even if it's not something you want to adopt it might point you in some useful directions in designing your own formats.
What to put in a binary data file's header
[ "", "c++", "c", "binaryfiles", "" ]
The following code will generate a link to the page I want to get to. ``` <%= Html.ActionLink(image.Title, image.Id.ToString(), "Image") %> ``` The following code will cause the correct url to be rendered on the page. ``` <%= Url.Action("Index", "Image", new { id = image.Id })%> ``` But when I try to use it in javascript it fails. (with some strange error about page inheritance) ``` <div onclick="window.location = '<%= Url.Action("Index", "Image", new { id = image.Id })%>'"> ... </div> ``` Should the above code work? What is the correct way to generate the javascript attempted above? **Update** There error I get is > Views\Home\Index.aspx.cs(9): error > ASPNET: Make sure that the class > defined in this code file matches the > 'inherits' attribute, and that it > extends the correct base class (e.g. > Page or UserControl). Looks like it indicates a bigger problem. **Fixed** Thanks for your help, the code contained a div with `runat="server"`. When I removed this it runs OK. This maybe because there is no form with `runat="server"` but I would expect a different error for that. **As this question doesn't seem meaningful should I delete it?**
This should work actually. ASP.NET MVC will substitute all <%= ... %> or similar tags, it does not recognize whether it's a html definition or javascript. What is the output of your view? Is the "strange error" coming from Javascript or ASP.NET? **EDIT**: regarding your update: make sure your Index.aspx has the "Codebehind" attribute (this is in Page-tag on the very first line) pointing to Index.aspx.cs and the attribute "Inherits" contains the class name of the Page/User-Control class in the code-behind.
Take a look at [this](http://codeguru302.blogspot.com/2008/11/aspnet-make-sure-that-class-defined-in.html) for a possible solution to your error message. Codebehind vs Codefile.
How do I do a JavaScript redirect in ASP.Net MVC?
[ "", "javascript", "asp.net-mvc", "" ]
I have a .mbm file that I copy to my device using this line in the .pkg file ``` "$(EPOCROOT)epoc32\data\z\resource\apps\MyApp.mbm" -"!:\resource\apps\MyApp.mbm" ``` Then in the draw function of my container I do this.. ``` _LIT(KMBMFile , "\\resource\\apps\\MyApp.mbm" ); CFbsBitmap* iBitmap; iBitmap->Load(KMBMFile, 0); gc.BitBlt(Rect().iTl, iBitmap); ``` However the line `iBitmap->Load(KMBMFile, 0);` raises a KERN-EXEC:0 PANIC "This panic is raised when the Kernel cannot find an object in the object index for the current process or current thread using the specified object index number (the raw handle number)." Can anyone spot where I am going wrong? Thanks!
You were dereferencing an uninitialized pointer, you could also use this: ``` // remember to include the EIK environemnt include file #include <eikenv.h> _LIT(KMBMFile , "\\resource\\apps\\MyApp.mbm" ); CFbsBitmap* iBitmap; iBitmap = iEikonEnv->CreateBitmapL( KMBMFile, 0 ); gc.BitBlt( Rect().iTl, iBitmap ); ```
I have solved this problem so will post answer here for future lookers.. Create an MBM file in your MMP file using a snippet like this ``` START BITMAP MyApp.mbm HEADER TARGETPATH \resource\apps SOURCEPATH ..\gfx SOURCE c24 background.bmp END ``` **ensure your .bmp images are saved in 32 bit from photoshop or similar** Then sure your MBM file is copied to your device in your PKG file ``` "$(EPOCROOT)epoc32\data\z\resource\apps\MyApp.mbm" -"!:\resource\apps\MyApp.mbm" ``` Then in the draw function of your container use code such as this.. ``` _LIT(KMBMFile , "C:\\RESOURCE\\APPS\\MyApp.mbm" ); CFbsBitmap* iBitmap = new (ELeave) CFbsBitmap; TInt retval = iBitmap->Load(KMBMFile, 0); gc.BitBlt(Rect().iTl, iBitmap); ``` This will draw your bitmap at the top left point of the screen (useful for background image)
Symbian C++ - Load and display image from .mbm file
[ "", "c++", "mobile", "symbian", "carbide", "" ]
I recently wrote a piece of code which did ``` SomeClass someObject; mysqlpp::StoreQueryResult result = someObject.getResult(); ``` where SomeClass::getResult() looks like: ``` mysqlpp::StoreQueryResult SomeClass::getResult() { mysqlpp::StoreQueryResult res = ...<something>...; return res; } ``` Now, using the example in the first code snippet, when I compiled and ran, the program crashed with an ABORT signal. I then changed the first snippet to: ``` SomeClass someObject; mysqlpp::StoreQueryResult result(someObject.getResult()); ``` which worked fine. Also, just to try it out, I changed it again to: ``` SomeClass someObject; mysqlpp::StoreQueryResult result; result = someObject.getResult(); ``` which also worked fine. Now, I just can't figure out why the first example failed, and the next two succeeded. As I understand, in the first example, the copy constructor is used to initialise result. But isn't this also the case in the second example? So why did the second example succeed? The 3rd example makes a bit more sense - since the copy const isn't used, we just assign after construction. In short, what's the difference between: ``` FooClass a = someObject.someMethodReturningFooClassInstance(); ``` and ``` FooClass a(someObject.someMethodReturningFooClassInstance());? ``` Muchos thanks!
I don't think there's any difference in the two cases. The same copy constructor is called both times. Are you sure this is *exactly* what you've written in your code?
Strictly speaking in the first case the default constructor is called followed by the assignment operator and in the second case it uses just the copy constructor. Okay, my initial assumption was wrong, and apparently in both cases just the copy constructor would get called (well in the assignment case an additional "conversion" constructor may be called as well). I'll fire up the compiler after some sleep and verify this in my development environment.
C++ Copy Constructors
[ "", "c++", "constructor", "" ]
i have very simple problem. I need to create model, that represent element of ordered list. This model can be implemented like this: ``` class Item(models.Model): data = models.TextField() order = models.IntegerField() ``` or like this: ``` class Item(models.Model): data = models.TextField() next = models.ForeignKey('self') ``` What way is preferred? What drawbacks have each solution?
Essentially, the second solution you propose is a linked list. Linked list implemented at the database level are usually not a good idea. To retrieve a list of `n` elements, you will need `n` database access (or use complicated queries). Performance wise, retrieving a list in O(n) is awfully not efficient. In regular code, linked list are used to get better insert performance compared to arrays (no need to move all elements around). In your database, updating all elements is not that complicated in only 2 queries : ``` UPDATE item.order = item.order + 1 FROM item WHERE order > 3 INSERT INTO item (order, ...) VALUES (3, ...) ``` I remember seeing a reuseable app that implemented all that and a nice admin interface, but I cant find it right now ... To summarize, definitly use solution #1 and stay away from solution #2 unless you have a very very good reason not to !
That depends on what you want to do. The first one seems better to make a single query in the database and get all data in the correct order The second one seems better to insert an element between two existing elements (because in the first one you'd have to change a lot of items if the numbers are sequential) I'd use the first one, because it seems to fit better a database table, which is how django stores model data behind the hood.
Ordered lists in django
[ "", "python", "django", "django-models", "" ]
I want to handle F1-F12 keys using JavaScript and jQuery. I am not sure what pitfalls there are to avoid, and I am not currently able to test implementations in any other browsers than Internet Explorer 8, Google Chrome and Mozilla FireFox 3. Any suggestions to a full cross-browser solution? Something like a well-tested jQuery library or maybe just vanilla jQuery/JavaScript?
The best source I have for this kind of question is this page: <http://www.quirksmode.org/js/keys.html> What they say is that the key codes are odd on Safari, and consistent everywhere else (except that there's no keypress event on IE, but I believe keydown works).
I agree with William that in general it is a bad idea to hijack the function keys. That said, I found the [shortcut](http://www.openjs.com/scripts/events/keyboard_shortcuts/) library that adds this functionality, as well as other keyboard shortcuts and combination, in a very slick way. Single keystroke: ``` shortcut.add("F1", function() { alert("F1 pressed"); }); ``` Combination of keystrokes: ``` shortcut.add("Ctrl+Shift+A", function() { alert("Ctrl Shift A pressed"); }); ```
Handling key-press events (F1-F12) using JavaScript and jQuery, cross-browser
[ "", "javascript", "jquery", "events", "keyboard", "keyboard-events", "" ]
I have two constructor : ``` function clsUsagerEmailUserName($nickName, $email) { $this->nickName = $nickName; $this->email = $email; } function clsUsagerEmailUserName($email) { $this->email = $email; } ``` But this is not working? What's wrong, isn't supposed to be OO at this version of PHP? (I just migrate from PHP4)
PHP5 doesn't allow overloaded constructor. Alternativly you can use function to set or you can use this trick (found at EE): ``` function __construct ($var1, $var2 = null) { if (isset($var2)) { //Do one thing } else { //Do another } } ```
If you have a good reason to want to keep the function arguments in that order, do something like this: ``` function __construct() { switch ( func_num_args() ) { case 1: $this->email = func_get_arg(0); break; case 2: $this->nickName = func_get_arg(0); $this->email = func_get_arg(1); break; // [...] default: throw new Exception('Wrong number of values passed to constructor'); } } ``` I'd strongly recommend Daok's answer over this though.
How can I have over loaded constructor in Php5?
[ "", "php", "constructor", "" ]
I have to login into a https web page and download a file using Java. I know all the URLs beforehand: ``` baseURL = // a https URL; urlMap = new HashMap<String, URL>(); urlMap.put("login", new URL(baseURL, "exec.asp?login=username&pass=XPTO")); urlMap.put("logout", new URL(baseURL, "exec.asp?exec.asp?page=999")); urlMap.put("file", new URL(baseURL, "exec.asp?file=111")); ``` If I try all these links in a web browser like firefox, they work. Now when I do: ``` urlConnection = urlMap.get("login").openConnection(); urlConnection.connect(); BufferedReader in = new BufferedReader( new InputStreamReader(urlConnection.getInputStream())); String inputLine; while ((inputLine = in.readLine()) != null) System.out.println(inputLine); in.close(); ``` I just get back the login page HTML again, and I cannot proceed to file download. Thanks!
I agree with Alnitak that the problem is likely storing and returning cookies. Another good option I have used is [HttpClient](http://hc.apache.org/httpclient-3.x/) from Jakarta Commons. It's worth noting, as an aside, that if this is a server you control, you should be aware that sending the username and password as querystrings is not secure (even if you're using HTTPS). HttpClient supports sending parameters using POST, which you should consider.
As has been noted, you must maintain the session cookie between requests (see [CookieHandler](http://java.sun.com/javase/6/docs/api/java/net/CookieHandler.html)). Here is a sample implementation: ``` class MyCookieHandler extends CookieHandler { private Map<String, List<String>> cookies = new HashMap<String, List<String>>(); @Override public Map<String, List<String>> get(URI uri, Map<String, List<String>> requestHeaders) throws IOException { String host = uri.getHost(); Map<String, List<String>> ret = new HashMap<String, List<String>>(); synchronized (cookies) { List<String> store = cookies.get(host); if (store != null) { store = Collections.unmodifiableList(store); ret.put("Cookie", store); } } return Collections.unmodifiableMap(ret); } @Override public void put(URI uri, Map<String, List<String>> responseHeaders) throws IOException { List<String> newCookies = responseHeaders.get("Set-Cookie"); if (newCookies != null) { String host = uri.getHost(); synchronized (cookies) { List<String> store = cookies.get(host); if (store == null) { store = new ArrayList<String>(); cookies.put(host, store); } store.addAll(newCookies); } } } } ```
How do I login and download a file from a https web page from Java?
[ "", "java", "https", "" ]
I'm trying to build the search for a Django site I am building, and in that search, I am searching across three different models. And to get pagination on the search result list, I would like to use a generic object\_list view to display the results. But to do that, I have to merge three QuerySets into one. How can I do that? I've tried this: ``` result_list = [] page_list = Page.objects.filter( Q(title__icontains=cleaned_search_term) | Q(body__icontains=cleaned_search_term)) article_list = Article.objects.filter( Q(title__icontains=cleaned_search_term) | Q(body__icontains=cleaned_search_term) | Q(tags__icontains=cleaned_search_term)) post_list = Post.objects.filter( Q(title__icontains=cleaned_search_term) | Q(body__icontains=cleaned_search_term) | Q(tags__icontains=cleaned_search_term)) for x in page_list: result_list.append(x) for x in article_list: result_list.append(x) for x in post_list: result_list.append(x) return object_list( request, queryset=result_list, template_object_name='result', paginate_by=10, extra_context={ 'search_term': search_term}, template_name="search/result_list.html") ``` But this doesn't work. I get an error when I try to use that list in the generic view. The list is missing the clone attribute. How can I merge the three lists, `page_list`, `article_list` and `post_list`?
Concatenating the querysets into a list is the simplest approach. If the database will be hit for all querysets anyway (e.g. because the result needs to be sorted), this won't add further cost. ``` from itertools import chain result_list = list(chain(page_list, article_list, post_list)) ``` Using `itertools.chain` is faster than looping each list and appending elements one by one, since `itertools` is implemented in C. It also consumes less memory than converting each queryset into a list before concatenating. Now it's possible to sort the resulting list e.g. by date (as requested in hasen j's comment to another answer). The `sorted()` function conveniently accepts a generator and returns a list: ``` from operator import attrgetter result_list = sorted( chain(page_list, article_list, post_list), key=attrgetter('date_created') ) ``` You can reverse the sort order: ``` result_list = sorted( chain(page_list, article_list, post_list), key=attrgetter('date_created'), reverse=True, ) ``` `attrgetter` is equivalet to the following `lambda` (this was the way it had to be done before Python 2.4): ``` result_list = sorted( chain(page_list, article_list, post_list), key=lambda instance: instance.date_created, ) ```
Try this: ``` matches = pages | articles | posts ``` It retains all the functions of the querysets which is nice if you want to `order_by` or similar. **Please note:** this doesn't work on querysets from two different models.
How to combine multiple QuerySets in Django?
[ "", "python", "django", "search", "django-queryset", "django-q", "" ]
I'm writing a C++ static library that needs to be shared among several applications, one of them written in Perl. Unfortunately, I only barely know the core Perl language (I read the Llama book), and I'm not all that familiar with its libraries. How do you make calls to an external C++ binary from a Perl script? By Google search, I found information on the Perl [Inline](http://search.cpan.org/~ingy/Inline-0.44/Inline.pod) module, but if I understand it correctly, that isn't what I need. I'm pretty sure that's for writing C and C++ code directly in your Perl scripts, not for calling external C++ libraries. The C++ static library is being cross-compiled for an ARM processor, so there will be no C++ compiler on the target machine where the Perl script will be running. (If that makes a difference.)
You want to look at using XS, which is how Perl normally interfaces with C/C++ libraries. It's not quite trivial. A couple of relevant portions of the Perl documentation: * [perlxs](http://perldoc.perl.org/perlxs.html) * [perlxstut](http://perldoc.perl.org/perlxs.html)
You can call code from other libraries via Inline::C (and likely the same via Inline::CPP) - have a look at [Inline::C::Cookbook](http://search.cpan.org/~sisyphus/Inline-0.45/C/C-Cookbook.pod). Most likely you want to start out with Inline and after you're done experimenting use the resulting .XS file to work further.
How do I call a C++ static library from Perl?
[ "", "c++", "perl", "" ]
I cannot get std::tr1::shared\_ptr for my WinMobile project since the STL for WinCE is maintained by a different team at Microsoft :( aarrgh... Anyone worked with another thread-safe, reference counting smart pointers? I'm actually using yasper which seems to be good. Thank you very much.
I'd also recommend boost::shared\_ptr. You can do what I did for a library, and use a #define to switch between std::tr1::shared\_ptr and boost::shared\_ptr, depending on the capabilities of the compiler. That way your code doesn't need to be modified [much] if/when the CE team add tr1 support. Just my 2c.
[Boost Smart Pointers](http://www.boost.org/doc/libs/1_37_0/libs/smart_ptr/smart_ptr.htm). In particular [boost::shared\_ptr](http://www.boost.org/doc/libs/1_37_0/libs/smart_ptr/shared_ptr.htm). I am pretty sure they are supported for your platform. This is where tr1::shared\_ptr came from.
Smart pointers for Windows Mobile 6 SDK
[ "", "c++", "windows-mobile", "windows-ce", "smart-pointers", "" ]
All of us who work with relational databases have learned (or are learning) that SQL is different. Eliciting the desired results, and doing so efficiently, involves a tedious process partly characterized by learning unfamiliar paradigms, and finding out that some of our most familiar programming patterns don't work here. What are the common antipatterns you've seen (or yourself committed)?
I am consistently disappointed by most programmers' tendency to mix their UI-logic in the data access layer: ``` SELECT FirstName + ' ' + LastName as "Full Name", case UserRole when 2 then "Admin" when 1 then "Moderator" else "User" end as "User's Role", case SignedIn when 0 then "Logged in" else "Logged out" end as "User signed in?", Convert(varchar(100), LastSignOn, 101) as "Last Sign On", DateDiff('d', LastSignOn, getDate()) as "Days since last sign on", AddrLine1 + ' ' + AddrLine2 + ' ' + AddrLine3 + ' ' + City + ', ' + State + ' ' + Zip as "Address", 'XXX-XX-' + Substring( Convert(varchar(9), SSN), 6, 4) as "Social Security #" FROM Users ``` Normally, programmers do this because they intend to bind their dataset directly to a grid, and its just convenient to have SQL Server format server-side than format on the client. Queries like the one shown above are extremely brittle because they tightly couple the data layer to the UI layer. On top of that, this style of programming thoroughly prevents stored procedures from being reusable.
Here are my top 3. Number 1. Failure to specify a field list. (Edit: to prevent confusion: this is a production code rule. It doesn't apply to one-off analysis scripts - unless I'm the author.) ``` SELECT * Insert Into blah SELECT * ``` should be ``` SELECT fieldlist Insert Into blah (fieldlist) SELECT fieldlist ``` Number 2. Using a cursor and while loop, when a while loop with a loop variable will do. ``` DECLARE @LoopVar int SET @LoopVar = (SELECT MIN(TheKey) FROM TheTable) WHILE @LoopVar is not null BEGIN -- Do Stuff with current value of @LoopVar ... --Ok, done, now get the next value SET @LoopVar = (SELECT MIN(TheKey) FROM TheTable WHERE @LoopVar < TheKey) END ``` Number 3. DateLogic through string types. ``` --Trim the time Convert(Convert(theDate, varchar(10), 121), datetime) ``` Should be ``` --Trim the time DateAdd(dd, DateDiff(dd, 0, theDate), 0) ``` --- I've seen a recent spike of "One query is better than two, amiright?" ``` SELECT * FROM blah WHERE (blah.Name = @name OR @name is null) AND (blah.Purpose = @Purpose OR @Purpose is null) ``` This query requires two or three different execution plans depending on the values of the parameters. Only one execution plan is generated and stuck into the cache for this SQL text. That plan will be used regardless of the value of the parameters. This results in intermittent poor performance. It is much better to write two queries (one query per intended execution plan).
What are the most common SQL anti-patterns?
[ "", "sql", "anti-patterns", "" ]
I want to know if it is possible to end a for loop in C++ when an ending condition (different from the reacheing right number of iterations) is verified. For instance: ``` for (int i = 0; i < maxi; ++i) for (int j = 0; j < maxj; ++j) // But if i == 4 < maxi AND j == 3 < maxj, // then jump out of the two nested loops. ``` I know that this is possible in Perl with the next LABEL or last LABEL calls and labeled blocks, is it possible to do it in C++ or I should use a while loop? Thank you.
You can use the `return` keyword: move the nested loop into a subroutine, invoke the subroutine to run the nested loops, and 'return' from the subroutine to exit [all] the loops.
Despite the "`goto` considered harmful" arguments, this seems like the perfect place for `goto`. That's essentially what you are doing in Perl. Seriously... consider the alternatives: **Extra State Variables** --- ``` for (int i=0; i<maxi; ++i) { bool leaveLoop = false; for (int j=0; j<maxj; ++j) { if (i == 4 && j == 3) { leaveLoop = true; break; // leave the inner loop } } if (leaveLoop) { break; // leave the outside loop } } ``` **Leave by Exception** --- ``` try { for (int i=0; i<maxi; ++i) { for (int j=0; j<maxj; ++j) { if (i == 4 && j == 3) { throw leave_loop(); } } } } catch (leave_loop const&) { } ``` **Complex Logic** --- ``` int j = 0; for (int i=0; i<maxi && !(i==4 && j==3); ++i) { for (j=0; j<maxj && !(i==4 && j==3); ++j) { // inner loop } } ``` **`goto`** --- ``` for (int i=0; i<maxi; ++i) { for (int j=0; j<maxj; ++j) { if (i==4 && j==3) { goto leave_loop; } } } leave_loop: ``` Is the last one less clear? I don't believe that it is. Is it any more fragile? IMHO, the others are quite error prone and fragile compared to the `goto` version. Sorry to be standing on the soapbox here but this is something that has bothered me for a while ;) The only thing that you have to be congnizent of is that `goto` and exceptions are pretty similar. They both open up the opportunity for leaking resources and what not so treat them with care.
Is it possible to exit a for before time in C++, if an ending condition is reached?
[ "", "c++", "loops", "label", "nested-loops", "" ]
I am running into the same problem as in this question: [How do you prevent leading zeros from being stripped when importing an excel doc using c#](https://stackoverflow.com/questions/22879/how-do-you-prevent-leading-zeros-from-being-stripped-when-importing-an-excel-doc) But I am not sure if that is the best solution for my scenario. Here is the code I am using to do the export. Does anyone know what I can change to prevent the leading 0's from being stripped off? ``` private static void Export_with_XSLT_Web(DataSet dsExport, string[] sHeaders, string[] sFileds, ExportFormat FormatType, string FileName) { HttpContext.Current.Response.Clear(); HttpContext.Current.Response.Buffer = true; HttpContext.Current.Response.ContentType = "application/vnd.ms-excel"; HttpContext.Current.Response.AppendHeader("content-disposition", "attachment; filename=" + FileName); } // XSLT to use for transforming this dataset. MemoryStream stream = new MemoryStream(); XmlTextWriter writer = new XmlTextWriter(stream, Encoding.UTF8); CreateStylesheet(writer, sHeaders, sFileds, FormatType); writer.Flush(); stream.Seek(0, SeekOrigin.Begin); XmlDataDocument xmlDoc = new XmlDataDocument(dsExport); XslTransform xslTran = new XslTransform(); xslTran.Load(new XmlTextReader(stream), null, null); using(StringWriter sw = new StringWriter()) { xslTran.Transform(xmlDoc, null, sw, null); HttpContext.Current.Response.Write(sw.ToString()); writer.Close(); stream.Close(); HttpContext.Current.Response.End(); } } } ``` Here is the method that creates the stylesheet, is there anything in here that I can change to bring in some or all fields as text. ``` private static void CreateStylesheet(XmlTextWriter writer, string[] sHeaders, string[] sFileds, ExportFormat FormatType) { try { // xsl:stylesheet string ns = "http://www.w3.org/1999/XSL/Transform"; writer.Formatting = Formatting.Indented; writer.WriteStartDocument(); writer.WriteStartElement("xsl", "stylesheet", ns); writer.WriteAttributeString("version", "1.0"); writer.WriteStartElement("xsl:output"); writer.WriteAttributeString("method", "text"); writer.WriteAttributeString("version", "4.0"); writer.WriteEndElement(); // xsl-template writer.WriteStartElement("xsl:template"); writer.WriteAttributeString("match", "/"); // xsl:value-of for headers for(int i = 0; i < sHeaders.Length; i++) { writer.WriteString("\""); writer.WriteStartElement("xsl:value-of"); writer.WriteAttributeString("select", "'" + sHeaders[i] + "'"); writer.WriteEndElement(); // xsl:value-of writer.WriteString("\""); } // xsl:for-each writer.WriteStartElement("xsl:for-each"); writer.WriteAttributeString("select", "Export/Values"); writer.WriteString("\r\n"); // xsl:value-of for data fields for(int i = 0; i < sFileds.Length; i++) { writer.WriteString("\""); writer.WriteStartElement("xsl:value-of"); writer.WriteAttributeString("select", sFileds[i]); writer.WriteEndElement(); // xsl:value-of writer.WriteString("\""); } writer.WriteEndElement(); // xsl:for-each writer.WriteEndElement(); // xsl-template writer.WriteEndElement(); // xsl:stylesheet writer.WriteEndDocument(); } catch(Exception Ex) { throw Ex; } } ```
I don't know the output of your XSL transformation: I will assume it's the xml format for Excel. Trying to reverse the process I wrote three numbers (007) in an Excel sheet: once as number, once as text and once as number but formatted to show 3 digits padded with zeros. Then I saved it as xml and looked at it. Here is the fragment: ``` <Row> <Cell><Data ss:Type="Number">7</Data></Cell> </Row> <Row> <Cell><Data ss:Type="String" x:Ticked="1">007</Data></Cell> </Row> <Row> <Cell ss:StyleID="s22"><Data ss:Type="Number">7</Data></Cell> </Row> ``` I'm not copying the style but you can easily do it. Edit: as always Google Is Your Friend (and mine, too ;-) ): <http://www.creativyst.com/Doc/Articles/CSV/CSV01.htm#CSVAndExcel>. Edit (2): I thought the link was enough. The article is saying that (if you are sure the target is **only** Excel) you can use an Excel-specific CSV syntax. So in your case and looking at your code I think you should *insert the missing commas* and change the opening ``` writer.WriteString("\""); ``` into ``` writer.WriteString("=\""); ``` Beware that I didn't try. Just one question out of curiosity: wouldn't it be simpler to just output what you need working on the DataSet instead of * transforming it in XML * generating an ad-hoc XSL * performing the XSL transformation * copying the result to the Response stream ?
add a ' (single quote) to the front of the string.
How to stop the leading 0's from being stripped off when exporting to excel from a datatable?
[ "", "c#", "asp.net", "excel", "" ]
My coworker is new to C# and didn't know about the coalesce operator. So, I saw him write a line of code like this: ``` string foo = "" + str; ``` The idea being that if str is null, this expression would return an empty string. Of course, that could be rewritten as this: ``` string foo = str ?? ""; ``` And I feel that would be more readable. But is it really that big a deal? Are the readability benefits enough to suggest going back and making those lines look like the second? Or is this one of those things that I should learn to let go (provided that my coworker is educated on the best way to do this in the future)? **EDIT**: Just a note, I appreciate the efficiency comments, but this isn't really being used in any situations where that performance would be critical. So while that info is interesting, it's not necessarily what I feel is important.
IMO, it is much better to clearly define such logic i.e. don't use string concatenation to avoid null string and use conditional statement or ?? operator. Regarding other comments: > there is also a performance benefit to > using the null-coalescing operator, > since no concatenation need take place > (and, therefore, no extra string > instance is being created > unnecessarily) Not true. C# compiler compiles "" + string2 to the 100% same code as string1 ?? "". Moreover C# compiler translates + operator to call to string.Concat method which in turn checks arguments with string.IsNullOrEmpty function and doesn't allocate a new string in this case. > Also I would recommend the use of > String.Empty over "" as it is a > constant and does not require the > creation of a new String .NET framework supports [string interning](http://en.wikipedia.org/wiki/String_intern_pool) so "" and string.Empty point to the same memory region
I don't think one is any more readable than the other personally. I prefer this: ``` string foo = str ?? ""; ``` simply because I really like the ?? operator. If you are a brand newbie, I think that this would be a little easier to understand: ``` string foo = str == null ? "" : str; ``` or ``` string foo = ""; if (str != null) foo = str; ``` However, you kinda need to ask youself, "How simple do you really want to get?"
Coalesce vs empty string concatenation
[ "", "c#", "string", "concatenation", "coalesce", "" ]
I make a lot of use of boost::scoped\_ptr in my code and it is great but I'm currently working with software that uses shared\_ptr all over the place and I'm wondering if I'm missing something. AFAIK a shared\_ptr is only useful if different threads are going to be accessing the same data and you don't know what order the threads are going to finish (with the shared\_ptr ensuring that the object exists until the last thread has finished with it). Are there other use cases?
Threads are irrelevant here. What's relevant is whether it's easy to specify a point at which the object is no longer of use. Suppose several different objects want to use the same object. It might be a pack of data, or for input/output, or some geometric object, or whatever. You want the shared object to be deleted after all of the using objects are deleted, and not a clock cycle before. Rather than figure out which owning object is going to have the longest lifespan (and that can change if you change the program, or perhaps through user interaction), you can use a shared\_ptr to force this behavior. It doesn't matter whether the using objects are in the same or different threads. Objects can have unpredictable lifetimes even if they're all in the same thread.
> AFAIK a shared\_ptr is only useful if > different threads are going to be > accessing the same data Well, it's for situations where multiple **owners** own the same object pointed to by the smart pointer. They may access the smart pointers from different threads, and **shared\_ptr** is usable in that area too, but that's not the main point. If the last owner loses its reference to the object pointed to, the shared\_ptr mechanism deletes the object. You can use a **scoped\_ptr** if all you want to have is a pointer that is deleted when the scope it's created in is left (either by exceptions, by a *goto* to a place outside, or by normal control flow or some other mechanism). If you use it like that, there is no need to change to shared\_ptr.
shared_ptr: what's it used for
[ "", "c++", "boost", "shared-ptr", "raii", "" ]
Say that I have two tables like those: ``` Employers (id, name, .... , deptId). Depts(id, deptName, ...). ``` But Those data is not going to be modified so often and I want that a query like this ``` SELECT name, deptName FROM Employers, Depts WHERE deptId = Depts.id AND Employers.id="ID" ``` be as faster as it can. To my head comes two possible solutions: * Denormalize the table: Despite that with this solution I will lose some of the great advantages of have "normalized databases, but here the performance is a MUST. * Create a View for that Denormalize data. I will keep the Data Normalized and (**here is my question**), the performance of a query over that view will be faster that without that view. Or another way to ask the same question, the View is "Interpreted" every time that you make a query over it, or how works the views Stuff in a DBA?.
Generally, unless you "materialize" a view, which is an option in some software like MS SQL Server, the view is just translated into queries against the base tables, and is therefore no faster or slower than the original (minus the minuscule amount of time it takes to translate the query, which is nothing compared to actually executing the query). How do you know you've got performance problems? Are you profiling it under load? Have you verified that the performance bottleneck is these two tables? Generally, until you've got hard data, don't assume you know where performance problems come from, and don't spend any time optimizing until you know you're optimizing the right thing - 80% of the performance issues come from 20% of the code.
If Depts.ID is the primary key of that table, and you index the Employers.DeptID field, then this query should remain very fast even over millions of records. Denormalizing doesn't make sense to me in that scenario. Generally speaking, performance of a view will be almost exactly the same as performance when running the query itself. The advantage of a view is simply to abstract that query away, so you don't have to think about it. You could use a Materialized View (or "snapshot" as some say), but then your data is only going to be as recent as your last refresh.
How do Views work in a DBM?
[ "", "sql", "mysql", "performance", "view", "" ]
I have six string variables say str11, str12, str13, str21, str21 and str23. I need to compare combination of these variables. The combinations I have to check is str11 -- str12 -- str13 as one group and str21 -- str22 -- str23 as other group. I have to compare these two groups. Now I'm in confusion which method should I use for comparison? Can I append strings of same group and compare, which is only one comparison say `( str11 append str12 append str13 ) eqauls ( str21 append str22 append str23 )` Or Should I go for individual 3 comparisons? ``` if( str11 equals str21 ) { if( str12 equals str22 ) { if( str13 equals str23 ) { } } } ``` What is performance factor which costs me because of string length when I do string comparison? Lets us assume all strings are of same( approx ) length.
I’d test individually. Is “AB” “CD” “EF” equal to “ABC” “DE” “F”? Me thinks not. P.S. If it is, then it’s a VERY special case, and if you decide to code it that way (as a concatenated comparison) then comment the hell out of it.
Splitting the comparison into three if statements is definitely not necessary. You could also simply do an AND with your comparisons, eg ``` if ( str11 equals str21 && str12 equals str22 && str13 equals str23) ... ```
String Comparison : individual comparison Vs appended string comparison
[ "", "java", "string", "comparison", "" ]
This is just a question to satisfy my curiosity. But to me it is interesting. I wrote this little simple benchmark. It calls 3 variants of Regexp execution in a random order a few thousand times: Basically, I use the same pattern but in different ways. 1. Your ordinary way without any `RegexOptions`. Starting with .NET 2.0 these do not get cached. But should be "cached" because it is held in a pretty global scope and not reset. 2. With `RegexOptions.Compiled` 3. With a call to the static `Regex.Match(pattern, input)` which does get cached in .NET 2.0 Here is the code: ``` static List<string> Strings = new List<string>(); static string pattern = ".*_([0-9]+)\\.([^\\.])$"; static Regex Rex = new Regex(pattern); static Regex RexCompiled = new Regex(pattern, RegexOptions.Compiled); static Random Rand = new Random(123); static Stopwatch S1 = new Stopwatch(); static Stopwatch S2 = new Stopwatch(); static Stopwatch S3 = new Stopwatch(); static void Main() { int k = 0; int c = 0; int c1 = 0; int c2 = 0; int c3 = 0; for (int i = 0; i < 50; i++) { Strings.Add("file_" + Rand.Next().ToString() + ".ext"); } int m = 10000; for (int j = 0; j < m; j++) { c = Rand.Next(1, 4); if (c == 1) { c1++; k = 0; S1.Start(); foreach (var item in Strings) { var m1 = Rex.Match(item); if (m1.Success) { k++; }; } S1.Stop(); } else if (c == 2) { c2++; k = 0; S2.Start(); foreach (var item in Strings) { var m2 = RexCompiled.Match(item); if (m2.Success) { k++; }; } S2.Stop(); } else if (c == 3) { c3++; k = 0; S3.Start(); foreach (var item in Strings) { var m3 = Regex.Match(item, pattern); if (m3.Success) { k++; }; } S3.Stop(); } } Console.WriteLine("c: {0}", c1); Console.WriteLine("Total milliseconds: " + (S1.Elapsed.TotalMilliseconds).ToString()); Console.WriteLine("Adjusted milliseconds: " + (S1.Elapsed.TotalMilliseconds).ToString()); Console.WriteLine("c: {0}", c2); Console.WriteLine("Total milliseconds: " + (S2.Elapsed.TotalMilliseconds).ToString()); Console.WriteLine("Adjusted milliseconds: " + (S2.Elapsed.TotalMilliseconds*((float)c2/(float)c1)).ToString()); Console.WriteLine("c: {0}", c3); Console.WriteLine("Total milliseconds: " + (S3.Elapsed.TotalMilliseconds).ToString()); Console.WriteLine("Adjusted milliseconds: " + (S3.Elapsed.TotalMilliseconds*((float)c3/(float)c1)).ToString()); } ``` Everytime I call it the result is along the lines of: ``` Not compiled and not automatically cached: Total milliseconds: 6185,2704 Adjusted milliseconds: 6185,2704 Compiled and not automatically cached: Total milliseconds: 2562,2519 Adjusted milliseconds: 2551,56949184038 Not compiled and automatically cached: Total milliseconds: 2378,823 Adjusted milliseconds: 2336,3187176891 ``` So there you have it. Not much, but about 7-8% difference. It is not the only mystery. I cannot explain why the first way would be that much slower because it is never re-evaluated but held in a global static variable. By the way, this is on .Net 3.5 and Mono 2.2 which behave exactly the same. On Windows. So, any ideas, why the compiled variant would even fall behind? EDIT1: After fixing the code the results now look like this: ``` Not compiled and not automatically cached: Total milliseconds: 6456,5711 Adjusted milliseconds: 6456,5711 Compiled and not automatically cached: Total milliseconds: 2668,9028 Adjusted milliseconds: 2657,77574842168 Not compiled and automatically cached: Total milliseconds: 6637,5472 Adjusted milliseconds: 6518,94897724836 ``` Which pretty much obsoletes all of the other questions as well. Thanks for the answers.
In the Regex.Match version you are looking for the input in the pattern. Try swapping the parameters around. ``` var m3 = Regex.Match(pattern, item); // Wrong var m3 = Regex.Match(item, pattern); // Correct ```
[I noticed](https://stackoverflow.com/questions/414328/using-static-regex-ismatch-vs-creating-an-instance-of-regex#414411) similar behavior. I also wondered why the compiled version would be slower, but noticed that above a certain number of calls, the compiled version is faster. So I dug into [Reflector](http://www.red-gate.com/products/reflector/) a little, and I noticed that for a compiled Regex, there's still a little setup that is performed on first call (specifically, creating an instance of the appropriate [`RegexRunner`](http://msdn.microsoft.com/en-us/library/system.text.regularexpressions.regexrunner.aspx) object). In my test, I found that if I moved both the constructor and an initial throw-away call to the regex outside the timer start, the compiled regex won no matter how many iterations I ran. --- Incidentally, the caching that the framework is doing when using static `Regex` methods is an optimization that's only needed when using static `Regex` methods. This is because every call to a static `Regex` method creates a new `Regex` object. In the `Regex` class's constructor it must parse the pattern. The caching allows subsequent calls of static `Regex` methods to reuse the `RegexTree` parsed from the first call, thereby avoiding the parsing step. When you use instance methods on a single `Regex` object, then this is not an issue. The parsing is still only performed one time (when you create the object). In addition, you get to avoid running all the other code in the constructor, as well as the heap allocation (and subsequent garbage collection). Martin Brown [noticed](https://stackoverflow.com/questions/428196/why-is-a-cached-regexp-outperforming-a-compiled-one#428265) that you reversed the arguments to your static `Regex` call (good catch, Martin). I think you'll find that if you fix that, the instance (not-compiled) regex will beat the static calls every time. You should also find that, given my findings above, the compiled instance will beat the not-compiled one, too. **BUT**: You should really read [Jeff Atwood's post](http://blog.codinghorror.com/to-compile-or-not-to-compile/) on compiled regexes before you go blindly applying that option to every regex you create.
Why is a cached Regexp outperforming a compiled one?
[ "", "c#", "regex", "performance", "benchmarking", "" ]
I have small problem with my .net 2.0 winforms application. I want to embed a few different icons to app. There are also other requirements: * must be automatic rebuildable by ms-build; using external gui apps for preparation isn't allowed * application must contain versioninfo. After embeding multiple icons, I want to register let's say two file associations to application documents / extension files. ``` [Registry] ... Root: HKCR; Subkey: "MyFileExt\DefaultIcon"; ValueType: string; ValueName: ""; ValueData: "{app}\MyApp.exe,2" ``` where "2" it's icon index. I know that I need to use somehow old-style Win32 resource file. I also found somewhere that if using Visual Studio 2005 it's possible to add 'native resource file' but it doesn't exist in 2008 anymore. Is it possible to meet all this requirements and if yes - how?
I decided to put all icons into separate old-school plain win32 c++ \*.dll. It's more convienient and I don't need to do any tricks with versioninfo/win32resources.
The solution is actually quite simple, although it required that I think back to my first encounter with RC files... In a plain text file, you can write the following ``` #include <windows.h> // The following is some Win32 candy for // -- the Windows styles in XP, Vista & 7 // does the UAC too. 1 RT_MANIFEST "App.manifest" // -- the versioning info, which we find usually in // AssemblyInfo.cs, but we need to add this one // because including Win32 resources overrides the .cs // file! VS_VERSION_INFO VERSIONINFO FILEVERSION 1,0,0,0 PRODUCTVERSION 1,0,0,0 FILEFLAGSMASK VS_FFI_FILEFLAGSMASK FILEFLAGS VS_FF_DEBUG FILEOS VOS__WINDOWS32 FILETYPE VFT_DLL FILESUBTYPE VFT2_UNKNOWN BEGIN BLOCK "StringFileInfo" BEGIN BLOCK "040904E4" // en-US/cp-1252 BEGIN VALUE "CompanyName", "My Company" VALUE "ProductName", "My C# App" VALUE "ProductVersion", "1.0.0.0" END END BLOCK "VarFileInfo" BEGIN VALUE "Translation", 0x409, 1252 // en-US in ANSI (cp-1252) END END END // And now the icons. // Note that the icon with the lowest ID // Will be used as the icon in the Explorer. 101 ICON "Icon1.ico" 102 ICON "Icon2.ico" 103 ICON "Icon3.ico" ``` (Details about the VERSIONINFO structure can be found in MSDN: [`VERSIONINFO` structure](http://msdn.microsoft.com/en-us/library/aa381058(VS.85).aspx)) You compile using `rc`, which should either be part of VS, or in the Windows Platform SDK. The result of the compilation of your `.rc` file is a `.res` file which can be included in the properties page of your C# project -- or add the following in the `.csproj` file itself. ``` <Win32ResourceFile>C:\path\to\my\resource\file.res</Win32ResourceFile> ``` Compile your project and look in the explorer, the info and icons should be there. The CSC compiler also provides a `/win32res` switch that embeds the `.res` file into you app. Hope this helps!
Multiple Icons into c# 2.0 WinApp
[ "", "c#", "winapi", "icons", "" ]
I am attempting to get the information from one table (games) and count the entries in another table (tickets) that correspond to each entry in the first. I want each entry in the first table to be returned even if there aren't any entries in the second. My query is as follows: ``` SELECT g.*, count(*) FROM games g, tickets t WHERE (t.game_number = g.game_number OR NOT EXISTS (SELECT * FROM tickets t2 WHERE t2.game_number=g.game_number)) GROUP BY t.game_number; ``` What am I doing wrong?
You need to do a left-join: ``` SELECT g.Game_Number, g.PutColumnsHere, count(t.Game_Number) FROM games g LEFT JOIN tickets t ON g.Game_Number = t.Game_Number GROUP BY g.Game_Number, g.PutColumnsHere ``` Alternatively, I think this is a little clearer with a correlated subquery: ``` SELECT g.Game_Number, G.PutColumnsHere, (SELECT COUNT(*) FROM Tickets T WHERE t.Game_Number = g.Game_Number) Tickets_Count FROM Games g ``` Just make sure you check the query plan to confirm that the optimizer interprets this well.
You need to learn more about how to use joins in SQL: ``` SELECT g.*, count(*) FROM games g LEFT OUTER JOIN tickets t USING (game_number) GROUP BY g.game_number; ``` Note that unlike some database brands, MySQL permits you to list many columns in the select-list even if you only GROUP BY their primary key. As long as the columns in your select-list are *functionally dependent* on the GROUP BY column, the result is unambiguous. Other brands of database (Microsoft, Firebird, etc.) give you an error if you list any columns in the select-list without including them in GROUP BY or in an aggregate function.
SQL GROUP BY/COUNT even if no results
[ "", "sql", "mysql", "" ]
I need to check a string to see if any word in it has multiple occurences. So basically I will accept: "google makes love" but I don't accept: "google makes google love" or "google makes love love google" etc. Any ideas? Really don't know any way to approach this, any help would be greatly appreciated.
Based on Wicked Flea code: ``` function single_use_of_words($str) { $words = explode(' ', trim($str)); //Trim to prevent any extra blank if (count(array_unique($words)) == count($words)) { return true; //Same amount of words } return false; } ```
Try this: ``` function single_use_of_words($str) { $words = explode(' ', $str); $words = array_unique($words); return implode(' ', $words); } ```
PHP Multiple Occurences Of Words Within A String
[ "", "php", "algorithm", "string", "substring", "" ]
This is my first post and I'm quite a novice on C++ and compiling in general. I'm compiling a program which requires some graphs to be drawn. The program create a *.dat file and then i should open gnuplot and write plot '*.dat'. That's fine. Is there a way to make gnuplot automatically open and show me the plot I need? I should use some system() function in the code to call gnuplot but how can I make him plot what I need? Sorry for my non-perfect English :s Thanks for the attention anyway!
Depending on your OS, you might be able to use [popen().](http://www.opengroup.org/onlinepubs/007908799/xsh/popen.html) This would let you spawn a gnuplot process and just just write to it like any other FILE\*. If you have datapoints to plot, you can pass them inline with the *plot "-" ...* option. Similarly, you may want to explore *set data style points/lines/linespoints/etc* options. --- Without pause or persist, gnuplot will terminate upon end-of-input-stream. In your example case, that would be when the end of the file is reached. --- To produce (write) an output file (graph), use: ``` set terminal png small set output "filename.png" ``` There's lots of options to *set terminal*. Png is usually there. If not, perhaps gif, tiff, or jpeg? Watch out for overwriting the file! You may want to use *set size 2,2* to make a larger graph. Some *set terminal* variants also allow you to specify the size.
I'm learning this today too. Here is a small example I cooked up. ``` #include <iostream> #include <fstream> using namespace std; int main(int argc, char **argv) { ofstream file("data.dat"); file << "#x y" << endl; for(int i=0; i<10; i++){ file << i << ' ' << i*i << endl; } file.close(); return 0; } ``` Save that as plot.cpp and compile that with g++: ``` g++ plot.cpp -o plot ``` Run the program to create the .dat file: ``` ./plot ``` Save the following gnuplot script as plot.plt: ``` set terminal svg enhanced size 1000 1000 fname "Times" fsize 36 set output "plot.svg" set title "A simple plot of x^2 vs. x" set xlabel "x" set ylabel "y" plot "./data.dat" using 1:2 title "" ``` Run the script with gnuplot to generate your .svg file: ``` gnuplot plot.plt ``` The resulting plot will be in plot.svg. If you leave out the first couple lines that specify the output, it will render in a window. Have fun!
C++ and gnuplot
[ "", "c++", "gnuplot", "" ]
I need to find the caller of a method. Is it possible using stacktrace or reflection?
``` StackTraceElement[] stackTraceElements = Thread.currentThread().getStackTrace() ``` According to the Javadocs: > The last element of the array represents the bottom of the stack, which is the least recent method invocation in the sequence. A `StackTraceElement` has `getClassName()`, `getFileName()`, `getLineNumber()` and `getMethodName()`. You will have to experiment to determine which index you want (probably `stackTraceElements[1]` or `[2]`).
**Note**: if you are using Java 9 or later you should use `StackWalker.getCallerClass()` as described in [Ali Dehghani's answer](https://stackoverflow.com/a/45812871/350692). The comparison of different methods below is mostly interesting for historical reason. --- An alternative solution can be found in a comment to [this request for enhancement](https://bugs.java.com/bugdatabase/view_bug?bug_id=4851444). It uses the `getClassContext()` method of a custom `SecurityManager` and seems to be faster than the stack trace method. The following program tests the speed of the different suggested methods (the most interesting bit is in the inner class `SecurityManagerMethod`): ``` /** * Test the speed of various methods for getting the caller class name */ public class TestGetCallerClassName { /** * Abstract class for testing different methods of getting the caller class name */ private static abstract class GetCallerClassNameMethod { public abstract String getCallerClassName(int callStackDepth); public abstract String getMethodName(); } /** * Uses the internal Reflection class */ private static class ReflectionMethod extends GetCallerClassNameMethod { public String getCallerClassName(int callStackDepth) { return sun.reflect.Reflection.getCallerClass(callStackDepth).getName(); } public String getMethodName() { return "Reflection"; } } /** * Get a stack trace from the current thread */ private static class ThreadStackTraceMethod extends GetCallerClassNameMethod { public String getCallerClassName(int callStackDepth) { return Thread.currentThread().getStackTrace()[callStackDepth].getClassName(); } public String getMethodName() { return "Current Thread StackTrace"; } } /** * Get a stack trace from a new Throwable */ private static class ThrowableStackTraceMethod extends GetCallerClassNameMethod { public String getCallerClassName(int callStackDepth) { return new Throwable().getStackTrace()[callStackDepth].getClassName(); } public String getMethodName() { return "Throwable StackTrace"; } } /** * Use the SecurityManager.getClassContext() */ private static class SecurityManagerMethod extends GetCallerClassNameMethod { public String getCallerClassName(int callStackDepth) { return mySecurityManager.getCallerClassName(callStackDepth); } public String getMethodName() { return "SecurityManager"; } /** * A custom security manager that exposes the getClassContext() information */ static class MySecurityManager extends SecurityManager { public String getCallerClassName(int callStackDepth) { return getClassContext()[callStackDepth].getName(); } } private final static MySecurityManager mySecurityManager = new MySecurityManager(); } /** * Test all four methods */ public static void main(String[] args) { testMethod(new ReflectionMethod()); testMethod(new ThreadStackTraceMethod()); testMethod(new ThrowableStackTraceMethod()); testMethod(new SecurityManagerMethod()); } private static void testMethod(GetCallerClassNameMethod method) { long startTime = System.nanoTime(); String className = null; for (int i = 0; i < 1000000; i++) { className = method.getCallerClassName(2); } printElapsedTime(method.getMethodName(), startTime); } private static void printElapsedTime(String title, long startTime) { System.out.println(title + ": " + ((double)(System.nanoTime() - startTime))/1000000 + " ms."); } } ``` An example of the output from my 2.4 GHz Intel Core 2 Duo MacBook running Java 1.6.0\_17: ``` Reflection: 10.195 ms. Current Thread StackTrace: 5886.964 ms. Throwable StackTrace: 4700.073 ms. SecurityManager: 1046.804 ms. ``` The internal Reflection method is *much* faster than the others. Getting a stack trace from a newly created `Throwable` is faster than getting it from the current `Thread`. And among the non-internal ways of finding the caller class the custom `SecurityManager` seems to be the fastest. ## Update As **lyomi** points out in [this comment](https://stackoverflow.com/questions/421280/how-do-i-find-the-caller-of-a-method-using-stacktrace-or-reflection/2924426#comment26408146_2924426) the `sun.reflect.Reflection.getCallerClass()` method has been disabled by default in Java 7 update 40 and removed completely in Java 8. Read more about this in [this issue in the Java bug database](https://bugs.openjdk.java.net/browse/JDK-8014925). ## Update 2 As **zammbi** has found, Oracle was [forced to back out of the change](https://bugs.openjdk.java.net/browse/JDK-8021946) that removed the `sun.reflect.Reflection.getCallerClass()`. It is still available in Java 8 (but it is deprecated). ## Update 3 3 years after: Update on timing with current JVM. ``` > java -version java version "1.8.0" Java(TM) SE Runtime Environment (build 1.8.0-b132) Java HotSpot(TM) 64-Bit Server VM (build 25.0-b70, mixed mode) > java TestGetCallerClassName Reflection: 0.194s. Current Thread StackTrace: 3.887s. Throwable StackTrace: 3.173s. SecurityManager: 0.565s. ```
How do I find the caller of a method using stacktrace or reflection?
[ "", "java", "stack-trace", "" ]
I compiled 2 different binaries on the same GNU/Linux server using g++ version 4.2.3. The first one uses: ``` GLIBC_2.0 GLIBC_2.2 GLIBC_2.1 GLIBCXX_3.4 GLIBC_2.1.3 ``` The second one uses: ``` GLIBC_2.0 GLIBC_2.2 GLIBC_2.1 GLIBCXX_3.4.9 GLIBCXX_3.4 GLIBC_2.1.3 ``` Why the second binary uses GLIBCXX\_3.4.9 that is only available on libstdc++.so.6.0.9 and *not* in libstdc++.so.6.0.8 What is the new feature generated by g++ that require an ABI break and force the system to have GLIBCXX\_3.4.9? Is there a way to disable this new feature to not require GLIBCXX\_3.4.9?
To find out which of the listed GLIBCXX\_3.4.9 symbol(s) your binary actually depends on, do this: ``` readelf -s ./a.out | grep 'GLIBCXX_3\.4\.9' | c++filt ``` Once you know which symbols to look for, you can trace back to the object which needs them: ``` nm -A *.o | grep _ZN<whatever> ``` Finally, to tie this back to source, you can do: ``` objdump -dS foo.o ``` and see which code is referencing the 3.4.9 symbol(s).
Since you asked for it, here are symbols having at least ABI version 3.4.9: ``` GLIBCXX_3.4.9 { _ZNSt6__norm15_List_node_base4hook*; _ZNSt6__norm15_List_node_base4swap*; _ZNSt6__norm15_List_node_base6unhookEv; _ZNSt6__norm15_List_node_base7reverseEv; _ZNSt6__norm15_List_node_base8transfer*; _ZNSo9_M_insertI[^g]*; _ZNSt13basic_ostreamIwSt11char_traitsIwEE9_M_insertI[^g]*; _ZNSi10_M_extractI[^g]*; _ZNSt13basic_istreamIwSt11char_traitsIwEE10_M_extractI[^g]*; _ZSt21__copy_streambufs_eofI[cw]St11char_traitsI[cw]EE[il]PSt15basic_streambuf*; _ZSt16__ostream_insert*; _ZN11__gnu_debug19_Safe_sequence_base12_M_get_mutexEv; _ZN11__gnu_debug19_Safe_iterator_base16_M_attach_singleEPNS_19_Safe_sequence_baseEb; _ZN11__gnu_debug19_Safe_iterator_base16_M_detach_singleEv; _ZN11__gnu_debug19_Safe_iterator_base12_M_get_mutexEv; _ZNKSt9bad_alloc4whatEv; _ZNKSt8bad_cast4whatEv; _ZNKSt10bad_typeid4whatEv; _ZNKSt13bad_exception4whatEv; } GLIBCXX_3.4.8; ``` Run the file `libstdc++-v3/config/abi/post/i386-linux-gnu/baseline_symbols.txt` through c++filt, grepping for GLIBCXX\_3.4.9 to make sense of those names (they look like wildcards only). I didn't do it because those names become quite long and nested. Later versions mostly include c++1x stuff. See the file `libstdc++-v3/config/abi/pre/gnu.ver` for the above. Read [here](http://www.bilmuh.gyte.edu.tr/gokturk/introcpp/gcc/ld_3.html#SEC39) about the VERSION linker script command.
What make g++ include GLIBCXX_3.4.9?
[ "", "c++", "linux", "compiler-construction", "g++", "gnu", "" ]
I just started Shader programming(GLSL) and created a few with RenderMonkey. Now I want to use this Shaders in my java code. Are there any simple examples of how I do that?
I have found a very simple example ``` int v = gl.glCreateShader(GL.GL_VERTEX_SHADER); int f = gl.glCreateShader(GL.GL_FRAGMENT_SHADER); BufferedReader brv = new BufferedReader(new FileReader("vertexshader.glsl")); String vsrc = ""; String line; while ((line=brv.readLine()) != null) { vsrc += line + "\n"; } gl.glShaderSource(v, 1, vsrc, (int[])null); gl.glCompileShader(v); BufferedReader brf = new BufferedReader(new FileReader("fragmentshader.glsl")); String fsrc = ""; String line; while ((line=brf.readLine()) != null) { fsrc += line + "\n"; } gl.glShaderSource(f, 1, fsrc, (int[])null); gl.glCompileShader(f); int shaderprogram = gl.glCreateProgram(); gl.glAttachShader(shaderprogram, v); gl.glAttachShader(shaderprogram, f); gl.glLinkProgram(shaderprogram); gl.glValidateProgram(shaderprogram); gl.glUseProgram(shaderprogram); ```
I don't have any myself, but if I have a problem along these lines I have often found the best place for 3D programming and Java advice is over at [JavaGaming.org](http://www.javagaming.org/) - I've not been there for a while, but it was always a helpful and knowledgeable community.
Jogl Shader programming
[ "", "java", "opengl", "jogl", "shader", "" ]
I'm writing a C# program to convert a FoxPro database to XML, and everything works except the memo field is blank. Is there something I'm missing to convert that bit? I'm using C# .Net 3.5 SP1, Visual FoxPro 9 SP 1 OLE DB Driver. Connection string is okay, as all other data is being pulled properly. When I converted the FoxPro database to SQL Server, the memo field is also blank there, so I can't convert twice.
Ended up having to do some work myself, but maybe it can help someone else out in the future: ``` public static object GetDbaseOrFoxproRawValue(string DBPath, string TableName, string ColumnName, string CompareColumnName, string CompareValue, bool CompareColumnIsAutoKey) { using (BinaryReader read = new BinaryReader(File.Open( Path.Combine(DBPath, TableName + ".dbf"), FileMode.Open, FileAccess.Read, FileShare.ReadWrite))) { // Is it a type of file that I can handle? if (new byte[] { 0x02, 0x03, 0x30, 0x43, 0x63, 0x83, 0x8b, 0xcb, 0xf5, 0xfb }.Contains(read.ReadByte())) { // Skip date. read.BaseStream.Seek(3, SeekOrigin.Current); // Read useful datas... uint RecordCount = read.ReadUInt32(); ushort FirstRecord = read.ReadUInt16(); ushort RecordLength = read.ReadUInt16(); int FieldCount = FirstRecord - 296 / 32; // Make sure things aren't stupid. ColumnName = ColumnName.ToLower(); CompareColumnName = CompareColumnName.ToLower(); // Find target column (field) string temp; UInt32 CompareFieldOffset = uint.MaxValue, FieldOffset = uint.MaxValue; byte CompareFieldLength = 0, FieldLength = 0; char FieldType = ' '; for (int i = 0; i < FieldCount; i++) { read.BaseStream.Seek(32 + (i * 32), SeekOrigin.Begin); temp = Encoding.ASCII.GetString(read.ReadBytes(11)).Replace("\0", "").ToLower(); if (temp == CompareColumnName) { read.ReadChar(); CompareFieldOffset = read.ReadUInt32(); CompareFieldLength = read.ReadByte(); } if (temp == ColumnName) { FieldType = read.ReadChar(); FieldOffset = read.ReadUInt32(); FieldLength = read.ReadByte(); } if (CompareFieldOffset != uint.MaxValue && FieldOffset != uint.MaxValue) break; } // Make sure we can continue. if (CompareFieldOffset == uint.MaxValue || FieldOffset == uint.MaxValue) return null; // Iterate through each record to find the one we want. for (int index = 0; index < RecordCount; index++) { read.BaseStream.Seek(FirstRecord + (index * RecordLength) + CompareFieldOffset, SeekOrigin.Begin); temp = Encoding.Default.GetString(read.ReadBytes(CompareFieldLength)).Replace("\0", ""); if (temp == CompareValue) { read.BaseStream.Seek(FirstRecord + (index * RecordLength) + FieldOffset, SeekOrigin.Begin); switch (FieldType) { case 'M': case 'I': return read.ReadUInt32(); case 'C': default: return Encoding.Default.GetString(read.ReadBytes(FieldLength)).Replace("\0", ""); } } } } else { return null; } } return null; } ``` Just grab the result from that and use it as an index into the memo file (that code is pretty simple using the MSDN documentation).
I am not terribly familiar with C# or FoxPro or SQL Server, so I cannot give you much advice in that regard. However, if you cannot find a suitable driver, you may consider parsing the raw data and memo files yourself. Another question has dealt with this: [What's the easiest way to read a FoxPro DBF file from Python?](https://stackoverflow.com/questions/37535/whats-the-easiest-way-to-read-a-foxpro-dbf-file-from-python) Believe it or not, these file formats are quite simple to parse should you decide to write your own C# parser. These specifications are available from Microsoft: * [Visual FoxPro File Structures](http://msdn.microsoft.com/en-us/library/d863bcf2(VS.80).aspx) * [Table File Structure](http://msdn.microsoft.com/en-us/library/st4a0s68(VS.80).aspx) * [Memo File Structure](http://msdn.microsoft.com/en-us/library/8599s21w(VS.80).aspx)
How do I extract the data in a FoxPro memo field using .NET?
[ "", "c#", "oledb", "visual-foxpro", "foxpro", "" ]
I've recently written this with help from SO. Now could someone please tell me how to make it actually log onto the board. It brings up everything just in a non logged in format. ``` import urllib2, re import urllib, re logindata = urllib.urlencode({'username': 'x', 'password': 'y'}) page = urllib2.urlopen("http://www.woarl.com/board/index.php", logindata) pagesource = page.read() print pagesource ```
Someone recently asked [the same question you're asking](https://stackoverflow.com/questions/301924/python-urlliburllib2httplib-confusion). If you read through the answers to that question you'll see code examples showing you how to stay logged in while browsing a site in a Python script using only stuff in the standard library. The accepted answer might not be as useful to you as [this other answer](https://stackoverflow.com/questions/301924/python-urlliburllib2httplib-confusion#302184), since the accepted answer deals with a specific problem involving redirection. However, I recommend reading through all of the answers regardless.
You probably want to look into preserving cookies from the server. [Pycurl](http://pycurl.sourceforge.net/) or [Mechanize](http://wwwsearch.sourceforge.net/mechanize/) will make this much easier for you
urllib2 data sending
[ "", "python", "urllib2", "" ]
Is there a compelling reason to not use [`debug_backtrace`](https://www.php.net/debug_backtrace) for the sole purpose of determining the calling method's class, name, and parameter list? Not for debugging purposes. It has the word "debug" in the function name, which makes me feel a little dirty to be using it in this way, but it fit the bill for what I needed to do (a single function that can be called from many places and needs to call the calling method from another system). It works, but is this still a bad idea? If so, why?
It does feel a little dirty, but as has been well documented, opined, and beaten to death elsewhere, PHP isn't a system designed for elegance. One highly convoluted reason **not** to use debug\_backtrace for application logic is it's possible some future developer working on PHP could decide "it's just a debug function, performance doesn't matter". If you're interested in a "better" way of doing this, you could probably use [PHP's magic constants](http://us.php.net/language.constants.predefined) to pass in the calling method and class name, and then use a [ReflectionMethod](http://us.php.net/Reflection) object to extract any other information you need. I put better in quotes because, while this would be cleaner and more correct, the overhead of instantiating a Reflection object *may* be greater than using the debug\_backtrace function.
> Is there a compelling reason to not use debug\_backtrace for the sole purpose of determining the calling method's class, name, and parameter list? Yes. The point is, it's generally a sign of bad design if your code requires such a tight coupling that the callee has to have these information about its caller because it breaks [referential transparency](https://en.wikipedia.org/wiki/Referential_transparency). Therefore, if you feel the need to use these information, you probably should rethink your design. The callee should not need to have these information to perform its task. The exceptions, of course, revolve around debugging, logging and more generally other kinds of code introspection (but even there, beware of it).
PHP debug_backtrace in production code to get information about calling method?
[ "", "php", "" ]
I checked out a copy of a C++ application from SourceForge (HoboCopy, if you're curious) and tried to compile it. Visual Studio tells me that it can't find a particular header file. I found the file in the source tree, but where do I need to put it, so that it will be found when compiling? Are there special directories?
Visual Studio looks for headers in this order: * In the current source directory. * In the Additional Include Directories in the project properties (*Project* -> *[project name] Properties*, under C/C++ | General). * In the Visual Studio C++ *Include directories* under *Tools* → *Options* → *Projects and Solutions* → *VC++ Directories*. * In new versions of Visual Studio (2015+) the above option is deprecated and a list of default include directories is available at *Project Properties* → *Configuration* → *VC++ Directories* In your case, add the directory that the header is to the project properties (*Project Properties* → *Configuration* → *C/C++* → *General* → *Additional Include Directories*).
Actually, on my windows 10 with visual studio 2017 community, the path of the C++ header are: 1. `C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.15.26726\include` 2. `C:\Program Files (x86)\Windows Kits\10\Include\10.0.17134.0\ucrt` The 1st contains standard C++ headers such as `<iostream>`, `<algorithm>`. The 2nd contains old C headers such as `<stdio.h>`, `<string.h>`. The version number can be different based on your software.
Where does Visual Studio look for C++ header files?
[ "", "c++", "visual-studio", "header", "" ]
I'm getting a syntax error (undefined line 1 test.js) in Firefox 3 when I run this code. The alert works properly (it displays 'work') but I have no idea why I am receiving the syntax error. jQuery code: ``` $.getJSON("json/test.js", function(data) { alert(data[0].test); }); ``` test.js: ``` [{"test": "work"}] ``` Any ideas? I'm working on this for a larger .js file but I've narrowed it down to this code. What's crazy is if I replace the local file with a remote path there is no syntax error (here's an example): <http://api.flickr.com/services/feeds/photos_public.gne?tags=cat&tagmode=any&format=json&jsoncallback=?>
I found a solution to kick that error ``` $.ajaxSetup({'beforeSend': function(xhr){ if (xhr.overrideMimeType) xhr.overrideMimeType("text/plain"); } }); ``` Now the explanation: In firefox 3 (and I asume only firefox THREE) every file that has the mime-type of "text/xml" is parsed and syntax-checked. If you start your JSON with an "[" it will raise an Syntax Error, if it starts with "{" it's an "Malformed Error" (my translation for "nicht wohlgeformt"). If I access my json-file from an local script - no server is included in this progress - I have to override the mime-type... Maybe you set your MIME-Type for that very file wrong... How ever, adding this little piece of code will save you from an error-message **Edit:** In jquery 1.5.1 or higher, you can use the mimeType option to achieve the same effect. To set it as a default for all requests, use ``` $.ajaxSetup({ mimeType: "text/plain" }); ``` You can also use it with $.ajax directly, i.e., your calls translates to ``` $.ajax({ url: "json/test.js", dataType: "json", mimeType: "textPlain", success: function(data){ alert(data[0].test); } }); ```
getJSON may be insisting on at least one name:value pair. A straight array `["item0","item1","Item2"]` is valid JSON, but there's nothing to reference it with in the callback function for getJSON. In this little array of Zip codes: ``` {"result":[["43001","ALEXANDRIA"],["43002","AMLIN"],["43003","ASHLEY"],["43004","BLACKLICK"],["43005","BLADENSBURG"],["43006","BRINKHAVEN"]]} ``` ... I was stuck until I added the {"result": tag. Afterward I could reference it: ``` <script> $.getJSON("temp_test_json.php","", function(data) { $.each(data.result, function(i, item) { alert(item[0]+ " " + i); if (i > 4 ) return false; }); }); </script> ``` ... I also found it was just easier to use $.each().
jQuery .getJSON Firefox 3 Syntax Error Undefined
[ "", "javascript", "jquery", "html", "json", "firefox", "" ]
I'd like to swap out an sql:query for some Java code that builds a complex query with several parameters. The current sql is a simple select. ``` <sql:query var="result" dataSource="${dSource}" sql="select * from TABLE "> </sql:query> ``` How do I take my Java ResultSet (ie. rs = stmt.executeQuery(sql);) and make the results available in my JSP so I can do this textbook JSP? To be more clear, I want to remove the above query and replace it with Java. ``` <% ResultSet rs = stmt.executeQuery(sql); // Messy code will be in some Controller %> ``` ``` <c:forEach var="row" items="${result.rows}"> <c:out value="${row.name}"/> </c:forEach> ``` Do I set the session/page variable in the Java section or is there some EL trick that I can use to access the variable?
Model (Row): ``` public class Row { private String name; // Add/generate constructor(s), getters and setters. } ``` DAO: ``` public List<Row> list() throws SQLException { Connection connection = null; Statement statement = null; ResultSet resultSet = null; List<Row> rows = new ArrayList<Row>(); try { connection = database.getConnection(); statement = connection.createStatement(); resultSet = statement.executeQuery(SQL_LIST); while (resultSet.next()) { Row row = new Row(); row.setName(resultSet.getString("name")); // ... rows.add(row); } } finally { if (resultSet != null) try { resultSet.close(); } catch (SQLException logOrIgnore) {} if (statement != null) try { statement.close(); } catch (SQLException logOrIgnore) {} if (connection != null) try { connection.close(); } catch (SQLException logOrIgnore) {} } return rows; } ``` Controller (servlet): ``` protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { try { List<Row> rows = someDAO.list(); request.setAttribute("rows", rows); } catch (SQLException e) { request.setAttribute("error", "Retrieving rows failed."); e.printStackTrace(); } request.getRequestDispatcher("page.jsp").forward(request, response); } ``` View (page.jsp): ``` <c:forEach items="${rows}" var="row"> <c:out value="${row.name}" /> ... </c:forEach> <c:if test="${not empty error}">Error: ${error}</c:if> ```
You set up a session/request attribute from the Java code. However, I would suggest *not* using a ResultSet, as it has some lifecycle issues (i.e. needs to be closed). I would suggest fetching the ResultSet object in the Java code, iterating over it building, say a List, closing the ResultSet and pass the List to the JSP. If you are using Spring, the JdbcTemplates provide methods that take an SQL string and parameters and return a List> with the results of the query, which might come in very handy for this.
How do I make a Java ResultSet available in my jsp?
[ "", "java", "jsp", "jdbc", "jstl", "" ]
Forgive me, for I am fairly new to C++, but I am having some trouble regarding operator ambiguity. I think it is compiler-specific, for the code compiled on my desktop. However, it fails to compile on my laptop. I think I know what's going wrong, but I don't see an elegant way around it. Please let me know if I am making an obvious mistake. Anyhow, here's what I'm trying to do: I have made my own vector class called Vector4 which looks something like this: ``` class Vector4 { private: GLfloat vector[4]; ... } ``` Then I have these operators, which are causing the problem: ``` operator GLfloat* () { return vector; } operator const GLfloat* () const { return vector; } GLfloat& operator [] (const size_t i) { return vector[i]; } const GLfloat& operator [] (const size_t i) const { return vector[i]; } ``` I have the conversion operator so that I can pass an instance of my Vector4 class to glVertex3fv, and I have subscripting for obvious reasons. However, calls that involve subscripting the Vector4 become ambiguous to the compiler: ``` enum {x, y, z, w} Vector4 v(1.0, 2.0, 3.0, 4.0); glTranslatef(v[x], v[y], v[z]); ``` Here are the candidates: ``` candidate 1: const GLfloat& Vector4:: operator[](size_t) const candidate 2: operator[](const GLfloat*, int) <built-in> ``` Why would it try to convert my Vector4 to a GLfloat\* first when the subscript operator is already defined on Vector4? Is there a simple way around this that doesn't involve typecasting? Am I just making a silly mistake? Thanks for any help in advance.
This is explained in the book "C++ Templates - The Complete Guide". It's because your operator[] takes size\_t, but you pass a different type which first has to undergo an implicit conversion to size\_t. On the other side, the conversion operator can be chosen too, and then the returned pointer can be subscript. So there is the ambiguity. Solution is to drop the conversion operator. They should generally be avoided as they just introduce problems, as you see. Provide a `begin` and `end` member function that returns `vector` and `vector + 4` respectively. Then you can use `v.begin()` if you want to pass to native openGL functions. There is a bit confusion in the comments. I think i will update this answer now to reflect the most recent concept of this. ``` struct Vector4 { // some of container requirements typedef GLfloat value_type; typedef GLfloat& reference; typedef GLfloat const& const_reference; typedef GLfloat * iterator; typedef GLfloat const * const_iterator; typedef std::ptrdiff_t difference_type; typedef std::size_t size_type; static const size_type static_size = 4; // returns iterators to the begin and end iterator begin() { return vector; } iterator end() { return vector + size(); } const_iterator begin() const { return vector; } const_iterator end() const { return vector + size(); } size_type size() const { return static_size; } size_type max_size() const { return static_size; } void swap(Vector4 & that) { std::swap(*this, that); } // some of sequences reference operator[](size_type t) { return vector[t]; } const_reference operator[](size_type t) const { return vector[t]; } // specific for us. returns a pointer to the begin of our buffer. // compatible with std::vector, std::array and std::string of c++1x value_type * data() { return vector; } value_type const* data() const { return vector; } // comparison stuff for containers friend bool operator==(Vector4 const&a, Vector4 const&b) { return std::equal(a.begin(), a.end(), b.begin()); } friend bool operator!=(Vector4 const&a, Vector4 const&b) { return !(a == b); } friend bool operator<(Vector4 const&a, Vector4 const&b) { return std::lexicographical_compare(a.begin(), a.end(), b.begin(), b.end()); } friend bool operator> (Vector4 const&a, Vector4 const&b) { return b < a; } friend bool operator<=(Vector4 const&a, Vector4 const&b) { return !(b < a); } friend bool operator>=(Vector4 const&a, Vector4 const&b) { return !(a < b); } private: GLfloat vector[4]; } ```
It's too hard to get rid of the ambiguity. It could easily interpret it as the direct [] access, or cast-to-float\* followed by array indexing. My advice is to drop the operator GLfloat\*. It's just asking for trouble to have implicit casts to float this way. If you must access the floats directly, make a get() (or some other name of your choice) method to Vector4 that returns a pointer to the raw floats underneath. Other random advice: rather than reinvent your own vector classes, you should use the excellent ones in the "IlmBase" package that is part of [OpenEXR](http://www.openexr.com/downloads.html)
C++ Operator Ambiguity
[ "", "c++", "opengl", "operators", "ambiguity", "operator-keyword", "" ]
Both `System.Timers.Timer` and `System.Threading.Timer` fire at intervals that are considerable different from the requested ones. For example: ``` new System.Timers.Timer(1000d / 20); ``` yields a timer that fires 16 times per second, not 20. To be sure that there are no side-effects from too long event handlers, I wrote this little test program: ``` int[] frequencies = { 5, 10, 15, 20, 30, 50, 75, 100, 200, 500 }; // Test System.Timers.Timer foreach (int frequency in frequencies) { int count = 0; // Initialize timer System.Timers.Timer timer = new System.Timers.Timer(1000d / frequency); timer.Elapsed += delegate { Interlocked.Increment(ref count); }; // Count for 10 seconds DateTime start = DateTime.Now; timer.Enabled = true; while (DateTime.Now < start + TimeSpan.FromSeconds(10)) Thread.Sleep(10); timer.Enabled = false; // Calculate actual frequency Console.WriteLine( "Requested frequency: {0}\nActual frequency: {1}\n", frequency, count / 10d); } ``` The output looks like this: Requested: 5 Hz; actual: 4,8 Hz Requested: 10 Hz; actual: 9,1 Hz Requested: 15 Hz; actual: 12,7 Hz Requested: 20 Hz; actual: 16 Hz Requested: 30 Hz; actual: 21,3 Hz Requested: 50 Hz; actual: 31,8 Hz Requested: 75 Hz; actual: 63,9 Hz Requested: 100 Hz; actual: 63,8 Hz Requested: 200 Hz; actual: 63,9 Hz Requested: 500 Hz; actual: 63,9 Hz The actual frequency deviates by up to 36% from the requested one. (And evidently cannot exceed 64 Hz.) Given that Microsoft recommends this timer for its "greater accuracy" over `System.Windows.Forms.Timer`, this puzzles me. Btw, these are not random deviations. They are the same values every time. And a similar test program for the other timer class, `System.Threading.Timer`, shows the exact same results. In my actual program, I need to collect measurements at precisely 50 samples per second. This should not yet require a real-time system. And it is very frustrating to get 32 samples per second instead of 50. Any ideas? @Chris: You are right, the intervals all seem to be integer multiples of something around 1/64th second. Btw, adding a Thread.Sleep(...) in the event handler doesn't make any difference. This makes sense given that `System.Threading.Timer` uses the thread pool, so each event is fired on a free thread.
Well, I'm getting different number up to 100 Hz actually, with some big deviations, but in most cases closer to the requested number (running XP SP3 with most recent .NET SPs). The System.Timer.Timer is implemented using System.Threading.Timer, so this explains why you see same results. I suppose that the timer is implemented using some kind of scheduling algorithm etc. (it's internal call, maybe looking at Rotor 2.0 might shed some light on it). I would suggest to implement a kind of timer using another thread (or combination thereof) calling Sleep and a callback. Not sure about the outcome though. Otherwise you might take a look at [multimedia timers](http://msdn.microsoft.com/en-us/library/ms713423(VS.85).aspx) (PInvoke).
If you use winmm.dll you can use more CPU time, but have better control. Here is your example modified to use the winmm.dll timers ``` const String WINMM = "winmm.dll"; const String KERNEL32 = "kernel32.dll"; delegate void MMTimerProc (UInt32 timerid, UInt32 msg, IntPtr user, UInt32 dw1, UInt32 dw2); [DllImport(WINMM)] static extern uint timeSetEvent( UInt32 uDelay, UInt32 uResolution, [MarshalAs(UnmanagedType.FunctionPtr)] MMTimerProc lpTimeProc, UInt32 dwUser, Int32 fuEvent ); [DllImport(WINMM)] static extern uint timeKillEvent(uint uTimerID); // Library used for more accurate timing [DllImport(KERNEL32)] static extern bool QueryPerformanceCounter(out long PerformanceCount); [DllImport(KERNEL32)] static extern bool QueryPerformanceFrequency(out long Frequency); static long CPUFrequency; static int count; static void Main(string[] args) { QueryPerformanceFrequency(out CPUFrequency); int[] frequencies = { 5, 10, 15, 20, 30, 50, 75, 100, 200, 500 }; foreach (int freq in frequencies) { count = 0; long start = GetTimestamp(); // start timer uint timerId = timeSetEvent((uint)(1000 / freq), 0, new MMTimerProc(TimerFunction), 0, 1); // wait 10 seconds while (DeltaMilliseconds(start, GetTimestamp()) < 10000) { Thread.Sleep(1); } // end timer timeKillEvent(timerId); Console.WriteLine("Requested frequency: {0}\nActual frequency: {1}\n", freq, count / 10); } Console.ReadLine(); } static void TimerFunction(UInt32 timerid, UInt32 msg, IntPtr user, UInt32 dw1, UInt32 dw2) { Interlocked.Increment(ref count); } static public long DeltaMilliseconds(long earlyTimestamp, long lateTimestamp) { return (((lateTimestamp - earlyTimestamp) * 1000) / CPUFrequency); } static public long GetTimestamp() { long result; QueryPerformanceCounter(out result); return result; } ``` And here is the output I get: ``` Requested frequency: 5 Actual frequency: 5 Requested frequency: 10 Actual frequency: 10 Requested frequency: 15 Actual frequency: 15 Requested frequency: 20 Actual frequency: 19 Requested frequency: 30 Actual frequency: 30 Requested frequency: 50 Actual frequency: 50 Requested frequency: 75 Actual frequency: 76 Requested frequency: 100 Actual frequency: 100 Requested frequency: 200 Actual frequency: 200 Requested frequency: 500 Actual frequency: 500 ``` Hope this helps.
C# Why are timer frequencies extremely off?
[ "", "c#", "timer", "frequency", "deviation", "" ]
I know that if an IP falls outside the Subnet Mask + Local IP rules, it will be only reachable through an Gateway. The problem is that I don't know how to obtain the local IP address, neither the local subnet mask, programatically using .NET. Any of you can help me? I will use this information to squeeze the maximum performance from my batch SQL insertion queue. If the SQL server falls in the same subnet, then it will use an algorithm optimized for mininum latency, otherwise I´ll use one optimized for high latency.
You can use the classes inside the System.Net.NetworkInformation namespace (introduced in .NET 2.0): ``` NetworkInterface[] interfaces = NetworkInterface.GetAllNetworkInterfaces(); foreach (NetworkInterface iface in interfaces) { IPInterfaceProperties properties = iface.GetIPProperties(); foreach (UnicastIPAddressInformation address in properties.UnicastAddresses) { Console.WriteLine( "{0} (Mask: {1})", address.Address, address.IPv4Mask ); } } ```
There is an alternative way using the [NetworkInformation](http://msdn.microsoft.com/en-us/library/system.net.networkinformation.networkinterface.aspx) class: ``` public static void ShowNetworkInterfaces() { // IPGlobalProperties computerProperties = IPGlobalProperties.GetIPGlobalProperties(); NetworkInterface[] nics = NetworkInterface.GetAllNetworkInterfaces(); if (nics == null || nics.Length < 1) { Console.WriteLine(" No network interfaces found."); return; } Console.WriteLine(" Number of interfaces .................... : {0}", nics.Length); foreach (NetworkInterface adapter in nics) { IPInterfaceProperties properties = adapter.GetIPProperties(); Console.WriteLine(); Console.WriteLine(adapter.Description); Console.WriteLine(String.Empty.PadLeft(adapter.Description.Length,'=')); Console.WriteLine(" Interface type .......................... : {0}", adapter.NetworkInterfaceType); Console.WriteLine(" Physical Address ........................ : {0}", adapter.GetPhysicalAddress().ToString()); string versions =""; // Create a display string for the supported IP versions. if (adapter.Supports(NetworkInterfaceComponent.IPv4)) { versions = "IPv4"; } if (adapter.Supports(NetworkInterfaceComponent.IPv6)) { if (versions.Length > 0) { versions += " "; } versions += "IPv6"; } Console.WriteLine(" IP version .............................. : {0}", versions); UnicastIPAddressInformationCollection uniCast = properties.UnicastAddresses; if (uniCast != null) { foreach (UnicastIPAddressInformation uni in uniCast) { Console.WriteLine(" Unicast Address ......................... : {0}", uni.Address); Console.WriteLine(" Subnet Mask ......................... : {0}", uni.IPv4Mask); } } Console.WriteLine(); } } ``` *The code sample is a mashup form the examples provided by Msdn, simplified to only show the information you probably need.* EDIT: Took me too long (too many things at the same time :) ) to make this post, and Mitch beat me to it :)
How to determine whether an IP is from the same LAN programatically in .NET C#
[ "", "c#", ".net", "ip", "" ]
Here's the problem: 1.) We have page here... www.blah.com/mypage.html 2.) That page requests a js file www.foo.com like this... ``` <script type="text/javascript" src="http://www.foo.com/jsfile.js" /> ``` 3.) "jsfile.js" uses Prototype to make an Ajax request back to www.foo.com. 4.) The ajax request calls www.foo.com/blah.html. The callback function gets the html response and throws it into a div. This doesn't seem to work though, I guess it is XSS. Is that correct? If so, how can I solve this problem? Is there any other way to get my html from www.foo.com to www.blah.com on the client without using an iframe?
It is XSS and it is forbidden. You should really not do things that way. If you really need to, make your AJAX code call the local code (PHP, ASP, whatever) on blah.com and make it behave like client and fetch whatever you need from foo.com and return that back to the client. If you use PHP, you can do this with fopen('www.foo.com/blah.html', 'r') and then reading the contents as if it was a regular file. Of course, allow\_remote\_url\_fopen (or whatever it is called exactly) needs to be enabled in your php.ini.
There is a [w3c proposal](http://www.w3.org/TR/access-control/) for allowing sites to specify other sites which are allowed to make cross site queries to them. (Wikipedia might want to allow all request for articles, say, but google mail wouldn't want to allow requests - since this might allow any website open when you are logged into google mail to read your mail). This might be available at some point in the future.
Cross domain Ajax request from within js file
[ "", "javascript", "html", "ajax", "" ]
I have been using anonymous namespaces to store local data and functions and wanted to know when the data is initialized? Is it when the application starts in the same way as static data or is it compiler dependent? For example: ``` // foo.cpp #include "foo.h" namespace { const int SOME_VALUE = 42; } void foo::SomeFunc(int n) { if (n == SOME_VALUE) { ... } } ``` The question arises out of making some code thread-safe. In the above example I need to be certain that `SOME_VALUE` is initialized before SomeFunc is called for the first time.
C++ Standard, **3.6.2/1** : > Zero-initialization and > initialization with a constant > expression are collectively called > static initialization; all other > initialization is dynamic > initialization. Objects of POD types > (3.9) with static storage duration > initialized with constant expressions > (5.19) shall be initialized before any > dynamic initialization takes place. > Objects with static storage duration > defined in namespace scope in the same > translation unit and dynamically > initialized shall be initialized in > the order in which their definition > appears in the translation unit. This effectively means, even when another translation unit calls your SomeFunc function from outside, your SOME\_VALUE constant will always be correctly initialized, because it's initialized with a *constant expression*. The only way for your function being called early (before main) is while initializing an object with dynamic initialiation. But by that time, according to the standard quote, the initialization of your POD variable is already done.
In this particular case (a global variable that is const) the variable is "initialized" at compile time. SOME\_VALUE is always equal to 42. In fact, most (all?) compiler will actually compile this as if it was hardcoded : ``` void foo::SomeFunc(int n) { if (n == 42) { ... } } ```
When is anonymous namespace data initialized?
[ "", "c++", "namespaces", "" ]
hey guys, i'm getting an exception on the following inner exception: {"Value cannot be null.\r\nParameter name: String"} Which reads like a simple error message, but none of the values (image, fileName) are null. How can i find out where this null String is? ``` RipHelper.UploadImage(image, fileName); ``` which calls ``` public static void UploadImage(System.Drawing.Image image, string fileName) { // this line is never reached } ``` Here is the full error log # # System.ArgumentNullException: Value cannot be null. Parameter name: String at System.Number.StringToNumber(String str, NumberStyles options, NumberBuffer& number, NumberFormatInfo info, Boolean parseDecimal) at System.Number.ParseInt32(String s, NumberStyles style, NumberFormatInfo info) at System.Int32.Parse(String s) at Helpers.RipHelper..cctor() in C:\Helpers\RipHelper.cs:line 23 --- End of inner exception stack trace --- at Helpers.RipHelper.UploadImage(HttpPostedFile uploadFile, String fileName) at Helpers.UploadHelper.UploadImage(HttpContext context) in C:\Helpers\UploadHelper.cs:line 79
The exception is in the static constructor of the class Helpers.RipHelper, at line 23 of RipHelper.cs. This line is calling Int32.Parse, passing a null value. Perhaps the static constructor is referencing a static field that has not yet been initialized. If you are having trouble debugging this, post the code of the class including the static constructor and any field initializers.
The error is occuring in the static constructor of the RipHelper class.
Exception with System.Drawing.Image
[ "", "c#", "exception", "" ]
I would like to write a utility that will provide me with a relatively unique ID in Java. Something pretty simple, like x bits from timestamp + y bits from random number. So, how would I implement the following method: ``` long getUniqueID() { long timestamp = System.currentTimeMillis(); long random = some random long ... return id; } ``` BONUS Any suggestions for other easily obtainable information I could use to form my ID? note: I am aware of GUIDs and I know Java has a UUID class, but I don't want something that is 128 bits long.
Just clip the bits you don't need: ``` return java.util.UUID.randomUUID().getLeastSignificantBits(); ```
What you are trying to do is create a [hash function](http://en.wikipedia.org/wiki/Hash_function) that combines two long values into a single long value. In this case, the [uniformity](http://en.wikipedia.org/wiki/Hash_function#Uniformity) of the hash function will be of utmost importance since collisions in created unique ID values are unacceptable. However, if you can compare hash values to previously created identifiers, then collisions can be resolved by modifying the hash until no collision occurs. For example, you could take the time stamp and perform an [exclusive-or](http://en.wikipedia.org/wiki/Exclusive_or) (using the caret ^ operator in Java) with the random value. If a collision is detected, then add one to the result.
How can I assemble bits into a long to create a unique ID?
[ "", "java", "guid", "uuid", "bit", "" ]
When ever I think I can use the yield keyword, I take a step back and look at how it will impact my project. I always end up returning a collection instead of yeilding because I feel the overhead of maintaining the state of the yeilding method doesn't buy me much. In almost all cases where I am returning a collection I feel that 90% of the time, the calling method will be iterating over all elements in the collection, or will be seeking a series of elements throughout the entire collection. I do understand its usefulness in linq, but I feel that only the linq team is writing such complex queriable objects that yield is useful. Has anyone written anything like or not like linq where yield was useful?
I recently had to make a representation of mathematical expressions in the form of an Expression class. When evaluating the expression I have to traverse the tree structure with a post-order treewalk. To achieve this I implemented IEnumerable<T> like this: ``` public IEnumerator<Expression<T>> GetEnumerator() { if (IsLeaf) { yield return this; } else { foreach (Expression<T> expr in LeftExpression) { yield return expr; } foreach (Expression<T> expr in RightExpression) { yield return expr; } yield return this; } } ``` Then I can simply use a foreach to traverse the expression. You can also add a Property to change the traversal algorithm as needed.
Note that with yield, you are iterating over the collection once, but when you build a list, you'll be iterating over it twice. Take, for example, a filter iterator: ``` IEnumerator<T> Filter(this IEnumerator<T> coll, Func<T, bool> func) { foreach(T t in coll) if (func(t)) yield return t; } ``` Now, you can chain this: ``` MyColl.Filter(x=> x.id > 100).Filter(x => x.val < 200).Filter (etc) ``` You method would be creating (and tossing) three lists. My method iterates over it just once. Also, when you return a collection, you are forcing a particular implementation on you users. An iterator is more generic.
Is yield useful outside of LINQ?
[ "", "c#", ".net", "linq", "yield", "" ]
My table has data like so ``` products_parent_id | products_quantity 2 5 2 7 2 9 2 4 ``` My SQL statement looks like so (so far): ``` UPDATE ' . TABLE_PRODUCTS . ' SET products_quantity = products_quantity +' . $order['products_quantity'] . ', products_ordered = products_ordered - ' . $order['products_quantity'] . ' WHERE products_parent_id = ' . (int)$order['products_id'] ``` Now what I want to do, is update only say the 3rd occurrence of products\_parent\_id (in this case, 3rd from the top is 9) To SELECT the 3rd occurrence, I used this at the end of my statements LIMIT($grade, 1) Is there a way to UPDATE using MySQL but only the 3rd/4th/etc occurrence? Will it be reliable? Thank you
Short Answer: No Long Answer: Sort Of. Because the order the rows are returned is not defined by SQL. Technically two different requests to the DB could return the rows in a different order. So even if you update the third row, which is the third will depend on the implementation. The only way to mitigate this is to apply a specific ordering to the rows. (Order by) I don't think it is part of the language specification but most SQL implementations allow you to get a particular row from a query. I am not sure what the MYSQL specific details are but a quick google got this page: <http://forums.mysql.com/read.php?10,36490,36511>
Your question doesn't make your reasons for doing this update entirely clear, but it does remind me of when I've been generating sample entries in a DB table and wanted to move a handful of them into category A or category B. Your SQL query doesn't specify an ORDER BY, so Martin's answer is correct in saying that you may not get the same result order each time. However, if your ORDER BY was based on a particular ID, and that ID was sequential, you would be able to use [modulo](http://en.wikipedia.org/wiki/Modulo_operation) to access each third record. Eg: ``` mysql> create table foo ( id int ) ; Query OK, 0 rows affected (0.01 sec) mysql> insert into foo values (1), (2), (3), (4), (5), (6), (7), (8), (9), (10), (11), (15) ; Query OK, 12 rows affected (0.00 sec) Records: 12 Duplicates: 0 Warnings: 0 mysql> select * from foo where id%3=0; +------+ | id | +------+ | 3 | | 6 | | 9 | | 15 | +------+ 4 rows in set (0.00 sec) ``` If that method works for you, you could do your update to hit every third entry by using ``` UPDATE [ ... ] WHERE id % 3 = 0 ; ``` (Update, where id / 3 has a [remainder](http://en.wikipedia.org/wiki/Remainder) of 0).
Is it possible in MySQL to update only the nth occurence?
[ "", "php", "mysql", "" ]
If I have a class as follows ``` class Example_Class { private: int x; int y; public: Example_Class() { x = 8; y = 9; } ~Example_Class() { } }; ``` And a struct as follows ``` struct { int x; int y; } example_struct; ``` Is the structure in memory of the `example_struct` simmilar to that in `Example_Class` for example if I do the following ``` struct example_struct foo_struct; Example_Class foo_class = Example_Class(); memcpy(&foo_struct, &foo_class, sizeof(foo_struct)); ``` will `foo_struct.x = 8` and `foo_struct.y = 9` (ie: the same values as the x,y values in the foo\_class) ? The reason I ask is I have a C++ library (don't want to change it) that is sharing an object with C code and I want to use a struct to represent the object coming from the C++ library. I'm only interested in the attributes of the object. I know the ideal situation would be to have Example\_class wrap arround a common structure between the C and C++ code but it is not going to be easy to change the C++ library in use.
The C++ standard *guarantees* that memory layouts of a C `struct` and a C++ `class` (or `struct` -- same thing) will be identical, provided that the C++ `class`/`struct` fits the criteria of being **POD** ("Plain Old Data"). So what does POD mean? A class or struct is POD if: * All data members are public and themselves POD or fundamental types (but not reference or pointer-to-member types), or arrays of such * It has no user-defined constructors, assignment operators or destructors * It has no virtual functions * It has no base classes About the only "C++-isms" allowed are non-virtual member functions, static members and member functions. Since your class has both a constructor and a destructor, it is formally speaking not of POD type, so the guarantee does not hold. (Although, as others have mentioned, in practice the two layouts are likely to be identical on any compiler that you try, so long as there are no virtual functions). See section [26.7] of the [C++ FAQ Lite](http://www.dietmar-kuehl.de/mirror/c++-faq/intrinsic-types.html#faq-26.7) for more details.
> Is the structure in memory of the example\_struct simmilar to that in Example\_Class The behaviour isn't guaranteed, and is compiler-dependent. Having said that, the answer is "yes, on my machine", provided that the Example\_Class contains no virtual method (and doesn't inherit from a base class).
Structure of a C++ Object in Memory Vs a Struct
[ "", "c++", "struct", "" ]
I need to set/get the cookies stored at `first.example` while browsing `second.example`, I have full access of `first.example` but I only have JavaScript access (can manipulate the DOM as I want) on `second.example`. My first approach was to create an iframe on `second.example` (with JS) that loaded a page like `first.example/doAjax?setCookie=xxx` and that did an AJAX call to say `first.example/setCookie?cookieData=xxx` which would set the cookie on `first.example` with the data we passed around. That pretty much worked fine for setting the cookie on `first.example` from `second.example` - for getting a cookie I basically followed the same procedure, created the iframe that loaded `first.example/doAjax?getCookie` and that would do an AJAX call to say `first.example/getCookie` which would read the cookie info on `first.example` and return it as a JSON object. The problem is that I'm unable to bring that JSON cookie object back to `second.example` so I can read it, well maybe I could just bring it when the AJAX call is complete using "window.top" but there's timing issues because its not relative to when the iframe has been loaded. I hope I am clear and was wondering if there's an easier solution rather than this crazy iframe->ajax crap, also seems like this won't even work for getting cookies in SAFARI.
You could inject a script element into HEAD of the document with a callback that passes the cookie you need to whatever function needs it. Something like: ``` <script type="text/javascript"> var newfile=document.createElement('script'); newfile.setAttribute("type","text/javascript"); newfile.setAttribute("src", 'http://first.com/doAjax?getCookie&callback=passCookie'); document.getElementsByTagName("head")[0].appendChild(newfile); </script> ``` And the page first.com/doAjax?getCookie could do this: ``` passCookie({'name':'mycookie', 'value':'myvalue'}); ```
Put this PHP-File to first.com: ``` //readcookie.php echo $_COOKIE['cookiename']; ``` On second.com you can use this javascript to get the value: ``` function readCookieCallback() { if ((this.readyState == 4) && (this.status == 200)) { alert("the value of the cookie is: "+this.responseText); } else if ((this.readyState == 4) && (this.status != 200)) { //error... } } function buttonClickOrAnything() { var refreshObject = new XMLHttpRequest(); if (!refreshObject) { //IE6 or older try { refreshObject = new ActiveXObject("Msxml2.XMLHTTP"); } catch (e) { try { refreshObject = new ActiveXObject("Microsoft.XMLHTTP"); } catch (e) { return; } } } refreshObject.onreadystatechange = readCookieCallback; refreshObject.open("GET", "http://www.first.com/readcookie.php"); refreshObject.send(); } ``` Regards, Robert
Getting setting cookies on different domains, with JavaScript or other
[ "", "javascript", "dns", "cookies", "" ]
I am trying to connect to a remote MySQL database using Visual C# 2008 Express Edition. Is there a way to connect using the editor, or do I have to code the connection manually? The editor has a clear and easy-to-follow wizard for connecting to Microsoft SQL Server and Access databases, but I don't see an easy way to add a remote MySQL datasource. I tried searching the help, but couldn't find anything useful. Has anyone done this using the editor? Or can point me in a useful direction?
You will have to code the connection manually to connect to a remote MySQL database using Visual C# 2008 Express Edition. VS 2008 Express (and VS 2005 Express too) doesn't allow you to use MySQL .Net Provider through the Data Source Dialog. The non-Express edition allow you to do the same. To use MySQL in VS Express, you will have to include a reference to the MySQL DLLs. If you have installed the MySQL .Net Provider, the DLLs will be in C:\Program Files\MySQL\MySQL Connector Net x.x.x). Or copy the DLLs to the Bin folder of your project. After including the DLLs, you can make a ConnectionString to connect to the remote MySQL Database. The MySQL .Net Provider can be found [here](http://dev.mysql.com/downloads/connector/net/5.2.html) A similar question was asked in thread [396593 here](https://stackoverflow.com/questions/396593/mysql-net-provider-5-2-does-not-show-up-in-data-source-dialog-of-vs-2008-express)
**EDIT:** I didn't check Rishi Agarwal's answer before posting. I think his answer has more insight on the express edition I am not sure about this and express edition, but you should try [MySQL Connector/Net](http://dev.mysql.com/downloads/connector/net/5.2.html). It works fine with my VS2008 Pro.
Connect to remote MySQL database with Visual C#
[ "", "c#", ".net", "mysql", "" ]
Can you simply delete the directory from your python installation, or are there any lingering files that you must delete?
It varies based on the options that you pass to `install` and the contents of the [distutils configuration files](http://docs.python.org/install/index.html#inst-config-files) on the system/in the package. I don't believe that any files are modified outside of directories specified in these ways. Notably, [distutils does not have an uninstall command](http://bugs.python.org/issue4673) at this time. It's also noteworthy that deleting a package/egg can cause dependency issues – utilities like [`easy_install`](http://peak.telecommunity.com/DevCenter/EasyInstall) attempt to alleviate such problems.
The three things that get installed that you will need to delete are: 1. Packages/modules 2. Scripts 3. Data files Now on my linux system these live in: 1. /usr/lib/python2.5/site-packages 2. /usr/bin 3. /usr/share But on a windows system they are more likely to be entirely within the Python distribution directory. I have no idea about OSX except it is more likey to follow the linux pattern.
How do you uninstall a python package that was installed using distutils?
[ "", "python", "" ]
I have some model objects I'm using in my Java client application. Later these model objects will be populated / retrieved from remote services (e.g. SOAP). Now I want to do manual / automatic testing of the frontend before implementing these services. The model objects are mostly POJO and I want to store some sample test data in files and populate them with some easy method. E.g. having model object School (with name (String) and teachers (List)) and Teacher with lastname and firstname, I want to store actual test data in some XML / text file and create some schools containing teachers from these data. What are you using in this situation? I'm not familiar with TTD yet, but I can't imagine that there is no generic framework for doing this. [edit] I've choosen Spring to mock up my sample data / services, but the other alternatives mentioned here would have worked as well.
You can also use Spring to mock your remote service(s) and their responses. In this case, all you have to do is loading an applicationContext that will simulate your backend system(s) by replying exactly what you want for your test purpose.
Sounds like a good use of XML serialization. You can use any XML serialization tool you like: XStream, etc. Another nice tool is SOAP UI. If you point it to the WSDL for your service it'll create the XML request for you. Fill in the values and off you go. These can be saved, so perhaps that's a good way to generate test cases.
How to prepopulate model objects with test data from file?
[ "", "java", "windows", "" ]
So I have this regex: ``` (^(\s+)?(?P<NAME>(\w)(\d{7}))((01f\.foo)|(\.bar|\.goo\.moo\.roo))$|(^(\s+)?(?P<NAME2>R1_\d{6}_\d{6}_)((01f\.foo)|(\.bar|\.goo\.moo\.roo))$)) ``` Now if I try and do a match against this: ``` B048661501f.foo ``` I get this error: ``` File "C:\Python25\lib\re.py", line 188, in compile return _compile(pattern, flags) File "C:\Python25\lib\re.py", line 241, in _compile raise error, v # invalid expression sre_constants.error: redefinition of group name 'NAME' as group 9; was group 3 ``` If I can't define the same group twice in the same regex expression for two different cases, what do I do?
No, you can't have two groups of the same name, this would somehow defy the purpose, wouldn't it? What you probably *really* want is this: ``` ^\s*(?P<NAME>\w\d{7}|R1_(?:\d{6}_){2})(01f\.foo|\.(?:bar|goo|moo|roo))$ ``` I refactored your regex as far as possible. I made the following assumptions: You want to (correct me if I'm wrong): * ignore white space at the start of the string * match either of the following into a group named "NAME": + a letter followed by 7 digits, or + `"R1_"`, and two times (6 digits + `"_"`) * followed by either: + `"01f.foo"` or + `"."` and (`"bar"` or `"goo"` or `"moo"` or `"roo"`) * followed by the end of the string --- You could also have meant: ``` ^\s*(?P<NAME>\w\d{7}01f|R1_(?:\d{6}_){2})\.(?:foo|bar|goo|moo|roo)$ ``` Which is: * ignore white space at the start of the string * match either of the following into a group named "NAME": + a letter followed by 7 digits and "01f" + `"R1_"`, and two times (6 digits + `"_"`) * a dot * `"foo"`, `"bar"`, `"goo"`, `"moo"` or `"roo"` * the end of the string
Reusing the same name makes sense in your case, contrary to Tamalak's reply. Your regex compiles with python2.7 and also re2. Maybe this problem has been resolved.
Regex Problem Group Name Redefinition?
[ "", "python", "regex", "" ]
Consider two web pages with the following in their body respectively: ``` <body> <script> document.writeln('<textarea></textarea>') </script> </body> ``` and ``` <body> <script> var t = document.createElement('textarea'); document.body.appendChild(t); </script> </body> ``` (think of them as part of something larger, where the textareas *have* to be generated from JavaScript and can't be hard-coded into the page). They both produce the same output, but the former is considered "bad", while the latter is considered the "right" way to do it. (Right?) On the other hand, if you type something in the page and then either refresh it, or go somewhere else and hit Back, then in the former case, what you typed in the textarea is preserved, while in the later it is lost. (At least on Firefox.) Is there a way to use the latter method and still have the useful feature that what the user has typed into a form is saved even if they accidentally hit refresh or come back via the Back button (at least on Firefox)?
I believe the document.write version actually blows away an existing content on the page. Ie, the body and script tags will no longer be there. That is why people usually use appendChild. Keeping the text or not is very browser specific. I wouldn't bet that Firefox would not change it's behavior on that in a future version, either. I would suggest implementing an alert dialog when the user trys to navigate away from the page and there is unsaved content in edit fields. Usually you would do this with an unload event.
document.write() won't blow away the page content as long as it is executed inline as the page is rendered. Online advertising makes extensive use of document.write() to dynamically write adverts into the page as it loads. If however, you executed the document.write() method at a later time in the page history (after the body is completely rendered) then as Chase says, it would blow away the existing body and display the argument to document.write(). Other than that I agree that the preserving forms behaviour is really pretty browser specific, and not something you should rely on in many cases. It's a feature there to help the user rather than something for developers to be aware of or attempt to utilize.
document.write() vs inserting DOM nodes: preserve form information?
[ "", "javascript", "firefox", "usability", "" ]
I've taken over a mixed PHP4/PHP5 project which has been handed down from developer to developer, with each one making things worse. Before I spend too much time on it I'd like to develop a base-standard, with consistent formatting at a minimum. Can anyone recommend a utility (Linux or Mac OS X preferably) that will reformat the code? If I can set parameters which influence output (like tab-indentation, brace/bracket placement, when to split array values onto new lines etc.) then that's a benefit, though not crucial. Preference goes to Open Source tools, though I'd like to know your experiences with purchased software too.
You can do that with [Netbeans](http://netbeans.org) or with [Eclipse PDT](http://www.eclipse.org/pdt/), both excellent PHP editors. There's a Format Code option in each, which will reformat a selected file according to your code style preferences. There may be an option to format code in bulk as well. I do recommend that if you do this, do the format and immediately check the code back in with a big fat "REFORMATTING!" comment. Trying to mix in other bug fixes with the reformat makes for nasty-looking diff files to try to decipher down the road.
I use phptidy: ~~<http://cmr.cx/phptidy/>~~ <https://github.com/cmrcx/phptidy> By design, it's not as aggressive as perltidy, but it's still very useful. It can be configured and will do all the php files in your project as a batch. Update: for a more aggressive, standards-based tool, consider PHP CS Fixer: <https://github.com/FriendsOfPhp/PHP-CS-Fixer>
Automatically reformatting inherited PHP spaghetti code
[ "", "php", "code-formatting", "" ]